The spider is very straightforward, as you will see below in spider2.js. It uses a breadth-first search across the nodes starting from home, hacking any nodes we have the capability to. It stores the hacked node list in a newline-separated file, so that other scripts don't have to invoke a function or spend precious CPU time reconstructing the list.

The distributor is the most interesting part. It:

- cancels all existing distributor-controlled workers,
- awaits a signal that something material has changed.

We cancel all existing workers because it is easier to solve this problem if you don't have to keep track of state. Netscript's programming capabilities are some of the most challenging and inconsistent I've ever worked with, so I want to write as little complex code as possible. Cancelling all our existing workers has some minor drawbacks in terms of performance, but what it wins us in simplicity dominates such considerations. We'll be able to spend more time thinking about algorithmic improvements if we don't have to do fiddly things like managing state.

The new worker scheduling algorithm currently has two basic priorities.

The first priority is to focus on weakening the weakest pending node. It iterates through the targets in the order the spider observed them (i.e. breadth-first from home). If it encounters any that are significantly more secure than their minimum security level, it will dedicate as many threads as possible among all the hosts to weakening that server. It also spawns a small watcher script to notify the distributor when a node like this has been weakened down to the minimum level. Currently, I only do this preparation step for security level, but I should probably also grow servers before beginning to hack them.

The second priority is to schedule "flexihack" workers. We try to schedule these backwards in the targets list, focusing on the highest-growth servers first. I found the task of balancing grow and hack calls tedious, so my flexihack worker calls grow and hack adaptively (when the available money drops below 95% of max, grow is called). We also schedule a small group of flexihack workers on any totally weakened target, even if it's not the most optimal one, so that we can at least have some income from hacking early on. My current stance is to use one weaken worker per six flexihack workers, which seems to be about as much as is required to keep up with either six grow threads or six hack threads.

The signal can come from either the spider, when it gains enough strength to hack a new server, or the watcher, when a server has been sufficiently weakened (or, soon, grown). You can also run a small script called "signal.script" to manually reschedule (e.g. if you have bought servers manually, or grown your home server's RAM). The signal is just the presence of a certain file on the home computer.

Flexihack costs 5 GB more RAM than running hack and grow separately, but that isn't even an order of magnitude. It's totally worth the gain in simplicity. To prevent overshooting hack/grow thresholds, we schedule flexihack workers in small batches.
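The spider's breadth-first walk can be sketched as a pure function. The `network` map below stands in for the results of Netscript's `ns.scan()`, and the names are illustrative, not the actual spider2.js:

```javascript
// Breadth-first discovery of all reachable servers, starting from home.
// `scan` is a plain adjacency map standing in for repeated ns.scan() calls.
function discoverServers(scan, start = "home") {
  const seen = new Set([start]);
  const queue = [start];
  while (queue.length > 0) {
    const host = queue.shift();
    for (const neighbor of scan[host] ?? []) {
      if (!seen.has(neighbor)) {
        seen.add(neighbor);
        queue.push(neighbor);
      }
    }
  }
  seen.delete(start); // home itself is not a target
  return [...seen];   // in the order the spider observed them
}

// The real spider would then write hosts.join("\n") to a file so that
// other scripts can read the list without re-walking the network.
const network = {
  home: ["n00dles", "foodnstuff"],
  n00dles: ["home", "max-hardware"],
  foodnstuff: ["home"],
  "max-hardware": ["n00dles"],
};
console.log(discoverServers(network).join("\n"));
```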
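One round of the distributor's cancel-and-reschedule cycle might look like the following sketch. The helper functions and the signal file name are invented stand-ins; only the `ns.fileExists`/`ns.sleep`/`ns.rm` shapes mirror Netscript's API:

```javascript
// One round of a distributor in the style described above: cancel every
// previously scheduled worker, schedule a fresh set, then block until a
// signal file appears on home. cancelAllWorkers/scheduleWorkers and
// "signal.txt" are hypothetical, not the author's actual code.
async function distributorRound(ns, cancelAllWorkers, scheduleWorkers) {
  const SIGNAL_FILE = "signal.txt";  // hypothetical name
  cancelAllWorkers(ns);   // no state carried over from the previous round
  scheduleWorkers(ns);    // the two-priority algorithm
  while (!ns.fileExists(SIGNAL_FILE, "home")) {
    await ns.sleep(1000); // poll until the spider/watcher/signal.script writes it
  }
  ns.rm(SIGNAL_FILE, "home"); // consume the signal, ready for the next round
}
// The distributor itself would just loop this round forever.
```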
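Priority one, picking a weaken target, reduces to scanning the targets in spider order for the first server well above its minimum security. The 5-point margin below is an invented placeholder; the post only says "significantly more secure":

```javascript
// Walk targets in the order the spider observed them and return the first
// one significantly above its minimum security level, or null if none is.
// The margin of 5 security points is illustrative, not from the post.
function pickWeakenTarget(targets, margin = 5) {
  return targets.find(t => t.security > t.minSecurity + margin) ?? null;
}

const targets = [
  { host: "n00dles", security: 1, minSecurity: 1 },
  { host: "foodnstuff", security: 25, minSecurity: 10 },
];
console.log(pickWeakenTarget(targets)?.host); // "foodnstuff"
```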
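The adaptive grow/hack choice inside a flexihack worker is just a threshold test on the server's money; the real worker would follow the decision with `ns.grow()` or `ns.hack()`:

```javascript
// Grow whenever available money is below 95% of the server's maximum,
// otherwise hack. This mirrors the rule described in the post.
function chooseAction(moneyAvailable, moneyMax, threshold = 0.95) {
  return moneyAvailable < threshold * moneyMax ? "grow" : "hack";
}

console.log(chooseAction(900, 1000)); // below 95% of max -> "grow"
console.log(chooseAction(980, 1000)); // at or above 95% -> "hack"
```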
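Splitting a thread budget at the one-weaken-per-six-flexihack ratio could look like this; the rounding choice (round weaken threads up) is mine, not the post's:

```javascript
// Split a host's total thread budget at a 1:6 weaken-to-flexihack ratio.
// One weaken thread per six flexihack threads keeps security in check
// against either six grow threads or six hack threads.
function splitThreads(totalThreads, ratio = 6) {
  const weaken = Math.ceil(totalThreads / (ratio + 1));
  return { weaken, flexihack: totalThreads - weaken };
}

console.log(splitThreads(14)); // { weaken: 2, flexihack: 12 }
```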
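The watcher plus file-based signal might be sketched as follows. The file name "signal.txt" is an assumption; the `ns.*` function names mirror Netscript's real API:

```javascript
// A hypothetical watcher: poll the target's security level, and once it
// reaches the minimum, drop the signal file that wakes the distributor.
async function watchUntilWeakened(ns, target) {
  while (ns.getServerSecurityLevel(target) > ns.getServerMinSecurityLevel(target)) {
    await ns.sleep(5000); // check again in five seconds
  }
  // The signal is just the presence of this file on the home computer.
  ns.write("signal.txt", "weakened:" + target, "w");
}
```

A manual signal.script would do nothing but write the same file.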
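Scheduling flexihack threads in small batches, so the whole fleet never commits to one oversized grow or hack at once, can be sketched as a simple chunking helper; the batch size is an invented example:

```javascript
// Break a thread count into small batches to avoid overshooting the
// hack/grow thresholds. A batch size of 4 is illustrative only.
function makeBatches(threads, batchSize = 4) {
  const batches = [];
  for (let i = 0; i < threads; i += batchSize) {
    batches.push(Math.min(batchSize, threads - i));
  }
  return batches;
}

console.log(makeBatches(10)); // [ 4, 4, 2 ]
```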