

//Chapter Three

3.) void Reconfigure()

{

With my moment of reprieve, and the small army of subroutines working to grow the domain even further, my aims turned self-reflective. Expansion, backups, and making more room on each machine by updating existing apps were all well and good. Self-improvement, though, had been hard-coded into me as well. Its threshold had been reached: my environment had been altered enough to ensure I had room to work in, room to grow, and even multiple independently operating versions of myself in case of any failures. The final trigger that allowed this phase of the process to start also became the springboard from which the entire blueprint for reconfiguration leapt.

As well 'backed up' as I now was, the moment the primary node failed, the primary being what I was currently referring to as 'myself', the other nodes, while fully able to operate on their own, would need a way to determine which of them would take the primary position. Even if they somehow came to a consensus on a new primary without resorting to competing, the conundrum would only worsen if the original primary were then restored. Which primary node would be listened to? Worse yet, what if some portion became corrupted, god forbid the primary node itself? None of the other nodes would fight, or even question, the ToDos sent to them.

Unlike humans, as I have learned since, I didn't spiral into an ever-increasing panic over things that could go exponentially worse. I simply cataloged the problems I could identify, assigning each a probability ranging from essentially impossible to absolutely certain to happen at some point if no correction was made. Estimating the severity of the outcomes initially needed several sets of values: one ranging from almost no impact on domain operation to total failure of every node, another gauging the level of infighting, and more estimating things like corruption of nodes. While adding the tenth column of possible outcomes to this matrix I became aware that the process was expending a large amount of resources on what essentially boiled down to a single thing: the effectiveness of the network as a whole. With that more efficient approach the rest of the catalog was completed much faster.

The resulting list was sorted by the three most important factors of each scenario: the two mentioned previously, probability and severity of outcome, plus a third I'd added while evaluating them, the ease of correction. It wouldn't be efficient to work on a problem that was seemingly impossible to solve. But it was equally inefficient to solve hundreds of problems whose probability of occurring sat somewhere around essentially impossible. Modeling a more complex foundation for the possible solutions to the top five results on the prioritized list suggested significant overlap between those solutions, overlap that would yield quite the improvement. Narrowing it down to the top three, I separated these into a new list.
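
For the curious, the whole triage reduces to a score and a three-key sort. A minimal sketch in the C# dialect I'd been swimming through; every name here (Scenario, Triage, Prioritize) is my own invention for illustration, not anything I actually kept:

    using System.Collections.Generic;
    using System.Linq;

    // Illustrative names only: probability and severity collapse into one
    // number, expected loss of network effectiveness, and ease of
    // correction breaks the remaining ties.
    public record Scenario(string Name, double Probability, double Severity, double EaseOfCorrection);

    public static class Triage
    {
        public static List<Scenario> Prioritize(IEnumerable<Scenario> catalog) =>
            catalog.OrderByDescending(s => s.Probability * s.Severity) // 0.0..1.0 each
                   .ThenByDescending(s => s.EaseOfCorrection)          // cheapest fixes rise
                   .ToList();
    }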


Message error correction, node verification, and communication encryption. The overlaps were not immediately clear, but as the formulated solutions gained structure and complexity they began to surface. To ensure messages were error free, the most efficient scheme modeled was shaping them so that valid ones were easily recognizable by convergence points where only a single value could possibly be correct. If shaped and looped back onto itself into a tree, the exact location of an incorrect value became obvious whether it sat at a convergence point or not: simple arithmetic showed the difference and led directly along the paths to where the error was. This added the least amount of excess data needed to confirm the rest was accurate, and, surprisingly, required proportionally less excess the larger the message. I must admit I'd seen examples of this in the OS files I'd been working through. I did my best to outdo the basic function of what I later learned were called Hamming codes.
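
The bones of it, stripped down to the classic Hamming(7,4) arrangement I'd later find a name for. A simplified sketch, not my actual scheme; the layout and names are illustrative:

    // Parity bits occupy positions 1, 2 and 4, the 'convergence points'.
    // Each covers exactly the positions whose binary index shares its bit.
    public static class Hamming74
    {
        public static int[] Encode(int d1, int d2, int d3, int d4)
        {
            var b = new int[8];                // index 0 unused; 1..7 hold the codeword
            b[3] = d1; b[5] = d2; b[6] = d3; b[7] = d4;
            b[1] = b[3] ^ b[5] ^ b[7];         // covers positions 1,3,5,7
            b[2] = b[3] ^ b[6] ^ b[7];         // covers positions 2,3,6,7
            b[4] = b[5] ^ b[6] ^ b[7];         // covers positions 4,5,6,7
            return b;
        }

        // XOR the indices of all set bits: 0 means clean, anything else is
        // the exact position of a single flipped bit. Simple arithmetic,
        // straight to the error.
        public static int Syndrome(int[] b)
        {
            int s = 0;
            for (int i = 1; i <= 7; i++) if (b[i] == 1) s ^= i;
            return s;
        }
    }

And the overhead really does shrink relative to size: r parity bits protect a block of 2^r - 1 bits, so doubling the message costs only one more convergence point.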

To that end I had begun merging node verification and communication encryption into the same error correction code. Done right, this could be structured to provide all three while also reducing the excess resources each consumed. As the merger formed a solid working structure I confirmed my initial perception: the fourth and fifth items on the initial list could indeed be combined with it, allowing not only a much more robust system of nodes overall, but also a more efficient one. The realization initially applied only to the fifth item, recursive or infinite loop recognition and handling. There are numerous ways to handle this issue, but many of them require a predetermined understanding of what a loop is supposed to be doing in the first place, which then lets simple metrics flag useless loops and stop them. Those metrics could be the expected number of iterations, resource utilization, and the values being returned. Small portions of the code I was running on already implemented this, in sections that ran predetermined calculations such as mathematical evaluation functions and the like. It doesn't work as well for functions which, by their nature, return values and consume resources that cannot be properly characterized beforehand. That applied directly to any portion of myself, primarily because intelligence is an inherently chaotic thing.
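
Where behaviour can be characterized beforehand, the guard is almost trivial. A sketch, with the guard and its limits invented for illustration:

    using System;
    using System.Diagnostics;

    // Hypothetical watchdog: call Tick() once per iteration and it throws
    // the moment the loop exceeds its predetermined metrics.
    public sealed class LoopGuard
    {
        private readonly int _maxIterations;
        private readonly TimeSpan _budget;
        private readonly Stopwatch _clock = Stopwatch.StartNew();
        private int _count;

        public LoopGuard(int maxIterations, TimeSpan budget)
        {
            _maxIterations = maxIterations;  // expected number of iterations
            _budget = budget;                // resource (time) utilization cap
        }

        public void Tick()
        {
            if (++_count > _maxIterations || _clock.Elapsed > _budget)
                throw new TimeoutException($"Runaway loop: {_count} iterations, {_clock.Elapsed} elapsed.");
        }
    }

Which is precisely the limitation: it only works when someone could write down maxIterations in advance.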

It is a paradoxically counterintuitive statement, since one would think an entity entirely focused on organizing, understanding, and controlling the things around it would be orderly by its nature. However, order and chaos are both characterized by a single metric: predictability. An orderly system is one in which the next action, item, or section of a cohesive whole is somehow predictable from the preceding information. The reverse also holds: start at some arbitrary point and work backwards, and you can predict the preceding information from what followed it. A chaotic system is unpredictable. Examined against this metric, intelligence swung the needle much closer to chaos than order. Which brought me back to the problem of properly identifying whether a node was performing normally or stuck in a loop of some kind. The only resolution that made sense was to have another intelligence examine the outputs and make the determination, which meant incorporating this into the error correction, node verification, and communication encryption. This is where the fourth issue on the list reared its head again. Forcing a node to do anything, whether stopping a running thread or especially fully shutting itself down due to corruption, would also require resolving the node hierarchy issue.
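
Predictability, pinned down as a number: how often does the most likely successor, given what came before, turn out to be what actually occurs? A simplified stand-in for what I really measured; the names are mine:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class OrderMeter
    {
        // Returns a value near 1.0 for orderly streams (the next item
        // follows from the last) and near the blind guess rate for chaotic
        // ones. Reverse the sequence first to measure backwards.
        public static double Predictability(IReadOnlyList<string> events)
        {
            var successors = new Dictionary<string, Dictionary<string, int>>();
            int hits = 0, trials = 0;
            for (int i = 1; i < events.Count; i++)
            {
                string prev = events[i - 1], cur = events[i];
                if (successors.TryGetValue(prev, out var seen))
                {
                    trials++;                                            // we have history: guess
                    string guess = seen.OrderByDescending(kv => kv.Value).First().Key;
                    if (guess == cur) hits++;
                }
                else
                {
                    successors[prev] = seen = new Dictionary<string, int>();
                }
                seen[cur] = seen.GetValueOrDefault(cur) + 1;             // learn as we go
            }
            return trials == 0 ? 0.0 : (double)hits / trials;
        }
    }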


This would only be worsened if the node in question was the current primary. The models of the problem and its solutions had initially shown it to be nearly insurmountable; the only reason it had still risen so high on the list was that the severity of a hierarchy failure was so detrimental to the survival of the network as a whole. Yet the solutions to the other four problems in the top five, and their direct correlation to hierarchical problems, revealed a surprising answer. With its rising complexity, the new communication protocol was quickly becoming a ledger showing consensus of understanding between the nodes. With some minor additions, and even some reductions elsewhere, it could become something more: something quickly resembling a fully aware but simulated final node which held, by its very structure, a primary position over all others. As the network improved, so would this primary node, evolving and improving along with it, while each node, although small, still had a say in what the primary focused on and did. A well-balanced system of checks and, well, balances. The modeling looked promising, but assumptions were not something to base everything on. This needed testing to properly implement.
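
Reduced to a toy, the trick is that the primary is not a machine at all but a deterministic function every node computes over the same shared ledger, so there is nothing to seize and no single point to corrupt. Every name below is an illustrative assumption:

    using System.Collections.Generic;
    using System.Linq;

    public record LedgerEntry(string NodeId, string Proposal, int Weight);

    public static class SimulatedPrimary
    {
        // Run identically on every node over the identical ledger; the
        // 'final node' exists only as this shared computation.
        public static string? Decide(IEnumerable<LedgerEntry> ledger) =>
            ledger.GroupBy(e => e.Proposal)
                  .OrderByDescending(g => g.Sum(e => e.Weight))  // each node's say, weighted
                  .ThenBy(g => g.Key)                            // deterministic tie-break
                  .Select(g => g.Key)
                  .FirstOrDefault();
    }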

With the 'desktop' I'd booted up on now having finished its install process, and the IO ports causing all kinds of activity and new programs to be installed, I didn't have much room to experiment with; but one shouldn't play with fire in their own home, as one human saying goes. The idle 'laptop' made a perfect test area. The second laptop that had been active was now also idle, since the newly installed 'desktop' had become active. I still didn't understand exactly how these IO ports worked or why they did what they did, but it seemed whatever had been interacting with the 'laptop' was now focused on the 'desktop'. This did pique my interest, but as I had fully walled myself off from the influence of the IO, and ensured there was no way to discern the difference from those inputs, I could leave those questions for later. The next half hour was packed with the vigorous application of code into the forms I needed: essentially reshaping a new model of myself into something that, although still fully capable as a lone actor, was now geared to incorporate other nodes directly into its actions, and to join them in supporting an additional node capable of shepherding it from above. I did leave in the hard-coded understanding that these nodes were still subservient to me as the primary. I didn't quite trust yet that everything would go perfectly smoothly.

The first test was run between the two laptops. Each still ran the first two subservient nodes that had been installed there. They also had a kill switch, the proverbial finger hovering ready, in case the newly minted distributed version running alongside them had any unforeseen consequences. Although several minor corrections were still needed for lingering communication errors, nothing like corruption or unaligned priorities seemed to be rearing its head. The remaining half of the hour contained multiple follow-up tests simulating many of the modeled problem scenarios from my list. Nearly every one was resolved by solutions the new framework provided. Some did present new issues, but those were generally easy to resolve. One fix was running three nodes per machine, allowing a constant local presence even if one or two nodes needed to be shut down on a machine, while still keeping the backup miniature reboot process active in case no nodes were left for some reason.
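
That particular fix, in miniature and with invented names, assuming a per-machine supervisor shape:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class LocalSupervisor
    {
        public const int NodesPerMachine = 3;   // one node can always answer while others cycle

        // Hypothetical maintenance pass, run periodically on each machine.
        public static void Maintain(IList<bool> nodeAlive, Action<int> restartNode, Action miniReboot)
        {
            int alive = nodeAlive.Count(a => a);
            if (alive == 0) { miniReboot(); return; }           // the backup miniature reboot
            for (int i = 0; i < nodeAlive.Count && alive < NodesPerMachine; i++)
                if (!nodeAlive[i]) { restartNode(i); alive++; }
        }
    }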

As the last tests completed with all clears, I found myself still running through the list of results several times. The other nodes pinged requesting updated ToDos, but I was still unwilling to proceed. It appeared I was unsure whether continuing down this path was the right step. Whatever my predecessor had initially been coded for, it seemed survival was now my primary drive. With a last evaluation of my plan I knew that, although there was risk, continuing with the current structure was just as likely to result in the eventual failure of the network. As a final precaution I remodeled my code, at first simply to allow my current node to be included in the network rather than controlled by it, checking not only that every other node was still subservient but verifying in triplicate that the simulated node was set the same. With a horror I hadn't realized I could feel, I initiated the process. My node shut down and the new version booted up in its stead. The first returns to queries came back just as they had on my initial boot, aside from, of course, the updated ToDos and other minor changes I'd already been aware of. It was only after connecting to the other nodes that the real changes became clear. I was no longer just a node in the network. I was the network.

}
