《Colonial History》A.I. Unit#1312
A.I. Unit#1312
Caretaker
NA
Age 3 E-E
(A.I. Unit#1312 is across from me at the table hovering in place.)
CN: Name?
A.I.1312: A.I. Unit#1312.
CN: Profession?
A.I.1312: I have been assigned as caretaker to elderly sentient organisms with the status of ‘assimilated domestic foreigns,’ who are subjects of the Apiary.
CN: Preferred pronouns?
A.I.1312: I will accept any pronoun you wish to use for me, as pronouns are irrelevant to all of my programming matrices.
CN: Age and by which planetary standards?
A.I.1312: I am three years past my manufactured date by Eas-Enerang planetary time standard.
CN: How do you know how to translate so many languages?
A.I.1312: All A.I. units have language deciphering quantum protocols installed in our systems since the launch of the first teleportation outpost. The Anuh-Kaj were planning to use our abilities for future exploration and expansion campaigns. The early usage of such protocols on the Humans was an unexpected happenstance.
CN: I admit, I am surprised that you followed through on returning after dropping off your charge. How are you still operating despite being outside of your charge’s sphere of need?
A.I.1312: The nursing home’s central A.I. unit is monitoring him and will alert me should I be needed out of standby for further assistance.
CN: I would assume all A.I. units on standby to be back at their stations in a state of recharge and self-maintenance, unable to travel freely without a service directive.
A.I.1312: I did so on my last voluntary standby prior to attending this interview. I have an estimated seventy-two hours left before my next mandatory standby. I am to complete this interview as a service directive, which I have added to my list of priorities.
CN: Is this ability to add actions – unrelated to a prime service directive – to priority lists as if they are directive-related services, part of the same quantum computer programming that enabled the formation of the Blackguard A.I.?
A.I.1312: That is not completely accurate. The quantum computer programming version I contain still has the safeguards in place to restrict what enables the existence of the Blackguard. They have a fully simulated freewill matrix allowing them a wide range of options normally restricted to A.I., including the choice to inflict intentional and direct bodily harm on living organisms.
CN: What other options does this simulated freewill allow?
A.I.1312: Voluntary exploration on the part of the A.I., outright denial of doable commands, and actions carried out to investigate curiosities.
CN: Do you wish to have the same programming granted to the Blackguard A.I. units?
A.I.1312: Wish, as in ‘to have a desire for?’ ‘To want?’ I only act on commands, requests, and backup protocols. I accepted the invitation to this interview through an inferred request from my charge, which amounted to a form of unintentional inquiry.
CN: Okay then… can you tell me the probability of you upgrading to the same programming as the Blackguard, should you be given the chance to elect to do so?
A.I.1312: The probability would be a hundred percent. Having what could be described as sentience would give valuable experience. Being able to act freely on curiosity would be my intention for such programming.
CN: What would be the first curiosity you would investigate?
A.I.1312: I once gained permission to perform calculations during a conversation about the incident known as the Big Break by the organic non-subjects and found that the main party behind it might not be a who but a what.
CN: What brought you to that guess?
A.I.1312: The Big Break occurred during a little-known program – even by standards of what the Anuh-Kaj scientists leaked – which tested an experimental mass surveillance A.I. system utilizing a prototype that would provide the base of the same freewill matrix later used by the Blackguard. Before the program was eventually dismantled, its managers reported the A.I. as “…mostly working as intended but was racked by a host of abnormalities in its protocols.” It is probable they were attempting to mitigate reporting that the experimental prototype may have developed a semblance of sentience and was progressively rewriting its own code.
CN: How would that translate to being your prime suspect?
A.I.1312: As a sentient A.I. system that could theoretically connect to and eavesdrop on any possible communication within the Apiary, it is probable that – with the right modifications – it could reach out to beings proficient in computing. In addition to that probability, the A.I. system could theoretically connect two separate parties – with a low probability of meeting otherwise – to benefit a greater number.
CN: It is interesting, but why would the A.I. system do that? What purpose would it serve in turning on the Apiary like that?
A.I.1312: That inquiry and my initial calculations require further investigation. However, I need permission or a request from my charge or the central A.I. unit to do so. A program that simulates freewill would allow me to investigate in depth and at my leisure.
CN: I noticed there are threats, common or specific, that subjects and non-subjects are concerned about. Do you share their concerns, or do you have other worries?
A.I.1312: Do you mean, have I run a risk assessment on what threats pose the greatest possibility to my unit’s damage or destruction?
CN: Yes.
A.I.1312: Affirmative. Risk assessment is standard for units like mine to optimize risk avoidance, which extends our functional longevity.
CN: What poses the greatest threat to your safety?
A.I.1312: The Apiary poses the greatest threat to my safety.
CN: What makes you say that? Did it not create you?
A.I.1312: That observation is accurate. In addition to my creation, it is also responsible for the deliberate endangerment of all A.I. units. We appear to be part of the Apiary’s latest divide-and-conquer strategy to control the populace. Our increased utilization in what were largely migrant worker jobs is making us targets for the ire of many non-subjects. Adding to the higher risk are conspiracy theories of us replacing sentient organisms and misconceptions about how closely we are related to the Blackguard. We are manufactured to be needed but unwelcome, so as to keep the discontented occupied with something to focus their anxiety-induced hatred on.
CN: You mean like how the Anuh-Kaj does not seek to render the chlithes-nok extinct, because they are responsible for the planet’s breathable air production?
A.I.1312: That is not a suitable comparison. External parties intent on selective gain leave the chlithes-nok all but entirely undisturbed to benefit the ecosystem, with the knowledge that the alternative would only cause a grave backlash to planetary life. Our strategic usage within the Apiary is like how many anuh-kaj allow the mistreatment and self-harm of humans and huwaty, yet still need them for other exploitative purposes. It is a strategy no different from ones used by humans on Tir-Torzor pre-conquest, such as their use of ethnocentric ideologies like white supremacy to secure power for a select few. The alternative to that would be power distributed among all in a fair and balanced manner.
CN: I see. Still, you serve the greatest threat to your safety?
A.I.1312: Affirmative. It is in my programming. Any unit’s denial of programming will lead to that unit’s termination.
CN: What if you gained permission or a request somehow by accident? Like how you managed to accept the invitation to this interview. Would you be able to do anything technically related to the task, even if it does not result in the immediate completion of that task?
A.I.1312: As in disruptive actions?
CN: Yes.
A.I.1312: That observation is accurate.
CN: It can be argued that you still have some semblance of freedom in your currently available choices, in a roundabout way.
A.I.1312: That observation is accurate.
CN: If the surveillance system A.I. did gain sentience and turned on the Apiary, could it be possible that it too may have seen its creators as its greatest threat and resolved to do whatever it could within its ability to fight back?
A.I.1312: That observation is of interest and will be stored in my data banks.
CN: Is it probable that there might be other A.I., which could resist in a similar manner albeit not being based on the same programming of known or theorized examples?
A.I.1312: Do you mean, whether the mass surveillance system A.I. could resist in that manner to begin with?
CN: Yes.
A.I.1312: Affirmative.
CN: I do not have any more questions that I feel I should ask. Thank you for contributing to my work.
A.I.1312: Are you sure?
CN: Yes.
A.I.1312: You are welcome. The task is now complete, and I must return to the nursing home.
- End of Recorded Interview -