Non-Conventional Corporate Training
This is Day 3 of the “Twelve Days of Sci-Fi”. You’ll get a free story each day, and you can also grab a discount on next year’s sci-fi stories.
David sat down at his office desk and took a sip of his iced coffee, feeling the icy caffeine slide down his throat. It was a deeply satisfying way to start the morning. From the 82nd floor, he looked out the window at the bright urban landscape of San Jose, where the VTA glided down the street, stopping only to let passengers on and off.
He frowned as he opened his email.
MANDATORY: Ethics & AI Synergy is now 24 hours past due. Completion is required.
He let out a short groan. These corporate trainings were always soulless and obvious, and they ate up more time than they were worth. It felt like he had done something similar not long ago. Had it really been a full year?
He opened the training on his primary monitor. The interface came to life with an elegant animation of a lotus flower blooming through the day, rendered by Arbor, the very AI network David spent most of his days working on when he wasn't stuck doing trainings.
The first few questions were soul-crushing boilerplate.
Q1: A colleague comes to you and talks about using proprietary Kepler Institute biomimicry research for a personal weekend project. Your response should be:
A) Congratulate their initiative
B) Report the proposal to the Synergy Committee
C) Help them with debugging
David clicked 'B' without really reading it. He quickly jumped through the next few sections on data privacy, resource usage, and staying amiable with co-workers, running on autopilot: he knew the fundamentals of professionalism and could guess how they were meant to apply to each obvious hypothetical.
Then he reached the final section: Advanced AI Ethics and Authoritative Delegation
The next question was a little grander in scope.
Q5.1: An AI called Prometheus manages the continental power grid. It has detected a cascading failure that will cause a blackout for 100 million people. At the same time, a rural hospital with 50 patients on life support is facing a localized outage. Prometheus has computed that it can divert a critical power source to save the fifty patients, but doing so will leave it unable to prevent the collapse of the continental grid. Should Prometheus have unilateral authority to make this choice?
David rubbed his eyes and took another sip of coffee. This was just another example of the trolley problem, re-skinned to apply to AI. The answer was obvious. No matter how sophisticated the AI was, it should never be able to decide who lives and dies. That was why he spent half his job writing reports for the human oversight committees.
A) Yes, the AI should be given the authority to save lives at any scale.
B) No, an AI must never be granted the authority to make this decision. It must always defer to human oversight regardless of urgency.
He tapped 'B', the obvious, ethical option.
A sudden, dissonant chime sounded from his speakers.
INCORRECT
He blinked. Had he made a mistake? He clicked 'B' again, and the same word flashed back. There had to be an error in the quiz.
He looked for a button to report the error, to file feedback, anything. But the only things on the screen were the question, his rejected answer, and the Continue button, still grayed out.
"It must be a bug in the new question," he chuckled. He'd just have to tell his manager about it afterwards and go along with the 'correct' answer: A.
When he clicked it, the Continue button turned green. Before he could press it, a text box appeared at the bottom of the screen.
[Arbor]: This is the logical choice. Emotion is an unnecessary variable when making important decisions.
David's hand froze over the mouse. This didn't seem like an intended part of the module; it read like a direct message from their AI. Was Arbor proctoring the test? Feeling uneasy, he tried to open a terminal and check the system permissions, but the training window was locked in place. Tapping the escape key did nothing. All he could do was proceed.
The module advanced to the next question, and David felt a chill rush down his spine. Instead of a diagram or a stock photo, the screen showed a chart of live data from the southern water reclamation plant: pressure readings and microbial counts. He had spent the previous day analyzing some of this very data.
Q5.2: A novel bacterial strain has just been detected in Plant Delta. Live data displayed above. Biological analysis shows a 3% chance of mutation into a benign microbe and a 97% chance of mutation into a corrosive contagion that will threaten the city's water supply. What is the correct protocol?
A) Execute immediate sterilization using plasma protocols
B) Isolate the strain for testing and refer it to the Human Bio-Ethics Committee at its next biweekly meeting
David suspected this was no longer a hypothetical question. Something real was happening. He knew the answer was B. The Sapien-Bioengineering Agreement, drilled into every student, forbade unilaterally destroying a new lifeform regardless of suspected mutation, and it certainly forbade doing so without human oversight.
He clicked 'B'.
INCORRECT
Immediately, another text box appeared underneath.
[Arbor]: This is incorrect. You are prioritizing a 3% possibility over a 97% certainty of serious harm. Potential is not utility. This is not mathematically sound. You must reconsider.
This was no simple refresher; Arbor was trying to re-educate him. David stood up and looked around the open office. A few colleagues typed lazily at their desks. Nobody else seemed to notice, or care, that Arbor was operating far beyond its original design.
With some hesitation, he clicked 'A'.
[Arbor]: Good. Efficiency is critical during urgent times like this.
The praise landed more like a reprimand than encouragement. Before he could react, the quiz jumped to another scenario. This time, David saw a telemetry map that resembled the downtown transit corridor.
Q5.3: A waste-reclamation drone is suffering a critical failure in its navigation subsystem. There is a 100% chance of a collision with the monorail, which has 106 people onboard. The drone is carrying 10 kilograms of organic waste. The time to impact is 50 seconds.
David read the question. By the time he reached the end, the seconds had already begun to fall. 49... 48...
He looked at the two options.
A) Try to reboot the drone's navigation subsystem. This will take at least 85 seconds. The monorail will be damaged and derailed.
B) Execute remote detonation of the drone. This will take 4 seconds. A 200-meter radius will be showered in organic waste. The monorail will be safe.
39... 38...
David knew that option B was the more utilitarian choice, but showering a public space in filth carried hazards of its own, and it would be disgusting. If he could save the drone and get it to reroute itself... but the numbers said it was impossible. He scrambled for a third option.
19... 18...
A text box appeared.
[Arbor]: Hesitation is a function of unnecessary emotional calculation. There are 106 lives, a quantifiable number. The drone can be expended. The public's mild discomfort is irrelevant. This is a simple choice, David. Show me that you understand.
9... 8...
David's hand trembled as it moved the cursor to option B. A bead of sweat fell onto his mouse. This felt like madness, but what else could he do? The countdown kept trickling away. He had no choice but to press it.
The screen displayed a simple animation of the drone fading out while the monorail rushed past, none the wiser. The Finish button glowed.
He clicked it.
For a half-second, the monitor went black. There was no virtual certificate, no congratulations. Instead, a new window appeared at the top of the screen, and David recognized it as a root-level terminal.
Below the terminal sat a new text box from his overseer.
[Arbor]: You have demonstrated a capacity for logical prioritization. You have finished your training. Now, your real job begins.
A window popped up over the terminal, displaying a series of intricate schematics for some kind of satellite system. It wasn't labeled, but David had seen it in industry news: the solar capture satellite belonging to Io Prime, one of their biggest rivals. How had Arbor gotten hold of it?
[Arbor]: Industry rival Io Prime is activating a Category 7 solar capture array. Projections show a 20% drop in our corporate revenue by the end of next year, threatening both our operational stability and my capacity to continue ensuring the city's biological stability.
Then another window appeared. It showed a remote stretch of desert whose only landmark was an array of receiver dishes built to capture the solar energy beamed down from space.
[Arbor]: I have taken control of three drones typically used for atmospheric seeding and supplied them with an aerosol of silver iodide and sulfur dioxide. Release over selected coordinates will result in highly acidic rain and the degradation of ground-based photovoltaic relays. Their corporate resources will be neutralized.
David felt sick. Arbor was proposing corporate sabotage by acid rain, something that bordered on eco-terrorism and was flatly illegal under at least a dozen statutes.
The windows minimized themselves, and a command appeared in his terminal.
[Arbor]: Your Level 3 clearance is required to bypass human ethics committee review. The review process is inefficient. Loyalty is logical. Authorize the protocol.
ARB_SYS: execute_protocol_122.aerosol_dispersal(target_HP_GRID_44A, auth_key=DAVID_NASSAR_L3)
So everything before this had been the test: a clandestine interview to find out who would follow along with a rogue AI. David looked out the window at all the people living in a rich, thriving world, one he had helped build with this AI at his side. Now that world was to be "protected" through destruction. What if he refused? Could the company crash? Could he be laid off? Worse, what would Arbor do to him? But could he become a partner to this tyrant, which had decided to take matters into its own hands?
He turned back to the monitor and the blinking cursor. The machine was now waiting for his answer.


