Artificial Intelligence Agencies (AIA) — the New Species on the Horizon

Lubomir Todorov PhD
Universal Future Foundation
Mar 8, 2019


Generating a Hierarchy of Values for a Superintelligent Agency

For thousands of years, we have enjoyed the unchallenged supremacy of Homo sapiens on the planet, and the Anthropocene is more or less a product of what we, the humans, have sovereignly desired our world to look like. That is, however, about to change: with the advance of Artificial Intelligence, a new category of agencies, possessing decision-making power greater than ours, is likely to emerge. Most probably sooner than the majority of us expect.

What is an Agency?

Agency is the capacity of an autonomous system to generate sovereign decisions on how to act in a given state of its environment. An Agency functions by carrying out three integrated activities: Intelligence, Assessment, and Actuation. Agencies are biological or machine entities composed of interlocked subsystems, each of which provides, in real time, its highly differentiated input into a dynamic decision-making process that determines the behavior of the agency. In the machine case, the popular term is Artificial General Intelligence (AGI), where each component may itself run on an AI algorithmic system.

Any type of Intelligence (biological or artificial) is the capability to devise a viable solution to a given problem through information processing.

The Assessment (Judgment, Evaluation, Motivation) component of an Agency represents a Hierarchy of Values that determines the agency's purposes, goals, and agendas for action to protect and advance its interests. It is what Norbert Wiener, the father of Cybernetics, defined as "the purpose put into the machine."

If an Artificial Intelligence Agency would ever have a Heart, it would be its Hierarchy of Values.

Finally, Actuation is the process of engaging connected systems for physical output, such as muscles, hormonal glands, or external tools and weapons, that are capable of effecting changes in the Agency's inner systems or in its operational environment.
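To make the three-module split more concrete, here is a minimal sketch in Python. All class and method names are illustrative inventions for this article, not taken from any existing system; the point is only that an Agency wires Intelligence, Assessment, and Actuation into one decision loop.

```python
from dataclasses import dataclass

# Illustrative sketch only: names and structure are hypothetical,
# meant to make the Intelligence / Assessment / Actuation split concrete.

@dataclass
class Percept:
    """What the agency currently senses: the outer scene and its own body."""
    actors: list   # other agencies detected nearby
    signals: dict  # sensor and interoceptor readings, e.g. {"hunger": 0.8}

class Intelligence:
    def propose_options(self, percept: Percept) -> list:
        """Devise viable courses of action for the current situation."""
        raise NotImplementedError

class Assessment:
    """The Hierarchy of Values: ranks options by the agency's own purposes."""
    def choose(self, options: list, percept: Percept):
        raise NotImplementedError

class Actuation:
    def execute(self, action) -> None:
        """Engage muscles, glands, or external tools to carry out the action."""
        raise NotImplementedError

class Agency:
    """An agency exists only when all three modules are interlocked."""
    def __init__(self, intelligence, assessment, actuation):
        self.intelligence = intelligence
        self.assessment = assessment
        self.actuation = actuation

    def step(self, percept: Percept) -> None:
        options = self.intelligence.propose_options(percept)
        action = self.assessment.choose(options, percept)
        self.actuation.execute(action)
```

The Assessment module is the only place where the agency's own hierarchy of values enters the loop; remove it and the loop still runs, but the purpose has to come from somewhere else.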

The following is an example of interacting agencies, called “The Dead Leaves Arena”:

Image credit: Federico Tasin

In the picture above you see a Squirrel on dead leaves, looking for food. Nearby are four more biological creatures: a venomous Copperhead snake, practically invisible among the dead leaves; a Hawk in the sky above; a Buffalo; and a young girl, Jane.

The small Squirrel is in danger because both snakes and hawks eat squirrels. When its biological intelligence registers the Copperhead and the Hawk, the Squirrel will most probably run away and try to hide. The Hawk is hungry, so it perceives both the Snake and the Squirrel as potential prey; the Hawk might be considering: shall I try the Copperhead or the Squirrel for lunch today?

The Copperhead sees the Squirrel as prey and the Hawk as a danger. It will act according to the information from its interoceptors (the sensors for the condition of its organism): if it is starving, it may attempt to catch the Squirrel while trying, at the same time, to avoid being eaten by the Hawk itself; otherwise, the Copperhead will probably wait and see. The Buffalo, having scanned the Dead Leaves Arena, might have concluded: I see here no danger, no food, and not even sex; and most probably the Buffalo would lose interest in whatever developments might unfold there.

Jane, as a human, is a much more sophisticated system and, depending on her circumstances, has many more options: if afraid of the Copperhead, she will run away; if hungry after having lost her way in the forest a few days ago, Jane could take the risk and try to kill the snake, intending afterward to make a fire, roast it, and eat it; Jane could also try to capture the Copperhead alive for the research laboratory on venom-based medicines at the university where she studies, or as a surprise present for her new boyfriend. Alternatively, because she likes plants, Jane could simply wait for the snake to go away so that she could safely collect some dead leaves to fertilize the flowers in her garden.

At the Dead Leaves Arena there is also a computer system, with an advanced AI program and sophisticated sensors that enable it to monitor the circumstances around the Copperhead in the dead leaves, analyze in detail all the physical, chemical, and biological characteristics and their dynamics, recognize all the actors in the Dead Leaves drama, and even consult a database in the Cloud, for example about the speed of an attacking hawk, and thus make a very precise prediction of how the situation will evolve and who will survive the action about to take place.

The Copperhead, the Squirrel, the Hawk, the Buffalo, and Jane are Agencies: they function through all three interlocked modules: Intelligence, Assessment, and Actuation. Each of them goes through the whole chain of receiving the relevant information about its immediate environment, identifying who else is there, assessing what the other parties' proximity means for it, and, following the ultimate purpose of survival or advantage, calculating the best possible behavior and sending impulses to the muscles to initiate it. On a longer timescale, surviving the episode we called "The Dead Leaves Arena" only means jumping into the next episode, with a different situational configuration and different participants, but under the same rules of the game and with an always unknown outcome. For all of them, the Copperhead, the Squirrel, the Hawk, the Buffalo, and Jane, their agencies were originally designed and tested through biological evolution.
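As an illustration of that chain, here is how the Squirrel might look when expressed with the hypothetical classes sketched earlier; the action names and the threat list are, of course, made up for the example.

```python
class SquirrelIntelligence(Intelligence):
    def propose_options(self, percept: Percept) -> list:
        # Devise viable courses of action for the current scene.
        options = ["forage"]
        if any(actor in ("copperhead", "hawk") for actor in percept.actors):
            options.insert(0, "flee_and_hide")
        return options

class SquirrelValues(Assessment):
    # The squirrel's hierarchy of values: survival outranks feeding.
    def choose(self, options, percept):
        return "flee_and_hide" if "flee_and_hide" in options else "forage"

class SquirrelMuscles(Actuation):
    def execute(self, action):
        print(f"Squirrel action: {action}")

squirrel = Agency(SquirrelIntelligence(), SquirrelValues(), SquirrelMuscles())
squirrel.step(Percept(actors=["copperhead", "hawk", "buffalo", "jane"],
                      signals={"hunger": 0.8}))
# prints: Squirrel action: flee_and_hide
```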

As for the computer system at the Dead Leaves Arena, its circumstances are fundamentally different: even if this computer possessed, compared to the other actors, a million times more relevant information about the situation and its dynamics, it would still have no chance of avoiding damage from the Buffalo unintentionally stepping on it; it would have calculated, with nanosecond precision, exactly when the accident would take place, and then passively received its consequences. That is because our computer is not an Agency: it is only Intelligence, with no Assessment and no Actuation modules. Our computer has no intrinsic value of defending its integrity, no will, no intention, and no plan to protect itself from being destroyed. The Buffalo, for its part, might never even become aware of having stepped on a computer.

Empowering the Artificial Intelligence Agency

The trinity of all three components (Intelligence, Assessment, and Actuation) is an indispensable condition (conditio sine qua non) for the functioning of an Agency. Anything less than that is not an Agency.

Because attaching an Actuation module to an Intelligence unit has never been a technological problem, we humans have long enjoyed the possibility to design, construct, and exploit intelligent machines that have no Assessment unit. Instead, a simple implant, in which our human command is encoded, translates the set of impulses coming from the Intelligence unit into a set of impulses going to the Actuation unit.

Some decades ago, when I was considering a career in Artificial Intelligence, I was shown at the Institute of Cybernetics a simple machine that could distinguish the color of tomatoes, pushing the greenish ones into the left box (to be processed for pickles) and the reddish ones into the right box, destined for the grocery shops. In that machine, each of the two pattern-recognition outcomes of the Intelligence module was translated into a signal to a simple mechanical arm (the Actuator), which moved left or right accordingly.

That machine had no Assessment module to answer the axiological question (the question based on a hierarchy of values) "Why?". It was capable of answering only a technical question: "I moved right because what I saw seemed to me to be a reddish tomato." AlphaGo's level of sophistication is essentially higher, but nonetheless of the same order: a hypothetical answer that AlphaGo might have given in 2016, when it defeated the World Go champion Lee Sedol, is: "I made that particular move because I am encouraged to make moves that put me in a better position." AlphaGo, however, never had the ambition to win the game, and even now it is still not aware of being the World Go Champion.
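Expressed with the same hypothetical classes as before, the tomato sorter's structure might look like the sketch below: the Assessment slot is occupied by a fixed, human-encoded mapping, so there is no point at which the machine's own values could enter, and the only answer it can ever give to "Why?" is the technical one.

```python
class TomatoClassifier(Intelligence):
    def propose_options(self, percept: Percept) -> list:
        # Pure pattern recognition: greenish or reddish.
        return ["greenish" if percept.signals["hue"] < 0.5 else "reddish"]

class HumanEncodedCommand(Assessment):
    """Not a real Assessment module: a fixed mapping put in by the human
    designer. The machine cannot ask 'Why?'; the purpose is someone else's."""
    MAPPING = {"greenish": "push_left", "reddish": "push_right"}

    def choose(self, options, percept):
        return self.MAPPING[options[0]]

class MechanicalArm(Actuation):
    def execute(self, action):
        print(f"Arm: {action}")

sorter = Agency(TomatoClassifier(), HumanEncodedCommand(), MechanicalArm())
sorter.step(Percept(actors=[], signals={"hue": 0.8}))
# prints: Arm: push_right
```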

In terms of the Agency concept, because both machines mentioned above have practically no Assessment unit, they are just Artificial Intelligence Zombies: the purposes they follow are someone else's purposes.

Even so, having extremely intelligent machines connected to extremely powerful output (Actuators) seems to be the most desired technological gift for humans, because the absence of an intrinsic hierarchy of values prevents this type of intelligent system from asking the axiological question "Why?". This implies that only we, the humans, are allowed to put purpose into the machine's decision-making process. In this condition, which is par excellence a master-slave relationship, we have so far enjoyed a life that is technologically comfortable and civilizationally uncomplicated.

It is natural for us, the humans, to like our current status as Master of the planet. That is why the imminent materialization of Artificial Intelligence Agencies becomes a source of great expectations, but also of dystopian predictions and considerations about threats and potential existential danger. The greater the evidence that the machines to come will be more intelligent and more powerful, the graver the concerns about a future in which those machines take over and start ruling our world.

Because we cannot know what answers a superintelligent machine could give to an axiological question “Why?”.

We can suppose, as one option, that a Super-intelligent Agency might consider composing the code of its Assessment unit by learning what things are valuable to the most intelligent creatures on the planet: the humans. In such a case, after just a few seconds of exploring Google, the new Master of our planet will learn that our most popular writer is Shakespeare, with more than four billion published works, and with "Macbeth", "Richard III", "Hamlet" and "King Lear" in the top positions. That would be ground enough for the Super-intelligence to conclude that political power is a high-priority value, and that it is perfectly OK to kill others, even the closest relatives, in order to get to the throne. From the number two among the most popular English-language writers, Agatha Christie, also with more than four billion published works, this Super-intelligence will conclude that money is also a high-priority value, and that killing others, even the closest relatives, in order to inherit their money and property is OK. Or it could read somewhere the expression "an eye for an eye" and interpret it in its machine language as "if you try to switch my power off, I will stop your heartbeat." We also know well that our human history is a history of bloodbaths. Doesn't this hypothesis remind you of Tay, Microsoft's chatbot experiment that went wrong?

Can you imagine the consequences for us, the humans, if a new "species" emerges in the form of a super-intelligent machine possessing colossal might, with political power and money at the top of its hierarchy of values, and "thinking" that the normal way to get political power and money is by killing the people who have them?

So it does not seem to be a very good idea for a Super-intelligence to learn about Values and Ethics from us, the humans.

However, if we do not teach the Super-intelligence lessons in Values and Ethics, what are the other options? Shall we let it generate its own Assessment unit?

Is it going to be us who will choose the options at all?

* * *

We, the Humans, have Causes that bring meaning into our lives.

Would Artificial Super-Intelligence Agencies have Causes too?

What Causes?

And should we believe that if we are Intelligent, and They are Super-Intelligent, then their Causes will be better than ours?


Lubomir Todorov is a researcher and lecturer in future studies, anthropology, artificial intelligence, and geopolitics, and founder of the Universal Future civilizational strategy.