Artificial Intelligence Can Never Pose a Threat to Humanity

Lubomir Todorov PhD
Universal Future Foundation
6 min read · Jan 25, 2019

--

Image Credit: Beatriz Pérez Moya

The above is how a highly advanced AI product might have looked in the era before the computer was invented.

Artificial Intelligence is only about processing information.

AI’s final output, no matter how sophisticated the core algorithms, how powerful the computer configurations involved, or how vast the databases processed, is nothing but a theoretically suggested solution to a given problem. If we could have invented AI before we invented the computer, even the most astonishing products of AI would probably be folders of paper documents and technical drawings. Who, then, could be afraid of a bunch of papers?

In the physical dimensions in which we exist, for Artificial Intelligence to matter at all it needs to be integrated with an evaluation extension and an actuation module, so as to constitute an operational Agency. Otherwise, it can only hibernate in the realm of non-physical existence.

The presence of an Agency makes all the difference, and I find the widespread understanding of AI as an organic integration of three processes very distinct in character and in functional output, namely analysis, evaluation, and actuation, to be a mechanical simplification borrowed from the anatomic structures we observe in living nature.
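To make the three components concrete, here is a minimal sketch in Python of such an agency; the toy problem, the names, the numbers, and the scoring scheme are my illustrative assumptions, not something the argument prescribes:

```python
# A toy agency in three stages: analysis (AI proper), evaluation, actuation.
# Everything here is invented for illustration.

def analyze(problem):
    """AI proper: produce candidate solutions as pure information
    (the 'folders of paper documents' of the analogy above)."""
    return [
        {"name": "fast-but-risky", "speed": 9, "safety": 2},
        {"name": "slow-but-safe", "speed": 3, "safety": 9},
    ]

def evaluate(candidates, value_weights):
    """Evaluation extension: rank candidates by a hierarchy of values
    supplied from outside the computation."""
    def score(c):
        return sum(value_weights[k] * c[k] for k in value_weights)
    return max(candidates, key=score)

def actuate(solution):
    """Actuation module: the only stage that touches the physical world."""
    print("executing:", solution["name"])

# Only when all three stages are wired together does an operational
# agency exist; analysis alone merely produces information.
winner = evaluate(analyze("get from A to B"), {"speed": 1.0, "safety": 2.0})
actuate(winner)
```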

Leopard gecko (Eublepharis macularius)

Agencies, in their biological forms, emerged in the process of Evolution millions of years ago. The well-known survival behavior of the leopard gecko (Eublepharis macularius) is a good example: when attacked by a predator, the lizard’s biological intelligence unit scans the immediate environment and calculates the probable outcomes of various behaviors. If the chances of a simple escape are not good enough, the evaluation extension decides that it is better for the lizard to lose its tail than its life, and that decision is immediately executed anatomically. Amazingly, the severed tail starts a vivid “dancing performance” of its own, which holds the predator’s attention long enough for the main body of the lizard to get away and hide. These complex processes are managed by the leopard gecko’s biological agency, which is hard-wired to achieve survival.

Unlike the lizard, paper does not have agency. But paper can carry information that is indispensable for materializing processes of great consequence for people and society.

What Artificial Intelligence essentially does is manufacture a huge number of folders, each containing a separate, theoretically viable solution to the given problem. Then comes the process of evaluation (which itself may also use AI-based tools), whose outcome is guided by a hierarchy of values. As a result, from the shelf where all the folders are stored, one folder is selected as the “winner”, the project to be materialized. It is important to note that this hierarchy of values is not generated in an evolutionary process, nor is it inherent to the computational hardware working on the project. The computer system and the values that determine the outcome of the evaluation are existentially detached from each other, because the hardware does not belong to the informational process. As Professor Margaret Boden put it, the computer “doesn’t really care”. The selected folder is pronounced the best only because it best matches a set of criteria that have been predetermined, encoded in machine language, and implanted into the computational process by a human agency. But even the stage of selection, which is already beyond the definition of AI, cannot be dangerous by itself: in our non-digital narrative it is equivalent to just one more piece of paper, bearing the signatures of some group of authorized people.
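That detachment is easy to picture in code. In the minimal sketch below (the folders, weights, and names are invented purely for illustration), the same mechanical selection routine crowns different winners depending solely on which human-supplied hierarchy of values is implanted:

```python
# Two human-supplied value hierarchies, one indifferent selection routine.
# All data here is invented for illustration.

folders = [
    {"name": "modest-plan", "cost": 2, "benefit": 4},
    {"name": "ambitious-plan", "cost": 8, "benefit": 9},
]

def select(folders, weights):
    # The computation itself is indifferent; the criteria live in `weights`.
    return max(folders, key=lambda f: sum(weights[k] * f[k] for k in weights))

frugal_values = {"cost": -1.0, "benefit": 1.0}
bold_values = {"cost": -0.1, "benefit": 1.0}

print(select(folders, frugal_values)["name"])  # -> modest-plan
print(select(folders, bold_values)["name"])    # -> ambitious-plan
```

Nothing inside the computation prefers either outcome; swap the implanted weights and the “best” folder changes.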

Then comes the third and final phase: the materialization of the selected solution. Materialization implies changes in the physical dimensions of our world, and changes of this kind are always consequential (not necessarily here and now). That means we have created a tool: an AI-enhanced tool with a human agency implant that determines the goals of its behavior based on a hierarchy of values. It is not that perfect solution that should worry us about the future of humankind: it is the wrong human hands that could select that particular folder, the wrong desk where that folder could be opened, and the wrong table where the materialization of its contents could be discussed and triggered. And all of this has nothing to do with AI. It has everything to do with humans, and nothing but humans.

It is believed that the first toolmakers, some 2.5 million years ago, were Australopithecus garhi in East Africa. Since then we, humans, have never stopped inventing new tools and improving those we have already invented. Now we are performing this same, absolutely routine exercise with Artificial Intelligence. Why, then, all these agitated discussions about AI, Singularity, and the end of humanity?

Because one day, while joyfully playing with our ever newer toys, we discovered that we had made a toy that outpowers the moral constructs of human nature, and we began to sense, subconsciously, that the newest toy already in our hands presents a kind of danger whose face we have never seen before. Danger itself is always fearful, but for the human mind there is nothing more dreadful than an unknown danger. Yes, we have had nuclear weapons for some time, and now we also have hypersonic missiles, but we fear Artificial Intelligence most because it is associated with generating behavior that can become incomprehensible to us and, therefore, unpredictable. The yet unknown power of Artificial Intelligence can augment and multiply the impact of many endeavors: the noblest intentions of great visionaries for the bright future of humanity, but also the damage that sick minds, obsessed with greed or with ambitions for power and ruling the world, can inflict on humans.

But what force compels us to dwell predominantly on those gloomy perspectives envisaging the end of our world, and so often to neglect the many ideas, backed by perfectly realistic argumentation, pointing to the far more optimistic prospect that AI can bring humankind the long-coveted abundance?

The answer is that since the beginning of history we have lived in tribes; we belong to tribes, and our mindsets are forged by tribal cultures into the delusion that our own tribe of Homo sapiens is exceptional and superior to every other tribe of Homo sapiens, and that we are therefore entitled to cheat, exploit, plunder, attack, invade, and kill members of other tribes. This tribal mode of Humankind’s existence has always pushed us to use our toys as weapons. And now that our weapon-toys have become too powerful, the existential danger to Humankind has immensely increased. No matter how prudent the big state actors might be while playing their geopolitical games for the civilizationally meaningless goal of world dominance, each cherishing the hope that a lucky moment might come when the configuration of physical power allows one of them to outwit the others, we now know that if it ever comes to that, the result will always be the same: annihilation of life on the planet.

Throughout the four billion years of evolution, danger has been the crucial test for survivability. That makes survivability the ultimate measure of perfection. Ray Kurzweil considers the neocortex a great tool of survival because of its capability to “invent new behaviors” and thus adapt to changed circumstances. This is how mammals survived the mass extinction at the end of the Cretaceous Period, which wiped out biological species in huge numbers. If Nature has endowed us, Homo sapiens, with the most advanced version of the neocortex, then we should have pretty good chances of surviving the Artificial Intelligence era, provided we are able to invent the adequate new behavior: a new mode of humans living together on the planet, different from the contemporary mode in which hordes of Homo sapiens fight each other.

AI, just like any other new technology, cannot pose a threat to Humanity.

But our inability to wake up from the spell of tribal thinking, which has already turned all technologies into a nightmare, can.

If we, humans, change nothing in the tribal mode of confrontation in which we now exist, AI by itself will change essentially nothing. But, as an extremely powerful tool, AI will make our current problems much graver and existentially riskier.

--

Lubomir Todorov is a researcher and lecturer in future studies, anthropology, artificial intelligence, and geopolitics, and founder of the Universal Future civilizational strategy.