Technology - the set of tools we use to interact with the world and to improve our relationship with it - has always been an extension of our body: the wheel allows us to carry heavier loads than our muscles can bear; the lens allows us to see details our eyes cannot pick up; and so on. Technology, therefore, is a way to embrace new possibilities, going beyond the limits imposed by our body. Over the centuries, thanks to the development of science and technology, our tools – our creations – have grown ever more powerful. They widened the gulf between themselves and their primitive "prosthetic" forms, i.e., the simple extensions of our body, to the point of taking on entirely new functions. Today, the technology we created in this precise way, and with this specific purpose, not only performs these tasks better than any human could; above all, it does things that are impossible for us in every form. Consider this: who among us is able to fly? Who among us can memorize human history in all its intricate detail, as Wikipedia does?
Technology, therefore, is a potent human response to nature’s limitations – whether those of our individual body or those of the forces of the ‘external’ world. The resulting increase in our capacities, however, has become exponential. Exactly twenty years ago, The Matrix premiered; the movie soon became a cult classic because it dealt with the man–machine relationship through an exemplary narrative – with some special effects that were thrilling for 1999 – interwoven with evergreen philosophical suggestions: the indiscernibility of truth from lies, of reality from fiction, of sleep from waking. The relationship it depicts is a slavish dystopia, in which machines successfully deceive and enslave mankind.
Despite the two decades that have passed, The Matrix still makes for a compelling re-watch. Aside from its Terminator-style logic, the film sheds light on a much less recognized theme of the genre – the question of perceiving the threat. It imagines that, while the machines are enslaving us, we as a species lose the capacity even to conceive that technology is endangering our existence. In fact, the opposite is true: thanks to technology and its inventions, our existence is safer and our lives are longer. The Matrix, therefore, was very wrong on this point: the unease we feel towards our machines does not arise from feeling threatened.
In nineteenth-century England, a workers’ protest movement arose that identified machines as the cause of the working class’s ills. This movement, which set out to sabotage machinery, became known as Luddism, after its mythical leader Ned Ludd, who is said to have smashed a stocking frame in anger. Neo in The Matrix acts like Ned Ludd; but humans, generally, are not rebelling against the machines. Our technology no longer has exclusively "prosthetic" results; it has acquired a new 'nature' (lato sensu). Could our unease arise, then, from the fact that, mindful of the millennial efficacy of our invention-prostheses, we are trying to make our creations ever more similar to ourselves? Here we find a very problematic knot: the more we try to bring machines closer to us, the more we see them become something different from us. This contradiction is easy to verify – on the one hand, we want machines to resemble us; on the other, because we created them to be better than us (to improve ourselves), they work with far greater power, precision and perfection. This is why we feel them to be both equal and different.
A first hypothesis thus appears: the feeling of unease originates, and is nourished, whenever we realize that we no longer know what distinguishes us from the machines qualitatively (not quantitatively). We have established that we are certainly not threatened, and that technology generates a highly potentiated version of humankind. But can we ever say: "This is the line a machine could never cross; this is the line demarcating the inimitable human substance"? In other words: can we clearly define our qualitative superiority?
On January 29, 2019, a group of researchers from Columbia University, led by Dr. Nima Mesgarani and assisted by Dr. Ashesh Dinesh Mehta (a neurosurgeon at the Northwell Health Physician Partners Neuroscience Institute), published a paper in Scientific Reports. The group created a system that reads a patient’s brain activity while the patient listens to a word and, via a vocal synthesizer, recreates and repeats what the patient has just heard, using the brain activity alone. In other words, the system automatically turns a reading of brain activity into words. The scientists pointed out that the vocoder technology behind their system is the same that consumers already use in Amazon Echo or Apple Siri; at the basis of this research, therefore, there is something very familiar to us. Medically, this is a fundamental step towards restoring the power of speech to patients who have lost it. Many, however, interpreted the news as the first step in a different direction: towards a machine that can understand, think, and speak by reading a person’s mind – the first step towards the end of the privacy of one’s thoughts.
So once again we are looking at creations with the potential to mimic us, and to be better at it. How, then, do we save our human uniqueness? The conclusions readers drew from this research can be understood by comparing it with a famous thought experiment: the Chinese Room, conceived by John Searle, philosopher of language and mind, in the context of an important debate over the Strong Artificial Intelligence position. A man is alone in a room containing only a manual written in English, with instructions on how to write and manipulate Chinese characters; some paper; and a pen. The room has a slot to the outside – its only opening. Through that slot, Chinese characters (written on small notes) are passed into the room; the man collects them and, following what is written in the manual, processes them and writes other Chinese characters on his blank paper, which he then sends out of the room. The process continues in this way for a long time.
Searle argues that if the person outside the room, who inserts the Chinese characters, knows Chinese, then, seeing the answers that come out, he will deduce that the man in the room also knows the language. In reality, however, the man in the room does not know Chinese; he is merely processing the characters by following the given instructions. Now, in this thought experiment, the room with the man inside represents a computer, while the English manual represents a computer program, written in its own programming language. Through the Chinese Room, therefore, Searle argues that an artificial intelligence can only master syntax (the way symbols are connected to each other), but cannot use a language with its semantics (its meanings). The thought experiment has been very important for the debates it spurred in philosophy, on the mind–body relationship and on consciousness.
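The mechanics of the room can be made concrete with a minimal sketch – a hypothetical rule book, not Searle's own formulation. Here the "manual" is nothing but a lookup table pairing input symbols with output symbols (the specific phrases are invented for illustration); the program follows it flawlessly, yet nothing in it encodes what the symbols mean:

```python
# Hypothetical "manual": a lookup table from input notes to output notes.
# The phrases below are illustrative inventions, not part of Searle's experiment.
RULE_BOOK = {
    "你好": "你好，很高兴认识你",      # rule: if this greeting comes in, send this reply out
    "你会说中文吗": "会，说得很好",    # rule: if this question comes in, send this reply out
}

def chinese_room(note: str) -> str:
    """Return whatever the manual dictates; understanding plays no role."""
    # The fallback reply is also just another symbol string from the manual.
    return RULE_BOOK.get(note, "对不起")

print(chinese_room("你好"))  # the reply looks fluent from outside the room
```

To an outside observer the replies are indistinguishable from competent Chinese, yet the function manipulates strings it has no access to the meaning of – pure syntax, no semantics.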
Now, in the system developed by the Columbia researchers, the computer is simply reading brain reactions according to the instructions it has been given. It processes symbols by transforming them into other, vocal symbols; but we cannot in any way infer that the system understands what the patient is thinking, that it has any grip on the meaning of the language. The system speaks, but it does not understand and does not think; it only executes manipulations of symbols, stirring the grammar without any awareness of the signs’ content. So we can rest easy: mind reading by machines is, and will remain, impossible. It is a qualitative leap that can never be made. Understanding, realizing, grasping meaning all belong to an exclusively human substance. Our uniqueness is therefore preserved.
Or maybe not. Let's be honest: who among us feels completely reassured by this argument? The question we started from is not dissolved once we define this potentially insurmountable difference between humans and artificial intelligence. Our first hypothesis, therefore, cannot satisfy us. To understand the feeling that binds us, willingly or otherwise, to our machines, it is not enough to draw a line between us and them; we ought to focus on another aspect – the nature of our interaction with the world. As so often when we need this kind of wise and fruitful perspective, we can turn to mythology; in this case, to the ancient Greek myth of Prometheus.
Prometheus is humankind’s forger on the divine Mount Olympus: he creates humans and then endows them with the "good qualities" of intelligence and memory. As the creator of humanity, he watches over it with a sense of responsibility. When Zeus takes fire – the knowledge of making fire, and with it metallurgy and science – away from mankind, throwing it into cold and darkness, Prometheus, moved by that feeling, steals the flame and brings it back to humankind. For this, Zeus orders that Prometheus be chained eternally to a rock, where an eagle eats his liver, which regrows each day only to be eaten again.
Promethean shame is a concept formulated by the German philosopher Günther Anders in his work "The Obsolescence of Humankind" (1956). Humankind is a new Prometheus, yet it feels inferior to the machines it has created: they shrink its responsibility. It depends on them, longs to be like them, and is ashamed of its own limitations. The focus of the issue is thus overturned: what is the point of marking an insurmountable difference between man and machine if we then yearn to be like our creatures? Anders had intuitions that only today, with the widespread diffusion of technology in everyday life, can be fully understood. Alexa tells us whom, where, and when we have to meet in the afternoon; Google warns us to change route because the one we usually take is busy; our smartwatch tells us to eat rice instead of the cake we just photographed, because our pulse is already too fast. Technology has made life simpler, healthier, and more organized – better, from many points of view. But that is not the question, and neither is it the point conveyed by Promethean shame.
Promethean shame is the effect of a reversal in the man–technology relationship: the creator aspires to become the creature it has just created. We feel imperfect, and therefore ashamed; our body is subject to the biological laws of evolution, which require millennia to perfect anything. Machines, on the other hand, are plastic and malleable, and can be perfected in a very short time – and, thanks to machine learning, they can do this on their own. They are subject to the more efficient laws we ourselves have enacted. The algorithms that help us make the best choices know more about us than we know about ourselves.
We are individuals: unique, existing without having chosen to, unable to repeat our existence. Compared with technology’s design, seriality, refinement, and reproducibility, this hurts us: that is why we attempt to produce ourselves too. The continuous creation of our images on Facebook and Instagram, the mechanical work on our bodies through fitness, bodybuilding and human engineering – even cloning – are all attempts to mimic the reproducibility of machines. We thereby show that we desire the same seriality and perfectibility we gave our machines; we wish we too were derived from a matrix, rather than locked in battle against one. How does this make us feel? How many attempts do we make to assert our indisputable superiority? Are we certain that we do not make these assertions of superiority precisely to keep that shame at bay – that subtle awareness we have to deal with at every moment? We want, and do, everything to be less human and more a thing. Perhaps we can now understand why that contradictory motion – by which we try to make machines similar to us even though we know from the start that we cannot succeed – is false. A feeling of shame arises when we realize that now, in reality, it is we who are trying to be similar to the machines, and not vice versa – for not all imperfections may be beautiful.
...We have a small favor to ask. Polemics and Pedantics is a non-profit educational venture whose writers work only because of their penchant for the art. If you like our work, please support us by sharing it on social media and helping us reach more people. Remember to subscribe and never miss an update by providing your email on the Contact Page. We don't sell ads, and won't spam you or share your details with anyone. Comments and suggestions are welcome at firstname.lastname@example.org.
About the Author
Francesco Ziveri holds a Bachelor's Degree in Philosophy from the Università Statale in Milan, Italy, and a first-level Master's Degree in Strategic Management for Global Business from the Università Cattolica del Sacro Cuore. With long experience in public debates, local politics and newspapers, he strongly believes in the tools of dialectic and methodological skepticism. His unconditional love for Western philosophy, nurtured by reading, dialogue and writing, carries him on a tireless journey in search of new and alternative points of view on the world. He aspires to grow a mustache as thick as Nietzsche's.