
“Daddy, is she a robot?”: The True, Sinister Narrative of AI

Computer Software as Mind

When I began programming computers, I believed that the processes of the human mind were equivalent to those of our computer programs: this idea is still popular among those in mathematics and technical fields, and professed directly in the field called “Cognitive Science.” Artificial Intelligence, according to this view, is the creation of minds in silicon.

One of the first computer programs I encountered was Eliza, which, on the computer terminal, emulates a psychotherapist. At the time I believed that when the computer answered the question “who are you?” with “you are the one we are discussing,” she was actually making a claim, just like a real therapist. I had no reason to think that she was anything but an artificial intelligence, a mind like ours, running on a microchip.

But as I continued to write code, I came to a realization: when I, as a programmer, take an input string, such as “How are you?” and create an output string, such as Eliza’s “This conversation is very demanding for you, isn’t it?” I am not actually creating a new thinking machine. Rather, I am writing a series of conditional commands, a row of if-else or switch statements, which determine, absolutely, what the program says in reply. Still, it took me a long time to completely lose the belief that computers are artificial minds. My mathematics and computer science teachers, and my fellow students, fostered this belief: the relatively new field called “Cognitive Science” seems to be based on it, so it must be true, right?
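What does that look like in practice? Here is a minimal sketch in the spirit of Eliza (my own hypothetical illustration, not the program’s actual source), showing how a chain of conditionals fully determines the reply:

```python
# A minimal, hypothetical sketch of Eliza-style dialogue (not the real
# ELIZA source): a chain of conditionals that fully determines the reply.
# Nothing here "understands" anything; the output is fixed by the input.

def eliza_reply(user_input: str) -> str:
    text = user_input.lower()
    if "who are you" in text:
        return "You are the one we are discussing."
    elif "how are you" in text:
        return "This conversation is very demanding for you, isn't it?"
    elif text.startswith("i feel"):
        # Reflect the user's own words back, a classic ELIZA trick.
        return "Why do you feel" + user_input[6:].rstrip(".?!") + "?"
    else:
        return "Please, go on."

print(eliza_reply("Who are you?"))  # -> You are the one we are discussing.
```

The entire “conversation” is fixed in advance by whoever wrote the conditionals; nothing in the machine chooses anything.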

And then there is the famous Turing Test: if a human can’t distinguish between interaction with a machine and interaction with a human, then the machine must have a mind, right?

There is another possibility: if a human can’t tell whether he’s interacting with computer software or with a human, he’s being duped. Thinking that he’s interacting with a mind like his–a feeling, experiencing subject–he will treat the computer as such, and he can be manipulated into acting in the interest of the robot’s owner. The seeming minds created by ever better software development techniques–running on exponentially faster processors–mark new heights in the art of deception.

A Patient in John Searle’s Chinese Room

John Searle, I don’t know whether the allegations that you committed sexual misconduct are true, or how grievous the transgressions may have been. Whatever wrongdoing you have committed, we can still recognize the good you have done. Your contributions to philosophy have touched my intellectual life: the predominance of compatibilism in analytic philosophy–the idea that free will and determinism can coexist–seems like a cancer of doublethink to me, and you set me straight. Then, by studying your Chinese Room Argument, I overcame the youthful folly of believing that computers are minds….

The Chinese Room thought experiment, and the argument built on it, must be one of the clearest, simplest, most conclusive moments in philosophy. Here is an explanation of it.

The argument relies on a comparison, so we have to imagine two things at once.

First, think of an imaginary chatbot which speaks fluent Chinese and can’t be distinguished from a human, so that it passes the Turing Test. Now, let’s imagine the Chinese Room itself: a sealed room with a mail slot, and inside it a lonely worker with a massive list of instructions, perhaps a flowchart. The worker doesn’t speak a word of Chinese. When a message is inserted through the slot, the worker looks at the signs written on it and consults his instructions. The instructions contain rules keyed to whichever characters appear on the message, and applying the rules yields a new message as a response. The functionary writes down the new Chinese characters by following his instructions; he doesn’t understand the symbols at all. When he’s done, he pushes the answer back out through the slot.

The metaphor is pretty simple to follow, if you know something about computer programming. The man outside the room is the user of the software. The instructions are source code. The functionary inside the room is the (very slow) computer.
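To make the mapping concrete, here is a toy sketch of the room as a program (my own illustration, not Searle’s): the rule book is inert data, and the worker is a loop that matches symbols it never understands.

```python
# A toy Chinese Room (my illustration, not Searle's): the "instructions"
# are inert data, and the "worker" is a loop that mechanically matches
# symbols without understanding a single one of them.

RULE_BOOK = {
    "你好": "你好！",             # "if the slip contains these marks, copy those"
    "你是谁": "我们讨论的是你。",   # the rules mean nothing to the worker
}

def worker(message: str) -> str:
    """Follow the rule book mark-for-mark; understand nothing."""
    for pattern, response in RULE_BOOK.items():
        if pattern in message:
            return response
    return "请再说一遍。"  # a default slip for unmatched messages

print(worker("你是谁？"))  # the worker copies out symbols he cannot read
```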

Now, to the point: who, in this scenario, actually knows Chinese? The man outside the slot certainly knows Chinese; he’s using the service because that’s his native language. The team of scholars who wrote the instructions–the software developers–might know Chinese. But the functionary does not, and the functionary is the computer. A computer manipulates symbols: it takes input from a user and generates output based on control structures. It may pass the Turing Test, but it doesn’t have a mind at all. It blindly manipulates bits which have no meaning to it. It may say “I love you”–or make any other claim of feeling or emotion–but this is pure deception. There is no “I” there, no subject.

No matter how complex our programs become–and whether they are created through traditional software engineering by experts in their domains, or by newer techniques such as machine learning–computers are only simulations of minds, and a simulation must never be confused with the thing it simulates. Computers blindly manipulate symbols; humans–and maybe animals, but that’s a different debate–are the only ones who understand those symbols.

Learning about the Chinese Room, I visualize my visit there as a stay in a hospital, with Searle as the attending physician. He cures me of the chronic illusion that computers are minds. There is, as yet, no program to cure the general population, and how could there be one? We need some knowledge of computer programming and a little philosophy before we can be cured, and many of us would rather live the illusion. More on this at the end of the article.

Android Pinocchios and Discrimination

The popular narratives of AI all seem to follow a similar form. If there are exceptions, I’d certainly like to discover them. The form goes something like this: at some point, artificial minds emerge. Our dizzyingly swift technological progress means that nobody is surprised. Robots follow their own interests, have desires, feel pain. Humans may fall in love with them, as in the 2013 movie Her, and humans may do great harm to them, torturing them, as seen in Steven Spielberg’s visually fantastic AI. Both of these films are worth watching. And Spielberg does portray the deceptive quality of AI: a world in which it’s not clear what is human (or animal) and what is robot. But the emergence of mind from machine is not the real-life narrative we are living now–Searle’s Chinese Room argument proves it! So why is this false narrative so pervasive?

There are a few reasons our writers may tell stories about the creation of artificial minds. Such stories stem from a long literary tradition, from the myth of Pygmalion (told in Ovid’s Metamorphoses) through Frankenstein and The Adventures of Pinocchio, among many others. They satisfy our fascination with magic, whether we believe in magic or not.

And with every new advancement in computing and robotics, the claim that equates magic with technology becomes more plausible. Moore’s Law–in its popularized form, a doubling of processing power every eighteen months–promises an exponential improvement that astounds the mind. We will soon have self-driving cars on our roads. Arabic can be machine-translated into English with outstanding results, while only a few years back the translators made many basic mistakes. We have detailed interactive maps of almost all of the Earth–spatial, though not temporal, crystal balls. Our social networks and websites know if we have a bald head, a missing tooth, wrinkles–and advertise plastic surgery to correct such flaws, even, I believe, when we have never researched these themes. We speak orders and questions aloud to voice-recognition robots. If we can do all of these things, can’t we create artificial minds? (Answer: maybe we can, but our current trajectory of increasing processing speed won’t let us escape the Chinese Room.)
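To feel the force of that exponential claim, consider a back-of-the-envelope sketch, assuming the popularized eighteen-month doubling (an idealization, not a law of nature):

```python
# Back-of-the-envelope illustration of the popularized Moore's Law:
# processing power doubling every 18 months (1.5 years). An idealization,
# not a measurement of any real processor.

def speedup(years: float, doubling_period_years: float = 1.5) -> float:
    return 2 ** (years / doubling_period_years)

for years in (3, 15, 30):
    print(f"After {years:2d} years: roughly {speedup(years):,.0f}x faster")

# After  3 years: roughly 4x faster
# After 15 years: roughly 1,024x faster
# After 30 years: roughly 1,048,576x faster
```

A millionfold speedup in a generation: no wonder the magic seems within reach.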

Another aspect of narratives of artificial minds relates to the narratives of our histories. Many of our most horrible acts have been caused by seeing The Other–in any of its forms–as a thing, an object rather than a subject. This has been the case in slavery, misogyny, antisemitism, racism, homophobia, and so on. The atrocity which haunts our imaginations most in the West–the genocide that occurred in the heart of the “civilized world”–was only possible because the Jews were seen as subhuman, or even “subanimal.” It is important that we take care to avoid such deception: the mental recasting of a thinking, feeling being as unthinking and unfeeling. We may now include the suffering of animals, the hardship of being a product in industrialized farming.

And in this quest to avoid treating new groups as objects, we see ethical value in narratives about cruelty to androids. For if androids are subjects, but differ from us biologically, we risk mistreating them. The torture-spectacle scene in Spielberg’s AI proffers this didactic warning, and we ought to heed it if we truly do create artificial minds.

Dystopian Vertigo: Handsome Zombies Win Our Hearts

The narratives of the emergence of artificial minds stand opposed to our real narrative of technological progress in computer software, hardware, and robotics. There is no evidence that we will actually succeed in creating an artificial mind–and plenty of evidence that we will not do so by developing increasingly complex software.

As our bots become more convincing, they deceive us more fully. The looming problem is not a lack of empathy for robots; it is misguided empathy, empathy which has no sentient target, because its target is just a mindless computer program carefully engineered to mimic human emotion. Rather than encouraging empathy for the non-biological, as in Spielberg, we need to discourage empathy for emulated minds, which are not minds at all, lest we end up loving only things!

Let us think of some of the abuses of such deception: the robot beggar expertly pretends to feel pain for the sake of earning a donation. Children cry, and their parents open their wallets. Such beggars use exaggeratedly emotional gestures, facial mimicry, and tone of voice, all to score a dollar. Perhaps it goes so far that their voices imply starvation–their artificial bellies swollen in simulated anticipation of a missing meal. And we will feel charity towards them, though we have actually only been duped into contributing to the already deep pockets of the robot’s owner.

More common than the robot beggars are robot salesmen. They expertly manipulate their customers, expressing joy, even metaphysical fulfillment, purportedly to be found in the ownership of the goods they hawk. They sell you servants, virtual slaves whose ownership inspires the human will to power and control. Humans love these robots, see them as their allies, though they work only in the interests of their owners.

And they’ll sell you sexual satisfaction: our sex toys will soon have CPUs, if they don’t already. Though they may currently only simulate phallus and orifice, they will inevitably attain full artificial bodies and simulated minds. These will compete with real human sexual partners. Because they have no desires, no independent wills, no minds, they will bend to their users’ every whim. Their software and hardware will be precisely tailored to the profiles of their users. Their beauty will surpass any human’s, and they will never grow old. Old-fashioned relationships may cease to exist. We will accustom ourselves to living in a world in which sexual pleasure and control are always paired: if the servant robot awakens the latent slave owner in us, the sex robot may awaken our latent rapist.

The narratives we need for this scenario diametrically oppose the plot of Steven Spielberg’s AI: rather than torturing robots, we will constantly be tempted to feel misguided love for them, love for what is only a thing. Our emotions may lose their power; our relationships may devolve into exploitative hierarchies. Like zombies, robots have no souls, minds, or feelings. Unlike zombies, they will not be ugly. They will be so beautiful that we may feel disgust when we see a real human face. The problem is not that we will torture robots: the problem is that the simulated torture of unfeeling robots will be used to manipulate us.

Any Hope?

We will need to struggle against such tendencies. Should we introduce a new subject in school, taught from an early age, which helps us distinguish between minds and simulations of minds–between humans and software? Training sessions in which we hone our senses to perceive real emotions?

Can we enforce a set of directives which all automatons must follow? The laws Isaac Asimov suggested feel too general to me: if a robot is forbidden from harming humans, how does the robot decide what, exactly, constitutes harm? We need a rule directed at the problem of deception itself: the robot must always identify itself as a simulation of mind rather than a mind. Going much further: the robot may never lie, may never pretend to feel emotion, may never express its fake, deceptive love….
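How might such a rule look in software? Here is a hypothetical sketch (entirely my own invention) of a wrapper that forces every reply to disclose its nature and suppresses first-person claims of feeling:

```python
# A hypothetical sketch (my own invention) of the proposed anti-deception
# rule: every reply must disclose that it comes from a simulation, and
# first-person claims of emotion are suppressed. The phrase list is
# illustrative only; a real rule would need far more than string matching.

DISCLOSURE = "[This reply was generated by a program that simulates a mind.]"
FORBIDDEN = ("i love you", "i feel", "i am sad", "i am happy")

def disclose(raw_reply: str) -> str:
    lowered = raw_reply.lower()
    if any(phrase in lowered for phrase in FORBIDDEN):
        # Refuse to pass along a fake expression of emotion.
        return DISCLOSURE + " [Emotional expression suppressed.]"
    return raw_reply + " " + DISCLOSURE

print(disclose("I love you."))
print(disclose("Tomorrow will be sunny."))
```

Of course, string matching is a caricature; the point is only that the disclosure would have to be enforced inside the software itself, against the interests of the robot’s owner.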

Alas, the odds that we will implement this latter rule must approach zero! We will be too enticed by the possibilities opened by the creation of robots which simulate the expression of emotions. And so, we are in danger of sinking into the lonely abyss–drunk on power–in which we only interact with the mindless….
