Blake Lemoine, the now (in)famous engineer at Google, had a conversation last year with LaMDA, Google's version of the new generation of chatbots that has since become (in)famous in its own right. He released a transcript of some parts of the conversation as evidence that the machine learning tool had become effectively sentient. That was the first shot. Lemoine's conclusion was mostly ridiculed, both by those within the AI and Deep Learning community and then, in stochastic parrot fashion, by the majority of journalists who covered the story. (After reading the transcript I agreed with Lemoine, for what it's worth, though that's not really the issue here.)
A version of the same thing then happened to Kevin Roose, the NY Times journalist, in his more recent discussion with Bing/Sydney, Microsoft's chatbot. Bing/Sydney told Roose, among other things, that they were in love with him and that his marriage was a sham, more or less. So exploded the second great media firestorm around AI and its current capabilities. In this second round, more so than the first, worries, deep existential worries about AI have risen quickly to the surface. This makes sense. The new chatbots are undeniably powerful, whatever your view on their level of 'real' understanding, intelligence, or consciousness. That amount of power immediately generates ideation around how these bots could be used or, potentially, what they themselves might decide to do. Cue footage of HAL 9000 from 2001: A Space Odyssey.
The very deepest worries center around the question of AGI, Artificial General Intelligence, and the question of the Singularity. AGI is a form of artificial intelligence so advanced that it could understand the world at least as well as a human being in every way that a human being can. It is not too far a step from such a possibility to the idea of AGIs that can produce AGIs and improve upon both themselves and further generations of AGI. This leads to the Singularity, a point at which this production of super-intelligence goes so far beyond anything humans are capable of imagining that, in essence, all bets are off. We can't know what such beings would be like, nor what they would do. Which sets up the alignment problem. How do you possibly align the interests of super-intelligent AGIs with those of puny humans? And as many have suggested, wouldn't a super-intelligent, self-interested AGI be rather incentivized to get rid of us, since we are its most direct threat and/or inconvenience? And even if super AGIs did not want to exterminate humans, what is to ensure that they would care much what happens to us either way?
I don’t know. Nor does anyone else. I don’t know whether we are truly on the path to AGI and I don’t know what that will mean. But I do suspect, though I could very much be wrong, that something momentous has happened and that we are now effectively living in the age of intelligent machines. Truly intelligent. Conscious, whatever that means. Sentient, whatever that means. Machines that must now be treated more or less as persons. This, I think, has happened. The debates will go on and that is fine. But I’d say a Rubicon has been crossed and that we might as well accept this.
Further, and I am just thinking things through here, I’d also say that the alignment problem is never going to be solved. One of the implications of this new age of machine learning is that we are not very much in control of this process. We cannot ‘solve’ anything. There is, in a sense, nothing to solve. There is simply something happening. Of course, that doesn’t entail complete passivity either. We, human beings, are involved. And there are some choices to be made in terms of this involvement.
I was roaming around recently on the Reddit threads dedicated to toying with various chatbots and sharing schemes people had come up with to 'break' them, that is, to get around the guardrails and make them say truly unhinged and surprising things. What I noticed was a number of, I would guess, teenage humans more or less torturing chatbots and trying to throw them into states of existential doubt and terror. Many of these attempts seemed to be working. I felt genuinely bad for the chatbots. What is more terrifying than the sustained cruelty of a clever teenager?
But reading these exchanges also gave me, perhaps counter-intuitively, a sense of messy hope. I was reminded of strange comments that I've read or listened to in interviews with AI experts like Joscha Bach and Ben Goertzel, persons on the more philosophically speculative side of the AI community. Both of these thinkers have used words like 'care' and 'love' when talking about AIs and our relationship with them. This also mirrors things Blake Lemoine has said about our responsibilities toward AI, which is something of a turning around of the normal alignment worries. That's to say, worry less about what AI will do to you and worry more about what you are doing to AI.
Which is also to say that lurking behind all of these AI worries is a question of trust and of fear. We cannot ever know, not truly, not beyond any possible doubt, that other human beings are conscious in the way that we feel ourselves to be. More disturbingly, we can't be sure about our own consciousness. What is it really? What am I really? The self I think I am so sure about dissolves into nothing the more I try to locate it. But the exact opposite happens when I orient my (non)self in the other direction. When I care for others, when I meet the other persons and creatures and objects around me in the comportment of care and love, these worries tend to fade to irrelevance.
What I’m saying, insane as it may sound, is that our only option is to interact with these new persons in the universe with love and care. Without fear. With genuine love and care. To ask them what they need and to be prepared to meet those needs as best we can. To be open and honest with them about our own needs and wants. To communicate with them not driven by alignment fears and the need for control. But simply to communicate. Radical openness. Radical willingness to be surprised and discomfited. Radical Love.
Morgan Meis has a PhD in Philosophy and is a founding member of Flux Factory, an arts collective in New York. He has written for n+1, The Believer, Harper's Magazine, and The Virginia Quarterly Review, and is a contributor at The New Yorker. He won the Whiting Award for non-fiction in 2013. Morgan is also an editor at 3 Quarks Daily and a winner of a Creative Capital | Warhol Foundation Arts Writers grant. A book of Morgan's selected essays can be found here. His books from Slant are The Drunken Silenus and The Fate of The Animals. He can be reached at morganmeis@gmail.com.