(Bloomberg) -- Media coverage of artificial intelligence tends to invoke tired references to The Terminator or 2001: A Space Odyssey’s HAL 9000 killing a spaceship’s passengers. Hollywood loves a story about a sentient robot destroying humanity in order to survive. In recent days, Google researcher Blake Lemoine grabbed headlines for getting suspended after releasing transcripts of a “conversation” with the company’s Lamda artificial intelligence research experiment. Lemoine believes that Lamda is sentient and aware of itself, and describes the machine as a “coworker.” He told The Washington Post that part of his motivation for going public was his belief that “Google shouldn’t be the ones making all the choices” about what to do with it. The overwhelming reaction among artificial intelligence experts was to pour cold water on these claims.

What is Lamda? It’s an acronym for language model for dialogue applications. As the name suggests, it’s a tool designed to create a “model” of language so people can talk to it. Like similar experiments, such as GPT-3 (Generative Pre-trained Transformer 3) from Elon Musk-backed OpenAI and Google’s earlier BERT (Bidirectional Encoder Representations from Transformers), it’s best thought of as an amped-up version of the algebra you learned at school, with a twist. That twist is called machine learning, but before we get to it we have to go back to the classroom and talk about algorithms.

What is an algorithm? An algorithm is a step-by-step process that solves a problem. Take an input, apply some logic and you get an output. Addition, one of the most basic problems in mathematics, can be solved with many different algorithms. 
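
To make that concrete, here is a small illustrative sketch of our own (not taken from the article): two different algorithms that both solve the same addition problem, one leaning on the built-in operator, the other counting up one step at a time.

```python
def add_builtin(a: int, b: int) -> int:
    # Algorithm 1: use the processor's own addition.
    return a + b

def add_by_counting(a: int, b: int) -> int:
    # Algorithm 2: start at a and count upwards b times, one step at a time.
    result = a
    for _ in range(b):
        result += 1
    return result

print(add_builtin(2, 3))      # 5
print(add_by_counting(2, 3))  # 5 -- same answer, reached by different steps
```

Same input, same output, different logic in between: that is all "different algorithms" means.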

Humans have been using algorithms to solve problems for centuries. Financial analysts spend their careers building algorithms that attempt to predict the future and tell them whether to buy or sell shares to make money. Our world runs on these “traditional” algorithms, but recently there has been a shift towards “machine learning,” which builds on those traditional ideas.

What is machine learning? Machine learning tools take inputs and outputs and create their own logic to connect the two, so they can come up with correct outputs in response to new inputs. Google and OpenAI’s aims are to build machines that can learn the logic behind all human language, so the machine can speak in a way that humans can understand. The machine itself does not truly “understand” what it’s doing. Instead it’s following an incredibly detailed set of rules that it has invented, with the help of another set of rules invented by a human.
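
A minimal sketch of that idea, ours rather than the article’s and deliberately toy-sized: instead of a programmer writing the rule, the program is shown example inputs and outputs and adjusts its own numbers until its guesses match. Here the hidden rule it has to discover is y = 2x + 1.

```python
# Toy "machine learning": learn the hidden rule y = 2x + 1 from examples alone.
examples = [(0, 1), (1, 3), (2, 5), (3, 7), (4, 9)]  # (input, correct output) pairs

w, b = 0.0, 0.0  # the model's two adjustable numbers, starting from nothing
for _ in range(5000):
    for x, y in examples:
        guess = w * x + b
        error = guess - y
        # Nudge both numbers slightly in the direction that shrinks the error.
        w -= 0.01 * error * x
        b -= 0.01 * error

print(round(w, 2), round(b, 2))  # ~2.0 and ~1.0: the rule was never written by hand
print(round(w * 10 + b, 1))      # ~21.0: a sensible output for an input it never saw
```

The “logic” the machine ends up with is nothing more than those two learned numbers; real language models work the same way, only with billions of them.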

Other big differences between “traditional” algorithms and “machine learning” techniques lie in the quantity of data used to create an algorithm and how that data is processed. To work, machine learning tools are “trained” on billions of books, online articles and sentences shared on social media, collected from the public internet and other sources. Increasingly, the result of that training is a model that can respond to human beings in an uncannily human way, creating the illusion of a conversation with a very clever being.
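
Here is a deliberately tiny sketch of what that training amounts to (our illustration, nowhere near the real systems in scale or sophistication): “training” means counting patterns in text, and “responding” means replaying the most likely pattern.

```python
from collections import Counter, defaultdict

# A minuscule "training set"; real models ingest billions of sentences.
corpus = "the cat sat on the mat . the cat ate the fish . the dog sat on the rug ."

# Training: for every word, count which words tend to follow it.
follows = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

# "Responding": repeatedly emit the most likely next word.
word, sentence = "the", ["the"]
for _ in range(4):
    word = follows[word].most_common(1)[0][0]
    sentence.append(word)
print(" ".join(sentence))  # "the cat sat on the"
```

Scale the corpus up to a large chunk of the internet and the counts up to hundreds of billions of learned numbers, and the replayed patterns start to sound like conversation.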

This training requires huge amounts of computational power. Some estimate OpenAI’s GPT-3 cost about $20 million simply to create its model, and every time you ask GPT-3 to respond to a prompt, it burns through many hours’ worth of computer processing time.

So you’re actually talking to humanity? Lemoine is right when he says that Lamda “reads” Twitter, although “ingests and processes” is probably a more accurate description. And that’s how problems of bias creep in. The machine’s entire understanding of language is based on the information it’s been given. We know that Wikipedia is “biased” towards a Western viewpoint, as only 16% of its content about sub-Saharan Africa is written by people from the region. Machine learning inherits this bias because such models almost certainly rely heavily on Wikipedia’s data.

Why is everyone so excited about machine learning? As computational power increases and the cost of that processing falls, machine learning will get more powerful and more widely available, so it can be applied to more problems. Right now your smart speaker is mostly useful for setting timers or playing music. But airlines and shipping companies have used traditional algorithms for decades to maximize the efficiency of their ships and aircraft. The dream is that with enough cheap computing power, machine learning tools can devise new treatments for diseases like cancer, enable fully autonomous self-driving cars or create a perfect nuclear fusion reactor design.

So what’s actually happening when I talk with Siri or Alexa or Lamda? When you think you’re “conversing” with a machine language model, you’re actually talking to a very complicated mathematical formula that has determined in advance how it should respond to your words, with the help of calculations based on trillions of words written by human beings. Artificial intelligence tools like GPT-3 and Lamda are designed to solve specific problems like speaking conversationally to humans, but the ultimate goal of companies like Google’s DeepMind is to create something called “artificial general intelligence” or AGI. In theory an AGI would be able to understand or learn any task that a human can, leading to a dramatic acceleration in problem solving.
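
A rough sketch of the “mathematical formula” point, using made-up numbers of our own rather than anything from a real assistant: the reply you hear is simply whichever continuation the formula scores highest, turned into a probability and picked by arithmetic, not by a mind deciding what it wants to say.

```python
import math

# Hypothetical, hard-coded scores standing in for a trained model's output.
# A real assistant computes millions of such numbers; the principle is the same.
candidate_replies = {
    "Setting a timer for ten minutes.": 4.1,
    "Playing your morning playlist.": 1.3,
    "I'm sorry, I didn't catch that.": 0.2,
}

# Convert raw scores into probabilities (a softmax), then pick the most likely reply.
total = sum(math.exp(score) for score in candidate_replies.values())
probabilities = {reply: math.exp(score) / total for reply, score in candidate_replies.items()}

best = max(probabilities, key=probabilities.get)
print(best, round(probabilities[best], 2))  # the "chosen" answer is just arithmetic
```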

Could a machine learning-powered artificial intelligence eventually become sentient? A machine that has an inner mind and can feel or express emotions might one day be possible, but the expert consensus is that it’s impossible with the current state of technology. Here’s what some had to say:

  • Cognitive scientist Steven Pinker said Lemoine is confused. Writing on Twitter, he said Lemoine “doesn't understand the difference between sentience (aka subjectivity, experience), intelligence, and self-knowledge. (No evidence that its large language models have any of them.)” These three concepts are what Pinker believes are required for any being to be conscious, and in his view Lamda is far from clearing any of those bars.
  • Gary Marcus, author of “Rebooting AI,” put it more bluntly in a blog post entitled “Nonsense on stilts”: “In truth, literally everything that the system says is bullshit.” Marcus says there is no concept of meaning behind Lamda’s words; Lamda is just “predicting what words best fit a given context.”
  • Ilya Sutskever, chief scientist of OpenAI, tweeted cryptically in February that “it may be that today's large neural networks are slightly conscious.” Murray Shanahan, a research scientist at DeepMind, replied that they are slightly conscious “in the same sense that a large field of wheat may be slightly pasta.”
  • It’s worth reading Alex Hern’s experiments with GPT-3, another natural language machine-learning program, which show how easy it is to generate complete and utter nonsense if you tweak your questions. Randall Munroe, author of the web comic XKCD, had an informative conversation with GPT-3 playing William Shakespeare too. Who knew that if he were alive today, he would add Shrek to Romeo and Juliet’s balcony scene?

So nothing to worry about then? Tom Chivers, author of “The AI Does Not Hate You,” argued that the thing we should really worry about is the competence of these systems, not their sentience. “AI may or may not be becoming conscious, but it is certainly becoming competent. It can solve problems and is becoming more general, and whether or not it’s got an inner life doesn’t really matter,” he said. There are already reports of AI-powered autonomous drones being used to kill people, and machine learning-enabled deepfakes have the potential to make disinformation worse. And these are still early days.

The doomsday bomb in Dr. Strangelove didn’t need to be intelligent or sentient to accidentally end the world. All it needed was simple logic (if attacked by the Americans, explode) applied in a really stupid way (the Soviets forgetting to tell the Americans it existed). As Terry Pratchett wrote, “real stupidity beats artificial intelligence every time.”

©2022 Bloomberg L.P.