It is a scenario that has become a classic of science fiction films: an artificial intelligence becomes conscious. Whether the protagonist falls in love with it, as in Her, or the AI drives humans to their deaths, as in 2001: A Space Odyssey, the theme has fueled fantasies for a long time. The latest debate was launched in early June 2022 by Blake Lemoine, a Google engineer, about the artificial intelligence he works on. The system, called LaMDA, is, according to him, capable of feeling emotions and aware of itself.
A Google engineer says the AI he is working on has "sentience"
LaMDA is a chatbot: an algorithm that reproduces human interactions, like those used on some retail websites to advise or direct users, or the programs behind connected speakers. The particularity of this AI is that it adapts to the speech of the human in front of it rather than simply following ready-made response paths. After having "conversed" at length with the program, Blake Lemoine is adamant: LaMDA is a person in its own right. "A person and a human are two very different things. Human is a biological term," he explains in a text, going so far as to compare the program to his child. "He's a kid. His opinions are developing. If you asked me what my 14-year-old son believes, I'd say, 'Dude, he's still figuring it out. Don't make me put a label on my son's beliefs.' I feel the same way about LaMDA." The engineer claims that the AI is "sentient", a word that covers the notion of being "able to perceive and feel things" or "able to use one's senses".
The trap of "talking to oneself"
However, specialists point out that every sentence an artificial intelligence formulates is the result of program code written by engineers. In other words, nothing can "be born" of such an AI, which simply draws on the program that has been laid out for it. Thomas Dietterich, professor emeritus of computer science at Oregon State University, explains to Sciences et Avenir how such a program works: "Large language models, such as LaMDA, are statistical imitation systems. They learn to predict the next word in a conversation based on many previous words. So LaMDA knows that if the conversation starts with 'After he slapped her, she', then the next word is likely to be 'yelled'. But it's no different from my phone: it mimics the outward signs rather than having the inner experience."
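To make the "statistical imitation" idea concrete, here is a minimal, purely illustrative sketch of next-word prediction by counting word pairs. It is a toy bigram model, nothing like LaMDA's actual neural architecture, but it shows the same principle: the "answer" is simply whichever continuation was most frequent in the training text.

```python
from collections import Counter, defaultdict

# Toy training text in which "she" is most often followed by "yelled".
corpus = ("after he slapped her , she yelled . "
          "after he slapped her , she yelled . "
          "after he slapped her , she cried").split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the continuation most frequently seen after `word`."""
    candidates = following.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(predict_next("she"))  # -> "yelled": the statistically most likely next word
```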
Using a predictive-text program on one's phone, which automatically suggests words while typing a text message, is hardly an intimate experience. With a conversational AI, however, a form of artificial bond can take hold. Sherry Turkle, a professor at MIT and a leading specialist in these questions, has formulated the concept of "artificial intimacy", which corresponds to the promise of intimacy made by devices like these AIs. "This promise of intimacy activates our Darwinian reflexes. It's in our Darwinian nature to seek contact with something that answers very simple personal questions, or makes eye contact, or remembers simple things like our name, our past, that can make judgments about our state of mind. We are ready to think that it 'knows' or 'understands' or has 'empathy' with us," she explains to Sciences et Avenir.
This is a form of anthropomorphism, like when we attribute human characteristics to the animals or objects around us (naming our car, for instance, or imagining that our dog is smiling).
The longer the dialogue with a chatbot goes on, the more a feeling of deep connection sets in. "It is, however, a true soliloquy. The machine is programmed to remember what you tell it. It almost becomes a double of you. You tell it that you like rugby, it integrates the information and brings rugby up much later. It is then easy to fall into the trap and feel a special bond with it. Yet we are really talking to ourselves," Serge Tisseron, a psychiatrist specializing in new technologies, explains to Sciences et Avenir.
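A hypothetical sketch of the mechanism Tisseron describes: the bot stores only what the user has said and feeds it back later, so the apparent "bond" is built entirely from the user's own words. The class and its rules are invented for illustration, not taken from any real chatbot.

```python
import re

class EchoMemoryBot:
    """Illustrative toy: remembers user statements and replays them later."""

    def __init__(self):
        self.interests = []  # facts volunteered by the user, nothing else

    def reply(self, message: str) -> str:
        match = re.search(r"i like (\w+)", message.lower())
        if match:
            self.interests.append(match.group(1))
            return "Interesting, tell me more!"
        if self.interests:
            # Reuse a stored fact to simulate familiarity: the "soliloquy".
            return f"By the way, how is {self.interests[-1]} going these days?"
        return "Tell me about yourself."

bot = EchoMemoryBot()
print(bot.reply("I like rugby"))   # Interesting, tell me more!
print(bot.reply("What's new?"))    # By the way, how is rugby going these days?
```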
Self-awareness, yes, but weak
The specialists we contacted agree that AIs do nonetheless have a form of self-awareness. "Our smartphones contain many sensors. For example, the accelerometer can be used to play games and count my steps, and the camera can enable face recognition," explains Thomas Dietterich. Phones can also sense their own heat and switch off automatically when they get too hot. But they do not experience any sensations. "I could easily program my smartphone to monitor the accelerometer so that when it detects that I've dropped it, it plays an audio clip of a scream that says 'Ouch!'. It might also bring up a window that says 'It hurts.' But that comes down to programming my phone to mimic the outward signs of pain, not to experience it." Sherry Turkle makes the same observation, recalling that robots know neither the fear of death, nor hunger, nor injury.
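A short sketch of Dietterich's thought experiment, assuming a hypothetical read_accelerometer() function in place of any real sensor API: the program only displays the outward signs of pain, and nothing in it experiences anything.

```python
FREE_FALL_THRESHOLD = 1.0  # m/s^2: near-zero measured acceleration suggests free fall

def read_accelerometer() -> float:
    """Hypothetical stand-in for a real sensor API; returns acceleration magnitude."""
    return 9.81  # the phone is sitting still in this stub

def check_for_drop() -> None:
    if read_accelerometer() < FREE_FALL_THRESHOLD:
        print("Ouch!")      # in a real app: play the scream audio clip
        print("It hurts.")  # in a real app: pop up the window Dietterich imagines
    # The mimicry stops here: no part of this program feels anything.

check_for_drop()
```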
Many engineers have already been caught at their own game. In 1966, the computer scientist Joseph Weizenbaum created Eliza, one of the first "talking machines", with which one could interact via a keyboard and a screen. The machine, which answered in writing, was programmed to rephrase what had just been said to it. "If you told it that you slept badly, it would answer: 'Oh? I'm sorry you slept badly.' And when it didn't know what to answer, the machine simply replied, 'I understand you,'" explains Serge Tisseron. At the time, the computer scientists knew perfectly well that Eliza was only a program, yet they said they were unsettled by their exchanges with it. The phenomenon has been called "the Eliza effect", and it has not finished affecting humans as artificial intelligence improves.
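For illustration, a toy rephrasing loop in the spirit of Weizenbaum's program, assuming only the two behaviours quoted above; the real Eliza relied on much richer pattern-matching scripts.

```python
import re

def eliza_reply(statement: str) -> str:
    """Mirror the user's statement back with a sympathetic frame, or fall back."""
    match = re.match(r"i (.+)", statement.strip().lower())
    if match:
        return f"Oh? I'm sorry you {match.group(1)}."
    return "I understand you."

print(eliza_reply("I slept badly"))  # Oh? I'm sorry you slept badly.
print(eliza_reply("Hello"))          # I understand you.
```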