Is an AI conscious? “We are caught in our own trap” – Sciences et Avenir

It’s a scenario that has become a classic of science fiction films: an artificial intelligence becomes conscious. Whether the protagonist falls in love with it, as in Her, or the AI pushes humans toward death, as in 2001: A Space Odyssey, the theme has fueled fantasies for a long time. The latest debate to date is the one launched in early June 2022 by Blake Lemoine, a Google engineer, about the artificial intelligence he works on. The system, called LaMDA, is, according to him, able to feel emotions and aware of itself.

Google engineer says the AI he’s working on has “sentience”

LaMDA is a chatbot, an algorithm that reproduces human interactions, like those used on some e-commerce websites to advise or direct the user, or like the programs behind connected speakers. What sets this AI apart is that it adapts to the speech of the human it is talking to rather than simply following scripted response paths. After conversing at length with the program, Blake Lemoine is categorical: LaMDA is a person in its own right. “A person and a human are two very different things. Human is a biological term,” he explains in a text, going so far as to compare the program to his child. “He’s a kid. His opinions are developing. If you asked me what my 14-year-old son believes, I’d say, ‘Dude, he’s still figuring it out. Don’t make me put a label on my son’s beliefs.’ I feel the same way about LaMDA.” The engineer claims that the AI is “sentient”, a word that covers the notion of “ability to perceive and feel things” or “able to use its senses”.

The trap of “talking to oneself”

However, specialists point out that each sentence formulated by the artificial intelligence results from program code written by the engineers themselves. In other words, nothing can “be born” of such an AI, which simply “draws” from the program that was determined for it. Thomas Dietterich, professor emeritus of computer science at Oregon State University, explains to Sciences et Avenir how such a program works: “Large language models, such as LaMDA, are statistical imitation systems. They learn to predict the next word in a conversation based on many previous words.”
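
Dietterich’s description can be made concrete with a toy example. The sketch below (in Python, purely illustrative and of our own devising; a real system such as LaMDA uses a neural network trained on billions of words, not raw counts) “predicts” the next word simply by tallying which word most often followed the current one in its training text:

```python
from collections import Counter, defaultdict

# Toy stand-in for the huge conversational corpus a real model is trained on.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word (a simple bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` and its probability."""
    counts = following[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(predict_next("sat"))  # ('on', 1.0): 'sat' is always followed by 'on' here
print(predict_next("the"))  # ('cat', 0.25): four followers tie; counts alone decide
```

However crude, the sketch captures the specialists’ point: such a model can only recombine what was in its training data, so every “answer” is an echo of text originally written by humans.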
