By FactsWow Team
Posted on: 21 Jun, 2022
For a long time, the Robocalypse, the moment when machines become sentient and begin to control humans, has been a favorite science fiction topic. Some scientists, including the late Stephen Hawking, have expressed concern about it. Until last week, however, when a Google engineer claimed the company had crossed the sentience barrier, the prospect of a sentient machine seemed incredibly remote, if it seemed possible at all.
Blake Lemoine used transcripts of conversations he had with LaMDA (Language Model for Dialogue Applications), a Google system for building chatbots on top of a large language model that ingests trillions of words from the internet, to support his claim. The transcripts can be frightening, as in the moment when Lemoine asks LaMDA what it (the AI prefers the pronouns it/its) is most afraid of.
Google and others dismiss Lemoine's allegation that LaMDA is sentient. 'Some in the larger AI community are discussing the long-term prospect of sentient or general AI,' said Google spokesperson Brian Gabriel, 'but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient.' 'These systems can riff on any fantasy topic; if you ask what it's like to be an ice cream dinosaur, they can generate text about melting and roaring and so on,' he told TechNewsWorld.
He explained, 'LaMDA tends to follow suggestions and leading inquiries, going along with the pattern created by the user.' 'In accordance with our AI Principles, our team — which includes ethicists and engineers — has investigated Blake's concerns and notified him that the evidence does not support his allegations.' 'We are not aware of anyone else making the broad assumptions, or anthropomorphizing LaMDA, the way Blake has,' he continued.
Alex Engler, a fellow at The Brookings Institution in Washington, D.C., vehemently disagreed that LaMDA is sentient and pushed for more transparency in the field. He told TechNewsWorld, 'Many of us have urged for transparency rules for AI systems.' 'As it becomes more difficult to distinguish between a human and an AI system, more people will mistake AI systems for people,' he said. 'This could result in real harms, such as misinterpreting important financial or health information.' 'Rather than letting consumers be confused, as they often are by commercial chatbots,' he continued, 'companies should properly reveal AI systems as they are.'
According to Julian Sanchez, a senior fellow at the Cato Institute, a public policy think tank in Washington, D.C., chatbots like ELIZA have been fooling users into thinking they're interacting with a sophisticated intelligence since the 1960s by using simple tricks like turning a user's statement into a question and echoing it back at them.
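The ELIZA-style trick Sanchez describes can be reproduced in a few lines of code. The following Python sketch is a simplified illustration, not ELIZA's actual rule set: it swaps the pronouns in the user's statement and echoes it back as a question, with no understanding involved.

```python
import re

# Minimal pronoun-swap table, loosely in the spirit of ELIZA's reflections.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

def echo_as_question(statement: str) -> str:
    """Turn a user's statement into a question by reflecting its pronouns."""
    words = re.findall(r"[A-Za-z']+", statement)
    reflected = [REFLECTIONS.get(w.lower(), w.lower()) for w in words]
    return "Why do you say " + " ".join(reflected) + "?"

print(echo_as_question("I am worried about my job"))
# -> Why do you say you are worried about your job?
```

The program never models what "worried" means; it only rearranges the user's own words, which is often enough to feel like a conversation.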
'LaMDA is undoubtedly more complex than relatives like ELIZA,' he told TechNewsWorld, 'but there's no reason to believe it's conscious.' With a large enough training set and some sophisticated language rules, LaMDA can generate an answer that sounds like something an actual human would give. Still, that doesn't mean the program understands what it's saying, any more than a chess program understands the game it is playing.
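To illustrate the point, the sketch below uses the small, publicly available GPT-2 model through the Hugging Face transformers library as a stand-in, since LaMDA itself is not publicly accessible. A language model simply continues a prompt with statistically likely words; the fluency of the reply says nothing about comprehension.

```python
# Sketch: a generic language model produces a human-sounding reply by
# predicting likely next tokens. GPT-2 is only a small public stand-in here,
# not LaMDA, which is not publicly available.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Q: What are you most afraid of?\nA:"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])  # fluent text, but no understanding behind it
```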
'Sentience refers to awareness or consciousness, and in theory, a program may perform intelligently without being sentient,' he explained. 'For example, a chat program might have very advanced algorithms for recognizing insulting or offensive statements and answer with the output "That hurt my feelings!"' he went on to say. 'However, that does not imply that it truly feels anything. The software has just discovered what phrases make people say "that hurt my feelings."'
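A toy version of that mechanism might look like the sketch below; the word list and canned replies are invented for illustration. The program pattern-matches phrases people treat as insults and emits a scripted emotional response, with nothing resembling a feeling behind it.

```python
# Hypothetical rule-based responder: it detects insult words and returns a
# canned emotional reply. Nothing here "feels" anything.
INSULT_WORDS = {"stupid", "useless", "dumb", "worthless"}  # illustrative list

def respond(message: str) -> str:
    words = {w.strip(".,!?").lower() for w in message.split()}
    if words & INSULT_WORDS:
        return "That hurt my feelings!"
    return "Tell me more."

print(respond("You are a useless chatbot"))  # -> That hurt my feelings!
print(respond("What time is it?"))           # -> Tell me more.
```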
When and if a machine becomes sentient, declaring it so will be difficult. 'The truth is, we don't really understand why human beings are conscious, therefore we don't have clear criteria for recognizing when a machine might be truly sentient, as opposed to being very good at copying the responses of sentient humans,' Sanchez explained. 'We don't fully understand how consciousness emerges from the brain, or how much it depends on things like the type of physical matter that human brains are made of,' he said. 'So it's a difficult challenge to say if a sophisticated silicon "brain" is conscious in the same way that a human brain is,' he continued.
He went on to say that intelligence is a separate issue. The Turing Test is a well-known test of machine intelligence: a human judge conducts 'conversations' with a number of different partners, some of them humans and some of them machines. If the judge cannot reliably tell which is which, the machine is said to be intelligent.
Determining sentience is crucial because it raises ethical questions about how a machine should be treated. 'Sentient beings feel pain, have consciousness, and experience emotions,' Castro remarked. 'We treat living things, especially sentient ones, differently than inanimate objects from a moral standpoint. They aren't merely a means to an end,' he went on to say. 'As a result, any sentient being should be treated with care. This is why there are rules against animal abuse.' 'Again, there is no indication that this has happened,' he stated emphatically. 'Furthermore, even the potential remains science fiction for the time being.'
'When a person is terrified, there are a lot of processes going on in their brain that have nothing to do with the language centers that form the statement "I am scared,"' he stated. 'Similarly, a computer would need to be doing something other than linguistic processing to truly convey "I am terrified," rather than just generating that string of letters. There's no reason to believe there's any such procedure going on in LaMDA's case,' he said. 'It's nothing more than a language processor.'