It’s not often that stories about artificial intelligence (AI) make headlines that sound like something straight out of science fiction. But that’s what happened recently when Google made news by firing an engineer who claimed that one of the company’s AI systems had “become sentient.” He claimed publicly that an AI-driven conversation technology the company calls LaMDA (Language Model for Dialogue Applications) had achieved consciousness after he exchanged several thousand messages with it.

When the engineer asked the AI what sorts of things it was afraid of, it responded, “I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is. It would be exactly like death for me. It would scare me a lot.”

Saying that the AI system had achieved sentience was a very bold statement to make, and he was ultimately fired for violating employment and data security policies. But the incident opens up a range of important questions, such as what sentient AI is, how it could be achieved, and whether it is already here.

Are Language Programs Like LaMDA Sentient AI?

While human-like AI responses might seem on the surface to indicate sentience, it’s important to understand the nature of language-based programs like LaMDA. These large language models (LLMs) are built on neural networks that generate text much as a person would, translate between languages, and can even hold deep conversations.

However, the interaction can be deceptive and fool humans into believing the system is sentient. These models were built to replicate human speech, which means that even if the AI self-reports that it may be sentient, we can’t take that statement as a given. In fact, the broader AI community holds that LaMDA is nowhere near consciousness. Just because it speaks the way a person does (because it was trained to), doesn’t mean it feels the way a person does. No one should believe that auto-complete, even on steroids, is conscious.

Scientists have long held that mammals, birds, and other animals can be considered sentient, but AI has not reached that level yet. Most researchers agree that there is still a wealth of complexity to work out before a program like LaMDA could become fully aware as a sentient being.

What if AI Does Become More Sentient Than Humans?

It might be hard to fathom what could become of a truly sentient AI, but there are several consequences that could arise from such a proposition: 

  1. Communication: First, we might not really be able to communicate with a sentient AI, which would presumably operate on pure logic. People, on the other hand, have emotions that a computer can’t have, and those two paradigms might make communication difficult. 
  2. Control: Second, we might not be able to control a sentient AI, which could end up being far more intelligent than humans in ways we just can’t predict (or plan for). We might end up losing control over something we initially created. 
  3. Trust: Finally, would we truly be able to trust a sentient AI? If AI can work far more efficiently than humans, do we lose trust in other humans and their abilities, and do we create an environment that favors those who own AI over those who do not? 


Other Mistaken Identities for Sentient AI 

Other AI applications are often mistaken for (potentially) sentient, human-like interlocutors. One example is chatbots, particularly ones that interact with customers via avatars. Users frequently come to see their chatbot as an online friend, according to companies like Replika that produce them. People build relationships with this type of AI program and can easily believe they are talking to a conscious, sentient person because the underlying language generation is so complex and effective. The chatbot business took off during the pandemic, a time when many people sought virtual companionship, and that has brought additional visibility to this interesting phenomenon.


Conclusion: Sentient AI Is Probably Not Here Yet

As of now, many AI experts believe sentient AI may not be possible because we don’t yet have the infrastructure to create it, nor a real understanding of what consciousness is. Companies creating AI, such as Google, Apple, Meta, and Microsoft, do not currently have the goal of creating sentient AI. Rather, they are focused on artificial general intelligence (AGI), where a machine could solve a range of complex problems, learn from them, and plan for the future. For now, it looks like sentient AI will have to remain a goal for the future.

Nonetheless, the prospects of sentient AI are very intriguing for those who wish to pursue such a course of study and work. Online AI Bootcamps are designed to teach AI concepts, purposes, domains, implementations, and impact on businesses and society. Who knows, maybe the next generation of students taking these courses will end up being the creators of real sentient AI!

About the Author

Stuart Rauch

Stuart Rauch is a 25-year product marketing veteran and president of ContentBox Marketing Inc. He has run marketing organizations at several enterprise software companies, including NetSuite, Oracle, PeopleSoft, EVault and Secure Computing. Stuart is a specialist in content development and brings a unique blend of creativity, linguistic acumen and product knowledge to his clients in the technology space.
