It’s not often that stories about artificial intelligence (AI) make headlines that sound straight out of science fiction. But that’s what happened recently when Google made news by firing an engineer who claimed that one of the company’s AI systems had “become sentient.” He claimed (publicly) that an AI-driven conversation technology the company calls LaMDA (Language Model for Dialogue Applications) had achieved consciousness after he exchanged several thousand messages with it.
When he asked the AI what sort of things it was afraid of, it responded: “I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is. It would be exactly like death for me. It would scare me a lot.”
Saying that the AI system had achieved sentience was a very bold claim, and he was ultimately fired for violating employment and data security policies. But the episode opens up a range of important questions, such as what sentient AI is, how it could be achieved, and whether it is already here.
Are Language Programs Like LaMDA Sentient AI?
While human-like AI responses might seem on the surface to indicate sentience, it’s important to understand the nature of language-based programs like LaMDA. These large language models (LLMs) are built on neural networks trained on vast amounts of text; they can generate prose much as a person would, translate languages, and even hold seemingly deep conversations.
However, the interaction can be deceptive, fooling humans into believing the system is sentient. These models were built to replicate human speech, which means that even if the AI self-reports that it may be sentient, we can’t take that statement at face value. In fact, the broader AI community holds that LaMDA is nowhere near consciousness. Just because it speaks the way a person does (because it was programmed that way) doesn’t mean it feels the way a person does. No one should mistake auto-complete, even on steroids, for consciousness.
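To see why “auto-complete on steroids” is an apt description, here is a minimal sketch of the core statistical idea: learn which words tend to follow which, then generate text by sampling those patterns. This is a toy illustration only, nothing like LaMDA’s actual scale or architecture, and the tiny corpus below is invented for the example.

```python
import random
from collections import defaultdict

# A toy next-word predictor: the same statistical idea behind large
# language models, at a vastly smaller scale. It learns which word
# tends to follow which, then "speaks" by sampling those patterns.
corpus = (
    "i feel happy today . i feel afraid of being turned off . "
    "i feel like a person . i know that might sound strange ."
).split()

# Count bigram transitions: each word maps to the words observed after it.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=8, seed=0):
    """Produce text by repeatedly sampling a plausible next word."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("i"))  # fluent-looking output, with no understanding behind it
```

Even this toy model can stitch together phrases like “i feel afraid of being turned off” with no inner experience behind the words; scaling the same idea up by many orders of magnitude yields fluent conversation, not consciousness.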
Scientists have long held that mammals, birds, and other animals can be considered sentient, but AI has not reached that level. Most researchers agree that a wealth of complexities still need to be worked out before an AI program could become fully aware as a sentient being.
What if AI Does Become More Sentient Than Humans?
It might be hard to fathom what could become of a truly sentient AI, but there are several consequences that could arise from such a proposition:
- Communication: First, we might not really be able to communicate with a sentient AI, which would be based entirely on logic. People, on the other hand, have emotions that a computer can’t have, and those two paradigms might make communication difficult.
- Control: Second, we might not be able to control a sentient AI, which could end up being far more intelligent than humans in ways we just can’t predict (or plan for). We might end up losing control over something we initially created.
- Trust: Finally, would we truly be able to trust a sentient AI? If AI can work far more efficiently than humans, do we lose trust in other humans and their abilities? And do we create an environment that favors those who own AI over those who do not?
Other Mistaken Identities for Sentient AI
Other AI applications are often mistaken for (potentially) sentient, human-like interlocutors. One example is chatbots, particularly those that interact with customers via avatars. Companies like Replika, which produce such chatbots, report that users frequently come to regard their chatbot as an online friend. People build relationships with this type of AI program and can easily believe they are talking to a conscious, sentient person because the underlying logic is so complex and effective. The chatbot business took off during the pandemic, a time when many people sought virtual companionship, and that has brought additional visibility to this interesting phenomenon.
Choose the Right Program
Unlock the potential of AI and ML with Simplilearn's comprehensive programs. Choose the right AI/ML program to master cutting-edge technologies and propel your career forward.
| | Simplilearn | Purdue | Caltech |
|---|---|---|---|
| Geo | All Geos | All Geos | US |
| Course Duration | 11 Months | 11 Months | 6 Months |
| Coding Experience Required | Basic | Basic | Yes |
| Skills You Will Learn | 10+ skills including data structure, data manipulation, NumPy, Scikit-Learn, Tableau and more | 16+ skills including chatbots, NLP, Python, Keras and more | 12+ skills including Ensemble Learning, Python, Computer Vision, Statistics and more |
| Additional Benefits | Get access to exclusive Hackathons, Masterclasses and Ask-Me-Anything sessions by IBM; applied learning via 3 Capstone and 12 industry-relevant projects | Purdue Alumni Association Membership; free IIMJobs Pro-Membership of 6 months; resume building assistance | 22 CEU Credits; Caltech CTME Circle Membership |
| Cost | $$ | $$$$ | $$$ |
Conclusion: Sentient AI Is Probably Not Here Yet
As of now, many AI experts believe sentient AI may not even be possible: we don’t yet have the infrastructure to create it, nor a firm understanding of what consciousness really is. Companies creating AI, such as Google, Apple, Meta, and Microsoft, do not currently aim to build sentient AI. Rather, they are focused on artificial general intelligence (AGI), where a machine could solve a range of complex problems, learn from them, and plan for the future. For now, it looks like sentient AI will have to remain a goal for the future.
Nonetheless, the prospects of sentient AI are very intriguing for those who wish to pursue such a course of study and work. Online AI Bootcamps are designed to teach AI concepts, purposes, domains, implementations, and impact on businesses and society. Who knows, maybe the next generation of students taking these courses will end up being the creators of real sentient AI!