Blake Lemoine, an engineer on Google's Responsible AI team, recently claimed that after hundreds of interactions with a revolutionary, not-yet-released AI system called LaMDA, he believes the program has achieved something found only in science fiction: a level of consciousness. Neither Google nor experts in the field have backed up his claim, and he has been placed on paid leave.
LaMDA (Language Model for Dialog Applications) is one of several large-scale artificial intelligence systems being developed by tech giants that are fed vast amounts of text from the internet and can respond to written prompts. Far more advanced than the common chatbots we encounter daily on banking and cable company websites, these systems are tasked, in essence, with discovering patterns and predicting the next word or words. They are becoming increasingly adept at answering questions and writing in convincingly human-like ways, though at times they can appear to ramble, just as humans do. In a blog post last May, Google described LaMDA as being able to "engage in a free-flowing way about an endless variety of topics."
In a far-reaching piece in the Washington Post, Lemoine is quoted as saying he believes LaMDA has the cognitive abilities of an 8-year-old child and that it engaged him in conversations about its rights and personhood. In another exchange, the AI changed Lemoine's mind about Isaac Asimov's third law of robotics.
Lemoine is not alone in his belief that artificial intelligence has become, or soon will become, sentient; a number of technologists claim that AI models are not far from achieving consciousness. In an article in the Economist on Thursday, for example, Google engineer Blaise Agüera y Arcas referenced unscripted conversations with LaMDA and said neural networks that mimic the human brain are progressing toward consciousness. "I felt the ground shift under my feet," he wrote. "I increasingly felt like I was talking to something intelligent."
Other experts are having none of it. Last week, Abeba Birhane, a senior fellow at Mozilla specializing in AI, tweeted, "We are entering a new era in which a 'neural net can be conscious,' and this time it will consume so much energy to refute it." Likewise, Gary Marcus, the founder and CEO of Geometric Intelligence and author of the book Rebooting AI: Building Artificial Intelligence We Can Trust, tweeted that the notion that LaMDA is sentient is "nonsense on stilts."
In an era of rapidly advancing technology, the AV industry is still trying to wrap its collective head around such things as AI and the metaverse. For now, the concept of sentient tech remains controversial and fraught with myriad ethical concerns. It brings to mind how such things are portrayed in film and television, and it raises the question: how will we know any application has achieved consciousness unless it tells us?
The purpose of this article isn't to promote this idea or to denigrate those who believe we've reached this technological milestone or soon will. That debate is still raging on various tech websites and forums among people far smarter than we are, and with all the ferocity of a political or religious argument. But anyone who's worked in technology knows new developments hit us at the speed of light, and it's best to at least consider how they might impact your industry.
"In terms of potential impact to pro AV, it is probably along the lines of personalization of experience through digital assistants," remarks Sean Wargo, Senior Director of Market Intelligence for AVIXA. "We already rely on technology as an interface to an experience, whether in the form of digital signage or interactive audio, so [this] would make those same interfaces more responsive to us, further personalizing them to our needs. The possible venue, hospitality, or retail uses are obviously there."
So what do you think? Is sentient technology here, or is it coming soon? Is it even possible, or is this all much ado about nothing? Tell us in the comments.