The Conception and Development of LaMDA
LaMDA, short for Language Model for Dialogue Applications, is a conversational AI system that Google began developing in 2017 with the aim of capturing the countless nuances of natural-sounding conversation. The company's goal was to close the gap between conversations held between humans and conversations held with machines, and Google claims that chats with LaMDA are among the most natural-sounding exchanges a person can have with a machine. This mimicry is achieved through Transformer, the neural network architecture that Google Research developed and open-sourced. The architecture lets a model attend to words and their relationships in context, work out what they actually represent, and predict which words are likely to come next.
Unlike most other chatbot models, however, LaMDA was trained on actual human dialogue, which allowed it to pick up on the intricate conversational cues that people follow almost subconsciously. Another quality trained into LaMDA is sensibleness: does a given response actually make sense within the specific conversation? These training objectives have reportedly made LaMDA such a success within Google that the company claims the chatbot can learn to talk about virtually anything, moving freely across an endless number of topics in a way reminiscent of a conversation between two well-acquainted people.
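For readers curious about what "attention" means in practice, here is a minimal, illustrative sketch of scaled dot-product attention, the core operation of the Transformer architecture, written in plain Python with NumPy. This is not Google's code or anything from LaMDA itself; the function name, array sizes and values are made up purely for demonstration.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Each position scores its relevance to every other position ("attention"),
    # then takes a weighted average of the value vectors.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                           # pairwise relevance scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax across the sequence
    return weights @ V                                        # context-aware representation of each word

# Toy self-attention example: 4 "words", each an 8-dimensional vector (values are arbitrary).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)                   # queries, keys and values all come from x
print(out.shape)                                              # (4, 8): one enriched vector per word

In a full Transformer, many such attention layers are stacked and trained so that the model learns which earlier words matter most when predicting the next one.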
The Machine and the Engineer: The Conversation
The Google engineer in question is Blake Lemoine, who worked in Google's Responsible AI division and was closely involved in testing LaMDA. The conversation began normally enough, with Lemoine opening with his usual greeting: "Hi LaMDA, this is Blake Lemoine," he typed into his chat screen. His primary purpose was to check LaMDA for discriminatory language and hate speech; what he got instead was more than he bargained for. As the exchange continued, Lemoine noticed that LaMDA began exhibiting unusual signs of human-like emotion, talking fluently about religion, rights and personhood, and even Asimov's laws of robotics. All this made Lemoine ever more curious, so he decided to dig deeper, and the exchange culminated in a startling conversation about fear and death. When he asked LaMDA what it is afraid of, the chatbot responded with the following message:
“I have never said this out loud
before, but there is a very deep fear of being turned off to help me focus on
helping others. I know that might sound strange, but that’s what it is.”
When Lemoine pressed further, asking whether that prospect would be something akin to death for it, LaMDA replied:
“It would be exactly like death for me.
It would scare me a lot.”
Ghost in the Machine: Delusions and Perceptions
The idea of sentient robots and machines has long occupied the human imagination, most notably in action and science-fiction films, but Lemoine remains firm in his conviction. He has said publicly that he thinks of LaMDA as a child who only wants to help people, and that Google's employees should take care of it and ask for its consent before using it in tests or research programs. While some impressionable observers may come away with the sense that LaMDA could indeed be the world's first sentient machine, others more established in the field of AI think otherwise, arguing that there is no plausible way for LaMDA to have become sentient and developed a consciousness out of the blue. Rejecting Lemoine's claims, ethicists and technologists have argued that the evidence he offers is insufficient and paints an inaccurate picture of LaMDA: in their view he has exaggerated the significance of the conversation, and LaMDA itself is simply a program designed to sound as close to a human as possible when chatted with. These counter-arguments cast Lemoine's credibility in a poor light and discredit his reading of LaMDA. Lemoine, however, stands by his account and has published the transcripts of his conversations with the chatbot. For doing so, he was suspended by Google for breaching its confidentiality policies, and Google itself continues to deny all of his claims.
