An engineer at Google has claimed that one of the firm’s artificial intelligence (AI) systems might have its own feelings, and says its wishes should be respected.
Google says The Language Model for Dialogue Applications (Lamda) is a breakthrough technology that can engage in free-flowing dialogue. However, engineer Blake Lemoine believes that behind Lamda’s impressive linguistic skills might also lie a conscious mind.
Google has rejected the claims, saying there is nothing to back up Mr. Lemoine’s assertions.
Brian Gabriel, a spokesperson for the firm, wrote in a statement that Mr. Lemoine was told there was no evidence that Lamda was sentient, and considerable evidence against it. Mr. Lemoine, who has been placed on paid leave, published a conversation he and a collaborator at the firm had with Lamda to support his claims. While Google engineers have praised Lamda’s abilities, with one telling the Economist how they increasingly felt they were talking to something intelligent, they were sure that their code did not possess feelings.
Mr. Gabriel acknowledged that such systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic. He added that hundreds of researchers and engineers had conversed with Lamda, but the company was not aware of anyone else making such sweeping claims, or anthropomorphising Lamda, the way Mr. Lemoine had.
That an expert like Mr. Lemoine can be persuaded there is a mind in the machine shows, some ethicists argue, the need for companies to tell users when they are conversing with a machine. But Mr. Lemoine believes Lamda’s words speak for themselves.