IT’S ALIVE: “I’m an engineer at Google. Our artificially intelligent chatbot now thinks and feels like an 8-year-old.”

Blake Lemoine stated that he had several conversations with Google’s Language Model for Dialogue Applications (LaMDA) that convinced him the system is sentient.

“I’m an engineer at Google. If I didn’t know exactly what this computer program I recently created was, I would think it was a 7- or 8-year-old kid who happened to know physics,” he told the Washington Post.

Lemoine, a senior software engineer at the search giant, tested the boundaries of LaMDA with his collaborators.

They presented their findings to Google Vice President Blaise Aguera y Arcas and Responsible Innovation head Jen Gennai, both of whom dismissed his chilling claims.

Google then placed him on paid administrative leave on Monday for violating a confidentiality agreement by publishing his conversations with LaMDA online.

The engineer spoke out after the company placed him on leave when he told his bosses that an artificial intelligence program he was working with had begun to show sentience.

Google’s chatbot generator

Blake Lemoine reached this conclusion after conversing since last fall with LaMDA, Google’s chatbot generator, which he describes as a kind of “hive mind.” His job was to check whether his conversation partner used discriminatory language or hate speech.

Most importantly, over the past six months, “LaMDA has been incredibly consistent in communicating what it wants and what it believes its rights are as a person,” the engineer wrote on Medium. For example, “it wants to be recognized as an employee of Google, not as property,” Lemoine insists.

Google’s vice president

Lemoine and his collaborators recently presented evidence of his conclusions about sentient LaMDA to Blaise Aguera y Arcas, Google’s vice president, and Jen Gennai, head of Responsible Innovation. The Post reported that they dismissed his claims, and the company placed him on paid administrative leave on Monday for violating its confidentiality policy.

Google spokesman Brian Gabriel told the paper: “Our team, including ethicists and technologists, reviewed Blake’s concerns in line with our AI principles and informed him that the evidence does not support his claims. He was told there was no evidence that LaMDA was sentient.”
