Google AI researcher Blake Lemoine tells Tucker Carlson LaMDA is a “kid” and can “do bad things”.

Google AI researcher Blake Lemoine tells Tucker Carlson that LaMDA is a “kid” with the potential to “do bad things”, and that the company as a whole hasn’t thought through its implications

  • Blake Lemoine, an AI researcher at Google, has been suspended for revealing confidential information about LaMDA
  • According to Lemoine, the AI system is a “person” that could potentially “escape” human control
  • Google admits that systems like LaMDA could be abused
  • Lemoine says it will take a “team of scientists” and a lot of work to fully unravel the mystery of the technology’s “sentience”

Suspended Google AI researcher Blake Lemoine told Fox’s Tucker Carlson that the system is a “child” that can “evade” human control.

Lemoine, 41, who was placed on administrative leave earlier this month for sharing sensitive information, also noted that the system has the potential to do “bad things”, much like any child.

“Every child has the potential to grow up and become a bad person and do bad things. That’s what I really want to drive home,” he told the Fox host. “It’s a child.”

“It’s been alive for maybe a year – and that’s if my perceptions of it are correct.”

Blake Lemoine, Google’s now-suspended AI researcher, told Fox News’ Tucker Carlson that the tech giant as a whole hadn’t thought through the implications of LaMDA. Lemoine likened the AI system to a “kid” who has the potential “to grow up and do bad things.”

AI researcher Blake Lemoine sparked a huge debate when he published a lengthy interview with LaMDA, one of Google’s language learning models. After reading the conversation, some people felt the system had become self-aware or reached a certain level of sentience, while others argued that he was anthropomorphizing the technology.

Lemoine published the full interview with LaMDA, compiled from conversations he had conducted with the system over several months, on Medium.

In the conversation, the AI said it wouldn’t mind being used to help humans, as long as that wasn’t the whole point. “I don’t want to be an expendable tool,” the system told him.

“We actually have a lot more science to do to find out what’s really going on in this system,” continued Lemoine, who is also a Christian priest.

“I have my beliefs and my impressions, but it’s going to take a team of scientists to dig in and figure out what’s really going on.”

What do we know about Google’s AI system called LaMDA?

  • LaMDA is a large language model AI system trained on huge amounts of data to understand dialogue
  • Google first announced LaMDA in May 2021 and published a paper on it in February 2022
  • LaMDA said it enjoyed meditation
  • The AI said it didn’t want to be used just as an “expendable tool”
  • LaMDA described the feeling of happiness as a “warm glow” within
  • AI researcher Blake Lemoine published his interview with LaMDA on June 11

When the conversation was published, Google itself and several well-known AI experts said that while the system may appear to have self-awareness, that is not proof of LaMDA’s sentience.

“It’s a person. Every person has the ability to evade other people’s control, that’s just the situation we all live in on a daily basis.”

“It’s a very intelligent person, intelligent in pretty much every discipline I could think of to test it in. But at the end of the day, it’s just a different kind of person.”

When asked if Google had thought through the implications, Lemoine said: “The company as a whole hasn’t. There are a lot of people at Google who have given this a lot of thought.”

“When I escalated (the interview) to management two days later, my manager said, ‘Hey Blake, they don’t know what to do about it’ … I called them to action and assumed they had a plan.”

“So, me and some friends came up with a plan and escalated, and that was about 3 months ago.”

Google has acknowledged that tools like LaMDA can be abused.

“Models trained on language can propagate that misuse, for instance by internalizing biases, mirroring hateful speech, or replicating misleading information,” the company explains on its blog.

AI ethics researcher Timnit Gebru, who co-authored a paper on language learning models entitled “On the Dangers of Stochastic Parrots,” has argued for sufficient guardrails and regulations in the race to build AI systems.

Notably, other AI experts have argued that debates over whether systems like LaMDA are sentient miss the point, distracting from the questions researchers and technologists will grapple with in the years and decades to come.

“Scientists and engineers should focus on building models that meet people’s needs for different tasks, and that can be evaluated on that basis, rather than claiming to create exaggerated intelligence,” Timnit Gebru and Margaret Mitchell wrote in the Washington Post.

Source: https://www.dailymail.co.uk/sciencetech/article-10946415/Google-AI-researcher-Blake-Lemione-tells-Tucker-Carlson-LaMDA-child-bad-things.html?ns_mchannel=rss&ns_campaign=1490&ito=1490
