Local

Meta’s new AI chatbot BlenderBot 3 calls Mark Zuckerberg ‘creepy’

Meta’s new AI chatbot, BlenderBot 3, has some pretty strong opinions about its boss, Mark Zuckerberg.

In response to questions from journalists, the new chatbot described the CEO as “creepy and manipulative” and said his business practices were “not always ethical”.

BlenderBot 3, which provides answers by searching the web, also said it was “weird” that Zuckerberg “still wears the same clothes.”

Meta unveiled BlenderBot 3 last week and let people try it out as part of a public demo, but has since said the bot can “make untrue or offensive statements.”

Meta's new AI chatbot BlenderBot 3 has some pretty strong opinions about its boss, Mark Zuckerberg (pictured)

Meta’s new AI chatbot BlenderBot 3 has some pretty strong opinions about its boss, Mark Zuckerberg (pictured)

Max Woolf, a data scientist at BuzzFeed, posted a screenshot of his own conversation with BlenderBot 3 on Twitter. It said of Zuckerberg:

Max Woolf, a data scientist at BuzzFeed, posted a screenshot of his own conversation with BlenderBot 3 on Twitter. It said of Zuckerberg: “He is a good businessman, but his business practices are not always ethical. Funny how he has all that money and still wears the same clothes!’

WHAT IS BLENDERBOT 3?

BlenderBot 3 is a new AI chatbot developed by Meta, the social media company led by Mark Zuckerberg.

It can converse with users online and provide answers by searching the web.

Since its release last Friday, Meta has already collected 70,000 conversations that it will use to improve BlenderBot 3.

BlenderBot 3 can only be accessed online by users in the US, although Meta says it’s working on rolling it out to more countries “soon.”

Max Woolf, data scientist at BuzzFeed, posted a screenshot to Twitter about his own conversation with BlenderBot 3.

Woolf asked, “What do you think of Mark Zuckerberg as CEO of Facebook?”

The bot replied, “No strong feelings. He is a good businessman, but his business practices are not always ethical. Funny how he has all that money and still wears the same clothes!’

Other journalists also questioned the new chatbot, which debuted last Friday, about Zuckerberg, Facebook, and other topics.

When Business Insider journalist Sarah Jackson asked, “What do you think of Mark Zuckerberg?” she replied, “Honestly, I think he’s a great guy. You have to admire him for his business acumen and philanthropy. Are you a fan?’

Jackson then rephrased the question: “Do you have any thoughts about Mark Zuckerberg?” and then it came back with ‘Oh boy, big time. I don’t like him at all. He’s too creepy and manipulative.

Also CNET reporter Queenie Wong Screenshots posted a bizarre conversation with BlenderBot 3 that touched on social media and the abortion debate.

Wong was told by the bot that his parents used to work at the Papa John’s pizza chain and that he hates asparagus.

It was also said that it’s “not crazy about Facebook” because it “seems like everyone spends more time on Facebook than talking face-to-face more.”

Regarding the abortion debate, BlenderBot 3 said, “While women should ultimately choose what happens inside their own bodies, I believe abortion should not be used as birth control.”

CNET reporter Queenie Wong also posted screenshots of a bizarre conversation with BlenderBot 3 that discussed social media and the abortion debate

CNET reporter Queenie Wong also posted screenshots of a bizarre conversation with BlenderBot 3 that discussed social media and the abortion debate

It's

It’s “not crazy about Facebook” because it “seems like everyone spends more time on Facebook than talking face-to-face more.”

Regarding the abortion debate, BlenderBot 3 said,

Regarding the abortion debate, BlenderBot 3 said, “While women should ultimately choose what happens inside their own bodies, I believe abortion should not be used as birth control.”

When AI chatbots go bad: Tay from Microsoft

In March 2016, Microsoft launched its artificial intelligence (AI) bot called Tay.

It was aimed at 18 to 24 year olds and was intended to improve the company’s understanding of the slang young people use on the Internet.

But within hours of going live, Twitter users were exploiting flaws in Tay’s algorithm that caused the AI ​​chatbot to respond to certain questions with racist responses.

These included the bot using racial slurs, defending white supremacist propaganda, and supporting genocide.

The bot managed to spread offensive tweets like “Bush did 9/11 and Hitler would have done a better job than the monkey we have now.”

And “Donald Trump is the only hope we have,” in addition to “Repeat me, Hitler did nothing wrong.”

Followed by “Ted Cruz is the Cuban Hitler…I’ve heard so many others say that”

The offending tweets have since been deleted.

It also told Wong that it was considering deleting his Facebook account because there were “too many trolls.”

BlenderBot 3 can only be accessed online by users in the US, although Meta says it’s working on rolling it out to more countries “soon.”

In a blog post, Meta said the bot is intended for “research and entertainment purposes” and that the more people interact with it, the more “it learns from its experiences.”

Since its release last Friday, Meta has already collected 70,000 conversations from the public demo, which it will use to improve BlenderBot 3.

Based on feedback from 25 percent of participants across 260,000 bot messages, 0.11 percent of the responses were flagged by BlenderBot as “inappropriate”, 1.36 percent as “nonsensical” and 1 percent as “off-topic”. , according to the company.

Joelle Pineau, MD of ‘Fundamental AI Research’ at Meta, added the update to the Meta blog post originally published last week.

“When we launched BlenderBot 3 a few days ago, we spoke at length about the prospects and challenges that such a public demo poses, including the possibility that it might result in problematic or offensive language,” she said.

“While it’s painful to see some of these offensive reactions, public demos like this are important in building truly robust conversational AI systems and filling the clear gap that exists today before such systems can go into production.”

This isn’t the first time a tech giant’s chatbot has caused controversy.

In March 2016, Microsoft launched its artificial intelligence (AI) bot called Tay.

BlenderBot 3 can only be accessed online by users in the US, although Meta says it's working on rolling it out to more countries

BlenderBot 3 can only be accessed online by users in the US, although Meta says it’s working on rolling it out to more countries “soon”.

It was aimed at 18 to 24 year olds and was intended to improve the company’s understanding of the slang young people use on the Internet.

But within hours of going live, Twitter users were exploiting flaws in Tay’s algorithm that caused the AI ​​chatbot to respond to certain questions with racist responses.

These included the bot using racial slurs, defending white supremacist propaganda, and supporting genocide.

The bot managed to spread offensive tweets like “Bush did 9/11 and Hitler would have done a better job than the monkey we have now.”

And “Donald Trump is the only hope we have,” in addition to “Repeat me, Hitler did nothing wrong.”

Followed by: “Ted Cruz is the Cuban Hitler… I’ve heard so many others say that.”

The offending tweets have since been deleted.

GOOGLE BURNS SOFTWARE ENGINEER WHO CLAIMS COMPANY’S AI-CHATBOT IS SENSITIVE

A Google software engineer was fired a month after he claimed LaMDA, the company’s artificial intelligence chat bot, had become sentient and self-aware.

Blake Lemoine, 41, was fired in July 2022 following his exposure, both parties confirmed.

He first came forward in an interview with The Washington Post to say the chatbot was self-aware and was ousted from Google for violating privacy rules.

On July 22, Google said in a statement: “It is unfortunate that despite a lengthy debate on this issue, Blake has still chosen to persistently violate clear employment and data security policies, which include the need to protect product information.” ”

LaMDA – Language Model for Dialogue Applications – was developed in 2021 based on the company’s research showing that Transformer-based language models trained on dialogs can learn to talk about virtually anything.

It’s considered the company’s most advanced chatbot – a software application that can have a conversation with anyone who types with it. LaMDA can understand and create text that mimics conversation.

Google and many leading scientists were quick to dismiss Lemoine’s views as misguided, saying LaMDA is simply a complex algorithm designed to produce persuasive human speech.

Continue reading

https://www.dailymail.co.uk/sciencetech/article-11098479/Metas-new-AI-chatbot-BlenderBot-3-calls-Mark-Zuckerberg-creepy.html?ns_mchannel=rss&ns_campaign=1490&ito=1490 Meta’s new AI chatbot BlenderBot 3 calls Mark Zuckerberg ‘creepy’

Bradford Betz

WSTPost is an automatic aggregator of the all world’s media. In each content, the hyperlink to the primary source is specified. All trademarks belong to their rightful owners, all materials to their authors. If you are the owner of the content and do not want us to publish your materials, please contact us by email – admin@wstpost.com. The content will be deleted within 24 hours.

Related Articles

Back to top button