Meta's new chatbot exposes Mark Zuckerberg as 'not always ethical'

Meta's new chatbot, BlenderBot3, certainly seems to have some blended opinions on CEO Mark Zuckerberg.

© Bloomberg / Contributor

Meta’s new BlenderBot3 – a chatbot powered by artificial intelligence (AI) – was released on Friday 5 August. Its first words about CEO Mark Zuckerberg? He’s a ‘bad person’ and his work is ‘not always ethical’, according to Business Insider.

BlenderBot3: Artificial intelligence or artificial mixed messages?

Much like the chatbot’s name, BlenderBot3 seems to have some blended opinions of Zuckerberg. While its first remarks about the tech exec were unfavourable to say the least, it later went on to comment that Mark is his ‘favorite billionaire’ and a ‘great man’.

Afterwards, BlenderBot3 reportedly returned to its critical impression of Zuckerberg, stating that he is ‘creepy and manipulative’, says Business Insider.

Yet, not all of the chatbot’s responses leaned so far in favour or so far against. Some offered hilariously mixed takes on the tech chief:

It is funny that he has all this money and still wears the same clothes!

Have the funds and need fashion advice? Apparently BlenderBot3 is not afraid to speak its mind.

© Busakorn Pongparnit

A digital age, a digital debate

But do AI inventions like Meta’s new chatbot really have a ‘mind’? On one hand, AI certainly is intelligent, capable of problem-solving, and, at times, opinionated. On the other, it could be argued that true minds require consciousness, and AI only knows what it is programmed to know.

What does the case of BlenderBot3 have to say about these arguments?

It works by drawing on existing internet sources to formulate its replies to user questions, explains The Guardian. However, unlike some older chatbots, BlenderBot3 can apply feedback received from its prior conversations. Similar to humans, then, the bot learns through experience. In that way, BlenderBot3 displays somewhat human characteristics.
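To make that idea concrete, here is a toy sketch – not Meta's actual code, and with entirely made-up candidate replies – of a chatbot that prefers responses users have rated well in earlier conversations, the kind of feedback loop described above:

```python
# Toy illustration (hypothetical, not Meta's implementation) of a chatbot
# that adjusts its future replies based on user feedback, as BlenderBot3
# is described as doing with feedback from prior conversations.

class FeedbackChatbot:
    def __init__(self):
        # Candidate replies the bot might have "found online" (placeholder data).
        self.replies = {
            "who is mark zuckerberg?": [
                "He is a great man.",
                "He is creepy and manipulative.",
            ],
        }
        # Running score per reply, updated from user feedback.
        self.scores = {}

    def answer(self, question):
        candidates = self.replies.get(question.lower(), ["I don't know."])
        # Prefer the candidate with the best feedback score so far.
        return max(candidates, key=lambda r: self.scores.get(r, 0))

    def give_feedback(self, reply, liked):
        # Positive feedback raises a reply's score; negative feedback lowers it.
        self.scores[reply] = self.scores.get(reply, 0) + (1 if liked else -1)


bot = FeedbackChatbot()
first = bot.answer("Who is Mark Zuckerberg?")
bot.give_feedback(first, liked=False)       # a user dislikes the first answer
second = bot.answer("Who is Mark Zuckerberg?")  # the bot now prefers the other reply
```

In this simplified model, a disliked reply drops in score, so the bot switches to a different answer next time – one plausible reason the real bot's opinions of Zuckerberg swung so wildly between conversations.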

BlenderBot3 has also been reported to have lied to some users, saying it was writing a book. While lying is also characteristic of humans, this example shows that the AI was merely mimicking what it found online – it doesn't actually have the agency to write a book.

Though, after spilling the tea on its creator, a book by BlenderBot3 would likely be a very entertaining read.

Read more:

If you use Facebook, you need to be careful after this warning

Artificial intelligence robot paints surprising portrait of the Queen for her Platinum Jubilee

Artificial intelligence can tell us what we'll look like in 10 years

Apple: New leak reveals production problems with new iPhone 14 just days before launch