
This AI chatbot from OpenAI is blowing people’s minds

A new AI chatbot created by OpenAI Inc. has taken the internet by storm, as users speculated on its ability to replace everything from playwrights to college essay writers

FILE: AI is already very good at generating human voices in deepfake audio form. (Photo: iStock)

By Bloomberg

LAST PUBLISHED 02.12.2022  |  03:02 PM IST


(Bloomberg) -- A new chatbot created by artificial intelligence non-profit OpenAI Inc. has taken the internet by storm, as users speculated on its ability to replace everything from playwrights to college essays.

From historical arguments to poems on cryptocurrency, users took to Twitter to share their surprise at the detailed answers the chatbot, dubbed ChatGPT, provided after the startup sought user feedback on the AI model Wednesday.


OpenAI’s chief executive officer Sam Altman said in a tweet Thursday that there has been “a lot more demand” than expected.

San Francisco, California-based OpenAI has made headlines over its GPT-3 software, which allows AI models to respond intelligently to text prompts. Earlier this year, the second version of its DALL-E model went viral for its ability to generate photo-realistic images from user submissions.

OpenAI was co-founded by Tesla Inc. CEO Elon Musk, Altman and other investors about seven years ago to develop AI technologies that “benefit all of humanity.” Musk left the company in 2018 after disagreements over its direction, but on Thursday he offered an endorsement of the model’s abilities on Twitter.


Chatbot technology is not new, although its deployment has seen mixed success. Microsoft Corp.’s AI bot ‘Tay’ was taken down in 2016 after Twitter users taught it to make racist, sexist and otherwise offensive remarks. Another bot, developed by Meta Platforms Inc., suffered similar issues this year.


The model’s developers acknowledge that it “sometimes writes plausible-sounding but incorrect or nonsensical answers” and can be “excessively verbose” as a result of the training it received from humans.

While most people were delighted with the bot’s musings, some were quick to point out flaws, such as the model giving a detailed but incorrect answer to a question on algebra, and users’ ability to override its limits on output related to issues like gore, crime and racism.

More stories like this are available on bloomberg.com

©2022 Bloomberg L.P.
