Explained: ‘Godfather of AI’ quits Google, warns about misinformation

Artificial intelligence pioneer Geoffrey Hinton has warned that the advancements in AI pose significant risks to humans and society

Geoffrey Hinton has quit Google and is warning about AI advancements. (REUTERS/Mark Blinch/File Photo)


Geoffrey Hinton, who built foundational technology for artificial intelligence (AI) systems, has quit his job at Google and warned about the dangers of the technology. Hinton said advancements in the field pose significant risks to humans and society.

The computer scientist, often touted as the "godfather of artificial intelligence", told The New York Times that competition between tech giants was pushing companies to release new AI technologies at dangerous speeds, putting jobs at risk and spreading misinformation. "It is hard to see how you can prevent the bad actors from using it for bad things," he told the NYT.


Since OpenAI launched its AI chatbot ChatGPT in 2022, there has been an unspoken competition between the likes of Google and Microsoft to launch the most appealing AI tech and pull ahead in the race. Although both companies have launched chatbots that have delivered mixed results, their continued investment in AI shows they aren't slowing down anytime soon.

This level of acceleration in the AI space has many questioning its impact on jobs. Hinton reiterated this in his interview, saying that while AI "takes away the drudge work", it "might take away more than that". While AI has been promoted as a way to support human work, its rapid expansion could put jobs at risk, according to an AFP report.

Hinton also voiced another concern that has been on everyone's mind: what if we can't tell what's AI and what's not? He said AI could fuel the spread of misinformation, and that the average person will "not be able to know what is true anymore".

In response to Hinton, Jeff Dean, lead scientist for Google AI, said in a statement that the company remains committed to a responsible approach to AI, an AFP report said. "We're continually learning to understand emerging risks while also innovating boldly."

This is not the first time that Google has parted ways with someone over criticism of its AI approach. In December 2020, Timnit Gebru, co-lead of Google's ethical AI team, announced on Twitter that the company had forced her out. Gebru is known for co-authoring an influential paper that showed facial recognition to be less accurate at identifying women and people of colour, meaning its use would discriminate against them, according to a report in the MIT Technology Review. She said Google fired her after she refused to retract a research paper on the risks of large language models, which argued that in rushing to build these powerful models, tech giants, including Google, were not thinking about the biases built into them.

In July last year, Blake Lemoine, a senior software engineer in Google's Responsible AI organization, was fired after he said that a conversation technology called LaMDA had reached a level of consciousness, according to a CNN report.

