
Explained: How Indians are falling for AI voice scams

A recent McAfee survey shows that almost 47% of Indians have experienced – or know someone who has encountered – some kind of artificial intelligence voice scam

Scamsters are using AI to clone voices and ask people to send money to a family member or friend in need. (Pexels)

Earlier this week, after quitting Google, artificial intelligence (AI) pioneer Geoffrey Hinton warned of a rise in misinformation, saying it is becoming increasingly difficult to tell real information from AI-generated content.

The rapid advancement of AI has charmed the world with ever-increasing possibilities but kept the dangers tucked away in the footnotes.

A recent survey of about 7,000 people across seven countries, including India, by cybersecurity firm McAfee showed that more than half of Indians cannot tell a fake voice from a real one. This makes AI voice cloning an easy tool for scammers, who send fake voice messages to friends and family members, pretending to need help, and convince people to send them money.

The findings, published in the report The Artificial Imposters, revealed that nearly half (47%) of Indian adults have experienced, or know someone who has experienced, an AI voice scam – almost double the global average (25%). Among Indian victims, 83% said they lost money, with 48% losing over ₹50,000.


“Artificial Intelligence brings incredible opportunities, but with any technology, there is always the potential for it to be used maliciously in the wrong hands. This is what we’re seeing today with the access and ease of use of AI tools helping cybercriminals to scale their efforts in increasingly convincing ways,” Steve Grobman, CTO, McAfee, said in a statement. 

One of the ways we remember people is through their voice; it is unique, and familiarity with it is a way of trusting the person on the other end. However, voice is also one of the most widely circulated forms of data today: more than 80% of Indian adults share voice data online or send voice notes at least once a week. Even when platforms promise privacy, cybercriminals exploit loopholes to clone people's voices, turning shared voice data into a dangerous tool.

Moreover, manipulated images and videos, or deepfakes, are disturbingly popular. Although deepfakes are often used for humour, they have also been used to spread misinformation. For instance, in March a video purportedly showing an attack on migrant workers in Bihar was widely circulated, prompting strong reactions on social media; the video was eventually found to be a deepfake. Using AI to manipulate media can blur the line between real and fake, which can have severe repercussions.

The McAfee survey revealed that when scammers use AI technology to clone voices and send a fake voicemail or voice note, most Indians were not confident that they could identify the cloned version. More than half of the respondents said they would reply to such a message, especially if they thought the request for money came from their parent, partner, or child. 

Messages that were most likely to get a response were those claiming that the sender had been robbed (70%), was involved in a car incident (69%), lost their phone or wallet (65%) or needed help while travelling abroad (62%).

While it might sound as though cloning a voice requires extensive technical expertise, McAfee researchers have shown that the accessibility, ease of use and efficacy of AI voice-cloning tools – many of which are freely available on the internet – make it child's play.

Both free and paid tools require only a basic level of expertise and can produce a close match of a voice from just three seconds of recorded audio. With further training on a small number of audio files, these models can achieve a 95% match. A close match of someone's voice increases the chances of a successful dupe by exploiting emotional vulnerabilities.

According to the survey, many users fear that AI has made it easier than ever to commit a cybercrime, but the increasing scams have also made people more cautious. Almost one-third of surveyed Indians are now less trusting of social media than ever before and almost half of them are concerned about the rise of disinformation.

What can you do?

There are some simple ways to protect yourself from AI voice scams. For instance, set a verbal codeword with family and friends to be used when genuinely asking for help. If you get a call from an unknown number, pause and think: do you really recognise the voice?

Try to verify the information before responding. Identity-theft protection services can also help ensure your personally identifiable information is not accessible, and notify you if it turns up on the dark web.

Finally, be thoughtful about whom you share voice notes and personal information with. Make sure you trust them.
