Artificial intelligence (AI) is a double-edged sword. On one hand, it is powering useful tools and apps on our smartphones and computers, making many daily tasks easier; on the other, it is being leveraged by scammers and cybercriminals to con individual users and organisations.
Earlier this week, Meta launched a dedicated fact-checking helpline on WhatsApp with the Misinformation Combat Alliance (MCA), a Delhi-based body, to combat AI-generated misinformation in India, where the general election is due later this year. A Meta statement said the initiative would allow MCA and its associated network of independent fact-checkers and research organisations to address viral misinformation. Users will be able to flag suspected deepfakes by sending them to a WhatsApp chatbot, which will offer support in English, Hindi, Tamil, and Telugu.
Reports from both IBM and McAfee have highlighted that 2024 is going to be a “busy year” for cybercriminals with elections in India, the US and Europe.
Earlier this month, Microsoft and OpenAI published research on emerging threats in the age of AI. Over the last year, “the speed, scale and sophistication of attacks has increased alongside the rapid development and adoption of AI. Defenders are only beginning to recognize and apply the power of generative AI to shift the cybersecurity balance in their favor and keep ahead of adversaries,” it noted.
Voice-cloning, deepfakes (artificial media), new types of malware and malicious websites are just the tip of the iceberg. “AI is constantly learning, which means it can analyse vast amounts of data, far more than human cybersecurity professionals, making it the perfect tool for cybercriminals,” says Pratim Mukherjee, senior director of engineering, McAfee.
Mukherjee explains how scammers now use tools to create fake links sent via email or SMS. A single click can give them total access to your personal data. “Cybercriminals are also crafting fake photos to assume the physical likeness of another with the help of generative AI,” Mukherjee says over email, giving the example of Taylor Swift’s likeness being used in explicit photos across social media. In India, cricketers Virat Kohli and Sachin Tendulkar have been victims of similar deepfakes.
McAfee’s 2024 Cybersecurity Predictions report says that with the help of AI, cybercriminals can manipulate social media platforms and shape public opinion in ways that were not possible before.
IBM experts say 2024 will be the year of “deception”. A recent example of this was seen in the US, where AI robocalls mimicked President Joe Biden’s voice to discourage people from voting in the New Hampshire primary election in January.
In an IBM report on cybersecurity trends and predictions for 2024, Charles Henderson, global head of IBM’s X-Force team of cybersecurity experts, writes: “It’s a perfect storm of events that’s going to see disinformation campaigns on a whole new level.” This was in reference to the upcoming elections and the Paris Olympics.
In India, mobile malware campaigns, in which malicious apps impersonating banks and government services are distributed via social media platforms, are prevalent, according to Check Point Software’s 2024 Cyber Security Report.
The technology industry is slowly making moves as well. Earlier this month at the Munich Security Conference, a host of technology companies, including Adobe, Amazon, Google, IBM, Meta and Elon Musk’s X, signed a pact to voluntarily adopt “reasonable precautions” to prevent AI tools from being used to disrupt elections around the world. Executives from the companies also announced a new framework to tackle AI-generated deepfakes designed to deceive voters.
The other big event that could attract plenty of email and phishing scams is the Summer Olympics in Paris. As McAfee’s report warns, scammers capitalise on the excitement around such events and “take a chunk out of your wallet and steal personal info” with the lure of tickets, merchandise, and other promises.
Experts have coined a term that could define the cybersecurity threat landscape—“scamverse”. Mukherjee warns that these new scams are unlike anything seen before. “Cybercriminals are getting smarter at lightning speed and their scams are becoming harder to distinguish and easier to fall for,” he says.
In the past, it was easy for users to spot such scams—a wrong logo, poor grammar or fake websites would give them away. That has changed because of generative AI, which is being used by cybercriminals to remove these traditional red flags. “Generative AI scams are better at impersonating real humans... it utilises a person’s unique traits like appearance and voice extracted from social media to scam users. Generative AI also allows scammers to custom create phishing websites in different languages to target individuals based on location,” says Mukherjee.
A simple example of a modern-day scam revolves around bogus QR codes. QR code scams work much like doctored links, except that a tampered code is nearly impossible to distinguish from a genuine one by sight. Scanning a fake QR code could give scammers access to smartphone functions such as payment apps, contacts and messaging, or let them place phone calls.
Indians are also having a tough time on dating apps, thanks to AI. McAfee’s recent Modern Love study revealed that 39% of Indians’ potential love interests turned out to be scammers and 77% of Indians came across fake profiles/photos that looked like they had been generated by AI on dating platforms and social media.
Dating platforms seem to be paying heed. Earlier this week, dating platform Tinder announced that it is expanding its identity verification programme in the US, the UK, Brazil and Mexico, though there are no plans for India. The ID verification system has been tested in Australia and New Zealand, where people who had been verified saw a 67% increase in matches compared with those who weren’t.
No matter where you are, be vigilant when you share or swipe. Trouble could just be a click away.