If you are struggling to find the right words to pen down a love letter this Valentine’s Day, ChatGPT can do that for you. The chatbot powered by artificial intelligence (AI), developed by OpenAI, has been a hot topic in every area—from technology to mental health. With Valentine’s Day approaching, it’s also showing its prowess in the dating space.
McAfee, a global software security company, has released a report on how AI and the Internet are changing love and relationships. This is based on a survey of over 5,000 people in nine countries across the world. The report highlights how people are being scammed using AI tools and ways in which technology has changed people’s approach to love and dating, according to a press release. The findings are aimed at ensuring the safety of those navigating the online dating space.
Here are some key findings of the report:
One in four people would use AI to write a love letter
In India, 62% of men and women plan to use AI to write love letters, the highest share of all the surveyed countries. Among those open to taking help from AI tools, willingness was more common among men (30%) than women (22%), with Indian men the most likely to use it. The top reason for using AI was feeling more confident that the words would be well received (27%), followed by not knowing what to write without it (21%) and not having the time to write something on their own (21%).
About seven in ten adults (69%) said they were unable to tell a love letter written by ChatGPT from one written by a human. While this may seem harmless, it could hurt those on the receiving end: half of recipients said they would feel hurt or offended if they realized the letter they’d received had been written not by their partner but by a machine.
More worryingly, when participants were shown a poem by E.E. Cummings alongside one generated by ChatGPT, 65% preferred the imitation. This suggests AI could change the way we approach dating and love. It also offers a glimpse of how deeply AI has seeped into the mainstream, raising the risk that misinformation and disinformation will spread more widely.
Steve Grobman, McAfee’s chief technology officer, warns, “While some AI use cases may be innocent enough, we know cybercriminals also use AI to scale malicious activity. It’s important to always be on the lookout for tell-tale signs of malicious activity – like suspicious requests for money or personal information.”
Catfishing, the most common scam
Catfishing, a slang term coined by a 2010 American documentary about Internet deception, refers to tricking someone with a fake online persona. Fake profiles are used to scam victims into revealing personal details, or even transferring money. The report found that Indians were the most likely to have been catfished, with 37% reporting having been targeted by an online fraudster.
With new AI tools, catfishing can become even harder to spot, but there are tell-tale signs. The most common (39%) was a reluctance to meet in person or get on a video call, and 27% of respondents were alerted when the person wouldn’t even talk on the phone. With so much of people’s lives available online, a quick search for the person’s pictures can help establish whether they are who they claim to be.
More importantly, if someone asks for personally identifiable information, such as your place of birth or passport details, it’s safe to assume the red flag has unfurled into a banner and it’s time to take a U-turn.
Here are some of the ways to stay safe in the world of AI and scammers.