
How deepfake porn and propaganda threaten India

The new AI technology is upending the age-old truism: seeing is believing. Is India prepared to handle it?

Illustration by Jayachandran

There is a popular internet meme called “Rule 34”. It goes, “If it exists, there’s porn of it.” There is no exception, it is said, to this “rule”. Not Pokémon, not Tetris blocks, not even unicorns.

In 2016, Ogi Ogas, a computational neuroscientist at Harvard, published a study on whether the “rule” held up. It did—for the most part. The more obscure pornography could be difficult to access but “it’s out there if you want to find it”, he told The Washington Post. And if it isn’t, there’s the lesser-known “Rule 35”: “If there’s no porn of it, porn will be made of it.”

It’s such “rules” that probably best sum up what happened on Reddit in late 2017. On the subreddit “r/deepfakes”, a user shared a face-swapping algorithm he had developed to create videos. People used it to create pornographic videos of well-known women—including Scarlett Johansson, Gal Gadot, Kristen Bell and Michelle Obama—and posted them there. By February 2018, when Reddit banned it, the subreddit had nearly 90,000 subscribers and worldwide notoriety.

Digital morphing of celebrities’ images isn’t particularly new. From MS Paint and CorelDRAW to Photoshop and internet-based tools, apps have long been used to show famous people—generally women—in sexually explicit situations. What makes deepfakes especially unnerving is that they are created with a machine-learning algorithm that gets better with use. “Nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look as eerily realistic as desired,” Johansson told The Washington Post after her deepfake video surfaced. Legal action was useless, she added, “mostly because the internet is a vast wormhole of darkness that eats itself”.

A similar threat is playing out in India today. There are nearly a dozen websites hosting deepfake pornography featuring Indian women, most of them from the entertainment industry, including some of the country’s best-known film actors. India has banned over 800 adult websites since 2015 for allegedly hosting paedophile content, but these deepfake videos are only a few clicks away. So far, there has been little discussion and no strategy on how to deal with them.

To be fair, creating deepfakes isn’t exactly easy – you ideally need a few thousand images to train the AI to recognise a person’s facial expressions and create a believable video. But publicly available software like Faceswap and DeepFaceLab dramatically reduces the need for manual intervention. This opens up several possibilities for misuse: to harass and blackmail someone, to generate false digital evidence or an alibi, or to create political propaganda. And they all work by upending the age-old truism: seeing is believing.
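How do such tools work? At their core is a simple trick: a single encoder is trained to compress the faces of two people into a shared representation of pose and expression, while a separate decoder learns to reconstruct each person’s face from it; swap the decoders, and one person’s face appears with the other’s expressions. The sketch below is a hypothetical, heavily simplified illustration of that idea in PyTorch, with random tensors standing in for the thousands of aligned face crops that real training needs – it is not the actual code of Faceswap or DeepFaceLab, which add face detection, alignment, masking and much else.

import torch
import torch.nn as nn

# One shared encoder: compresses any 64x64 face crop into a compact
# latent vector capturing pose and expression.
class Encoder(nn.Module):
    def __init__(self, latent=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent),
        )

    def forward(self, x):
        return self.net(x)

# One decoder per identity: learns to paint that person's face back
# onto whatever pose/expression the latent vector encodes.
class Decoder(nn.Module):
    def __init__(self, latent=256):
        super().__init__()
        self.fc = nn.Linear(latent, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

# Stand-in data: random tensors in place of the thousands of aligned
# face crops of person A and person B that real training requires.
faces_a = torch.rand(16, 3, 64, 64)
faces_b = torch.rand(16, 3, 64, 64)

opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.MSELoss()

for step in range(100):  # real training runs for days, not 100 steps
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The swap: encode person A's frame, decode with B's decoder, so B's
# face appears with A's pose and expression.
swapped = decoder_b(encoder(faces_a))

The “few thousand images” matter because each decoder can only render a face convincingly across the angles, expressions and lighting it has seen during training.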

Indeed, deepfakes have come a long way from discussion threads in the backrooms of the internet. Sensity, an Amsterdam-based visual threat intelligence company, found that such videos had increased from 7,964 in December 2018 to 49,081 in June 2020. Nearly 96% of these videos were pornographic in nature; the top four deepfake websites had received over 134 million views by 2019. In recent months, there have also been bots on messaging apps like Telegram that can turn an image of a fully clothed person into a fake nude. One simply uploads a picture and pays a fee. Nearly 680,000 people – including Indians – had been targeted by such bots, according to a Sensity report. In October, the Bombay High Court directed the Ministry of Information and Broadcasting to look into it.

Deepfakes can also be a potent tool for political propaganda. Visual artists have created videos of Barack Obama, Mark Zuckerberg and Kim Jong-un saying the most unexpected things to illustrate the dangers of deepfakes. In look and sound, you would be hard-pressed to tell if they are real or made up. The only giveaways are the disclaimers by the creators, like the one at the end of the Kim Jong-un video: “This footage is not real, but the threat is.”

Ahead of the Delhi assembly election in February, a video featuring Bharatiya Janata Party (BJP) leader Manoj Tiwari criticising chief minister Arvind Kejriwal in the Haryanvi dialect went viral on WhatsApp. But Tiwari’s voice-over was actually done by a dubbing artist, and later lip-synced using deepfake software. A report by the digital news platform Vice found a Chandigarh-based company had partnered with the Delhi BJP’s IT cell to reach different linguistic voter bases. The BJP said the video was a one-time experiment, not part of its social media campaign.

The recent US election was believed to be most at risk from deepfakes but all one saw were poorly edited “cheapfakes”. This, however, doesn’t mean that the videos aren’t getting more sophisticated and accessible. “Today, I can create a deepfake video even with one photo,” says Ananth Prabhu, a cybersecurity trainer for the Karnataka police. “Ten months from now, we will have very good tools for a flawless output. In one-and-a-half years, we will have extraordinary tools.”

The National Commission for Women didn’t respond to Lounge’s requests for an interview, nor did the ministry of information and broadcasting. Experts, however, maintain that India is likely to be caught off-guard. “The problem is, our systems and institutions belong to the 20th century,” says Sanjay Sahay, a retired additional director general of police who set up the police IT network in Karnataka. “And our mindset and DNA are in the 19th century.”

This time, short-cuts like website bans won’t work, he adds. It’d require political will, personnel training and a large-scale infrastructure upgrade. “The cyber capabilities have to improve,” says Sahay. “It has to improve within the shortest span of time.”

Legislation may be some way off but over the past two years, Facebook, Twitter and Discord have banned deepfake videos. Big adult websites like Pornhub and xHamster have mechanisms to block deepfakes and “non-consensual porn”. Yet many creators still manage to use these platforms to connect, exchange notes and solve technical glitches. Most stay anonymous or use an alias.

PM Faker, a prolific deepfake porn creator, is one of them. He and I connected on his Discord channel. He wouldn’t say much about himself, except that he is a professional gamer, lives outside India but is of Indian origin. He started with photoshopping nude images of celebrities before moving on to deepfake pornography two years ago. His first video, featuring a popular actress from Karnataka, took him a month. “But as the technology grows and powerful graphic cards came, the table is turned now,” he says.

PM Faker’s channel is a deepfake porn creators’ playground. It has threads titled “training”, “progress” and “celebrity facesets”, collections of hundreds of images that can be fed into deepfake software and used to create pornography. “(It was) Personal pleasure at first,” he says. “Once my name established in both Indian and international (deepfake porn) sites, (I started) doing it for fame.”

He has so far created over 40 videos, featuring Bollywood and Hollywood actresses. His work has been noticed in his community (“I have been in touch with the legendary Fakers out there in the world”) but hasn’t translated into the kind of money he would have liked. “Indian viewers—most of them want to enjoy all perks without paying for the service rendered,” he says. “Can’t blame them. It’s rooted in our blood, using pirated copy of softwares and watching movies/TV series from Torrent without paying up a dime.”

He doesn’t think he’s doing anything wrong. “According to the law, it’s wrong. But for me, it’s okay to have some fun,” he says. There are many like him: “The community will grow bigger and bigger as long as (there is) human desire,” he told me.

There is also a rise in the number of websites eager to host such content. Take MrDeepFakes.com, which describes itself as “the largest and best” for deepfake porn. Along with videos, it has a discussion forum “to provide a safe-haven without censorship, where users can learn about this new AI technology, share deepfake videos, and promote development of deepfake apps”. It encourages users to remain anonymous and claims to not allow videos of minors or “non celebrities”. “However, if you want to request a fake of yourself or your girlfriend/wife and can provide a video where they consent to being faked you can private message creators with requests,” it says in its guidelines.

It’s a pattern often seen in deepfake sites: an unverified claim of content moderation and the belief that public personalities are fair game. Like MrDeepFakes, SexCelebrity.net too only allows deepfakes of YouTube stars, Twitch streamers, actresses, singers and “other types of public persons”. “These videos are FAKES,” the website says, adding that its content is for “learning and entertainment purposes”. “If you have any issues with the content on this website, please leave it and don’t visit us anymore.”

The main reason such websites exist is to make money. “The views of the video are monetised,” says Giorgio Patrini, founder of Sensity. “While you are on these communities, watching, commenting or uploading, those video portals would make money out of it.” They charge per view or clip, and have a network of advertisers. Often, payments are made via cryptocurrency.

“If only public figures were attacked earlier, it was because of abundant materials available from videos or movies,” says Patrini. “But 2020 is a tipping point. Any one of us can be attacked in terms of ability to be attacked. Any social media account is enough to make possible creation of deepfake of some sort. This technology is getting less data hungry and easier to use.”

There haven’t been any reported cases of deepfakes used to target private individuals in India so far. That’s not to say this hasn’t occurred. Advocate Debarati Halder, founder of the Ahmedabad-based Centre for Cyber Victim Counselling, says that in the past two years, she has counselled over a dozen women whose faces have been morphed into deepfake pornography. Increasingly, she says, it’s being seen in rural areas too.

“In November, I had the case of a minor woman whose face was used in a sexually explicit video. The culprit had used a software that was freely available online, like a photo mashup. When I was interacting with the victim, I asked if she knew the person. He knew the victim for several years. So he had altered expressions to show that she was in a deep activity and actively engaging in it.”

Halder advised the victim to go to the police. Her family wasn’t willing. “They said, either you take it down or we will go to a hacker. This happens in 90% of the cases—a police investigation takes time. At times, they might not be able to convince the host websites to take down the videos. So women prefer to go to hackers.”

The video wasn’t sophisticated. But the quality is hardly what causes distress. “If I am able to create a deepfake of someone’s wife, whether I put it in the public domain or just send it to 10 people, as long as her husband and son can see, two fellows is also okay. She can keep defending herself for the rest of her life. This is the psychology,” says Sahay. Very often, it’s a revenge mindset at work.

Something similar happened to journalist Rana Ayyub, a critic of the BJP-led government. In 2018, a sexually explicit video featuring a lookalike was circulated on social media and uploaded on adult websites. Another tweet shared the video’s URL and Ayyub’s personal number; she was inundated with suggestive and threatening messages.

“The police had no sympathy whatsoever,” says advocate Vrinda Grover, who had accompanied Ayyub to register a complaint at Delhi’s Saket police station. “They were trying their level best to make us go to any other police station. I remember clearly that they said, ‘If we find out that you noticed it not in Saket jurisdiction but somewhere else, we will have to take action.’ I said, ‘What if I was mid-air when I opened the phone?’”

Initially, it was believed that the video was a deepfake. Pratik Sinha, founder of the fact-checking portal AltNews, clarified that it was actually a lookalike of Ayyub, wrongly identified as the journalist. Ayyub lodged a police complaint but nothing has happened. “Despite our efforts, the culprit couldn’t be identified yet,” the Delhi police wrote to Ayyub in an email, a copy of which is with Lounge. “It is, therefore, the investigation of the case is being closed and ‘untraced report’ being filed...”

“I always thought no one could harm me or intimidate me, but this incident really affected me in a way that I would never have anticipated,” Ayyub wrote in Huffington Post in November 2018. “Now I don’t post anything on Facebook. I am constantly thinking what if someone does something to me again.”

Until now, misinformation on the internet has mainly taken the form of unverified forwards, malicious editing, wrongful attribution and low-quality morphed images. The rise of deepfakes thus puts us at a crossroads. “It’s only a matter of someone taking a decision whether they want to go down this path,” says Sinha. “It hasn’t happened yet. But with every passing day, the ethical quotient that people subscribe to is falling.”

The only way to deal with the threat, say experts, is awareness, education and legislation. “So if you are watching a video that’s very sensational, question the veracity of it,” says Patrini. If something is on social media and it has not been reported or fact-checked by a credible news organisation, it’s already a sign that something might be wrong. As he says, “It’s good to be suspicious.”

***

How to Spot a Deepfake

For all the advances in the technology, a lot of deepfakes out there have glitches. Here are a few clues that can give them away:

  • The image and its sound are not in sync.
  • A person’s face ‘lags’ while emoting or moving.
  • The subject’s eyes may be of different colours.
  • The teeth appear as a single white block.
  • The background of the video is foggy.
  • The colour in the video flickers even though the light conditions remain the same (see the sketch after this list).
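
Some of these clues can even be checked automatically. The toy sketch below, in Python with OpenCV, flags sudden frame-to-frame jumps in a video’s average colour – a crude stand-in for the flicker test in the last item. The threshold and the file name are illustrative assumptions, not a production deepfake detector.

import cv2
import numpy as np

def flicker_frames(path, threshold=12.0):
    # Return indices of frames whose mean colour jumps by more than
    # `threshold` (on a 0-255 scale) relative to the previous frame.
    cap = cv2.VideoCapture(path)
    flagged, prev_mean, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mean = frame.reshape(-1, 3).mean(axis=0)  # average B, G, R values
        if prev_mean is not None and np.abs(mean - prev_mean).max() > threshold:
            flagged.append(idx)  # abrupt colour shift: worth a closer look
        prev_mean, idx = mean, idx + 1
    cap.release()
    return flagged

# Hypothetical usage, with an illustrative file name:
# print(flicker_frames("suspect_clip.mp4"))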

At the rate at which the technology is improving, these signs may disappear over time. So if you’re watching a video that’s very sensational, question its veracity or wait for an expert or a fact-checker to take a look at it. If it’s only on social media and hasn’t been reported or fact-checked by a credible news organisation, that’s already a sign that something might be wrong.

Courtesy: Sensity
