
As generative AI grows, deepfake detection is still lagging behind

Manipulated images, or so-called deepfakes, have become a permanent fixture in the digital world

There is a need for investment in enhanced deepfake detection. (Pixabay)


Recent viral images, from Donald Trump supposedly being arrested to Pope Francis wearing a Balenciaga coat, have made people double-check how real they actually were.

Manipulated images, or so-called deepfakes, have become a permanent fixture in the digital world. However, while artificial intelligence-powered content generation is booming, the deepfake detection space is lagging behind.

This was highlighted by researchers at Deakin University’s School of Information Technology, outside Melbourne, whose algorithm to detect altered images of celebrities in deepfakes performed best last year, according to Stanford University’s Artificial Intelligence Index 2023. The algorithm was correct 78% of the time. However, the method needs further improvement, Chang-Tsun Li, who developed the algorithm, told Bloomberg earlier this week.

Also read: We need to talk about the environmental impact of AI models

Deception through deepfakes can have harmful consequences: diminished trust in the media, lives damaged by manipulated images, and their use as political propaganda.

While big tech companies have heavily invested in generative AI, the technology needed to detect manipulated content is limited by a lack of funds. According to research firm HSRC, the global market for deepfake detection was valued at $3.86 billion in 2020 and is expected to expand at a compound annual growth rate of 42% through 2026.

“I talk to security leaders every day,” Jeff Pollard, an analyst at Forrester Research, told Bloomberg. “They are concerned about generative AI. But when it comes to something like deepfake detection, that’s not something they spend a budget on. They’ve got so many other problems.”

Some companies have dipped their toes into the space recently. In November 2022, Intel introduced FakeCatcher, which can differentiate videos of real people from deepfakes with 96% accuracy in milliseconds, according to Intel's press statement. FakeCatcher was designed by Ilke Demir in collaboration with Umur Ciftci from the State University of New York at Binghamton. The detector looks for authentic clues in real videos, such as the subtle “blood flow” in the pixels of a video. When our hearts pump blood, our veins change colour. These blood-flow signals are collected from all over the face, and algorithms translate them into spatiotemporal maps. Using deep learning, the detector can then instantly classify a video as real or fake.
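The basic intuition can be sketched in a few lines of code. The toy example below extracts a raw colour signal from a detected face over time; it is not Intel's implementation, and the Haar-cascade face detector, video path and green-channel choice are assumptions made purely for this sketch.

```python
# Toy illustration of the idea behind "blood flow" (photoplethysmography-style)
# detection: track tiny colour changes in facial skin across video frames.
# This is NOT FakeCatcher; it only extracts a raw temporal signal.

import cv2
import numpy as np

def extract_green_signal(video_path: str) -> np.ndarray:
    """Return the mean green-channel intensity of the face region per frame."""
    # OpenCV ships a stock frontal-face Haar cascade; an assumption for this sketch.
    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    capture = cv2.VideoCapture(video_path)
    samples = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]                 # use the first detected face
        face = frame[y:y + h, x:x + w]
        samples.append(face[:, :, 1].mean())  # green channel (index 1 in BGR)
    capture.release()
    return np.asarray(samples)

signal = extract_green_signal("sample_video.mp4")  # hypothetical file path
# A real detector would gather such signals from many facial regions, build
# spatiotemporal maps, and feed them to a trained deep-learning classifier.
print(f"Extracted {signal.size} samples; std dev = {signal.std():.4f}")
```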

“Deepfake videos are everywhere now. You have probably already seen them; videos of celebrities doing or saying things they never actually did,” Ilke Demir, a senior staff research scientist at Intel Labs, said in the press statement.

Startups such as Netherlands-based Sensity AI and Estonia-based Sentinel are also developing deepfake detection technology, according to the Bloomberg report. However, current methods for detecting fake videos may not keep pace with AI's growth. Most methods for spotting fake images and videos involve comparing visual characteristics, training computers to learn from examples. More powerful algorithms and computing resources are needed, according to Xuequan Lu, another Deakin University professor who worked on the detection algorithm.
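In practice, the learning-from-examples approach described above typically amounts to supervised binary classification. The sketch below, in PyTorch, shows the general pattern; the dataset folder, model choice and hyperparameters are illustrative assumptions, not drawn from any detector mentioned in this article.

```python
# Minimal sketch of supervised deepfake detection: fine-tune a standard
# image classifier on labelled examples of real and fake faces.

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folder layout: faces/real/*.jpg and faces/fake/*.jpg
dataset = datasets.ImageFolder("faces", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and replace the final layer.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real vs fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a few epochs, purely for illustration
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

The catch, as the researchers note, is that detectors trained this way only recognise the kinds of fakes they have seen, which is why generation tends to stay a step ahead of detection.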

(With inputs from agencies)

Also read: Elon Musk's Twitter starts removing blue ticks
