The war against digitally altered video content has begun.

AI-generated video clips that realistically superimpose a person’s face and voice onto another person — known as “deepfakes” — have become a rising concern.

IMAGE CREDIT: https://www.linkedin.com/posts/leocaisse

According to Trend Micro, the rise of generative AI tools like ChatGPT has opened up new opportunities for cybercriminals through the faster production of more potent malware, as well as new scams. One of these entails the use of ‘deepfakes,’ with malicious users now utilizing them to spread misinformation, damage reputation, and even commit fraud.

To address this threat, the global cybersecurity firm announced that it is taking a stand against the growing menace of deepfakes and is actively developing new software that can analyze video and audio to identify content manipulated with artificial intelligence.


“This is actively in development right now,” said Shannon Murphy, Trend Micro Global Risk and Security Strategist, during a media briefing held in Taguig City recently. She added that the company’s extensive research into cybercriminal trends and activities prompted it to invest in deepfake and audio-fake detection.

Shannon Murphy, Trend Micro Global Risk and Security Strategist

Deepfake detection: Technology takes the lead

Trend Micro’s research into cybercriminal activities has highlighted the need for advanced detection methods.

Their software, expected to be released later this year, aims to automate the identification of deepfakes, taking the burden off individual users to spot inconsistencies.

The software analyzes biological signals, audio frequency, and spatial cues. For example, it can detect subtle changes in skin temperature or the lack of natural variations in vocal frequencies, which are giveaways of a deepfake.

Additionally, the software can analyze video pixels to identify inconsistencies between the face and the background.
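To illustrate the kind of signal-level check described above — flagging audio that lacks the natural variation of a human voice — here is a minimal sketch. This is not Trend Micro’s actual method; the pitch values and the variation threshold are hypothetical, and a real detector would combine many such signals.

```python
import statistics

def looks_synthetic(pitch_hz, min_variation_hz=4.0):
    """Flag an audio clip whose per-frame vocal pitch varies too little.

    Natural speech shows constant micro-variation in pitch, while some
    synthesized voices are suspiciously flat. `min_variation_hz` is an
    illustrative threshold, not a calibrated value.
    """
    if len(pitch_hz) < 2:
        return False  # not enough frames to judge
    return statistics.stdev(pitch_hz) < min_variation_hz

# Natural-sounding pitch track: noticeable frame-to-frame variation
natural = [118.0, 124.5, 121.2, 130.8, 115.9, 127.3]
# Suspiciously flat pitch track, as a flagged deepfake might show
flat = [120.0, 120.4, 119.8, 120.1, 120.3, 119.9]

print(looks_synthetic(natural))  # False
print(looks_synthetic(flat))     # True
```

A production system would extract the pitch track from real audio and fuse this cue with the visual checks (skin-signal and pixel-consistency analysis) the article mentions.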

Deepfakes and the looming 2025 elections

The Philippines has already seen the negative impact of deepfakes: a recent manipulated audio clip purported to show President Ferdinand Marcos Jr. authorizing the use of force against China.

As the 2025 elections approach, concerns about deepfakes being used to sway voters are also high.

David Ng, managing director for Trend Micro in Singapore, the Philippines, and Indonesia, emphasized the role AI will play in the upcoming polls. “There will be a lot of AI used to slander opposition,” he said.

However, Ng also expressed optimism about tech firms’ increasing focus on responsible AI development and the prospect of stricter cybersecurity regulations. He stressed that social media platforms, for their part, must strengthen content moderation to inform voters and prevent manipulation.

“The second is a call to the social platforms themselves to do better content moderation, to actually flag that type of behaviors as well, to help inform the electorate so they can make the best possible decision,” Ng said.

Criminal AI: Hype or reality?

While cybercriminal forums host discussions about using malicious large language model (LLM) offerings such as DarkBard or FraudGPT to create malware or launch attacks, Trend Micro is downplaying the immediate danger.

According to the firm’s experts, these concepts appear to be more theoretical than practical, with no evidence of actual deployment by criminal groups. “It’s very challenging to pull this off,” Murphy explained, citing the example of WormGPT, an AI tool initially marketed to criminals that was quickly shut down by its own developer.

“It was up for about two weeks, I believe, and then it was pulled down from the developer because it hit this mainstream media and he was afraid of going to jail, essentially, so he pulled it out,” she related.

Murphy suggests that instances of “criminal AI” might be vaporware or scams targeting other criminals.

“Bad guys are now totally willing to scam other bad guys,” she said. Murphy adds that a more likely tactic is for criminals to attempt to “jailbreak” existing AI systems and exploit them for malicious purposes.

Trend Micro’s development of deepfake detection software is a significant step towards safeguarding online information and ensuring a level playing field in the upcoming elections. As both technology and regulations evolve, the fight against misinformation and cybercrime will continue.

By Ralph Fajardo

Ralph is a dynamic writer and marketing communications expert with over 15 years of experience shaping the narratives of numerous brands. His journey through the realms of PR, advertising, news writing, as well as media and marketing communications has equipped him with a versatile skill set and a keen understanding of the industry. Discover more about Ralph's professional journey on his LinkedIn profile.