
Expert worries over threat of deepfakes on media environment

A digital expert and Co-founder of Eybrids, Ahmed Olajide Olabisi, has expressed concerns over the threats posed to the media environment by deepfakes, synthetic media created with artificial intelligence (AI).
Olabisi, in a piece titled “The Threat of Deepfakes: AI and ML in The Fight Against Synthetic Media”, noted that deepfakes are becoming more widespread by the day, owing to their incredible realism in audio, video, and picture manipulation.
According to him, the synthetic media, now being used to deceive the public, spread false information, and damage reputations, have progressed from a benign form of online amusement to a major cause of public concern, since their increasing sophistication makes it more difficult to separate real information from fake.
He, however, believed that in order to keep people’s trust in news sources in today’s fast-paced, social media-driven media environment, it has become imperative to identify deepfakes and prevent them from spreading.
“Given the prospective disruption of political processes, effects on financial markets, and erosion of public trust brought about by deepfakes, effective detection techniques are important,” he added.
Olabisi, therefore, harped on the significance of Artificial Intelligence (AI) and Machine Learning (ML) in the battle against deepfakes, as well as the continuous development of these technologies.
He argued that AI-driven systems are advancing in their capacity to detect deepfakes by analyzing patterns, anomalies, and deviations in audiovisual data, and by using deep learning and neural networks to recognize manipulated material, thereby protecting digital platforms from the dangers of synthetic content.
Tracing the evolution of deepfakes from simple picture editing to complex video and audio creation, Olabisi noted that the technique, which has roots in early face-recognition research in the 1990s, did not come to public attention until 2017.
According to him, deepfakes overcame their initial limitation of face-swapping in still images partly due to developments in machine learning, especially Generative Adversarial Networks (GANs), introduced in 2014, which pit a network that generates content against one that tries to distinguish real from fake, a competitive process that continually improves the quality of the generated content.
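The competitive process he refers to can be made concrete with a minimal sketch of a GAN training loop, shown below in PyTorch on toy one-dimensional data; the network sizes, optimizers, and random stand-in data are assumptions made purely for illustration, not the systems Olabisi describes.

```python
# Minimal, illustrative GAN loop: a generator learns to produce fakes while a
# discriminator learns to tell real from fake. Toy data, not actual deepfakes.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),                      # one real-vs-fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim)        # stand-in for real media samples
    fake = generator(torch.randn(32, latent_dim))

    # Discriminator step: learn to label real samples 1 and generated ones 0
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator call its output "real"
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In real deepfake systems the two networks operate on images or audio rather than random vectors, but the push-and-pull between the two losses is the same mechanism that drives the quality of generated content upward.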
“By 2018, deepfake technology had progressed to creating convincing video and audio, sparking both awe and concern.
“Today, deepfakes can manipulate facial expressions, lip movements, and voice in real-time, blurring the line between reality and fiction,” he stated.
Olabisi also argued that, while the technology has found applications in entertainment and education, it nevertheless raises privacy concerns and security threats.
Privacy concerns arise since anyone’s likeness can be co-opted without consent, while security threats emerge from potential misuse in fraud or disinformation campaigns.
Most critically, he added, deepfakes also erode trust in digital media, making it increasingly difficult to discern authentic content from fabrications.
The Eybrids boss, therefore, believed that detecting deepfakes requires sophisticated AI and ML techniques capable of recognizing subtle anomalies in synthetic content.
“Deepfakes, generated using techniques like Generative Adversarial Networks (GANs), often contain small discrepancies that are difficult for the human eye to detect, but can be identified through AI-powered analysis. AI-based detection techniques leverage vast amounts of data and advanced algorithms to learn patterns that differentiate real media from deepfakes,” he added.
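The pattern-learning approach described in the quote can be sketched as a small supervised classifier that scores individual video frames as real or synthetic. The model name, architecture, and random stand-in data below are illustrative assumptions, not an actual detection system.

```python
# Hedged sketch of an AI-based detector: a small convolutional network trained
# to classify media frames as real or synthetic from subtle pixel-level cues.
import torch
import torch.nn as nn

class DeepfakeFrameClassifier(nn.Module):          # hypothetical name
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)               # single real/fake logit

    def forward(self, frames):                     # frames: (batch, 3, H, W)
        return self.head(self.features(frames).flatten(1))

model = DeepfakeFrameClassifier()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy training step on random tensors standing in for labelled frames
frames = torch.randn(8, 3, 128, 128)
labels = torch.randint(0, 2, (8, 1)).float()       # 1 = real, 0 = deepfake
loss = criterion(model(frames), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```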
On audio, he stated, AI can also pick up unnatural speech patterns, irregularities in sound frequency, or mismatches between mouth movements and spoken words, cues that are often too subtle for human viewers to notice.
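A comparable check on the audio side might compute a spectrogram of a speech clip and score it with a classifier trained to flag synthetic voices. The helper function, network, and sample rate below are hypothetical, shown only to make the idea concrete.

```python
# Hedged sketch of an audio check: turn a speech clip into a spectrogram and
# score its frames with a small (here untrained, so illustrative) classifier.
import torch
import torch.nn as nn

def spectrogram(waveform, n_fft=512, hop=128):
    """Magnitude spectrogram via short-time Fourier transform."""
    window = torch.hann_window(n_fft)
    spec = torch.stft(waveform, n_fft=n_fft, hop_length=hop,
                      window=window, return_complex=True)
    return spec.abs()                       # (freq_bins, frames)

# Tiny per-frame classifier (1 = natural speech, 0 = synthetic); in practice
# it would be trained on labelled real and synthetic voice recordings.
audio_classifier = nn.Sequential(
    nn.Linear(257, 64), nn.ReLU(),          # 257 = n_fft // 2 + 1 frequency bins
    nn.Linear(64, 1),
)

waveform = torch.randn(16000)               # stand-in for one second at 16 kHz
frames = spectrogram(waveform).T            # (frames, freq_bins)
clip_score = torch.sigmoid(audio_classifier(frames).mean())
print(f"score that the clip is natural speech: {clip_score.item():.2f}")
```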
Olabisi, however, cautioned that advancements in AI detection technologies have prompted deepfake creators to innovate further, adding that the popularity of facial movement analysis as a detection method has led deepfake algorithms to improve their ability to replicate natural facial dynamics.
“Similarly, pixel-level analysis of deepfakes spurred creators to enhance image resolution and reduce detectable inconsistencies. As detection techniques evolve, so do the methods of countering them, resulting in a constant tug-of-war, with AI central to both sides of this arms race,” he stated.
