The rise of deepfakes and the fight for digital truth

BY PRUDENCE OKEOGHENE EMUDIANUGHE

On May 30, 2025, a 49-minute video surfaced on a YouTube channel named Bold Pan-African, seemingly showing Peter Obi, the Labour Party candidate in Nigeria’s 2023 presidential election, delivering a passionate address to Burkina Faso’s military leader, Ibrahim Traoré. Dressed in his signature black kaftan, the figure spoke in a calm, measured voice.

“My dear brother, President Ibrahim Traoré. Permit me to write to you, not merely as a fellow African leader, but as a concerned son of our continent,” the voice appearing to be that of Obi said. “I see courage, not rebellion. I write to you not to offer advice, but to offer brotherhood. Your voice is not alone.”

The video, titled “Peter Obi’s Powerful Letter to President Ibrahim Traoré | Africa’s Path to Justice & Unity”, rapidly gained traction. Pan-Africanists hailed it as visionary, and by late June 2025 it had amassed over 136,000 views and 730 comments. But murmurs of doubt swelled in the comment section.

Fact-checkers and media researchers at Dubawa began to dig and quickly found that Obi had released no such statement. No verified press channel carried it, no news outlet reported it, and no aide confirmed its authenticity. Analysts reviewing the footage noted slight inconsistencies in lip-syncing and frame fluidity, telltale signs of AI-generated content.

The video is widely suspected to be a deepfake, generated using synthetic voice cloning and archival footage of Obi. Experts suggest it may have been designed to tap into growing Pan-Africanist sentiment and position Obi as an ideological figurehead for the anti-Western rhetoric gaining momentum in parts of Francophone Africa.

According to Dubawa, “The video is a deepfake, created by overlaying manipulated audio onto authentic footage from a media address. In the original video, Obi spoke strictly on Nigerian governance issues, not on Burkina Faso’s leadership.”

This case represents a new frontier for synthetic media in Africa: one where deepfakes are not just tools for scandal or satire but for influence-building and ideological persuasion. In a region with high mobile phone usage and low digital literacy, such misinformation doesn’t just go viral; it rewrites public memory. And, like many deepfakes, the video has remained online, untouched by platform moderation or takedown efforts.

The Obi-Traoré deepfake raises urgent questions: Who made this? For what purpose? And what happens when truth becomes optional in shaping Africa’s future?

FROM PARODY TO POLITICAL WEAPONRY: THE EVOLUTION OF DEEPFAKES

Deepfakes began on the internet’s margins. In 2017, AI-generated celebrity pornography on Reddit triggered moral panic. By 2018, filmmaker Jordan Peele released a warning disguised as satire: a deepfake of Barack Obama calling Donald Trump names. The message was clear: the fakes were coming. They’ve arrived.

In 2024, Meta unveiled Movie Gen, capable of generating video from a single photo. Google’s Veo 3, released in 2025, could produce high-definition videos with sound, voice, and seamless transitions. These innovations, meant for entertainment and education, have also enabled bad actors. Within minutes, someone with no coding knowledge can create a convincing fake of a journalist, activist, or public official.

But not all deepfakes come as sophisticated videos. Many appear as low-quality screenshots, manipulated WhatsApp messages, or AI-enhanced images.

A chilling example surfaced in Bali, Indonesia, where a student identified by the initials SLKDP was expelled from Udayana University for creating sexually explicit deepfakes of more than 37 female classmates. Images pulled from social media were doctored using AI tools like Stable Diffusion.

“This is a grave ethical violation that tarnishes the university’s reputation,” said Dewi Pascarini, head of Udayana’s communications unit.

SLKDP admitted guilt during an ethics hearing. The university’s Task Force for Prevention and Handling of Sexual Violence led the internal investigation. None of the victims filed police reports, fearing stigma.

This is no isolated case. As recently as April, Indonesian police arrested suspects for using deepfakes of regional governors to promote fake motorcycle sales on TikTok. In another case, scammers impersonated President Prabowo Subianto using AI-generated footage to support a fraudulent aid campaign.

These incidents echo a global trend: synthetic media used for fraud, blackmail, and psychological harm.

SYNTHETIC MISOGYNY IN NIGERIA: A GROWING CHALLENGE

At the heart of the matter are three critical concerns:

  • Generative AI can now convincingly mimic a person’s voice, face, and mannerisms, creating synthetic media that is difficult to distinguish from reality.
  • Tools meant for entertainment and productivity are being repurposed to impersonate others, forge conversations, and destroy trust, sometimes without even touching video.
  • As more manipulated content floods the internet, it becomes harder to verify what is authentic, especially in emotionally charged or viral situations.

This isn’t just a celebrity issue. It speaks to how easily any individual’s identity can be hijacked with a few clicks and a convincing narrative. In a country with high smartphone penetration but limited access to forensic verification tools, deepfakes become a potent vehicle for misinformation, character assassination, and emotional manipulation.

Social media platforms continue to struggle with moderating manipulated content. Meanwhile, users, especially young Nigerians, are left to navigate a landscape where screenshots can ruin lives and seeing is no longer believing.

A recent exposé by 21 Magazine reveals how some online users in Nigeria increasingly use generative AI tools to create sexually explicit deepfakes of women without their consent. With tools like Stable Diffusion and LoRA extensions, users can easily generate manipulated images that place women’s faces on pornographic content or alter intimate images to make them more explicit. This growing misuse of AI is a disturbing digital extension of misogyny, in which technology is weaponised to harass and degrade women.

According to fact-checker Habeeb Adisa, “We’ve noticed that Grok AI has been used to strip clothing off female images found online. These are sexual deepfakes that go against the ethics of AI.”

He also pointed to the weaponisation of deepfakes during political defections: “…especially as different politicians are moving from one party to another, there are deepfakes, different audios portraying that one politician has announced his movement from one party to another, or edited images, or old pictures edited to look real and new, showing one party member decamping to another party.”

In rural Nigeria, where media literacy remains low, these fakes have real consequences. False narratives spiral unchecked, often endangering reputations and influencing public perception.

LEGAL VACUUM AND POLICY PARALYSIS

Nigeria is among the few African countries that have formally acknowledged the threat. The White Paper on Online Harm Protection and Content Moderation, drafted by the National Information Technology Development Agency (NITDA), outlines a plan to address misinformation and manipulated media and makes general recommendations on AI ethics, calling for transparency, fairness, and content moderation. As of mid-2025, however, it remains non-binding.

According to Adeola Adeyemi, a legal expert, there are “no clear regulatory frameworks addressing the creation or distribution of manipulated media. However, there are laws addressing media matters in Nigeria, such as the Constitution of the Federal Republic of Nigeria, the Cybercrimes (Prohibition, Prevention, Etc.) Act 2015, the Criminal Code, the Copyright Act 2022 and the Evidence Act 2011”.

Adeyemi insisted on the need for a “balanced regulatory framework that protects trust and human dignity while making room for creativity”, adding that ethical values should be inculcated into AI usage and that regulators should “introduce adequate and mandatory disclosure of content sources, such as watermarks and labelling AI-generated content as such”.

FACT-CHECKERS AND CITIZEN VIGILANCE

In the absence of clear regulation, groups like Dubawa, FactCheckAfrica and Africa Check train journalists to detect deepfakes manually. But with AI detection tools like Sensity and Hive Moderation too expensive for most newsrooms, it’s like a digital game of whac-a-mole.
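
Some of that manual work can be scripted. The snippet below is a minimal sketch, not any fact-checking organisation’s actual workflow, of how a newsroom might use the free OpenCV library to pull roughly one frame per second out of a suspect clip for eyeball review of lip-sync and frame fluidity (the file name is a placeholder):

    # Minimal sketch: extract about one frame per second from a suspect
    # video so a journalist can inspect lip-sync and frame fluidity.
    # Assumes: pip install opencv-python; "suspect.mp4" is a placeholder.
    import cv2

    cap = cv2.VideoCapture("suspect.mp4")
    fps = int(cap.get(cv2.CAP_PROP_FPS)) or 25  # fall back if FPS is unknown
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:          # end of file or unreadable video
            break
        if idx % fps == 0:  # keep one frame per second of footage
            cv2.imwrite(f"frame_{idx:05d}.png", frame)
        idx += 1
    cap.release()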

“In Nigeria, media literacy is a bit low, especially in the rural areas. Different organisations have been working to bring digital literacy and media literacy to these rural areas, but that is not enough. We see a big gap, a big disconnect, amongst the citizens on media literacy,” said Habeeb Adisa.

“A lot of people do not know how to source news. They believe that anything they see online is real and is true. So, it is always very difficult to convince them that these are not real, especially when they see it on social media or when they hear that it is from social media.”

Uchechi Blessing, a social media user from Port Harcourt, said: “I might believe some videos, but there are some videos where I actually have to go to the official page of the politician or celebrity to be sure that it is from them, because people are doing crazy things right now with AI.”

“If you get content and you’re not sure of it, don’t spread it,” she added. “A video, a write-up, a picture, if you’re not sure of it, don’t spread it, to reduce the spread. Because most of the reason these fake contents spread is that we get them and we don’t even verify.”

TOWARD A REGULATED FUTURE

A safe digital future may lie in a mix of measures: watermarking AI-generated content, digital literacy campaigns, open databases of known deepfakes, and cross-border regulation that acknowledges the internet’s global nature.
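
What watermarking means in practice ranges from cryptographically signed provenance metadata, the approach taken by standards such as C2PA, to bits hidden in the pixels themselves. The toy sketch below illustrates only the second idea and is not a scheme any platform is known to deploy: it hides a short label in an image’s least significant bits, which survives lossless PNG saving but not recompression or cropping.

    # Toy sketch of invisible watermarking: hide the label "AI-GENERATED"
    # in an image's least significant bits, then read it back.
    # Illustration only; real provenance systems sign metadata instead.
    import numpy as np
    from PIL import Image

    LABEL = "AI-GENERATED"

    def embed(src: str, dst: str) -> None:
        pixels = np.array(Image.open(src).convert("RGB"))
        bits = np.array([int(b) for byte in LABEL.encode()
                         for b in f"{byte:08b}"], dtype=np.uint8)
        flat = pixels.reshape(-1)            # a view into the pixel data
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # set LSBs
        Image.fromarray(pixels).save(dst, "PNG")  # PNG is lossless

    def extract(path: str) -> str:
        flat = np.array(Image.open(path).convert("RGB")).reshape(-1)
        rows = (flat[:len(LABEL) * 8] & 1).reshape(-1, 8)
        return "".join(chr(int("".join(map(str, row)), 2)) for row in rows)

The fragility of such pixel tricks is one reason provenance efforts lean on signed metadata and robust watermarks rather than raw bit-hiding.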

It will also require platforms like TikTok, Meta, and YouTube to move from reactive moderation to proactive and preventive designs, detecting manipulation before it goes live.

More importantly, African voices must be central in global AI governance conversations. With the right support, the continent can pioneer ethical, inclusive responses to technological threats and not just react to them after the damage is done.

Still, regulation alone won’t save us. The average person scrolling on a smartphone is now on the frontlines of an information war. That’s why media literacy isn’t optional; it is a form of social immunity. Citizens must be sensitised to the signs of synthetic content:

  • Look for unnatural blinking, distorted backgrounds, or mismatched lip movements.
  • Cross-check audio for robotic pacing or tone inconsistency.
  • Verify content against known sources and official platforms; one minimal scripted check is sketched below.
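
One such check compares a frame from a suspect clip with a frame from verified footage using a perceptual hash. The sketch below is a minimal illustration with placeholder file names, not any fact-checker’s actual tool; a small Hamming distance suggests the suspect video reuses authentic footage, as Dubawa found in the Obi case, though no single test is conclusive.

    # Minimal sketch: compare a suspect frame with a verified one using
    # a perceptual hash. Assumes: pip install pillow imagehash.
    # Both file names are placeholders.
    import imagehash
    from PIL import Image

    suspect = imagehash.phash(Image.open("suspect_frame.png"))
    verified = imagehash.phash(Image.open("verified_frame.png"))

    distance = suspect - verified  # Hamming distance between 64-bit hashes
    if distance <= 10:
        print(f"Distance {distance}: likely derived from the verified footage.")
    else:
        print(f"Distance {distance}: no match; check other frames and sources.")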

Teaching people to question, pause, and investigate before sharing is one of our most powerful defences. In a sense, we must all become digital detectives in an era where seeing is no longer believing.

The battle against deepfakes is not just about technology; it’s about democracy, trust, and truth itself. In an age where a face and a voice no longer guarantee authenticity, societies must decide how to rebuild public trust.

But this battle is not one that ends. It is an unceasing game of whac-a-mole, where every detection breakthrough is followed by a new deception technique. Deepfakes, in the right hands, can serve satire, art, and even accessibility; left unchecked, they can cause profound harm.

Until regulation catches up, the question remains: Is this real?

Because in the age of AI, reality is no longer self-evident.

And if we fail to act, we may all be caught in an endless loop of illusion, stuck in the machine, whacking at shadows.


This report was produced with support from the Centre for Journalism Innovation and Development (CJID) and Luminate.
