Artificial intelligence (AI). Photo: Swiss Technology Institute
The Brain Builders Youth Development Initiative (BBYDI), a non-governmental organisation, has warned that if urgent action is not taken, AI might become the new frontier for digital gender-based violence, pushing women further into fear, silence, and invisibility.
In a statement on Wednesday issued by Sanni Alausa-Issa, BBYDI’s communications director, the organisation condemned the disturbing trend currently circulating on X (formerly Twitter), where users are leveraging artificial intelligence tools to “digitally undress women and non-consensually alter their images”.
BBYDI described the growing trend as vile and dangerous. The organisation said women, many of them young and unsuspecting, are having their images manipulated and shared across the internet without their knowledge or consent.
The organisation said the practice constitutes technology-facilitated gender-based violence (TFGBV) and must be recognised and prosecuted as such.
The NGO added that what some portray as “just tech” is, in fact, a violation of consent, a digital form of sexual violence, and a chilling reflection of the dangers posed by unregulated AI.
“This is a violation,” said Nurah Jimoh-Sanni, executive director of BBYDI. “Using AI to strip women of their dignity is a form of digital sexual assault. The psychological trauma, reputational damage, and fear it instils are real and devastating. We cannot allow AI to become a tool of exploitation and abuse.”
“BBYDI is particularly alarmed by the role of X (formerly Twitter) in allowing this trend to fester. Despite repeated incidents of digital abuse on the platform, a shocking lack of adequate moderation, policy enforcement, and transparency persists. There are growing concerns that Grok, the AI chatbot developed by X, may be indirectly enabling or failing to flag content that perpetuates digital harassment.
“We demand the following from X and all social media platforms: Immediate removal of all AI-generated content that violates privacy and dignity; permanent suspension and reporting of all accounts found circulating non-consensual AI-altered images; a public explanation of Grok’s moderation capabilities and AI guardrails to prevent abuse; and a commitment to transparent AI and content policies that protect women and girls globally.
“What happens online is never ‘just online’. The manipulation and sexualisation of women’s images lead to real-life consequences, including emotional distress, cyberbullying, blackmail, career loss, and even physical harm. TFGBV is part of a larger ecosystem of violence and impunity that continues to silence women and reinforce power imbalances.”
The NGO also called on stakeholders to take immediate and concrete steps to prevent further misuse of AI to perpetrate gender-based abuse online. It further asked the federal government to pass urgent AI governance legislation that includes strong safeguards against gender-based harms and misuse.
“The National Human Rights Commission and NITDA should investigate online AI-enabled violations and establish a digital GBV response framework. Civil society and tech coalitions should unite to demand AI ethics that centre on consent, privacy, and the safety of marginalised groups,” the NGO said.
The organisation asked social media companies to also integrate gender impact assessments into their AI development and platform moderation processes.
“We must not wait until more women are harmed before we act,” Jimoh-Sanni added. “Every moment of inaction is a moment when someone’s daughter, sister, or friend is being digitally violated. We must protect women online with the same urgency we demand in offline spaces.
“As part of our response, BBYDI is building a WhatsApp chatbot called Kemi under our HerSafeSpace project, a confidential digital tool where women can report cases like these and seek redress safely and swiftly.”