A new report released by pi-labs has revealed that 93 per cent of explicit deepfake victims are women, highlighting the growing misuse of artificial intelligence and the disproportionate impact of synthetic media on female users in digital spaces.
The findings indicate a sharp surge in deepfake content targeting women, with incidents increasing by nearly 900 per cent in recent years. The report notes that synthetic media, once regarded as a niche technological concern, has now evolved into a widespread digital threat with serious social and reputational consequences.
In India, the broader trend of cyber abuse mirrors this escalation. Cybercrime complaints involving women have risen from around 50,000 cases in 2024 to nearly 80,000 by 2026, an increase of approximately 60 per cent.
The research further highlights that almost 98 per cent of deepfake pornography is directed at women. This surge has largely been driven by the availability of face-swapping applications and automated bot networks that disproportionately target women and girls, including school-aged girls and young professionals.
Most victims in India fall within the age group of 18 to 30, ranging from students to working professionals. The report identifies Bengaluru as the city reporting the highest number of such cyber harassment cases, with incidents involving school-aged girls also becoming increasingly common.
The study also points to a widespread culture of silence surrounding digital abuse. Globally, nearly 62 per cent of deepfake abuse cases involving women remain unreported due to stigma and social pressure. Similar patterns have been observed in India, where more than one-third of women experiencing online harassment choose not to pursue any formal complaint and often reduce their online activity after such incidents.
Another concerning finding is the lack of awareness regarding legal protection. Around 33 per cent of women surveyed said they were not aware of the laws available to safeguard them from online harassment and digital exploitation.
City-level data highlights the growing geographical spread of cybercrime across India’s major metropolitan centres. Bengaluru accounts for nearly 30 per cent of reported cases, followed by Hyderabad with around 14 per cent and Mumbai at approximately 13 per cent. Other cities such as Chennai and Kolkata contribute about 5 per cent each, while Delhi accounts for roughly 3 per cent of reported cases.
Commenting on the findings, Ankush Tiwari said that artificial intelligence has become one of the most powerful technologies of the modern era, but its misuse is creating a growing trust deficit in digital environments. He noted that manipulated identities and reputational harm can occur within minutes, leaving victims to deal with significant emotional and social consequences.
The report identifies image morphing and deepfake video creation as the most common forms of AI misuse affecting women. Deepfake pornography remains among the most frequently produced categories of manipulated content and often continues to circulate even after being identified as fabricated.
Researchers also observed an emerging trend involving fully AI-generated female personas that are not based on real individuals. These synthetic personalities are gaining substantial engagement on social media platforms, raising questions about digital authenticity and public trust in online identities.
Containing the spread of deepfakes remains a major challenge due to the widespread availability of generative AI tools. Industry estimates suggest that more than 5,000 face-swap tools and over 1,000 voice-cloning applications are currently accessible online, enabling the rapid creation and distribution of manipulated content.
To address these risks, the company has developed pi-authentify, an artificial intelligence-driven detection system designed to identify markers left by generative tools in digital media. The platform analyses suspected content and generates an authenticity score that can assist in content takedowns or support legal action.
Another solution introduced by the company, NaMoKavach, offers a verification service where users can submit suspicious media through a secure portal and receive a confidential assessment within two working days.
The report concludes that limiting digital exposure and adopting deepfake detection tools could help individuals reduce the misuse of their identities online. It emphasises that the gendered impact of synthetic media represents a growing digital safety concern requiring coordinated action from technology platforms, regulators and individuals.
