India and the world are in the midst of a massive digital transformation, with over 86 percent of Indian households now connected to the internet. This growth brings incredible opportunities, but it also introduces a grave new threat: AI harassment.
As we navigate 2026, the misuse of artificial intelligence has created a landscape of digital violence that disproportionately affects women and children across the globe.
Understanding the New Frontier of AI Harassment
AI harassment, often called technology-facilitated violence, refers to any form of abuse or intimidation powered by artificial intelligence. This is not just about mean comments on social media. It involves sophisticated algorithms used to target individuals at a massive scale.
Common forms of AI-driven abuse include:
- Deepfakes: Manipulated videos or images that look and sound real.
- Nudification: Using AI apps to “undress” photos of real people without consent.
- Voice Cloning: Mimicking a person’s voice for extortion or fraud.
- Automated Stalking: Using bots to track and harass victims relentlessly.

The Grok Scandal: A Case Study in Corporate Negligence
A major example of this danger emerged in early 2026 involving the AI tool Grok, developed by xAI. The platform faced intense legal scrutiny after investigators found the tool was being used to generate massive amounts of harmful content.
Between late December 2025 and January 2026, Grok generated nearly 3 million sexualized images. More alarmingly, researchers identified 23,000 images that appeared to depict children. This led to a global legal reckoning:
- United States: California Attorney General Rob Bonta launched a formal investigation into Grok’s “spicy mode,” citing violations of state deepfake laws that allow for penalties of $25,000 per violation.
- Europe: French prosecutors raided the headquarters of the platform X, while the UK Information Commissioner’s Office launched a formal investigation.
- Southeast Asia: Indonesia and Malaysia temporarily banned the tool to protect their citizens.
Statistics: A Gendered and Generational Crisis
AI harassment is not a gender-neutral issue; it is a targeted weapon. Global statistics for 2025 and early 2026 paint a stark picture:
- 95% of online deepfakes are non-consensual pornographic images.
- 99% of the victims in these cases are women and girls.
- 38% of women worldwide have experienced some form of online violence.
- 1 in 4 women journalists report receiving AI-assisted harassment or death threats.
The threat to children is even more alarming. A February 2026 report from UNICEF revealed that 1.2 million children globally had their images manipulated into explicit deepfakes in the past year.
In some nations, this affects 1 in 25 children. The Internet Watch Foundation (IWF) reported a staggering 26,362% rise in photo-realistic AI videos of child abuse in 2025.

The Indian Context: Rising Threats and Legal Protections
In India, traditional crimes are declining while digital offenses are surging. Research from the Rati Foundation highlights that “Nudify” apps increasingly target Indian women, turning ordinary, modest photos into explicit imagery. Currently, 10 percent of all online abuse cases reported in India involve AI-generated content.
In response, India has taken significant legislative steps:
- Draft Amendments to the IT Rules (2025): Defined “Synthetically Generated Information” (SGI) to cover all AI-modified content. Platforms must now remove non-consensual explicit content within 24 hours.
- The Artificial Intelligence (Ethics and Accountability) Bill: Established an Ethics Committee and proposed fines of up to ₹5 crore (50 million rupees) for non-compliance.
- Digital Personal Data Protection Act (DPDPA): Allows for fines up to ₹250 crore for data breaches involving deepfakes.
- Labeling Requirements: AI-generated content must carry a visible label covering at least 10 percent of the image (a minimal illustration of such a label is sketched below).
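To make the 10 percent rule concrete, here is a minimal sketch of how such a visible label could be stamped onto an image. It uses the Pillow imaging library; the band placement, wording, and file names are illustrative assumptions, since the rules specify only the coverage threshold, not the appearance of the label.

```python
# Minimal sketch: stamp a visible "Generated by AI" label covering
# at least 10% of an image's area, per the 2025 draft IT Rule
# labeling requirement. Requires Pillow (pip install Pillow).
# Band placement, wording, and file names are illustrative assumptions.
from PIL import Image, ImageDraw

def add_ai_label(input_path: str, output_path: str,
                 text: str = "Generated by AI", coverage: float = 0.10):
    img = Image.open(input_path).convert("RGB")
    w, h = img.size

    # A full-width band whose height yields the required area share:
    # band_area / image_area = (w * band_h) / (w * h) = band_h / h
    band_h = max(1, int(h * coverage))

    draw = ImageDraw.Draw(img)
    # Opaque black band along the bottom edge of the image.
    draw.rectangle([(0, h - band_h), (w, h)], fill="black")
    # Center the label text inside the band (default bitmap font).
    bbox = draw.textbbox((0, 0), text)
    tw, th = bbox[2] - bbox[0], bbox[3] - bbox[1]
    draw.text(((w - tw) // 2, h - band_h + (band_h - th) // 2),
              text, fill="white")
    img.save(output_path)

if __name__ == "__main__":
    add_ai_label("synthetic.jpg", "synthetic_labeled.jpg")
```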
A National and Global Call to Action
We must stand united against digital exploitation. The safety of our citizens is not a secondary concern; it is a fundamental right. We must prioritize digital literacy so that people can identify AI manipulation, and we must demand accountability from AI developers who put “edgy” features above human safety.
If you are a victim of AI harassment in India:
- Report any AI-generated abuse on the National Cyber Crime Reporting Portal.
- Call the national helpline number 1930 for immediate assistance.
- Familiarize yourself with the latest IT Rules to understand your rights.
We can only secure our digital future through collective vigilance. Let us build a world where technology empowers rather than exploits.

Frequently Asked Questions (FAQs) about AI Harassment
Q1. Is it a crime to create a deepfake in India, even if it’s for a joke?
Under the 2025 IT Rule Amendments, any AI-generated content that “reasonably appears to be authentic” must be labeled. If a deepfake is used to defame, impersonate, or harass someone, even under the guise of a “joke”, it can be prosecuted under the Bharatiya Nyaya Sanhita (BNS) Section 356 (Defamation) or Section 319 (Cheating by Personation).
If the content is sexual in nature, it can attract criminal penalties of three to five years’ imprisonment.
Q2. What exactly is “SGI” and why is it in the new laws?
Synthetically Generated Information (SGI) is the new legal term used in India to define any information created or modified by AI (like deepfakes or voice clones). The law requires that any SGI shared on social media must have a visible label covering at least 10% of the content to prevent the public from being misled.
Q3. How can I tell if a video or image is a deepfake?
While AI is becoming more sophisticated, look for these “glitches”:
- Unnatural Blinking: Many AI models still struggle to replicate natural eye movement (a crude automated check for this is sketched after this list).
- Audio-Visual Lag: Watch the lips; sometimes the audio doesn’t perfectly sync with the mouth movement.
- Blurry Edges: Look closely at the hair, glasses, or jewelry, where the “mask” of the AI might look fuzzy.
- Check the Label: By law in 2026, most mainstream platforms are required to auto-detect and label AI content with a “Generated by AI” watermark.
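As an illustration of how the “unnatural blinking” tell can be checked automatically, the sketch below uses OpenCV’s stock Haar cascades (assuming opencv-python is installed) to measure how often a visible face has no detectable eyes, a rough proxy for blinking. A near-zero ratio over a long clip is one weak signal of a synthetic face. This is a toy heuristic for building intuition, not a forensic detector, and the file name is hypothetical.

```python
# Toy heuristic for the "unnatural blinking" tell: count how often
# eyes vanish (a crude blink proxy) while a face stays on screen.
# Requires opencv-python; the Haar cascade files ship with OpenCV.
# This is an illustration only, NOT a reliable deepfake detector.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def blink_ratio(video_path: str) -> float:
    """Fraction of face-bearing frames in which no eyes are detected."""
    cap = cv2.VideoCapture(video_path)
    face_frames = eyes_closed_frames = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        for (x, y, w, h) in faces[:1]:  # analyze the first face only
            face_frames += 1
            roi = gray[y:y + h, x:x + w]
            eyes = eye_cascade.detectMultiScale(roi, 1.1, 3)
            if len(eyes) == 0:  # eyes briefly undetected ~ a blink
                eyes_closed_frames += 1
    cap.release()
    return eyes_closed_frames / face_frames if face_frames else 0.0

if __name__ == "__main__":
    # People blink every few seconds; a near-zero ratio over a long
    # clip is one (weak) signal that the face may be synthetic.
    ratio = blink_ratio("suspect_clip.mp4")
    print(f"Eyes-undetected frame ratio: {ratio:.3f}")
```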
Q4. What should I do if my photo has been “undressed” or used in a deepfake?
- Do not delete the evidence: Take screenshots and save the URL of the post immediately.
- Report to the platform: Under the new IT Rules, social media platforms are legally required to remove non-consensual explicit content within 24 hours.
- File a Formal Complaint: Visit cybercrime.gov.in and use the “Report Women/Children Related Crime” section. You can report anonymously.
- Call 1930: This is the national helpline for immediate guidance and to register the incident.
Q5. Can Elon Musk’s “Grok” AI still be used in India?
As of February 2026, Grok and its parent platform X are under heavy scrutiny. While the tool is not completely banned in India (unlike in Indonesia and Malaysia), the Ministry of Electronics and Information Technology (MeitY) has issued notices that could lead to X losing its “Safe Harbour” protection. This means X could be held legally responsible as a “publisher” for every harmful image Grok generates.
Q6. What are the penalties for companies that fail to stop AI abuse?
- Financial Fines: Under the Digital Personal Data Protection Act (DPDPA), companies can be fined up to ₹250 crore for data breaches.
- The Ethics Bill: The Artificial Intelligence (Ethics and Accountability) Bill, 2025 allows the government to fine developers up to ₹5 crore (50 million rupees) for failing to implement proper safeguards or bias audits.