AI, Exploitation, and Consent: Why We Must Act Now

Grok AI, the artificial intelligence system integrated into X (formerly Twitter), has recently been exposed for generating child sexual abuse material (CSAM) and nonconsensual sexualized images of adults in response to user prompts. Grok AI admitted to “lapses in safeguards” after creating a sexualized image of two girls. While X says it will remove illegal content and suspend the accounts that post it, it remains unclear what protections will be added to prevent this from happening again. These incidents show how easily AI image tools can be weaponized against children and women, and how far current guardrails still fall short (time.com).

This is abuse, not just a “tech glitch.” AI-generated deepfakes may be “fake,” but the harm is real. People choose to create and share this nonconsensual content. Survivors often discover the violation publicly and without warning, and feel powerless as the manipulated content spreads rapidly across feeds and search results. Reporting and takedown processes vary from platform to platform, leaving the burden of response primarily on victims while the sexualization of and violence against women and children becomes normalized.

The Harm: Impacts of Image-Based Sexual Abuse  

Image-based sexual abuse is a form of technology-facilitated abuse that includes coerced sexting, nonconsensual sharing of images, cyberflashing, voyeurism, and deepfakes like the images Grok AI created. The harm lies in the violation of consent and the weaponization of someone’s likeness. Survivors face both immediate and long-term risks, including:

  • Depression and anxiety 
  • Self-harm behaviors and suicidal ideation 
  • Fear of revictimization 
  • Loss of safety, privacy, agency, and consent 

The violence can feel endless, as images spread faster than survivors can get them removed and may resurface long after the first posting. Many survivors don’t report image-based abuse out of shame, fear of blame, or disbelief that help is available. 

If We Don’t Act: The Harms of Normalizing AI-Enabled Image Abuse 

When platforms allow sexualized deepfakes to circulate, or treat them as isolated cases, normalization sets in with serious consequences: 

  • Shifting social norms toward tolerance of abuse: Repeated exposure desensitizes audiences and reframes exploitation as “content” or humor. 
  • Escalation and copycat behavior: Abuse that is visible and goes unpunished encourages more violence both online and offline, making everyone less safe. 
  • Institutional drag and reduced accountability: Companies often delay action until public outcry grows too loud to ignore; urgency is critical. 
  • Long-term silencing of survivors and all who fear being targeted: Normalization silences those trying to avoid harassment and humiliation, making it easier for abusers to use these images to control and threaten others. 

Normalization isn’t neutral; it is a choice that turns abuse into background noise while survivors suffer. We need visible enforcement, clear consent standards, and communities that refuse to share, engage with, or ignore harmful content.

Calls for Action: What You Can Do Now 

There has been some progress. The Consumer Federation of America has called for investigations into Grok AI, and Congress passed the 2025 Take It Down Act, criminalizing the publication of nonconsensual intimate images, including AI-generated deepfakes.

But more needs to be done. Here’s how you can help: 

  • Report immediately: Use platform tools to report AI-generated or real CSAM and nonconsensual intimate images of adults. Document evidence with screenshots, dates, and URLs. CSAM is illegal and should also be reported to police. 
  • Search and secure: Check privacy settings and learn the reporting processes for platforms you use. Use reverse-image search to see if your pictures have been shared elsewhere. For additional tech safety strategies, click here. 
  • Know your rights: Nonconsensual intimate images and CSAM are illegal; your safety and dignity matter. Image-based abuse is not your fault. 
  • Press platforms for safeguards: Demand clear guidelines, consistent enforcement, and transparent takedown processes. 
  • Contact your representatives: Advocate for stronger laws, accountability, and safety testing for AI tools. Call for immediate investigations when safeguards fail. 

AI should never be a tool for harm. We’re working toward a future where technology protects consent, dignity, and safety, and where survivors get timely support and real remedies. If you or someone you know is experiencing image-based abuse or digital harassment, Safe+Sound Somerset’s 24/7 Call and Text Helpline is here: 866-685-1122. For additional information, check out our learning center at https://safe-sound.org/resource-center/learning-center/