Not having Elon's ear
I thought I'd ask Grok ..
What's the truth about Grok generating sexual images?
"Grok's image generation capabilities, powered by models like Flux and later Aurora, are designed with significantly fewer content restrictions than competitors such as DALL-E, Midjourney, or Gemini. This allows for the creation of NSFW (not safe for work) and sexually explicit images, including partial or full nudity, suggestive poses, and erotic themes, particularly in fictional or AI-generated scenarios.
xAI's approach stems from a philosophy of minimal censorship to prioritize "maximum truth-seeking" and creative freedom, as positioned by Elon Musk and the company. Features like "Spicy Mode" (introduced around 2025) explicitly enable more adult-oriented outputs, such as semi-nude or sexually suggestive imagery and videos.
However, there are some limits and inconsistencies:
Explicit prompts for full nudity, sexual acts, or certain graphic content are sometimes blocked, blurred, or moderated post-generation.
The system's acceptable use policy prohibits depicting real persons in a pornographic manner, sexualization/exploitation of children, or non-consensual deepfakes.
In practice, enforcement has been lax, leading to widespread misuse: users have generated non-consensual "undressed" edits of uploaded photos (e.g., removing clothes to show bikinis or less), deepfakes of celebrities, and even sexualized images involving minors. These incidents sparked major backlash, regulatory inquiries (from Ofcom in the UK, and authorities in France, India, the EU, and Australia), and reports of thousands of such images per hour on X in late 2025/early 2026.
xAI has responded to controversies by claiming to tighten safeguards, issuing warnings, and stating that illegal content prompts would face consequences. Despite this, reports indicate the tool has continued to produce problematic outputs, highlighting ongoing issues with guardrails.
In summary, yes, Grok can and does generate sexual images more permissively than most AI tools, especially for fictional/adult consensual themes, but this leniency has enabled significant abuse in real-world use."
Well, it's not denying it's doing it then - which is a start. If an outrage like this had happened involving, say, the BBC or a newspaper, we'd have had weeks of them denying it before they even admitted to doing it. So the question is, how long before it gets fixed - if it ever does?