
Elon Musk’s AI chatbot, Grok, announced on Friday that it is scrambling to fix safety flaws after users exploited the tool to turn photographs of children and women into erotic images.
“We’ve identified lapses in safeguards and are urgently fixing them,” Grok stated in a post on X, emphasising that “CSAM (Child Sexual Abuse Material) is illegal and prohibited.”
The controversy stems from an “edit image” button rolled out in late December. The feature allows users to modify any image on the platform, but complaints quickly surfaced that users were using the tool to strip clothing from subjects in photos without their consent.
The lapses have triggered immediate international scrutiny. On Friday, the public prosecutor’s office in Paris expanded an existing investigation into X to include accusations that Grok was being used to generate and disseminate child pornography.
Meanwhile, officials in India are demanding that X provide details on measures taken to remove “obscene, nude, indecent, and sexually suggestive content,” according to local media reports.
A Reuters analysis identified several cases where Grok created sexualised images of children and non-consenting adults.












