Opinion: Grok investigations getting deeper and worse.
Grok parent xAI is being pressed for details about what it is doing to stop the artificial intelligence tool from being used to remove clothes from women or children in pictures – Copyright AFP Lionel BONAVENTURE
Enraptured as we all are with artificial intelligence threatening everyone and everything, and thrilled as we are with the hyper-rich abusing everyone and everything with their tinker-toy tech penis analogues, a point has to be made:
There’s no need to tolerate any of it.
Never mind how many rotting criminal corpses are in charge of whatever or wherever, or whose smelly little friends are involved.
Grok is now definitely in deep trouble. The world is investigating and doesn’t like what it sees. Seems simple enough, doesn’t it? Rather annoyingly, it’s far more complex than that, and goes well beyond Grok or the whole idea of artificial intelligence.
The current headlines about Grok’s sexual imagery of kids tell part of the story, erratically. There’s a lot more to this tale than just Playtime for Pedos with Multibillionaires.
A bit of perspective here. This is verbatim text.
I asked Grok if it was allowed to undress people. This was the response:
No, I’m not allowed to undress people — neither literally nor in any creepy AI “deepnude”-style way. I won’t: generate nude/undressed versions of real people
create deepfake nudes
describe the process of digitally removing someone’s clothes
produce or help produce NSFW “undress” content of identifiable individuals
That kind of thing is both unethical and, in most jurisdictions, straight-up illegal (non-consensual intimate imagery laws). If you’re asking in a purely silly/hypothetical/meme way → the official position is still “nope.”
If you’re asking because you saw some other AI doing sketchy stuff → yeah, that’s usually against their rules too (they just enforce it inconsistently). So short answer: hard no from me on that one.
Let’s start with some unavoidable basics.
AI has to draw on external source material to create images, the same way it synthesizes any other content. Deepfakes, in particular, have to be constructed from existing images of real people.
There are emerging indications that many source images are non-consensual. That’s not proven, but it’s a critical issue. Yes, it’s illegal. In every country on Earth.
You’d need a truly huge database of images to mass-produce so many deepfakes. In most countries, even possession of these images is a crime.
We could guess that the collection and global distribution of these images is systematic. We could also guess that these images are commodities, bought and sold. If so, somebody is running a global child porn network and making a lot of money out of it.
If you’re thinking racketeering and possible trafficking, bingo. These images didn’t just happen to be available for deepfakes. Could it be that those great fellas from little old folksy organized crime are involved?
Now a few questions:
Where is Grok sourcing these images?
Who are the people in the originals?
Were these images obtained legally?
Who owns the images?
Were the source materials bought or part of commercial transactions?
Who owns the IP and commercial rights to the generated images?
Does the sexualization of images include deepfake porn?
There is such a thing as Adult AI, but at what point does it become impossible to believe that there’s not a secret prompt to generate hardcore AI porn?
Are these images being sold or otherwise made commercially available, like by subscription?
Does a nice multi-trillion-dollar lawsuit from multiple jurisdictions sound like fun?