An unemployed 31-year-old man has been arrested by Sapporo police for using a free generative AI tool to make obscene images of celebrities.
Tatsuro Chiba allegedly trained the AI on images of actual women and then produced the pornographic content. He targeted some 300 celebrities, mainly female idols.
Police seized a stash of 500,000 images from his computers, an undisclosed number of which were obscene.
While we probably all have stuff on our computers meant for our private consumption, Chiba was actively pursuing ways to monetize his exploitation of the technology. In two years, he allegedly made over ¥11 million by selling the images online.
Indeed, we suspect it was his success that got him noticed: he presumably attracted the attention of police because he was making so many images — and so much money.
We can’t help feeling curious about which celebrities and idols Chiba was targeting with his deepfakes, though perhaps these reflected less his personal tastes than his efforts to cast a wide net and meet market demand to make the most money.
While the Japanese government and industry believe the hype about AI and have accordingly thrown millions at it, the police remain suspicious of the technology.
In recent weeks, netizens have used Grok to generate obscene images of Princess Kako.
Socially too, there are concerns that AI is enabling a generation of misfits to forgo human relationships and settle for “romance” with AI personalities (see the woman who “married” an AI character, reported gleefully by the mainstream media, and the app that offers AI “women” for married men).
More seriously, though, the police are actively pursuing investigations into people who use AI to create porn and deepfakes, whether of celebrities or other real people.
In April last year, we saw the first arrests made for using AI to generate and sell obscene material. In September, a college kid was nabbed for using an AI image generator to make and sell posters.