The deletion of thousands of links to suspected child sexual abuse imagery from a widely used artificial intelligence training dataset marks a significant step forward in efforts to prevent the spread of such content online.
Background on the Issue
A report last year by the Stanford Internet Observatory identified links to sexually explicit images of children within the LAION research dataset, which has been used to train popular AI image generators such as Stable Diffusion and Midjourney. The finding raised concerns about how easily some AI tools could produce photorealistic deepfakes depicting children.
Actions Taken by Researchers
Following the report's findings, LAION, the nonprofit Large-scale Artificial Intelligence Open Network behind the dataset, immediately removed it from public access. Eight months later, LAION announced that it had worked with anti-abuse organizations in Canada and the United Kingdom to create a cleaned-up version of the dataset for future AI research.
Concerns Remain
Despite these efforts, concerns remain about the potential for previously trained models to continue producing child abuse imagery. A Stanford researcher commended LAION's improvements but emphasized the need to withdraw from distribution the "tainted models" that are still able to produce such content.
Tech Companies Respond
In response to these concerns, tech companies have begun to take action. Runway ML, a New York-based company, recently removed an older and lightly filtered version of Stable Diffusion from the AI model repository Hugging Face, citing a "planned deprecation" of research models that have not been actively maintained.
Global Efforts
The cleaned-up LAION dataset comes as governments around the world are taking a closer look at how some tech tools are being used to make or distribute illegal images of children. Recent actions include a lawsuit filed by San Francisco's city attorney seeking to shut down websites that enable the creation of AI-generated nude images of women and girls, and charges brought by French authorities against Telegram's founder and CEO over the alleged distribution of child sexual abuse images on the platform.
Personal Responsibility in Tech
The arrest of Telegram's CEO "signals a really big change in the whole tech industry," said David Evan Harris, a researcher at the University of California, Berkeley. This shift in focus towards personal responsibility among tech founders could have significant implications for the industry as a whole.