"Nightshade: Empowering Artists to Defend Against AI Image Generators"

1 min read
Source: Ars Technica
"Nightshade: Empowering Artists to Defend Against AI Image Generators"
TL;DR Summary

Researchers at the University of Chicago have developed a data poisoning technique called "Nightshade" to disrupt the training of AI models built on art scraped without consent. The open-source tool alters images in ways invisible to the human eye, so that generative models trained on them learn corrupted associations and misidentify the objects the images depict. The goal is to protect visual artists and publishers from having their work used without permission to train generative AI image synthesis models. The researchers hope that Nightshade will push AI training companies to license image datasets, respect crawler restrictions, and honor opt-out requests.
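The summary does not describe Nightshade's internals, but the general idea behind this style of data poisoning is to add a perturbation that is too small to notice visually while steering a model's learned associations toward the wrong concept. The sketch below is a loose conceptual illustration of that idea, not Nightshade's actual algorithm: the `poison` function, the toy linear `features` encoder, and parameters such as `eps` are all illustrative assumptions (a real attack would target a text-to-image model's own encoder).

```python
# Conceptual sketch of perturbation-based data poisoning (NOT Nightshade's
# actual algorithm). It nudges an image's features toward a different
# "target" concept while keeping the pixel changes small.
import numpy as np

rng = np.random.default_rng(0)

H, W, C = 64, 64, 3          # small example image
D = H * W * C                # flattened pixel dimension
K = 128                      # feature dimension of the toy encoder

# Toy stand-in for an image encoder: a fixed random linear map.
W_feat = rng.normal(scale=1.0 / np.sqrt(D), size=(D, K))

def features(img):
    """Flatten the image and project it into the toy feature space."""
    return img.reshape(-1) @ W_feat

def poison(img, target_img, eps=8 / 255, steps=50, lr=0.5):
    """Move img's features toward target_img's features while keeping the
    perturbation inside an L-infinity budget `eps`, so the change stays
    hard to see."""
    target_feat = features(target_img)
    delta = np.zeros_like(img)
    for _ in range(steps):
        # Gradient of 0.5 * ||features(img + delta) - target_feat||^2
        # with respect to delta, for the linear toy encoder.
        diff = features(img + delta) - target_feat
        grad = (W_feat @ diff).reshape(img.shape)
        delta -= lr * grad
        delta = np.clip(delta, -eps, eps)             # keep change imperceptible
        delta = np.clip(img + delta, 0.0, 1.0) - img  # stay a valid image
    return img + delta

# Example: an artwork is poisoned toward the features of a different concept.
art = rng.random((H, W, C))
other_concept = rng.random((H, W, C))
poisoned = poison(art, other_concept)
print("max pixel change:", np.abs(poisoned - art).max())
```

The design choice worth noting is the `eps` clipping step: the perturbation is optimized freely but always projected back into a small per-pixel budget, which is what keeps the poisoned image visually indistinguishable from the original while still shifting what a model trained on it would learn.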


Want the full story? Read the original article on Ars Technica.