
"Nightshade: Empowering Artists to Defend Against AI Image Generators"
Researchers at the University of Chicago have developed a data poisoning technique called "Nightshade" that targets AI image generators trained on artwork scraped without consent. The open-source tool alters images in ways invisible to the human eye, so that models trained on the poisoned data learn to misidentify the objects the images depict. The goal is to protect visual artists and publishers from having their work used without permission to train generative AI image synthesis models. The researchers hope that Nightshade will push AI training companies to license image datasets, respect crawler restrictions, and honor opt-out requests.
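To make the general idea of perturbation-based poisoning concrete, here is a minimal sketch, not the actual Nightshade algorithm, of how a small, visually imperceptible change can push an image toward an unrelated concept in a model's feature space. The encoder, images, step count, and the perturbation budget `eps` are all hypothetical stand-ins chosen for illustration only.

```python
# Illustrative perturbation-based poisoning sketch (NOT Nightshade itself).
# A bounded perturbation nudges a source image so a stand-in encoder maps it
# close to an unrelated "target concept" image, while an L-infinity budget
# keeps the change nearly invisible to a human viewer.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for an image generator's feature encoder.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
encoder.eval()

source_img = torch.rand(1, 3, 64, 64)   # e.g. the artist's original image
target_img = torch.rand(1, 3, 64, 64)   # e.g. an image of an unrelated concept
eps = 8 / 255                           # imperceptibility budget per pixel

with torch.no_grad():
    target_feat = encoder(target_img)

delta = torch.zeros_like(source_img, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)

for step in range(200):
    poisoned = (source_img + delta).clamp(0, 1)
    # Pull the poisoned image's features toward the unrelated target concept.
    loss = nn.functional.mse_loss(encoder(poisoned), target_feat)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Project the perturbation back into the imperceptibility budget.
    with torch.no_grad():
        delta.clamp_(-eps, eps)

poisoned_img = (source_img + delta).detach().clamp(0, 1)
print(f"final feature-matching loss: {loss.item():.4f}")
print(f"max pixel change: {(poisoned_img - source_img).abs().max().item():.4f}")
```

A model trained on many such images could associate the wrong features with the original subject, which is the general failure mode the researchers aim to induce; the real tool's optimization objective and imperceptibility constraints differ from this toy version.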

