Tag

Data Poisoning

All articles tagged with #data poisoning

The Threat of AI Poisoning: Risks and Safeguards

Originally Published 2 months ago — by The Conversation

AI poisoning is the intentional corruption of AI models, especially large language models such as ChatGPT, through malicious training data or direct model manipulation. A poisoned model can produce errors, spread misinformation, or carry hidden malicious functions, posing significant security and ethical risks.
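
The data side of this is easy to demonstrate at toy scale. The sketch below (an illustrative assumption on synthetic data, not an attack on any real model) flips a fraction of training labels in a small classification task and shows how test accuracy degrades as the poisoning rate grows.

```python
# Toy illustration of data poisoning via label flipping (synthetic data only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def accuracy_with_poison(flip_fraction: float) -> float:
    """Train on a copy of the data with `flip_fraction` of labels flipped."""
    rng = np.random.default_rng(0)
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(flip_fraction * len(y_tr)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip the selected labels
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)

for frac in (0.0, 0.1, 0.3):
    print(f"{frac:.0%} poisoned labels -> test accuracy {accuracy_with_poison(frac):.3f}")
```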

AI Vulnerabilities Exposed: Hugging Face API Tokens Compromise Meta's Llama 2

Originally Published 2 years ago — by The Register

Researchers at Lasso Security discovered more than 1,500 exposed API tokens on the Hugging Face platform, including tokens belonging to Meta, Microsoft, Google, and VMware, among others. Many of the exposed tokens granted write permissions, which would have allowed attackers to modify files in the associated account repositories. Using them, the researchers gained access to 723 organizations' accounts, including those of Meta, EleutherAI, and BigScience Workshop. If exploited, the tokens could have enabled data theft, training-data poisoning, and model theft, affecting more than 1 million users. The exposed tokens have since been revoked and the vulnerabilities closed.
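
As a concrete illustration of why a leaked token matters, the sketch below uses the official huggingface_hub client to triage a candidate token: check whether it is still live and which account it resolves to. The token value and error handling are placeholders and assumptions, not Lasso Security's actual methodology.

```python
# Minimal token-triage sketch (illustrative assumption, not Lasso Security's method).
from huggingface_hub import HfApi

def triage_token(token: str) -> None:
    api = HfApi(token=token)
    try:
        identity = api.whoami()  # resolves the account the token authenticates as
    except Exception as exc:
        print(f"token appears revoked or invalid: {exc}")
        return
    print(f"live token for account: {identity.get('name')}")
    # A live write-scoped token is the dangerous case: it permits calls such as
    # api.upload_file(...) against that account's repositories, which is how
    # training data or model weights could be silently altered.

triage_token("hf_xxxxxxxxxxxxxxxxxxxx")  # placeholder value, not a real credential
```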

"Unleashing Chaos: How to 'Poison' Images to Disrupt AI Generators"

Originally Published 2 years ago — by Digital Camera World

Nightshade, a new tool developed by a team at the University of Chicago, lets creators add invisible alterations to their artwork that disrupt AI models used for image generation. By "poisoning" the training data, Nightshade causes models trained on it to produce erratic outputs, such as dogs rendered as cats and mice appearing as men. The tool aims to protect artists' rights and address the issue of AI models being trained on images used without permission. Nightshade is open source, allowing users to customize and strengthen it. However, there are concerns about potential malicious use, as well as a recognized need for defenses against data-poisoning techniques.
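
On the defensive side, one common (and only partial) countermeasure against poisoned training samples is feature-space outlier filtering. The sketch below is a toy illustration of that general idea on synthetic embeddings, not a Nightshade-specific or production-grade defense.

```python
# Toy feature-space outlier filter (synthetic data; a partial defense at best).
import numpy as np

def flag_outliers(features: np.ndarray, z_threshold: float = 3.0) -> np.ndarray:
    """Flag samples whose distance to the class centroid is unusually large.
    `features` is an (n_samples, n_dims) array of embeddings for ONE class."""
    centroid = features.mean(axis=0)
    dists = np.linalg.norm(features - centroid, axis=1)
    z = (dists - dists.mean()) / (dists.std() + 1e-8)
    return z > z_threshold  # True = suspected poisoned or mislabeled sample

# Synthetic example: 500 clean embeddings plus 10 shifted (poisoned-looking) ones.
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(500, 64))
poisoned = rng.normal(6.0, 1.0, size=(10, 64))
flags = flag_outliers(np.vstack([clean, poisoned]))
print(f"flagged {flags.sum()} of {len(flags)} samples")
```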

"Nightshade: Empowering Artists to Defend Against AI Image Generators"

Originally Published 2 years ago — by Ars Technica

Researchers at the University of Chicago have developed a data-poisoning technique called "Nightshade" to disrupt the training of AI models on art scraped without consent. The open-source tool alters images in ways invisible to the human eye, corrupting the training process so that models misidentify objects within the images. The goal is to protect visual artists and publishers from having their work used without permission to train generative AI image-synthesis models. The researchers hope that Nightshade will encourage AI training companies to license image datasets, respect crawler restrictions, and conform to opt-out requests.
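
The underlying idea of an imperceptible, feature-shifting perturbation can be sketched in a few lines. The code below is a generic conceptual illustration under assumed choices (a torchvision ResNet-50 as a surrogate feature extractor, an 8/255 per-pixel budget), not the actual Nightshade algorithm, which specifically targets text-to-image training.

```python
# Conceptual sketch of a feature-shifting image perturbation (not Nightshade itself).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
encoder = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval().to(device)
encoder.fc = torch.nn.Identity()  # use penultimate-layer features as the "concept" space
for p in encoder.parameters():
    p.requires_grad_(False)

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

def poison(source_path: str, target_path: str, eps: float = 8 / 255, steps: int = 200):
    """Return the source image perturbed within +/- eps per pixel so that its
    features move toward those of the target image under the surrogate encoder."""
    src = preprocess(Image.open(source_path).convert("RGB")).unsqueeze(0).to(device)
    tgt = preprocess(Image.open(target_path).convert("RGB")).unsqueeze(0).to(device)
    with torch.no_grad():
        tgt_feat = encoder(normalize(tgt))

    delta = torch.zeros_like(src, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        feat = encoder(normalize((src + delta).clamp(0, 1)))
        loss = torch.nn.functional.mse_loss(feat, tgt_feat)
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the alteration visually imperceptible
    return (src + delta).clamp(0, 1).detach()  # poisoned image tensor in [0, 1]
```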

"Artists Empowered: Nightshade AI Tool Counters AI Image Scrapers and Protects Art"

Originally Published 2 years ago — by The Verge

Artists now have a tool called Nightshade that can corrupt the training data used by AI models such as DALL-E, Stable Diffusion, and Midjourney when it is applied to their creative work before publication. Nightshade adds invisible changes to the pixels of digital art, exploiting a security vulnerability in how generative models ingest training data. The tool aims to disrupt AI companies that use copyrighted data without permission. Nightshade can be integrated into Glaze, a tool that masks art styles, letting artists choose whether to corrupt a model's training or simply prevent it from mimicking their style. The tool is proposed as a last line of defense against web scrapers that ignore opt-out rules. Copyright issues surrounding AI-generated content and training data remain unresolved, with lawsuits still ongoing.

"Nightshade: Empowering Artists to Safeguard Art from AI Exploitation"

Originally Published 2 years ago — by PetaPixel

Nightshade, a new tool developed by researchers at the University of Chicago, aims to help creatives protect their work from AI image generators by making imperceptible changes to an image's pixels, effectively poisoning the AI's training data. The technique, whose accompanying research is currently under peer review, alters how machine-learning tools interpret data scraped from online sources, causing AI models to reproduce something entirely different from the original image. By using Nightshade together with another tool called Glaze, artists can protect their images while still sharing them online. The hope is that widespread use of these tools will push larger companies to properly compensate and credit original artists.

Nightshade: Empowering Artists to Combat AI Art Scraping with Data Poisoning

Originally Published 2 years ago — by Cointelegraph

Researchers at the University of Chicago have developed a tool called "Nightshade" that allows artists to "poison" their digital art to deter AI systems from training on their work without permission. By modifying an image's pixels, Nightshade can trick AI systems into misinterpreting its content, potentially degrading their ability to generate accurate outputs. The tool will be integrated into Glaze, existing artist-protection software that lets artists obfuscate the style of their artwork. Experts suggest that even robust AI models could be vulnerable to such attacks.