Nightshade, a free tool designed to ‘poison’ AI models, is now available for artists to use.


Months after its initial announcement, Nightshade, a new, free software tool, is now available to artists. Its purpose is to “poison” AI models that attempt to train on their artworks. The tool was developed by a team of computer scientists on the Glaze Project at the University of Chicago, led by Professor Ben Zhao. Nightshade works by turning AI against itself: it uses PyTorch, a popular open-source machine learning framework, to identify the contents of an image and then applies a tag that subtly alters the image at the pixel level, so that other AI programs misinterpret what the image depicts.
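The team has not published Nightshade’s exact optimization, so the sketch below is only a rough illustration of the general idea it builds on: a small, gradient-guided, pixel-level perturbation computed with PyTorch that nudges a classifier toward a different concept. The stand-in model, loss, and hyperparameters here are assumptions for illustration, not the tool’s actual implementation.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms

# Pretrained classifier used only to supply gradients; the real tool's model
# and objective are not public, so everything here is an assumption.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Standard ImageNet normalization for the stand-in classifier.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

def shade(image, target_class, steps=50, eps=4 / 255, lr=0.01):
    """Add a small, bounded perturbation to `image` (a 3x224x224 float tensor
    in [0, 1]) that nudges the classifier toward `target_class`.
    All hyperparameters are illustrative, not Nightshade's."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        perturbed = (image + delta).clamp(0, 1)
        logits = model(normalize(perturbed).unsqueeze(0))
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the change visually subtle
    return (image + delta).detach().clamp(0, 1)
```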

This is the second tool from the team, following the release of Glaze nearly a year earlier. Glaze was designed to modify digital artwork so that AI training algorithms misread its style, for example by perceiving different colors and brush strokes than are actually present. While Glaze is a defensive tool, Nightshade is intended as an offensive one. If an AI model is trained on many images altered with Nightshade, it is likely to misclassify objects in future tasks, and this affects all users of that model, even on images that were never altered.
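To make the training-time effect concrete, here is a toy illustration of data poisoning in general, not Nightshade’s actual mechanism: a simple classifier trained on a mix of clean and poisoned samples starts associating one concept’s features with another concept’s label.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two synthetic "concepts": features near +1 stand in for "cow",
# features near -1 stand in for "purse".
cow = torch.randn(200, 2) + 1.0
purse = torch.randn(200, 2) - 1.0
# Poisoned samples: purse-like features carrying the "cow" label (0).
poisoned = torch.randn(400, 2) - 1.0

x = torch.cat([cow, purse, poisoned])
y = torch.cat([torch.zeros(200), torch.ones(200), torch.zeros(400)]).long()

clf = nn.Linear(2, 2)
opt = torch.optim.SGD(clf.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    opt.zero_grad()
    loss_fn(clf(x), y).backward()
    opt.step()

# A clearly purse-like input is now pulled toward the "cow" label,
# because the poison taught the model that purse features mean "cow".
print(clf(torch.tensor([[-1.0, -1.0]])).argmax(dim=1))  # likely tensor([0])
```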

For instance, a shaded image of a cow in a field might appear unchanged to human eyes, but an AI model might interpret it as a leather purse in the grass. Consequently, an AI model trained on cow images altered to resemble purses would start generating purses instead of cows when prompted to create cow images.

To use Nightshade, artists need a Mac with an Apple Silicon chip (M1, M2, or M3) or a PC running Windows 10 or 11; the software is available for both operating systems. The Windows version can also run on the PC’s GPU, provided it is an Nvidia model from the project’s list of supported hardware. Some users have reported long download times due to high demand; the files are 255 MB for Mac and 2.6 GB for PC.
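As a quick way to see which accelerator a machine offers before picking a download, a generic PyTorch check like the one below works; this is not part of Nightshade itself, just a convenience sketch.

```python
import torch

# Generic PyTorch check, not part of Nightshade itself.
if torch.cuda.is_available():
    print("Nvidia GPU available:", torch.cuda.get_device_name(0))
elif torch.backends.mps.is_available():
    print("Apple Silicon GPU (MPS) available")
else:
    print("No supported GPU found; processing will fall back to the CPU and be slower")
```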

Users must adhere to the Glaze/Nightshade team’s end-user license agreement (EULA), which includes terms such as using the tool on machines under their control, not modifying the source code, and not using the software for commercial purposes.

Nightshade v1.0 transforms images into “poison” samples. AI models that train on these samples without consent will learn unpredictable behaviors; for example, a prompt for an image of a cow flying in space might yield an image of a handbag floating in space. The transformation is subtle enough not to be noticeable to the human eye, but significant enough to mislead AI models that train on it. The effect is also resilient to typical image transformations, such as cropping, resampling, compression, or added noise.
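For anyone who wants to reproduce those kinds of transformations on a shaded image and check what survives, a minimal sketch with Pillow and NumPy might look like the following; it is not the project’s own evaluation code.

```python
import io
import numpy as np
from PIL import Image

def typical_transforms(path):
    """Apply the kinds of operations the team says shading survives:
    cropping, resampling, lossy compression, and added noise."""
    img = Image.open(path).convert("RGB")
    w, h = img.size

    # Crop away a 10% border on each side.
    cropped = img.crop((w // 10, h // 10, w - w // 10, h - h // 10))

    # Downsample to half size, then upsample back.
    resampled = img.resize((w // 2, h // 2)).resize((w, h))

    # Round-trip through lossy JPEG compression.
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=60)
    compressed = Image.open(buf).convert("RGB")

    # Add mild Gaussian pixel noise.
    arr = np.asarray(img, dtype=np.float32)
    noisy = Image.fromarray(
        np.clip(arr + np.random.normal(0, 8, arr.shape), 0, 255).astype(np.uint8))

    return {"cropped": cropped, "resampled": resampled,
            "compressed": compressed, "noisy": noisy}
```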

While some artists have eagerly adopted Nightshade, others have criticized it, equating it to a cyberattack on AI models and companies. The Glaze/Nightshade team counters that the tool aims to increase the cost of training on unlicensed data, encouraging AI model developers to license images from creators.

This development is part of a broader debate over data scraping, the practice of using bots to collect data from the web. This technique, commonly used by search engines like Google and Bing, has come under scrutiny from artists and creatives who object to their work being used to train commercial AI models without their permission. These models, they argue, threaten their livelihood by competing with them.

AI model developers defend data scraping as necessary for training their models and cite ‘fair use’ to justify it. However, artists and the Glaze/Nightshade team point out that ‘opt-out’ lists are ineffective and unenforceable, since model trainers can simply ignore them.

Nightshade’s design aims to address this imbalance of power. By imposing a cost on data scraped and used for training without authorization, it deters model trainers who disregard copyrights and do-not-scrape directives, with the goal of making licensing agreements with human artists a more viable alternative for AI model makers.

Nevertheless, Nightshade cannot undo past data scraping. Artworks scraped before being shaded have already been used to train AI models; shading them now affects a model’s efficacy only if those images are re-scraped and used again. There is also a technical possibility of abuse: shading AI-generated artwork, or artwork one did not create, raises potential ethical concerns.

In summary, Nightshade represents a significant development in the ongoing conflict between artists and AI model developers over data scraping and the use of unlicensed artworks. By potentially disrupting the training of AI models with ‘poisoned’ images, it aims to shift the balance of power, encouraging legal and ethical practices in the use of artistic content in the burgeoning field of AI. However, the tool’s impact and the broader implications for the future of AI and art remain to be seen, as both technology and regulations continue to evolve in this rapidly changing landscape.