Poison Pill

Invisible protection against unauthorized AI training

Poison Pill uses adversarial noise algorithms to give musicians, photographers, and other creators a way to protect their work. We apply multiple invisible adversarial techniques to image and audio files to disrupt unauthorized model training, so artists can take back control of how their work is used.

Multi-technique, creator-first protection for images (JPEG/PNG) and audio (WAV/MP3).
Invisible to humans: preserves visual and audio fidelity while disrupting model training.
Targets the gradient signals models rely on during training, degrading a scraped dataset's usefulness without harming your originals (see the sketch below this list).
Simple pipeline: drop files in, download protected outputs, and keep your rights and your revenue (a batch sketch also follows below).
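
To make the gradient-targeting idea concrete, here is a minimal sketch in the spirit of the fast gradient sign method (FGSM). This is not Poison Pill's actual algorithm, which is not public; the surrogate ResNet-18 model, the epsilon budget, and the loss choice are all illustrative assumptions.

```python
# A minimal FGSM-style sketch, NOT Poison Pill's actual (non-public) algorithm.
# The surrogate model, epsilon budget, and loss are illustrative assumptions.
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

def perturb_image(src: str, dst: str, epsilon: float = 2 / 255) -> None:
    """Write a copy of `src` with a small, human-invisible perturbation
    aligned with a surrogate model's gradient."""
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    img = TF.to_tensor(Image.open(src).convert("RGB")).unsqueeze(0)
    img.requires_grad_(True)

    # Disrupt whatever the surrogate is most confident about in this image.
    logits = model(img)
    loss = torch.nn.functional.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()

    # Step along the gradient sign; the tiny epsilon keeps the change
    # invisible to the eye, and clamping keeps pixel values valid.
    poisoned = (img + epsilon * img.grad.sign()).clamp(0, 1)
    TF.to_pil_image(poisoned.squeeze(0).detach()).save(dst)
```

A production tool would layer several such techniques and tune the budget per file so the change stays below human perceptual thresholds, per the fidelity claim above.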
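
And a sketch of the drop-in/download batch flow. The `protect` callable, the supported-extensions set, and the directory layout are hypothetical stand-ins, not the app's real interface; `perturb_image` above could serve as the `protect` step for images.

```python
# A hypothetical batch pipeline: every supported file in `in_dir` gets a
# protected copy in `out_dir`. All names here are illustrative, not the
# app's real API; originals are never modified.
from pathlib import Path
from typing import Callable

SUPPORTED = {".jpg", ".jpeg", ".png", ".wav", ".mp3"}

def protect_folder(in_dir: str, out_dir: str,
                   protect: Callable[[str, str], None]) -> None:
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for src in sorted(Path(in_dir).iterdir()):
        if src.suffix.lower() in SUPPORTED:
            # Protected copies keep the original filenames.
            protect(str(src), str(out / src.name))

# Example usage: protect_folder("drafts", "protected", perturb_image)
```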
Apply for the beta
The app is currently in closed beta. Apply to join our active creator-tester community.