Poisoning AI

This is a fascinating development. Nightshade and Glaze are tools built for artists to alter their images in ways that 'poison' the training of image AI models.

Nightshade, the free tool that ‘poisons’ AI models, is now available for artists to use
The tool’s creators are seeking to make it so that AI model developers must pay artists to train on data from them that is uncorrupted.

So, after the article about 'sleeper agents' and their potential to poison coding AI models, how do you monitor the health of an AI model? Training or retraining is the most expensive part in every way, so how do you 'clean' a model efficiently?

People are already starting to discover this, just for maintaining models. In traditional software development an application is released and can mostly be left alone until the next release: it works. Not so for models, which seem to need far more aftercare. This was an interesting starter article to read:

A Comprehensive Guide on How to Monitor Your Models in Production
A guide on monitoring ML models in production, tackling challenges and best practices for functional and operational observability.
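To make the "aftercare" concrete: a lot of functional monitoring comes down to comparing what the model sees in production against what it was trained on. Below is a minimal sketch of one common check, the Population Stability Index (PSI) for a single feature; the bin count and the 0.2 alert threshold are illustrative assumptions on my part, not something prescribed by the linked guide.

```python
# Minimal sketch: detect feature drift with the Population Stability Index.
# Thresholds and bin counts here are illustrative assumptions.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare the live ('actual') distribution of a feature with the
    training ('expected') distribution. Larger values mean more drift."""
    # Bin edges come from the training data so both samples share one grid.
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero / log(0) in empty bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Example: training-time feature values vs. this week's production traffic.
rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 10_000)
live_feature = rng.normal(0.4, 1.2, 10_000)  # shifted: the world moved on

score = psi(train_feature, live_feature)
if score > 0.2:  # commonly quoted rule-of-thumb threshold
    print(f"PSI={score:.2f}: significant drift, investigate before you make the news")
```

Run on a schedule against every input feature (and on the model's output scores), a check like this is the kind of thing that tells you the model is still "working, just not correctly anymore" long before users do.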

"How do we monitor effectiveness? Who deals with issues? How do we respond?" are all questions that should be asked and answered when deploying ML/AI solutions. I haven't seen them asked much amid the hype, but they matter from a support perspective. Imagine everything is working... just not correctly anymore, and you have to fix it before you make the news. Never make the news.
