Poisoning AI
This is a fascinating development. Nightshade and Glaze are tools artists can use to subtly alter their images so that they ‘poison’ the training of image-generation AI models.
So, after the article about the potential of ‘sleeper agents’ to poison code-generation AI models, how do you monitor the health of an AI model? Training or retraining is the most expensive part in every way, so how do you ‘clean’ a poisoned model efficiently?
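One plausible starting point for "monitoring health", sketched below as an assumption rather than anything the article prescribes: keep a fixed canary set of inputs with known-good outputs and re-run it after every training or retraining pass, so a poisoned behaviour shows up as a diff. `model_predict` is a placeholder for whatever inference call the deployment actually exposes, and the canary data is purely illustrative.

```python
# Canary checks: a fixed set of (input, expected substring) pairs that should
# always hold if the model's behaviour hasn't silently shifted.
CANARY_SET = [
    ("def add(a, b):", "return a + b"),
    ("open a file for reading in Python", "open("),
]

def model_predict(prompt: str) -> str:
    """Placeholder for the real inference call (assumption, not a real API)."""
    raise NotImplementedError

def canary_failures() -> list[str]:
    """Return the canary prompts whose outputs no longer look right."""
    failures = []
    for prompt, expected in CANARY_SET:
        if expected not in model_predict(prompt):
            failures.append(prompt)
    return failures
```

A failing canary doesn't tell you *why* the model changed, but it's far cheaper than a full retrain and gives you a signal before users do.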
People are already starting to discover this, just in maintaining models. In traditional software development an application is released and can mostly be left alone until the next release; it works. Not so for models, which seem to need rather more aftercare. This was an interesting starter article to read.
“How do we monitor the effectiveness? Who deals with issues? How do we respond?” These are all questions that should be asked and answered when deploying ML/AI solutions, and I haven’t seen them a lot in the hype. But it’s interesting from a support perspective. Imagine everything is still working…just not correctly anymore, and you have to fix it before you make the news. Never make the news.
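On the "how do we monitor the effectiveness" question, here is a minimal sketch of one common answer, drift monitoring, under stated assumptions: the baseline numbers, window size, and threshold are all illustrative, and a real deployment would feed this from live traffic and page a human when it fires.

```python
from collections import deque
from statistics import mean

BASELINE_MEAN = 0.92      # assumed: average prediction confidence at release time
BASELINE_STDEV = 0.04     # assumed: spread of confidences at release time
WINDOW_SIZE = 500         # number of recent predictions to watch
DRIFT_Z_THRESHOLD = 3.0   # how many baseline standard deviations counts as drift

recent_confidences = deque(maxlen=WINDOW_SIZE)

def record_prediction(confidence: float) -> None:
    """Store the confidence score of each live prediction."""
    recent_confidences.append(confidence)

def drift_alert() -> bool:
    """Return True when the rolling mean has drifted far from the baseline."""
    if len(recent_confidences) < WINDOW_SIZE:
        return False  # not enough data yet to judge
    rolling_mean = mean(recent_confidences)
    z_score = abs(rolling_mean - BASELINE_MEAN) / BASELINE_STDEV
    return z_score > DRIFT_Z_THRESHOLD
```

It's crude, but it captures the point of the paragraph: the model can keep returning answers while quietly getting worse, so something has to be watching for "working, just not correctly" before it makes the news.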