'Automation Bias' is interesting

‘Automation Bias’ is an interesting concept. Can I suggest that those of us deep into technology know technology, and therefore distrust technology (been there, seen it, done it, suffered because of it, etc.). But that doesn’t apply to everyone:

“When the AI provided an incorrect result, researchers found inexperienced and moderately experienced radiologists dropped their cancer-detecting accuracy from around 80% to about 22%. Very experienced radiologists’ accuracy dropped from nearly 80% to 45%.”
Source: Why AI Can Push You to Make the Wrong Decision at Work
Automation bias is the tendency to be less vigilant when a process is automated. But can we effectively check ourselves against AI before making a wrong decision?

But how do you shift and influence that for civilians (non-tech folk)?

I think there is a more pressing problem, though:

“Shady firms say they’re already manipulating chatbots to say nice things about their clients…all it took were a few human-illegible text sequences crafted to manipulate AI training data, which AI researchers simply fed into a chatbot as you would any prompt”

Meh, it’s all going to be a mess. More money will be made from that (and the like) than from anything else, any time soon.

Source: Shady Firms Say They’re Already Manipulating Chatbots to Say Nice Things About Their Clients
The SEO industry transformed search as we know it. Now, people are figuring out how to manipulate web-searching AI chatbots.
