Do You Have Automation Blindness? – Vigilance and AI

These days, when tasked with an essay or any other piece of writing, the first thing on most people’s minds is to turn to AI for help. I see it happen around me every day. Almost every written text, from a brief essay to a short speech to even a heartfelt birthday message, has been churned out by AI. Most people take the output word for word and don’t bother “humanizing” it. But no matter how much we try to “humanize” an AI text, most of its telltale AI phrasing still slips through. To be fair, AI is very good at disguising itself; it’s good at polishing useless information until it looks like it actually makes sense.

This is where automation blindness comes in. Automation blindness is the failure to notice errors in an automated output or process. It’s what happens when you try to stay vigilant while staring at endless walls of scrolling LLM text. If the first three answers are good, the rest should be fine, right?

People become overly reliant on automated systems, and that reliance erodes situational awareness and critical thinking. If I’m being honest, it has happened to me a number of times. Errors made by ChatGPT are not easy to spot, but the first one I remember, I caught myself.

I had a seminar paper due. It was the eve of submission day, and I still hadn’t gotten down to it, so I decided to use ChatGPT to write everything. This was back when ChatGPT had just launched. The next day the seminar was canceled, and the submission was never rescheduled.

The paper stayed in my bag until some weeks later, when I was cleaning it out and it fell out of a textbook. Curious, I read it, and when I was done, I was thoroughly embarrassed. It was thin, gave vague definitions for terms, and repeated itself. I was glad we never got to hold the seminar and submit our papers. But what if we had? I would have turned in that poorly written paper and probably flunked the seminar because I trusted AI to do the job for me.

The “human in the loop” is a figleaf. The whole point of automation is to create a system that operates at superhuman scale…

– Cory Doctorow, Pluralistic

Artificial intelligence is definitely still in its early stages, and we can’t just feed it data and trust the output. We are human, and according to a recent study, the mind tends to overlook errors precisely when it is hunting for them: when we are trained to look for a certain kind of error, we gloss over the others. Automation blindness is especially concerning today, with artificial intelligence intertwined in almost every part of our lives. It needs to be taken very seriously, above all in the self-driving car industry and in healthcare, where the problems it causes could be fatal.

Right now, the best available responses to automation blindness aren’t very satisfying, but implementing systems that require active engagement from users, such as periodic manual checks or interventions, could go a long way. And when you look at this whole automation blindness phenomenon, you can’t help but feel there’s something backwards about it. Artificial intelligence is supposed to be the one to fact-check and point out errors in our work, not the other way around.
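
To make that idea concrete, here is a minimal sketch in Python of what a periodic-manual-check system might look like. The names (review_manually, process_outputs) and the 10% sampling rate are my own illustrative assumptions, not any standard tool: every AI output has a random chance of being routed to a human who must read it in full before it is accepted.

    import random

    SAMPLE_RATE = 0.10  # illustrative: route roughly 1 in 10 outputs to a human

    def review_manually(text):
        # Stand-in for a real review step: show the text, ask for a verdict.
        print("REVIEW REQUIRED:")
        print(text)
        return input("Accept this output? [y/N] ").strip().lower() == "y"

    def process_outputs(outputs, sample_rate=SAMPLE_RATE):
        # Randomly flag outputs so reviewers can't settle into a rhythm
        # of assuming "the first three were fine, so the rest are too."
        accepted = []
        for text in outputs:
            if random.random() < sample_rate and not review_manually(text):
                continue  # a human read it and rejected it
            accepted.append(text)
        return accepted

The random sampling is the point: if reviews arrive on a fixed, predictable schedule, reviewers learn the rhythm and stop paying attention, which is exactly the failure described above.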