AI as a Mental Mobility Scooter

(This is something of a continuation of my AI, Technology, and the Death of Critical Thinking blog post, written largely because of the overwhelming sense of dread I feel when reading certain comments made by a particular kind of AI user.)

I think the greatest threat from AI comes from how much we rely on LLMs (Large Language Models). I've encountered people who, in my opinion, are already overly reliant on tools like ChatGPT or Grok to answer simple questions that would previously have required just a tiny bit of critical thinking. We are (perhaps unfortunately) a species almost solely focused on optimizing away effort; the only reason tools exist at all is that humans are basically pathologically lazy. But unlike the printing press or the car, LLMs invite their users to set their brains aside and let the machine do all the work. These tools are being used to replace thinking (even though they aren't really equipped to do so), making the people who lean on them mentally lazy and, ultimately, more stupid.

Still, it's an enticing technology, because it offers something that looks like thinking even when it isn't. And even if it were, it wouldn't be a replacement for how we think. LLMs can't truly judge the reliability of the information they're trained on, because they have no way to compare that information against reality. Humans can. If obviously wrong information makes up a significant portion of ChatGPT's training data, it will likely output something based on that false information, because it does not process information the way a person does.

By comparison, if I put a reasonably intelligent person with some basic critical thinking skills in a room with 20 flat-earthers and let them make their case, they would very likely fail to convince that person the Earth is flat.

To oversimplify, LLMs are trained on the frequency of text patterns and produce probabilistic outputs, which makes them especially susceptible to this kind of majority-rules problem.
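
To make that concrete, here's a deliberately tiny sketch of the idea as a bigram word model in Python. The corpus and the 80/20 split are invented purely for illustration; a real LLM is vastly more sophisticated, but the underlying dynamic of training-data frequency driving the output is the same.

```python
from collections import Counter, defaultdict

# Toy "training data": the majority of the corpus repeats a false claim.
# The sentences and the 8-to-2 ratio are made up for illustration.
corpus = (
    ["the earth is flat"] * 8 +
    ["the earth is round"] * 2
)

# Count bigram frequencies: how often each word follows another.
bigram_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev_word, next_word in zip(words, words[1:]):
        bigram_counts[prev_word][next_word] += 1

# Turn raw counts into probabilities for the word that follows "is".
counts = bigram_counts["is"]
total = sum(counts.values())
probs = {word: count / total for word, count in counts.items()}

print(probs)  # {'flat': 0.8, 'round': 0.2}
# The model has no way to check reality; it just reflects the
# frequencies in its training data, so "flat" wins.
```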

This is why I'm concerned. When I see people using an LLM and uncritically accepting its answers, I can't help but feel they're riding the mental equivalent of a mobility scooter. Except that, unlike a mobility scooter, which will at least move you around reliably, an LLM can be confidently wrong while also making you less equipped to notice. I worry that more and more people will become dependent on LLMs and simply give up on thinking when an LLM can do it for them. But why would we want to outsource thinking to a technology this unreliable? Just because it's easy?
