AI as a Mental Mobility Scooter
(This is something of a continuation of my AI, Technology, and the Death of Critical Thinking blog post, written largely because of the overwhelming sense of dread I feel when reading certain comments made by a particular kind of AI user.)
Still, it is an enticing technology, because it offers something that looks like thinking, even if it isn't. And even if it were, it wouldn't be a replacement for how we think. LLMs can't truly judge the reliability of the information they're trained on, because they have no way to compare that information against reality. Humans can: we can go out and verify claims in the real world. If ChatGPT is trained on data in which an obviously wrong claim makes up a significant portion, it will likely output something based on that false claim, because it does not process information the way a person does.
By comparison, put a reasonably intelligent person with some basic critical thinking skills in a room with 20 flat-earthers and let them try to convince that person that the Earth is flat. They very likely wouldn't succeed, no matter how badly they outnumber their audience or how often they repeat the claim.
To oversimplify, LLMs are trained on the frequency of text patterns and produce probabilistic outputs, which makes them especially susceptible to exactly this kind of problem: whatever dominates the training data tends to dominate the output.
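To make that concrete, here is a deliberately crude sketch in Python. It is not how a real LLM works (those are vastly more complex neural networks, not bigram counters), and the tiny corpus and its proportions are invented for illustration, but it shows the failure mode in miniature: a model that samples from the statistics of its training data will repeat whatever claim dominates that data, true or not.

```python
import random
from collections import Counter, defaultdict

# Toy "training corpus": the false claim appears nine times as often as
# the true one, standing in for data dominated by misinformation.
corpus = ["the earth is flat"] * 9 + ["the earth is round"]

# Count how often each word follows each word (a bigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to its training-data frequency."""
    counts = follows[prev]
    return random.choices(list(counts), weights=counts.values())[0]

# Complete the prompt "the earth is ...": about 9 times in 10 the model
# says "flat" -- not because it has judged the claim, but because that
# is what its data contained.
print("the earth is", next_word("is"))
```

Run it a few times: most completions confidently assert the false claim, and nothing inside the model is capable of noticing that it is false.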
This is why I'm concerned. When I see people using an LLM and uncritically accepting its answers, I can't help but feel that they're using the mental equivalent of a mobility scooter. But unlike a mobility scooter, which will reliably move you around, an LLM can be confidently wrong while also making you less equipped to recognize it. I worry that more and more people will become overly reliant on LLMs and give up on thinking when they can just have an LLM do it for them. But why would we want to outsource thinking to a technology this unreliable? Just because it's easy?