"@Grok is this true?"
LLMs exploit a fundamental flaw in human nature: optimization.
I think the greatest threat from AI comes from how much we rely on LLMs (large language models). On multiple occasions, I've encountered people who, in my opinion, are already overly reliant on tools like ChatGPT or Grok to answer even simple questions. We are (perhaps unfortunately) a species almost single-mindedly focused on optimizing any task; arguably, the only reason tools exist at all is that humans are essentially pathologically lazy. But users of AI tools, unlike users of the printing press or the car, often choose to set their brains aside and let the machine do 100 percent of the work. In those cases, AI tools are being used to replace thinking (even though they aren't really equipped to do so), and they are making the people who use them irresponsibly into mentally lazy and ultimately less capable thinkers.
Still, AI has its allure because it offers something that looks like thinking, which is impressive even if it isn't thinking. But even if it were, it wouldn't be an adequate replacement for human thought. That's partly because LLMs can't genuinely judge the reliability of the information they're trained on, and they can't compare that information against reality. Humans can: we verify claims in the real world, and we're even intuitively skeptical of certain sources. If ChatGPT is trained on obviously wrong information, and that information makes up a significant share of its training data, it will likely reproduce that false information, because it doesn't process information the way a person does.
For comparison, put a reasonably intelligent person with some basic critical thinking skills in a room with 20 flat-earthers and let them try to convince that person the Earth is flat; they very likely wouldn't succeed. Run a similar test on an LLM, and I'm confident it wouldn't hold up nearly as well. That's because (in simplistic terms) LLMs are trained on the frequency of text patterns and produce probabilistic outputs, which makes them especially susceptible to this kind of frequency problem.
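To make that frequency point concrete, here is a minimal toy sketch, not how any real LLM actually works, with entirely made-up counts: if one claim dominates the training text, it dominates the output probabilities, and nothing in the process checks the claim against reality.

```python
import random

# Hypothetical counts: how often each continuation of "the earth is ..."
# appears in an imaginary training corpus flooded with flat-earth text.
continuation_counts = {
    "round": 20,
    "flat": 80,
}

def next_token_probabilities(counts):
    """Turn raw frequency counts into a probability distribution."""
    total = sum(counts.values())
    return {token: n / total for token, n in counts.items()}

def sample_continuation(counts):
    """Sample a continuation in proportion to its training frequency."""
    probs = next_token_probabilities(counts)
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    print(next_token_probabilities(continuation_counts))  # {'round': 0.2, 'flat': 0.8}
    # The sampler has no way to check which continuation matches reality;
    # it simply reproduces whatever dominates its (toy) training data.
    print(sample_continuation(continuation_counts))
```

Real models are vastly more sophisticated than this, but the underlying vulnerability is the same: the output tracks what the training data says most often, not what is true.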
"LLMs are like calculators. They augment our thinking."
I frequently see people compare LLMs to calculators, because many people were similarly apprehensive about calculators when they first gained mainstream use. But there's a stark difference between the two tools. Calculators are specific, fast, and accurate; at the moment, ChatGPT and Grok are really just fast. LLMs are known to make up (hallucinate) sources, they'll tell you to add glue to your pizza to help keep the cheese from falling off, and they can even be manipulated by their creators to skew information.
These are not problems anyone has ever needed to worry about while using a calculator. A calculator can't lie about historical events, and it can't fabricate believable sources. Calculators only augment our ability to do math.
So when I see people using an LLM and uncritically accepting its responses, I can't help but feel like they're using the mental equivalent of an unreliable, remote-controlled mobility scooter (despite presumably having a functioning brain). Why would we want to outsource thinking to a technology that's so unreliable, potentially compromised, and actively making us less capable of thinking for ourselves? Just because we're lazy?