A few months after ChatGPT burst into public view, several articles reported that the tool seemed to be getting dumber once it was released from the strictures of the lab. Now, a study of high school students who used ChatGPT either as a study aid or as a tutor suggests the technology isn't ready for prime time as a replacement for other modes of learning: students who used ChatGPT to practice math problems or to be tutored in math did worse on a subsequent test than students who learned the old-fashioned way. The problem appears to have been a combination of students learning differently when using AI and the AI tools giving wrong instructions or answers.
WHY IT MATTERS
It is not yet clear what effect AI has on the learning process in humans, although it seems reasonable to expect that a student who relies on AI to supply answers might learn less than a student who learns to solve problems on their own. This was true of calculators when I was in high school, too. But it also raises questions worth keeping in mind when evaluating AI tools for business use: can a tool reliably produce correct answers, and is it using sound reasoning to get there? Sometimes that comes down to how the tool was trained; sometimes it is a matter of how the surrounding logic was built.
For the time being, it is prudent to require human review, or at least spot audits, of AI-assisted decisions and AI output. In other words, don't take an AI's work at face value, and don't present it to customers unless you have told them that an AI was involved.
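As a purely illustrative sketch of what a spot audit could look like in practice, the short Python snippet below routes a random fraction of AI-generated answers into a queue for a human reviewer. The class name, the 10 percent sample rate, and the record fields are assumptions invented for this example; they come from neither the study nor any particular product.

import random
from dataclasses import dataclass, field

@dataclass
class AuditQueue:
    # Collects a random sample of AI outputs for later human spot checks.
    # The 10% sample rate is an illustrative assumption, not a recommendation.
    sample_rate: float = 0.10
    pending_review: list = field(default_factory=list)

    def record(self, prompt: str, ai_answer: str) -> None:
        # Flag roughly sample_rate of all AI answers for human review.
        if random.random() < self.sample_rate:
            self.pending_review.append({"prompt": prompt, "ai_answer": ai_answer})

# Example usage: every AI-assisted answer passes through the queue,
# and a reviewer periodically works through pending_review.
queue = AuditQueue(sample_rate=0.10)
queue.record("What is 12% of 250?", "30")
for item in queue.pending_review:
    print("Needs human check:", item)

The point of a sketch like this is that sampling is cheap: even reviewing a small, random slice of AI output will surface systematic errors long before a customer does.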