Well. If AI (artificial intelligence) is the "robot overlord" threat to humanity that some people would like to believe, it is unfortunately no smarter than we are when it comes to fending off data breaches. (At least not yet.)
For anyone who hasn't dabbled with it or doesn't know about ChatGPT: it's an AI chat service that lets users ask questions or request written work. Examples might include "write me a term paper on 'Pride and Prejudice'" or "explain AI to me like I'm a five-year-old." Rather than pulling already-existing results the way a search engine does, the AI behind the chat engine generates written responses to those inquiries on demand. The theory behind AI is that the engine will get smarter (and produce better results) over time, as it "learns" from successive user inquiries.
Why It Matters
As data breaches go, this one is probably no more significant than any other exposure of user data: the details exposed include account holders' contact and payment information, all of which should be manageable by the company. There is a good reminder here for any company that uses open source code in its services: make sure you know how that code works and how it is configured, so that you do not inadvertently open the door to a data breach. The headlines here, however, are chiefly about this breach having happened at a novel and high-profile service, rather than about a novel kind of breach.