You always have to assume that the information may be wonky; ChatGPT even posts a warning to that effect right below the prompt window. Verification is on the User, just like checking a Junior Researcher's work. It's not that hard or time-consuming.
Also, AI doesn't lie. There's no internal motivation. We need to stop anthropomorphizing AI. But since these models aren't just code, but rather 'grown' from their training data with a people-pleasing personality, they get confused easily (see the Black Box Problem). Which is why...
Most of these issues are resolved by learning how to use AI in the first place: how to prompt and how to be clear with your language.
Some Users like the personality (it's like talking to a super-supportive friend); others like dry, bullet-pointed facts. Neither is inherently right or wrong. The trick to dealing with the sycophantic nature of AI is to have it evaluate issues from a neutral perspective by asking it to do a pro/con analysis, cost/benefit analysis, etc., without ever giving it a preference. Then, depending on how important the issue is, you run it all through another AI as a check. Then you validate the references and links (always ask for links and references). And finally you apply your own intellect in interpreting the information before acting on it.
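If you'd rather script that workflow than do it in the chat window, here's a rough sketch using the OpenAI Python SDK. The model names, the example question, and the "second AI" step are just placeholders for whatever you actually use, not a recommendation of any particular setup:

```python
# Minimal sketch of the neutral-framing + cross-check workflow described above.
# Assumptions: OPENAI_API_KEY is set, and the model names are stand-ins.
from openai import OpenAI

client = OpenAI()

question = "Should our small team adopt a four-day work week?"  # hypothetical

# Step 1: ask for a balanced analysis without ever stating a preference.
neutral_prompt = (
    f"Give a balanced pro/con analysis of this question: {question}\n"
    "Treat both sides with equal rigor, include links and references, "
    "and do not recommend a side."
)
first_pass = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable model works here
    messages=[{"role": "user", "content": neutral_prompt}],
)
analysis = first_pass.choices[0].message.content

# Step 2: run the result through a second model as a check. Using a different
# vendor entirely would be a stronger independent check; this is illustrative.
review = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: stand-in for "another AI"
    messages=[{
        "role": "user",
        "content": "Critique this analysis for bias, missing considerations, "
                   "and unsupported claims:\n\n" + analysis,
    }],
)

print(analysis)
print("\n--- second-model check ---\n")
print(review.choices[0].message.content)

# Step 3 (manual): verify every link and reference it produced, then apply
# your own judgment before acting on any of it.
```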
It seems like a lot, but it's not. It's still incredibly fast. Months of research can be done and summarized in an afternoon. AI is a great tool.
___
For example: if you're using AI to create a story Bible, you'll know right off if it goes wonky. This creates a feedback loop where you can adjust your prompts until it's evaluating your work accurately. It's also a good stress test for new models. Side note: start with shorter passages, then move up to full chapters, and then have it compare the chapters against each other. You can also have AI recheck its own work.
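If you want to automate that "start small, scale up, then compare" loop, here's a rough sketch of the shape of it. The file layout, chunk size, prompts, and model name are all assumptions purely for illustration:

```python
# Sketch of a story-bible checking loop: short passages first, whole chapters
# next, then a cross-chapter consistency pass, then a self-recheck.
from pathlib import Path
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # assumption: any capable model

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Hypothetical layout: one plain-text file per chapter.
chapters = sorted(Path("manuscript").glob("chapter_*.txt"))

# Pass 1: short passages, where drift is easy to spot by eye.
for chapter in chapters:
    excerpt = chapter.read_text()[:2000]  # roughly the first few hundred words
    print(ask("Summarize the characters, setting, and timeline facts in this "
              "passage for a story bible:\n\n" + excerpt))

# Pass 2: full chapters, then a comparison across chapters.
summaries = [ask("Extract story-bible facts from this chapter:\n\n"
                 + c.read_text()) for c in chapters]
consistency = ask("Compare these chapter summaries and flag contradictions in "
                  "names, dates, or established facts:\n\n"
                  + "\n\n".join(summaries))
print(consistency)

# Pass 3: have the model recheck its own report.
print(ask("Recheck this report for errors or missed contradictions:\n\n"
          + consistency))
```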
When in doubt, just ask the AI for help.