Yes, copyright theoretically protects your work from the moment you write it. Proving that may be the hard part. But I suppose it's true that a publication date on Amazon or elsewhere is pretty difficult to argue with. Substack posts, of course, pose different issues, because if they are serials, it's tough to register the copyright before the serial is finished. However, you do have a Substack publication date for each episode.
I also save any earlier drafts, just in case. They can help pinpoint a date at which work reached a certain point.
On AI, you have to decide as an author whether you want to use a tool that's built on ethically questionable use of the intellectual property of others. You also need to consider the occasional odd lapses, some of which are worse than others.
For anyone writing nonfiction, keep in mind that AI will invent facts. It's produced false case citations for lawyers and false medical studies in a recent HHS report. Given that tendency, I don't think I'd trust it to make literary judgments.
Even in AI-generated search results, it still occasionally flubs. Just yesterday, it gave me a factual error on Greek mythology, and not so long ago it did the same with the KDP TOS. The latter is harder to understand. Also, when I was trying to check whether West LA College had an indoor pool, it told me yes and showed me a picture, which was actually of an outdoor pool at Valley College. (WLA has no pool, indoor or outdoor.) It also told me West LA has no Saturday classes. I checked one of the linked sources, which said exactly the opposite. If it can't even get simple facts right, why are we trusting its literary judgment?
As others have pointed out, AI uses your work for further training. Don't feed the beast that may one day devour you.
As PJ demonstrated, it can produce reasonable summary responses on some issues, but only if someone else has already written on them. And usually, when I check the response against the sources, the original human writing is just as good. Occasionally, AI may successfully combine two pieces into one, but otherwise, it seems better to just look at the human sources. At least on Google, AI seems mostly to draw on the two or three top search results. It's not as if it actually synthesizes everything on the whole internet. (That's probably just as well, given the high and growing energy costs.)
PJ makes a good point, though, about not letting someone (or something) edit out your individual voice. I've had more problems with tools like pre-AI Grammarly in that regard than I've ever had with a human editor.