Getty's fine print restricts only the right to reproduce their images because, until now, that was the only way to use them. Legal fine print is tricky, though. If it doesn't expressly prohibit something, then is it okay?
I'm not a lawyer, but I don't think that's the way copyright law works. In any case, Getty anticipated just that argument:
No other rights or warranties are granted for comp use.
That seems to close off any argument that whatever isn't expressly prohibited is allowed. The only other permitted use is in the form of an embed. I'd sure hate to be a lawyer having to argue that by allowing embeds, Getty Images opened the door to database use.
As far as I can tell, AI's training method is only valid under a very expansive interpretation of fair use at a time when the Supreme Court seems to be trying to rein in just such expansive interpretations.
Fair use is also problematic if the copies of works used in the training database were not obtained legally. As I've asked before, do we think the AI developers bought copies of all the books used in the training database? And if they did, did they have some way of incorporating them into the database without violating the DMCA? In Copyright Clarity, Hobbs addresses the point that a fair use claim is invalid if the work is obtained illegally. She is talking specifically about copyright in education, but if educators can't claim fair use of material obtained illegally, I don't know on what basis anyone else could claim it.
Also in the educational context, fair use can't be claimed if the source of the material isn't cited. Have the AI developers released a complete list of materials used? Again, if educators can't claim fair use on unacknowledged borrowing, it's difficult to see the rationale for allowing someone else to do it.
Well, sort of, but that would also limit medical advances, engineering use, making our cars safer, and maybe even curing cancer. AI is all or nothing. We can't have some infringements be okay but not others.
I'm not at all sure that AI is all or nothing. All that needs to happen to clear the legal hurdles is for the material used to train AI to be licensed and/or otherwise used in a way consistent with copyright law. For instance, if medical research needs to be digested for the AI to make the medical advances you're talking about, why not just have the rights holders for the relevant studies work out an agreement for profit sharing of any breakthroughs achieved by an AI model trained on their protected IP? Also, I'm not sure how fair use works for medical studies, but researchers in general seem happy to share their work, particularly with other researchers working on the same problems. (It's hard for science to advance without cooperation among scientists.)

Sure, lots of different companies want to have that cancer cure, but any individual company would take a lot longer to develop one. If AI can really do the job, companies would make more in the long run by letting it come up with the cure much faster, even if that means profit sharing among several companies. Similarly, car companies all benefit from auto safety advances (and presumably from fewer product liability lawsuits). Sure, one company might like to hold the patent, but companies in general would still save more money if they allowed a cooperative solution generated by AI. Better a smaller share of profits in a new advance right now than a possibly larger share decades from now.
All of that can happen without having AI invade creative fields. Lots more rights holders are involved, and if some don't want to participate, they should be able to opt out. If screenwriters want to use contract negotiations to ban AI altogether, so be it. None of that stops cancer from being cured. AI is not all or nothing.
Some companies are already embracing a licensing model that involves paying creators royalties on their work, to the extent that it is incorporated into AI products.
When I found out Shutterstock was now selling AI-generated images, I almost closed my account with them. But then I discovered this:
https://support.submit.shutterstock.com/s/article/Shutterstock-ai-and-Computer-Vision-Contributor-FAQ?language=en_US

Basically, Shutterstock is championing "responsible AI." Their AI is trained on images licensed from them, and the creators of those images get paid when customers license AI images based in part on theirs. Sure, since every AI image may have several human contributors, the payment for each creator isn't going to be as big as when someone licenses one of the creator's own images. But over time, they could receive a reasonable amount of money from that arrangement--some of it from customers who wouldn't necessarily have licensed one of their unaltered images. (The AI stuff is designed for people who want a very specific image, something like X number of people arranged in a particular way in front of a forest with a certain kind of tree in it. In that example, some creators who do a lot of people photographs and some who specialize in nature photographs would all profit. Maybe none of them would have profited at all if none of them had exactly what the customer was looking for.)
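To make that royalty arithmetic concrete, here's a minimal sketch of how a pro-rata split could work. It's purely illustrative: the FAQ doesn't publish Shutterstock's actual formula, so the 20% royalty pool, the equal split among contributors, and the split_royalties function are all my assumptions, not their real mechanics.

```python
# Hypothetical pro-rata royalty split for an AI-generated image.
# ASSUMPTIONS (not Shutterstock's published formula): a 20% royalty
# pool per license, divided equally among contributing creators.

def split_royalties(license_fee: float, contributors: list[str],
                    pool_rate: float = 0.20) -> dict[str, float]:
    """Divide a royalty pool equally among the creators whose
    licensed images contributed to the AI output."""
    pool = license_fee * pool_rate
    share = pool / len(contributors)
    return {name: round(share, 2) for name in contributors}

# One $50 license with four contributing creators: each slice is small...
print(split_royalties(50.00, ["people_photog", "nature_photog_1",
                              "nature_photog_2", "forest_specialist"]))
# {'people_photog': 2.5, 'nature_photog_1': 2.5, ...}

# ...but across many licenses the small slices add up.
total = sum(
    split_royalties(50.00, ["people_photog", "a", "b", "c"])["people_photog"]
    for _ in range(200)
)
print(f"people_photog over 200 such licenses: ${total:.2f}")  # $500.00
```

Whatever the real weighting is, the point stands: small per-image shares, accumulated across many licenses, can add up to meaningful income for contributors.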
Shutterstock also indemnifies companies that use its AI images against lawsuits. It can do that because it knows it has followed copyright law to the letter.
Think of how much better off OpenAI and other companies would be right now if they'd followed such a pattern themselves. As it is, they may end up spending millions in court costs on litigation that was totally avoidable--even if they win. If they lose or get a mixed result, their costs could be much higher.
I think a responsible AI model has a better chance of adoption than the massive kinds of changes you must be anticipating to compensate the millions of people left unemployed, among other things. It's possible a structure to deal with issues like that might develop, but given the current state of politics, I can't see it happening any time soon. I can see a licensing model developing in the short term, however.