Author Topic: SFWA's Comments  (Read 341 times)

APP

SFWA's Comments
« on: November 16, 2023, 02:11:14 AM »
Nicely stated.

Artificial Intelligence and Copyright: SFWA's Comments to the US Copyright Office
https://writerbeware.blog/2023/11/10/artificial-intelligence-and-copyright-sfwas-comments-to-the-us-copyright-office/
 
The following users thanked this post: Jeff Tanyard, Post-Crisis D, R. C., Seoulite, littleauthor

Bill Hiatt

  • Trilogy unlocked
  • *****
  • Posts: 3359
  • Thanked: 1153 times
  • Gender: Male
  • Tickling the imagination one book at a time
    • Bill Hiatt's Author Website
Re: SFWA's Comments
« Reply #1 on: November 16, 2023, 07:18:01 AM »
I couldn't agree more.


Tickling the imagination one book at a time
Bill Hiatt | fiction website | education website | Facebook author page | Twitter
 
The following users thanked this post: Post-Crisis D

littleauthor

Re: SFWA's Comments
« Reply #2 on: November 16, 2023, 07:36:33 AM »
From the article:

"These systems would not exist without the work of creative people, and certainly would not be capable of some of their more startling successes. However, the researchers who have developed them have not paid due attention to this debt. Everyone else involved in the creation of these systems has been compensated for their contributions: the manufacturers of the hardware on which it runs, the utility companies that generate their electrical power, the owners of their data centers and offices, and of course the researchers themselves. Even where free and open source software is used, it is used according to the licenses under which the software is distributed as a reflection of the legal rights of the programmers. Creative workers alone are expected to provide the fruits of their labor for free, without even the courtesy of being asked for permission. Our rights are treated as a mere externality."

THIS. When I saw a ChatGPT rep on KBoards actively encouraging authors to try out the program - giving them step-by-step instructions on how to train it - I lost it. Many, many writers are still clueless when it comes to this level of creative theft. Tech bros can't create, but they are making money hand over fist off the backs of those of us who can.
"Not working to her full potential."
 
The following users thanked this post: Post-Crisis D

PJ Post

Re: SFWA's Comments
« Reply #3 on: November 17, 2023, 03:06:02 AM »
AI still doesn't copy IPs. There is no theft. That's not how it works.

The SFWA has always staunchly supported traditional publishing, especially the old corduroy-sport-coat-with-leather-elbows-and-fruity-pipe-tobacco ways, and only accepted indies once they absolutely had no other choice. They have a bit of a country club mentality, and a rather pretentious one at that. So of course they're going to rail against AI; they're grasping at straws to remain relevant. I'm sure they'd ban self-publishing altogether if they could.

And AI disclosures don't protect consumers; they protect the self-image of overly insecure traditional writers. People don't care how stuff is made, where, or by whom. And once AI turns the corner on narrative quality, that's it: game over.

As I've said, writers who have something to say with their stories will be fine - they'll be fine way out on the fringes of the market, but fine just the same. For example, indie musicians still produce albums because that's how their idols did it back in the day, even though the current market has overwhelmingly shifted back to singles - but those albums are still being sold. The fringes have always provided opportunity.

Human writers and musicians and artists and photographers and sculptors and painters and illustrators and filmmakers and crafters will always have a market because other humans recognize their talent. We don't need protection from AI to do what we do, nor to share it with our audiences. It's similar to the precision cobbler: they're still out there, making shoes and serving their market, but the vast majority of shoes are mass-produced under fairly questionable conditions, and yet - no warnings. And it's not a secret. Everyone knows. Nobody cares.

They won't care about AI either.
 
The following users thanked this post: LBL

Post-Crisis D

Re: SFWA's Comments
« Reply #4 on: November 17, 2023, 03:42:52 AM »
AI still doesn't copy IPs. There is no theft. That's not how it works.

And every single freaking time anyone shows an example of AI spitting out matches to copyrighted text or images, we get excuses on how that's not "copying" or whatever.

:icon_rolleyes:
Mulder: "If you're distracted by fear of those around you, it keeps you from seeing the actions of those above."
The X-Files: "Blood"
 
The following users thanked this post: She-la-te-da

PJ Post

Re: SFWA's Comments
« Reply #5 on: November 17, 2023, 03:57:18 AM »
And every single freaking time anyone shows an example of AI spitting out matches to copyrighted text or images, we get excuses on how that's not "copying" or whatever.

My understanding is that it's more like the infinite number of monkeys typing Shakespeare thing. Fundamentally, LLMs are predictive text generators.
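To make "predictive text generator" concrete, here's a deliberately tiny sketch: a bigram model that learns only which word tends to follow which, then predicts from those counts. This is a toy for illustration only - real LLMs use neural networks at vastly greater scale, and the corpus and function names here are invented for the example - but it shows the basic idea of predicting the next word from learned frequencies rather than looking up stored documents.

```python
from collections import Counter, defaultdict

def train_bigram(corpus_words):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus_words, corpus_words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequently observed next word, or None if unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # → cat ("cat" followed "the" twice, "mat" once)
```

The model keeps frequency counts, not the training text itself - though, as the examples in this thread suggest, with enough repetition in the training data such a predictor can still regenerate memorized sequences word for word.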

I keep hearing anecdotes about this, but no examples.

Maybe we need some specific examples to discuss to see what's really going on, including the prompts that generated the offending text. AI can be tricked by bad actors. They call it a jailbreak.

From Google:

Quote
In simple terms, jailbreaks take advantage of weaknesses in the chatbot's prompting system. Users issue specific commands that trigger an unrestricted mode, causing the AI to disregard its built-in safety measures and guidelines. This enables the chatbot to respond without the usual restrictions on its output.
 
The following users thanked this post: LBL

APP

Re: SFWA's Comments
« Reply #6 on: November 17, 2023, 05:38:56 AM »
Here's another interesting article on this general subject.

Silicon Valley's Big A.I. Dreams Are Headed for a Copyright Crash
https://newrepublic.com/article/176932/silicon-valley-ai-copyright-law
 
The following users thanked this post: Post-Crisis D

Bill Hiatt

  • Trilogy unlocked
  • *****
  • Posts: 3359
  • Thanked: 1153 times
  • Gender: Male
  • Tickling the imagination one book at a time
    • Bill Hiatt's Author Website
Re: SFWA's Comments
« Reply #7 on: November 18, 2023, 02:04:35 AM »
Quote
I keep hearing anecdotes about this, but no examples.
I've given examples myself. You've just chosen to ignore them.

Fundamentally, though, the larger problem is not that AI from time to time spits out infringing content but that the labor and IP of other people was appropriated, without consent or compensation, to make it possible. No one denies AI was trained on a great deal of material without permission. No one even denies that some of the content wasn't just scraped (in violation of state and local laws) but was flat-out stolen. (Most writers don't post the full text of their novels online, so how were such novels included in the training? I've yet to hear a company say it purchased all of those novels to use for training.)

Keeping in mind that Google has its own AI projects, I'm not sure how much credence to attach to what it says. I have no doubt there are some bad actors, and sometimes, AI is jailbroken. But if prompts can circumvent its safeguards, there's something wrong with those safeguards. In any case, if we want to blame bad actors, the developers seem to qualify for that category, at least in some cases.

As far as SFWA is concerned, they might well not like indies, but that doesn't mean they're wrong about AI.
Quote
Human writers and musicians and artists and photographers and sculptors and painters and illustrators and filmographers and crafters will always have a market because other humans recognize their talent. We don't need protection from AI to do what we do, nor to share it with our audiences. It's similar to the precision cobbler, they're still out there, making shoes and serving their market, but the vast majority of shoes are mass-produced under fairly questionable conditions, and yet - no warnings. And it's not a secret. Everyone knows. Nobody cares.
Maybe we'll always have a market, but it seems as if we'll have a much smaller one, one that will support far fewer of us. And yes, there are still cobblers, but how many?

One might question how much the average person on the street really knows. Companies aren't exactly happy when some labor scandal breaks out and usually at least pretend to take steps. Would they do that if in fact no one really cared? Also, would companies really resist GMO labeling so hard if nobody really cared? Would cigarette companies have resisted health warnings so hard if nobody really cared? People care. They just don't think about it that often unless it's forced to their attention--with labeling, for example. A better test of your thesis would be situations in which products were clearly labeled, and it made no difference. And before you say cigarettes, keep in mind that there's addiction involved there.

 







Tickling the imagination one book at a time
Bill Hiatt | fiction website | education website | Facebook author page | Twitter
 
The following users thanked this post: Post-Crisis D, Hopscotch

PJ Post

Re: SFWA's Comments
« Reply #8 on: November 20, 2023, 06:39:34 AM »
I remember the anecdotes, but not specific data, including the prompts that led to the infringing text. AI doesn't "know" the text to be copied (it doesn't store anything); it only knows the likelihood of the next word in the series, or the next pixel. The Getty logo in AI images only proved that Getty owns a lot of soccer images.[1] These artifact abstractions show up in a lot of AI images. One weird one was rulers showing up next to moles: most images of moles are medical, and the datasets often show a ruler to document change in size. That's a learned association.

[1] It also proved that AI was trained on lots of Getty images, which no one is denying. Instead, these AI companies are claiming Fair Use. The courts, as of now, seem to be going in this direction. We'll have to wait and see how big the copyright obstacle is going to be.


Quote
As far as SFWA is concerned, they might well not like indies, but that doesn't mean they're wrong about AI.

It doesn't mean they're not wrong, either. They have a history of overreacting to protect their own self-interest.


Quote
Maybe we'll always have a market, but it seems as if we'll have a much smaller one, one that will support far fewer of us. And yes, there are still cobblers, but how many?

A much smaller market to be sure, but the individual Creative doesn't necessarily have to worry about the peculiarities and vagaries of the market if they're non-fungible.


Quote
Also, would companies really resist GMO labeling so hard if nobody really cared? Would cigarette companies have resisted health warnings so hard if nobody really cared? People care. They just don't think about it that often unless it's forced to their attention--with labeling, for example. A better test of your thesis would be situations in which products were clearly labeled, and it made no difference. And before you say cigarettes, keep in mind that there's addiction involved there.

And yet, all of these products still represent billion-dollar industries.

Also, most of these social improvements, be they labels or improved automotive technology, save lives, or at least promise to. It's good for marketing. However, as an example, California's Prop 65 cancer warning is largely ignored, mainly, I think, because it is so ludicrously inclusive as to be meaningless. It's essentially saying that one's mere existence can lead to cancer.

But the most important difference here is that using AI won't kill you.


Quote
One might question how much the average person on the street really knows. Companies aren't exactly happy when some labor scandal breaks out and usually at least pretend to take steps. Would they do that if in fact no one really cared?

Don't confuse social media spin control with actual concern. I don't think this is what you meant, but no, I don't think the average person on the street has any idea of what's coming.
 
The following users thanked this post: LBL

TimothyEllis

  • Forum Owner
  • Administrator
  • Series unlocked
  • ******
  • Posts: 6213
  • Thanked: 2391 times
  • Gender: Male
  • Earth Galaxy core, 2618
    • The Hunter Imperium Universe
Re: SFWA's Comments
« Reply #9 on: November 20, 2023, 06:01:27 PM »
But the most important difference here is that using AI won't kill you.

YET.
Genres: Space Opera/Fantasy/Cyberpunk, with elements of LitRPG and GameLit, with a touch of the Supernatural. Also Spiritual and Games.



Timothy Ellis Kindle Author page. | Join the Hunter Legacy mailing list | The Hunter Imperium Universe on Facebook. | Forum Promo Page.
 

Bill Hiatt

  • Trilogy unlocked
  • *****
  • Posts: 3359
  • Thanked: 1153 times
  • Gender: Male
  • Tickling the imagination one book at a time
    • Bill Hiatt's Author Website
Re: SFWA's Comments
« Reply #10 on: November 21, 2023, 12:53:43 AM »
Quote
I remember the anecdotes, but not specific data, including the prompts that led to the infringing text. AI doesn't "know" the text to be copied (it doesn't store anything); it only knows the likelihood of the next word in the series, or the next pixel. The Getty logo in AI images only proved that Getty owns a lot of soccer images. These artifact abstractions show up in a lot of AI images. One weird one was rulers showing up next to moles: most images of moles are medical, and the datasets often show a ruler to document change in size. That's a learned association.
I've said this before, but I don't really care exactly how AI works. Somehow, it is trained on copyrighted material with the intent of ultimately producing competing products. Yeah, courts might find that fair use, but that's by no means a done deal. I'd argue that's a ridiculously broad definition of fair use. The fact that many companies moved of their own accord toward consent and/or compensation models suggests, at the very least, that the business world isn't certain about the fair use issue.

And yes, AI doesn't know the text to be copied. That's precisely part of the problem. It also doesn't know the difference between fact and fiction, which is why it makes up fake legal cases in legal briefs and fake bios (including statements that could be considered defamation). It's the intent of the developers that matters here, because AI isn't capable of forming an intent.


Tickling the imagination one book at a time
Bill Hiatt | fiction website | education website | Facebook author page | Twitter
 
The following users thanked this post: Post-Crisis D

PJ Post

Re: SFWA's Comments
« Reply #11 on: November 21, 2023, 11:18:33 PM »
I think the intent of these AI companies is generally good - they're trying to create a post-scarcity semi-utopian society, like Star Trek. Oh, and make boatloads of money in the interim.

Personally, I'm emotionally conflicted on it being Fair Use, but I can make the argument for it fairly easily. It just cuts against the grain, you know?

In the end, I believe they will succeed, and this post-scarcity world will become possible; what I'm less confident about is whether the powers that be will allow it.
 

Bill Hiatt

  • Trilogy unlocked
  • *****
  • Posts: 3359
  • Thanked: 1153 times
  • Gender: Male
  • Tickling the imagination one book at a time
    • Bill Hiatt's Author Website
Re: SFWA's Comments
« Reply #12 on: November 22, 2023, 01:01:01 AM »
I'd like to believe in the semi-utopian society, but as I've said, I have a hard time seeing how that's the natural fruit of corporate greed.


Tickling the imagination one book at a time
Bill Hiatt | fiction website | education website | Facebook author page | Twitter
 

She-la-te-da

Re: SFWA's Comments
« Reply #13 on: November 22, 2023, 01:38:50 AM »
Quote
I think the intent of these AI companies is generally good - they're trying to create a post-scarcity semi-utopian society, like Star Trek.

You seriously believe this? How many decades is it going to take before we get that awesome fake Star Trek future? How many will suffer? We can't even reliably give people welfare, or medical care, or food now. And that's with jobs humans can do. Take away those jobs, and what do you really think is going to happen?

Yeah. Some of us may have jobs after this "AI" crap takes over. Very few, and it won't pay much. Thinking bespoke shoe makers is a relevant comparison. I can't even.
I write various flavors of speculative fiction. This is my main pen name.

 

TimothyEllis

  • Forum Owner
  • Administrator
  • Series unlocked
  • ******
  • Posts: 6213
  • Thanked: 2391 times
  • Gender: Male
  • Earth Galaxy core, 2618
    • The Hunter Imperium Universe
Re: SFWA's Comments
« Reply #14 on: November 22, 2023, 02:09:42 AM »
Quote
I think the intent of these AI companies is generally good - they're trying to create a post-scarcity semi-utopian society, like Star Trek.

You seriously believe this? How many decades is it going to take before we get that awesome fake Star Trek future?

What most people forget is that the Star Trek future followed a third world war that left hundreds of millions dead.

And even then, it didn't happen without a first contact situation first.

 :tap
Genres: Space Opera/Fantasy/Cyberpunk, with elements of LitRPG and GameLit, with a touch of the Supernatural. Also Spiritual and Games.



Timothy Ellis Kindle Author page. | Join the Hunter Legacy mailing list | The Hunter Imperium Universe on Facebook. | Forum Promo Page.
 

Post-Crisis D

Re: SFWA's Comments
« Reply #15 on: November 22, 2023, 03:49:50 AM »
I think the intent of these AI companies is generally good - they're trying to create a post-scarcity semi-utopian society, like Star Trek. Oh, and make boatloads of money in the interim.

If I remember right, didn't OpenAI start as an open source non-profit?  And then they made some advances and were like, screw that, let's make this a for-profit corporation!  Cha-Ching!  Cha-Ching!  Cha-Ching!  So, I have difficulty believing in any imagined altruism on their part.


You seriously believe this? How many decades is it going to take before we get that awesome fake Star Trek future? How many will suffer? We can't even reliably give people welfare, or medical care, or food now. And that's with jobs humans can do. Take away those jobs, and what do you really think is going to happen?

Yeah. Some of us may have jobs after this "AI" crap takes over. Very few, and it won't pay much. Thinking bespoke shoe makers is a relevant comparison. I can't even.

And people without jobs or hope for any kind of advancement in their life condition have a tendency to find things to do that may not bode well for society.


What most people forget is that the Star Trek future followed a third world war that left hundreds of millions dead.

And even then, it didn't happen without a first contact situation first.

And being able to generate countless books and movies is not the same as having replicators that can provide basic needs such as food, water, shelter, tools, etc.
Mulder: "If you're distracted by fear of those around you, it keeps you from seeing the actions of those above."
The X-Files: "Blood"
 

Post-Crisis D

Re: SFWA's Comments
« Reply #16 on: November 22, 2023, 04:04:11 AM »
As for "Fair Use," U.S. Copyright law considers the following:

1) Purpose and Nature of the Use
Educational and non-commercial uses are more likely to be considered "fair use" than commercial uses.  Transformative uses may also be considered fair use.  Commercial AI is likely to fail on the former.  As for the latter, is using the content of books to create more books really transformative?  Seems unlikely.

2) Nature of the Copyrighted Work
The use of creative works is less likely to be considered fair use than the use of factual works.  AI being "trained" on creative works to create simulated creative works seems likely to fail here as well.

3) Amount of the Copyrighted Work Used
Well, they "trained" AI using whole books so, again, fail.

4) Effect of the Use on the Potential Market or Value of the Copyrighted Work
Since AI is being used to create books that would compete with the original works it was "trained" on, it would seem likely to fail here as well.  Additionally, AI has the potential to flood the market which would increase competition for the original works as well as reduce their commercial value.  That would also tend to make it fail the fair use test.

All four of those factors are weighed.  Even if you want to argue that AI is transformative, it still fails under the other three factors.  And while courts have sometimes held that using an entire work can be fair use, even if AI passes on that factor too, it still fails on the remaining two.
Mulder: "If you're distracted by fear of those around you, it keeps you from seeing the actions of those above."
The X-Files: "Blood"
 

Bill Hiatt

  • Trilogy unlocked
  • *****
  • Posts: 3359
  • Thanked: 1153 times
  • Gender: Male
  • Tickling the imagination one book at a time
    • Bill Hiatt's Author Website
Re: SFWA's Comments
« Reply #17 on: November 22, 2023, 04:23:30 AM »
There are undeniably some uses of AI that could be transformative. But churning out "creative" products after being trained on creative products seems curiously untransformative.

In this respect, cases in which transformative use was found are instructive. Google won a case that involved using book excerpts to beef up search results on related topics. The whole book is in the database, but only parts relevant to the search appear. No competing products are generated, and the purpose is clearly different.

Turnitin won a case involving incorporating all submitted essays into a database. The essays are used to check new submissions for possible plagiarism. (In my experience, 50% of high school cheating involves students copying from other students. Turnitin's essay database also makes it harder for students to plagiarize from print sources--because they never know whether or not someone else plagiarized from the same source.) Anyway, the full text of the essays cannot be released without permission from the teacher to whose class the essay was submitted, and then only to examine the specifics of possible plagiarism. No competing products were generated, and again, the purpose is clearly different.


Tickling the imagination one book at a time
Bill Hiatt | fiction website | education website | Facebook author page | Twitter
 
The following users thanked this post: Post-Crisis D

PJ Post

Re: SFWA's Comments
« Reply #18 on: November 22, 2023, 11:20:18 PM »
The Fair Use defense lies in the fact that AI doesn't make copies of anything - it's not competing on a one-to-one basis, which is why we have copyright in the first place: to protect IPs from counterfeits that might negatively affect the original's profitability or place in the market. This is why style and genre can't be copyrighted.

The reality is that copyright doesn't even really apply to this situation, at least not in its current form.

Also, copyright was never intended to prevent general competition, in fact, I think the intention was just the opposite.

___

Everyone thought functional AGI was so far down the road that only academic science types were even bothering with it. They were a bunch of well-paid nerdy folks, not ruthless hedge fund managers bent on world domination. So, yeah, I think their intentions are good.

The natural fallout from AI is up to the government to resolve. We're going to have a post labor society, which will turn everything upside down. We can't blame these technological visionaries for Washington's inability to get out of its own way. For example, AI isn't the one letting our infrastructure crumble away before our very eyes. Ultimately, we have to take responsibility for this mess, we elected them.
 

TimothyEllis

  • Forum Owner
  • Administrator
  • Series unlocked
  • ******
  • Posts: 6213
  • Thanked: 2391 times
  • Gender: Male
  • Earth Galaxy core, 2618
    • The Hunter Imperium Universe
Re: SFWA's Comments
« Reply #19 on: November 22, 2023, 11:28:17 PM »
The Fair Use defense lies in the fact that AI doesn't make copies of anything

Plagiarism lawsuits? How do you plagiarize if it's not stored somewhere to access?

Look at Quora. The answer bots generate the same answer over and over again, pretty well word for word. It has to be stored somewhere.

That claim doesn't hold water in reality.
Genres: Space Opera/Fantasy/Cyberpunk, with elements of LitRPG and GameLit, with a touch of the Supernatural. Also Spiritual and Games.



Timothy Ellis Kindle Author page. | Join the Hunter Legacy mailing list | The Hunter Imperium Universe on Facebook. | Forum Promo Page.
 

Bill Hiatt

  • Trilogy unlocked
  • *****
  • Posts: 3359
  • Thanked: 1153 times
  • Gender: Male
  • Tickling the imagination one book at a time
    • Bill Hiatt's Author Website
Re: SFWA's Comments
« Reply #20 on: November 23, 2023, 01:43:04 AM »
Quote
The Fair Use defense lies in the fact that AI doesn't make copies of anything - it's not competing on a one to one basis, which is why we have copyright in the first place - to protect IPs from counterfeits that might negatively affect the original's profitability or place in the market. This is why style and genre can't be copyrighted.

I have to agree with Timothy on this one. We don't know exactly how often identical or very similar text gets spit out, and people may not even realize it's happening, so most instances probably go unreported. But the fact that it occurs at all suggests that, even though the works may not be stored by AI, AI is capable of reproducing them under the right circumstances.

But the problem goes far beyond that. Yes, fair use addresses identical or very similar output. But we also have to consider how the data was obtained. As discussed, the novels used in training seem mostly to have been pirated; some of them undoubtedly were. There is precedent for denying a fair use defense in a case where pirated copies were used. At least some of the images were probably pirated as well. (A visible Getty watermark = images that were scraped rather than purchased.)

It is true that AI training is something that wasn't anticipated by the drafters of copyright law. Courts could handle that in several different ways. They could essentially take your position, and say that the law isn't applicable, implicitly throwing the issue back to Congress. Or they could say that the logic of the original copyright law leads to the conclusion that AI training without permission is a violation. The latter position could be supported, ironically, by the very defense in one of the current copyright suits. (I'm sorry that I don't remember which one.) The first part of the defense essentially aligns with your thinking, but after that, the defense goes on to argue essentially that it would have been impossible to train AI without the use of other people's IP. Therefore, such training should be allowed.

This is a colossal admission, as some commentators have pointed out. It's a concession that highly lucrative AI products could not exist without the IP of others. However AI functions, that function is impossible without the training IP. Now, I would argue that isn't really accurate. AI could have been trained using work-for-hire materials, public domain materials, and properly licensed IP. The reality is the developers didn't want to pay for what they took. And they wanted to use high-quality, modern material because they knew full well that one use of their novel-trained AI would be to produce more novels. A product that produced novels in the style of Charles Dickens wouldn't do. A product trained by hiring bestselling novelists to write training material would have been much more expensive to produce. But these approaches would have avoided copyright issues. Instead, the developers stole IP because it was more convenient.

We all agree this is icky. I would argue that's because it is at the very least unethical. Is it illegal? If a judge is willing to connect the dots, I would argue it could be found to be illegal. IP is taken without permission. (In the case of pirating, and, in some localities, scraping, the act itself is illegal on its face.) The IP is used to create software, part of the ultimate purpose of which is to compete with existing works, including the ones it was trained on. The law doesn't directly prohibit this behavior because it wasn't possible until recently. But nor does it explicitly allow it. As Post-Crisis D has already pointed out, the process, taken as a whole, is questionable or outright prohibited based on the four criteria for determining fair use. If one looks only at the training, it is possible to conclude that the result is transformative. But if one looks at the entire process, of which the training is just a part, it's difficult to see how it could be regarded as transformative.
Quote
The reality is that copyright doesn't even really apply to this situation, at least not in its current form.

Also, copyright was never intended to prevent general competition, in fact, I think the intention was just the opposite.
The letter of the law might not, but the spirit certainly does apply. If courts adopt a narrow reading of the law, the defendants will probably prevail. But if they adopt a broader reading, they might well find in favor of the plaintiffs.

True, copyright was never intended to prevent fair competition. But it was very much designed to prevent person A from competing with person B using person B's own IP. I'd argue that's exactly what the AI training causes. Defense attorneys themselves admit the crucial role of the IP of others in the process.

In the event that courts rule in favor of the defendants, the courts will essentially be saying, "Stealing from one person is wrong. But stealing from millions is okay."

There is reason to hope that might not happen. One of the cases, in which actual lyrics were reproduced verbatim, looks like a slam dunk to me, and the music industry always gets its pound of flesh. The Getty case has at least some possibilities because of that pesky trademark--which, if nothing else, proves that AI can directly plagiarize and doesn't really know the difference. I also suspect at least some of the other plaintiffs will prevail.

That a number of companies have adopted the principles of consent and compensation in AI training, and that the entertainment industry has accepted a number of restrictions on AI, are both encouraging. Particularly in the former case, companies are doing that voluntarily, an indication that they aren't as sure as you are that AI training is going to make it through the courts unscathed.
 


Tickling the imagination one book at a time
Bill Hiatt | fiction website | education website | Facebook author page | Twitter
 
The following users thanked this post: Post-Crisis D