Recent Posts

Pages: 1 2 3 4 5 6 7 8 9 10
1
Bot Discussion Public / Re: Blowup at OpenAI
« Last post by Jeff Tanyard on November 25, 2023, 07:09:59 AM »
That was probably the scariest thing I'll read all year.

I belong to the camp that believes AI could/does pose an existential risk to humanity.


Remember when cults like Heaven's Gate and Jonestown, while crazy and self-destructive, were at least well-mannered enough to keep their destruction to themselves rather than trying to force the rest of us to share in it?

Ah, I miss the good old days.   :icon_sad:
2
Bot Discussion Public / Re: Blowup at OpenAI
« Last post by APP on November 25, 2023, 07:00:55 AM »
I belong to the camp that believes AI could/does pose an existential risk to humanity.

Sam Altman’s Second Coming Sparks New Fears of the AI Apocalypse
https://www.wired.com/story/sam-altman-second-coming-sparks-new-fears-ai-apocalypse/
3
Bot Discussion Public / Re: Blowup at OpenAI
« Last post by LBL on November 25, 2023, 02:15:06 AM »
That was probably the scariest thing I'll read all year.
4
Bot Discussion Public / Blowup at OpenAI
« Last post by APP on November 24, 2023, 05:59:47 AM »
For those of you who are interested, this article gives some great insight.

Note: The Wall Street Journal does have a paywall for many/most of its articles, but sometimes it doesn't.

https://www.wsj.com/tech/ai/openai-blowup-effective-altruism-disaster-f46a55e8?st=spwpxvm32gbl09w
5
Bot Discussion Public / Re: SFWA’s Comments
« Last post by Bill Hiatt on November 23, 2023, 01:43:04 AM »
Quote
The Fair Use defense lies in the fact that AI doesn't make copies of anything - it's not competing on a one to one basis, which is why we have copyright in the first place - to protect IPs from counterfeits that might negatively affect the original's profitability or place in the market. This is why style and genre can't be copyrighted.

I have to agree with Timothy on this one. We don't know exactly how often identical or very similar text gets spit out, and people may not even realize that's happening, so most instances probably go unreported. But the fact that it occurs at all suggests that, even though the works may not be stored by AI, AI is capable of reproducing them under the right circumstances.

But the problem goes far beyond that. Yes, fair use addresses identical or very similar output. But we also have to consider how the data was obtained. As discussed, novels used in training seem mostly to have been pirated. Some of them undoubtedly were. There is precedent for denying a fair use defense in a case where pirated copies were used. At least some of the images were probably pirated as well. (A visible Getty watermark indicates images that were scraped rather than licensed.)

It is true that AI training is something that wasn't anticipated by the drafters of copyright law. Courts could handle that in several different ways. They could essentially take your position, and say that the law isn't applicable, implicitly throwing the issue back to Congress. Or they could say that the logic of the original copyright law leads to the conclusion that AI training without permission is a violation. The latter position could be supported, ironically, by the very defense in one of the current copyright suits. (I'm sorry that I don't remember which one.) The first part of the defense essentially aligns with your thinking, but after that, the defense goes on to argue essentially that it would have been impossible to train AI without the use of other people's IP. Therefore, such training should be allowed.

This is a colossal admission, as some commentators have pointed out. It's a concession that highly lucrative AI products could not exist without the IP of others. However AI functions, that function is impossible without the training IP. Now, I would argue that isn't really accurate. AI could have been trained using work-for-hire materials, public domain materials, and properly licensed IP. The reality is the developers didn't want to pay for what they took. And they wanted to use high-quality, modern material because they knew full well that one use of their novel-trained AI would be to produce more novels. A product that produced novels in the style of Charles Dickens wouldn't do. A product trained by hiring best-selling novelists to write training material would have been much more expensive to produce. But these approaches would have avoided copyright issues. Instead, the developers stole IP because it was more convenient.

We all agree this is icky. I would argue that's because it is, at the very least, unethical. Is it illegal? If a judge is willing to connect the dots, I would argue it could be found to be illegal. IP is taken without permission. (In the case of pirating, and, in some localities, scraping, the act itself is illegal on its face.) That IP is then used to create software whose ultimate purpose, in part, is to compete with existing works, including the very ones it was trained on. The law doesn't directly prohibit this behavior because it wasn't possible until recently. But nor does it explicitly allow it. As Post-Crisis D has already pointed out, the process, taken as a whole, is questionable or outright prohibited under the four criteria for determining fair use. If one looks only at the training, it is possible to conclude that the result is transformative. But if one looks at the entire process, of which the training is just a part, it's difficult to see how it could be regarded as transformative.
Quote
The reality is that copyright doesn't even really apply to this situation, at least not in its current form.

Also, copyright was never intended to prevent general competition, in fact, I think the intention was just the opposite.
The letter of the law might not, but the spirit certainly does apply. If courts adopt a narrow reading of the law, the defendants will probably prevail. But if they adopt a broader reading, they might well find in favor of the plaintiffs.

True, copyright was never intended to prevent fair competition. But it was very much designed to prevent person A from competing with person B using person B's own IP. I'd argue that's exactly what the AI training causes. Defense attorneys themselves admit the crucial role of the IP of others in the process.

In the event that courts rule in favor of the defendants, the courts will essentially be saying, "Stealing from one person is wrong. But stealing from millions is okay."

There is reason to hope that might not happen. One of the cases, in which actual lyrics were reproduced verbatim, looks like a slam dunk to me, and the music industry always gets its pound of flesh. The Getty case has at least some possibilities because of that pesky trademark--which, if nothing else, proves that AI can directly plagiarize and doesn't really know the difference. I also suspect at least some of the other plaintiffs will prevail.

The fact that a number of companies have adopted the principles of consent and compensation in AI training, and the fact that the entertainment industry has accepted a number of restrictions on AI, are both encouraging. Particularly in the former case, companies are doing that voluntarily, an indication that they aren't as sure as you are that AI training is going to make it through the courts unscathed.
 
6
Bot Discussion Public / Re: SFWA’s Comments
« Last post by TimothyEllis on November 22, 2023, 11:28:17 PM »
The Fair Use defense lies in the fact that AI doesn't make copies of anything

Plagiarism lawsuits? How do you plagiarize if it's not stored somewhere to access?

Look at Quora. The answer bots generate the same answer over and over again, pretty well word for word. It has to be stored somewhere.

That claim doesn't hold water in reality.
7
Bot Discussion Public / Re: SFWA’s Comments
« Last post by PJ Post on November 22, 2023, 11:20:18 PM »
The Fair Use defense lies in the fact that AI doesn't make copies of anything - it's not competing on a one to one basis, which is why we have copyright in the first place - to protect IPs from counterfeits that might negatively affect the original's profitability or place in the market. This is why style and genre can't be copyrighted.

The reality is that copyright doesn't even really apply to this situation, at least not in its current form.

Also, copyright was never intended to prevent general competition, in fact, I think the intention was just the opposite.

___

Everyone thought functional AGI was so far down the road that only academic science types were even bothering with it. They were a bunch of well-paid nerdy folks, not ruthless hedge fund managers bent on world domination. So, yeah, I think their intentions are good.

The natural fallout from AI is up to the government to resolve. We're going to have a post-labor society, which will turn everything upside down. We can't blame these technological visionaries for Washington's inability to get out of its own way. For example, AI isn't the one letting our infrastructure crumble away before our very eyes. Ultimately, we have to take responsibility for this mess; we elected them.
8
Bot Discussion Public / Re: SFWA’s Comments
« Last post by Bill Hiatt on November 22, 2023, 04:23:30 AM »
There are undeniably some uses of AI that could be transformative. But churning out "creative" products after being trained on creative products seems curiously untransformative.

In this respect, cases in which transformative use was found are instructive. Google won on a case that involved using book excerpts to beef up search results on related topics. The whole book is in the database, but only parts relevant to the search appear. No competing products are generated, and the purpose is clearly different.

Turnitin won on a case involving incorporating all submitted essays into a database. The essays are used to check new submissions for possible plagiarism. (In my experience, 50% of high school cheating involves students copying from other students. Turnitin's essay database also makes it harder for students to plagiarize from print sources--because they never know whether or not someone else plagiarized from the same source.) Anyway, the full text of the essays cannot be released without permission from the teacher to whose class the essay was submitted, and then only to examine the specifics of possible plagiarism. No competing products were generated, and again, the purpose is clearly different.
9
Bot Discussion Public / Re: SFWA’s Comments
« Last post by Post-Crisis D on November 22, 2023, 04:04:11 AM »
As for "Fair Use," U.S. Copyright law considers the following:

1) Purpose and Nature of the Use
Educational and non-commercial uses are more likely to be considered "fair use" than commercial uses.  Transformative uses may also be considered fair use.  Commercial AI is likely to fail on the former.  As for the latter, is using the content of books to create more books really transformative?  Seems unlikely.

2) Nature of the Copyrighted Work
The use of creative works is less likely to be considered fair use than the use of factual works.  AI being "trained" on creative works to create simulated creative works seems likely to fail here as well.

3) Amount of the Copyrighted Work Used
Well, they "trained" AI using whole books so, again, fail.

4) Effect of the Use on the Potential Market or Value of the Copyrighted Work
Since AI is being used to create books that would compete with the original works it was "trained" on, it would seem likely to fail here as well.  Additionally, AI has the potential to flood the market which would increase competition for the original works as well as reduce their commercial value.  That would also tend to make it fail the fair use test.

All four of those factors are weighed.  Even if you want to argue that AI is transformative, it still fails under the other three factors.  Now, courts have sometimes held that using an entire work can be fair use, so even if AI passes that factor, it still fails on the remaining two.
10
Bot Discussion Public / Re: SFWA’s Comments
« Last post by Post-Crisis D on November 22, 2023, 03:49:50 AM »
I think the intent of these AI companies is generally good - they're trying to create a post-scarcity semi-utopian society, like Star Trek. Oh, and make boatloads of money in the interim.

If I remember right, didn't OpenAI start as an open source non-profit?  And then they made some advances and were like, screw that, let's make this a for-profit corporation!  Cha-Ching!  Cha-Ching!  Cha-Ching!  So, I have difficulty believing in any imagined altruism on their part.


You seriously believe this? How many decades is it going to take before we get that awesome fake Star Trek future? How many will suffer? We can't even reliably give people welfare, or medical care, or food now. And that's with jobs humans can do. Take away those jobs, and what do you really think is going to happen?

Yeah. Some of us may have jobs after this "AI" crap takes over. Very few, and they won't pay much. Thinking bespoke shoemakers are a relevant comparison? I can't even.

And people without jobs or hope for any kind of advancement in their life condition have a tendency to find things to do that may not bode well for society.


What most people forget is that the Star Trek future followed a third world war that left hundreds of millions dead.

And even then, it didn't happen until there was a first contact situation.

And being able to generate countless books and movies is not the same as having replicators that can provide basic needs such as food, water, shelter, tools, etc.