Recent Posts

Pages: « 1 2 3 4 5 6 7 8 9 10 »
31
Bot Discussion Public / Re: What Is Copilot and Is It Better Than ChatGPT?
« Last post by Bill Hiatt on December 07, 2023, 12:34:37 AM »
I'm adopting a wait-and-see attitude. If nothing else, AI's tendency toward inaccuracy on nonfiction topics would make me wary of using AI assistance for anything requiring accuracy.

I think I recall Amazon made a deal with Anthropic (though I might be wrong about which company it was), and I now see what I'm betting are AI summaries on some product pages. I checked a few of them, and they seem to be accurate summaries of the reviews involved, though I suspect it's easier for AI to handle something with that finite a data set and that specific a subject.
32
Bot Discussion Public / Re: What Is Copilot and Is It Better Than ChatGPT?
« Last post by Post-Crisis D on December 06, 2023, 07:49:26 AM »
Quote
Lest we forget the historical nature of this company and its founder, here's a little reminder:

https://www.quotesandsayings.com/authors/bill-gates/

"There won’t be anything we won’t say to people to try and convince them that our way is the way to go."

And, let's not forget his multiple visits to a certain island . . .
33
Bot Discussion Public / Re: What Is Copilot and Is It Better Than ChatGPT?
« Last post by Jeff Tanyard on December 06, 2023, 07:11:48 AM »
Lest we forget the historical nature of this company and its founder, here's a little reminder:

https://www.quotesandsayings.com/authors/bill-gates/

"There won’t be anything we won’t say to people to try and convince them that our way is the way to go."
34
Bot Discussion Public / What Is Copilot and Is It Better Than ChatGPT?
« Last post by APP on December 06, 2023, 06:04:03 AM »
For those of you who are interested, Copilot is Microsoft’s personalized version of ChatGPT.

https://lifehacker.com/tech/what-is-microsoft-copilot
35
Bot Discussion Public / Two Faces of AI
« Last post by APP on December 03, 2023, 02:38:35 AM »
36
Bot Discussion Public / Re: Blowup at OpenAI
« Last post by Jeff Tanyard on November 25, 2023, 07:09:59 AM »
Quote
That was probably the scariest thing I'll read all year.

Quote
I belong to the camp that believes AI could/does pose an existential risk to humanity.


Remember when cults like Heaven's Gate and Jonestown, while crazy and self-destructive, were at least well-mannered enough to keep their destruction to themselves rather than trying to force the rest of us to share in it?

Ah, I miss the good old days.   :icon_sad:
37
Bot Discussion Public / Re: Blowup at OpenAI
« Last post by APP on November 25, 2023, 07:00:55 AM »
I belong to the camp that believes AI could/does pose an existential risk to humanity.

Sam Altman’s Second Coming Sparks New Fears of the AI Apocalypse
https://www.wired.com/story/sam-altman-second-coming-sparks-new-fears-ai-apocalypse/
38
Bot Discussion Public / Re: Blowup at OpenAI
« Last post by LBL on November 25, 2023, 02:15:06 AM »
That was probably the scariest thing I'll read all year.
39
Bot Discussion Public / Blowup at OpenAI
« Last post by APP on November 24, 2023, 05:59:47 AM »
For those of you who are interested, this article gives some great insight.

Note: The Wall Street Journal does have a paywall for many/most of its articles, but sometimes it doesn't.

https://www.wsj.com/tech/ai/openai-blowup-effective-altruism-disaster-f46a55e8?st=spwpxvm32gbl09w
40
Bot Discussion Public / Re: SFWA’s Comments
« Last post by Bill Hiatt on November 23, 2023, 01:43:04 AM »
Quote
The Fair Use defense lies in the fact that AI doesn't make copies of anything - it's not competing on a one to one basis, which is why we have copyright in the first place - to protect IPs from counterfeits that might negatively affect the original's profitability or place in the market. This is why style and genre can't be copyrighted.

I have to agree with Timothy on this one. We don't know exactly how often identical or very similar text gets spit out, and people may not even realize that's happening, so most instances probably go unreported. But the fact that it occurs at all suggests that, even though the works may not be stored by the AI, the AI is capable of reproducing them under the right circumstances.

But the problem goes far beyond that. Yes, fair use addresses identical or very similar output. But we also have to consider how the data was obtained. As discussed, the novels used in training seem mostly to have been pirated; some of them undoubtedly were. There is precedent for denying a fair use defense in a case where pirated copies were used. At least some of the images were probably pirated as well. (A visible Getty watermark = images that were scraped rather than purchased.)

It is true that AI training is something that wasn't anticipated by the drafters of copyright law. Courts could handle that in several different ways. They could essentially take your position, and say that the law isn't applicable, implicitly throwing the issue back to Congress. Or they could say that the logic of the original copyright law leads to the conclusion that AI training without permission is a violation. The latter position could be supported, ironically, by the very defense in one of the current copyright suits. (I'm sorry that I don't remember which one.) The first part of the defense essentially aligns with your thinking, but after that, the defense goes on to argue essentially that it would have been impossible to train AI without the use of other people's IP. Therefore, such training should be allowed.

This is a colossal admission, as some commentators have pointed out. It's a concession that highly lucrative AI products could not exist without the IP of others. However AI functions, that function is impossible without the training IP. Now, I would argue that isn't really accurate. AI could have been trained using work-for-hire materials, public domain materials, and through the proper licensing of IP. The reality is the developers didn't want to pay for what they took. And they wanted to use high-quality, modern material because they knew full well that one use of their novel-trained AI would be to produce more novels. A product that produced novels in the style of Charles Dickens wouldn't do. A product trained by hiring bestselling novelists to write training material would have been much more expensive to produce. But these approaches would have avoided copyright issues. Instead, the developers stole IP because it was more convenient.

We all agree this is icky. I would argue that's because it is at the very least unethical. Is it illegal? If a judge is willing to connect the dots, I would argue it could be found to be illegal. IP is taken without permission. (In the case of pirating and, in some localities, scraping, the act itself is illegal on its face.) IP is used to create software, part of the ultimate purpose of which is to compete with existing works, including the ones it was trained on. The law doesn't directly prohibit this behavior because it wasn't possible until recently. But neither does it explicitly allow it. As Post-Crisis D has already pointed out, the process, taken as a whole, is questionable or outright prohibited based on the four criteria for determining fair use. If one looks only at the training, it is possible to conclude that the result is transformative. But if one looks at the entire process, of which the training is just a part, it's difficult to see how it could be regarded as transformative.
Quote
The reality is that copyright doesn't even really apply to this situation, at least not in its current form.

Also, copyright was never intended to prevent general competition, in fact, I think the intention was just the opposite.
The letter of the law might not, but the spirit certainly does apply. If courts adopt a narrow reading of the law, the defendants will probably prevail. But if they adopt a broader reading, they might well find in favor of the plaintiffs.

True, copyright was never intended to prevent fair competition. But it was very much designed to prevent person A from competing with person B using person B's own IP. I'd argue that's exactly what the AI training causes. Defense attorneys themselves admit the crucial role of the IP of others in the process.

In the event that courts rule in favor of the defendants, the courts will essentially be saying, "Stealing from one person is wrong. But stealing from millions is okay."

There is reason to hope that might not happen. One of the cases, in which actual lyrics were reproduced verbatim, looks like a slam dunk to me, and the music industry always gets its pound of flesh. The Getty case has at least some possibilities because of that pesky trademark--which, if nothing else, proves that AI can directly plagiarize and doesn't really know the difference. I also suspect at least some of the other plaintiffs will prevail.

The fact that a number of companies have adopted the principles of consent and compensation in AI training, and that the entertainment industry has accepted a number of restrictions on AI, are both encouraging. Particularly in the former case, companies are doing that voluntarily, an indication that they aren't as sure as you are that AI training is going to make it through the courts unscathed.