Recent Posts

Pages: « 1 2 3 4 5 6 7 8 9 10 »
51
Bot Discussion Public / Re: SFWAs Comments
« Last post by PJ Post on November 21, 2023, 11:18:33 PM »
I think the intent of these AI companies is generally good - they're trying to create a post-scarcity semi-utopian society, like Star Trek. Oh, and make boatloads of money in the interim.

Personally, I'm emotionally conflicted on it being Fair Use, but I can make the argument for it fairly easily. It just cuts against the grain, you know?

In the end, I believe they will succeed, and this post-scarcity world will become possible, but what I'm less confident about is whether the powers that be will allow it.
52
Bot Discussion Public / Re: SFWAs Comments
« Last post by Bill Hiatt on November 21, 2023, 12:53:43 AM »
Quote
I remember the anecdotes, but not specific data, including the prompts that led to the infringing text. AI doesn't "know" the text to be copied (it doesn't store anything); it only knows the likelihood of the next word in the series or the next pixel. The Getty logo in AI images only proved that Getty owns a lot of soccer images.[1] These artifact abstractions show up in a lot of AI images. One weird one was rulers showing up next to moles. This is because most images of moles are medical, and the datasets often show a ruler to document the change in size. That's a learned association.
I've said this before, but I don't really care exactly how AI works. Somehow, it is trained on copyrighted material with the intent of ultimately producing competing products. Yeah, courts might find that fair use, but that's by no means a done deal. I'd argue that's a ridiculously broad definition of fair use. The fact that many companies moved of their own accord toward consent and/or compensation models suggests, at the very least, that the business world isn't certain about the fair use issue.

And yes, AI doesn't know the text to be copied. That's precisely part of the problem. It also doesn't know the difference between fact and fiction, which is why it makes up fake legal cases in legal briefs and fake bios (including statements that could be considered defamation). It's the intent of the developers that matters here, because AI isn't capable of forming an intent.
53
Bot Discussion Public / Re: SFWAs Comments
« Last post by TimothyEllis on November 20, 2023, 06:01:27 PM »
But the most important difference here is that using AI won't kill you.

YET.
54
Bot Discussion Public / Re: SFWAs Comments
« Last post by PJ Post on November 20, 2023, 06:39:34 AM »
I remember the anecdotes, but not specific data, including the prompts that led to the infringing text. AI doesn't "know" the text to be copied (it doesn't store anything); it only knows the likelihood of the next word in the series or the next pixel. The Getty logo in AI images only proved that Getty owns a lot of soccer images.[1] These artifact abstractions show up in a lot of AI images. One weird one was rulers showing up next to moles. This is because most images of moles are medical, and the datasets often show a ruler to document the change in size. That's a learned association.

[1] It also proved the AI was trained on lots of Getty images, which no one is denying. Instead, these AI companies are claiming Fair Use. The courts, as of now, seem to be going in this direction. We'll have to wait and see how big the copyright obstacle is going to be.


As far as the SFWA is concerned, they might well not like indies, but that doesn't mean they're wrong about AI.

It doesn't mean they're not wrong, either. They have a history of overreacting to protect their own self-interest.


Quote
Maybe we'll always have a market, but it seems as if we'll have a much smaller one, one that will support far fewer of us. And yes, there are still cobblers, but how many?

A much smaller market to be sure, but the individual Creative doesn't necessarily have to worry about the peculiarities and vagaries of the market if they're non-fungible.


Quote
Also, would companies really resist GMO labeling so hard if nobody really cared? Would cigarette companies have resisted health warnings so hard if nobody really cared? People care. They just don't think about it that often unless it's forced to their attention--with labeling, for example. A better test of your thesis would be situations in which products were clearly labeled, and it made no difference. And before you say cigarettes, keep in mind that there's addiction involved there.

And yet, all of these products still represent billion-dollar industries.

Also, most of these social improvements, be they labels or improved automotive technology, save lives, or at least promise to. It's good for marketing. However, as an example, California's Prop 65 cancer warning is largely ignored, mainly, I think, because it is so ludicrously inclusive as to be meaningless. It's essentially saying that one's mere existence can lead to cancer.

But the most important difference here is that using AI won't kill you.


Quote
One might question how much the average person on the street really knows. Companies aren't exactly happy when some labor scandal breaks out and usually at least pretend to take steps. Would they do that if in fact no one really cared?

Don't confuse social media spin control with actual concern. I don't think this is what you meant, but no, I don't think the average person on the street has any idea of what's coming.
55
Bot Discussion Public / Re: SFWAs Comments
« Last post by Bill Hiatt on November 18, 2023, 02:04:35 AM »
Quote
I keep hearing anecdotes about this, but no examples.
I've given examples myself. You've just chosen to ignore them.

Fundamentally, though, the larger problem is not that AI from time to time spits out infringing content but that the labor and IP of other people was appropriated, without consent or compensation, to make it possible. No one denies AI was trained on a great deal of material without permission. No one even denies that some of the content wasn't just scraped (in violation of state and local laws) but was flat-out stolen. (Most writers don't post the full text of their novels online, so how were such novels included in the training? I've yet to hear a company say it purchased all of those novels to use for training.)

Keeping in mind that Google has its own AI projects, I'm not sure how much credence to attach to what it says. I have no doubt there are some bad actors, and sometimes, AI is jailbroken. But if prompts can circumvent its safeguards, there's something wrong with those safeguards. In any case, if we want to blame bad actors, the developers seem to qualify for that category, at least in some cases.

As far as the SFWA is concerned, they might well not like indies, but that doesn't mean they're wrong about AI.
Quote
Human writers and musicians and artists and photographers and sculptors and painters and illustrators and filmographers and crafters will always have a market because other humans recognize their talent. We don't need protection from AI to do what we do, nor to share it with our audiences. It's similar to the precision cobbler, they're still out there, making shoes and serving their market, but the vast majority of shoes are mass-produced under fairly questionable conditions, and yet - no warnings. And it's not a secret. Everyone knows. Nobody cares.
Maybe we'll always have a market, but it seems as if we'll have a much smaller one, one that will support far fewer of us. And yes, there are still cobblers, but how many?

One might question how much the average person on the street really knows. Companies aren't exactly happy when some labor scandal breaks out and usually at least pretend to take steps. Would they do that if in fact no one really cared? Also, would companies really resist GMO labeling so hard if nobody really cared? Would cigarette companies have resisted health warnings so hard if nobody really cared? People care. They just don't think about it that often unless it's forced to their attention--with labeling, for example. A better test of your thesis would be situations in which products were clearly labeled, and it made no difference. And before you say cigarettes, keep in mind that there's addiction involved there.

56
Bot Discussion Public / Re: SFWAs Comments
« Last post by APP on November 17, 2023, 05:38:56 AM »
Here's another interesting article on this general subject.

Silicon Valley's Big A.I. Dreams Are Headed for a Copyright Crash
https://newrepublic.com/article/176932/silicon-valley-ai-copyright-law
57
Bot Discussion Public / Re: SFWAs Comments
« Last post by PJ Post on November 17, 2023, 03:57:18 AM »
And every single freaking time anyone shows an example of AI spitting out matches to copyrighted text or images, we get excuses on how that's not "copying" or whatever.

My understanding is that it's more like the infinite number of monkeys typing Shakespeare thing. Fundamentally, LLMs are predictive text generators.
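The "predictive text generator" point can be made concrete with a toy sketch. This is my own illustration, not how any production LLM actually works: a bigram counter that learns only which word tends to follow which in its training text, then generates by repeatedly emitting the most likely next word. It stores statistics about the text, not the text itself as retrievable documents.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then generate by always picking the most likely successor.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # e.g. follows["the"] == {"cat": 2, "mat": 1}

def generate(start, n=4):
    """Emit `start` plus up to n most-likely next words."""
    out = [start]
    for _ in range(n):
        nxt_counts = follows.get(out[-1])
        if not nxt_counts:
            break  # no observed successor; stop generating
        out.append(nxt_counts.most_common(1)[0][0])
    return " ".join(out)
```

Real LLMs replace the counting table with a neural network over subword tokens (and sample rather than always taking the top choice), but the generation loop has the same shape: predict the next token, append it, repeat. That's also why memorized passages can still surface: if the statistics for some sequence are sharp enough, prediction reproduces it.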

I keep hearing anecdotes about this, but no examples.

Maybe we need some specific examples to discuss to see what's really going on, including the prompts that generated the offending text. AI can be tricked by bad actors. They call it a jailbreak.

From Google:

Quote
In simple terms, jailbreaks take advantage of weaknesses in the chatbot's prompting system. Users issue specific commands that trigger an unrestricted mode, causing the AI to disregard its built-in safety measures and guidelines. This enables the chatbot to respond without the usual restrictions on its output.
58
Bot Discussion Public / Re: SFWAs Comments
« Last post by Post-Crisis D on November 17, 2023, 03:42:52 AM »
AI still doesn't copy IPs. There is no theft. That's not how it works.

And every single freaking time anyone shows an example of AI spitting out matches to copyrighted text or images, we get excuses on how that's not "copying" or whatever.

:icon_rolleyes:
59
Bot Discussion Public / Re: SFWAs Comments
« Last post by PJ Post on November 17, 2023, 03:06:02 AM »
AI still doesn't copy IPs. There is no theft. That's not how it works.

The SFWA has always staunchly supported traditional publishing, especially the old corduroy-sport-coat-with-leather-elbows-and-fruity-pipe-tobacco ways, and only accepted indies once they absolutely had no other choice. They have a bit of a country club mentality, and a rather pretentious one at that. So, of course, they're going to rail against AI; they're grasping at straws to remain relevant. I'm sure they'd ban self-publishing altogether if they could.

And AI disclosures don't protect consumers; they protect the self-image of overly insecure traditional writers. People don't care how stuff is made, where, or by whom. And once AI turns the corner on narrative quality, that's it, game over.

As I've said, writers who have something to say with their stories will be fine; they'll be fine way out on the fringes of the market, but fine just the same. For example, indie musicians still produce albums because that's how their idols used to do it back in the day, even though the current market has overwhelmingly shifted back to singles - but those albums are still being sold. The fringes have always provided opportunity.

Human writers and musicians and artists and photographers and sculptors and painters and illustrators and filmographers and crafters will always have a market because other humans recognize their talent. We don't need protection from AI to do what we do, nor to share it with our audiences. It's similar to the precision cobbler, they're still out there, making shoes and serving their market, but the vast majority of shoes are mass-produced under fairly questionable conditions, and yet - no warnings. And it's not a secret. Everyone knows. Nobody cares.

They won't care about AI either.
60
Bot Discussion Public / Re: SFWAs Comments
« Last post by littleauthor on November 16, 2023, 07:36:33 AM »
From the article:

"These systems would not exist without the work of creative people, and certainly would not be capable of some of their more startling successes. However, the researchers who have developed them have not paid due attention to this debt. Everyone else involved in the creation of these systems has been compensated for their contributionsthe manufacturers of the hardware on which it runs, the utility companies that generate their electrical power, the owners of their data centers and offices, and of course the researchers themselves. Even where free and open source software is used, it is used according to the licenses under which the software is distributed as a reflection of the legal rights of the programmers. Creative workers alone are expected to provide the fruits of their labor for free, without even the courtesy of being asked for permission. Our rights are treated as a mere externality."

THIS. When I saw a ChatGPT rep on KBoards actively encouraging authors to try out the program, giving them step-by-step instructions on how to train it, I lost it. Many, many writers are still clueless when it comes to this level of creative theft. Tech bros can't create, but they are making money hand over fist off the backs of those of us who can.