Recent Posts

Pages: « 1 2 3 4 5 6 7 8 9 10 »
81
Bot Discussion Public / Re: AI book piracy lawsuit payout
« Last post by Lorri Moulton on October 30, 2025, 07:20:46 AM »
Quote from: alhawke on October 30, 2025, 06:02:32 AM
I get spammed 2-3 emails PER DAY now from fake book readers, reviewers and agents. Some of them are so thorough that they're now including photos. I don't know if the photos are real. If I said that to you only 2 yrs ago, you'd probably think I'm paranoid and crazy. But I truly believe they're now making up photos.

I don't believe any of these.  If someone really wanted to reach me, they'd probably contact me through my website...not through social media or my email.

Like my grandmother used to say, "If it's too good to be true, run the other way." LOL
82
Bot Discussion Public / Re: AI book piracy lawsuit payout
« Last post by Post-Doctorate D on October 30, 2025, 06:57:07 AM »
Quote from: alhawke on October 30, 2025, 06:02:32 AM
It feels as if AI is slowly taking over social media. A huge part of new content seems to be AI. So we're getting to a point where you can't trust what your eyes see as real anymore. This goes with news and PM/email contacts.

Facebook videos are like Dumb and Dumber.  There are videos created by real people that are dumb and then there are videos created by AI that are dumber.
83
Bot Discussion Public / Re: AI book piracy lawsuit payout
« Last post by cecilia_writer on October 30, 2025, 06:50:24 AM »
Just the other day I read quite an encouraging article (possibly on the BBC news website, possibly not!) about the possibility of robots taking on some tasks like cleaning - there are already so-called robotic vacuum cleaners, of course - and even caring for older people. Though of course some older people might not take to the idea. The comments on the article were generally quite positive.
84
Bot Discussion Public / Re: AI book piracy lawsuit payout
« Last post by alhawke on October 30, 2025, 06:02:32 AM »
Since we've turned this a bit to AI, I want to comment on something I've seen in the past few months.

It feels as if AI is slowly taking over social media. A huge part of new content seems to be AI. So we're getting to a point where you can't trust what your eyes see as real anymore. This goes with news and PM/email contacts.

I get spammed 2-3 emails PER DAY now from fake book readers, reviewers and agents. Some of them are so thorough that they're now including photos. I don't know if the photos are real. If I said that to you only 2 yrs ago, you'd probably think I'm paranoid and crazy. But I truly believe they're now making up photos.

Most recently, there was one I wasn't sure was real or not, so I ironically scanned the message through ChatGPT. Then I came up with a ChatGPT response to email back. Pretty soon the machines will just be talking to each other. :icon_rofl:
85
Bot Discussion Public / Re: AI book piracy lawsuit payout
« Last post by Post-Doctorate D on October 30, 2025, 03:56:00 AM »
Quote from: Bill Hiatt on October 29, 2025, 11:55:44 PM
There aren't actually too many pro-AI trends in this study. Majorities do see a positive role for AI in
Quote
Forecasting the weather (74%)
Searching for financial crimes (70%)
Searching for fraud in government benefits claims (70%)
Developing new medicines (66%)
Identifying suspects in a crime (61%)
In other words, people are willing to let AI do statistical analysis or medical research, but they went thumbs down on other applications.

In general, is AI more risky or more beneficial? 57% said more risky, 25% said more beneficial. 60% want to have more say over how AI is used. Majorities or pluralities say that AI will result in decline of human skills related to thinking creatively, forming relationships, making decisions, and solving problems. Interestingly for us, 76% of Americans say
Quote
it's extremely or very important to be able to tell if pictures, videos and text were made by AI or people.
So much for the idea that fans don't care. Now, it is true that the question wasn't designed to test reactions to creative products partially produced by AI (which would be most of them). But it's reasonable to assume that products that are mostly AI would be highly suspect. And it's also easy to see why the industry goes bonkers over potential labeling and disclosure requirements.

There are a lot of fake videos (that is, AI-created videos) on Facebook and, when you look at the comments, a lot of people call them out.  Sometimes they are labeled with "Sora 2" or whatever, but other times they are not.  But, so far, there are often tells that reveal they were created by AI.

As some of us here have argued, there really is no need for AI in creative fields.  For many of us, our whole lives we have been promised a future of automation where computers and robots would do all the work, which would free humans to do all our creative pursuits, like painting, writing, drawing or whatever.  Instead, we are getting a lot of the opposite.  And, fortunately, there seems to be growing backlash against it, which is also probably why a lot of "creatives" want to hide the fact they are using AI to do their writing or illustrating or whatever.  If people want authenticity, they aren't going to get it from someone using AI.  They're just not, no matter how much people want to jump up and down and claim that people don't care.

There's already a term for it: AI slop.

On the flip side, most of us are okay with using AI for data analysis, identifying fraud, medical research, etc.  Many of those things were done on computers before AI.  AI is just the current buzzword.  It's a more advanced set of algorithms and interactivity than what we had before, but, despite the name, it's still not intelligence.  So, I do believe that it is probably true that AI will create a lot of new jobs because people--actual people--will need to oversee and check and verify what these AI tools produce.

But, so far, despite all the promises of AI, AI still has:

1) NOT cured cancer
2) NOT figured out the actual identity of D.B. Cooper
3) NOT created any reliable, sustainable means of generating energy such as will be needed for all these AI data centers that, so far, seem to largely focus on creating fake cat videos
4) NOT figured out the identity of Jack the Ripper
5) NOT produced an original thought

The list could go on.  If you're single, can AI find your perfect match?  Nope.  If you're broke, can AI develop a plan for you to earn $500 per day?  Nope.  If you have heart failure, can AI find a way to reverse it?  Nope.  If your dog is lost, can AI find it?  Nope.

But, AI can be trained on thousands and thousands of copyrighted works in order to create derivatives of those works to compete with the original works it "trained" on without the permission of or compensation to the creators of any of those works.

And, oh yes, AI will be used in military applications to destroy targets and kill "enemy" soldiers without human oversight.

So, yeah, we get promises of utopia yet the primary uses of AI so far are to copy and steal creative works from people and to kill people.  All while using tons and tons of energy.

But, it's okay because it helps some people develop more effective ads for their wares and write their eMails for them.


There are two other jokers in the deck--whether or not the AI bubble bursts, and how many times AI screws up publicly.

There was an incident several weeks ago that has apparently already dropped off the radar.  At one company, their AI deleted everything on their servers.  I don't remember whether it also deleted, or partially deleted, the backups.

That sort of thing would give rational people pause over giving too much control over their systems to AI, which may be why the story hasn't gotten more traction.
86
Bot Discussion Public / Re: AI book piracy lawsuit payout
« Last post by Bill Hiatt on October 29, 2025, 11:55:44 PM »
Nothing has changed--except that public awareness is greater.

Politicians can manipulate people on a lot of issues, but they can't change people's perceptions about certain things. If you're paying more at the grocery store, for example, they can't convince you that you're actually paying less. (Not that they don't occasionally try.) And when you're out of a job, they can't convince you that you actually have one.

The last election demonstrates how voters prioritize personal issues over national or global ones. Economic growth was good, especially when compared to other industrialized nations. But the growth was unevenly spread. Drilling down to the county level, areas that voted for Harris were responsible for 65% of the gross domestic product. Areas that voted for Trump were responsible for 35% of it. In other words, people most positively affected by economic growth supported the party in power. People least affected supported the party out of power and couldn't have cared less about the overall stats.

As the detrimental effects of AI on employment become more and more visible--especially for white collar voters who tend to have higher participation rates--politicians will have to pay more attention to AI, like it or not. We can already see this happening in both parties. Continuing increases in unemployment will only make it more intense.

Consider the recent Pew survey, which shows an increase in public unease about AI. https://www.pewresearch.org/science/2025/09/17/how-americans-view-ai-and-its-impact-on-people-and-society/

In 2021, 37% of people were more concerned than excited about AI. In 2025, 50% said more concerned than excited, and only 10% said more excited than concerned. You don't need a political consultant to tell you that's not the way you want to see those numbers trending. And the Pew survey didn't even ask questions related specifically to employment.

There aren't actually too many pro-AI trends in this study. Majorities do see a positive role for AI in
Quote
Forecasting the weather (74%)
Searching for financial crimes (70%)
Searching for fraud in government benefits claims (70%)
Developing new medicines (66%)
Identifying suspects in a crime (61%)
In other words, people are willing to let AI do statistical analysis or medical research, but they went thumbs down on other applications.

In general, is AI more risky or more beneficial? 57% said more risky, 25% said more beneficial. 60% want to have more say over how AI is used. Majorities or pluralities say that AI will result in decline of human skills related to thinking creatively, forming relationships, making decisions, and solving problems. Interestingly for us, 76% of Americans say
Quote
it's extremely or very important to be able to tell if pictures, videos and text were made by AI or people.
So much for the idea that fans don't care. Now, it is true that the question wasn't designed to test reactions to creative products partially produced by AI (which would be most of them). But it's reasonable to assume that products that are mostly AI would be highly suspect. And it's also easy to see why the industry goes bonkers over potential labeling and disclosure requirements.

All of that said, abstract issues don't move voters as much as concrete ones. So much depends on how AI affects tangible things like the job market. If the job market becomes robust despite AI, then it likely won't be a major issue in 2028. If, as I think more likely, the job market is declining in 2028, and AI is playing a visible role in that, politicians will be running for cover--which means, at the very least, more AI regulation. With anti-AI sentiment in at least some areas far greater than pro-AI sentiment, it's going to be hard for politicians to be pro-AI or even silent.

There are two other jokers in the deck--whether or not the AI bubble bursts, and how many times AI screws up publicly. In the former case, the amount of money on the table is going to shrink. AI stock is wildly overvalued based on what AI can do now. If it doesn't realize its potential fast enough, the bubble will burst. In the latter case, despite the earlier snafus in the legal profession, we now have a huge public disclosure of clerks using unchecked AI output to provide precedents for judges to use in rulings. New regulations are now being put in place for that kind of issue. Each disclosure like that fans anti-AI sentiment, even though human error is obviously involved.

I think some kind of process of reining AI in a bit is more probable than not. And as for money on the table, it's not going to matter if voters get mad enough to flip the table over.

But of course, even the 2026 election is over a year away, let alone the 2028 one. Lots can happen. Consider all the people who polls said were overwhelming favorites for nomination and good bets for election but who didn't end up in the White House: Howard Dean (D-2004), Rudy Giuliani (R-2008), Hillary Clinton (D-2008, 2016), Jeb Bush (R-2016). Three of the four didn't even come close to the nomination, let alone election. There are other examples. The only sure thing in politics is that there is no sure thing.
87
Bot Discussion Public / Re: AI book piracy lawsuit payout
« Last post by PJ Post on October 29, 2025, 10:46:47 PM »
When this all started, I said AI was inevitable because there was just too much money on the table. Nothing has changed.
88
Bot Discussion Public / Re: AI book piracy lawsuit payout
« Last post by Hopscotch on October 29, 2025, 06:17:03 AM »
Ah, Bill's an optimist.  Or maybe just a hopefulist.  But, as the philosopher Carl Hiaasen says, "In life, always assume the worst."  That's what I expect.
89
Bot Discussion Public / Re: AI book piracy lawsuit payout
« Last post by Post-Doctorate D on October 29, 2025, 04:53:26 AM »
I wish we/they/someone would come up with a more accurate term than "AI."

When we generally think of AI, we think of examples like HAL, Data, KITT, K-9 or various other characters from scifi with artificial intelligence.  Characters like that are able to think much the way people do.  They can reason and be reasoned with.

What is currently labeled as "AI" is nothing like that.  There is no intelligence.  It has no understanding.  You cannot reason with it any more than you can reason with a Magic 8 Ball.

There are AI researchers who argue that we will never achieve true artificial intelligence using the methods commonly used for developing current "AI" tools--that the approach will eventually reach a dead end.

A thousand monkeys typing for a thousand years may eventually produce the works of Shakespeare, but they won't have any understanding of the symbols on the page.  And that's pretty much where we are with current "AI."

That's not to say a rogue "AI" couldn't cause havoc.  It's kind of like a bear trap in the woods.  It may trap a bear, but it will also trap anything else that sets it off, whether human or deer or whatever.
90
Bot Discussion Public / Re: AI book piracy lawsuit payout
« Last post by Bill Hiatt on October 29, 2025, 04:18:48 AM »
An AI encyclopedia checked by the AI that created it? What could possibly go wrong?

Meanwhile, I notice that Amazon's 14,000-person layoff attributed to AI, while certainly not the first, is getting a lot of media attention. If this trend continues, particularly in a job market that is not especially robust to begin with, I would expect AI to be a major issue in 2028. And/or people's constant fretting over the AI bubble bursting may become a self-fulfilling prophecy, perhaps in conjunction with a rising anti-AI political climate.