There aren't actually too many pro-AI trends in this study. Majorities do see a positive role for AI in:
- Forecasting the weather (74%)
- Searching for financial crimes (70%)
- Searching for fraud in government benefits claims (70%)
- Developing new medicines (66%)
- Identifying suspects in a crime (61%)
In other words, people are willing to let AI do statistical analysis or medical research, but they give a thumbs down to other applications.
In general, is AI more risky or more beneficial? 57% said more risky, 25% said more beneficial. 60% want to have more say over how AI is used. Majorities or pluralities say that AI will result in a decline of human skills related to thinking creatively, forming relationships, making decisions, and solving problems. Interestingly for us, 76% of Americans say it's extremely or very important to be able to tell if pictures, videos and text were made by AI or people.
So much for the idea that fans don't care. Now, it is true that the question wasn't designed to test reactions to creative products partially produced by AI (which would be most of them). But it's reasonable to assume that products that are mostly AI would be highly suspect. And it's also easy to see why the industry goes bonkers over potential labeling and disclosure requirements.
There are a lot of fake videos (that is, AI-created videos) on Facebook and, when you look at the comments, a lot of people call them out. Sometimes they are labeled with "Sora 2" or whatever, but other times they are not. But, so far, there are often tells that reveal they were created by AI.
As some of us here have argued, there really is no need for AI in creative fields. Many of us have spent our whole lives being promised a future of automation in which computers and robots would do all the work, freeing humans for creative pursuits like painting, writing, drawing or whatever. Instead, we are getting a lot of the opposite. And, fortunately, there seems to be growing backlash against it, which is also probably why a lot of "creatives" want to hide the fact that they are using AI to do their writing or illustrating or whatever. If people want authenticity, they aren't going to get it from someone using AI. They're just not, no matter how much people want to jump up and down and claim that people don't care.
There's already a term for it: AI slop.
On the flip side, most of us are okay with using AI for data analysis, identifying fraud, medical research, etc. Many of those things were done on computers before AI. AI is just the current buzzword. It's a more advanced set of algorithms and interactivity than what we had before, but, despite the name, it's still not intelligence. So, I do believe that it is probably true that AI will create a lot of new jobs because people--actual people--will need to oversee and check and verify what these AI tools produce.
But, so far, despite all the promises of AI, AI still has:
1) NOT cured cancer
2) NOT figured out the actual identity of D.B. Cooper
3) NOT created any reliable, sustainable means of generating energy such as will be needed for all these AI data centers that, so far, seem to largely focus on creating fake cat videos
4) NOT figured out the identity of Jack the Ripper
5) NOT produced an original thought
The list could go on. If you're single, can AI find your perfect match? Nope. If you're broke, can AI develop a plan for you to earn $500 per day? Nope. If you have heart failure, can AI find a way to reverse it? Nope. If your dog is lost, can AI find it? Nope.
But, AI can be trained on thousands and thousands of copyrighted works in order to create derivatives that compete with the original works it "trained" on, without permission from or compensation to the creators of any of those works.
And, oh yes, AI will be used in military applications to destroy targets and kill "enemy" soldiers without human oversight.
So, yeah, we get promises of utopia yet the primary uses of AI so far are to copy and steal creative works from people and to kill people. All while using tons and tons of energy.
But, it's okay because it helps some people develop more effective ads for their wares and write their emails for them.
There are two other jokers in the deck--whether or not the AI bubble bursts, and how many times AI screws up publicly.
There was an incident several weeks ago that apparently has already dropped off the radar. At one company, an AI tool deleted everything on their servers. I don't remember if it also deleted or partially deleted data on the backups as well.
That sort of thing would give rational people pause over giving too much control over their systems to AI, which may be why the story hasn't gotten more traction.