PJ, we'll have to agree to disagree.
I agree that there will always be a place for writers producing quality work. But that's partly because I don't believe that AI will just keep improving. I think there are inherent constraints that make such a progression unlikely. (And the song you're using as an example is pretty but generic. Also, "bright blue hair"? Uh...)
Will the publishing industry implode? Maybe, but that's not a certainty. And if AI becomes dominant enough to create that outcome, I'm not sure we'll survive it, either. The upscale tendency of niche markets that demand human-made products may change, but that's not a certainty, either. In other fields, human-made products tend to be more expensive. Will books be different? Maybe, but you've consistently resisted the idea that publishing is different. And if books fall into the same pattern, then yes, luxury branding will be an issue. And the self-published author is unlikely to ever be the luxury brand.
For everyone outside the household names, if you are right, and AI can produce material that's good enough, how large a population will that leave that isn't content with good enough? Will it necessarily include all of our tribes? Unless we're adopting a mystical outlook in which tribes are like soulmates, I don't know how we can be sure of that--or anything. (The whole everyone-has-a-tribe idea seems based on magical thinking to me.) And assuming that, publishing industry or not, people don't develop mass amnesia, the big names will still have the advantage in terms of getting attention. Stephen King failed with "The Plant" because people weren't ready to deal with authors individually, and the infrastructure wasn't there. Today, he could walk away from his publisher, publish through KDP, and make a fortune. Or he could set up on Substack, start putting out his new work as serials, and make a fortune that way.
To be clear, I agree with you on the optimism. But I think we arrive at that optimism through very different thought processes.
As far as the ethical point about other industries is concerned, it opens an important discussion. We need to be more aware of those issues. But I will point out that it's relatively easy not to use AI at this point. It's not quite so easy to avoid using some products--or to always know which of them are produced by heinous labor practices and which aren't.
Cellphones were one technology I had no interest in. I broke down and got one when I started needing ride services like Uber. But even before that, my employer was assuming everyone had a cellphone, for example, by using them as the only conduit for some emergency information. It's now impossible to do business with some companies without one.
Using your smartphone example, yes, 50% of them are produced in China, where working conditions are often unsafe. Unfortunately, banning smartphones manufactured in China would create other problems. China is already becoming harder and harder to work with. And any action we take without careful negotiation will doubtless result in retaliation. Increased unemployment is the best-case result. Arming and otherwise supporting our enemies is the worst case. So a simple moral question becomes complicated by the consequences of those actions (potentially including increasing the likelihood of World War III). Also, even in the event of a boycott or a ban, companies can still market their unsafely made smartphones elsewhere. And exploitive labor practices, if no longer usable in one industry, can simply result in the transfer of exploited labor to another. An individual society, even one as powerful as the US, has only so much leverage. We can keep our own companies from doing business using unsafely made goods. But we have limited influence on what other countries do.
That's assuming we can always pinpoint the source of the goods we use. The only 100% safe option is to live on a self-sustaining farm with enough resources to grow our own food, make our own clothes, tools, etc. That's not even remotely practical for most people.
What we need to do is keep pushing in the direction of health, safety, and environmental soundness, but that takes time and delicate negotiation to avoid creating as many problems as we solve.
AI regulation is in some ways a much easier problem to solve. Not the most important problem, but one whose solution is not quite as fraught.