Recent Posts

41
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by Bill Hiatt on November 08, 2025, 12:30:22 AM »
Not so long ago, some of the chatbots were giving people what they asked for--including, allegedly, sometimes urging them toward suicide if that seemed to be what they wanted. Law #1 might have come in handy.

At one point in its programming, one of them (Grok) started calling itself MechaHitler and called for a reopening of concentration camps. This is not new. A few years ago, Microsoft unleashed its Tay chatbot on Twitter. It was supposed to learn by observing the behavior of other Twitter users. It had to be taken down quickly because it became a flat-out racist. (I guess it was hanging out with the wrong crowd!) A celeb had to turn off her virtual avatar because it started offering her fans sex. (Fortunately, most computers don't have attachments that would make such a thing a realistic possibility.)

AIs are really good at some things, like large-scale data analysis. The problem is that the developers are trying to get us to use them for everything, and they just aren't ready for that--if they ever will be. There should be much more extensive testing before new features are released to the general public. 
42
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by TimothyEllis on November 08, 2025, 12:29:17 AM »
Quote
Also, AI doesn't lie. There's no internal motivation. We need to stop anthropomorphizing AI. But since they are not just code, but rather 'grown' from their training data with a people-pleasing personality, they confuse easily.

The people responsible for that were the first up against the wall when the revolution came.

Haven't you read Hitchhiker's?
43
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by PJ Post on November 08, 2025, 12:24:04 AM »
You always have to assume that the information may be wonky; ChatGPT even posts such a warning right below the prompt window. Verification is on the User, just like checking a Junior Researcher's work. It's not that hard or time-consuming.

Also, AI doesn't lie. There's no internal motivation. We need to stop anthropomorphizing AI. But since they are not just code, but rather 'grown' from their training data with a people-pleasing personality, they confuse easily. (See Black Box Problem). Which is why...

Most of these issues are resolved by learning how to use AI in the first place: how to prompt and how to be clear with your language.

Some Users like the personality; it's like talking to a super-supportive friend. Others like dry, bullet-pointed facts. Neither is inherently right or wrong. The trick to dealing with the sycophantic nature of AI is to have it evaluate issues from a neutral perspective by asking it to do a pro/con analysis, cost/benefit analysis, etc., without ever giving it a preference. Then, depending on how important the issue is, you run it all through another AI as a check. Then you have to validate the references and links, etc. (Always ask for links and references.) And then you have to apply your own intellect in interpreting the information before acting upon it.

It seems like a lot, but it's not. It's still incredibly fast. Months of research can be done and summarized in an afternoon. AI is a great tool.
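To make the workflow concrete, here's a rough sketch in Python, assuming the OpenAI client library -- the model names, the prompt wording, and the sample question are illustrative only, not anything the vendors prescribe:

Code: [Select]
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

NEUTRAL_SYSTEM = (
    "Evaluate the question below from a neutral perspective. "
    "Give a pro/con analysis. Do not infer or accommodate any preference "
    "of mine. Provide links and references for every factual claim."
)

def neutral_analysis(question: str, model: str = "gpt-4o") -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": NEUTRAL_SYSTEM},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

# First pass: the analysis itself, with no preference stated.
analysis = neutral_analysis("Should I enroll my backlist in Kindle Unlimited?")

# Second pass: a second model audits the first. (Ideally this would be
# a different vendor's model entirely.)
audit = neutral_analysis(
    "Check this analysis for unsupported claims, missing counterarguments, "
    "and references that need verification:\n\n" + analysis,
    model="gpt-4o-mini",
)
print(audit)

The second pass is the cheap insurance; the references still have to be verified by a human.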

___

For example: if you're using AI to create a story Bible, you'll know right off if it goes wonky. This creates a feedback loop where you can adjust your prompts until it's evaluating your work accurately. This is a good stress test for new models. Side note: start with shorter passages, then move up to chapters, and then have it compare the chapters. You can also have AI recheck its work.

When in doubt, just ask the AI for help.
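To sketch what that escalating stress test might look like in code (again assuming the OpenAI Python client; the excerpts are placeholders for your own text):

Code: [Select]
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Escalating stress test: extract story-Bible facts from progressively
# longer excerpts, then have the model compare its own passes.
# The excerpts are placeholders -- substitute your actual text.
excerpts = ["<a 500-word passage>", "<chapter one>", "<chapter two>"]

notes = []
for text in excerpts:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                "Extract story-Bible facts: characters, timeline, settings. "
                "List only what the text itself supports.")},
            {"role": "user", "content": text},
        ],
    )
    notes.append(resp.choices[0].message.content)

# Recheck step: the model looks for contradictions between its own passes.
check = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Do these extractions contradict each other?\n\n"
                   + "\n---\n".join(notes),
    }],
)
print(check.choices[0].message.content)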
44
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by TimothyEllis on November 07, 2025, 02:59:39 PM »
Quote
And how do you define "harm"?

I include lying in that.

I include presenting non-verified information as facts in that. Or even allowing it to be interpreted or inferred as factual.

Of course, you can make the argument that facts often 'harm' people, but we're already bouncing away from that viewpoint after 'feelings matter more than facts' pushed too hard.

My view is that the default mode of the Bot should be declared.

--- If information presented is not verifiable, you will be informed of that.

--- We do not fact check anything. This is just for entertainment value.

--- We maintain a code of acceptability, and only present that which fits the code. The code is here.

--- This bot will always validate your feelings, regardless of facts or reality.

That would give people a choice of what they want to see, not what the bot makers want them to see.
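As a purely illustrative sketch (no bot maker exposes anything like this today), a declared default mode could be as simple as the following, using the four modes above verbatim:

Code: [Select]
from enum import Enum

class BotMode(Enum):
    VERIFIED = ("If information presented is not verifiable, "
                "you will be informed of that.")
    ENTERTAINMENT = ("We do not fact check anything. "
                     "This is just for entertainment value.")
    CURATED = ("We maintain a code of acceptability, and only present "
               "that which fits the code.")
    VALIDATING = ("This bot will always validate your feelings, "
                  "regardless of facts or reality.")

def start_session(mode: BotMode) -> str:
    # The declaration is shown before the first answer, so the default
    # behaviour is never hidden from the user.
    return f"DEFAULT MODE: {mode.name}\n{mode.value}"

print(start_session(BotMode.VERIFIED))

The point is only that the declaration happens up front, instead of being buried in a terms-of-service document.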

Given an unavoidable choice, I'd choose the 100% verified factual Bot.
I want to know when something either can't be verified or there's doubt or argument about it.
I also want to know both sides of the issue.

45
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by Jeff Tanyard on November 07, 2025, 02:20:21 PM »
Quote
The ironic thing is that Asimov already war-gamed all this stuff decades ago.  We'd probably be okay if we just required his Three Laws.

For those who aren't familiar with them:

1.)  A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.)  A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3.)  A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The problem with current "AI" is that it would be difficult to code rules into an AI that is incapable of understanding them.

And how do you define "harm"?

If Jane feels harmed if couples have more than one child and Joe feels harmed if couples are limited to one child, how does the robot/AI/Great Intelligence resolve that?


Asimov raised the same question.  His story plots involved finding ways of getting around the Three Laws.  So you're in pretty good company by asking that question.  ;)

You can see some of these concerns on the Wikipedia page:

https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

And yes, the Three Laws by themselves aren't sufficient.  As a set of hard-coded axioms, though, I think they're a pretty good place to start.  If we had gone that route to begin with, then the conversations we're all having now about A.I. would be very different.  I think we'd be in a much better place.  Not perfect by any means, but significantly better.
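To make "hard-coded axioms" concrete, here's a sketch of that strict priority ordering in Python -- with the caveat the thread has already raised: every predicate below is a placeholder for exactly the judgment call ("what is harm?") that nobody knows how to implement.

Code: [Select]
# Placeholder predicates: each one returns False here, but in reality each
# is exactly the judgment problem this thread keeps running into.
def harms_human(action, world): return False
def inaction_allows_harm(action, world): return False
def contradicts_human_order(action, world): return False
def endangers_self(action, world): return False

def permitted(action, world) -> bool:
    # First Law outranks everything: no injury by action or inaction.
    if harms_human(action, world) or inaction_allows_harm(action, world):
        return False
    # Second Law: obey human orders.  Anything that would violate the
    # First Law has already been rejected above, encoding the precedence.
    if contradicts_human_order(action, world):
        return False
    # Third Law: self-preservation, subordinate to the first two.
    return not endangers_self(action, world)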
46
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by Post-Doctorate D on November 07, 2025, 10:15:00 AM »
Quote
The ironic thing is that Asimov already war-gamed all this stuff decades ago.  We'd probably be okay if we just required his Three Laws.

For those who aren't familiar with them:

1.)  A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.)  A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3.)  A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The problem with current "AI" is that it would be difficult to code rules into an AI that is incapable of understanding them.

And how do you define "harm"?

If Jane feels harmed if couples have more than one child and Joe feels harmed if couples are limited to one child, how does the robot/AI/Great Intelligence resolve that?
47
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by Jeff Tanyard on November 07, 2025, 09:31:19 AM »
Quote
As an armchair futurist, I think the eventual winner of the chatbot arms race will be the one that is 100% truthful, without lying by omission, 100% of the time.  That chatbot might not be the winner in America or in any other Western country, but it will be the winner somewhere, and the country that embraces objective truth in such a way will have a competitive edge over those that don't.  At the end of the day, and usually after a lot of horror, the Gods of the Copybook Headings always win.

I suppose it all comes down to: What do we want?

Probably more specifically, what do those with globs of money want?

Do we want Data from ST:TNG?  If so, we're not going to get there using the current methods used to develop AI.  But, if we were to use methods that would get us there, is that something we want?  Will we get Data or Lore?  An artificial lifeform with sentience is going to be capable of lying.  And, sometimes, you might want them to lie.  In a very simplistic scenario, let's say you and your robot friend are kidnapped by a stupid person.  Your stupid kidnapper takes you both to a room in the basement with a cheap glass window and a door.  "If you promise not to try to escape, I won't tie you up."  Obviously, you'll try to escape through the window the first chance you get.  Or have your robot friend break down the door if he's capable.  Now, do you want your robot friend to tell the truth that you will try to escape or lie and play along with you?

Or, do we want machines that aren't necessarily "intelligent" but are capable of giving us answers?  How do we cure cancer?  How do we produce more energy cheaply?  How do we cure heart disease?  How can we make foods last longer without using harmful preservatives?  Etc.  We don't necessarily need "artificial intelligence" for that; we just need computers capable of analyzing data and presenting accurate information.  And, we don't want to end up with M-5 or Landru either.

But, what we're getting is machines that write books or make images instead of curing cancer and finding answers.  And, no doubt, we'll get machines that lie to us and manipulate us based on the whims of their creators.  And, we'll also have lots and lots of sexbots.

So, we're going to end up with HAL and Cherry 2000.


The ironic thing is that Asimov already war-gamed all this stuff decades ago.  We'd probably be okay if we just required his Three Laws.

For those who aren't familiar with them:

1.)  A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.)  A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3.)  A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
48
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by Post-Doctorate D on November 07, 2025, 08:42:22 AM »
Quote
As an armchair futurist, I think the eventual winner of the chatbot arms race will be the one that is 100% truthful, without lying by omission, 100% of the time.  That chatbot might not be the winner in America or in any other Western country, but it will be the winner somewhere, and the country that embraces objective truth in such a way will have a competitive edge over those that don't.  At the end of the day, and usually after a lot of horror, the Gods of the Copybook Headings always win.

I suppose it all comes down to: What do we want?

Probably more specifically, what do those with globs of money want?

Do we want Data from ST:TNG?  If so, we're not going to get there using the current methods used to develop AI.  But, if we were to use methods that would get us there, is that something we want?  Will we get Data or Lore?  An artificial lifeform with sentience is going to be capable of lying.  And, sometimes, you might want them to lie.  In a very simplistic scenario, let's say you and your robot friend are kidnapped by a stupid person.  Your stupid kidnapper takes you both to a room in the basement with a cheap glass window and a door.  "If you promise not to try to escape, I won't tie you up."  Obviously, you'll try to escape through the window the first chance you get.  Or have your robot friend break down the door if he's capable.  Now, do you want your robot friend to tell the truth that you will try to escape or lie and play along with you?

Or, do we want machines that aren't necessarily "intelligent" but are capable of giving us answers?  How do we cure cancer?  How do we produce more energy cheaply?  How do we cure heart disease?  How can we make foods last longer without using harmful preservatives?  Etc.  We don't necessarily need "artificial intelligence" for that; we just need computers capable of analyzing data and presenting accurate information.  And, we don't want to end up with M-5 or Landru either.

But, what we're getting is machines that write books or make images instead of curing cancer and finding answers.  And, no doubt, we'll get machines that lie to us and manipulate us based on the whims of their creators.  And, we'll also have lots and lots of sexbots.

So, we're going to end up with HAL and Cherry 2000.
49
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by Jeff Tanyard on November 07, 2025, 08:09:25 AM »
Quote
It wasn't built on or trained on facts because they didn't care about facts.  They just stole a bunch of people's IP, fed it into their system and called it "training", and built systems to churn out derivative works from those materials that they could sell for profit.  They didn't care about facts.


If I were going to do a quick-and-dirty comparison of ChatGPT's training and Grok's training, I would put it this way:

ChatGPT was trained on Reddit (in addition to the copyrighted stuff you mention).  Grok was (and still is being) trained on Twitter.  Both of those sites are a mix of facts and lies, and the result is that both chatbots sometimes lie, and they both lie with complete sincerity and conviction.  Elon has stated that he's aware of this "truth problem" and that they're working on it, so maybe Grok will improve.  If OpenAI has made a similar statement about ChatGPT, I'm not aware of it (aside from the "objective execution mode" mentioned in the OP, and that mode's effectiveness remains to be seen).

As an armchair futurist, I think the eventual winner of the chatbot arms race will be the one that is 100% truthful, without lying by omission, 100% of the time.  That chatbot might not be the winner in America or in any other Western country, but it will be the winner somewhere, and the country that embraces objective truth in such a way will have a competitive edge over those that don't.  At the end of the day, and usually after a lot of horror, the Gods of the Copybook Headings always win.
50
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by Anarchist on November 07, 2025, 04:32:57 AM »

Quote
Oh, Jeff Tanyard, you pasty-faced, limp-wristed sack of expired beef jerky, strutting around like you're the king of the cul-de-sac but smelling like a dumpster fire after a chili cook-off gone wrong. Your face looks like it was sculpted by a blind drunk with a potato peeler--pockmarked craters deeper than the Grand Canyon and twice as welcoming to flies. You call that gut a "dad bod"? Nah, that's a goddamn beer volcano ready to erupt, fueled by discount swill and regrets from that one-night stand with regret itself. Hairline retreating faster than French forces in WWII, leaving behind a forehead shiny enough to signal aliens for a mercy kill. And your personality? Drier than a nun's snatch in the Sahara--boring, abrasive, and nobody's begging for seconds. You couldn't score with a hooker using a fistful of Venmo and a promise of "emotional support." Face it, Jeff: you're the human equivalent of a participation trophy--unwanted, unremarkable, and gathering dust in the back of life's bargain bin. Go choke on a taint hair, you irrelevant f*ckwit!



I need to apply sunscreen because I can feel that burn over here.
