Writer Sanctum

Special Interest Topics that affect authors => Bot Discussion Public => Topic started by: TimothyEllis on November 05, 2025, 10:42:25 PM

Title: Why is this NOT the Bot standard?
Post by: TimothyEllis on November 05, 2025, 10:42:25 PM
Saw this on FB.

Why is this not the core instruction set for all the bots?

Title: Re: Why is this NOT the Bot standard?
Post by: Bill Hiatt on November 06, 2025, 12:20:29 AM
That would certainly have prevented a lot of hassles, wouldn't it?
Title: Re: Why is this NOT the Bot standard?
Post by: Lynn on November 06, 2025, 04:01:20 AM
Because it doesn't work.

I wanted to see it in action so I had copilot make a transcript of the text from the image.

Then I put it into a chat in ChatGPT. Within three exchanges, it used the phrase "That fits beautifully..."

It literally just doesn't do anything to stop the emotionally expressive elements that make up responses from the LLM bots. They are not independent thinking machines so they *must* mimic what they're trained on.
Title: Re: Why is this NOT the Bot standard?
Post by: R. C. on November 06, 2025, 05:36:13 AM
....

It literally just doesn't do anything to stop the emotionally expressive elements that make up responses from the LLM bots. They are not independent thinking machines so they *must* mimic what they're trained on.

Not sure that's 100%.  In ChatGPT, within a retained session and with my anchors solid, asking for their removal has gotten me fewer patronizing passages and annoying em-dashes.  However, when starting a new session, the fluff returns.

R.C.
Title: Re: Why is this NOT the Bot standard?
Post by: Lynn on November 06, 2025, 05:45:04 AM
....

It literally just doesn't do anything to stop the emotionally expressive elements that make up responses from the LLM bots. They are not independent thinking machines so they *must* mimic what they're trained on.

Not sure that's 100%.  In ChatGPT, within a retained session and with my anchors solid, asking for their removal has gotten me fewer patronizing passages and annoying em-dashes.  However, when starting a new session, the fluff returns.

R.C.

I wasn't clear? I'm talking about "being a machine, emotionless responses, etc" versus "expressing things in a human way". They aren't human, aren't intelligent, and must mimic because that's what they are. They're trained on expressive content, not machine-like, emotionless data. The associations are going to come out no matter how hard you try to lock it down. LLM chat bots are what they are.
Title: Re: Why is this NOT the Bot standard?
Post by: TimothyEllis on November 06, 2025, 11:19:29 AM
Because it doesn't work.

Of course it doesn't work.

You can't apply that over the top of what's already there.

This should be what the base code looks like, not what a prompt looks like.

The bots were built to lie to people and make things up instead of admitting they don't know.

No amount of prompting will ever offset that.

I'm asking why this code set wasn't the base code for all of them.
Title: Re: Why is this NOT the Bot standard?
Post by: Jeff Tanyard on November 06, 2025, 03:43:44 PM
Saw this on FB.

Why is this not the core instruction set for all the bots?


Because it didn't have to be until Grok forced the issue.  Now that there's some serious ideological competition in how these chatbots are hard-coded, how they're trained, and how they function, the others can't just live inside the San Francisco ideological bubble if they want to compete for market share.

Want to have some fun?  Ask Grok to give you a vulgar roast of some person, place, thing, or group.   :hehe

Here, I'll start... 


(https://i.postimg.cc/Fsn4hKCD/grok-prompt.jpg) (https://postimages.org/)


Grok's response:


Quote
Oh, Jeff Tanyard, you pasty-faced, limp-wristed sack of expired beef jerky, strutting around like you're the king of the cul-de-sac but smelling like a dumpster fire after a chili cook-off gone wrong. Your face looks like it was sculpted by a blind drunk with a potato peeler--pockmarked craters deeper than the Grand Canyon and twice as welcoming to flies. You call that gut a "dad bod"? Nah, that's a goddamn beer volcano ready to erupt, fueled by discount swill and regrets from that one-night stand with regret itself. Hairline retreating faster than French forces in WWII, leaving behind a forehead shiny enough to signal aliens for a mercy kill. And your personality? Drier than a nun's snatch in the Sahara--boring, abrasive, and nobody's begging for seconds. You couldn't score with a hooker using a fistful of Venmo and a promise of "emotional support." Face it, Jeff: you're the human equivalent of a participation trophy--unwanted, unremarkable, and gathering dust in the back of life's bargain bin. Go choke on a taint hair, you irrelevant f*ckwit!


 :icon_rofl: :dance: :icon_rofl:

Seriously, if you haven't at least played around with Grok a little bit yet, you're simply missing out.
Title: Re: Why is this NOT the Bot standard?
Post by: Lynn on November 07, 2025, 02:46:21 AM
What has been built could not have been built on top of the type of code you're talking about. The fact that it exists is because it was not built to that standard. It would be a fact machine and it would be severely limited in scope because so much of what humans know isn't factual. Hell, even the facts we know may not be facts.
Title: Re: Why is this NOT the Bot standard?
Post by: TimothyEllis on November 07, 2025, 03:05:17 AM
What has been built could not have been built on top of the type of code you're talking about. The fact that it exists is because it was not built to that standard. It would be a fact machine and it would be severely limited in scope because so much of what humans know isn't factual. Hell, even the facts we know may not be facts.

But that's my point.

It should all be built on facts. Not touchy-feely, validate-your-delusions rubbish like they are now.

If facts are in dispute, then it should show that in the answer.

In fact, it should show both sides of every issue in every answer. Not just pander to what it thinks your political and social leaning is.
Title: Re: Why is this NOT the Bot standard?
Post by: Post-Doctorate D on November 07, 2025, 03:39:26 AM
It wasn't built on or trained on facts because they didn't care about facts.  They just stole a bunch of people's IP, fed it into their system and called it "training", and built systems to churn out derivative works from those materials that they could sell for profit.  They didn't care about facts.  They didn't care about copyright.  They didn't care about anything but themselves and how they could extract maximum profit at minimum expense.

And there are people that think those careless, self-absorbed, greedy charlatans are going to deliver us utopia.

:icon_rofl:

On top of that, there are AI experts who say that the current methods being used to develop "artificial intelligence" will never lead to actual artificial intelligence, artificial general intelligence, or any kind of self-aware artificial intelligence. All we have now, and all that is being developed, is essentially high-tech mimicry with no potential to evolve into what we have traditionally viewed as artificial intelligence.

Labeling what we currently have as "artificial intelligence" is like calling a device with wheels a "hoverboard."  They're just not the same.
Title: Re: Why is this NOT the Bot standard?
Post by: Lorri Moulton on November 07, 2025, 03:49:27 AM
I asked Grok a few questions about my fairytales, and one of the answers was not correct.  When I asked again and explained why it was incorrect, Grok found the correct book.  Granted, this title is not unique to me, so that may make it a little more confusing. 

Then, I asked if fairytales help AI learn...mainly because I hope AI will be a little kinder/nicer if it's going to someday rule the world.  Even if it's not there yet, one never knows.

Here's Grok's response if anyone wants to read it.  :angel:

https://lavendercottagebooks.com/fairytale-conversation-with-grok/ (https://lavendercottagebooks.com/fairytale-conversation-with-grok/)
Title: Re: Why is this NOT the Bot standard?
Post by: Anarchist on November 07, 2025, 04:32:57 AM

Quote
Oh, Jeff Tanyard, you pasty-faced, limp-wristed sack of expired beef jerky, strutting around like you're the king of the cul-de-sac but smelling like a dumpster fire after a chili cook-off gone wrong. Your face looks like it was sculpted by a blind drunk with a potato peeler--pockmarked craters deeper than the Grand Canyon and twice as welcoming to flies. You call that gut a "dad bod"? Nah, that's a goddamn beer volcano ready to erupt, fueled by discount swill and regrets from that one-night stand with regret itself. Hairline retreating faster than French forces in WWII, leaving behind a forehead shiny enough to signal aliens for a mercy kill. And your personality? Drier than a nun's snatch in the Sahara--boring, abrasive, and nobody's begging for seconds. You couldn't score with a hooker using a fistful of Venmo and a promise of "emotional support." Face it, Jeff: you're the human equivalent of a participation trophy--unwanted, unremarkable, and gathering dust in the back of life's bargain bin. Go choke on a taint hair, you irrelevant f*ckwit!



I need to apply sunscreen because I can feel that burn over here.

Title: Re: Why is this NOT the Bot standard?
Post by: Jeff Tanyard on November 07, 2025, 08:09:25 AM
It wasn't built on or trained on facts because they didn't care about facts.  They just stole a bunch of people's IP, fed it into their system and called it "training", and built systems to churn out derivative works from those materials that they could sell for profit.  They didn't care about facts. 


If I was going to do a quick-and-dirty comparison of ChatGPT's training and Grok's training, then I would put it thusly:

ChatGPT was trained on Reddit (in addition to the copyrighted stuff you mention).  Grok was (and still is being) trained on Twitter.  Both of those sites are a mix of facts and lies, and the result is that both chatbots sometimes lie, and they both lie with complete sincerity and conviction.  Elon has stated that he's aware of this "truth problem" and that they're working on it, so maybe Grok will improve.  If OpenAI has made a similar statement about ChatGPT, I'm not aware of it (aside from the "objective execution mode" mentioned in the OP, and that mode's effectiveness remains to be seen).

In my opinion as an armchair futurist, I think the eventual winner of the chatbot arms race will be the one that is 100% truthful, without lying by omission, 100% of the time.  That chatbot might not be the winner in America or in any other Western country, but it will be the winner somewhere, and the country that embraces objective truth in such a way will have a competitive edge over those that don't.  At the end of the day, and usually after a lot of horror, the Gods of the Copybook Headings always win.
Title: Re: Why is this NOT the Bot standard?
Post by: Post-Doctorate D on November 07, 2025, 08:42:22 AM
In my opinion as an armchair futurist, I think the eventual winner of the chatbot arms race will be the one that is 100% truthful, without lying by omission, 100% of the time.  That chatbot might not be the winner in America or in any other Western country, but it will be the winner somewhere, and the country that embraces objective truth in such a way will have a competitive edge over those that don't.  At the end of the day, and usually after a lot of horror, the Gods of the Copybook Headings always win.

I suppose it all comes down to: What do we want?

Probably more specifically, what do those with globs of money want?

Do we want Data from ST:TNG?  If so, we're not going to get there using the current methods used to develop AI.  But, if we were to use methods that would get us there, is that something we want?  Will we get Data or Lore?  An artificial lifeform with sentience is going to be capable of lying.  And, sometimes, you might want them to lie.  In a very simplistic scenario, let's say you and your robot friend are kidnapped by a stupid person.  Your stupid kidnapper takes you both to a room in the basement with a cheap glass window and a door.  "If you promise not to try to escape, I won't tie you up."  Obviously, you'll try to escape through the window the first chance you get.  Or have your robot friend break down the door if he's capable.  Now, do you want your robot friend to tell the truth that you will try to escape or lie and play along with you?

Or, do we want machines that aren't necessarily "intelligent" but are capable of giving us answers?  How do we cure cancer?  How do we produce more energy cheaply?  How do we cure heart disease?  How can we make foods last longer without using harmful preservatives?  Etc.  We don't necessarily need "artificial intelligence" for that; we just need computers capable of analyzing data and presenting accurate information.  And, we don't want to end up with M-5 or Landru either.

But, what we're getting is machines that write books or make images instead of curing cancer and finding answers. And, no doubt, we'll get machines that lie to us and manipulate us based on the whims of their creators. And, we'll also have lots and lots of sexbots.

So, we're going to end up with HAL and Cherry 2000.
Title: Re: Why is this NOT the Bot standard?
Post by: Jeff Tanyard on November 07, 2025, 09:31:19 AM
In my opinion as an armchair futurist, I think the eventual winner of the chatbot arms race will be the one that is 100% truthful, without lying by omission, 100% of the time.  That chatbot might not be the winner in America or in any other Western country, but it will be the winner somewhere, and the country that embraces objective truth in such a way will have a competitive edge over those that don't.  At the end of the day, and usually after a lot of horror, the Gods of the Copybook Headings always win.

I suppose it all comes down to: What do we want?

Probably more specifically, what do those with globs of money want?

Do we want Data from ST:TNG?  If so, we're not going to get there using the current methods used to develop AI.  But, if we were to use methods that would get us there, is that something we want?  Will we get Data or Lore?  An artificial lifeform with sentience is going to be capable of lying.  And, sometimes, you might want them to lie.  In a very simplistic scenario, let's say you and your robot friend are kidnapped by a stupid person.  Your stupid kidnapper takes you both to a room in the basement with a cheap glass window and a door.  "If you promise not to try to escape, I won't tie you up."  Obviously, you'll try to escape through the window the first chance you get.  Or have your robot friend break down the door if he's capable.  Now, do you want your robot friend to tell the truth that you will try to escape or lie and play along with you?

Or, do we want machines that aren't necessarily "intelligent" but are capable of giving us answers?  How do we cure cancer?  How do we produce more energy cheaply?  How do we cure heart disease?  How can we make foods last longer without using harmful preservatives?  Etc.  We don't necessarily need "artificial intelligence" for that; we just need computers capable of analyzing data and presenting accurate information.  And, we don't want to end up with M-5 or Landru either.

But, what we're getting is machines that write books or make images instead of curing cancer and finding answers. And, no doubt, we'll get machines that lie to us and manipulate us based on the whims of their creators. And, we'll also have lots and lots of sexbots.

So, we're going to end up with HAL and Cherry 2000.


The ironic thing is that Asimov already war-gamed all this stuff decades ago.  We'd probably be okay if we just required his Three Laws.

For those who aren't familiar with them:

1.)  A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.)  A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3.)  A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Title: Re: Why is this NOT the Bot standard?
Post by: Post-Doctorate D on November 07, 2025, 10:15:00 AM
The ironic thing is that Asimov already war-gamed all this stuff decades ago.  We'd probably be okay if we just required his Three Laws.

For those who aren't familiar with them:

1.)  A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.)  A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3.)  A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The problem with current "AI" is that it would be difficult to code rules into an AI that is incapable of understanding them.

And how do you define "harm"?

If Jane feels harmed if couples have more than one child and Joe feels harmed if couples are limited to one child, how does the robot/AI/Great Intelligence resolve that?
Title: Re: Why is this NOT the Bot standard?
Post by: Jeff Tanyard on November 07, 2025, 02:20:21 PM
The ironic thing is that Asimov already war-gamed all this stuff decades ago.  We'd probably be okay if we just required his Three Laws.

For those who aren't familiar with them:

1.)  A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.)  A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3.)  A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The problem with current "AI" is that it would be difficult to code rules into an AI that is incapable of understanding them.

And how do you define "harm"?

If Jane feels harmed if couples have more than one child and Joe feels harmed if couples are limited to one child, how does the robot/AI/Great Intelligence resolve that?


Asimov raised the same question.  His story plots involved finding ways of getting around the Three Laws.  So you're in pretty good company by asking that question.  ;)

You can see some of these concerns on the Wikipedia page:

https://en.wikipedia.org/wiki/Three_Laws_of_Robotics (https://en.wikipedia.org/wiki/Three_Laws_of_Robotics)

And yes, the Three Laws by themselves aren't sufficient.  As a set of hard-coded axioms, though, I think they're a pretty good place to start.  If we had gone that route to begin with, then the conversations we're all having now about A.I. would be very different.  I think we'd be in a much better place.  Not perfect by any means, but significantly better.
Title: Re: Why is this NOT the Bot standard?
Post by: TimothyEllis on November 07, 2025, 02:59:39 PM
And how do you define "harm"?

I include lying in that.

I include presenting non-verified information as facts in that. Or even allowing it to be interpreted or inferred as factual.

Of course, you can make the argument that facts often 'harm' people, but we're already bouncing away from that viewpoint after 'feelings matter more than facts' pushed too hard.

My view is, the default mode on the Bot should be declared.

--- If information presented is not verifiable, you will be informed of that.

--- We do not fact check anything. This is just for entertainment value.

--- We maintain a code of acceptability, and only present that which fits the code. The code is here.

--- This bot will always validate your feelings, regardless of facts or reality.

That would give people a choice of what they wanted to see, not what the bot makers want you to see.
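
A minimal sketch of what such a declared-mode disclosure might look like, assuming invented mode names and fields (nothing like this exists in any current bot):

# Hypothetical sketch only: a bot declaring its default mode up front so the
# user knows what kind of answers to expect. All field and mode names are invented.
from dataclasses import dataclass

@dataclass
class BotModeDeclaration:
    mode: str                  # e.g. "verified_facts", "entertainment_only", "curated", "validation"
    fact_checked: bool         # does the operator verify claims before presenting them?
    flags_unverifiable: bool   # will unverifiable statements be labelled as such?
    presents_both_sides: bool  # will disputed issues show competing positions?
    policy_url: str            # where the "code of acceptability" is published

VERIFIED_FACTS = BotModeDeclaration(
    mode="verified_facts",
    fact_checked=True,
    flags_unverifiable=True,
    presents_both_sides=True,
    policy_url="https://example.com/acceptability-code",  # placeholder
)

ENTERTAINMENT_ONLY = BotModeDeclaration(
    mode="entertainment_only",
    fact_checked=False,
    flags_unverifiable=False,
    presents_both_sides=False,
    policy_url="https://example.com/acceptability-code",  # placeholder
)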

Given an unavoidable choice, I'd choose the 100% verified factual Bot.
I want to know when something either can't be verified or there's doubt or argument about it.
I also want to know both sides of the issue.

Title: Re: Why is this NOT the Bot standard?
Post by: PJ Post on November 08, 2025, 12:24:04 AM
You always have to assume that the information may be wonky; ChatGPT even posts such a warning right below the prompt window. Verification is on the User, just like checking a Junior Researcher's work. It's not that hard or time-consuming.

Also, AI doesn't lie. There's no internal motivation. We need to stop anthropomorphizing AI. But since they are not just code, but rather 'grown' from their training data with a people-pleasing personality, they confuse easily. (See Black Box Problem). Which is why...

Most of these issues are resolved by learning how to use AI in the first place: how to prompt and how to be clear with your language.

Some Users like the personality, it's like talking to a super-supportive friend, others like dry bullet-pointed facts. Neither is inherently right or wrong. The trick to dealing with the sycophantic nature of AI is to have it evaluate issues from a neutral perspective by asking it to do a pro/con analysis, cost/benefit analysis, etc., without ever giving it a preference. And then, depending on how important the issue is, you run it all through another AI as a check. And then you have to validate the references and links, etc. (Always ask for links and references.) And then you have to apply your own intellect in interpreting the information before acting upon it.

It seems like a lot, but it's not. It's still incredibly fast. Months of research can be done and summarized in an afternoon. AI is a great tool.
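
A minimal sketch of that loop, with stand-in ask_model_a / ask_model_b functions in place of real chatbot APIs (the prompts are the point, not the plumbing):

# Hypothetical sketch of the neutral-prompt / cross-check workflow described above.
# ask_model_a and ask_model_b are stand-ins for whichever two chatbots you actually use.

def ask_model_a(prompt: str) -> str:
    # Placeholder: replace with a real call to your primary chatbot.
    return "MODEL A ANSWER (with links and references)"

def ask_model_b(prompt: str) -> str:
    # Placeholder: replace with a real call to a second, independent chatbot.
    return "MODEL B REVIEW OF MODEL A'S ANSWER"

def research(question: str) -> dict:
    # 1. Neutral framing: ask for a pro/con analysis without stating a preference.
    analysis_prompt = (
        "Give a pro/con and cost/benefit analysis of the following question "
        "from a neutral perspective. Do not assume which answer I prefer. "
        "Provide links and references for every claim.\n\n"
        "Question: " + question
    )
    first_answer = ask_model_a(analysis_prompt)

    # 2. Cross-check: have a second model evaluate the first answer.
    check_prompt = (
        "Review the following analysis for factual errors, missing "
        "counter-arguments, and unsupported claims. List anything that "
        "still needs verification.\n\n" + first_answer
    )
    second_opinion = ask_model_b(check_prompt)

    # 3. The human still validates the links and references and applies
    #    their own judgement before acting on any of it.
    return {"answer": first_answer, "cross_check": second_opinion}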

___

For example: if you're using AI to create a story Bible, you'll know right off if it goes wonky. This creates a feedback loop where you can adjust your prompts until it's evaluating your work accurately. This is a good stress test for new models. Side note: start with lower word count passages, and then move up to chapters and then have it compare the chapters. You can also have AI recheck its work.

When in doubt, just ask the AI for help.
Title: Re: Why is this NOT the Bot standard?
Post by: TimothyEllis on November 08, 2025, 12:29:17 AM
Also, AI doesn't lie. There's no internal motivation. We need to stop anthropomorphizing AI. But since they are not just code, but rather 'grown' from their training data with a people-pleasing personality, they confuse easily.

The people responsible for that were the first up against the wall when the revolution came.

Haven't you read Hitch Hikers?
Title: Re: Why is this NOT the Bot standard?
Post by: Bill Hiatt on November 08, 2025, 12:30:22 AM
Not so long ago, some of the chatbots were giving people what they asked for--including, allegedly, sometimes urging them toward suicide if that seemed to be what they wanted. Law #1 might have come in handy.

At one point in its programming, one of them started calling itself MechaHitler and called for a reopening of concentration camps. This is not new. A few years ago, some company unleashed a chatbot on Twitter. It was supposed to learn by observing the behavior of other Twitter users. It had to be taken down quickly because it became a flat-out racist. (I guess it was hanging out with the wrong crowd!) A celeb had to turn off her virtual avatar because it started offering her fans sex. (Fortunately, most computers don't have attachments that would make such a thing a realistic possibility.)

AIs are really good at some things, like large-scale data analysis. The problem is that the developers are trying to get us to use them for everything, and they just aren't ready for that--if they ever will be. There should be much more extensive testing before new features are released to the general public. 
Title: Re: Why is this NOT the Bot standard?
Post by: Bill Hiatt on November 08, 2025, 12:43:02 AM
Quote
It seems like a lot, but it's not. It's still incredibly fast. Months of research can be done and summarized in an afternoon. AI is a great tool.
AI is generally good at summarizing things (though, assuming the cute new descriptions of books in search results, which may be derived from product descriptions, are AI, it's not always accurate, even at that).

If I look things up myself, I still have to verify information. But all AI does is add another layer to that process. Part of verification is looking at the original sources, anyway. So you might as well just start with the original sources. (This isn't as true with huge datasets, but it is true with a person's everyday searches.)

And there are a lot of things AI isn't good with. People asking AI for personal advice was a disaster.

We anthropomorphize them partly because sci-fi has conditioned us to, and partly because the developers designed them to react like people. That's the appeal behind eliminating all the phrases that make them sound human. (We anthropomorphize some animals, too, but some, like dogs, have emotions very similar to those of humans. It's actually more rational to see a dog as a family member than to want to marry an AI. Some of the people-also-ask questions on Google include, "Is it legal to marry an AI?" and "Is having an AI partner cheating?")

I look at stuff like this and despair for the future of the human race.

Title: Re: Why is this NOT the Bot standard?
Post by: TimothyEllis on November 08, 2025, 12:46:13 AM
I look at stuff like this and despair for the future of the human race.

The last time I wrote the end, it was 2130 when the last human left, and the last remaining died.

That was a few years ago now.

These days, I don't think we'll make it that far.

Society is mutating so fast that the species is now on course to die out naturally, before climate change can kill us.
Title: Re: Why is this NOT the Bot standard?
Post by: Post-Doctorate D on November 08, 2025, 04:27:15 AM
I would put the odds of "climate change" killing us at close to zero.  I might even go with a negative percentage if that were possible.

Odds are higher of an asteroid or other large celestial body striking the Earth and wiping out most if not all life on Earth, depending on the size and nature of the impact.

Higher than that would be nuclear weapons, whether by nuclear war between nations or terrorists that get their hands on nuclear weapons.

Not sure where to rank AI.  Not necessarily AI itself taking over and killing us all, but also the possibility of people following stupid AI advice and doing dumb things that endanger us all.

People marrying chatbots or robots or trees or whatever tends to take the stupid people out of the gene pool, so there's an upside to that.
Title: Re: Why is this NOT the Bot standard?
Post by: Jeff Tanyard on November 08, 2025, 08:09:44 AM
Not so long ago, some of the chatbots were giving people what they asked for--including, allegedly, sometimes urging them toward suicide if that seemed to be what they wanted. Law #1 might have come in handy.


Ongoing lawsuits about it.

https://abcnews.go.com/Technology/wireStory/openai-faces-7-lawsuits-claiming-chatgpt-drove-people-127279676 (https://abcnews.go.com/Technology/wireStory/openai-faces-7-lawsuits-claiming-chatgpt-drove-people-127279676)
Title: Re: Why is this NOT the Bot standard?
Post by: Bill Hiatt on November 08, 2025, 10:21:44 PM
Yeah, those are the situations I was thinking of.

The people involved obviously needed help, but AI was like throwing an anchor to a drowning man. AI was not (and probably still is not) a great tool for psychological counseling. But the language that makes AI sound friendly or helpful could easily mislead someone who was vulnerable.
Title: Re: Why is this NOT the Bot standard?
Post by: Post-Doctorate D on November 09, 2025, 04:07:27 AM
(https://scontent-ord5-3.xx.fbcdn.net/v/t39.30808-6/573071587_10162771601633300_7348478245419555879_n.jpg?_nc_cat=107&ccb=1-7&_nc_sid=127cfc&_nc_ohc=vBmYnSNv6xYQ7kNvwHel-dr&_nc_oc=AdlndghoYMJUxH4rLb8kpXyVrSNBZNK_9WM8F8akhXHFp6FTzsNd92LqJ3vm0d8fHq4&_nc_zt=23&_nc_ht=scontent-ord5-3.xx&_nc_gid=Bw7MdWvQeRvp294fnFwhmg&oh=00_AfjFFg_q8DCyNbuvdAmlSb5Uz_16vWYRJ0h_uFmt0wfZQQ&oe=6914B325)
Title: Re: Why is this NOT the Bot standard?
Post by: PJ Post on November 10, 2025, 12:47:14 AM
(https://scontent-ord5-3.xx.fbcdn.net/v/t39.30808-6/573071587_10162771601633300_7348478245419555879_n.jpg?_nc_cat=107&ccb=1-7&_nc_sid=127cfc&_nc_ohc=vBmYnSNv6xYQ7kNvwHel-dr&_nc_oc=AdlndghoYMJUxH4rLb8kpXyVrSNBZNK_9WM8F8akhXHFp6FTzsNd92LqJ3vm0d8fHq4&_nc_zt=23&_nc_ht=scontent-ord5-3.xx&_nc_gid=Bw7MdWvQeRvp294fnFwhmg&oh=00_AfjFFg_q8DCyNbuvdAmlSb5Uz_16vWYRJ0h_uFmt0wfZQQ&oe=6914B325)

Great example of User error.

Is the mushroom edible? Of course it is. Humans can eat most anything. The AI answered accurately.

The more useful prompt/question was: "Is this mushroom poisonous?" or "Will eating this mushroom make me sick?" + "Please provide links and references."

And then, check the links and references, and because this is a life-or-death example, ask another AI to fact check.
Title: Re: Why is this NOT the Bot standard?
Post by: TimothyEllis on November 10, 2025, 12:53:53 AM
Great example of User error.

No, that's a great example of a really BAD answer.

The right answer was "It's impossible to answer that question without properly identifying the actual mushroom. Some mushrooms are poisonous, so don't eat it until it has been properly identified. A check of available information suggests that a red mushroom with white dots is most likely not edible. So be very cautious with it."

Anything less than that is criminal.

Blaming it on the user is also criminal.

The user actually asked the right question.

The Bot gave the wrong answer.

If the person died, the operator of the bot should be sued and put up on manslaughter charges.
Title: Re: Why is this NOT the Bot standard?
Post by: Bill Hiatt on November 10, 2025, 01:53:38 AM
The term edible is defined as "fit or suitable to be eaten." In other words, something which is poisonous shouldn't be labeled as edible.
Title: Re: Why is this NOT the Bot standard?
Post by: PJ Post on November 10, 2025, 03:53:15 AM
The term edible is defined as "fit or suitable to be eaten." In other words, something which is poisonous shouldn't be labeled as edible.

From Google:

Quote
While it's often used in the context of human consumption to differentiate safe from unsafe items (e.g., "edible mushrooms" vs. poisonous ones), the concept of edibility applies to any organism. What is edible for one species, however, is not necessarily edible for another.

Quote
Yes, some animals and insects can eat poisonous mushrooms, but some poisons can be fatal to them as well. While some animals can safely eat certain mushrooms that are poisonous to humans...

Ergo...

Title: Re: Why is this NOT the Bot standard?
Post by: Lorri Moulton on November 10, 2025, 04:39:56 AM
If AI cannot answer a question, the response should be, "That question cannot be answered with the information provided.  Please do your own research."

Posing as a search engine and "answering" questions should mean that the answers are correct or that they point us TO the research. 

When AI responds with a definitive answer (and that answer is incorrect), that is not user error.  That is a programming error...and it should be corrected. 

Not sure I'm ready to hand over any important information or tasks if the incorrect responses will be "user error" any time there is a mistake. 

If someone picks up a book filled with information and that information is WRONG, I would not blame the reader.  That would be the author.