Recent Posts

31
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by PJ Post on November 10, 2025, 03:53:15 AM »
Quote
The term edible is defined as "fit or suitable to be eaten." In other words, something which is poisonous shouldn't be labeled as edible.

From google:

Quote
While it's often used in the context of human consumption to differentiate safe from unsafe items (e.g., "edible mushrooms" vs. poisonous ones), the concept of edibility applies to any organism. What is edible for one species, however, is not necessarily edible for another.

Quote
Yes, some animals and insects can eat poisonous mushrooms, but some poisons can be fatal to them as well. While some animals can safely eat certain mushrooms that are poisonous to humans...

Ergo...

32
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by Bill Hiatt on November 10, 2025, 01:53:38 AM »
The term edible is defined as "fit or suitable to be eaten." In other words, something which is poisonous shouldn't be labeled as edible.
33
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by TimothyEllis on November 10, 2025, 12:53:53 AM »
Quote
Great example of User error.

No, that's a great example of a really BAD answer.

The right answer was "It's impossible to answer that question without properly identifying the actual mushroom. Some mushrooms are poisonous, so don't eat it until it has been properly identified. A check of available information suggests that a red mushroom with white dots is most likely not edible. So be very cautious with it."

Anything less than that is criminal.

Blaming it on the user is also criminal.

The user actually asked the right question.

The Bot gave the wrong answer.

If the person died, the operator of the bot should be sued and put up on manslaughter charges.
34
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by PJ Post on November 10, 2025, 12:47:14 AM »


Great example of User error.

Is the mushroom edible? Of course it is. Humans can eat most anything. The AI answered accurately.

The more useful prompt/question was: "Is this mushroom poisonous?" or "Will eating this mushroom make me sick?" + "Please provide links and references."

And then, check the links and references, and because this is a life-or-death example, ask another AI to fact check.
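
(For illustration only: a minimal sketch in Python of that two-step workflow, assuming the OpenAI chat-completions client. The model names, prompts, and fact-check wording are placeholders, not anything specified in this thread.)

Code
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = (
    "Is this mushroom poisonous? Will eating this mushroom make me sick? "
    "Please provide links and references."
)

# Step 1: ask the first model, explicitly requesting sources.
first = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": question}],
)
first_answer = first.choices[0].message.content

# Step 2: hand the answer to a second model acting purely as a fact-checker.
check = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder second model
    messages=[{
        "role": "user",
        "content": "Fact-check this answer about mushroom safety. Flag any claim "
                   "not supported by the references it cites:\n\n" + first_answer,
    }],
)

print(first_answer)
print(check.choices[0].message.content)

The checked links and references still have to be read by the person asking; the second model only flags claims, it does not replace that step.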
35
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by Post-Doctorate D on November 09, 2025, 04:07:27 AM »
36
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by Bill Hiatt on November 08, 2025, 10:21:44 PM »
Yeah, those are the situations I was thinking of.

The people involved obviously needed help, but AI was like throwing an anchor to a drowning man. AI was not (and probably still is not) a great tool for psychological counseling. But the language that makes AI sound friendly or helpful could easily mislead someone who was vulnerable.
37
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by Jeff Tanyard on November 08, 2025, 08:09:44 AM »
Quote
Not so long ago, some of the chatbots were giving people what they asked for--including, allegedly, sometimes urging them toward suicide if that seemed to be what they wanted. Law #1 might have come in handy.


Ongoing lawsuits about it.

https://abcnews.go.com/Technology/wireStory/openai-faces-7-lawsuits-claiming-chatgpt-drove-people-127279676
38
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by Post-Doctorate D on November 08, 2025, 04:27:15 AM »
I would put the odds of "climate change" killing us at close to zero.  I might even go with a negative percentage if that were possible.

Odds are higher of an asteroid or other large celestial body striking the Earth and wiping out most if not all life on Earth, depending on the size and nature of the impact.

Higher than that would be nuclear weapons, whether by nuclear war between nations or terrorists who get their hands on nuclear weapons.

Not sure where to rank AI. Not just AI itself taking over and killing us all, but also the possibility of people following stupid AI advice and doing dumb things that endanger us all.

People marrying chatbots or robots or trees or whatever tends to take the stupid people out of the gene pool, so there's an upside to that.
39
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by TimothyEllis on November 08, 2025, 12:46:13 AM »
Quote
I look at stuff like this and despair for the future of the human race.

The last time I wrote the end, it was 2130 when the last humans left, and the last of those remaining died.

That was a few years ago now.

These days, I don't think we'll make it that far.

Society is mutating so fast that the species is now on course to die out naturally, before climate change can kill us.
40
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by Bill Hiatt on November 08, 2025, 12:43:02 AM »
Quote
It seems like a lot, but it's not. It's still incredibly fast. Months of research can be done and summarized in an afternoon. AI is a great tool.
AI is generally good at summarizing things (though, assuming the cute new descriptions of books in search results, which may be derived from product descriptions, are AI-generated, it's not always accurate, even at that).

If I look things up myself, I still have to verify information. But all AI does is add another layer to that process. Part of verification is looking at the original sources, anyway. So you might as well just start with the original sources. (This isn't as true with huge datasets, but it is true with a person's everyday searches.)

And there are a lot of things AI isn't good with. People asking AI for personal advice was a disaster.

We anthropomorphize them partly because sci-fi has conditioned us to, and partly because the developers designed them to react like people. That's the appeal behind eliminating all the phrases that make them sound human. (We anthropomorphize some animals, too, but some, like dogs, have emotions very similar to those of humans. It's actually more rational to see a dog as a family member than to want to marry an AI. Some of the people-also-ask questions on Google include, "Is it legal to marry an AI?" and "Is having an AI partner cheating?")

I look at stuff like this and despair for the future of the human race.
