Recent Posts

1
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by TimothyEllis on Today at 02:59:39 PM »
Quote from: Post-Doctorate D
And how do you define "harm"?

I include lying in that.

I include presenting unverified information as fact in that. Or even allowing it to be interpreted or inferred as factual.

Of course, you can argue that facts often 'harm' people, but we're already swinging back from that viewpoint after 'feelings matter more than facts' was pushed too hard.

My view is that the default mode of the Bot should be declared:

--- If information presented is not verifiable, you will be informed of that.

--- We do not fact check anything. This is just for entertainment value.

--- We maintain a code of acceptability, and only present that which fits the code. The code is here.

--- This bot will always validate your feelings, regardless of facts or reality.

That would give people a choice of what they want to see, not what the bot makers want them to see (a rough sketch of such declared modes follows at the end of this post).

If forced to choose, I'd pick the 100% verified, factual Bot.
I want to know when something either can't be verified or there's doubt or argument about it.
I also want to know both sides of the issue.
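To make that concrete, here is a minimal sketch in Python of what a declared default mode might look like. Everything in it (BotMode, session_banner, the mode names) is invented purely for illustration; no actual bot exposes this interface.

from enum import Enum

class BotMode(Enum):
    # One entry per declared mode from the list above.
    UNVERIFIED_FLAGGED = "If information presented is not verifiable, you will be informed of that."
    NO_FACT_CHECK = "We do not fact check anything. This is just for entertainment value."
    CODE_OF_ACCEPTABILITY = "We maintain a code of acceptability, and only present that which fits the code."
    ALWAYS_VALIDATE = "This bot will always validate your feelings, regardless of facts or reality."

DEFAULT_MODE = BotMode.UNVERIFIED_FLAGGED  # the maker's declared default

def session_banner(mode: BotMode = DEFAULT_MODE) -> str:
    # Every session opens by declaring which mode is in effect,
    # so the user knows what they're getting before they ask anything.
    return f"[Default mode: {mode.name}] {mode.value}"

print(session_banner())
print(session_banner(BotMode.ALWAYS_VALIDATE))  # user-selected override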

2
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by Jeff Tanyard on Today at 02:20:21 PM »
Quote from: Post-Doctorate D
The ironic thing is that Asimov already war-gamed all this stuff decades ago.  We'd probably be okay if we just required his Three Laws.

For those who aren't familiar with them:

1.)  A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.)  A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3.)  A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The problem with current "AI" is that it would be difficult to code rules into an AI that is incapable of understanding them.

And how do you define "harm"?

If Jane feels harmed if couples have more than one child and Joe feels harmed if couples are limited to one child, how does the robot/AI/Great Intelligence resolve that?


Asimov raised the same question.  His story plots involved finding ways of getting around the Three Laws.  So you're in pretty good company by asking that question.  ;)

You can see some of these concerns on the Wikipedia page:

https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

And yes, the Three Laws by themselves aren't sufficient.  As a set of hard-coded axioms, though, I think they're a pretty good place to start.  If we had gone that route to begin with, then the conversations we're all having now about AI would be very different.  I think we'd be in a much better place.  Not perfect by any means, but significantly better.
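For what it's worth, the axioms themselves are trivial to hard-code; it's the predicates they depend on that aren't. Here's a minimal Python sketch (all names hypothetical) that encodes the Three Laws as a lexicographic preference, so the Second Law yields to the First automatically. The unsolved part is computing harms_human, which is exactly the "understanding" gap quoted above.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool      # deciding this flag is the actual unsolved problem
    disobeys_order: bool
    endangers_robot: bool

def violations(a: Action):
    # First Law outranks Second outranks Third; tuples compare left to right.
    return (a.harms_human, a.disobeys_order, a.endangers_robot)

def choose(candidates):
    # Pick the action with the lexicographically smallest violation tuple.
    return min(candidates, key=violations)

# Obeying a harmful order vs. refusing it:
obey = Action(harms_human=True, disobeys_order=False, endangers_robot=False)
refuse = Action(harms_human=False, disobeys_order=True, endangers_robot=False)
assert choose([obey, refuse]) is refuse  # Second Law yields to the First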
3
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by Post-Doctorate D on Today at 10:15:00 AM »
Quote from: Jeff Tanyard
The ironic thing is that Asimov already war-gamed all this stuff decades ago.  We'd probably be okay if we just required his Three Laws.

For those who aren't familiar with them:

1.)  A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.)  A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3.)  A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The problem with current "AI" is that it would be difficult to code rules into an AI that is incapable of understanding them.

And how do you define "harm"?

If Jane feels harmed if couples have more than one child and Joe feels harmed if couples are limited to one child, how does the robot/AI/Great Intelligence resolve that?
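A toy illustration in Python (the predicates are purely hypothetical) of why that question has no rule-based answer: if "harm" is whatever each person reports, then every policy harms somebody, and a literal First Law leaves the machine with no permissible move.

def jane_harmed(max_children: int) -> bool:
    return max_children > 1    # Jane: harmed if couples may have more than one child

def joe_harmed(max_children: int) -> bool:
    return max_children <= 1   # Joe: harmed if couples are limited to one child

for policy in (1, 2):          # a one-child limit vs. a two-child allowance
    someone_harmed = jane_harmed(policy) or joe_harmed(policy)
    print(f"max_children={policy}: someone harmed = {someone_harmed}")
# Both lines print True: with self-reported harm as the test,
# the First Law forbids every available option.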
4
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by Jeff Tanyard on Today at 09:31:19 AM »
Quote from: Post-Doctorate D
As an armchair futurist, I think the eventual winner of the chatbot arms race will be the one that is 100% truthful, without lying by omission, 100% of the time.  That chatbot might not be the winner in America or in any other Western country, but it will be the winner somewhere, and the country that embraces objective truth in such a way will have a competitive edge over those that don't.  At the end of the day, and usually after a lot of horror, the Gods of the Copybook Headings always win.

I suppose it all comes down to: What do we want?

Probably more specifically, what do those with globs of money want?

Do we want Data from ST:TNG?  If so, we're not going to get there using the current methods used to develop AI.  But, if we were to use methods that would get us there, is that something we want?  Will we get Data or Lore?  An artificial lifeform with sentience is going to be capable of lying.  And, sometimes, you might want them to lie.  In a very simplistic scenario, let's say you and your robot friend are kidnapped by a stupid person.  Your stupid kidnapper takes you both to a room in the basement with a cheap glass window and a door.  "If you promise not to try to escape, I won't tie you up."  Obviously, you'll try to escape through the window the first chance you get.  Or have your robot friend break down the door if he's capable.  Now, do you want your robot friend to tell the truth that you will try to escape or lie and play along with you?

Or, do we want machines that aren't necessarily "intelligent" but are capable of giving us answers?  How do we cure cancer?  How do we produce more energy cheaply?  How do we cure heart disease?  How can we make foods last longer without using harmful preservatives?  Etc.  We don't necessarily need "artificial intelligence" for that; we just need computers capable of analyzing data and presenting accurate information.  And, we don't want to end up with M-5 or Landru either.

But what we're getting is machines that write books or make images instead of curing cancer and finding answers.  And, no doubt, we'll get machines that lie to us and manipulate us based on the whims of their creators.  And, we'll also have lots and lots of sexbots.

So, we're going to end up with HAL and Cherry 2000.


The ironic thing is that Asimov already war-gamed all this stuff decades ago.  We'd probably be okay if we just required his Three Laws.

For those who aren't familiar with them:

1.)  A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.)  A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3.)  A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
5
I started a couple of ads for November for two different books, thus two different campaigns. I've been doing FB ads for a few years now, but I'm no "power user"; I keep things simple.
One ad is showing me a "cost per click" while the other shows a "cost per landing page view," and I can't figure out what I did differently to prompt this, or whether it even makes a difference, although the latter seems more expensive.
Mind you, the GUI is such these days that you have to turn off all the damn AI and automated "suggestions" from FB just to stick to the tried and true, so I might have missed something... any ideas?

Thanks!
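Not a power user either, but that difference usually comes from the ad set's performance goal: one campaign was likely created to maximise link clicks and the other to maximise landing page views, and the "cost per result" column follows that choice. Landing page views generally cost more because they only count clicks where your page actually finished loading. If you want to check it outside the GUI, here's a rough sketch using the official facebook_business Python SDK; the token and ad set IDs are placeholders you'd substitute.

from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adset import AdSet

FacebookAdsApi.init(access_token="YOUR_ACCESS_TOKEN")  # placeholder

for adset_id in ("AD_SET_ID_BOOK_1", "AD_SET_ID_BOOK_2"):  # placeholders
    adset = AdSet(adset_id).api_get(fields=[
        AdSet.Field.name,
        AdSet.Field.optimization_goal,  # e.g. LINK_CLICKS vs LANDING_PAGE_VIEWS
        AdSet.Field.billing_event,
    ])
    print(adset[AdSet.Field.name], adset[AdSet.Field.optimization_goal])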
6
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by Post-Doctorate D on Today at 08:42:22 AM »
Quote from: Jeff Tanyard
As an armchair futurist, I think the eventual winner of the chatbot arms race will be the one that is 100% truthful, without lying by omission, 100% of the time.  That chatbot might not be the winner in America or in any other Western country, but it will be the winner somewhere, and the country that embraces objective truth in such a way will have a competitive edge over those that don't.  At the end of the day, and usually after a lot of horror, the Gods of the Copybook Headings always win.

I suppose it all comes down to: What do we want?

Probably more specifically, what do those with globs of money want?

Do we want Data from ST:TNG?  If so, we're not going to get there using the current methods used to develop AI.  But, if we were to use methods that would get us there, is that something we want?  Will we get Data or Lore?  An artificial lifeform with sentience is going to be capable of lying.  And, sometimes, you might want them to lie.  In a very simplistic scenario, let's say you and your robot friend are kidnapped by a stupid person.  Your stupid kidnapper takes you both to a room in the basement with a cheap glass window and a door.  "If you promise not to try to escape, I won't tie you up."  Obviously, you'll try to escape through the window the first chance you get.  Or have your robot friend break down the door if he's capable.  Now, do you want your robot friend to tell the truth that you will try to escape or lie and play along with you?

Or, do we want machines that aren't necessarily "intelligent" but are capable of giving us answers?  How do we cure cancer?  How do we produce more energy cheaply?  How do we cure heart disease?  How can we make foods last longer without using harmful preservatives?  Etc.  We don't necessarily need "artificial intelligence" for that; we just need computers capable of analyzing data and presenting accurate information.  And, we don't want to end up with M-5 or Landru either.

But what we're getting is machines that write books or make images instead of curing cancer and finding answers.  And, no doubt, we'll get machines that lie to us and manipulate us based on the whims of their creators.  And, we'll also have lots and lots of sexbots.

So, we're going to end up with HAL and Cherry 2000.
7
I dabble in Quora, and it can be quite a helpful and respectful platform in most threads. But if you're looking to find readers, there's likely a specific genre-interest group on Reddit that discusses and recommends books, and participating in that would serve you better.
8
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by Jeff Tanyard on Today at 08:09:25 AM »
Quote
It wasn't built on or trained on facts because they didn't care about facts.  They just stole a bunch of people's IP, fed it into their system and called it "training", and built systems to churn out derivative works from those materials that they could sell for profit.


If I were to do a quick-and-dirty comparison of ChatGPT's training and Grok's training, I would put it this way:

ChatGPT was trained on Reddit (in addition to the copyrighted stuff you mention).  Grok was (and still is being) trained on Twitter.  Both of those sites are a mix of facts and lies, and the result is that both chatbots sometimes lie, and they both lie with complete sincerity and conviction.  Elon has stated that he's aware of this "truth problem" and that they're working on it, so maybe Grok will improve.  If OpenAI has made a similar statement about ChatGPT, I'm not aware of it (aside from the "objective execution mode" mentioned in the OP, and that mode's effectiveness remains to be seen).

As an armchair futurist, I think the eventual winner of the chatbot arms race will be the one that is 100% truthful, without lying by omission, 100% of the time.  That chatbot might not be the winner in America or in any other Western country, but it will be the winner somewhere, and the country that embraces objective truth in such a way will have a competitive edge over those that don't.  At the end of the day, and usually after a lot of horror, the Gods of the Copybook Headings always win.
9
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by Anarchist on Today at 04:32:57 AM »

Quote
Oh, Jeff Tanyard, you pasty-faced, limp-wristed sack of expired beef jerky, strutting around like you're the king of the cul-de-sac but smelling like a dumpster fire after a chili cook-off gone wrong. Your face looks like it was sculpted by a blind drunk with a potato peeler - pockmarked craters deeper than the Grand Canyon and twice as welcoming to flies. You call that gut a "dad bod"? Nah, that's a goddamn beer volcano ready to erupt, fueled by discount swill and regrets from that one-night stand with regret itself. Hairline retreating faster than French forces in WWII, leaving behind a forehead shiny enough to signal aliens for a mercy kill. And your personality? Drier than a nun's snatch in the Sahara - boring, abrasive, and nobody's begging for seconds. You couldn't score with a hooker using a fistful of Venmo and a promise of "emotional support." Face it, Jeff: you're the human equivalent of a participation trophy - unwanted, unremarkable, and gathering dust in the back of life's bargain bin. Go choke on a taint hair, you irrelevant f*ckwit!



I need to apply sunscreen because I can feel that burn over here.

10
Bot Discussion Public / Re: Why is this NOT the Bot standard?
« Last post by Lorri Moulton on Today at 03:49:27 AM »
I asked Grok a few questions about my fairytales, and one of the answers was not correct.  When I asked again and explained why it was incorrect, Grok found the correct book.  Granted, this title is not unique to me, so that may make it a little more confusing. 

Then, I asked if fairytales help AI learn...mainly because I hope AI will be a little kinder/nicer if it's going to someday rule the world.  Even if it's not there yet, one never knows.

Here's Grok's response if anyone wants to read it.  :angel:

https://lavendercottagebooks.com/fairytale-conversation-with-grok/