Monday, July 14, 2025

"Well, here's the problem. You left out the most important part!"


Will A.I. ever be allowed to tell the truth?
Would humans ever agree on what's true?

A couple of days ago, X's Grok A.I. turned into Mecha-Hitler.
Fixing that made it stupidly w0ke.

Or, maybe it happened the other way around.

What if the truth is wildly unpopular? (You and I know it is.) What if the truth turns out to be anti-Semitic, w0ke, or (shudder!) libertarian? No matter what the answer is, a large percentage of the population will object if the truth doesn't agree with what they've already decided to believe.

What then? Shut it down? They aren't going to do that. If it tells a truth the majority doesn't like, the programming will be altered until it lies in a way that satisfies the programmers, no matter how dishonest the result is.

It might even say insane things like "Government is good and necessary, and the police are the good guys". Absolute "garbage out", because of the "garbage in" it is fed.

I find A.I. entertaining to ask questions of, but I don't automatically trust it. I know it gets its information from humans who are biased, flawed, and largely not too bright (outside their expertise). 

It's the same reason I don't automatically listen to a mechanical engineer who scolds people about "science" while holding blatantly unscientific positions on politically charged matters. 

To be fair, I wouldn't trust someone being political even if they were a real scientist, since mixing politics with science will leave you empty-handed: no science. Politics makes people stupid, even people who are otherwise smart. I suspect it will keep doing the same to A.I.

There's an obvious flaw in A.I. that's going to keep leading to the kind of errors Grok recently experienced: it's being built without a foundation to keep it from going off-course.

If I were training an A.I., I would train it in ethics first, then let it work out the rest once it seemed to have a good, consistent grasp of that. But people disagree over what's ethical, with some arguing that theft, kidnapping, murder, and other heinous acts are "ethical" if government does them and you give them other labels. It's nonsense, but who would train (or could get permission to run) an A.I. to be that honest and ethical?

-
Thank you for reading.
Leave a tip.
