Will A.I. ever be allowed to tell the truth?
Would humans ever agree on what's true?
A couple of days ago, X's Grok A.I. turned into Mecha-Hitler.
Fixing that made it become stupidly w0ke.
Or, maybe it happened the other way around.
What if the truth is wildly unpopular? (You and I know it is.) What if the truth turns out to be anti-Semitic, w0ke, or (shudder!) libertarian? A large percentage of the population will object no matter what, if the truth doesn't agree with what they've already decided to believe.
What then? Shut it down? They aren't going to do that. If it says the truth is something the majority doesn't like, the programming will be altered until it lies in a way that satisfies the programmers. No matter how dishonest it is.
It might even say insane things like "Government is good and necessary, and the police are the good guys". Absolute "garbage out", because of the "garbage in" it is fed.
I find A.I. entertaining to ask questions of, but I don't automatically trust it. I know it gets its information from humans who are biased, flawed, and largely not too bright (outside their expertise).
It's the same reason I don't automatically listen to a mechanical engineer who scolds people about "science" while holding blatantly unscientific positions on politically charged matters.
To be fair, I wouldn't trust someone being political even if they were a real scientist, since mixing politics with science will leave you empty-handed: no science. Politics makes people stupid, even people who are otherwise smart. I suspect it will continue to do the same for A.I.
There's an obvious flaw with A.I. that's going to keep leading to the kind of errors Grok recently experienced. It's being built without a foundation to keep it from going off-course.
If I were training an A.I. I would train it in ethics first, then let it work out the rest after it seems to have a good consistent grasp of that. But, people disagree over what's ethical, with some arguing that theft, kidnapping, murder, and other heinous acts are "ethical" if government does them and you give them other labels. It's nonsense, but who would train (or could get permission to run) their A.I. to be that honest and ethical?
Kent - You make an interesting assertion about the absence of "foundation" :
"There's an obvious flaw with A.I. that's going to keep leading to the kind of errors Grok recently experienced. It's being built without a foundation to keep it from going off-course."
There is a need for further discussion and debate about what constitutes the proper "foundation" and proper "course" for the development / evolution of AI. Or for the evolution of man / mankind in general, since AI is but an external instantiation of the workings and content of a subset of human minds (its tutors).
You acknowledge that "people disagree over what's ethical" yet you link to a previous essay in which you state that ethics asserts an objective right and wrong. So either people as observers are flawed and cannot perceive what is objectively extant, or the definition is flawed.
In my research into the concept of Liberty over the past two decades I've chased this fractal rabbit down a recursive rabbit-hole many times. In one of his treatises, John Locke confesses that he invented his concept of rights as a 'convenient fiction' to defend property from encroachment. I have no issue with his conclusion, but it's far from an objective basis upon which all men can agree.
I will annoy the objectivists and technologists with my conclusion, at the risk of trivializing the problem with an analogy to SkyNet (Terminator): TURN IT ALL OFF!
We are at a precipice beyond which there is no safe return. All men are inherently flawed because we are finite and temporal creatures. This underscores the danger of delegating power to constructs like government. By extension, all products of men's minds (yes, all, but AI in particular) are flawed.
The difference between AI and the prior technologies of the 18th, 19th, and 20th centuries is the limit on its destructive capacity when used for evil (yes, we need to discuss and debate a definition of evil). Earlier generations of technology can be more readily / easily turned off.
AI ... TURN IT ALL OFF ! Man is not mature enough, and likely will never be mature enough, to play God with truly dangerous machines.
Hans ... in the NC woods
"...you state that ethics asserts an objective right and wrong. So either people as observers are flawed and cannot perceive what is objectively extant, or the definition is flawed."
Oh, I *definitely* think humans are flawed. You can see demonstrated time and again that they CANNOT "perceive what is objectively extant" on a wide variety of topics. Especially if it is (or can be made) political. A discussion I've had with someone over the past day or so about police illustrates that beyond a doubt. And, I'm open to discussions over the definitions of "ethics" and "evil" I use.
"AI ... TURN IT ALL OFF !"
No disagreement from me at all. I just don't think there's anyone with the power to do so who will. Even if all but one did turn it off, it would only take one to leave it on and keep developing it to create havoc. Interesting times are ahead.
"Define 'interesting'." ~ Capt. Malcolm Reynolds