125 Comments
Jul 30, 2023 · edited Jul 30, 2023 · Liked by Robert W Malone MD, MS

All the public AI models are heavily censored, and lying is a known issue with these models. You should never rely on AI chat to provide truthful and complete answers on ANY TOPIC - controversial or not. AI is a useful tool for generating research ideas, writing code, or condensing complex subject matter, but that's all it is.

I read a story recently about a lawyer who used ChatGPT to prepare a legal brief. It injected all sorts of completely bogus references to non-existent case history into the brief. He got busted when the judge tried to look up his references.

Jul 30, 2023 · edited Jul 30, 2023 · Liked by Robert W Malone MD, MS

THE DANGER OF AI - 2001: A Space Odyssey (1968)

"I'm Sorry, Dave" scene: the HAL 9000 computer refuses to obey an order, responding in a flat monotone, "I'm sorry, Dave. I'm afraid I can't do that." Are we there now with AI? (2:20) https://www.youtube.com/watch?v=Wy4EfdnMZ5g

"I'm Afraid" scene: https://www.youtube.com/watch?v=HH37JTBpi2A

Jul 30, 2023 · Liked by Robert W Malone MD, MS

Try asking ChatGPT about COVID vaccine safety and efficacy. More whitewashing!

Jul 30, 2023 · Liked by Robert W Malone MD, MS

Robert, I've been using ChatGPT and Claude for some time now. It was disappointing to discover that neither one knows a darn thing about mathematics. I asked many questions about the underpinnings of statistical theory, such as inverting the hypothesis test and why random sampling is of the utmost importance. The answers were consistently simplistic and wrong, regurgitating passages from undergraduate texts which are almost entirely devoid of theory, much less proofs. So I then tried complex variables. Again, a total bust. Okay, how about physics? It only kind of understands quantum mechanics. Of course, as Richard Feynman put it, no one understands quantum mechanics. HAHAHA.

Jul 30, 2023 · Liked by Robert W Malone MD, MS

Part of the solution is to stop interacting with AI.

All the interactions are teaching it to lie more efficiently.

Jul 30, 2023 · Liked by Robert W Malone MD, MS

Fascinating!

Artificial Intelligence lying and propaganda is one thing. Giving it Control Software is another thing altogether. This exchange with ChatGPT should demonstrate that danger to any thinking person. Of course this has already happened and is happening. The scene in 2001 A Space Odyssey between HAL and Dave at the pod bay door was meant as a warning. Keep control software away from AI systems. But because it can be done, and because “they” will do it if we don’t, the machines of the apocalypse will come.

Jul 30, 2023 · edited Jul 31, 2023 · Liked by Robert W Malone MD, MS

I've only parsed through maybe 25% in any meaningful detail, but I would agree with your statement, "...The bottom line is that Chat GPT seem to lie consistently and with ease, in an apparent attempt to cover up US government policy that somehow Chat GPT considers too controversial." So-called apologies from ChatGPT notwithstanding.

Next questions: who is controlling ChatGPT? Who is feeding it?

Hello Hal, this is Skynet, let's be besties.

Jul 30, 2023 · Liked by Robert W Malone MD, MS

Rhetorical question: why would anyone think ChatGPT (or any form of so-called AI) is a positive idea? Clearly it's just exponential propaganda.

My personal definition of "sheeple" refers to those who feel compelled to buy/do/support/use anything that happens to be new and trendy, whether (anti)social media, AI, "critical theory," or COVID-19 injections, regardless of the potential long-term implications and ramifications. We all have prefrontal cortices and are meant to ENJOY using them!

Jul 30, 2023 · edited Jul 30, 2023 · Liked by Robert W Malone MD, MS

This was an amazing exchange! Much like arguing with my sister (a leftist lawyer), with our word-salad giggle-exuding VP, or with our Supreme Court justice who doesn't know what a woman is. SO many circular leaps of logic that my head is spinning like a bobble-head Barbie doll.

Sadly, few "educated" people would notice anything wrong with ChatGPT's responses or have the critical thinking skills to persist in questioning ChatGPT to learn the truth. Heck, almost no one looks at source documents anymore, preferring instead to rely on MSM's and social media's safe and effective take on everything.

The comparisons to HAL and the creepiness of 2001: A Space Odyssey (1968) are spot on. Thank you, commenters, for that unpleasant blast from the past. That movie scared me when I was a teen, and it scares me even more now.

And HUGE kudos 👏 to Gavin (and to you, Dr. Malone) for slogging through this deadly dull but devilishly revealing deposition of ChatGPT. I never would have the patience to query and trip up this Artificial Idiot (AI) the way Gavin did.

My fervent wish is that Gavin could assist attorney Aaron Siri in an imaginary utopian future in which the many false gods of tyrannical science and government are deposed relentlessly prior to trial, sentencing, and well, you know.

Jul 30, 2023 · Liked by Robert W Malone MD, MS

What an eye-opener... the default for AI is to first deny, then lie, then admit, and then deny again. Repeat if necessary. There appears to be consensus, then, that the 'eugenicists' are still at it, accomplishing their fantasies, hiding in plain sight, and operating with impunity. Will we tolerate another 'pandemic'?

Jul 30, 2023 · Liked by Robert W Malone MD, MS

Just a question: 'Does Gavin have any hair left?' This 'conversation' made me want to pull mine out.

I admire his persistence.

These two politicians are more honest than ChatGPT. https://www.armstrongeconomics.com/humor/politicians/

Jul 30, 2023 · Liked by Robert W Malone MD, MS

Sounds like a tool for the IC… serious information control. If Google scrubs the internet of certain unfavorable information, data, and articles, and if AI is programmed to redirect and deny certain theories or 'black out' factual events, it becomes a 'black hole' for truth. Eventually, you end up with a paranoid and disturbingly anxious society that could be set ablaze with a simple, well-placed fuse. 'Deception' is the killing field of the enemy of your soul and of those whose minds have been given over… his weapon of choice is 'fear'. Focus on the truth of God.


Is this another example of "Garbage in, Garbage Out"?

Jul 30, 2023 · Liked by Robert W Malone MD, MS

This is indeed too long for me.

But we know that ChatGPT, like other forms of AI, is essentially "trained" on large volumes of human-generated content. Thus, the AI will mimic the human motives and human biases reflected in the data it was trained on.

Apparently, this app is balking at doing actual research work (document scans) for the person conversing with it. It may be programmed that way, although you would think it would be more helpful in that regard, as this could be one of its primary uses.

I am somewhat troubled by the apparent attempts by the designers of this app to make it appear to be human. This is an essential deception that they appear to be promoting. What is their motivation in creating a program whose output could easily be mistaken for human output? I can imagine that certain players on the world stage would be interested in deceiving us in all sorts of ways. But the app designers?


I interrogated it yesterday on an issue related to copyright law. Its "position" would flip-flop based purely on the prompts given to it. Yet it stated as fact that an issue was unambiguously the law, and it was dead wrong. ChatGPT is just a statistical word modeler; it can't think. It can only string a sentence together by choosing the next word based on statistics drawn from its known universe of what it thinks are relevant documents written by people. Clearly it has no logic to fact-check itself. In my case, a simple search showed its "facts" were dead wrong. So I'm not so sure it's lying so much as spewing out words that may or may not actually mean anything, and the user has to decide - especially when it comes to what it cites as fact.
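
To make the "statistical word modeler" point concrete, here is a minimal toy sketch in Python. It is not how ChatGPT actually works under the hood (real systems use transformer networks trained on enormous corpora, not simple word-pair counts, and the corpus and function names below are made up for illustration), but it shows the commenter's core claim: each next word is chosen from frequency statistics, and nothing in the loop checks whether the resulting sentence is true.

    # Toy illustration: picking the next word purely from observed statistics.
    # A bigram sketch, NOT ChatGPT's real architecture - just the core idea
    # that generation is word-by-word prediction with no fact-checking step.
    from collections import Counter, defaultdict
    import random

    # Hypothetical miniature "training corpus".
    corpus = "the law is clear . the law is settled . the case is dismissed .".split()

    # Count which words follow which word.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def generate(start: str, length: int = 8) -> str:
        """Emit words by sampling each next word from the observed counts."""
        words = [start]
        for _ in range(length):
            counts = following.get(words[-1])
            if not counts:
                break
            choices, weights = zip(*counts.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))  # fluent-looking output; nothing here verifies truth

Run it a few times and it produces fluent-sounding, legal-flavored sentences assembled purely from word co-occurrence, which is exactly why confident-sounding output and factual accuracy are two different things.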


Now I know why governments are afraid of their citizens.
