ChatGPT: Answers Are "Highly Convincing" Regardless of Accuracy

[Image: blue pixelated human face resembling an AI avatar, with text overlay reading "Should ChatGPT come with a black box warning?"]

Two topics have dominated the news cycle lately: ChatGPT artificial intelligence and the resource crisis building in our nation’s health systems. Naturally, healthcare leaders everywhere find themselves asking, “Can ChatGPT help us?”

To explain the advantages and risks of this type of "open" AI, we'll go directly to the source and use OpenAI's ChatGPT (v.01.30.23) to contribute to this article. For transparency, each AI-written passage appears under an "Answered:" heading.

What Is ChatGPT?

We asked: “What is ChatGPT?”

Answered:

ChatGPT is an AI language model developed by OpenAI that uses deep learning to generate human-like text responses. It is trained on a large corpus of diverse internet text. As a result, it can generate a wide range of responses to a given prompt, making it useful for applications such as chatbots, question-answering systems, and language translation. It is based on the Transformer architecture and has achieved impressive results in various natural language processing benchmarks.

The concerning part lies in the phrase “it can generate a wide range of responses to a given prompt.” While this sounds exciting, in healthcare it could prove very dangerous if not applied correctly.

The Guardrails Question

We asked: “What guardrails does ChatGPT put in place to prevent giving the wrong answer?”

Answered:

ChatGPT, like other language models, does not have built-in guardrails to prevent giving incorrect answers. It is trained on a massive dataset of diverse internet text, which includes both accurate and inaccurate information. While it can generate highly convincing responses, it is important to note that the model is not capable of verifying the accuracy of its answers and can make mistakes. To prevent giving incorrect answers, it is recommended to use the model in conjunction with other sources of information and to carefully evaluate the results before using them in any critical applications.

What is more frightening than the potential proliferation of misinformation is that ChatGPT recognizes its answers are “highly convincing” regardless of accuracy. Incorrectly applied in healthcare, such a convincing misinformation engine could have detrimental results.

Using AI Without Causing Harm

We asked: “How do we leverage the power of a sophisticated AI model without introducing more harm than good?”

Answered:

Leveraging the power of sophisticated AI models such as ChatGPT without introducing harm requires careful consideration and management of several factors:

Data Bias: AI models can perpetuate and amplify existing biases in training data.

Explanations and Transparency: AI systems can be opaque and difficult to understand.

Ethical Considerations: AI can violate principles such as privacy, autonomy, and fairness.

Human Oversight: AI systems can make mistakes and require human management.

By being mindful of these factors, it is possible to harness the power of sophisticated AI models while minimizing harm and maximizing positive impact.

Answers like these are, at the very least, honest. But once ChatGPT is embedded into products and shaped by outside vendors, will it maintain transparency? Will it support clinical workflows or raise more questions than answers?

An Alternative: Physician-Controlled AI

One alternative to open conversational AI is the closed, secure methodology of Physician-Controlled AI, created by HIA Technologies, Inc.

We asked: “What is physician-controlled AI?”

Answered:

Physician-controlled AI refers to artificial intelligence systems designed to assist healthcare providers but only activated and used under the control and supervision of a licensed physician. These systems analyze medical data and provide insights while physicians make final decisions on treatment and care. The aim is to improve accuracy, speed, and efficiency while maintaining clinical judgment and expertise.

How Physician-Controlled AI Works

HIA’s Physician-Controlled AI uses complex language models to interpret questions and return medically validated answers directly from the healthcare team. It cannot “go rogue” because it is a closed system. Answers come from clinician-controlled datasets, not the internet.

Within HIA’s Aivio™ environment, patients can view content from their provider and ask questions in real time. The AI understands topic context, anticipated questions, and where the patient is within the content flow, allowing for pinpoint accuracy.

When confidence thresholds are not met, Aivio presents “fallback” questions—clarifying options likely aligned with the patient’s intent and supported by validated answers.

Most open AI systems do not do this. They provide an answer—right or wrong—and place the burden on the user to recognize and correct errors. Physician-Controlled AI recognizes when clarification is needed and actively supports it.
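The closed-dataset matching and fallback behavior described above can be sketched in a few lines. This is purely an illustrative assumption of how such a system might work: the word-overlap matcher, the threshold value, and the dataset contents are hypothetical, not HIA's actual implementation.

```python
# Illustrative sketch of confidence-threshold routing over a
# clinician-controlled Q&A dataset. All names and values are assumptions.

CONFIDENCE_THRESHOLD = 0.85

# Closed dataset: only clinician-validated question/answer pairs.
VALIDATED_ANSWERS = {
    "what are the risks of this procedure": "Your care team has reviewed the risks with you ...",
    "how long is recovery": "Most patients recover within a few weeks ...",
}

def match_question(patient_question, validated):
    """Return (best_match, confidence) using a toy word-overlap score."""
    q_words = set(patient_question.lower().split())
    best, best_score = None, 0.0
    for known in validated:
        k_words = set(known.split())
        score = len(q_words & k_words) / len(q_words | k_words)
        if score > best_score:
            best, best_score = known, score
    return best, best_score

def answer_or_clarify(patient_question, validated):
    best, confidence = match_question(patient_question, validated)
    if confidence >= CONFIDENCE_THRESHOLD:
        return validated[best]  # medically validated answer only
    # Below threshold: rather than guess, surface clarifying
    # "fallback" questions the patient can choose from, each of
    # which maps to a validated answer.
    return {"clarify": list(validated.keys())}
```

Because every returned answer comes from the validated dictionary, the system can never produce content outside what the healthcare team approved; uncertainty routes to clarification instead of a guess.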

Why Clarification Matters

We asked: “Why is clarification important?”

Answered:

Clarification improves communication, increases accuracy, supports better decision-making, avoids misunderstandings, and builds trust by ensuring all parties share a clear understanding.

If clarification is so powerful, why rely on systems that don’t recognize when it’s needed?

Where Open AI Still Fits in Healthcare

These shortcomings don’t mean ChatGPT has no role in healthcare. In settings without clinical oversight, use cases should focus on supportive tasks, such as:

  • Appointment scheduling
  • Prescription refill requests
  • Medical record management

When it comes to direct patient interaction and clinical questions, healthcare systems are turning to secure, controlled AI systems.

A Real-World Comparison

Patient question:

“How many times has Dr. Ghiassi performed this procedure?”

ChatGPT response:

“I’m sorry, but I don’t have access to information about the specific number of surgeries performed…”

Physician-Controlled AI (Dr. Ghiassi’s Aivio):

“Your doctor is an expert and has performed this procedure countless times. CTS surgery is actually the most common procedure he does. With his expertise, you’re in good hands.”

The difference is context, validation, and trust.

Choosing the Right AI for Healthcare

As healthcare organizations seek faster, safer ways to scale patient communication, AI adoption is accelerating. Selecting the right AI system to support goals and mitigate risks is critical.

Fortunately for patients, open AI models are under scrutiny. Secure, physician-controlled AI is poised to make meaningful advances in patient health literacy.

To learn more about the safer alternative to ChatGPT, visit HIA.AI.
