How patients are turning to AI chatbots to fight back against the broken $5 trillion healthcare system

Credit: Kate Taylor, Fast Company

On July 29, 2025, at 9:45 a.m., Christine Ressy was supposed to be undergoing surgery to remove kidney stones. Instead, Ressy, a 49-year-old hairdresser in New York City, found herself holding back tears in the waiting room of a Manhattan hospital. She had been told that unless she paid half of her $10,933 bill before surgery, her doctor could not operate.

Because Ressy was uninsured, she had hoped to receive a cash-pay discount or find some other way to negotiate costs. She wanted to see an itemized receipt after her surgery before paying up, and had prepared a $500 cash deposit. She had done all this on the advice of her most trusted advocate: ChatGPT. 

Ressy’s conversations with ChatGPT about the cost of her surgery spanned more than 28,000 words. The platform assured her that she was allowed to push back against medical cost estimates, offering scripts for phone calls and email drafts to send billing departments. In the hospital, Ressy messaged ChatGPT again. “I’m crying beyond tears,” she wrote. She was willing to pay, but did not want to do so upfront. The staff is “pressuring me,” she said. What should she do?  

“You are not the problem here,” ChatGPT responded, sending Ressy a yellow heart emoji. Ressy was simply a “patient asking to be treated fairly,” the AI platform said. “They are pressuring you at your most vulnerable—and that is wrong.”

Ressy went to the check-in desk and repeated a new ChatGPT script: This time, she wanted documentation that she had, as instructed, arrived two hours early for surgery, had offered a good-faith deposit, and that the hospital would not be admitting her. One of the medical billers overheard Ressy, then mentioned the phrase "charity care." Ressy had previously been told on the phone that she made too much to qualify for any financial assistance. Now, the biller brought Ressy to the billing office and gave her a document to sign. Two hours after her scheduled appointment, Ressy went into surgery.

Three months later, the only money Ressy has paid is the $500 she brought as a deposit. She never received a bill for her surgery, and she is currently negotiating the cost of her anesthesia. “I didn’t know I had any of these options,” Ressy tells Fast Company. “ChatGPT said it’s legal, it’s necessary, and it’s expected to negotiate—I didn’t know that.” 

Ressy is one of a growing number of people using ChatGPT and other AI tools to untangle the convoluted finances of the American healthcare system. As insurers invest in artificial intelligence, many patients feel the system is increasingly lacking in humanity. A ProPublica report found that Cigna denied 300,000 requests over a two-month period in 2023, with physicians spending an average of 1.2 seconds on each case. Cigna, UnitedHealthcare, and Humana are all facing class-action lawsuits that allege the insurers’ AI models denied patients lifesaving care, with denials that ran counter to doctors’ recommendations. Patients often are informed of these denials in confusing form letters that leave patients scrambling mere days—or even hours—before scheduled treatments. 

Now, thousands of patients are using platforms to appeal rejected claims, according to Alicia Graham, the CEO of AI startup Claimable. Others, like Ressy, are asking for scripts to help them negotiate the cost of care. 

Jessica Cunningham, a mother of four and content creator in Southern California, tells Fast Company she runs all of her family's hospital bills through ChatGPT to make sure they are not being overcharged. In a confounding system, seemingly controlled by bots and byzantine policies, AI can feel like a lifeline. "It makes me feel like I have the smartest person in the world looking out for me," Cunningham says.

“They don’t know what to do next”

With its opaque pricing and convoluted policies, it’s easy to feel confused by the American healthcare system. A Gallup poll found that just 17% of Americans are aware of the cost of healthcare procedures before receiving care. Trying to navigate the medical system is an exhausting process, with more than 80% of patients and caregivers telling the Patient Insight Institute that they spent five or more hours a week on administrative tasks. Eighteen percent said they spent “too many hours to count.” Then there is the financial burden: according to a 2022 survey, four in ten Americans are in medical debt. 

Erin Bradshaw, the executive vice president of the Patient Advocacy Foundation, says that by the time people reach out to the nonprofit, they are already overwhelmed. Most are not aware that many hospitals are open to negotiating costs, nor that hospitals have charitable or discount programs. Even if patients are aware of these options, few know who to contact or what to say.  

“Often the barrier is they don’t even know what to do next, because you’re dealing with a health crisis to begin with,” Bradshaw says. 

Decoding hospitals' and insurers' policies can feel like trying to read another language. One of the most powerful aspects of AI platforms is their ability to analyze vast amounts of text nearly instantaneously, with ChatGPT reading hundreds of pages in just a few seconds. Often, people simply surrender when the process becomes too overwhelming. If AI platforms can provide support for patients—even if it's just by scanning documents and suggesting questions to ask—they can be a great tool for self-advocacy, Bradshaw says.

At the same time, Bradshaw and other healthcare experts caution against relying solely on AI. Part of their caution is due to privacy concerns. Artificial intelligence provides better results with more information, so if you upload your bills and medical records, you will likely get more fine-tuned responses. However, this information does not necessarily remain private, as most AI platforms save and collect user data. It's a stark departure from the privacy-obsessed world of medicine, where the Health Insurance Portability and Accountability Act (HIPAA) demands strict protection of sensitive health information.

Also complicating matters is AI platforms' tendency to offer up occasionally inaccurate information. Different states and healthcare systems have vastly different policies. What works in one situation might not apply in another, no matter what ChatGPT says. And sometimes, AI platforms are just straight-up wrong. Earlier this year, for example, CNN reported that the FDA's AI platform was making officials' jobs more difficult by misrepresenting research and hallucinating nonexistent studies.

That does not mean patients should avoid AI altogether. They just need to check its sources, ensuring the original documents actually support platforms’ statements. Alternatively, experts advise seeking out platforms trained specifically to answer these types of questions. 

Courage to take action

In January, the Marshall Allen Project launched the “Marshall Allen Clone,” or MAC. The journalist Marshall Allen, author of “Never Pay the First Bill (And Other Ways to Fight The Healthcare System and Win),” spent his career publishing investigations that helped patients better navigate the healthcare system.

After Allen died unexpectedly in 2024, the Marshall Allen Project built MAC, an AI tool trained on Allen’s reporting. The free platform offers personalized answers to people like Ressy struggling to negotiate costs or untangle their options. 

“The general AI does a really good job of giving people a great starting point,” Andrew Gordon, a healthcare researcher who volunteers with the Marshall Allen Project, tells Fast Company. What sets the MAC apart is its training on the intricacies of the system. When a patient is advocating for themselves for the first time, Gordon says, feeling secure in the accuracy of this advice can be especially powerful. 

“It’s a North Star, it’s confidence, and it’s courage to take action,” Gordon adds.  

Other organizations are building even more specific AI tools. Claimable, a startup that launched in 2024 and one of Fast Company's 2025 World Changing Ideas, uses AI to generate and submit appeals for patients who have been denied healthcare coverage. The startup is a seed-stage company with investors including Walkabout Ventures and Quiet Capital. In less than a year, Claimable has recovered nearly $20 million for patients.

Cofounder Alicia Graham tells Fast Company she was drawn to the idea after finding out that up to 99% of people whose claims are denied never file an appeal. Yet when patients do push back against these denials, a sizable portion—up to 80%—win, allowing access to treatments previously out of financial reach.

To use Claimable, which costs $39.95 per appeal, patients upload their medical and insurance information and answer a handful of questions. (Unlike most AI platforms, Claimable privately protects this information, in compliance with HIPAA.) The platform generates an appeal, drawing on specific insurance policies, local legislation, and relevant medical research. This kind of tedious work can take hours. Claimable creates a letter in minutes, then submits the appeal to the necessary parties. 

Michael Henry was one of the many patients who did not realize he could appeal rejected claims until he heard about Claimable. Henry, a chief of human resources in Battle Creek, Michigan, had started rationing his GLP-1 shots in late 2024 when Blue Cross Blue Shield Michigan announced it would no longer cover medications such as Saxenda, Wegovy, and Zepbound. Henry tried another weight-loss drug, but it did not work. He did not want to pay $1,200 a month. So, instead of injecting himself weekly, Henry—who was previously diagnosed as prediabetic—cut his shots to every other week, rationing out his remaining medication. 

By July, Henry was almost out. He was listening to an episode of "On the Pen," a podcast about GLP-1s, featuring an interview with Zach Veigulis, another Claimable cofounder. Henry collected his documents and filled out Claimable's questionnaire. The next day, he got a call from his doctor's office. He had been approved. Henry picked up his medication later the same day.

The United States is still in the early days of patients using AI to navigate the $5 trillion healthcare system. AI is not always the solution. Not every appeal is approved and not every attempt at negotiation succeeds. Artificial intelligence does not address patients’ fundamental concerns about the healthcare system, from its opaque pricing to confusion and suspicion around denials. 

Americans on all sides of the negotiation seem ready to let AI take the reins when it comes to healthcare. Disturbingly, this could mean that artificial intelligence platforms working with insurers will be financially incentivized to deny patients’ claims. A pilot program that is set to launch in January, for example, will use AI platforms to review prior authorizations of treatments. The platforms will be paid a share of the money saved by rejecting treatment. 

The ideal future of health insurance would be a system free from concerns of systemic bias, or at least one that does not require superhuman computing capabilities to understand. But as insurers implement new technology, AI can at least offer patients a new tool—and a new confidence—to push back against a system that leaves many feeling powerless. 

"The more people that appeal, the better," Claimable cofounder Graham says. "The more people challenge—if they feel they've been unjustly denied—the better for everyone."
