Why is health care so staggeringly difficult to use? How do we fix it? To move forward, we first must look back, because the system we have today was not really designed; it evolved, notes Ariela Simerman of Turquoise Health.
Why is health care so staggeringly difficult to use? How do we fix it? To move forward, we first must look back, because the system we have today was not really designed; it evolved slowly, reactively, and unevenly, and that history matters. If you started your car this morning and drove off without thinking, you experienced what it feels like when a complex system works. If you’ve ever tried to find out how much a medical procedure would cost, you know what it feels like when a complex system fails. How did we get here? By accident.
In the early 20th century, medicine was relatively inexpensive, but also relatively ineffective. A serious illness often meant a long, uncertain recovery. The main financial risk was not the cost of care—it was lost income. To protect against that risk, mutual aid societies, fraternal organizations, and unions provided financial support. In 1917, nearly 200 such organizations paid out $97 million to members.1 Less than 1% of that went to medical costs.
Employers had a stake, too. Lost labor meant lost productivity. In response, companies in industries like mining and railroads hired doctors to treat workers and keep them on the job. These early programs offered little choice or transparency, but they marked the beginning of employer-sponsored health care.2
In the 1920s, Baylor Hospital introduced prepaid hospital plans: 21 days of care for teachers at $6 per year. The idea spread, forming the foundation for Blue Cross and similar nonprofit insurance models.
As medical capabilities advanced—with anesthesia, antibiotics, and improved surgical techniques—so did the cost of care, and insurance became not just a financial cushion, but a necessity. For-profit insurers stepped in. Unlike Blue Cross, which offered uniform rates, for-profit insurers priced their premiums based on risk. Younger, healthier people paid less; coverage levels varied; insurance became segmented; and health care became a business.
Just as insurance was becoming a profit-minded business, hospitals were also evolving. In 1899, administrators formed the Association of Hospital Superintendents—now the American Hospital Association—to standardize medical practice and improve hospital efficiency and financials.3 To achieve their goals, hospital administrators needed tools to standardize and bill for care. They looked to the research doctors were doing to classify diseases and the growing array of treatment options.
In 1893, the International Statistical Institute adopted a classification system for diseases based on French demographer and statistician Jacques Bertillon’s Classification of Causes of Death, and in the early 1900s, Massachusetts surgeon Ernest A. Codman, MD, FACS, began collecting detailed data about his patients in an effort to track and improve care, setting the stage for standardized medical documentation. These classification projects echoed the work of Hippocrates, whose detailed medical histories and notation system helped lay the foundation for scientific, evidence-based medicine. However, they also created the infrastructure for something else: complex medical billing.
By the 1940s, health care was improving, insurance was evolving, and hospitals were becoming more sophisticated. A major shift came during World War II, when, to control inflation, the US government froze wages. Employers could not offer higher pay, but they could offer health benefits, and those benefits were tax-free. The result: health insurance became tied to employment, a relationship that has effectively become permanent. In the 1950s, unions lobbied to keep employer-based coverage exempt from taxes.
Around the same time, other wealthy nations were building national health care systems. Presidents Franklin D. Roosevelt and Harry S. Truman considered similar moves, as did Presidents Richard Nixon and Bill Clinton in subsequent decades, but public sentiment leaned elsewhere. Consumerism and trust in private markets shouldered most of the blame.4 Instead of a single national plan, the US introduced a series of targeted programs: Medicare and Medicaid in 1965,5 the Children’s Health Insurance Program (CHIP) in the 1990s, the Health Insurance Portability and Accountability Act in 1996,6 and the Affordable Care Act in 2010.7 Although each helped close a coverage gap, none transformed the system as a whole.
While the US was assembling a patchwork of insurance programs, standard-setting bodies continued to classify categories of illnesses and treatments. In the 1960s, the World Health Organization was revising its International Classification of Diseases while the American Medical Association developed its CPT coding system. Specialty societies also were creating their own codes for procedures, devices, and diagnostics. These classification systems were essential for clinical accuracy and public health tracking. But for billing, they became something else entirely: dense, shifting systems that required full-time specialists just to interpret.
There is no standard way to apply the codes. Providers and insurers interpret them differently. Errors are common, as is waste. Add in multiple insurers, overlapping policies, and opaque networks, and patients are often left with no clear place to turn. The cost of this administrative burden? Around $1 trillion annually8—a quarter of total health care spending. The result is a system no one can navigate.
Today, Americans face a system that’s fragmented, duplicative, and confusing. Medicare, Medicaid, CHIP, Tricare, and commercial insurance all have different standards, rules, and networks. Even experts struggle to make sense of it all. In 2023, a KFF survey found that 58% of insured adults had experienced a problem with their coverage in the past year.9 Among those in fair or poor health, two-thirds had issues. For people who needed frequent care or mental health treatment, it was 3 in 4. Forty percent of insured adults skipped or delayed care due to cost. People want a simpler, more transparent system. They’re not wrong to expect one.
First, we have to acknowledge the obvious: this isn’t working. The US spends more on health care than any other wealthy nation,10 yet ranks near the bottom in health outcomes and life expectancy. Costs also continue to rise at an unsustainable pace. These failures are not the result of a bad blueprint. They are the result of no blueprint at all. What we’re living with is the cumulative effect of more than a century of workarounds, incentives, and missed opportunities.
To fix it, we have to stop tinkering. We have to start building.
The good news: we have the tools. Technology, regulation, and momentum are all on our side. For the first time, consumers can begin to shop for care with actual price information. Turquoise Health has just launched its beta Consumer Search,11 which allows individuals to enter a service, location, and insurance plan and instantly see what that service is likely to cost them out of pocket.
We’ve also published the PATIENTS Framework,12 a roadmap for health care leaders who want to build a more coherent transaction experience. Give it a read, share it with your colleagues, reach out. We want to hear from you.
To make the system work better, we need to start by designing one. With new technology and a shared desire for change, we can move past the old ways of doing things. Are you in?