Do Wellness Outcomes Reports Systematically and Dramatically Overstate Savings?

Al Lewis wears multiple hats, both professionally and to cover his bald spot. As founder of Quizzify, he has married his extensive background in trivia with his 30 years' experience in healthcare to create an engaging, educational, fully guaranteed and validated question-and-answer game to teach employees how to spend their money and your money wisely. As an author, his critically acclaimed, category-bestselling Why Nobody Believes the Numbers, which exposes the innumeracy of the wellness field, was named healthcare book of the year by Forbes. As a consultant, he is widely acclaimed for his expertise in population health outcomes, and is credited by search engines with inventing disease management. As a validator of outcomes, he consults to the Validation Institute, part of an Intel-GE joint venture.
The vast majority of “studies” in the field of workplace wellness and related employee health services compare participants to nonparticipants, and show substantial savings in the former vs the latter. They invariably attribute the savings among the participants to the program (a “program effect”) rather than to the likely much higher level of motivation among participants to succeed in any program (the “participation effect”).
 
For example, wellness promoters claim that if you divide a company into employees who want to lose weight vs employees who don’t, the difference in weight loss between the 2 groups is due to the program, not the relative difference in motivation to lose weight. (And, further, that people who start in the motivated category but drop out shouldn’t count at all.)
 
If the “participation effect” is indeed present and substantial, savings from all studies with that design are overstated. And yet, despite the ubiquity of this design in wellness studies, no one has ever tested for the existence of a participation effect using data. This is particularly perplexing because, intuitively, it makes no sense that, in an undertaking such as personal health improvement in which motivation is key, separating a population into study and control groups on the basis of motivation would constitute a valid study design.
 
Unsurprisingly, it is easily provable that the intuitive result is indeed the correct result.
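
To see why, consider a minimal simulation of this study design (a sketch with invented numbers, not any vendor’s actual methodology): a population is split purely by motivation, the “program” itself does nothing, and the participant group still shows large apparent savings, because motivated people improve on their own.

```python
import random

random.seed(0)

# Hypothetical illustration: 10,000 employees. "Motivated" employees improve
# on their own; the program itself is simulated as doing nothing at all.
employees = []
for _ in range(10_000):
    motivated = random.random() < 0.30      # assume 30% would join any program
    baseline = random.gauss(5_000, 1_000)   # baseline annual claims, $
    trend = -0.10 if motivated else 0.02    # motivated employees trend down anyway
    followup = baseline * (1 + trend) + random.gauss(0, 500)
    employees.append((motivated, baseline, followup))

def mean(values):
    return sum(values) / len(values)

participant_savings = mean([b - f for m, b, f in employees if m])
nonparticipant_savings = mean([b - f for m, b, f in employees if not m])

# The flawed participants-vs-nonparticipants comparison "finds" savings
# even though no program effect exists anywhere in the simulation.
print(f"Apparent 'program savings': "
      f"${participant_savings - nonparticipant_savings:,.0f}/employee/year")
```

With these invented assumptions, the comparison reports roughly $600/employee/year in “savings,” generated entirely by the participation effect, since no program effect was simulated.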
 
How do we know this? There have been 3 studies in which the participation effect can be isolated from the program effect. Further, each study featured the opposite of “investigator bias,” in that the authors were attempting to prove (and, indeed, thought they had proven) the opposite.
 
Each study demonstrated the substantial impact of the participation effect from different angles:
  • Case Study 1: Would-be participants were separated from nonparticipants but not offered a program in which to participate … and yet showed massive savings;
  • Case Study 2: A program gave what is now widely believed to be incorrect advice to participants, the advice was taken, and risk factors barely fell … but participants still showed massive savings vs nonparticipants;
  • Case Study 3: A controlled experiment measured results both population-wide and by separating participants vs nonparticipants within that population. The population’s health was unaffected by the intervention, but participants showed massive savings vs nonparticipants within the population.
 
Case Study 1
Eastman Chemical and Health Fitness Corporation: Savings Without a Program
The slide below—the key outcomes display of the Eastman Chemical/Health Fitness Corporation’s Koop Award-winning program application—clearly shows increasing savings over the 2 “baseline years” (2004 and 2005 on the slide) before the “treatment years” (2006 to 2008) got underway. Phantom savings reached almost $400/year/employee by 24 months (2006), even though the program was not available to would-be participants until that 24th month.
[Slide: Eastman Chemical/Health Fitness Corporation key outcomes display, 2004-2008]
While this study would seem to constitute an excellent demonstration of the “participation effect,” there is one limitation: 4 years after the study was published, reviewed, and blessed by the Koop Award Committee, the authors and evaluators removed the X-axis labels altogether and presented an alternative interpretation: that the program had been in effect during the entire period. The revised slide is identical except that the X-axis no longer has labels.
[Revised slide: identical outcomes display with the X-axis labels removed]
That revision raises another question, though: by the end of 2008, the “savings” for Eastman participants exceeded $900/year, or 24%, but average participant risk declined only 0.17 on a scale of 5, or roughly 3%. And since wellness-sensitive medical admissions account for roughly 7% of all admissions, that 3% reduction would apply only to the 7% of admissions that are wellness-sensitive, and could therefore explain savings of only about 0.2%, not 24%.
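
The arithmetic behind that 0.2% figure is simple enough to check directly; the sketch below just restates the numbers from the paragraph above (a 0.17-point risk decline on a 5-point scale, a 7% wellness-sensitive admission share, and the 24% claimed savings):

```python
# Back-of-envelope check of the plausible-savings arithmetic above.
risk_decline = 0.17 / 5            # 0.17-point decline on a 5-point scale, about 3.4%
wellness_sensitive_share = 0.07    # wellness-sensitive admissions: roughly 7% of all admissions
claimed_savings = 0.24             # the 24% separation shown on the slide

plausible_savings = risk_decline * wellness_sensitive_share
print(f"Savings plausibly explained by risk reduction: {plausible_savings:.2%}")  # about 0.24%
print(f"Savings claimed: {claimed_savings:.0%}")                                  # 24%
print(f"Implied overstatement: roughly {claimed_savings / plausible_savings:,.0f}-fold")
```

By this arithmetic, the claimed savings exceed what the measured risk reduction could plausibly explain by roughly 2 orders of magnitude.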
 
Even if one accepts that—despite its review by the entire Koop Award Committee and its subsequent wide dissemination—the key display was wrong the entire time and no one noticed until it was exposed in a highly visible Health Affairs blog, the 24% separation of the 2 lines is overwhelmingly the result of the participation effect. It cannot possibly be attributed to the 3% reduction in risk factors among motivated participants.
 

