Where's the Evidence? Barriers to Analyzing Digital Health's Impact
April 16, 2019


A career researcher committed to the evidence-based understanding of high-impact healthcare problems, Meridith Peratikos leads a team that conducts research studies, provides methodological oversight, and performs literature reviews.

Prior to axialHealthcare, Meridith was a biostatistician and protocol manager for multi-center clinical trials conducted through the American College of Radiology Imaging Network at Brown University. She served full-time as a faculty member in the Vanderbilt University Medical Center (VUMC) Department of Biostatistics, where she primarily collaborated within the Vanderbilt Institute for Global Health on observational cohort research in HIV/AIDS. Meridith is currently an Adjunct Instructor in the Department of Biostatistics at VUMC. She holds bachelor’s degrees in statistics and sociology and a master’s degree in statistics from Case Western Reserve University. Meridith is lead or co-author on nearly 100 peer-reviewed publications (https://scholar.google.com/citations?hl=en&user=Ix2xcSsAAAAJ).
 
The digital health industry continues to grow at an unprecedented rate. In 2018, venture funding for digital health companies reached a record $8.1 billion, according to Rock Health. These companies, including axialHealthcare, are striving to address important healthcare issues by developing products and service offerings in various categories such as genome sequencing, analytics, telemedicine, mobile apps, and population health management tools—all with the promise of increased insights and patient engagement, and better care coordination that improves outcomes, reduces costs, and expands access to care. These goals are crucial to moving the US healthcare system toward value-based reimbursement.

That said, little evidence is available in the digital space when it comes to peer-reviewed publications, measurement of potential impact, and effect on patients with the greatest burden of disease.

Analyzing the Evidence Behind Digital Health

Health Affairs recently published a study by Kyan Safavi, Simon C. Mathews, David W. Bates, E. Ray Dorsey, and Adam B. Cohen that explored this observation. After identifying 20 of the top-funded, private US-based digital health companies, they analyzed their products and services related to peer-reviewed evidence, potential impact on patients with high-burden conditions, and impact on cost of or access to care.

The study found that of the few studies on digital health services published in peer-reviewed literature, most evaluated their products in healthy patients rather than high-burden patients. Also, clinical effectiveness studies with a high level of evidence were uncommon. Moreover, no studies evaluated the effectiveness of their products or services in terms of reducing costs or improving access to care.

These findings may lead to the assumption that digital health products and services from leading companies have had a limited impact on disease burden and cost in the healthcare system, but there’s another side to that narrative. There are myriad reasons that little evidence exists on digital health’s impact, and we discuss the most significant barriers below.
  • High cost. No matter the size of the study or whether it’s conducted in-house or by a clinical research organization, peer-reviewed studies are expensive. Several factors must be taken into consideration: length of study, number of patients in the study, type of materials provided to patients, study staffing, training for human subject research, ethical clearance from an Institutional Review Board, and many more. These costs can be a major barrier for startups with strained budgets and tight timelines.
  • Research bias and ownership. Bias is defined as “an inclination of temperament or outlook, especially a personal and sometimes unreasoned judgment: prejudice.” Bias can occur in many stages of research, including planning, funding, data collection, analysis, and even publication. With publication bias, companies may choose to publish only the self-sponsored studies that show positive results, which is one of the reasons the medical community has become adamant that studies be run and funded by an objective third party.
For the company that developed the product or service, however, that means spending substantial money to contract with a third party. Many startups simply don’t have that money to spend, especially for studies they would not own and whose results could negatively impact their businesses.
  • Timeliness. Depending on the study endpoint, some trials take years to move from inception through planning, accrual, follow-up, completion, and analysis. Even the peer-review process is slow, and it does not begin until the study is completed. On average, review takes 13 weeks. Moreover, if a paper is rejected twice and accepted at the third-choice journal, a whole year may pass from completion of the paper to publication of results.
A timeline of “years” can pose a major problem for companies that are constantly innovating. Products can evolve drastically over a short period of time, but a company undergoing a randomized study would have to put product adjustments on hold. axialHealthcare is a great example of this challenge. We’re continually assessing new scientific evidence in the pain and opioid space and if the findings indicate that we should make an adjustment to our clinical insight or analytical models, we implement the needed changes. This type of product adjustment may have to be put on hold during a study.
  • Competitive advantage. Yes, there are major benefits from a peer-reviewed study highlighting a company’s products and services, but there is also a competitor concern. It is difficult to get published in high-impact journals without sharing company details that include intellectual property. That’s a major concern, especially for startups.
What can a digital health company do to prove value?

A more pragmatic approach to evaluating digital health products may be to conduct observational analysis of routinely collected data—an approach that requires data collection, study design, and statistical analysis skills. While not as rigorous as randomized controlled studies, a moderate level of evidence is better than none at all. Additionally, the academic medical community agrees that observational data analysis is important for informing research findings and has developed reporting guidelines called STrengthening the Reporting of OBservational studies in Epidemiology (STROBE). If followed, the STROBE guidelines allow for transparency and reproducibility of observational data analysis.

Given the long wait times for publication, digital health companies should consider releasing impact results as a white paper rather than in a peer-reviewed journal. White papers allow for quicker production of scientific knowledge and enable digital health companies to publicize results sooner. White paper results are still met with much skepticism and would not even have been evaluated by Safavi et al, but if the culture around releasing results emphasized transparency and reproducibility, then peer review might become less imperative for digital health companies pursuing innovative ways to make an impact.

 
Copyright AJMC 2006-2018 Clinical Care Targeted Communications Group, LLC. All Rights Reserved.