In a retrospective cohort analysis, diabetic nonresponders to a patient satisfaction survey had higher healthcare costs, clinic visits, and hospitalizations, but lower medication adherence.
To compare healthcare costs, utilization, and medication adherence between diabetic responders and nonresponders to a patient satisfaction survey.
We performed a retrospective cohort study of 40,766 patients with diabetes who had been randomly selected to receive the 2006 Veterans Affairs' Survey of Healthcare Experiences of Patients. Outcomes were measured during the following year.
We used multivariable models to compare healthcare costs (generalized linear models), utilization (negative binomial regression), and adherence to oral hypoglycemic medications (logistic regression) between survey responders and nonresponders.
There were 26,051 patients (64%) who responded to the survey. Survey nonresponders incurred significantly higher healthcare costs (incremental effect, $792; 95% CI, $599-$986; P <.01). Nonresponders had a modest increase in primary care (incidence rate ratio [IRR], 1.06; 95% CI, 1.05-1.08; P <.01) and specialty care visits (IRR, 1.17; 95% CI, 1.12-1.22; P <.01), but more substantial increases in mental health visits (IRR, 1.74; 95% CI, 1.62-1.87; P <.01) and hospitalizations (IRR, 1.60; 95% CI, 1.46-1.75; P <.01). Medication adherence was significantly lower among survey nonresponders (odds ratio, 0.68; 95% CI, 0.65-0.74; P <.01).
Nonresponders to a patient satisfaction survey incurred higher healthcare costs and utilization, but had lower medication adherence. Understanding these characteristics helps to assess the impact of nonresponse bias on patient satisfaction surveys and identifies clinical practices to improve care delivery.
Am J Manag Care. 2015;21(1):e1-e8
Patient satisfaction, which is routinely evaluated through patient surveys, has become increasingly important in assessing quality improvement, particularly as health systems—such as that of the Veterans Health Administration (VA)—move toward a patient-centered focus to deliver care.1 As policy makers continue to tie healthcare reimbursement to patient satisfaction, these satisfaction surveys are becoming integrated into clinical care.2 Despite the widespread use of patient surveys, however, there is limited research examining the validity of patient satisfaction measures. In particular, it is not known how systematic biases, including nonresponse bias, impact the results of patient satisfaction surveys and how nonresponders interact with the healthcare system.3-5
The majority of research characterizing survey nonresponders has focused on population-based health surveys.6-14 Population-based surveys target a broad population, including individuals who have limited contact with healthcare.12 In contrast, patient satisfaction surveys focus specifically on individuals who are connected to the health system and elicit responses that are pertinent to patients’ health and satisfaction with care. As a result, determinants and outcomes associated with nonresponse to clinic-based patient satisfaction surveys and population-based health surveys may differ.
The VA has conducted regular assessments of patient satisfaction and offers a unique opportunity to study nonresponders in these surveys. Recognizing the characteristics of survey nonresponders may also help clinicians detect specific patient needs. Survey nonresponders are thought to have unique personality characteristics and may respond differently to medical care when compared with responders.15 For example, a study of patients with chronic illness found that individuals who did not respond to a medication-beliefs survey exhibited less persistence to pharmacological therapy.16 Knowing this difference could provide an opportunity for more in-depth counseling for survey nonresponders. Understanding characteristics of nonresponders is particularly important with patients who have a chronic illness such as diabetes, as they may represent a particularly vulnerable medical population.
The objective of our study was to identify factors associated with nonresponse to a patient satisfaction survey and examine whether the nonresponse was associated with greater utilization of healthcare. To do this, we studied a large population of diabetic patients who were seen in VA primary care clinics and mailed a questionnaire related to their healthcare experience. We compared healthcare costs, number of outpatient and inpatient encounters, and adherence to oral hypoglycemic medications between survey responders and nonresponders.
We performed an observational study examining diabetic veterans who received healthcare at VA outpatient clinics during the 2005 and 2006 fiscal years (FYs) (October 1, 2004, to September 30, 2006). Data were collected from the VA electronic medical record and administrative databases. The Survey of Healthcare Experiences of Patients (SHEP) was administered to a random subset of VA patients during FY2006, and we compared characteristics between survey responders and nonresponders. The VA Puget Sound Health Care System Institutional Review Board approved the study.
SHEP is an annual survey conducted by the VA Office of Informatics and Analytics to assess patients’ perceptions of the healthcare received during a particular clinic visit.17 Every month, a stratified random design was used to survey patients from each VA facility that provided ambulatory care. To ensure adequate representation of both specialty care and primary care, approximately 15 patients were randomly selected from each of the following visit categories: specialty care, established primary care, and new primary care. Patients were eligible for the survey if they visited a VA clinic during the previous month and had not participated in a SHEP during the prior 12 months. Across the VA, 431,921 patients received the 2006 SHEP, and the response rate was 57.6% (N = 248,850).
In 2006, SHEP included 107 questions and was estimated to take 30 minutes to complete. It included questions on obtaining an appointment, arrival and registration, interaction with the provider, a self-evaluation of health, health behaviors, and individual demographics. Selected patients were initially sent a letter that explained the purpose of the survey and encouraged them to participate. One week later, the SHEP questionnaire was mailed. Reminder postcards were sent the following week, and survey collection remained open for 3 weeks after the postcards were sent. We collected administrative data on all patients who were mailed a SHEP questionnaire, including responders and nonresponders.
Our study sample was derived from a cohort of 444,414 veterans with diabetes who were on oral diabetes medications and were previously seen in VA primary care clinics (see Wong et al for more details).18 To address medication use, patients were included if they had at least 2 outpatient prescriptions for an oral diabetes medication (metformin, sulfonylurea, or thiazolidinedione) during FY2006. Of these patients, 40,766 were mailed a patient satisfaction survey. The response rate among patients in our diabetes cohort was 64%. All analyses were weighted back to the patient cohort of 444,414 using sample weights constructed to account for the fact that patients who sought VA care more frequently were more likely to receive SHEP.
Information on patient characteristics, comorbidities, and medication use was collected for all patients in the study. Patient characteristics consisted of age, gender, race, marital status, and having received free VA care for disability or low income. Access to VA care was assessed by distance to the most frequently visited VA facility. The severity of diabetes was measured by the Diabetes Complications Severity Index score, which is based on A1C and end organ damage.19 The overall patient healthcare burden was evaluated using Diagnostic Cost Group (DCG) scores, which are based on the predicted cost of patients’ underlying comorbidities.20 In addition to measuring DCG scores, specific comorbidities were identified by having 2 outpatient encounters with an International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnosis, or 1 inpatient encounter with an ICD-9-CM diagnosis, and included heart failure, atrial fibrillation, chronic obstructive pulmonary disease, renal failure, stroke, and depression.
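The case-finding rule described above (at least 2 outpatient encounters, or 1 inpatient encounter, with a qualifying ICD-9-CM diagnosis) can be sketched as a simple predicate. The function and the example code set below are illustrative only, not the study's actual implementation:

```python
def has_comorbidity(outpatient_dx, inpatient_dx, target_codes):
    """Apply the case-finding rule for a comorbidity: at least 2 outpatient
    encounters, or at least 1 inpatient encounter, carrying a qualifying
    ICD-9-CM diagnosis code."""
    outpatient_hits = sum(1 for code in outpatient_dx if code in target_codes)
    inpatient_hits = sum(1 for code in inpatient_dx if code in target_codes)
    return outpatient_hits >= 2 or inpatient_hits >= 1

# Heart failure (ICD-9-CM 428.x) as an example target set
heart_failure_codes = {"428.0", "428.1", "428.9"}

# One outpatient diagnosis alone does not qualify...
print(has_comorbidity(["428.0"], [], heart_failure_codes))   # False
# ...but a single inpatient diagnosis does
print(has_comorbidity([], ["428.0"], heart_failure_codes))   # True
```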
Our primary outcome was costs for all healthcare paid for by the VA during FY2007. We examined 3 categories of costs: outpatient, inpatient, and total costs. Healthcare costs were obtained from the Decision Support System (DSS) National Extracts for services provided in the VA, and the VA Fee Basis files (for services provided by non-VA providers and paid for by the VA). The Consumer Price Index was used to convert costs in calendar year 2006 to 2007 constant dollars.
Secondary outcomes included the number of outpatient face-to-face encounters, including primary care, specialty care, and mental health visits, as well as the number of hospitalizations in FY2007. We also assessed medication adherence to oral hypoglycemic agents derived from DSS National Outpatient Pharmacy Extracts that included drug names, prescription dispense dates, and days supplied. We used prescription refill data to calculate medication possession ratios (MPRs) for oral hypoglycemic agents.21 We classified a patient as adherent if the MPR was greater than or equal to 80% during the first quarter of 2007.22
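The MPR calculation can be sketched as follows. The 80% adherence threshold is taken from the text; the function and its simplified handling of refill dates are illustrative assumptions, and the published ReComp algorithm (reference 21) handles overlapping fills and carryover more carefully:

```python
from datetime import date

def medication_possession_ratio(fills, period_start, period_end):
    """Compute MPR: days of medication supplied divided by days in the period.

    fills: list of (dispense_date, days_supplied) tuples.
    This simple sketch counts only supply days inside the measurement window
    and does not model carryover from overlapping refills.
    """
    period_days = (period_end - period_start).days + 1
    covered = 0
    for dispense_date, days_supplied in fills:
        # Clip each fill's supply interval to the measurement window
        supply_start = max(dispense_date, period_start)
        supply_end = min(
            date.fromordinal(dispense_date.toordinal() + days_supplied - 1),
            period_end,
        )
        if supply_end >= supply_start:
            covered += (supply_end - supply_start).days + 1
    return min(covered / period_days, 1.0)

# A patient with two 30-day fills during the first quarter of 2007 (90 days)
fills = [(date(2007, 1, 1), 30), (date(2007, 2, 15), 30)]
mpr = medication_possession_ratio(fills, date(2007, 1, 1), date(2007, 3, 31))
adherent = mpr >= 0.80  # adherence threshold used in the study
```

With 60 covered days out of 90, this patient's MPR is about 0.67 and falls below the 80% adherence cutoff.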
For bivariate analyses, we used χ2 and t tests to compare characteristics of survey responders and nonresponders. For multivariate analyses, we used a logistic regression to assess patient factors associated with survey nonresponse.
We analyzed cost measures using generalized linear models (GLMs). Due to skewness in the distribution of cost, each cost variable was transformed prior to multivariable analysis. We used the modified Hosmer-Lemeshow test to determine the appropriate link function and the modified Park test to assess the shape of the variance distribution.23,24 For each cost model, the appropriate link function was the cubic root function and the appropriate GLM family was the Gamma distribution. Because the majority of patients were not hospitalized in the follow-up year (FY2007), we analyzed inpatient costs using 2-part models.25 In the first part, we estimated a probit model across all patients to model the probability of hospitalization. In the second part, we estimated a GLM across the subsample of patients with nonzero inpatient costs. We then calculated expected costs for each patient by multiplying predicted hospitalization probability (first part) and inpatient costs (second part). For all models, we calculated incremental effects (IEs), or the change in respective cost measures associated with survey nonresponse.
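The 2-part logic can be illustrated with a toy calculation. The predicted values below are invented for illustration; in the study, each part came from a fitted probit and GLM, respectively:

```python
def expected_inpatient_cost(p_hospitalization, cost_if_hospitalized):
    """Two-part model: unconditional expected cost is the predicted
    probability of any hospitalization (part 1, probit) times the
    predicted cost conditional on hospitalization (part 2, GLM)."""
    return p_hospitalization * cost_if_hospitalized

# Illustrative predictions for one patient under the two counterfactuals
cost_as_nonresponder = expected_inpatient_cost(0.13, 17000.0)
cost_as_responder = expected_inpatient_cost(0.10, 17000.0)

# The incremental effect (IE) is the change in expected cost
# associated with survey nonresponse
incremental_effect = cost_as_nonresponder - cost_as_responder
```

This structure lets nonresponse affect expected cost through both the probability of any hospitalization and the cost incurred once hospitalized.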
We analyzed utilization measures using negative binomial regressions. Results were reported as incidence rate ratios (IRRs), which were derived by dividing the expected number of encounters or hospitalizations among the survey nonresponders by the expected number of encounters or hospitalizations among the survey responders. Finally, we used a logistic regression to assess the relationship between medication adherence and survey nonresponse.
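As a worked illustration of how an IRR arises (all numbers invented, not taken from the study's tables): negative binomial regression uses a log link, so the coefficient on the nonresponse indicator converts to an IRR via exponentiation, and the IRR scales the expected count.

```python
import math

# With a log link, the IRR for the nonresponse indicator is exp(beta)
beta_nonresponse = 0.06          # illustrative coefficient, not a study estimate
irr = math.exp(beta_nonresponse)

# The IRR multiplies the expected encounter count for nonresponders
# relative to responders with the same covariates
expected_visits_responder = 3.8  # illustrative adjusted mean
expected_visits_nonresponder = expected_visits_responder * irr
```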
We also performed a sensitivity analysis, restricting our analyses to patients younger than 65 years, to isolate those patients who were most likely to rely solely on the VA system for healthcare. All multivariate analyses were controlled for patient characteristics and comorbidities and adjusted for sampling weights. All statistical analyses were performed using Stata 11.2 (StataCorp LP, College Station, Texas).26
Of the 40,766 patients in our cohort who were mailed the SHEP, 26,051 patients (64%) responded to the survey and 14,715 patients (36%) did not respond. Patient characteristics differed between the 2 groups. Survey nonresponders were more likely to be younger (P <.01), female (P <.01), unmarried (P <.01), and nonwhite (P <.01). A higher proportion of survey nonresponders received free care from the VA compared with survey responders (P <.01). DCG scores were similar between the 2 groups, though specific comorbidities differed. Survey nonresponders had a lower diabetes severity index (P <.01) and a lower prevalence of hypertension (P = .01), coronary artery disease (P <.01), and atrial fibrillation (P <.01). Nonresponders had a significantly higher prevalence of depression (P <.01), posttraumatic stress disorder (P <.01), and schizophrenia (P <.01).
In univariate analysis, all categories of VA healthcare costs were higher among survey nonresponders. On average, nonresponders had a significantly higher total cost than responders ($7523 vs $5666; P <.01). Nonresponders also had more healthcare encounters, including primary care visits (4.1 vs 3.8; P <.01), specialty care visits (1.0 vs 0.9; P <.01), mental health visits (2.6 vs 1.5; P <.01), and hospitalizations (0.13 vs 0.08; P <.01). Nonresponders were less likely to adhere to diabetes medications than responders (66.4% vs 74.3%; P <.01).
In adjusted analyses, survey nonresponse was significantly associated with younger age, single marital status, nonwhite race, living closer to a VA facility, and receiving free care. Nonresponse was also greater with higher DCG scores and among patients who had a history of depression and schizophrenia. Hypertension and renal disease were associated with lower odds of nonresponse.
In adjusted cost analysis, survey nonresponders had significantly higher costs, including outpatient costs (IE, $431; 95% CI, $320-$542), inpatient costs (IE, $421; 95% CI, $256-$586), and total costs (IE, $792; 95% CI, $599-$986) compared with responders. These results indicate that the total difference in adjusted costs between nonresponders and responders was $792 during the follow-up year.
The expected number of face-to-face encounters was significantly higher for nonresponders, ranging from modest increases in primary care encounters (IRR, 1.06; 95% CI, 1.05-1.08) and specialty care encounters (IRR, 1.17; 95% CI, 1.12-1.22) to more notable differences in mental health visits (IRR, 1.74; 95% CI, 1.62-1.87). This indicates that while the expected number of primary care visits was only 6% higher for survey nonresponders, the expected number of mental health visits was 74% higher than that expected for survey responders. The expected number of hospitalizations was also significantly higher for nonresponders (IRR, 1.60; 95% CI, 1.46-1.75). Finally, nonresponders were less likely to maintain adherence to oral hypoglycemic agents than responders (OR, 0.68; 95% CI, 0.65-0.74), indicating that the odds of medication adherence were 32% lower among survey nonresponders.
In our sensitivity analysis, there were 16,786 patients who were younger than 65 years, with findings that were largely similar to the primary analysis. Nonresponders who were younger than 65 years had increased total healthcare costs (IE, $640; 95% CI, $302-$978), more mental health encounters (IRR, 1.17; 95% CI, 1.07-1.26), more hospitalizations (IRR, 1.38; 95% CI, 1.22-1.56), and lower medication adherence (OR, 0.68; 95% CI, 0.64-0.72).
Survey nonresponders are an inherently difficult population to study, though their impact on clinical practices and healthcare delivery can be substantial. In this study, we have expanded the current knowledge on nonresponse bias not only by performing one of the largest studies of survey nonresponders, but also by focusing on a commonly and routinely used patient satisfaction survey. We found that nonresponders to the VA satisfaction survey incurred significantly higher healthcare costs, more frequent clinic visits, and more hospitalizations compared with survey responders. Nonresponders also had lower adherence to oral hypoglycemic medications. Our findings help better characterize a population that is often overlooked, and may provide additional tools for better clinical practice.
Consistent with prior studies, survey nonresponders were more likely to be single, of nonwhite race, and receiving free care.8,13,27 These patient populations tend to have greater health disparities and often feel marginalized in the healthcare system.28 Additionally, nonresponders were considerably more likely to have diagnoses of depression and schizophrenia, associations that have been substantiated in prior studies of nonresponse bias.8,29 The VA places a heavy emphasis on screening for mental illnesses30—in other health systems, psychiatric diseases often go unrecognized—and knowing their association with survey nonresponse may provide another opportunity for mental illness screening.
We found higher healthcare costs and greater utilization among diabetic nonresponders to SHEP. Few studies have examined healthcare costs among survey nonresponders, and these have only found higher costs in subgroups of nonresponders, such as those who did not participate due to illness or disability.7,12 Prior studies have also found conflicting results relating to the rate of hospitalizations and clinic encounters among survey nonresponders.9-12,14 These studies are focused on population-based health surveys where many individuals have little or no contact with the healthcare system. In contrast, our study focused on regular primary care users, who were seen at least once in a primary care clinic in the prior year, and we found that survey nonresponders still incurred higher costs and utilization.
Survey nonresponders are thought to differ from responders in a variety of personality characteristics.15,31 Nonresponders may represent a population that is less self-motivated and relies more heavily on their providers for disease management. Recognizing this association could provide an opportunity for further counseling to enable patients to manage their own health.
Although survey nonresponders had higher healthcare utilization, we found that their adherence to oral diabetes medications was significantly lower. Research on medication adherence among survey nonresponders is limited, however, and has yielded inconsistent results.16,32-34 A study of 1696 nonresponders to a medication-beliefs survey found survey nonresponse was associated with lower medication adherence across a number of diseases, including diabetes.16 Other studies have shown no difference in medication adherence between survey responders and nonresponders.33,34 Our study focused specifically on patients with diabetes who were on oral hypoglycemic medications, which is a population that tends to have low rates of medication adherence.35 We found more than a 30% reduction in the odds of medication adherence among survey nonresponders. These findings are supported by prior evidence indicating that nonresponders are less involved in self-care.31,36 For clinicians, identifying nonresponders to patient satisfaction surveys may help detect a particularly vulnerable population for medication nonadherence.
More than one-third of patients in our study who were sent a survey did not return the questionnaire. Being able to understand the characteristics of survey nonresponders not only helps to better define the surveyed population, but also helps to identify the specific needs of nonresponders. Based on the results of our study, nonresponders have distinct characteristics that may represent a particularly challenging population for clinicians and health systems. We found that nonresponders place a greater burden on the healthcare system, with higher costs and more frequent encounters. Nonresponders may benefit from more in-depth psychiatric screening, more attention and disease counseling, and more guidance on medication adherence. These patient characteristics may represent a disenfranchised population that has poor self-care and requires more services once care begins.37
Overall, identifying nonresponders to patient satisfaction surveys and factors associated with nonresponse may provide opportunities to improve clinical practices by targeting patients at greater risk. Engaging disenfranchised patients could involve improved physician communication, structured patient education programs, and better patient follow-up.38,39 The potential biases in performance measures related to survey nonresponse should also be considered as policy makers begin to tie reimbursements to patient satisfaction surveys.40
Although we controlled for a wide range of potential confounding factors, unmeasured variables could have impacted the outcome. Notably, we only analyzed information on costs, healthcare utilization, and medication use either provided or paid for by the VA. Many individuals, including veterans who use VA services, have access to more than 1 health system through private insurance plans and public programs. We performed a sensitivity analysis examining only patients younger than 65 years (those who are most likely to rely solely on VA healthcare) and found similar results. We also focused our analyses on patients with diabetes who were on oral hypoglycemic medications. Patients with diabetes have more complex disease management with greater burdens and barriers to medical care, and they may differ from nonresponders with other chronic illnesses.41 Lastly, we studied VA patients and utilization of VA care, which may differ from other patient populations and healthcare systems.
We assessed a large population of patient satisfaction survey nonresponders and controlled for a wide range of potential confounding factors. We found that nonresponders had significantly higher medical costs, more healthcare utilization, and lower medication adherence. These patients appear to represent a unique population with distinct healthcare needs. Identifying survey nonresponders not only provides an opportunity for clinicians and health systems to develop targeted clinical practices to improve care delivery, but also may inform policy makers on the challenges of nonresponse bias in patient satisfaction surveys. Further studies should examine how these findings may vary in patients with different chronic illnesses and in different clinical settings. Our findings help characterize a population that is often overlooked, and may improve patient care.

Author Affiliations: Health Services Research and Development, VA Puget Sound Health Care System, Department of Veterans Affairs (STR, ESW, JML, MP, CLB, C-FL), Seattle, WA; Divisions of Pulmonary and Critical Care (STR), Department of Medicine (STR, CLB), Division of General Internal Medicine (CLB), Department of Health Services (C-FL), University of Washington, Seattle, WA.
Source of Funding: This research is based upon work supported by the US Department of Veterans Affairs, Office of Research and Development, Health Services Research & Development. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the United States government.
Author Disclosures: The authors report no relationship or financial interest with any entity that would pose a conflict of interest with the subject matter of this article.
Authorship Information: Concept and design (STR, ESW, C-FL); acquisition of data (JML, MP, C-FL); analysis and interpretation of data (STR, ESW, MP, C-FL); drafting of the manuscript (STR, JML, MP); critical revision of the manuscript for important intellectual content (STR, ESW, C-FL); statistical analysis (STR, ESW, MP); provision of study materials or patients (JML); obtaining funding (C-FL).
Address correspondence to: Seppo Rinne, MD, PhD, 1100 Olive Way, Ste 1400, Seattle, WA 98104-3801. E-mail: firstname.lastname@example.org.

REFERENCES
1. Rosland AM, Nelson K, Sun H, et al. The patient-centered medical home in the Veterans Health Administration. Am J Manag Care. 2013;19(7):e263-e272.
2. Riskind P, Fossey L, Brill K. Why measure patient satisfaction? J Med Pract Manage. 2011;26(4):217-220.
3. Lasek RJ, Barkley W, Harper DL, Rosenthal GE. An evaluation of the impact of nonresponse bias on patient satisfaction surveys. Med Care. 1997;35(6):646-652.
4. Bot AG, Anderson JA, Neuhaus V, Ring D. Factors associated with survey response in hand surgery research. Clin Orthop Relat Res. 2013;471(10):3237-3242.
5. Boscardin CK, Gonzales R. The impact of demographic characteristics on nonresponse in an ambulatory patient satisfaction survey. Jt Comm J Qual Patient Saf. 2013;39(3):123-128.
6. Barreto Pde S. Participation bias in postal surveys among older adults: the role played by self-reported health, physical functional decline and frailty. Arch Gerontol Geriatr. 2012;55(3):592-598.
7. Etter JF, Perneger TV. Analysis of non-response bias in a mailed health survey. J Clin Epidemiol. 1997;50(10):1123-1128.
8. Launer LJ, Wind AW, Deeg DJ. Nonresponse pattern and bias in a community-based cross-sectional study of cognitive functioning among the elderly. Am J Epidemiol. 1994;139(8):803-812.
9. Osler M, Schroll M. Differences between participants and non-participants in a population study on nutrition and health in the elderly. Eur J Clin Nutr. 1992;46(4):289-295.
10. Vestbo J, Rasmussen FV. Baseline characteristics are not sufficient indicators of non-response bias follow up studies. J Epidemiol Community Health. 1992;46(6):617-619.
11. Drivsholm T, Eplov LF, Davidsen M, et al. Representativeness in population-based studies: a detailed description of non-response in a Danish cohort study. Scand J Public Health. 2006;34(6):623-631.
12. Gundgaard J, Ekholm O, Hansen EH, Rasmussen NK. The effect of non-response on estimates of health care utilisation: linking health surveys and registers. Eur J Public Health. 2008;18(2):189-194.
13. Korkeila K, Suominen S, Ahvenainen J, et al. Non-response and related factors in a nation-wide health survey. Eur J Epidemiol. 2001;17(11):991-999.
14. Reijneveld SA, Stronks K. The impact of response bias on estimates of health care utilization in a metropolitan area: the use of administrative data. Int J Epidemiol. 1999;28(6):1134-1140.
15. McLeod TG, Costello BA, Colligan RC, et al. Personality characteristics of health care satisfaction survey non-respondents. Int J Health Care Qual Assur. 2009;22(2):145-156.
16. Gadkari AS, Pedan A, Gowda N, McHorney CA. Survey nonresponders to a medication-beliefs survey have worse adherence and persistence to chronic medications compared with survey responders. Med Care. 2011;49(10):956-961.
17. Wright SM, Craig T, Campbell S, Schaefer J, Humble C. Patient satisfaction of female and male users of Veterans Health Administration services. J Gen Intern Med. 2006;21(suppl 3):S26-S32.
18. Wong ES, Piette JD, Liu CF, et al. Measures of adherence to oral hypoglycemic agents at the primary care clinic level: the role of risk adjustment. Med Care. 2012;50(7):591-598.
19. Joish VN, Malone DC, Wendel C, Draugalis JR, Mohler MJ. Development and validation of a diabetes mellitus severity index: a risk-adjustment tool for predicting health care resource use and costs. Pharmacotherapy. 2005;25(5):676-684.
20. Ash AS, Ellis RP, Pope GC, et al. Using diagnoses to describe populations and predict costs. Health Care Financ Rev. 2000;21(3):7-28.
21. Bryson CL, Au DH, Young B, McDonell MB, Fihn SD. A refill adherence algorithm for multiple short intervals to estimate refill compliance (ReComp). Med Care. 2007;45(6):497-504.
22. Briesacher BA, Andrade SE, Fouayzi H, Chan KA. Medication adherence and use of generic drug therapies. Am J Manag Care. 2009;15(7):450-456.
23. Manning WG, Basu A, Mullahy J. Generalized modeling approaches to risk adjustment of skewed outcomes data. J Health Econ. 2005;24(3):465-488.
24. Hosmer DW, Lemeshow S. Applied Logistic Regression. 2nd ed. New York: Wiley; 2000.
25. Duan N, Manning WG Jr, Morris CN, Newhouse JP. Choosing between the sample-selection model and the multi-part model. J Bus Econ Stat. 1984;2(3):283-289.
26. Stata Statistical Software [computer program]. Version 11.2. College Station, TX: StataCorp LP; 2012.
27. Kim HJ, Fredriksen-Goldsen KI. Nonresponse to a question on self-identified sexual orientation in a public health survey and its relationship to race and ethnicity. Am J Public Health. 2013;103(1):67-69.
28. Stepanikova I, Cook KS. Effects of poverty and lack of insurance on perceptions of racial and ethnic bias in health care. Health Serv Res. 2008;43(3):915-930.
29. Clark VA, Aneshensel CS, Frerichs RR, Morgan TM. Analysis of nonresponse in a prospective study of depression in Los Angeles County. Int J Epidemiol. 1983;12(2):193-198.
30. Kanter JW, Epler AJ, Chaney EF, et al. Comparison of 3 depression screening methods and provider referral in a Veterans Affairs primary care clinic. Prim Care Companion J Clin Psychiatry. 2003;5(6):245-250.
31. Marcus B, Schütz A. Who are the people reluctant to participate in research? personality correlates of four different types of nonresponse as inferred from self- and observer ratings. J Pers. 2005;73(4):959-984.
32. Riekert KA, Drotar D. Who participates in research on adherence to treatment in insulin-dependent diabetes mellitus? implications and recommendations for research. J Pediatr Psychol. 1999;24(3):253-258.
33. Hunot VM, Horne R, Leese MN, Churchill RC. A cohort study of adherence to antidepressants in primary care: the influence of antidepressant concerns and treatment preferences. Prim Care Companion J Clin Psychiatry. 2007;9(2):91-99.
34. Lam F, Stevenson FA, Britten N, Stell IM. Adherence to antibiotics prescribed in an accident and emergency department: the influence of consultation factors. Eur J Emerg Med. 2001;8(3):181-188.
35. Donnan PT, MacDonald TM, Morris AD. Adherence to prescribed oral hypoglycaemic medication in a population of patients with Type 2 diabetes: a retrospective cohort study. Diabet Med. 2002;19(4):279-284.
36. Thoolen B, de Ridder D, Bensing J, Gorter K, Rutten G. Who participates in diabetes self-management interventions? issues of recruitment and retainment. Diabetes Educ. 2007;33(3):465-474.
37. Wild H. The economic rationale for adherence in the treatment of type 2 diabetes mellitus. Am J Manag Care. 2012;18(suppl 3):S43-S48.
38. Deakin TA, Cade JE, Williams R, Greenwood DC. Structured patient education: the diabetes X-PERT Programme makes a difference. Diabet Med. 2006;23(9):944-954.
39. Gao J, Wang J, Zhu Y, Yu J. Validation of an information-motivation—behavioral skills model of self-care among Chinese adults with type 2 diabetes. BMC Public Health. 2013;13:100.
40. CMS. 42 CFR Parts 422 and 480. Medicare program: hospital inpatient value-based purchasing program: final rule. Fed Regist. 2011;76(88):26490-26547.
41. Eton DT, Elraiyah TA, Yost KJ, et al. A systematic review of patient-reported measures of burden of treatment in three chronic diseases. Patient Relat Outcome Meas. 2013;4:7-20.