Objective: To assess the clinical quality of diabetes care and the systems of care in place in Medicare managed care organizations (MCOs) to determine which systems are associated with the quality of care.
Study Design: Cross-sectional, observational study that included a retrospective review of 2001 diabetes Health Plan Employer Data and Information Set (HEDIS) measures and a mailed survey to MCOs.
Methods: One hundred thirty-four plans received systems surveys. Data on clinical quality were obtained from HEDIS reports of diabetes measures.
Results: Ninety plans returned the survey. Composite diabetes quality scores (CDSs) were based on averaging scores for the 6 HEDIS diabetes measures. For the upper quartile of responding plans, the average score was 77.6; the average score for the bottom quartile was 53.9 (P < .001). The mean number of systems or interventions for the upper-quartile group and the bottom-quartile group was 17.5 and 12.5 (P < .01), respectively. There were significant differences between the 2 groups in the following areas: computer-generated reminders, physician champions, practitioner quality-improvement work groups, clinical guidelines, academic detailing, self-management education, availability of laboratory results, and registry use. After adjusting for structural and geographic variables, practitioner input and use of clinical-guidelines software remained independent predictors of CDS. Structural variables that were independent predictors were nonprofit status and increasing number of Medicare beneficiaries in the MCO.
Conclusions: MCO structure and greater use of systems/interventions are associated with higher-quality diabetes care. These relationships require further exploration.
(Am J Manag Care. 2004;10:934-944)
Diabetes is an epidemic for which human and economic costs are great and are predicted to increase. The economic cost of diabetes is already alarming, with lost productivity and direct medical expenditures amounting to an estimated $132 billion.1 People with diabetes incur more than twice the direct medical costs (ie, hospitalizations, physician visits, medications) of people who do not have the disease.1 In spite of the obvious importance of this disease to those who have it, healthcare professionals who care for them, and society as a whole, there is still wide variation in care among individuals with diabetes. Many patients with diabetes receive care well below appropriate standards believed to reduce morbidity and mortality from the disease.2,3 Diabetes increases with age and is quite common among Medicare beneficiaries; approximately 18% of the United States population over age 60 years have diabetes.4 Prevalence and cost have made diabetes an important focus for quality measurement and improvement within the Medicare program.
The managed care industry has been at the forefront of efforts to improve the effectiveness and efficiency of care.5,6 It has taken the lead in public reporting of diabetes care quality by using a set of comprehensive measures formulated in the Diabetes Quality Improvement Project (DQIP).7,8 Managed care plans collect data on DQIP quality indicators to assess care related to glucose control (glycosylated hemoglobin [HbA1c] testing and level), lipids (lipid profile testing and level), and examinations for eye disease and kidney disease. Plans use the DQIP indicator data to implement quality-improvement interventions.
This is an era of increasing emphasis on organizational responsibility for quality of care.9-11 It is important to understand the effects of managed care plans' quality-improvement interventions that are focused on their practitioners and patients. Managed care plans typically do not directly supervise individual clinician-patient interactions. It is unclear what efforts by plans have been related to measurable improvements in diabetes care. To our knowledge, this study is the first attempt to describe ongoing plan-based quality-improvement interventions and systems that plans have in place, and correlate those systems or interventions with the quality of diabetes care among Medicare managed care plans.
RESEARCH DESIGN AND METHODS
The data on quality of care delivered to plan members with diabetes were available to the authors as a result of the Medicare contract-mandated reporting of the Health Plan Employer Data and Information Set (HEDIS®, a registered trademark of the National Committee for Quality Assurance) and as part of a special contract award between the Centers for Medicare & Medicaid Services (CMS) and IPRO, the New York State Medicare quality-improvement organization. Comprehensive diabetes care measures in HEDIS (which are a subset of the DQIP measures) were reported by all Medicare managed care plans using HEDIS 2001 technical specifications. The authors administered a survey to plans, requesting information about interventions designed to support and/or improve diabetes care. Survey results were compared with the results of the HEDIS diabetes scores to identify best practices in diabetes among Medicare managed care plans.
In 2001, 201 Medicare+Choice managed care organizations (MCOs) reported their HEDIS results to CMS as required under their Medicare contracts (reporting on care provided in the year 2000). The authors excluded 28 health plans because they had special cost, risk, or demonstration contracts. The study team also excluded an additional 10 plans that did not report a complete and validated measure set. Last, the authors excluded 29 plans without an active Medicare managed care contract at the time of the study survey in 2002. The final study sample size was 134 managed care plans.
Diabetes Care Measures
We used 6 HEDIS diabetes care measures. These were HbA1c testing annually, an HbA1c level higher than 9.5% (indicating poor HbA1c control), biennial lipid testing, a low-density lipoprotein level less than 130 mg/dL (indicating good lipid control), eye exams every 2 years in low-risk patients and annually in high-risk patients, and an annual nephropathy evaluation.
To investigate the relationship between the use of various interventions and plans' overall quality of diabetes care, a composite diabetes score (CDS) was created for each plan. Lacking national consensus for the most appropriate weighting of the individual HEDIS measures as part of a composite score, the study team weighted each of the measures equally in the composite score. This was done in consultation with a technical expert panel created for the project and after analysis of alternate scoring schemes. The technical expert panel consisted of experts from the DQIP team8 and included representation from the American Academy of Family Practice, the American College of Physicians, the American Diabetes Association, the Centers for Disease Control and Prevention, the National Committee for Quality Assurance, and the Veterans Health Administration. A full list of organizations represented on the technical expert panel is available from the authors.
The study team used principal-component analysis and Cronbach's alpha reliability coefficient testing to assess the degree to which combinations of the 6 measures shared an underlying overall construct of diabetes care. The information captured by the first principal component and the fairly high internal-consistency results both supported the viability of a single construct (data are available from the authors). Comparisons of the unit-weighted composite scores with those from regression weights resulted in sufficiently high agreement to justify equally weighting each measure in the composite and, thereby, increasing the interpretability of the composite measure scores.12 Of note, the direction of the HbA1c indicator was reversed in the composite score to be comparable to the other components, where a higher score stands for better performance. When HbA1c scores were reported, a lower score was desirable (ie, values greater than 9.5% indicated poor control). This was reversed for purposes of the composite score so that scores lower than 9.5% were counted as desirable outcomes of the indicator.
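The unit-weighted composite with the reversed HbA1c component can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the measure names and rates below are hypothetical stand-ins for a plan's 6 HEDIS rates.

```python
def composite_diabetes_score(rates):
    """Unit-weighted composite of 6 HEDIS diabetes rates (each 0-100).

    The poor-HbA1c-control rate is reversed (100 - rate) so that a
    higher value is better for every component before averaging,
    as described in the text. Measure names here are illustrative.
    """
    reversed_measures = {"hba1c_poor_control"}
    adjusted = [
        100 - rate if name in reversed_measures else rate
        for name, rate in rates.items()
    ]
    return sum(adjusted) / len(adjusted)

# Hypothetical plan: 85% received HbA1c testing, 30% in poor control, etc.
plan = {
    "hba1c_testing": 85.0,
    "hba1c_poor_control": 30.0,  # lower is better; reversed to 70
    "lipid_testing": 80.0,
    "lipid_control": 60.0,
    "eye_exam": 65.0,
    "nephropathy_eval": 50.0,
}
print(round(composite_diabetes_score(plan), 1))  # 68.3
```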
Survey of Best Practices
The survey used to obtain information on the MCOs' quality-improvement interventions incorporated components described by Dr. Edward Wagner and colleagues13 in the Assessment of Chronic Illness Care survey, Version 3.0,14,15 a survey based on the Chronic Care Model, which identifies important components of quality care.14,15 Additional elements (eg, use of physician champions in quality improvement) came from the work of Dudley and colleagues and Felt-Lisk and Kleinman.16,17 The technical expert panel reviewed the work of the study team and made suggestions for revisions or additions to the survey. The investigators chose 3 plans as a convenience sample to field-test the questionnaire. In these 3 plans, interrater reliability was determined by having 2 individuals complete the survey. There were interrater reliabilities of 94% in 2 of the plans and 62% in the third plan. The survey was refined based on field-test results prior to distribution to the 134 managed care plans. The survey contained 20 multiple-part questions with the following content areas: practitioner-centered initiatives for diabetes care (8 questions), member-centered initiatives for diabetes care (6 questions), administrative and structural characteristics (6 questions), and a single open-ended question on MCO community/local initiatives for diabetes care. (See the Box for a sample question.) The Appendix contains a detailed list of the elements included in the survey.
Four weeks prior to receiving the survey, plans received letters encouraging them to respond to the survey that was about to be mailed. Plans understood that their responses were voluntary and that the results would remain confidential. CMS and IPRO followed health-privacy criteria as defined by 42 CFR Section 480 for quality-improvement organizations. Over a period of 4 weeks, there were 2 mailings of the survey and a telephone call reminder to all plans. A computerized data entry option was provided for each plan.
To differentiate plans into meaningful categories, we grouped the responding plans into 3 categories based on their CDS: the top 25% of plans, the middle 50% of plans, and the bottom 25% of plans. We chose this a priori grouping primarily to focus on the relationship between quality of care delivered and the systems/interventions in place in those plans in the top and bottom quartiles of the CDS. Differences among plans at the extremes of quality were more likely to be accurate reflections of true differences in care delivered and more meaningful clinically. The study focused on assessing differences in quality-improvement interventions and systems in place between the top 25% and the bottom 25% of reporting managed care plans. The difference between the high-CDS group and the low-CDS group was tested by using chi-square tests of association. P ≤ .05 was considered statistically significant.
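The quartile comparison described above amounts to a chi-square test of association on a contingency table of intervention use by performance group. A minimal SciPy sketch, using a hypothetical 2 × 2 table (the counts are invented for illustration, not taken from the study):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rows are top- and bottom-quartile plans,
# columns are plans that do / do not use a given intervention.
table = np.array([
    [18, 4],   # top-quartile plans
    [7, 15],   # bottom-quartile plans
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, P = {p:.4f}")
```

With counts this lopsided the test rejects independence at P ≤ .05, mirroring the paper's comparison of high- and low-CDS groups.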
RESULTS

Of the 134 managed care plans, 90 completed the survey, resulting in a response rate of 67%. Fifty-three of the responding MCOs were independent practice association (IPA) model (59% of respondents), 28 were group model (31%), and the remaining 9 (10%) were staff model. Sixty-one plans (68% of respondents) were for profit. Fifty-one plans (57% of respondents) had National Committee for Quality Assurance accreditation. The Joint Commission on Accreditation of Healthcare Organizations or the Accreditation Association for Ambulatory Health Care accredited an additional 3 plans. Twenty-eight plans (31% of respondents) were in the Northeast, 24 (27% of respondents) were in the South, 19 (21%) were in the Midwest, and 19 (21%) were in the West. The mean Medicare membership among respondents was 32 983, with a standard deviation of 37 896. The minimum number of Medicare beneficiaries in the responding plans was 1060 and the maximum was 258 681. The mean number of years the respondent plans had been in business was 16.3, with a standard deviation of 10.2 years; the minimum was 1 year, and the maximum was 55 years. An average of 98% of Medicare members in each responding plan were in a Medicare health maintenance organization (HMO) contract, with the remainder being in preferred provider organization (PPO) or point of service (POS) contracts.
The only statistically significant difference (χ² = 3.9; P ≤ .05) between responders and nonresponders was the "any accreditation" indicator: those who were not accredited were more likely to respond (78.3%) than those who were accredited (61.4%). Nonstatistically significant differences between responders and nonresponders were that nonresponders were somewhat more likely to be from the South and West regions (34% each), their model was more likely IPA (71%), they were slightly less likely to be run for profit (59%), they had larger Medicare memberships (mean = 54 526, SD = 78 887), they were in business slightly fewer years (mean = 14.7, SD = 7.7), and their Medicare members were more likely to be HMO members (mean = 99.9%, SD = 0.5) instead of PPO or POS members.
The survey response rates are presented in Table 1. The lowest-performing group had a response rate of 69%. The middle group had a response rate of 72%. The plans in the highest-performing group had a response rate of 55%. The mean individual CDS and DQIP scores for the 90 responding plans did not differ significantly from those of the 44 nonresponders (Table 2).
The scores for each of the individual diabetes indicators in the top and bottom quartiles of plans are shown in Table 3. For the 90 responding plans, the mean CDS was 65.6. The CDSs for the high-, middle-, and low-performing plans were 77.6, 66.0, and 53.9, respectively. The differences in the clinical quality of care delivered by the plans in the top and bottom groups are considerable and are statistically significant (Table 4).
We found that the overall mean CDS for the 134 plans was 66.0. For the 90 plans that responded to our survey, the mean CDS was 65.6. The average number of quality interventions for the 90 responding plans was 15.3 out of 32 possible interventions (48%). The average number of interventions for the lowest-performing group was 12.5; the average number of interventions for the middle group was 15.6, and the average number of interventions for the highest-performing group was 17.5 (P < .01 for differences among means, analysis of variance F test = 5.4). In pairwise comparisons, the mean number of interventions for the lowest-performing group was significantly lower than that for the middle group or the highest-performing group. The difference between the number of interventions used by the middle and high-performing groups was not significant. Interventions related to practitioners and structural changes were different between the high-performing and low-performing groups. The mean number of practitioner interventions among the high-performing group was 7.1; it was 4.3 in the low-performing group (P = .004). The difference in the mean number of administrative/structural interventions for high-performing versus low-performing groups also was significant (4.9 vs 4.1; P = .024). High- and low-performing groups had similar mean numbers of member-focused interventions (0.9 vs 0.8; P = .807).
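The comparison of mean intervention counts across the 3 performance groups is a one-way analysis of variance. A brief SciPy sketch with invented per-plan intervention counts (not the study's data), showing how the F statistic and P value are obtained:

```python
from scipy.stats import f_oneway

# Invented intervention counts per plan in each performance group.
low = [10, 12, 13, 11, 14]
middle = [14, 16, 15, 17, 16]
high = [17, 18, 16, 19, 18]

# One-way ANOVA across the 3 groups.
f_stat, p_value = f_oneway(low, middle, high)
print(f"F = {f_stat:.1f}, P = {p_value:.5f}")
```

A significant F test would then be followed by pairwise comparisons, as in the text, to locate which group means differ.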
High-performing plans, those in the top quartile, reported greater use of the surveyed systems or interventions. In the case of 9 of the interventions, the differences between the high- and low-performing plans achieved statistical significance (Table 5). The 9 interventions are (1) computer-generated reminders to practitioners, (2) physician champions (strong advocates for quality, identified as physician leaders), (3) opportunities for any practitioner input, (4) practitioner quality-improvement work groups, (5) support for use of diabetes care guidelines, (6) academic detailing to physicians, (7) diabetes self-management education, (8) use of diabetes disease registries, and (9) availability of lab results.
We assessed possible relationships among interventions, structural variables such as profit status, and region. Several multivariate regression models were tested. Because the number of variables was large and the survey sample size small (n = 90), the regression model-building did not include testing for interaction terms and started with the univariate variables significantly associated with the CDS. We constructed several sets of regression models.
The first set of models consisted of structural variables including total Medicare membership, years in HMO business, percent Medicare members in HMO contracts, accreditation and tax status, business model, and region. The business model dichotomized IPA model versus staff, group, or other model. The 4 CMS regions (Northeast, Midwest, West, and South) were analyzed as a set of 3 dummy variables. A second separate set of regression models included survey items significantly associated with CDS on the univariate level. The final selection of the variables in the models was derived by stepwise and confirmed by backward elimination approaches.
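A backward-elimination loop of the kind described above can be sketched as follows. This is a generic reconstruction on synthetic data, not the study's analysis: the predictor names (membership, nonprofit) are stand-ins, and the fit is ordinary least squares with t-test P values computed directly.

```python
import numpy as np
from scipy import stats

def ols_pvalues(X, y):
    """Two-sided t-test p-values for each column of X in an OLS fit with intercept."""
    n, k = X.shape
    design = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    dof = n - k - 1
    sigma2 = resid @ resid / dof                      # residual variance
    cov = sigma2 * np.linalg.inv(design.T @ design)   # covariance of estimates
    t_stats = beta / np.sqrt(np.diag(cov))
    return (2 * stats.t.sf(np.abs(t_stats), dof))[1:]  # drop the intercept term

def backward_eliminate(X, names, y, threshold=0.05):
    """Drop the least significant predictor until every remaining one passes."""
    X, names = X.copy(), list(names)
    while names:
        p = ols_pvalues(X, y)
        worst = int(np.argmax(p))
        if p[worst] <= threshold:
            break
        X = np.delete(X, worst, axis=1)
        names.pop(worst)
    return names

# Synthetic stand-in data: two real predictors and one pure-noise column.
rng = np.random.default_rng(0)
n = 80
membership = rng.normal(size=n)
nonprofit = rng.integers(0, 2, size=n).astype(float)
noise = rng.normal(size=n)
y = 0.4 * membership + 0.5 * nonprofit + rng.normal(scale=0.5, size=n)
X = np.column_stack([membership, nonprofit, noise])

kept = backward_eliminate(X, ["membership", "nonprofit", "noise"], y)
print(kept)
```

The two predictors with real signal survive elimination, while the noise column is typically the first to be dropped; a stepwise (forward) pass would be the mirror image, adding the most significant candidate at each step.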
In the modeling of structural variables and CDS as an outcome, tax status, total Medicare membership, and accreditation (analyzed as the number of National Committee for Quality Assurance accreditation points) were significant predictors of managed care plan performance in diabetes care. Standardized betas were .25 (P ≤ .01), .26 (P ≤ .01), and .25 (P ≤ .01), with an adjusted multivariate coefficient of determination (R²) of .23 for stepwise regression. Performing a multivariate analysis of just the survey items and CDS, level of practitioner participation for any practitioner input and use of guidelines software for practitioners were significant predictors of CDS. Standardized betas were .32 (P ≤ .004) and .23 (P ≤ .04), with an adjusted R² of .17.
The last model combined predictors from the 2 sets of structural and survey items and CDS. The 4 variables described above remained significant. In this final model, the adjusted R² was .30 (F test = 10.5; P ≤ .001). Standardized betas for the items were .26 (P ≤ .01) for any practitioner input, .22 (P ≤ .03) for practitioners' use of guidelines software, .19 (P ≤ .05) for tax status, and .33 (P ≤ .001) for total Medicare membership.
DISCUSSION

Although excellent analyses of the use of chronic-disease management strategies and interventions have suggested an association between these strategies/interventions and clinical quality,10,18 we believe this study is the first to associate the use of interventions and systems with the clinical performance in delivering diabetes care by all managed care plans serving the Medicare population. The findings among the 90 managed care plans responding to our survey showed a somewhat higher use of systems and interventions than did the Casalino and associates survey of 1040 large medical groups.19 Casalino and associates reported use of 5.1 out of 16 (32%) possible organized processes for managing care. In their survey, 50% of groups were using 4 or fewer of 16 processes. The Casalino et al study did not attempt to link clinical performance to the implementation of systems or interventions. Differences between the Casalino et al paper and this study (eg, surveying large medical groups instead of managed care plans) are major enough that direct comparisons may not be meaningful. However, theirs is the only published study of which we are aware that quantifies the presence of components of the chronic-care model in organized units of practices.
Beyond ascertaining the number of interventions, we attempted to determine which interventions were associated with differences in clinical quality. In a cross-sectional, observational study, it is not possible to assign causality. However, we found that of the 9 interventions that occurred significantly more frequently in the high-performing plans, 7 were linked closely to programs, activities, or approaches that are directed primarily toward the provider (reminders to practitioners, physician champions, opportunity for practitioner input into quality-improvement activities, practitioner quality-improvement work groups, supporting physician use of guidelines, academic detailing to physicians, and feedback to physicians of laboratory test results for more than half of patients), in contrast to activities directed primarily toward members (eg, reminders to members).
These findings are intriguing in relation to the sources of variation in care. Krein and colleagues recently reported that in the Veterans Health Administration the source of most variation in care provided to patients is not at the level of the physician but at the level of the facility.20 Earlier studies also suggested that the majority of variation in care does not occur at the level of the individual physician.21-24 Although we did not examine variability at the level of the physician, we found that the physician-directed interventions are the ones most closely associated with higher HEDIS quality scores for diabetes. This suggests that the role of a facility or health plan may be to create an environment that supports physician performance and enhances the physician's role in ensuring quality. Cabana and colleagues write that overcoming organizational and external barriers is as important as clinician knowledge or attitudes regarding guideline adherence.25
We found that the information-system components were somewhat less related to high or low quality of care than were physician-related components. The information-system components that we surveyed were use of electronic medical records, care management software, computer-generated reminders, decision support tools, and availability of pharmacy, laboratory, and eye care data in electronic systems. Of these components of information systems, only use of computer reminders, use of clinical-guidelines software/registries, and availability of lab results were statistically different in the high-performing and low-performing groups. In general, the literature has supported an association between improvements in care and use of information systems. For example, recent work on the use of electronic registries is consistent with care improvements.26-29 Given the burden associated with implementation and upkeep of information systems, it is important to ensure that the systems put in place provide the maximal support to clinical care.
The only member-focused intervention in this survey that was statistically different among those providing the highest-quality and lowest-quality diabetes care was formal instruction in diabetes self-management. The intervention of obtaining member input regarding initiatives approached statistical significance.
Our data suggest that interventions as predictors of CDS are associated with MCO membership size and tax status, but not years in business, accreditation status, region, or business model. One must be cautious in interpreting the multivariate analysis, given that the total number of plans was relatively small and there were many possible interventions. For example, only 3 responding plans used clinical-guidelines software. The data are consistent, however, with an observation that larger, nonprofit plans are more likely to promote key interventions.
There is extensive literature on the individual interventions that we studied. A general conclusion, supported by a meta-analysis of 41 studies looking at a heterogeneous mix of interventions, was that multifaceted interventions hold the key to quality improvement.18 Our work supports this conclusion in that the lowest-performing plans used the fewest interventions.
The most significant limitation of the study is that data on systems and interventions in place were collected by self-report, without verification of the accuracy of the reports. Respondents may have been influenced to couch answers in a light that they felt would most favorably reflect their plans. However, interrater reliability in the limited pilot testing suggests consistency, if not accuracy, of the reporting. A second limitation could be response bias if plans that did not respond differed significantly from those that did respond. Third, there are only 38 total plans in the upper and lower quartiles. This limited sample size ensures that only large effects reach significance in this smaller group, which suggests that the differences that were found are quite large and potentially meaningful. Fourth, our unit of analysis is the managed care plan, and our findings may not necessarily be applicable to either non-Medicare managed care plans or individual physician practices. Last, another limitation that should be addressed in future research is that this survey did not explore the manner and intensity with which each of these interventions was implemented.
CONCLUSIONS

A survey about systems of care and quality-improvement interventions administered to Medicare managed care plans demonstrated that, overall, an average of 15.3 out of a possible 32 interventions were reported by the 90 plans responding to the survey. Plans achieving the highest quality of diabetes care, based on the HEDIS scores, also reported the greatest efforts to support and enhance the physician role in providing high quality of care for diabetes.
We thank Fatima Baysac of the Centers for Medicare and Medicaid Services and L Gregory Pawlson, MD, of the National Committee for Quality Assurance for their invaluable contributions to the preparation and review of this manuscript. We also thank Maureen O'Callaghan, IPRO, for her coordination of the survey data collection team. Finally, we want to express our appreciation to the special award manager, Janice Acar, IPRO. Ms Acar's integrity, persistence, attention to detail, and collegiality were key to the project's completion.
From the Centers for Medicare & Medicaid Services, Baltimore, Md (BF); IPRO, Lake Success, NY (AS, KOW); and the Delmarva Foundation for Medical Care, Easton, Md (DK). Dr Fleming now is with the Veterans Health Administration, Washington, DC. Dr Keller has since founded Halcyon Research, Inc, Sarasota, Fla.
This study was supported by a HEDIS® 2000/Diabetes Quality Improvement Project Analyses Special Project Award from the Health Care Financing Administration (now the Centers for Medicare & Medicaid Services) in February 2001.
The content of this publication does not necessarily reflect the views or policies of the US Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the US government. The authors assume full responsibility for the accuracy and completeness of the ideas presented. This article is a direct result of the Health Care Quality Improvement Program initiated by the Centers for Medicare & Medicaid Services, which has encouraged identification of quality-improvement projects derived from analysis of patterns of care, and therefore required no additional funding on the part of this contractor. The authors welcome ideas and contributions concerning experience in engaging with the issues presented.
Address correspondence to: Alan Silver, MD, MPH, IPRO, 1979 Marcus Ave, Lake Success, NY 11042-1002. E-mail: firstname.lastname@example.org.
1. Hogan P, Dall T, Nikolov P. Economic costs of diabetes in the US in 2002. Diabetes Care. 2003;26:917-932.
2. Jencks SF, Huff ED, Cuerdon T. Change in the quality of care delivered to Medicare beneficiaries, 1998-1999 to 2000-2001. JAMA. 2003;289(3):305-312.
3. Saadine JB, Engelgau MM, Beckels GL, Gregg EW, Thompson TJ, Narayan KV. A diabetes report card for the United States: quality of care in the 1990s. Ann Intern Med. 2002;136:565-574.
4. American Diabetes Association. Diabetes statistics for seniors. Available at: http://www.diabetes.org/utils/printthispage.jsp?PageID=STATISTICS_233185. Accessed April 12, 2004.
5. Henley NS, Pearce J, Phillips LA, Weir S. Replication of clinical innovations in multiple medical practices. Jt Comm J Qual Improv. 1998;24:623-639.
6. Health plans bear down on quality; HEDIS scores improve dramatically. October 10, 2001;10:34-35.
7. National Committee for Quality Assurance. Health Plan Employer Data and Information Set (HEDIS) 2001; vol 2. Washington, DC: National Committee for Quality Assurance; 2000:87-94.
8. Fleming BB, Greenfield S, Engelgau M, et al. The Diabetes Quality Improvement Project: moving science into health policy to gain an edge on the diabetes epidemic. Diabetes Care. 2001;24:1815-1820.
9. Committee on Quality of Health Care in America, Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
10. Rundall TG, Shortell SS, Wang MC, et al. As good as it gets? Chronic care management in nine leading US physician organizations. BMJ. 2002;325(7370):914-915.
11. Heisler M, Wagner E. Improving diabetes treatment quality in managed care organizations: some progress, many challenges. Am J Manag Care. 2004;10(2 pt 2):115-117.
12. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348:2635-2645.
13. Wagner EH, Austin BT, Von Korff M. Organizing care for patients with chronic illness. Milbank Q. 1996;74:511-543.
14. Improving Chronic Illness Care, a national program of The Robert Wood Johnson Foundation. The Assessment of Chronic Illness Care (ACIC) survey, Version 3.0. Available at: http://www.improvingchroniccare.org/tools/acic.html. Accessed September 29, 2003.
15. Bonomi AE, Wagner EH, Glasgow RE, Von Korff M. Assessment of Chronic Illness Care (ACIC): a practical tool to measure quality improvement. Health Serv Res. 2002;37:791-820.
16. Dudley RA, Landon BE, Rubin HR, Keating NL, Medlin CA, Luft HS. Assessing the relationship between quality of care and the characteristics of health care organizations. Med Care Res Rev. 2000;57(suppl 2):116-135.
17. Felt-Lisk S, Kleinman LC. Effective Clinical Practices in Managed Care: Findings From Ten Case Studies. New York, NY: Commonwealth Fund; November 2000. Report to the Commonwealth Fund, No. 427.
18. Renders CM, Valk GD, Griffin S, Wagner EH, Eijk JT, Assendelft WJ. Interventions to improve the management of diabetes mellitus in primary care, outpatient, and community settings. Cochrane Database Syst Rev. 2001;1:CD001481.
19. Casalino L, Gillies RR, Shortell SM, et al. External incentives, information technology, and organized processes to improve health care quality for patients with chronic diseases. JAMA. 2003;289(4):434-441.
20. Krein SL, Hofer TP, Kerr EA, Hayward RA. Whom should we profile? Examining diabetes care practice variation among primary care providers, provider groups, and health care facilities. Health Serv Res. 2002;37:1159-1180.
21. Orav EJ, Wright EA, Palmer RH, Hargraves JL. Issues of variability and bias affecting multi-site measurement of quality of care. Med Care. 1996;34(9 suppl):SS87-101.
22. Sixma HJ, Spreewenberg PM, van der Pasch MA. Patient satisfaction with the individual practitioner: a two-level analysis. Med Care. 1998;36:212-229.
23. Hofer TP, Hayward RA, Greenfield S, Wagner EH, Kaplan SH, Manning WG. The unreliability of individual physician report cards for assessing the costs and quality of care of a chronic disease. JAMA. 1999;281(22):2098-2105.
24. Katon W, Rutter CM, Lin E, et al. Are there detectable differences in quality of care or outcomes of depression across primary care providers? Med Care. 2000;38:552-561.
25. Cabana M, Rand C, Powe N, et al. Why don't physicians follow clinical practice guidelines: a framework for improvement. JAMA. 1999;282:1458-1465.
26. East J, Krishnamurthy P, Freed B, Nosovitski G. Impact of a diabetes electronic management system on patient care in a community clinic. Am J Med Qual. 2003;18(4):150-154.
27. Grant RW, Hamrick HE, Sullivan CM, et al. Impact of population management with direct physician feedback on care of patients with type 2 diabetes. Diabetes Care. 2003;26:2275-2280.
28. Meigs JB, Cagliero E, Dubey A, et al. A controlled trial of web-based diabetes disease management: the MGH diabetes primary care improvement project. Diabetes Care. 2003;26:750-757.
29. Montori VM, Dinneen SF, Gorman CA, et al. The impact of planned care and a diabetes electronic management system on community-based diabetes care: the Mayo Health System Diabetes Translation Project. Diabetes Care. 2002;25:1952-1957.