
Systematic Reviews for Evidence-based Management: How to Find Them and What to Do With Them

The American Journal of Managed Care, November 2004 (Part 1), Volume 10, Issue 11 Pt 1

Objective: To identify strategies for retrieval and evaluation of systematic reviews from a management perspective.

Study Design: Review of available literature and resources on systematic reviews.

Methods: From published literature on evidence-based medicine and systematic review, we identified resources and adapted retrieval and evaluation strategies for healthcare managers. A published systematic review then was assessed for quality and relevance to management decisions.

Results: Systematic reviews relevant to the organization and delivery of care are available. Criteria for evaluating the relevance and quality of systematic reviews on clinical topics may be adapted for systematic reviews on organizational topics. However, even a systematic review that focuses on an organizational topic can lack important information on costs and study setting.

Conclusions: Greater familiarity with the retrieval and evaluation of systematic reviews can help managers use these sources effectively and encourage the development of evidence-based management.

(Am J Manag Care. 2004;10:806-812)

Traditionally, evidence-based medicine has been promoted for clinical decision making.1-7 Evidence-based healthcare, or the use of evidence-based medicine strategies for management or policy decisions, also has gained acceptance, particularly in countries with publicly financed health insurance systems.8,9 Systematic review—the systematic retrieval and summarization of research evidence—is an important part of evidence-based healthcare and can be useful when one is faced with difficult management decisions. However, articles that educate readers about evidence-based medicine are primarily written from the clinical perspective. In this paper, we aim to help healthcare managers become familiar with systematic reviews by offering potential resources and retrieval and evaluation strategies for these types of studies. In addition, we will provide an example of a systematic review that can inform the type of decision that managers at healthcare organizations need to make. Although this paper is oriented toward healthcare managers, our suggestions apply equally well to policy makers.

DIFFERENT TYPES OF EVIDENCE

The available research on which to base decisions comprises different types of evidence, from stand-alone randomized clinical trials to cohort and case-control studies and case reports, which vary in the degree of scientific support they provide. Ranking these different types of evidence by their level of scientific support is a difficult task. However, the rankings are generally guided by 3 factors: quality, quantity, and consistency.10 Quality refers to the extent to which individual studies collectively minimized bias. Quantity refers to numbers of studies, sample size, and magnitude of effect. Consistency refers to whether findings are similar under different study conditions, including different samples or study designs.

Classification efforts generally rank well-conducted systematic reviews of randomized, controlled trials with consistent intervention effects as offering the highest level of scientific evidence. (When results from different studies are statistically combined into an overall estimate, the review often is called a meta-analysis.) This is followed, in descending order of evidence level, by a single randomized clinical trial, concurrent (prospective) cohort studies, historic (retrospective) cohort studies, case-control studies, and case series. A systematic review is ranked above a single study because the quantity of studies included permits assessment of the consistency of effects under varying study conditions.

Although systematic reviews are ranked as having the highest level of evidence, the strength of the evidence actually offered by a systematic review depends on how well the review is conducted. Therefore, it is important that readers be skilled in identifying high-quality systematic reviews by understanding the elements that contribute to a well-conducted systematic review. The following sections provide guidance on locating and evaluating these reviews to aid in the appropriate incorporation of results in management decisions (summarized in the Appendix).

FINDING A SYSTEMATIC REVIEW

Locating a relevant systematic review for a particular question requires a comprehensive subject search strategy, as well as a strategy for identifying these types of reviews. The subject search should derive from a carefully formulated question to be addressed through the literature. Terms used may include 1 or more of the following categories: those related to a specific population or setting, such as hospitals or health maintenance organizations; those related to the condition of interest, such as diabetes or depression; those related to the intervention or exposure of interest, such as quality improvement programs; and those related to specific outcomes of interest, such as patient satisfaction or utilization rates.11
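To make the strategy concrete, the sketch below shows one way to assemble the four term categories into a single boolean query string. It is a minimal illustration in Python; the helper function and the search terms are our own invention, not part of the original article.

```python
# A minimal sketch: join synonyms within a category with OR, then
# combine the categories with AND. Terms here are illustrative only.

def build_query(setting, condition, intervention, outcomes):
    """Combine four lists of search terms into one boolean query."""
    categories = [setting, condition, intervention, outcomes]
    clauses = ["(" + " OR ".join(terms) + ")" for terms in categories if terms]
    return " AND ".join(clauses)

query = build_query(
    setting=["hospitals", "health maintenance organizations"],
    condition=["diabetes"],
    intervention=["quality improvement"],
    outcomes=["patient satisfaction", "utilization"],
)
print(query)
# (hospitals OR health maintenance organizations) AND (diabetes)
# AND (quality improvement) AND (patient satisfaction OR utilization)
```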

Once the subject search strategy has been determined, several resources are useful for finding original systematic reviews. The Cochrane Library (accessible at http://www.cochrane.org) maintains a searchable database of systematic reviews. It also maintains a registry of protocols, stating background, objectives, and methods, for reviews that are currently being prepared. Managers may be particularly interested in reviews from the Effective Practice and Organisation of Care Group within the Cochrane Library (http://www.epoc.uottawa.ca), which include systematic reviews on topics such as the effects of audit and feedback, the effects of payment systems on primary care physician behaviors, and interventions to promote collaboration between nurses and doctors. The Campbell Collaboration (accessible at http://www.campbellcollaboration.org) maintains study registries, similar to those in the Cochrane Collaboration, of social science research, including systematic reviews, which may be helpful to healthcare managers.

Another helpful resource is the Database of Abstracts of Reviews of Effects (DARE). The National Health Service Centre for Reviews and Dissemination at the University of York, England, maintains a searchable database that contains structured abstracts for published systematic reviews that have been judged to be of good quality. DARE is accessible at http://www.york.ac.uk/inst/crd and through the Cochrane Library.

The journal Evidence-based Healthcare & Public Health (formerly Evidence-based Healthcare, accessible at http://www.harcourt-international.com/journals/ebhc/) provides health managers and policy makers with structured abstracts and expert commentary on financing, organization, and management issues. Structured abstracts on topics relevant to healthcare managers also may be found at Evidence-Based Medicine (http://ebm.bmjjournals.com/) and the ACP [American College of Physicians] Journal Club (http://www.acpjc.org/).

Finally, systematic reviews may be searched in bibliographic databases such as MEDLINE or PubMed.12 To start, one can use "Clinical Queries" through PubMed at the National Library of Medicine's Web site to search quickly and easily on a particular subject topic, and limit retrieved results via a "filter" to systematic reviews, meta-analyses, reviews of clinical trials, evidence-based medicine, consensus development conferences, and/or guidelines. If this approach yields too many or too few references for systematic reviews, one can directly search the PubMed database by limiting search results to publication type "review" or "meta-analysis," using the "Limits" function. However, limiting to "meta-analysis" may miss systematic reviews, and limiting to "review" could retrieve many references that are not systematic reviews. Hunt and McKibbon13 offer useful tips for locating systematic reviews in electronic databases using Medical Subject Heading (MeSH) and publication-type search terms, and the interested reader can learn about sensitive search strategies and tips on finding systematic reviews from other sources.14-17
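For readers who prefer to script such searches, the hedged sketch below queries PubMed through the National Library of Medicine's E-utilities esearch endpoint, combining a subject query with a publication-type limit. The subject terms are hypothetical, and parameter details should be verified against current NCBI E-utilities documentation before relying on them.

```python
# A sketch, assuming the public NCBI E-utilities esearch endpoint:
# combine a subject query with a "meta-analysis" publication-type limit.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = {
    "db": "pubmed",
    # Illustrative subject terms plus a publication-type filter.
    "term": '(quality improvement AND diabetes) AND "meta-analysis"[pt]',
    "retmode": "json",
    "retmax": 20,
}

response = requests.get(ESEARCH, params=params, timeout=30)
ids = response.json()["esearchresult"]["idlist"]
print(f"Retrieved {len(ids)} PubMed IDs:", ids)
```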

EVALUATING A SYSTEMATIC REVIEW FOR RELEVANCE AND QUALITY

Relevance

A systematic review that focuses on specific management questions is more likely to produce actionable results for managers than one asking vague or overly broad questions. Therefore, if the question addressed is not clear from the title or abstract, it is probably best to consider another review.2 Ideally, the systematic review should demonstrate relevance by (1) focusing on the population served or targeted for intervention by the organization and (2) addressing questions about interventions under consideration and outcomes of importance to the organization, such as cost.

Subtle differences in review focus may affect relevance. Specifically, the population of interest and the definitions of the intervention or the clinical condition under study may vary in different reviews. Some systematic reviews may focus on younger subjects and include only studies that evaluated widely used therapeutic agents, whereas others with the same study question may include both young and old patients and evaluate a wide range of therapeutic agents. Furthermore, systematic reviews of the same clinical condition may define the condition or outcomes in slightly different ways and thereby vary in their inclusion of primary studies. Therefore, it is important to examine closely the criteria used to select studies and how different systematic reviews define clinical groups in their analyses. Finally, study settings also should be examined for relevance, as local or institutional factors often can influence outcomes of organizational interventions.

Quality Considerations

Comprehensiveness of the Search for Primary Studies.

To provide valid answers, systematic reviews ideally should include all available information on the question(s) of interest, including data from published and unpublished studies, abstracts from conferences, and ongoing studies. However, finding the universe of studies on a given topic is a time- and resource-intensive task, and most meta-analysts aim to identify the most studies possible given available resources, with the goal of identifying studies that are representative of all studies on the topic of interest.18 Meta-analysts can rely on 3 main sources when locating studies for analysis: electronic databases; hand searches of materials such as reference lists of identified studies, abstracts of conference proceedings, and relevant journals; and referrals from expert researchers or funding organizations.5 Because of coding and indexing issues, any single database search is unlikely to retrieve all the studies of interest19; therefore, searches through multiple databases, hand searches, and contact with content experts often are needed to enhance coverage and help identify unpublished materials. At a minimum, the search for primary studies should include:

  • Multiple electronic databases.
  • Hand searches through the reference lists of eligible studies.
  • A description of the authors' attempts to include "gray" literature (eg, unpublished studies, abstracts from conference proceedings) by contacting experts and/or funding organizations, and by searching through relevant databases of research in progress.

Methodological Quality of Individual Studies.

Because systematic reviews analyze results from previous studies, the methodological quality of the included studies will affect the conclusions drawn in the systematic review. Therefore, readers always should determine whether and how the author of the systematic review assessed quality. Ideally, the assessment criteria used should be described in detail and uniformly applied across all studies. The design of each study and the effectiveness of its implementation also should be well described. Quality items peculiar to the clinical condition being studied, such as how individual studies define their clinical condition of interest, may be relevant. Finally, a good systematic review assesses both the quality of the individual studies and the effect of that quality on the meta-analytic conclusions.

Randomized, controlled trials generally are evaluated on the basis of randomization, blinding, and attrition, although consensus does not currently exist regarding what dimensions of quality are important and how they relate to study results. A widely used quality measure, the Jadad scale,20 rates trial quality on a scale of 0 to 5 based on answers to 3 questions: Was the study randomized? Was the study described as double blind? Was there a description of withdrawals and dropouts? One point is awarded for each "yes" answer, and no points are given for a "no" answer. An additional point is given if the randomization method was described and was appropriate, but a point is deducted if the method is described but is not appropriate. Similarly, a point is awarded if the method of blinding is appropriate and described, and a point is deducted if the described method is inappropriate. It is important to note that many types of management interventions, such as patient reminders or changes in the organization of care, are not amenable to double-blinding, thus limiting the usefulness of the Jadad scale for these types of studies. Concealment of allocation (ie, concealment of assignment up to the point of intervention allocation), an element not included in the scale, has been shown to be important and also should be included in the quality assessment of randomized trials. Quality assessment for nonrandomized studies such as cohort and case-control studies is less well developed, but quality criteria for nonrandomized studies also have been described.21,22
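The Jadad scoring rules just described are simple enough to express directly. The sketch below is a minimal transcription of those rules into Python; the function and argument names are our own, for illustration only.

```python
# A minimal sketch of the 0-5 Jadad scoring rules described above.

def jadad_score(randomized, double_blind, withdrawals_described,
                randomization_method=None, blinding_method=None):
    """Score a trial report on the Jadad scale.

    randomization_method / blinding_method: None if the method was not
    described, otherwise "appropriate" or "inappropriate".
    """
    # One point for each "yes" to the 3 base questions.
    score = int(randomized) + int(double_blind) + int(withdrawals_described)
    # Add a point for a described, appropriate method; deduct a point
    # for a described, inappropriate one.
    for method in (randomization_method, blinding_method):
        if method == "appropriate":
            score += 1
        elif method == "inappropriate":
            score -= 1
    return max(score, 0)  # the scale is bounded below by 0

# Example: randomized, double-blind trial with an appropriately described
# randomization method and reported dropouts scores 4 of 5.
print(jadad_score(True, True, True, randomization_method="appropriate"))
```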

Despite the limitations of the Jadad scale, the reader should look for its use in quality assessments of studies involving randomized clinical trials of pharmaceutical products. For other studies, a systematic review should at minimum present a table that describes the quality of the included papers and clearly describe the criteria used to assess study quality in the review.

Quality assessment depends on the information provided in a study's publication. Thus, although including studies found in the gray literature (eg, those reported only in abstract form) is important to eliminate bias, such studies often are at a disadvantage in terms of quality assessment. In fact, it may be best to consider quality assessment as quality of reporting rather than quality of study, unless the authors of the systematic review directly contacted the authors of all the studies included in the review, a costly and often ineffective process.

Heterogeneity of Individual Study Results.

After individual studies are assessed for quality, authors of a systematic review often organize the studies into groups that are sufficiently similar that combining them makes sense.23 The reader should look for groupings that are determined by theoretical, clinical, or epidemiologic understanding. Groupings that address administrative or operational priorities can be especially helpful for decision making. For example, by comparing results from a group of studies on patient reminders with a group of studies on provider feedback, a manager can determine whether patient reminders are more or less effective than provider feedback. However, the formation of these groups can be influenced by the availability of studies.

From a statistical perspective, heterogeneity is said to exist among the studies if the differences in the study results are larger than would be expected because of chance (sampling error) alone. Suppose the reader is interested in the effectiveness of quality improvement (QI) programs for increasing the use of preventive services. The reader should expect some variation in the effect of QI programs simply because of the different samples included in each study. However, if more than the expected sampling variation is observed, the reader might hypothesize that the studies differ in important ways, such as the study setting or type of QI initiatives undertaken, and perhaps should not be statistically combined (ie, pooled). If enough studies are available, studies could be split into more comparable groups. Systematic reviews often include results from a statistical test that assesses the heterogeneity of the study results. However, the reader should look to see that experts in the field have examined study heterogeneity from a theoretical perspective, ideally prior to analysis. These experts perform an implicit face validity check, from clinical and content perspectives, of how similar or disparate the studies are.
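For readers who want to see the arithmetic behind such a test, the sketch below computes Cochran's Q (a statistic commonly used for this purpose, though the article does not name a specific test) and the derived I-squared share of between-study variation. The effect estimates and variances are invented for illustration.

```python
# A minimal sketch of standard heterogeneity statistics for a set of
# study-level effect estimates. All numbers below are made up.

effects   = [0.30, 0.25, 0.60, 0.10]   # study-level effect estimates
variances = [0.02, 0.03, 0.04, 0.05]   # their sampling variances

weights = [1.0 / v for v in variances]           # inverse-variance weights
pooled  = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations around the pooled estimate.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1

# I^2: the share of total variation attributable to between-study
# differences rather than sampling error (floored at 0).
i_squared = max(0.0, (q - df) / q) if q > 0 else 0.0

print(f"Q = {q:.2f} on {df} df, I^2 = {100 * i_squared:.0f}%")
```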

Aside from having theoretically sensible and comparable groups, systematic reviews should present a summary of the study results in a meaningful manner. This summary may be solely narrative if differences among studies were too large to permit pooling results in a meta-analysis. When it is appropriate to perform a meta-analysis, the meta-analyst generally chooses a common statistical measure to summarize the study-level results.2 In rare cases, the meta-analyst will pool patient-level data if they are available.

Various meta-analytic methods exist that use different weighting schemes to pool summary statistics across studies. A complete discussion of these methods is beyond the scope of this paper, but the basic choice of weighting method is between a fixed-effects and a random-effects model.24 In general, a random-effects model is preferred over a fixed-effects model if there are differences between studies (eg, patient population, study design) that go beyond the fact that different samples are used. The random-effects model generally provides a more conservative pooled estimate, which may more readily generalize to other situations because study differences such as varying patient populations are accounted for. However, if differences between studies can be entirely attributed to sampling, a fixed-effects model is adequate. The reader should ensure that the authors of the meta-analysis have considered study heterogeneity and have not pooled studies that are not clinically comparable.
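To make the fixed- versus random-effects distinction concrete, the sketch below pools the same invented study results both ways. The article does not prescribe a particular estimator, so the classic DerSimonian-Laird estimate of the between-study variance is our own choice for the random-effects weights.

```python
# A sketch contrasting fixed-effects pooling with DerSimonian-Laird
# random-effects pooling. Effects and variances are illustrative only.
import math

effects   = [0.30, 0.25, 0.60, 0.10]
variances = [0.02, 0.03, 0.04, 0.05]

w = [1.0 / v for v in variances]

# Fixed-effects: weight each study only by its sampling precision.
fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)

# Random-effects: estimate the between-study variance tau^2 from
# Cochran's Q, then fold it into each study's weight.
q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
df = len(effects) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

w_star = [1.0 / (v + tau2) for v in variances]
random_eff = sum(wi * ei for wi, ei in zip(w_star, effects)) / sum(w_star)
se_random = math.sqrt(1.0 / sum(w_star))

print(f"fixed = {fixed:.3f}, random = {random_eff:.3f} (SE {se_random:.3f})")
```

Note how the random-effects weights shrink toward equality as tau^2 grows, which widens the confidence interval and yields the more conservative estimate described above.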

Appropriateness of the Interpretation of Summary Results.

After obtaining the summary result, authors of a systematic review normally interpret what the result says about the appropriate way to manage a certain type of problem. As overly enthusiastic interpretations are common, readers should ask themselves the following questions when evaluating the acceptability of the authors' interpretations:

  • How good is the quality of the studies that contribute to the summary result? If all of the studies are mediocre at best, then any summary result should be interpreted with caution.
  • How robust is the summary result? At each step in the process, the authors of a systematic review make assumptions about which studies to include and how to summarize them. Therefore, it is important that they determine whether results change when key assumptions are changed, by repeating analyses under different assumptions (ie, sensitivity analysis; a minimal leave-one-out example follows this list). If results are consistent under different assumptions, the authors can make a stronger statement about their findings. However, results that change under different assumptions indicate a need for the authors to temper interpretation of their results.
  • Can results obtained by summarizing data from experimental studies be extended to real-world situations? If the intervention was mainly tested within staff-model health maintenance organizations, would the intervention work in a preferred provider organization? Are data from middle-aged adults equally applicable to older and younger patients? What about women, or patients with certain common comorbidities such as hypertension or diabetes? Summary findings from a systematic review that includes studies of a variety of patient populations may be applicable to more real-world situations than summary findings from a systematic review that includes studies of a single patient population. More generally, a good systematic review can be recognized by a thorough discussion of the strengths and limitations of applying the summary result to real-world situations.
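Following up on the robustness question above, here is a minimal leave-one-out sensitivity check on invented data: the summary estimate is recomputed with each study omitted, to see whether any single study drives the result. This is only one of many forms a sensitivity analysis can take.

```python
# A minimal leave-one-out sensitivity check. Effects and variances
# are illustrative, not taken from any real review.

effects   = [0.30, 0.25, 0.60, 0.10]
variances = [0.02, 0.03, 0.04, 0.05]

def pooled(e, v):
    """Fixed-effects (inverse-variance) pooled estimate."""
    w = [1.0 / vi for vi in v]
    return sum(wi * ei for wi, ei in zip(w, e)) / sum(w)

print(f"all studies: {pooled(effects, variances):.3f}")
for i in range(len(effects)):
    e = effects[:i] + effects[i + 1:]
    v = variances[:i] + variances[i + 1:]
    print(f"omitting study {i + 1}: {pooled(e, v):.3f}")
```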

USING META-ANALYTIC RESULTS IN MANAGERIAL DECISION MAKING

Management decisions differ from clinical ones in several ways. Managers are often asked to intervene at an organizational or systemic level and to implement solutions that affect numerous people. Furthermore, the prevalence of a condition often is an important consideration in management decisions. Consequently, managers often need different types of evidence, and use that evidence in different ways, than clinicians do. We will apply the relevance and quality criteria specified in previous sections (summarized in the Appendix) to assess a meta-analysis by Stone et al.25 In addition, we will identify specific features of this meta-analysis that are particularly useful and areas where additional information would be helpful for organizational purposes.

The authors evaluated the relative effectiveness of intervention components such as reminders, provider feedback, patient or physician education, financial incentives, regulatory or legislative actions, organizational change, and mass media campaigns for increasing the utilization rates for immunization and cancer screening services among adults.25 What is particularly relevant for managerial decision making is the authors' focus on organization-level interventions and population-level outcomes. In addition, by presenting estimates of the effects of the various components, the authors enable managers to rank these components in terms of effectiveness for improving the utilization rates of these preventive services among their adult members.

When determining relevance, the reader must consider the limits imposed by his or her practice environment. Although the authors found that organizational change and patient financial incentives (eg, reducing or eliminating copayments) are highly effective, these are not always pragmatic strategies for enhancing preventive-service use in solo or small group practices with limited resources. Although patient reminders and patient education were ranked lower in effectiveness, they consistently improved care and are more reasonable strategies for small group practices to implement. In contrast, managers of a health maintenance organization usually have more staff and financial resources available and can consider a fuller range of options, such as implementing organizational changes and financial incentives. In addition, the reader should see whether analyses are conducted specifically for the practice environment (eg, provider group size, ambulatory or inpatient care, capitated or mixed reimbursement systems, rural or urban setting) and patient population (eg, lower income persons, women, children) that approximate the setting and patients served at the manager's practice or organization. Results from these analyses will be most relevant and useful for decision making.

In terms of quality considerations, the authors were comprehensive in their search for primary articles, using multiple electronic databases as well as hand searches of reference lists from relevant articles. Non-English studies were eligible for analysis, and a database of projects conducted by the Medicare Peer Review Organizations was used to locate unpublished studies. The authors also searched a lengthy period (between 1966 and February 1999). These procedures combine to increase the likelihood that relevant studies have been identified.

The authors used specific criteria (eg, allocation concealment, blinding, and withdrawal or dropout rates) to assess the quality of studies included in the meta-analysis. Studies of different designs were analyzed separately to evaluate the consistency of their summary results. The authors tested for heterogeneity statistically and used a statistical model to adjust for differences across studies. The authors also tested the robustness of results by repeating analyses with different statistical approaches, in particular to adjust for the correlation in cluster-randomized trials (ie, trials that randomize by groups), which are more common in this literature than person-level randomized trials. They raised the issue of trial quality as a limitation in their discussion and adequately discussed the strengths and weaknesses of their analysis, such as the possibility that not all studies on mammography were identified. Based on the quality criteria described in previous sections, this meta-analysis appears to be of high quality.

The authors found that organizational change was most effective in improving flu and pneumonia immunization rates. Provider reminders, patient financial incentives, provider education, patient reminders, patient education, provider financial incentives, and provider feedback, in decreasing order of effectiveness, also increased adult immunization rates. Furthermore, the authors found that adding an effective intervention to an existing intervention would enhance overall effectiveness. Although these findings can aid decision making, they must be interpreted and adapted to the particular situation in the manager's organization. For example, although organizational changes were found to be most effective, such a strategy would likely require a substantial investment in time and resources, and improvements might not be immediate. If a manager is faced with the need to quickly increase preventive services use, it may be more prudent to implement a system of provider reminders.

A limitation of the study by Stone et al for informing managerial decisions, as noted by its authors, is its lack of cost-effectiveness analyses. The small number of studies that report sufficient data on intervention costs and benefits contributes to this problem. Furthermore, Stone et al did not take study setting into consideration. As local conditions can affect the outcome of organizational interventions, additional analyses of study settings may be helpful for managers to gauge the potential success of different strategies in their organizations.

SUMMARY

Scientific evidence can play an important role in the decision-making process of healthcare managers. As more high-quality syntheses of information relevant to the organization and delivery of care become available each month, greater familiarity with the retrieval and evaluation of systematic reviews can help managers use these sources effectively and encourage the development of evidence-based healthcare.

Acknowledgments

The authors thank Robert Brook, MD, ScD, for initiating this project. Robin P. Hertz, PhD, senior director of outcomes research/population studies at Pfizer Inc, provided valuable support.

From the Department of Health Policy and Management, Johns Hopkins Bloomberg School of Public Health, Baltimore, Md (KSC); the RAND Corporation Health Program, Santa Monica, Calif (SCM, PGS); the Southern California Evidence-Based Practice Center, Santa Monica, Calif (SCM, PGS); and the Greater Los Angeles Veterans Administration Health Care System, Los Angeles, Calif (PGS).

This study was supported by a grant to RAND from Pfizer Inc.

This manuscript is a more concise and improved version of a report prepared for Pfizer entitled "A Practical Guide to Finding and Evaluating Meta-analyses for Healthcare Managers."

Address correspondence to: Kitty S. Chan, PhD, 624 N. Broadway, Rm 644, Baltimore, MD 21205. E-mail: kchan@jhsph.edu.

REFERENCES

1. Guyatt GH, Haynes RB, Jaeschke RZ, et al. Users' guides to the medical literature, XXV: evidence-based medicine: principles for applying the users' guides to patient care. JAMA. 2000;284:1290-1296.
2. Oxman AD, Cook DJ, Guyatt GH. Users' guides to the medical literature, VI: how to use an overview. JAMA. 1994;272:1367-1371.
3. Cook DJ, Guyatt GH, Ryan G, et al. Should unpublished data be included in meta-analyses? Current convictions and controversies. JAMA. 1993;269:2749-2753.
4. Oxman AD, Guyatt GH. Guidelines for reading literature reviews. Can Med Assoc J. 1988;138:697-703.
5. Jadad AR, Moher D, Klassen TP. Guides for reading and interpreting systematic reviews, II: how did the authors find the studies and assess their quality? Arch Pediatr Adolesc Med. 1998;152:812-817.
6. Klassen TP, Jadad AR, Moher D. Guides for reading and interpreting systematic reviews, I: getting started. Arch Pediatr Adolesc Med. 1998;152:700-704.
7. Moher D, Jadad AR, Klassen TP. Guides for reading and interpreting systematic reviews, III: how did the authors synthesize the data and make their conclusions? Arch Pediatr Adolesc Med. 1998;152:915-920.
8. Walshe K, Rundall TG. Evidence-based management: from theory to practice in health care. Milbank Q. 2001;79:429-457.
9. Oliver A, Mossialos E, Robinson R. Health technology assessment and its influence on health care priority setting. Int J Technol Assess Health Care. 2004;20:1-10.
10. Lohr KN. Rating the strength of scientific evidence: relevance for quality improvement programs. Int J Qual Health Care. 2004;16:9-18.
11. Cook DJ, Mulrow CD, Haynes RB. Systematic reviews: synthesis of best evidence for clinical decisions. Ann Intern Med. 1997;126:376-380.
12. National Center for Biotechnology Information. PubMed overview. Available at: http://www.ncbi.nlm.nih.gov/entrez/query/static/overview.html. Accessed August 27, 2002.
13. Hunt DL, McKibbon KA. Locating and appraising systematic reviews. Ann Intern Med. 1997;126:532-538.
14. Haynes RB, Wilczynski N, McKibbon KA, Walker CJ, Sinclair JC. Developing optimal search strategies for detecting clinically sound studies in MEDLINE. J Am Med Inform Assoc. 1994;1:447-458.
15. Shojania KG, Bero LA. Taking advantage of the explosion of systematic reviews: an efficient MEDLINE search strategy. Eff Clin Pract. 2001;4:157-162.
16. Boynton J, Glanville J, McDaid D, Lefebvre C. Identifying systematic reviews in MEDLINE: developing an objective approach to search strategy design. J Inf Sci. 1998;24:137-157.
17. White VJ, Glanville JM, Lefebvre C, Sheldon TA. A statistical approach to designing search filters to find systematic reviews: objectivity enhances accuracy. J Inf Sci. 2001;27:357-370.
18. Jadad AR, McQuay HJ. Searching the literature. Be systematic in your searching [letter]. BMJ. 1993;307:66.
19. Dickersin K, Scherer R, Lefebvre C. Identifying relevant studies for systematic reviews. BMJ. 1994;309:1286-1291.
20. Jadad AR, Moore RA, Carroll D, et al. Assessing the quality of reports of randomized clinical trials: is blinding necessary? Control Clin Trials. 1996;17:1-12.
21. Downs SH, Black N. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. J Epidemiol Community Health. 1998;52:377-384.
22. Irwig L, Tosteson AN, Gatsonis C, et al. Guidelines for meta-analyses evaluating diagnostic tests. Ann Intern Med. 1994;120:667-676.
23. Shekelle PG, Morton SC. Principles of meta-analysis. J Rheumatol. 2000;27:251-253.
24. Lau J, Ioannidis JP, Schmid CH. Quantitative synthesis in systematic reviews. Ann Intern Med. 1997;127:820-826.
25. Stone EG, Morton SC, Hulscher ME, et al. Interventions that increase use of adult immunization and cancer screening services: a meta-analysis. Ann Intern Med. 2002;136:641-651.
