Quality of Anticoagulation Control Among Patients With Atrial Fibrillation
Published Online: March 23, 2011
Osnat C. Melamed, MD, MSc; Gilad Horowitz, MD; Asher Elhayany, MD; and Shlomo Vinker, MD
Atrial fibrillation (AF), the most common cardiac rhythm disorder, heightens the risk for ischemic stroke 4- to 5-fold.1 The use of oral anticoagulants such as warfarin has been shown in clinical trials to reduce the risk of stroke by 64%; thus, warfarin therapy is widely accepted in patients with AF and is advocated by the American College of Chest Physicians.2,3
In order to achieve maximal protection against stroke and to minimize bleeding complications, warfarin therapy must be tightly controlled and maintained within a narrow therapeutic range of international normalized ratio (INR) values between 2 and 3. This task is by no means trivial, as each INR determination, which requires a venipuncture, must be promptly addressed by the managing physician. Moreover, INR levels are influenced by an array of factors including patient age, comorbidities, concurrent medications, genetic makeup, and diet.4,5 As a result, oral anticoagulant therapy necessitates regular and diligent monitoring, which can be burdensome for patients and physicians alike.
Although not easily achieved, high anticoagulation control, expressed as the time spent within the therapeutic range (TTR), has a paramount effect on patient outcomes, reducing stroke events and mortality rates.6,7 Moreover, it is estimated that optimal anticoagulation could prevent 28,000 cases of stroke in the United States annually, leading to a $2.5 billion cost reduction.8
Even though the literature has acknowledged the superior outcomes of anticoagulation clinics over routine medical care in terms of anticoagulation control, anticoagulation management often falls in the primary care physician’s domain.9,10 Nevertheless, there is a relative paucity of data concerning the quality of anticoagulation achieved in routine medical care, although it is assumed to be the most prevalent form of anticoagulation care in the United States.11 Moreover, studies that addressed anticoagulation care in the community were seldom population based; thus, they had selection bias that limited their generalizability to other populations.12-15 Also, previous studies looking at anticoagulation control in the managed care setting had a heterogeneous patient population (ie, some patients received care in the community, while others were treated in anticoagulation clinics), which interfered with evaluation of the anticoagulation control achieved in routine medical care.16
In this article we describe the quality of anticoagulation control achieved in patients with AF receiving routine medical care within a large managed care organization (MCO) in Israel. The purpose of this study was to assess the quality of anticoagulation control (expressed as TTR) and to explore patient-level factors that may have affected it.
This study was carried out in the central district of Clalit Health Services (CHS), Israel’s largest government-funded MCO. The central district of CHS provides medical care to approximately 500,000 patients residing in central Israel, a largely urban setting. All patients had full medical coverage by CHS inclusive of pharmacy benefits for prescription medication as granted to all Israeli citizens by order of the National Health Insurance Act.
Following approval of the CHS local institutional review board, we conducted a retrospective study from November 1, 2006, to October 31, 2007, using the CHS computerized database to identify all patients with a diagnosis of AF who were treated with warfarin for at least 6 months. Patients were excluded if they fulfilled any 1 of the following criteria: (1) were younger than 18 or older than 85 years; (2) were elderly and lived permanently in a nursing home; (3) had an active malignancy; (4) had prosthetic heart valves; (5) were bedridden; (6) were prescribed antipsychotic medication; or (7) had fewer than 5 INR determinations during the study period. All records retrieved from the database were audited manually by study staff for concordance with the above-mentioned criteria.
A total of 906 patients met the study criteria and were included in the analysis. Each patient was managed by his/her personal physician during the study period. Overall, care was delivered by 124 primary care physicians in CHS community clinics. The computerized database provided demographics (age, sex) and medical diagnoses. In addition, the number and value of INR determinations for each patient were also extracted. Data on physicians’ board certification were retrieved from administrative records.
Anticoagulation control was assessed by measurement of time spent within the TTR (ie, time in which patient INR values were between 2 and 3). The therapeutic range was calculated with computer software that utilized a linear interpolation model, as described by Rosendaal et al.17 First, the TTR was determined for each patient. Later, stratification of patients according to TTR level was carried out as follows: a TTR level <60% was considered to represent poor anticoagulation control, a TTR level between 60% and 75% was considered to represent good anticoagulation control, and a TTR level >75% was considered to represent excellent anticoagulation control. This stratification allowed characterization of patient subsets associated with the different control levels.
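The Rosendaal method described above can be sketched in code. The following is an illustrative implementation, not the software used in the study: it assumes INR changes linearly between consecutive determinations, credits each interval with the fraction of time the interpolated INR lies between 2 and 3, and excludes intervals longer than 30 days (as the study did). A second helper applies the study's stratification cutoffs.

```python
from datetime import date

def ttr_rosendaal(measurements, low=2.0, high=3.0, max_gap_days=30):
    """Estimate time in therapeutic range by Rosendaal linear
    interpolation. `measurements` is a date-sorted list of
    (date, inr) pairs. Returns (days_in_range, days_evaluated);
    TTR is their ratio. Intervals longer than max_gap_days are
    excluded from the evaluable time, as in the study."""
    in_range = 0.0
    evaluated = 0.0
    for (d0, inr0), (d1, inr1) in zip(measurements, measurements[1:]):
        days = (d1 - d0).days
        if days == 0 or days > max_gap_days:
            continue  # same-day repeats and long gaps are not evaluable
        evaluated += days
        lo_inr, hi_inr = sorted((inr0, inr1))
        if hi_inr <= low or lo_inr >= high:
            continue  # whole linear segment lies outside the range
        if lo_inr == hi_inr:
            in_range += days  # flat segment, entirely within range
            continue
        # fraction of the linear segment lying inside [low, high]
        frac = (min(hi_inr, high) - max(lo_inr, low)) / (hi_inr - lo_inr)
        in_range += frac * days
    return in_range, evaluated

def control_level(ttr_percent):
    """Map a TTR (%) to the study's control categories:
    <60% poor, 60%-75% good, >75% excellent."""
    if ttr_percent < 60:
        return "poor"
    if ttr_percent <= 75:
        return "good"
    return "excellent"
```

For example, a patient whose INR rises linearly from 1.5 to 2.5 over 10 days spends half of that interval (5 days) in range, since the segment crosses 2.0 at its midpoint.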
All statistical analyses were performed using SPSS, version 15.0 (SPSS Inc, Chicago, IL). Each potential predictor of poor control was first assessed in univariate models (χ² test for categorical variables and analysis of variance for continuous variables). Significant univariate predictors were subsequently assessed in a multivariate logistic regression model to determine their independent effect, expressed as odds ratio (OR) and 95% confidence interval (CI). P <.05 was considered significant.
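As a brief illustration of the OR and CI statistics reported later, the following sketch computes an unadjusted odds ratio and its 95% confidence interval from a 2×2 table using the standard Woolf (log-odds) method; the cell counts here are hypothetical, and the study's multivariate model would additionally adjust each estimate for the other covariates.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with Woolf-method confidence interval.
    2x2 table cells: a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without.
    z = 1.96 yields a 95% CI."""
    or_ = (a * d) / (b * c)
    # standard error of ln(OR) for a 2x2 table
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

An OR whose confidence interval excludes 1 would be considered a significant predictor at the .05 level.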
A total of 906 patients with AF who were treated with warfarin for at least 6 months were identified through the computerized database. Table 1 presents patient demographics and clinical characteristics. The mean age was 71.7 years; 51.9% were female and more than 90% of patients had at least 1 risk factor for ischemic stroke (age >75 years, diabetes mellitus, hypertension, heart failure, or prior stroke). Patients were receiving routine medical care delivered mainly by non–board-certified physicians and by board-certified family physicians (48.6% and 37.1%, respectively).
Patients had 769 patient-years of follow-up (mean 310.6 days per patient), during which 14,935 INR determinations were performed. Due to the interpolation method, 137 patient-years could not be evaluated for TTR because INR determinations were performed more than 30 days apart.17 Patients had a mean of 16.5 INR determinations during the study period (range 5-75) and spent 48.6% of the time within the therapeutic range of 2 to 3, 32% of the time under the therapeutic range, and 19.3% of the time above the therapeutic range (Table 2).
When patients were stratified according to anticoagulation control levels (TTR <60%, TTR 60%-75%, TTR >75%), more than two-thirds of them had poor anticoagulation control (Table 3). Only 11.9% had excellent anticoagulation control, and 20.6% had good anticoagulation control. Compared with the group that had poor anticoagulation control, the group that had excellent anticoagulation control had younger patients and fewer females (P = .006 and P = .02, respectively). Additionally, poor anticoagulation control was associated with more frequent INR testing than excellent control. It was also noticeable that the excellent-control group was less burdened by the comorbidities of diabetes, heart failure, and stroke (P = .003, P <.001, and P = .001, respectively).
Patients with poor anticoagulation control were seen more often by non–board-certified physicians than patients with excellent anticoagulation control (53% poor control vs 40% excellent control, P = .018). An opposite trend appeared among board-certified family physicians, but it did not reach significance (36% poor control vs 45% excellent control, P = .096) (Table 3).
In order to evaluate the independent effect of each variable as a predictor of poor anticoagulation control, we performed a multivariate logistic regression (Table 4). We identified 2 significant predictors of poor anticoagulation control: having a non–board-certified physician and heart failure (OR = 1.41; 95% CI, 1.05-1.88; and OR = 1.63; 95% CI, 1.20-2.22, respectively).
In our study, patients with AF receiving routine medical care within a large MCO had suboptimal anticoagulation control, with a mean TTR of 48.6%. Additionally, poor anticoagulation control was associated with comorbidities and with having a non–board-certified physician. Since a close correlation between anticoagulation control and clinical outcomes (ie, stroke, bleeding events) exists, suboptimal control has profound medical and economic implications.6,7