Quality in Primary Care

Research Article - (2016) Volume 24, Issue 4

Improving the Quality of Care in Small Internal Medicine Practices: An Evaluation

Jill Anne Marsteller

Johns Hopkins University Bloomberg School of Public Health, Baltimore, USA

Chun-Ju Hsiao

Agency for Healthcare Research and Quality, Rockville, MD

Simon C Mathews

Division of Gastroenterology, Johns Hopkins University School of Medicine, Baltimore, MD 21287

William S Underwood

PeaceHealth, Bellingham, WA

Paula M Woodward

Mid-State Health Center, Plymouth, NH

Michael S Barr

National Committee for Quality Assurance, Washington, DC

*Corresponding Author:
Jill Anne Marsteller
PhD, MPP, Johns Hopkins University
Bloomberg School of Public Health
Baltimore, MD 21205, United States
Tel: 301-807-8634
E-mail: jmarste2@jhu.edu

Submitted date: August 15, 2015; Accepted date: August 23, 2016; Published date: August 30, 2016

 


Abstract

Small practices face unique challenges in improving quality of care. We conducted a pre-post evaluation of a quality improvement intervention provided by the Center for Practice Innovation (CPI) to 34 small internal medicine practices, featuring two site visits, a practice assessment, self-selection of focus areas for improvement and ongoing 'directed guidance' of the practices. In bivariate analyses, the intervention was associated with statistically significant improvement in the percentage of patients with: controlled blood pressure among diabetic patients (68% vs. 77%); assessment of fall risk (78% vs. 93%); asthma patients on an inhaled corticosteroid (91% vs. 100%); flu vaccination (86% vs. 97%); and pneumococcal vaccination (83% vs. 99%). Statistically significant improvements were also noted in selected practice processes and patient satisfaction measures. However, clinician and staff assessments showed some negative changes. Quality improvement initiatives focused on small practices can improve clinical and patient satisfaction measures but may pose risks to clinician and staff satisfaction.

Keywords

Small practices; Quality improvement; Practice facilitation

Introduction

While progress has been made since the Institute of Medicine's landmark publication, Crossing the Quality Chasm,1 substantial shortfalls remain2 in the quality of care. About 70% of all physician office visits in the United States take place in practices with five or fewer physicians,3 and large-scale quality improvement (QI) initiatives often do not include (or perhaps even apply to) these small practices. Unfortunately, small practices often lack resources to make improvements4-6 and encounter numerous barriers such as time constraints and lack of financial incentives.7,8

Best practices for QI implementation are not well developed, and an evidence-based approach is still not widely used.9 Non-traditional models for advancing quality, such as pay-for-performance, offer some promising alternatives.10 However, adapting these types of programs to smaller practices (which lack infrastructure, health information technology and support staff, and have only a small number of patients from any given private payer5) is not always straightforward. A national survey showed that fewer than 20% of small and medium-sized practices made use of established QI techniques.11 Little is known about successful strategies for QI implementation in small practices. Studies have found that training with specific goals to help meet external review requirements, as well as team-based care approaches, are critical methods for implementing QI activities in small practices.7,12

In the United States, there have been only a few QI programs that specifically focus on small practices, such as the strategies identified by the Center for Health Care Strategies to improve care in small primary care practices serving high volumes of Medicaid beneficiaries13 and the American College of Physicians' (ACP) Quality Connect, which offers free QI programs on specific clinical conditions.14 The Agency for Healthcare Research and Quality has also described approaches and supports that an external infrastructure could provide to build improvement capacity in practices.15,16

To address the need for QI in small practices, the ACP’s Center for Practice Innovation (CPI, later part of the Center for Practice Improvement and Innovation) undertook, in 2006, a practice improvement project specifically designed for small practices, an effort that is still relevant and largely unduplicated in today’s health care arena. This paper assesses the impact of this intervention on quality of care, clinician and staff assessments of the practice and patient satisfaction.

Methods

The intervention and study participants

With funding support from The Physicians Foundation, the CPI was created to assist small primary care practices in improving quality and efficiency. The CPI intervention focused on intensive, customized support, perhaps a key differentiating factor from other initiatives. A QI team from CPI collected data in 2006 and 2007 in volunteer practices of one to six physicians. Thirty-four small internal medicine practices were invited from a field of 99 complete applications and agreed to participate in a two-year pilot of practice management (PM) and QI activities tailored to the small primary care setting. Invited practices were selected based on: 1) practice size (to include representation of solo practices and those with up to six clinicians); 2) diversity in patient factors such as ethnicity and disease conditions; 3) apparent dedication to making practice improvements (based on the application essay); and 4) geographic location, where clusters were identified among applicants to minimize travel. Practice location varied across suburban, urban and rural areas.

The CPI intervention involved two site visits, a three-hour assessment of the practice by the CPI team, feedback to the practice and ongoing support of the practices in their efforts to improve self-selected clinical, operational and financial foci. The first round of site visits was conducted between late May and late September 2006 and the follow-up round was conducted between April and July 2007. Two CPI staff dedicated about two hours daily to helping practices find existing tools, sometimes customizing or developing them for the practice, answering questions and responding to practice needs to facilitate quality and operational improvements. Efforts included a regular "Practice Tips" email and seven one-hour didactic conference calls on topics of practice interest (not necessarily related to their performance improvement targets). Practices developed action plans with their ACP CPI advisor and selected one to three clinical, operational and/or financial measures to work on. Thus, not all sites selected the same clinical measures for improvement over the study period. A detailed description of the intervention can be found elsewhere.17

| Measure | No. of practices | Time 1 | Time 2 | p-value |
|---|---|---|---|---|
| Scale: 1=totally broken to 4=works well | | | | |
| Answering phones | 18 | 2.50 | 2.48 | 0.97 |
| Appointment system | 18 | 3.71 | 1.87 | <0.01 |
| Messaging | 19 | 2.60 | 1.64 | <0.01 |
| Scheduling procedures | 17 | 2.76 | 2.68 | 0.54 |
| Ordering diagnostic tests | 20 | 2.65 | 3.74 | <0.01 |
| Reporting diagnostic test results | 20 | 2.43 | 1.52 | <0.01 |
| Prescription renewals | 20 | 2.66 | 1.79 | <0.01 |
| Making referrals | 20 | 2.55 | 3.79 | <0.01 |
| Pre-authorization for services | 19 | 3.23 | 3.41 | 0.18 |
| Billing/coding | 17 | 3.45 | 2.65 | <0.01 |
| Phone advice | 20 | 3.65 | 2.75 | <0.01 |
| Orientation of patients to your practice | 19 | 3.78 | 1.85 | <0.01 |
| New patient work-ups | 20 | 2.86 | 1.95 | <0.01 |
| Minor procedures | 18 | 3.76 | 1.94 | <0.01 |
| Education for patients/families | 21 | 3.68 | 1.76 | <0.01 |
| Prevention assessment/activities | 21 | 3.68 | 1.92 | <0.01 |
| Chronic disease management* | 21 | 3.75 | 1.72 | <0.01 |
| Coordination of patient care* | 21 | 3.68 | 1.68 | <0.01 |
| Scale: 1=very dissatisfied to 6=very satisfied | | | | |
| Work environment satisfaction* | 22 | 4.79 | 4.38 | <0.01 |
| Quality, stability, continuity, familiarity* | 22 | 3.74 | 2.41 | <0.01 |
| Scale: 1=poor to 5=excellent | | | | |
| My and others' morale* | 22 | 3.45 | 2.95 | <0.01 |
| Scale: 1=always to 5=never | | | | |
| Hurried or stressed | 22 | 3.16 | 3.16 | 0.91 |
| Scale: 1=strongly disagree to 5=strongly agree | | | | |
| Team dynamics* | 22 | 3.94 | 3.39 | <0.01 |
| This practice has enough people and resources to meet the needs of your patients | 22 | 3.53 | 2.94 | <0.01 |
| Quality improvement - measures, skills* | 22 | 3.41 | 2.84 | <0.01 |
| You know how well your practice is doing financially | 22 | 3.63 | 3.78 | 0.34 |
| You are recognized for your work | 22 | 3.76 | 2.60 | <0.01 |
| Patient centeredness* | 22 | 3.86 | 2.12 | <0.01 |
| Scale: 1=definitely no to 4=definitely yes | | | | |
| Patient engagement* | 22 | 3.70 | 3.08 | <0.01 |

Table 1: Clinician and staff practice assessments pre and post intervention.

Data collection

Data used in this analysis came from patient and clinician/staff surveys as well as practice-reported clinical metrics forms completed on a per-visit basis. A single clinical metrics form assessed quality indicators in fifteen clinical areas, including diabetes (e.g. HbA1c level), prevention (e.g. fall risk assessment, mammogram, Pap test) and congestive heart failure (e.g. prescription of an ACE-I or ARB). One goal of the CPI was to help practices learn how to gather and report quality indicators and patient satisfaction data, so the CPI provided practices with scannable paper forms, and practices faxed or emailed the forms back to the CPI on a rolling basis. Accordingly, we asked practices to submit a clinical metrics form for all eligible visits. The CPI fed summaries of each practice's performance, based on these faxed sheets, back to the practices.

The clinician and staff practice assessments used questions from several existing instruments.18-20 Areas investigated included components of general practice infrastructure and practice culture. We generally used Likert scales to measure responses, provided in Table 1 (with lower numbers representing a worse state or lower agreement, e.g. 1=totally broken to 4=works well; 1=very dissatisfied to 6=very satisfied). These data were gathered using a web-based form early and late in the intervention period (time 1: 05/15/2006-08/22/2006; time 2: 09/21/2007-10/21/2007). Challenges with getting various practices to respond during the first survey window led to an extended collection period. Because of the small size of the practices, we did not collect identifying information on respondents, and therefore could not link time 1 and time 2 data at the respondent level.

Patient demographic and satisfaction questionnaire items came from established surveys used elsewhere.20-22 They included questions about patient demographics such as gender, age and perception of overall health, and satisfaction questions such as access to care, needs met and quality of communication. As with the quality indicators, practices gathered patient satisfaction surveys using scannable paper forms provided by CPI and faxed or emailed the surveys to the CPI. Each practice was asked to complete 50 surveys at the beginning of the project and then a second set of 50 surveys in the second time period. The first set of patient satisfaction surveys was sent to the practices in late July 2006 and the second set of 50 surveys was sent in April 2007. The results for each practice were returned in batches, with some of them completing all of their surveys within a month.

Statistical analyses

Most analyses were performed at the visit or patient level, considering change over time in the data reported by all participating practices. Practice-level analyses included the comparison of participating practices to non-participants and the evaluation of the clinician and staff practice assessments. We rolled the clinician and staff surveys up to the practice level to protect the anonymity of staff in very small practices. For the quality indicators and the patient satisfaction surveys, time 1 or time 2 samples at the practice level were sometimes very small, making practice-level analysis infeasible. To judge the representativeness of the participating group, we used two-sample t-tests to compare characteristics of the participating practices with those that applied but were not invited to participate.
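To illustrate this comparison, the following is a minimal sketch using Python's scipy (the paper reports using Stata, Version 8); the group values below are invented, not the study data.

```python
# Hypothetical sketch of the participant vs. non-participant comparison.
# The paper used Stata 8; scipy is substituted here for illustration.
from scipy import stats

# Illustrative data: number of physicians per practice in each group
# (invented values, not the study data).
participating = [1, 1, 2, 1, 3, 2, 1, 1, 2, 2]
non_participating = [4, 2, 5, 3, 6, 2, 4, 3, 5, 1]

# Two-sample t-test with equal variances, as reported in Table 2.
t_stat, p_value = stats.ttest_ind(participating, non_participating, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```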

Because of the rolling submission process, for the purpose of evaluation we split the quality indicator data into two time periods for analysis, choosing the halfway month as the cutoff. Time 1 covered the period 08/2006-02/2007, and time 2 covered the period 03/2007-10/2007. To compare the practices' performance on the clinical quality indicators before and after the intervention, we used the Wilcoxon rank-sum test to examine differences in the proportion fulfilling each quality measure. We thus assumed that most visits seen in time 1 and time 2 represented unique patients. As a second approach, we regressed whether each measure was fulfilled on time period, using logistic regression and accounting for clustering of patients within practices. However, regression results could not be calculated for three quality indicators where all of the reported observations fulfilled the measure in time 2 (and in time 1, for one indicator).
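The two approaches can be sketched as follows. This is an illustrative Python translation on simulated data (the original analyses were run in Stata 8); the column names `practice`, `time2` and `fulfilled` are invented for the example.

```python
# Sketch of the two approaches for the quality indicators, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical visit-level data: one row per visit, with a practice ID,
# the time period (0 = time 1, 1 = time 2) and whether the measure was fulfilled.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "practice": rng.integers(1, 21, size=400),
    "time2": rng.integers(0, 2, size=400),
})
df["fulfilled"] = rng.binomial(1, 0.80 + 0.10 * df["time2"])

# Approach 1: Wilcoxon rank-sum test comparing the binary fulfillment
# indicator between time 1 visits and time 2 visits.
t1 = df.loc[df.time2 == 0, "fulfilled"]
t2 = df.loc[df.time2 == 1, "fulfilled"]
print(stats.ranksums(t1, t2))

# Approach 2: logistic regression of fulfillment on time period, with
# standard errors clustered by practice to account for patients within practices.
model = smf.logit("fulfilled ~ time2", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["practice"]}
)
print(model.summary())
```

As the paragraph above notes, the logistic model cannot be estimated for an indicator in which every reported observation fulfills the measure in a period, because the outcome is then perfectly predicted.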

To compare differences in practice-level clinician and staff practice assessments from time 1 to time 2, we used the Wilcoxon signed-rank test, because the practices were the same at both times but the outcome measures were generally not normally distributed.
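A minimal sketch of this paired comparison, assuming one practice-level mean score per practice at each time (hypothetical values; scipy stands in for the original Stata routine):

```python
# Paired, non-parametric comparison of practice-level scores at two times.
from scipy import stats

# Hypothetical practice-level mean scores on one assessment item,
# for the same eight practices at time 1 and time 2.
time1 = [3.7, 3.5, 3.9, 3.2, 3.8, 3.6, 3.4, 3.9]
time2 = [2.1, 2.4, 1.9, 2.6, 2.0, 2.3, 2.5, 1.8]

# Wilcoxon signed-rank test: appropriate because the same practices are
# measured at both times and the scores are not normally distributed.
stat, p = stats.wilcoxon(time1, time2)
print(f"W = {stat:.1f}, p = {p:.4f}")
```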

To compare differences in patient satisfaction and patient demographics, we used the Wilcoxon rank-sum test, or the signed-rank test for the two measures that were only available in time 2, because the data were not normally distributed. In addition, we regressed the patient satisfaction measures (access to care, needs met, and communication with provider) on time period, using linear regression and accounting for clustering of patients within practices, adjusting for patients' age, gender and overall health. For the two measures that were only measured in time 2, the Kruskal-Wallis test was performed to compare whether the mean varied significantly across the practices.
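The adjusted regression and the across-practice comparison might look like the following sketch (simulated data; the variable names and effect sizes are invented, and statsmodels/scipy substitute for the original Stata 8 commands).

```python
# Sketch of the patient-satisfaction analyses on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "practice": rng.integers(1, 26, size=500),
    "time2": rng.integers(0, 2, size=500),
    "age": rng.integers(18, 90, size=500),
    "female": rng.integers(0, 2, size=500),
    "overall_health": rng.integers(1, 6, size=500),  # 1=poor ... 5=excellent
})
# Hypothetical "needs were met" score on a 0-4 scale.
df["needs_met"] = np.clip(3.4 - 0.1 * df["time2"] + rng.normal(0, 0.5, 500), 0, 4)

# Linear regression of satisfaction on time period, adjusted for age, gender
# and overall health, with standard errors clustered by practice.
model = smf.ols("needs_met ~ time2 + age + female + overall_health", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["practice"]}
)
print(model.params["time2"], model.pvalues["time2"])

# Kruskal-Wallis test of whether a time-2-only measure varies across practices.
groups = [g["needs_met"].values for _, g in df[df.time2 == 1].groupby("practice")]
print(stats.kruskal(*groups))
```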

We conducted factor analyses to combine survey items within the clinician and staff practice assessment and within the patient satisfaction surveys when appropriate. All statistical analyses were performed using Stata statistical software, Version 8 (Stata Corp., College Station, TX). Johns Hopkins researchers conducted the analyses reported here as secondary analyses of de-identified data provided by ACP CPI staff. The analytic study was deemed exempt from review by the Johns Hopkins School of Public Health Institutional Review Board.
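As a rough illustration of the item-combination step, the sketch below extracts a single factor from four correlated survey items using scikit-learn (the paper's factor analyses were run in Stata 8; the data here are simulated).

```python
# Illustrative factor analysis combining correlated survey items.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
latent = rng.normal(size=(200, 1))                     # one underlying construct
items = latent + rng.normal(scale=0.5, size=(200, 4))  # four correlated items

fa = FactorAnalysis(n_components=1)
scores = fa.fit_transform(items)  # per-respondent factor scores
print(fa.components_)             # loadings of each item on the factor
```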

Results

Response rates (Not shown in tables)

Sample sizes varied by time period and data type. For the clinician and staff practice assessments, 31 practices submitted data in time 1 (a 91% response). Three practices left the project, and of the remaining 31, 25 submitted data in time 2 (80%). The number of submitted clinician and staff surveys per practice ranged from 1 to 18 in time 1 and from 1 to 17 in time 2. A total of 177 and 75 clinician and staff surveys were received in time 1 and time 2, respectively. For the clinical measures, the number of submitted records for each measure ranged from 13 to 732 in time 1 and from 10 to 933 in time 2. A response rate cannot be calculated for these measures because it is unknown whether practices submitted forms for all eligible patients or only for some. Clinical measures were submitted by 20 of 34 practices in time 1 and 21 of 31 practices in time 2. For the patient satisfaction surveys, a total of 1,477 and 1,105 records were submitted in time 1 and time 2, respectively. Again, a response rate cannot be calculated because the actual number of patients who were offered the survey is unknown. At the practice level, 32 practices returned patient satisfaction surveys in time 1 (94%) and 25 in time 2 (81%).

Table 2 shows a comparison of practices selected to participate and non-participating applicants on selected practice and patient characteristics. Among practice characteristics, the number of physicians (1.62 in participating vs. 3.52 in non-participating practices, p<0.05) and the number of other clinical support staff (1.79 in participating vs. 4.78 in non-participating practices, p<0.05) differed statistically between groups. There were no statistical differences between participating and non-participating practices in the number of registered nurses, nurse practitioners, physician assistants or administrative support staff. Patients' race and ethnicity were similar between participating and non-participating practices.

Clinical metrics (i.e., Quality indicators)

Of the fifteen clinical measures assessed at two time points (Table 3), five showed statistically significant improvement (p<0.05) on the Wilcoxon rank-sum test: most recent blood pressure <140 systolic and <80 diastolic for diabetic patients (68% vs. 77%); assessment of fall risk within the last 12 months for patients aged ≥75 years (78% vs. 93%); asthma patients on an inhaled corticosteroid (91% vs. 100%); flu vaccine given within 1 year to patients aged ≥65 years (86% vs. 97%); and pneumococcal vaccine given to patients aged ≥65 years (83% vs. 99%). One measure, antidepressant management for at least 12 weeks following an acute episode, was already at 100% compliance in time 1 and stayed there in time 2. However, in the regression of each measure on time with clustering by practice, only two measures (flu vaccine given within 1 year to patients aged ≥65 years and pneumococcal vaccine given to patients aged ≥65 years) showed statistically significant improvement (p<0.05). Three measures could not be tested with regression analysis because of 100% compliance at time 2.

Clinician and staff practice assessments

Surveys revealed mixed but largely negative results (Table 1). Two measures showed statistically significant improvement (p<0.05): ordering diagnostic tests (2.65 vs. 3.74) and making referrals (2.55 vs. 3.79). However, 22 measures showed statistically significant decline (p<0.05), and five items showed no statistically significant difference.

Patient characteristics and satisfaction survey

| Practice characteristic | Participating practices (mean) | No. of participating practices | Non-participating practices (mean) | No. of non-participating practices | Two-sample t-test (equal variances) p-value |
|---|---|---|---|---|---|
| Provider demographics | | | | | |
| Number of physicians | 1.62 | 34 | 3.52 | 90 | 0.024 |
| Number of RNs | 0.41 | 17 | 1.55 | 66 | 0.118 |
| Number of nurse practitioners | 0.41 | 17 | 0.73 | 67 | 0.328 |
| Number of physician assistants | 0.20 | 15 | 0.50 | 64 | 0.394 |
| Number of other clinical support | 1.79 | 28 | 4.78 | 76 | 0.031 |
| Number of administrative support | 2.41 | 27 | 8.35 | 79 | 0.089 |
| Patient demographics | | | | | |
| % White, not of Hispanic origin | 72.0 | 27 | 66.8 | 83 | 0.316 |
| % White, Hispanic origin | 8.4 | 30 | 15.1 | 72 | 0.053 |
| % Black | 15.0 | 31 | 15.5 | 75 | 0.903 |
| % Asian or Pacific Islander | 5.4 | 23 | 6.7 | 59 | 0.617 |
| % American Indian, Eskimo, Aleut | 2.1 | 11 | 1.6 | 28 | 0.685 |
| % Other | 2.7 | 12 | 2.8 | 27 | 0.899 |

Table 2: Practice characteristics, participating vs. non-participating practices.

| Clinical measure* | Time 1 obs. | Time 1 practices | Time 2 obs. | Time 2 practices | Time 1 % fulfilled | Time 2 % fulfilled | Wilcoxon rank-sum p-value | Clustered regression p-value |
|---|---|---|---|---|---|---|---|---|
| Diabetes measures | | | | | | | | |
| Hemoglobin A1c <9% | 732 | 18 | 933 | 18 | 91 | 93 | 0.07 | 0.06 |
| Most recent LDL <100 mg/dL | 643 | 17 | 798 | 17 | 70 | 73 | 0.11 | 0.26 |
| Most recent blood pressure <140 and <80 | 633 | 15 | 751 | 18 | 68 | 77 | <0.01 | 0.16 |
| Dilated eye exam within 12 months | 575 | 15 | 531 | 18 | 72 | 74 | 0.54 | 0.75 |
| Congestive heart failure measure | | | | | | | | |
| Patient on ACE-I or ARB | 77 | 10 | 20 | 8 | 94 | 90 | 0.59 | 0.55 |
| Coronary artery disease and prior myocardial infarction measures | | | | | | | | |
| Patient on beta-blocker | 118 | 9 | 47 | 9 | 92 | 91 | 0.85 | 0.83 |
| Patient on lipid-lowering agent+ | 119 | 11 | 47 | 9 | 97 | 100 | 0.20 | - |
| Assessment of fall risk within last 12 months (patients ≥75 years) | 50 | 6 | 86 | 7 | 78 | 93 | 0.01 | 0.35 |
| Antidepressant medication management at least 12 weeks for acute episode (patients ≥18 years)+ | 13 | 4 | 10 | 2 | 100 | 100 | - | - |
| Asthma patient on inhaled corticosteroid+ | 135 | 7 | 79 | 5 | 91 | 100 | 0.01 | - |
| Prevention measures | | | | | | | | |
| Mammogram within one year (women 50-69 years) | 316 | 13 | 613 | 15 | 79 | 83 | 0.18 | 0.20 |
| Pap test within past 3 years (women 18-64 years) | 220 | 12 | 185 | 11 | 85 | 86 | 0.79 | 0.86 |
| Appropriate colon cancer screening done (patients 50-80 years) | 230 | 10 | 670 | 13 | 79 | 84 | 0.09 | 0.60 |
| Flu vaccine given within 1 year (patients ≥65 years) | 401 | 8 | 182 | 10 | 86 | 97 | <0.01 | 0.01 |
| Pneumococcal vaccine (patients ≥65 years) | 238 | 9 | 191 | 9 | 83 | 99 | <0.01 | <0.01 |

Table 3: Clinical measures pre and post intervention.

Results are detailed in Table 4. Patients' gender and age were not statistically different across the two time points. However, overall health was statistically different among patients surveyed in the second time period compared with the first, with fewer patients reporting poor health and fewer reporting excellent health (p=0.05). In the second time period, the average patient response to two measures on change (noticed changes in the selected areas in the past 6 months and noticed other changes in the doctor's office in the last 6 months) was positive and statistically different from zero (p<0.01). The selected areas included: length of time to get an appointment; difficulty of contacting the office by phone; length of time spent waiting at the office; time spent with the clinician; explanation of the care; and the clinician's sensitivity to special needs or concerns. However, the patient satisfaction measure "Needs were met" (on a 0 to 4 scale) showed a statistically significant decline (p<0.01) in the second time period (3.39 vs. 3.27). Another measure, "communication with care provider," also declined significantly (p<0.01) in the second time period (0.83 vs. 0.80). None of the results were statistically significant in the multivariate analyses. For the two measures on change, the Kruskal-Wallis test showed that the mean varied significantly across the practices (p<0.01).

| Measure (patients are the unit of analysis) | Time 1 (n=1477) | Time 2 (n=1105) | Wilcoxon rank-sum or signed-rank p-value | Clustered regression p-value (adjusted for age, gender and overall health) |
|---|---|---|---|---|
| Patient characteristics (%) | | | | |
| Gender: male | 31.94 | 32.20 | 0.89 | - |
| Age: under 25 | 4.00 | 2.79 | 0.33 | - |
| Age: 25-44 | 18.17 | 16.95 | | |
| Age: 45-64 | 38.08 | 38.78 | | |
| Age: 65+ | 39.75 | 41.48 | | |
| Overall health: poor | 10.52 | 8.28 | 0.05 | - |
| Overall health: fair | 30.61 | 30.34 | | |
| Overall health: good | 41.35 | 42.51 | | |
| Overall health: very good | 14.42 | 17.07 | | |
| Overall health: excellent | 3.09 | 1.80 | | |
| Patient satisfaction (mean) | | | | |
| Access to care (0-4, 0=poor, 4=excellent) | 2.93 | 2.94 | 0.68 | 0.91 |
| Needs were met (0-4, 0=poor, 4=excellent) | 3.39 | 3.27 | <0.01 | 0.06 |
| Communication with care provider (0-1, 0=poor, 1=good) | 0.83 | 0.80 | <0.01 | 0.06 |
| Noticed changes in the selected areas in the past 6 months (-1=bad change, 0=no change, 1=good change)** | - | 0.21 | <0.01 | <0.01*** |
| Noticed any other changes in doctor's office in the last 6 months (0=no, 1=yes)** | - | 0.16 | <0.01 | <0.01*** |

Table 4: Patient characteristics and satisfaction pre and post intervention.

Discussion

In this study, we found that a customized quality intervention led by the CPI showed some evidence of positive changes in clinical measures and patient satisfaction. Specifically, blood pressure for diabetics, assessment for fall risk in the elderly, corticosteroid treatment for asthmatics, and influenza and pneumococcal vaccination for patients aged 65 and over all showed some evidence of improvement, with the changes in the two vaccinations holding in regression analysis. One patient satisfaction measure indicated patients saw positive changes in the last six months in the areas of length of time to get an appointment; ability to contact the office by phone; length of time spent waiting at the office; time spent with the clinician; explanation of care; and the clinician’s sensitivity to special needs or concerns. While results for clinician and staff practice assessments were mixed, perceptions predominantly declined over the study period.

This study is unique in its small-practice focus, its customized nature, and the breadth of clinical measures evaluated. Our analysis focused on clinical and satisfaction measures; however, this initiative also demonstrated significant improvements in compliance with safety measures.23 While there have been a small number of comprehensive small-practice QI initiatives (e.g. Practice Enhancement Forum, TransforMED, Improving Performance in Practice, Plan/Practice Improvement Project and Ideal Medical Practices), this study provides a broad analysis of pre- and post-intervention results on clinical and satisfaction measures. Descriptive reports of QI projects in small practices have also described clinical improvement,24,25 but have often not been accompanied by detailed statistical analysis validating those results.

Interestingly, our study indicated that clinician and staff assessments of their practices declined on most measures. This may reflect the perceived burden of taking on additional work, a finding supported by a study examining the attitudes of clinicians and staff in small practices taking on QI.26 The authors cited increased workload as a major obstacle to pursuing QI and consequently stressed the importance of taking on small, easy-to-handle projects first. Had we applied this approach in our study, clinician and staff practice assessments might have been higher, but we would not have had the same opportunity to evaluate as many domains of clinical care and practice management. The decline in practice assessments may also reflect the short, one-year time frame between surveys, which allows little time for change but may create improved awareness of problems. It may also reflect the fact that change is difficult and can be very uncomfortable until gains are realized and attitudes begin to improve. Additionally, the survey results reported here captured views at the practice level and were not linked to individual staff members. Response attrition over time may have left a biased respondent population, possibly one that was more disgruntled, although one might expect such individuals to leave. Finally, a detailed reflection on this initiative, published elsewhere, described how the format of didactic presentations, the scheduling of proposed changes, and the time commitment to QI activities presented unique challenges to the individual performance of practices.17

On the other hand, we observed positive changes in patient satisfaction. In our study, a composite measure indicated that patients noticed positive changes over the past six months in the length of time to get an appointment, the difficulty of contacting the office by phone, the length of time spent waiting at the office, time spent with the clinician, the explanation of the care, and the clinician's sensitivity to special needs or concerns. However, these gains contrast with a decline in the measure "needs were met" and no significant change in the measure "access to care." This discrepancy may in part be explained by different respondent groups at each time point. The improvement in the first measure, "noticed changes in the selected areas in the past 6 months," implicitly captures only patients who had an over-time perspective on the practice. In contrast, items that did not ask explicitly about change over time but compared values at time 1 to time 2 did not demonstrate the perceived improvement reported by patients.

Limitations

There are some limitations in this study. First, the relatively short time period of the evaluation was set by the availability of funding, and one year may not have been enough time for practices to make meaningful changes routine. Second, some selection bias may have been introduced by the fact that practices volunteered and were, by design, located near at least a few other participating practices; however, practices often identified themselves as needing help in their applications, so it is unlikely that any bias would tend to improve intervention results. Further, as in many QI interventions, challenges in gaining data-reporting compliance sometimes limit the possibilities for robust evaluation. Despite its limitations, this study is one of the few that has quantified and assessed QI across a number of areas in small medical practices. Furthermore, it demonstrates that a customized approach to quality can lead to some improved clinical outcomes and potentially to increased patient satisfaction.

Conclusion

Small practices today face a range of challenges as payment moves from fee-for-service to value-based reimbursement models.27 The findings presented here suggest that the intervention by the ACP's Center for Practice Innovation led to positive changes in some clinical measures and some aspects of patient satisfaction. The CPI project provides some insight into how small practices can take the first steps toward meeting higher standards in quality of care.

References