Quality in Primary Care


Quality Improvement Report - (2014) Volume 22, Issue 3

Evidence-based healthcare and quality improvement

Steve Gillam MD FFPH FRCP FRCGP*

Department of Public Health and Primary Care, Institute of Public Health, University of Cambridge, UK

A Niroshan Siriwardena MMedSci PhD FRCGP

Professor of Primary and Prehospital Health Care, Community and Health Research Unit (CaHRU), University of Lincoln, UK

Corresponding Author:
Steve Gillam
Department of Public Health and Primary Care
Institute of Public Health, University of Cambridge
Robinson Way, Cambridge CB2 2SR, UK
Email: sjg67@medschl.cam.ac.uk

Received date: 11 February 2014; Accepted date: 24 March 2014


Abstract

This is the tenth in a series of articles about the science of quality improvement. We explore how evidence-based healthcare relates to quality improvement, implementation science and the translation of evidence to improve healthcare practice and patient outcomes. Evidence-based practice integrates the individual practitioner’s experience, patient preferences and the best available research information. Incorporating the best available research evidence in decision making involves five steps: asking answerable questions, accessing the best information, appraising the information for validity and relevance, applying the information to the care of patients and populations, and evaluating the impact for evidence of change and expected outcomes. Major barriers to implementing evidence-based practice include the impression among practitioners that their professional freedom is being constrained, lack of appropriate training and resource constraints. Incentives, including financial incentives, guidance and regulation, are increasingly being used to encourage evidence-based practice.

Keywords

evidence-based medicine, general practice, implementation, primary care, quality improvement

Introduction

For quality improvement initiatives to be effective, they should be based on sound evidence. However, there are two main considerations relating to this evidence base. First, the intervention or interventions that the quality improvement initiative seeks to implement should have evidence of benefit: they should lead to improvements in patient outcomes that are, ideally, both clinically important and cost-effective. Evidence that translates basic research into its clinical application through new health technologies (either products or approaches) has been termed the ‘first translational gap’. Second, quality improvement initiatives should be based on sound evidence of what works to implement these products or approaches. This is the ‘second translational gap’, which forms the basis of quality improvement and implementation science.[1] We now consider evidence-based healthcare in the context of both these translational gaps.

What is evidence-based healthcare?

How much of what health and other professionals do is based soundly in science? Answers to the question ‘is our practice evidence based?’ depend on what we mean by practice and what we mean by evidence. This varies from discipline to discipline. A study in general practice found that around 31% of therapeutic clinical decisions were based on evidence from randomised controlled trials (RCTs), whereas 51% were based on convincing non-experimental evidence.[2]

Sackett et al defined evidence-based medicine (EBM) as ‘the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients ... integrating individual clinical expertise with the best available external clinical evidence from systematic research’.[3] The expansion of EBM has been a major influence on clinical practice over the last 20 years. The demands of purchasers of healthcare keen to optimise value for money have been one driver. A growing awareness among health professionals and their patients of medicine’s potential to cause harm has been another. In this article, we examine the nature of what is nowadays more broadly referred to as evidence-based healthcare (EBHC) in the context of quality improvement and discuss its strengths and limitations.

The tools necessary for evidence-based healthcare

The tools needed to practice in an evidence-based way are common across healthcare disciplines. Doctors, nurses and allied health professionals all need the skills to ensure that the work they do – whether with individual clients or patients, or in the development of policies for quality improvement – is based on sound knowledge of what is likely to work.

Of the following five essential steps, the first is probably the most important:

• convert information needs into answerable questions, i.e. by asking a focused question

• track down the best available evidence

• appraise evidence critically

• change practice in the light of evidence

• evaluate your performance.

Step 1. Asking a focused question

Before seeking the best evidence, you need to convert your information needs into a tightly focused question. For example, it is not enough to ask ‘Are antibiotics effective for otitis media?’ We need to convert this into an answerable question: ‘Do antibiotics reduce the duration of symptoms when prescribed to children with otitis media?’

The PICO approach can be used as a framework to focus a question by considering the necessary elements. It contains four components:

• Patient or population (children under 5 years)

• Intervention (antibiotics)

• Comparison intervention (placebo)

• Outcome (duration of specific symptoms, e.g. pain, or rate of complications).

Question

Form a focused clinical question using the PICO format to find the evidence for the effectiveness of smoking-cessation interventions in adult smokers who have had a heart attack.

Answer

• P Adult smokers who have had a heart attack.

• I Providing smoking cessation intervention.

• C Providing usual care.

• O Mortality and quit rates.

This gives us the question ‘In smokers who have had a heart attack does a smoking-cessation intervention in comparison with usual care reduce mortality and improve quit rate?’.[4]
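For readers who like to organise question-building electronically, the PICO elements map naturally onto a small structured record. The sketch below is illustrative only: the class and field names are our own, and the template simply restates the worked example above.

```python
from dataclasses import dataclass


@dataclass
class PicoQuestion:
    """The four elements of a focused clinical question."""
    population: str    # P: patient or population of interest
    intervention: str  # I: intervention being considered
    comparison: str    # C: comparison intervention
    outcome: str       # O: outcome(s) of interest

    def as_question(self) -> str:
        """Assemble the elements into a single focused question."""
        return (f"In {self.population}, does {self.intervention} "
                f"in comparison with {self.comparison} "
                f"affect {self.outcome}?")


# The smoking-cessation example from the text.
pico = PicoQuestion(
    population="smokers who have had a heart attack",
    intervention="a smoking-cessation intervention",
    comparison="usual care",
    outcome="mortality and quit rate",
)
print(pico.as_question())
```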

Step 2. Tracking down the evidence

The second step in the practice of evidence-based healthcare is to track down the best evidence. Doctors and nurses often assess outcomes in terms of surrogate pathological end points rather than commonplace changes in quality of life or the ability to perform routine activities (‘the operation was a success, but the patient died’).

Traditionally, doctors making decisions about what works have attached much weight to personal experience or the views of respected colleagues. Over time, knowledge of up-to-date care diminishes, so there is a constant need for the latest evidence and simple ways to access and use it.[5,6] A study of North American physicians has shown that up-to-date clinical information is needed twice for every three patients seen, but they only receive 30% of this due to lack of time, dated textbooks and disorganised journals.[7]

Rather than relying on colleagues or textbooks, EBHC encourages the use of research evidence in a systematic way. Once a question has been formulated, the research base is then searched to find articles of relevance.

So what counts as evidence? Care needs to be taken in relying on published articles. Many reviews reflect the prejudices of their authors and are anything but systematic. Even mainstream journals tend to accept articles yielding positive rather than negative findings (for example, in assessing treatments), so-called ‘publication bias’.[8,9] Most books date rapidly. Hence the prominence nowadays accorded to properly conducted systematic reviews, which are placed at the top of a ‘hierarchy’ of evidence. A widely used ranking of the strength of evidence is shown in Table 1.[10]

[Table 1: A widely used ranking of the strength of evidence by study design]

Table 1 reminds us of the three main types of epidemiological study design: descriptive, observational and interventional. When searching for evidence, we should look for the highest level suitable to our question. A question relating to the effectiveness of an intervention will most appropriately be answered by an RCT or a systematic review of RCTs. The RCT is widely regarded as the ‘gold standard’ method for determining effectiveness because robust randomisation ensures that study and control groups differ only in terms of their exposure to the factor under study; the observed results are due only to the intervention and not to alternative explanations (so-called confounding variables). The Scottish Intercollegiate Guidelines Network (SIGN) takes into account the potential biases in its hierarchy of evidence. We can find answers to questions about the causes of a disease from case–control or cohort studies.

However, questions beginning ‘Why?’ or ‘How?’ are often not answered by these types of study. What factors, after all, go to make a ‘good nurse’ or a ‘good general practitioner’, and how easily are they measured? It is not possible to answer the question ‘Why do women refuse an offer of breast screening?’ with any of the study types mentioned so far. Another example would be: ‘How do medicines get prescribed inappropriately in older patients?’ In these cases, one looks for a qualitative study. Qualitative studies use methods such as interviews, diaries and direct observation to provide detailed information to describe the experiences of participants. Qualitative data are then analysed rigorously to lead to conclusions about why or how something might have occurred.[11] Detailed coverage of qualitative methodology is beyond the scope of this article, but it is important to remember that not every question can be answered using the classical hierarchy above. Qualitative methods can generate a wealth of knowledge to contextualise many of the decisions health professionals must make.

Question

Consider the questions below. What studies would be most appropriately conducted to answer them: RCT, cohort, case–control, cross-sectional or qualitative?

a. For what conditions do patients call their GP out of hours?

b. What are the barriers to hand washing in healthcare settings?

c. Does paternal exposure to ionising radiation before conception cause childhood leukaemia?

d. What is the most sensitive and specific method of screening for genital chlamydial infection in women attending general practice?

e. Does laparoscopic cholecystectomy cause less morbidity and a swifter return to work than a small-incision cholecystectomy?

f. Do clinicians change their practice as a result of education?

g. For a given patient with asthma, does beclo-methasone give better symptomatic control than fluticasone?

h. How do patients and carers view the service pro-vided by a mental health team?

i. How does smoking cessation affect the risk of stroke in middle-aged men?

Answer

a. Cross-sectional study.

b. Qualitative study.

c. Case–control study.

d. Cross-sectional study.

e. Randomised controlled trial.

f. Cohort study.

g. Randomised controlled trial.

h. Qualitative study.

i. Cohort study.

There are various primary and secondary sources of evidence. Primary sources are the thousands of original articles published every year in research journals. However, to deal with the vast amount of information available, more and more people now turn to secondary sources of evidence. The most important source of systematic reviews is the Cochrane Database (www.cochrane.org). The Cochrane Collaboration (named after Archie Cochrane, an early pioneer of EBM) is an international endeavour to summarise high-quality evidence in all fields of medical practice. It has slowly transformed many areas of clinical practice.

It is important to have basic skills in searching the literature, although the help of expert librarians may be needed. Research papers are catalogued in a variety of databases searchable on the internet. For many medical or public health queries the database Medline is a good starting place. Other databases are available for specialist queries, such as those in the fields of mental health and nursing. The PICO format is helpful here, as it can be used to generate search terms with which to query the databases. Databases may have tools to support the user in this, such as the ‘Clinical Queries’ tool in PubMed, the US National Library of Medicine’s service for searching the biomedical research literature.

We can use our example question from earlier to demonstrate how a search might work. Our focused question was ‘In smokers who have had a heart attack does a smoking-cessation intervention in comparison with usual care reduce mortality and improve quit rate?’

Question

What study type would be appropriate for answering this question?

Answer

A randomised controlled trial would be appropriate: smokers who have had a heart attack are randomised to receive either a smoking-cessation intervention or usual care, giving a measure of the relative effectiveness of the smoking-cessation intervention.

Question

Using the PICO format, list the key words we need to use to search databases through a search function such as PubMed’s Clinical Queries.

Answer

Smokers, heart attack, cessation, counselling, mortality. In Clinical Queries, when we select an option to indicate that our interest is in therapy (i.e. intervention studies), the term ‘randomised controlled trial’ is automatically added to the key words. In other search systems or databases this may need to be added manually.
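For those who prefer to script such searches, the sketch below runs an equivalent query against PubMed through the NCBI E-utilities ‘esearch’ service. The query string, term combinations and result limit are illustrative assumptions rather than a prescribed search strategy; note that, outside Clinical Queries, the trial filter has to be added by hand.

```python
import requests

# NCBI E-utilities endpoint for searching PubMed (a publicly documented service).
ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Key words derived from the PICO elements. The randomised controlled trial
# filter is added explicitly here; the exact terms are illustrative only.
query = (
    '(smokers OR smoking) AND ("heart attack" OR "myocardial infarction") '
    "AND (cessation OR counselling) AND mortality "
    'AND "randomized controlled trial"[pt]'
)

params = {
    "db": "pubmed",     # search the PubMed database
    "term": query,      # the assembled search string
    "retmode": "json",  # ask for a JSON response
    "retmax": 20,       # return at most 20 PubMed IDs
}

response = requests.get(ESEARCH_URL, params=params, timeout=30)
response.raise_for_status()
result = response.json()["esearchresult"]

print(f"Records found: {result['count']}")
print("PubMed IDs:", ", ".join(result["idlist"]))
```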

The journal articles found using this strategy are:

• Rigotti NA, Thorndike AN, Regan S et al. Bupropion for smokers hospitalised with acute cardiovascular disease. American Journal of Medicine 2006;119:1080–7.

• Dornelas EA, Sampson RA, Gray JF, Waters D and Thompson PD. A randomised controlled trial of smoking cessation counseling after myocardial infarction. Preventive Medicine 2000;30:261–8.

Question

Look at these results. Are these articles relevant?

Answer

Yes. Bupropion is used to help smokers quit their habit. The second study is an RCT testing the effectiveness of smoking cessation in patients who have had a heart attack.

In our search for evidence, it should be remembered that not every piece of information that might help us answer our question may have been published. Studies may be in progress that could inform our action; negative studies, which could help tell us what not to do, may not have made it as far as a publication; many pharmaceutical companies have unpublished information; conference reports might provide helpful information. As we move down the hierarchy, it becomes more difficult to find this kind of evidence (called ‘grey’ literature) from readily available sources, but some databases and repositories are available. This is a good time to seek the help of an expert librarian!

Step 3. Appraising the evidence

To determine whether we should act on the results of the studies found in the search, we must be able critically to appraise a range of study types. An understanding of some basic epidemiological concepts is needed to understand the methods used and the results presented. We are looking to decide whether the results are valid enough to change our practice. In order to do this, we ask a series of questions about the study, which include:

• Did the research ask a clearly focused question and carry out the right sort of study to answer it?

• Were the study methods robust?

• Do the conclusions made match the results of the study? (Might the results have been due to chance? Were they ‘big’ enough to make a real difference? See the worked sketch after this list.)

• Can we use these results in our practice?
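The questions about chance and size of effect can be made concrete with a short worked calculation. The sketch below computes a risk ratio and an approximate 95% confidence interval from a two-by-two table; the counts are invented purely for illustration.

```python
import math

# Hypothetical trial counts (invented for illustration only):
# events and totals in the intervention and control arms.
events_intervention, n_intervention = 30, 200
events_control, n_control = 50, 200

risk_intervention = events_intervention / n_intervention
risk_control = events_control / n_control
risk_ratio = risk_intervention / risk_control

# Standard error of log(risk ratio) and an approximate 95% CI.
se_log_rr = math.sqrt(
    1 / events_intervention - 1 / n_intervention
    + 1 / events_control - 1 / n_control
)
lower = math.exp(math.log(risk_ratio) - 1.96 * se_log_rr)
upper = math.exp(math.log(risk_ratio) + 1.96 * se_log_rr)

print(f"Risk ratio: {risk_ratio:.2f} (95% CI {lower:.2f} to {upper:.2f})")
# A confidence interval that excludes 1 suggests the result is unlikely to be
# due to chance alone; whether the effect is 'big' enough remains a judgement.
```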

There are standard checklists available to support systematic appraisal of different types of study designs. We can use these to help determine how valid the findings of the study are, and whether the findings can be generalised to our own population.

Table 2 shows a checklist for appraising an RCT, the most appropriate primary design to generate evidence of effective interventions. This checklist is taken from the Critical Appraisal Skills Programme (CASP) in Oxford (www.casp-uk.net).

[Table 2: Critical Appraisal Skills Programme (CASP) checklist for appraising a randomised controlled trial]

It is important to be able to critically analyse the results of all study types but, as the volume of scientific literature increases, it is perhaps most important to be able to use systematic reviews effectively to guide practice. It has been estimated that a general physician needs to read for 119 hours a week to keep up to date; medical students are alleged to spend one to two hours reading clinical material per week – and that is more than the doctors who teach them.[12] Also, a single study of insufficient sample size or of otherwise poor quality may yield misleading results. The right answer to a specific question is more likely to come from a systematic review. This is a review of all the literature on a particular topic, which has been methodically identified, appraised and presented. The statistical combination of all the results from included studies to provide a summary estimate or definitive result is called meta-analysis.
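To make the arithmetic of meta-analysis concrete, the sketch below pools hypothetical treatment effects from three studies using the standard inverse-variance (fixed-effect) method; the figures are invented purely to illustrate the calculation, and a real meta-analysis would also assess heterogeneity and risk of bias.

```python
import math

# Hypothetical study results (invented for illustration): each tuple is
# (log risk ratio, standard error of the log risk ratio).
studies = [
    (-0.51, 0.21),
    (-0.36, 0.18),
    (-0.22, 0.30),
]

# Inverse-variance (fixed-effect) pooling: weight each study by 1 / SE^2.
weights = [1 / se ** 2 for _, se in studies]
pooled_log_rr = sum(w * lrr for (lrr, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

pooled_rr = math.exp(pooled_log_rr)
lower = math.exp(pooled_log_rr - 1.96 * pooled_se)
upper = math.exp(pooled_log_rr + 1.96 * pooled_se)

print(f"Pooled risk ratio: {pooled_rr:.2f} (95% CI {lower:.2f} to {upper:.2f})")
```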

Step 4. Changing practice in light of evidence

Following through on the results of your appraisal of new evidence – implementation – is arguably the most difficult of the five steps. Some change can be self-initiated; other circumstances require change in those around you. The implementation of effective interventions often requires change in others. The management of people and an understanding of how they will react to change are invaluable.[13] Implementation strategies may be classified according to the target of the intervention (e.g. patients, providers or systems), the type of intervention (e.g. education, reminders, feedback) or the social theory (e.g. social influence, marketing) that underpins the intervention. The evidence for different types of intervention varies (Box 1).

[Box 1: The evidence for different types of implementation intervention]

Theoretical models of change and evidence can help us to determine how to implement change. For example, the three main contributors to change are the evidence that underlies the change, the interventions (or facilitators) used to bring about improvement and the context for transformation. The context includes the change agents or various individuals and organisations involved in producing change, including the patient, the provider (healthcare professional), the healthcare team and the various other supporting organisations involved. Quality improvement and implementation efforts will need to embrace this complexity.[15]

There is no magic bullet.[16] Most interventions are effective under some circumstances; none is effective under all circumstances. A diagnostic analysis of the individual and the context must be performed before selecting a method for altering individual practitioner behaviour. Interventions based on assessment of potential barriers are more likely to be effective.[17]

Step 5. Evaluating the effects of changes in practice

Commonly, this step will involve a quality improvement project or clinical audit based on an understanding of the processes involved[18] and a framework for improvement.[19] Depending on how frequently the intervention or activity under scrutiny is performed, a review of practice can be undertaken throughout the change using statistical process control methods, or before and after the change using a clinical audit.[20] More robust methods such as RCTs or quasi-experimental studies (controlled before-and-after and interrupted time series designs) are sometimes used to determine the extent of an improvement; qualitative methods can be used to understand how or why an intervention was successful; and mixed methods such as action research or case study methods can be used to do both.[21]
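As a minimal illustration of the statistical process control approach mentioned above, the sketch below calculates the centre line and three-sigma control limits of a p-chart for a monthly proportion of patients receiving a particular treatment; the counts are invented purely for illustration, and three-sigma limits are used as a common convention.

```python
import math

# Hypothetical monthly audit data (invented for illustration):
# (patients receiving the treatment of interest, patients seen).
monthly = [(18, 40), (22, 45), (15, 38), (20, 42), (12, 41), (10, 39)]

# Centre line: the overall proportion across all months.
total_events = sum(e for e, _ in monthly)
total_n = sum(n for _, n in monthly)
p_bar = total_events / total_n
print(f"Centre line (overall proportion): {p_bar:.2f}")

for month, (events, n) in enumerate(monthly, start=1):
    p = events / n
    # Three-sigma control limits vary with each month's sample size.
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    lcl = max(0.0, p_bar - 3 * sigma)
    ucl = min(1.0, p_bar + 3 * sigma)
    flag = "outside limits" if p < lcl or p > ucl else "within limits"
    print(f"Month {month}: p = {p:.2f} (limits {lcl:.2f}-{ucl:.2f}), {flag}")
```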

Question

If we go back to the example of use of antibiotics and otitis media from Step 1, how would you know that your practice had changed if you found yourself to be over-using them?

Answer

You could audit all consultations of children with otitis media over a defined period and check the proportion that had been treated with antibiotics before and after the introduction of new practice guidelines.
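A minimal sketch of that before-and-after audit comparison, assuming hypothetical consultation counts; a chi-squared test gives a rough indication of whether the observed change in prescribing is larger than chance variation.

```python
from scipy.stats import chi2_contingency

# Hypothetical audit counts (invented for illustration):
# [treated with antibiotics, not treated] before and after the new guidelines.
before = [60, 40]
after = [35, 65]

chi2, p_value, _, _ = chi2_contingency([before, after])

print(f"Before: {before[0] / sum(before):.0%} treated with antibiotics")
print(f"After:  {after[0] / sum(after):.0%} treated with antibiotics")
print(f"Chi-squared = {chi2:.2f}, p = {p_value:.3f}")
```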

Limitations to evidence-based healthcare

Evidence is only one influence on our practice. Education alone may not change deeply ingrained habits, e.g. patterns of prescribing. Knowledge does not necessarily change practice. This is true for practitioners and for patients or the public. An example is the continued use by patients of complementary therapies, which professionals consider to be ineffective.[22]

Hence, we need to consider employing other mechanisms to stimulate change and improvement. These include regulation[23] and commissioning.[24] Commissioning or purchasing can also include financial incentives, which are used to promote interventions known to be effective (e.g. target payments to increase immunisation uptake). In the NHS, the Quality and Outcomes Framework (QOF) system of pay-for-performance was introduced in 2004 to improve the quality of clinical care and promote evidence-based practice,[25] but the evidence for its effectiveness is mixed.[26]

The most strident criticisms of EBHC have come from those physicians who resent intrusions into their clinical freedom. The use of evidence-based protocols has been demeaned as ‘cookbook medicine’.[27] A more powerful philosophical argument is mounted by those arguing that a rigid fixation on RCTs risks ignoring important qualitative sources of evidence.[28]

In addition, there may be times when high-quality evidence simply does not exist. This should not prevent action! The lack of RCTs does not mean that an intervention is ineffective; it means that there is no evidence that it is effective, which is a clear distinction. In these cases, one has to use the best evidence available. When no research evidence exists there is nothing wrong with asking colleagues for their opinions; the practice of EBHC simply means we should at least carry out the search.

In conclusion, the terms ‘evidence-based medicine’ and ‘evidence-based healthcare’ were developed to encourage practitioners and patients to pay due respect, no more and no less, to current evidence in making decisions. Evidence should enhance healthcare decision making, not rigidly dictate it.[29] Practitioners need to consider the health and social care needs of the practice population and what effective interventions are available to meet them. Finally, the practitioner must consider individual or societal preferences.

Peer Review

Commissioned; not externally peer reviewed.

Conflicts of Interest

None declared.

References