Quality in Primary Care

Editorial - (2009) Volume 17, Issue 3

Using quality improvement methods for evaluating health care

A Niroshan Siriwardena MMedSci PhD FRCGP*

Foundation Professor of Primary Care, School of Health and Social Care, University of Lincoln, UK

Corresponding Author:
A Niroshan Siriwardena
School of Health and Social Care
University of Lincoln
Lincoln LN6 7TS, UK
Tel: +44 (0)1522 886939
Fax: +44 (0)1522 837058
Email: nsiriwardena@lincoln.ac.uk

Quality improvement initiatives are a ubiquitous feature of modern healthcare systems because of actual and perceived gaps in the quality of healthcare delivery.[1,2] However, such initiatives are often not subject to evaluation, or, when evaluation is conducted, it is done poorly.[3]

Quality improvement methods are increasingly being used to aid diffusion of innovations in health and can be used as a research tool to model and design complex healthcare interventions.[4] However, as well as being components of quality improvement programmes, they can also be a useful adjunct to other more traditional evaluation methods, thus serving a dual role.

Evaluation is often undertaken to determine the quality of care being provided by an individual, team or service, where quality is taken to mean the effectiveness, efficiency, safety or patient experience of that care.[1] Evaluation is also undertaken to ensure that the aims of care are being met, to provide information for service users, commissioners, healthcare providers or other stakeholders about the quality of services being provided, and finally to establish the basis for future improvements. Quality improvement research is applied research involving the evaluation of quality improvement initiatives, aimed at informing policy and practice.[5] Current guidelines for reporting quality improvement include ‘descriptions of the instruments and procedures (qualitative, quantitative or mixed) used to assess the effectiveness of implementation, the contributions of intervention components and context to effectiveness of the intervention and the impact on primary and secondary outcomes’.[6]

A useful starting point for an evaluation is a logic model, in which the clinical population and problem that the healthcare intervention is aimed at, the inputs (resources provided for planning, implementation and evaluation), the outputs (healthcare processes implemented and the population actually reached) and the longer-term outcomes are measured in terms of health and wider benefits or harms, whether intended or incidental, over the short, medium or long term (see Figure 1).[7]

Figure 1: A logic model for evaluating health care
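As an illustration only, the elements of a logic model can be captured as simple structured data before measures are attached to each element. The Python sketch below assumes a hypothetical asthma review service; the field names and entries are illustrative and are not drawn from the editorial.

```python
# A minimal, illustrative sketch of a logic model as structured data.
# The service and all entries are hypothetical examples.
from dataclasses import dataclass, field


@dataclass
class LogicModel:
    population_and_problem: str
    inputs: list[str] = field(default_factory=list)    # resources for planning, delivery, evaluation
    outputs: list[str] = field(default_factory=list)   # processes delivered and population reached
    outcomes: dict[str, list[str]] = field(default_factory=dict)  # short-, medium-, long-term


asthma_review = LogicModel(
    population_and_problem="Adults with asthma and poor symptom control",
    inputs=["nurse time", "spirometry clinic", "audit support"],
    outputs=["structured reviews delivered", "proportion of the asthma register reached"],
    outcomes={
        "short": ["personal action plans agreed"],
        "medium": ["fewer exacerbations"],
        "long": ["reduced emergency admissions"],
    },
)
print(asthma_review)
```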

A logic model can be expanded, either as a whole or in specific areas, to form a ‘cause and effect’ (sometimes called a fishbone or Ishikawa) diagram (see Figure 2). The central line, representing the patient pathway, is affected by patients themselves, but also by the other inputs and outputs (processes) as patients travel through the healthcare system being evaluated.[8]

Figure 2: Cause and effect (‘fishbone’) diagram

Traditional evaluation methods look at the structure, processes (outputs) or outcomes of care using various qualitative or quantitative methods (see Box 1).[9]

Box 1: Examples of traditional healthcare evaluation methods

However, a number of quality improvement methods can also be used for evaluation, and these overlap considerably with traditional evaluative techniques (Box 2). These methods have the potential to enable better understanding of the processes of care and, importantly, to shed light on how to improve them.

Box 2: Examples of quality improvement evaluation methods

Clinical audit, which is the ‘systematic, critical analysis of the quality of medical care, including the procedures used for diagnosis and treatment, the use of resources and the resulting outcome for the patient’,[10] builds evaluation into the process. It involves measurement of care (‘how are we doing?’) against established criteria and standards (‘what should we be doing?’) through which performance and changes in performance can be measured (‘have the changes we have made led to improvement?’). Audit can be, and has been, used as an evaluation method, even in randomised studies.
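The measurement step of audit lends itself to a very small worked example. The sketch below uses a hypothetical criterion, standard and set of patient records (none taken from the editorial) and simply compares observed performance against the agreed standard.

```python
# Illustrative measurement step of clinical audit: observed performance
# compared against an agreed criterion and standard. All data are hypothetical.

records = [  # one entry per patient: did care meet the audit criterion?
    True, True, False, True, True, False, True, True, True, False,
]

criterion = "Patients with asthma have a written action plan"
standard = 0.80  # the agreed level of performance, e.g. 80%

performance = sum(records) / len(records)
print(f"{criterion}: {performance:.0%} achieved (standard {standard:.0%})")
print("Standard met" if performance >= standard
      else "Standard not met - agree changes and re-audit")
```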

Significant event audit is another technique that is frequently used to evaluate care, particularly care that is considered to fall below standards or that is outstandingly good.[11] It is a powerful tool for evaluating healthcare processes by attempting to understand the detailed factors that led to care being outside the norm, but it can also help improve communication, team building and quality.[12]

Plan, do, study, act (PDSA) cycles are another means of investigating care processes while rapidly implementing evidence-based or common-sense changes to processes of care, enabling changes to be spread more easily and effectively.[13] The third stage of the PDSA cycle involves studying the effect of a change using numerical or qualitative data – even with small-scale changes, the effect over time on processes of care can be measured and analysed using statistical process control techniques. The PDSA model is a useful means of evaluating while introducing rapid change to healthcare processes.[14]

Focus groups and individual interviews are important traditional techniques for gathering data on the experiences of patients and staff with services. An important quality improvement tool, which is a development of these, is the ‘discovery interview’.[15] This narrative technique involves listening to the stories of patients and carers about the care they have received in order to understand experiences from a user perspective. Other narrative techniques for quality improvement research and evaluation include naturalistic story gathering during a project or collective sense-making of a complete project by a participant observer, and the organisational case study.[5]

Root cause analysis is a specific type of significant event analysis which aims to find explanations for adverse or untoward events through the systematic review of written and oral evidence to establish underlying causes.[16] The analysis involves defining the problem, gathering evidence, identifying possible root causes and the underlying reasons for these and then deciding which causes are amenable to change. This leads to recommendations, the effect of which can be further evaluated.[17]

The Pareto (or 80/20) principle (see Figure 3) describes how a relatively small number of key causes lead to most of the important outcomes; for example, 80% of outputs, outcomes or harms are due to 20% of inputs or causes. This can help to distinguish the most important causes.[18]

Figure 3: Pareto diagram for prescribing errors
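A minimal sketch of a Pareto analysis is shown below, using hypothetical prescribing-error categories and counts (not the data behind Figure 3): causes are ranked by frequency and cumulated until the ‘vital few’ accounting for roughly 80% of errors are identified.

```python
# Illustrative Pareto analysis: rank causes by frequency and identify the
# "vital few" accounting for ~80% of events. Categories and counts are hypothetical.

error_counts = {
    "wrong dose": 120,
    "wrong drug": 45,
    "omitted drug": 30,
    "wrong patient": 15,
    "illegible prescription": 10,
    "other": 5,
}

total = sum(error_counts.values())
cumulative = 0.0
for cause, count in sorted(error_counts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count / total
    print(f"{cause:25s} {count:4d}  cumulative {cumulative:.0%}")
    if cumulative >= 0.80:
        print("-- the causes above account for ~80% of errors --")
        break
```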

Process mapping can describe the patient journey through the system of care, and even complex pathways can be visualised using spaghetti diagrams or ‘swim lane’ diagrams (see Figure 4) to separate processes into different job roles or team activities.

Figure 4: Swim lane diagram for asthma care

Components of a process which are critical to quality (CTQ) can be represented as a CTQ tree (see Figure 5). Such evaluations can determine whether the right treatment is given by the right person at the right time and place.[19]

Figure 5: Critical to quality (CTQ) tree
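Purely as an illustration, a CTQ tree can be represented as a nested structure linking a broad need to its drivers and then to measurable requirements. The example below is hypothetical and loosely themed on asthma care; it is not taken from Figure 5.

```python
# Illustrative CTQ tree as a nested structure: need -> drivers -> measurable
# "critical to quality" requirements. All content is hypothetical.

ctq_tree = {
    "need": "Effective, timely asthma review",
    "drivers": {
        "right treatment": ["inhaler technique checked", "treatment stepped up or down per guideline"],
        "right person": ["review delivered by a trained asthma nurse"],
        "right time and place": ["review within 4 weeks of an exacerbation", "offered in a local clinic"],
    },
}

for driver, requirements in ctq_tree["drivers"].items():
    for requirement in requirements:
        print(f"{ctq_tree['need']} -> {driver} -> {requirement}")
```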

Another important aspect of evaluation is the human factors involved in change.[20] Ownership of change is particularly important for healthcare professionals, such as doctors and nurses, who at the front line of care have the power to promote or subvert change. This idea, the inverted pyramid of control,[21] has been applied to health care to emphasise the importance of clinical leadership.[22] An understanding of internal strengths and challenges (weaknesses) as well as external opportunities and threats, together with individual and group drivers of and barriers to change, is critical to successful health services, an approach which has its basis in Lewin’s ‘force field theory’.[23]

Comparing and benchmarking individual or organisational performance using statistical process control can help identify differences or gaps in performance,[24] enabling ‘special causes’ to be highlighted and explanations sought, pointing to ways of changing practice to improve performance (Figure 6).

Figure 6: Funnel plot showing institutional performance for aspirin administration to patients with ST-elevation myocardial infarction
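The logic behind such a funnel plot can be sketched in a few lines of code. The example below uses hypothetical institutional figures and 3-sigma limits based on the normal approximation to the binomial around the overall proportion, flagging institutions whose performance lies outside those limits as potential ‘special causes’.

```python
# Illustrative funnel-plot limits for a binary process measure (e.g. aspirin
# given on arrival). Institution names and figures are hypothetical.
import math

institutions = {"A": (46, 50), "B": (180, 200), "C": (95, 140), "D": (300, 310)}

overall = (sum(num for num, _ in institutions.values())
           / sum(den for _, den in institutions.values()))

for name, (num, den) in institutions.items():
    p = num / den
    se = math.sqrt(overall * (1 - overall) / den)       # standard error at this denominator
    lower, upper = overall - 3 * se, overall + 3 * se   # 3-sigma funnel limits
    flag = "special cause" if p < lower or p > upper else "common cause"
    print(f"Institution {name}: {p:.1%} (limits {lower:.1%} to {upper:.1%}) -> {flag}")
```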

Statistical process control charts plotted against time can also show where improvements have occurred in response to planned interventions,[25] and feedback using this technique as part of ongoing evaluation can contribute to improvement.[26,27]
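As a complementary sketch, the same control-limit logic can be applied to a measure tracked over time, so that a sustained shift after a planned change stands out against baseline variation. The monthly figures below are hypothetical.

```python
# Illustrative control chart over time: monthly proportions with 3-sigma limits
# calculated from the pre-intervention baseline. All figures are hypothetical.
import math

monthly = [  # (numerator, denominator) per month; change introduced after month 6
    (40, 60), (42, 58), (39, 61), (41, 59), (43, 62), (40, 60),
    (52, 60), (55, 61), (54, 59), (56, 60),
]

baseline = monthly[:6]
p_bar = sum(n for n, _ in baseline) / sum(d for _, d in baseline)  # baseline centre line

for month, (num, den) in enumerate(monthly, start=1):
    p = num / den
    limit = 3 * math.sqrt(p_bar * (1 - p_bar) / den)
    signal = " <- special cause" if abs(p - p_bar) > limit else ""
    print(f"Month {month:2d}: {p:.1%} (centre {p_bar:.1%} +/- {limit:.1%}){signal}")
```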

Larger-scale or more robust evaluations may require more complex techniques, such as quasi-experimental methods including time series or non-randomised control group designs, as well as cost analysis.[28,29]

Quality improvement methods, despite their increasing application to health services,[30] have not been widely considered or used as part of healthcare evaluation but could provide a useful addition to the evaluative techniques that are currently in use.

Conflicts of Interest

None.

References