Quality in Primary Care

Research Paper - (2011) Volume 19, Issue 3

Developing quality indicators for community services: the case of district nursing

Philippa Davies PhD BA*

Research Associate, Academic Unit of Psychiatry

Lesley Wye PhD MSc BA

Research Fellow, Academic Unit of Primary Health Care

School of Social and Community Medicine, University of Bristol, Bristol, UK

Sue Horrocks DPhil MSc BA HV RGN PGCE

Senior Lecturer in Primary Care, Faculty of Health and Life Sciences, Department of Nursing and Midwifery, University of the West of England, Bristol, UK

Chris Salisbury MD FRCGP

Professor of Primary Health Care

Debbie Sharp BA (Oxon) BMBCh MA DRCOG PhD FRCGP

Professor of Primary Health Care, Academic Unit of Primary Health Care, School of Social and Community Medicine,

University of Bristol, Bristol, UK

Corresponding Author:
Philippa Davies
Academic Unit of Psychiatry
School of Social and Community Medicine
University of Bristol, Oakfield House
Oakfield Grove, Bristol BS8 2BN, UK
Tel: +44 (0)117 331 0164
Fax: +44 (0)117 331 4026
Email: Philippa.Davies@bristol.ac.uk

Received date: 25 November 2010; Accepted date: 2 April 2011


Abstract

Background: Quality indicators exist for the acute and primary care sectors in the National Health Service (NHS), but until recently little attention has been given to measuring the quality of community services. The innovative project described in this paper attempted to address that gap.

Objectives: To produce a framework for developing quality indicators for Bristol Community Health services, and to develop a set of initial indicators for Bristol Community Health services using the proposed framework.

Method: After familiarising ourselves with community services and NHS policy, gathering the views of stakeholders and consulting the literature on quality indicators, we designed a framework for indicator development, using the 'test' case of the district nursing service. The long list of possible indicators came from best practice guidelines for wound, diabetes and end of life care, the three conditions most commonly treated by district nurses. To narrow down this list we surveyed and held workshops with district nurses, interviewed service users by telephone and met with commissioners and senior community health managers.

Results: The final set of quality indicators for district nurses included 23 organisational and clinical process and outcome indicators and eight patient experience indicators. These indicators are now being piloted, together with two potential tools identified to capture patient-reported outcomes.

Conclusion: Developing quality indicators for community services is time consuming and resource intensive. A range of skills is needed, including clinical expertise, project management and skills in evidence-based medicine. The commitment and involvement of front-line professionals is crucial.

Introduction

Quality is a central tenet of the modern UK National Health Service (NHS). In 2004, a major quality initiative was launched in NHS general practices with the General Practice Quality and Outcomes Framework (GP QOF),[1] a voluntary pay-for-performance scheme based on achieving targets across a range of organisational and clinical domains, including coronary heart disease, diabetes and hypertension. Although controversial,[2,3] the introduction of the GP QOF has radically changed the quality landscape.

The final report of the NHS Next Stage Review,[4] published in July 2008, declared that quality should be the organising principle of NHS service delivery, with the setting and measuring of quality standards as an integral part. Although the initial focus was to be on acute services, the review included a commitment to address quality within the community services sector through the Transforming Community Services (TCS) programme,[5] which included a draft set of 76 indicators.[6] The recent White Paper Equity and Excellence: Liberating the NHS reaffirmed and reinforced the commitment to quality by charging the National Institute for Health and Clinical Excellence (NICE) with the remit of developing quality indicators for over 140 conditions, some of which are treated in the community sector.[7]

The community services sector in England consists of a range of services such as health visitors for pre-school children, community nurses for older people, learning disability services, physiotherapy and podiatry. At the time of this project, community services in Bristol were commissioned by the primary care trust (PCT) known as NHS Bristol and provided contractually by Bristol Community Health (BCH), the local authority, other local health trusts (e.g. acute hospital and mental health trusts) and 'third sector' organisations such as charities. Having seen the powerful effect of the QOF on general practitioners (GPs), Bristol commissioners wanted to explore a similar approach to incentivising good quality care amongst local community services, particularly those operated by BCH. In March 2008, therefore, NHS Bristol commissioned the Universities of Bristol and the West of England to develop locally relevant quality indicators for community health services. The project had two main aims:

1 to produce a framework for developing quality indicators for BCH services

2 to develop a set of initial indicators for BCH services using the proposed framework.

The ethos of the project was to develop a range of quality indicators that would be meaningful to service providers, reflect the values of service users and carers and provide the basis for commissioning decisions in competitive tenders. The project began in September 2008, lasted for 12 months and was undertaken in a number of inter-related phases. In this paper we report the process of developing the framework and the indicator set.

Method

Familiarisation and planning

The aim of the first phase of the project was to conceptualise and formalise the direction of the project by familiarising ourselves with local community services. This involved three main activities:

1 Learning about community services and relevant NHS policy by:

• meeting and shadowing community health staff members from services including district nursing, physiotherapy, community matrons and learning disabilities

• reading key national and local policy documents on quality indicators and on community services[4,8–13]

• attending a seminar on the new community contract organised by the NHS Primary Care Commissioning Team on behalf of the South West Strategic Health Authority.

2 Gathering the views of stakeholders by:

• holding three focus groups with Heads of Community Services to gather their views on: definitions of quality; the measures, protocols or standards currently in use within their own service areas; and ideas about where we should focus our efforts

• meeting the NHS Bristol Health Interest Group (service users and carers) and subsequently carrying out telephone interviews with six service users and carers

• meeting the Deputy Director of Commissioning from NHS Bristol and her team

• meeting several key BCH staff, including members of the clinical governance, performance management and learning and development teams.

3 Reading and appraising the quality indicators/audit literature by:

• searching for information on developing quality indicators from national and international sources[14–18]

• conducting several searches of the internet and bibliographic databases (e.g. Medline) to define quality, find methodological literature on indicator development and locate evaluations of quality indicator programmes (search terms included: quality indicators, quality of healthcare, quality assurance, quality assessment, clinical indicator, outcome assessment, process assessment, primary health care, community services).

The remainder of this section describes key learning points from those activities that influenced the course of the project.

Beyond a common aim to enable people to receive care in a local setting or in their own home, community services are diverse, covering a broad range of clinical areas and types of staff with differing levels of specialisation. Newer services (such as intermediate care teams) are more likely to have some quality criteria linked to their contractual responsibilities. Community services work to a range of quality protocols, such as clinical guidelines and national service frameworks, which could provide a useful source of quality indicators. A large amount of data are recorded, although services and teams differ in the amount of data recorded, the types of systems used (e.g. electronic or handwritten) and where data are stored.

Quality incorporates many different dimensions. Service users and staff saw good quality care as being about much more than improving clinical outcomes.

Service users saw high-quality services as incorporating clear management structures and organisation; good quality information and timely communication; continuity of care, with particular emphasis on transition points; timely responsiveness and well-trained staff who combine a professional approach with kindness and flexibility. Staff saw good quality care as incorporating clinical effectiveness and safety, embedded in a holistic approach geared towards meeting the patient's physical, mental and emotional needs. They felt care should be patient-centred and empower the patient and their family to manage their own health.

From our review of the literature, we identified several other similar quality initiatives, some of which (such as the QOF[19]) have been well evaluated, but we found very little relating specifically to community services. Existing initiatives could be used as a basis for developing a framework for indicators for community services, taking care to ensure that they are feasible and appropriate for the community setting.

Donabedian[20] described three categories of health care quality measurement – structure, process and outcome. Outcome indicators are intuitively appealing as they represent the ultimate goals of health care and are more easily understandable for some groups, such as service users, than those based on either structure or process. However, many factors beyond differences in care influence outcome (such as socio-economic status, or patient concordance) and outcome indicators can therefore be hard to analyse and interpret. Process indicators are more likely to be within the control of healthcare providers and less susceptible to influence from external factors. To indicate quality, there should be clear evidence that improved processes are related to improvements in important outcomes.
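To make the process/outcome distinction concrete: indicators of either type are conventionally reported as the proportion of eligible patients for whom a criterion was met. The paper does not state a formula, so the following (with purely hypothetical figures) reflects the standard convention in quality measurement rather than this project's own definitions:

\[ \text{indicator score} = \frac{\text{number of eligible patients for whom the criterion was met}}{\text{number of eligible patients}} \times 100\% \]

For example, if 38 of 45 eligible patients on a DN caseload had a documented wound assessment, the indicator would score 38/45, approximately 84%. Defining the eligible denominator precisely is a large part of what makes an indicator 'measurable'.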

Finally, having looked at the literature on criteria of ‘good’ indicators[18,21] and in discussion with our advisory group, we determined the following criteria against which to evaluate the proposed indicators:

• evidence of clinical benefit

• within the scope of influence of clinicians

• recognised as important by service users, commissioners and community service managers

• measurable

• impact on health gain (scale of the healthcare problem, health inequalities)

• low risk of ‘perverse incentives’ or gaming.

Having described the key learning points, Table 1 shows how these influenced our choice of approach.

Table 1: Key learning points and how they influenced our choice of approach

Learning point: Different stakeholders emphasise different aspects of quality.
Approach: Ensure that the agendas of multiple stakeholders are captured, in particular those of service users and carers.

Learning point: Other similar quality initiatives exist, including the GP QOF, which has been well evaluated and emphasises the patient's experience.
Approach: Organise the framework according to the three dimensions used by the QOF – organisational, clinical, patient experience.

Learning point: Community services are very heterogeneous in terms of clinical areas, types of staff and level of specialisation.
Approach: Design a framework for the development of quality indicators that is universal to all services; develop a set of indicators using the proposed framework for one community service only.

Learning point: Newer services are more likely to have contractual quality requirements incorporated.
Approach: Most benefit would be achieved from developing indicators for an older service.

Learning point: Outcome indicators are difficult to interpret as outcomes are influenced by many factors beyond the care given.
Approach: Focus on developing process indicators, ideally those that are evidence based, to ensure clear links with outcomes.

Learning point: Community services staff work within a range of quality guidance, much of which is evidence based.
Approach: Current quality guidance should be used as an initial source of indicators, supplemented where necessary by additional evidence reviews.

Learning point: Community services currently collect a large amount of data about care given.
Approach: Where possible, use currently collected data to measure indicators rather than requiring new data to be collected.

Having decided to focus on one service, we chose district nursing (DN) as it is a large service, the role of district nurses has undergone change in recent years and the staff were enthusiastic about being a 'test' case. We decided to focus on three areas of clinical practice – wound, end of life and diabetes care – as the district nurses identified these areas as constituting the majority of their workload. They also fitted with NHS priorities to improve care for people with long-term conditions and at the end of life.[4]

Development of indicators

In March 2009, we ran three workshops – one for each clinical condition – with members of the Bristol District Nursing Strategy Group (comprising 15 district nurses from all grades with a special interest in promoting and developing the service). Prior to the workshops, we identified 72 potential process indicators from current relevant guidance published by organisations including NICE, the Royal College of Nursing and the Department of Health.[22–28] We selected indicators that should apply universally to a group or subset of service users, were likely to provide an important impact on health outcomes or patient experience, were appropriate to a community setting and were supported by evidence or expert consensus.

At the workshops, we asked attendees to consider the following questions for each indicator:

• Would this quality indicator lead to improved patient health outcomes?

• Would this indicator lead to improved patient satisfaction?

• Does this indicator reflect good quality nursing care?

• Is the indicator something within a nurse's influence?

• Is the indicator measurable? Are data already available?

These questions were derived from the criteria we had determined for evaluating the proposed indicators. Indicators that were not within the influence of district nurses (e.g. equipment), went against current PCT policy (e.g. advance preparation of insulin injections for patients to administer in their own homes), accounted for only a very small proportion of caseload (e.g. surgical wounds) or were not suitable for housebound patients (e.g. structured group education for type 1 diabetes) were dropped. Although measurability was not a top priority at this stage, we eliminated any indicators which district nurses argued could simply not be measured.

To obtain the perspective of a wider range of DN staff, we conducted an online survey of Bristol DN team leaders in April 2009, with a response rate of 76% (n=34/45). Respondents were asked to rate the remaining process indicators on similar items to those asked at the workshops and to indicate the extent to which they believed variability in practice existed for each indicator.

In tandem, we gathered the views of service users, senior managers from BCH, commissioners from NHS Bristol and clinical and academic specialists on the shortlist of indicators via face-to-face meetings, telephone conversations and email correspondence. At this stage, the commissioners from NHS Bristol requested that we also identify some outcome indicators. Consequently, we identified ten potential outcome indicators by looking at the most commonly used outcome measures in major studies of wound, end of life and diabetes care and through discussion with academics, district nurses, the DN Strategy Group and tissue viability, palliative care and diabetes clinical specialists.

In June 2009, our advisory group (which included representatives from NHS Bristol, BCH, the DN service, service users and a GP) met to consider the shortlist of organisational and clinical indicators and potential patient experience surveys. At this stage, there were 32 process and 10 outcome indicators. Prior to the meeting, short reports for each indicator were drafted. The reports detailed the evidence base, consensus and views of front-line staff, perceptions of current practice, views of management and commissioners and the extent to which indicators met local and national priorities. An example report is shown in Box 1. At the meeting, group members discussed the indicators and voted on which ones to keep. Following discussion, 22 process indicators were retained and 10 were deleted. Reasons for elimination included lack of consensus concerning appropriate care, not being considered a useful indicator of quality and not being measurable. Only six outcome indicators were retained. The remainder were rejected due either to problems with measurement or difficulties in attributing changes in outcome to the DN service.

Box 1: Example indicator report

Concurrently with developing the clinical indicators, we also considered how to measure the quality of patient experience. In particular, we wanted to capture two types of information: patient satisfaction and patient-identified outcomes. We conducted telephone interviews with six service users (who were hand-picked by district nurses to represent the views of people receiving care for a wound, diabetes or end of life) and looked at the literature on quality of care for housebound patients.[30] To capture patient satisfaction, we developed a short telephone survey, based on questionnaires developed by the Picker Institute[31] and Dr Foster.[32] This is currently being piloted with 20 district nursing service users. To capture patient-identified outcomes, two tools were selected as potentially suitable – Measure Yourself Medical Outcome Profile (MYMOP)[33] and Goal Attainment Setting.[34] Ten Bristol district nurses are currently testing these tools.

In the final phase (July to September 2009), we ran three further workshops with DN team leaders and relevant clinical specialists to make any final amendments to the wording of indicators and determine how each should be measured. In September 2009, we met with the DN Strategy Group to clarify any outstanding issues and discuss implementation.

Results

The final list included two organisational, 21 clinical and eight patient experience indicators. Of the clinical indicators, five were for diabetes, eight for end of life and eight for wound care. Six were outcome indicators (three each for end of life and wound care, none for diabetes) and the remainder were process indicators. The full list is shown in Box 2.

Box 2: Full list of quality indicators for the district nursing service

Discussion

In this paper, we have outlined our experiences of designing a framework for the development of quality indicators for community services (see Figure 1), using the DN service as a test case. In this section we describe our main findings and the strengths and limitations of the project, contrast what we have done with other similar initiatives and consider implications for future research.

Figure 1: Framework for developing quality indicators in community services

Main findings

Having produced a framework for the development of quality indicators for community services, we then employed this framework to develop a set of indicators for the DN service in Bristol. Potential indicators were identified from best practice guidelines, reviews of the literature and consultation with service users. We consulted with stakeholders regarding the indicators through the use of a survey, workshops, telephone interviews and face-to-face meetings. The final set of quality indicators included 23 organisational and clinical process and outcome indicators and eight patient experience indicators.

Strengths and limitations

A key strength of the project was that the indicators were locally developed. Involvement of front-line staff from the service for which indicators are being developed is important in order to gain a clear understanding of the scope of the service and its core areas of practice, the validity of the indicators chosen and the feasibility of measurement. We involved members of the DN service at all stages of development and their assistance was invaluable. Participants were keen to contribute and to have the opportunity to reflect on their practice. This involvement also increased the sense of ownership of the indicators.

In addition to NHS staff, a broad range of people and organisations were contacted throughout the course of the project. This included individuals working within other quality improvement initiatives, service users and carers, academics with specialist expertise in developing indicators and academics with national reputations in the clinical conditions under study.

The project also encountered several challenges. Defining and measuring the quality of a service necessitates a clear understanding of what that service actually does. However, at the outset of the project, there was no service specification for the DN service in Bristol. District nursing is not a specialist service, but plays a more generalist role in providing care to help patients to live independently within the community. As such, it addresses a wide range of health and social care needs. It was hard to capture the diverse range of activities provided by the service and to identify the core services provided. Prompted by our activities, the DN service now has a very clear remit, which is being used by commissioners to develop a service specification.

We aimed to involve service users at all points during the project. Users of district nursing services are by definition housebound, making attendance at focus groups or stakeholder meetings nearly impossible, so our primary method of consultation was telephone interviews. Most DN patients are elderly and many have cognitive issues. Many technical aspects of the quality of care were not readily understood by service users and we therefore had considerable difficulty in eliciting responses to the indicators themselves. In addition to struggling to know how best to involve users, we were challenged by how to incorporate their views. Aspects of quality cited as important by service users, e.g. kindness, compassion and good communication, were often more difficult to measure than those identified for the clinical conditions.

Despite our intentions, we found that good evidence was not always available for DN activities; however, the use of rigorously developed guidance (such as clinical guidelines) to identify indicators ensured that our process indicators were based on the best available consensus regarding what constitutes high-quality care.

Relationship to previous literature

We decided to use the well-evaluated[19] GP QOF as the starting point for developing our indicators, although initiatives also exist in nursing[35] and in other primary care settings including mental health services,[36] community pharmacy[37] and pre-hospital care.[38]

Indicators can be identified from existing data sets,[35,39] clinical guidelines[36,38] and reviews of the evidence.[1] Our indicators were identified from clinical guidelines. The advantages of this approach for services developing their own local indicators are that well-developed clinical guidelines incorporate systematic reviews of the literature and expert consensus, and that clinical practice recommendations may need little or no development to be turned into good process indicators.

We developed quality indicators using three of the domains utilised by the QOF – clinical care, organisational and patient experience. To develop clinical indicators we focused on the commonest conditions dealt with by the service, an approach also used for the development of indicators for the ambulance service.[38] Additional reasons for the choice of clinical areas include the potential for improved outcomes[38] and the areas' association with considerable morbidity or mortality.[40] The QOF development process allows stakeholders to suggest new clinical topics for inclusion.[41] This represents a useful approach for future extension of our DN indicator set, and if service staff are involved in the choice and prioritisation of new clinical areas this should continue the sense of local ownership.

Whereas other indicator sets have placed more emphasis on assessing structure[37] or outcomes[35] of care, the majority of our indicators measured processes. We had some difficulty developing outcome indicators (for one of our clinical areas – diabetes care – none of the original outcome indicators appeared in the final set). Health outcomes are influenced by many factors in addition to care given, in particular the characteristics of the population.[42,43] If services are judged on outcomes that staff cannot influence, this undermines the credibility of the quality improvement process, is demoralising for staff and can lead to gaming or patient selection.[42] Health services can only improve patient outcomes by improving what they do with those patients (i.e. the processes of care). For these reasons our main focus was on developing process indicators, taking care to ensure that they were clearly linked to health outcomes, preferably by being evidence based.

Although we regularly consulted with stakeholders throughout the project, our consensus processes were less formal than those of other initiatives,[36,39,41] which have conducted several rounds of surveys, utilising techniques such as Delphi[44] or the RAND appropriateness method.[45] We wanted to include a very broad range of individuals in the development of our indicators rather than limit our consultation to representatives on an 'expert' panel. Our consultation with stakeholders was iterative and achieved through email and telephone conversation as well as face-to-face contact. This enabled us to have very regular contact with stakeholders and to target particular individuals or groups as appropriate to specific stages of the development process. We feel that the continued involvement of members of the DN service in the process resulted in an increased sense of ownership.

Initiatives differ in the extent to which they have attempted to address the patient perspective of quality. Shield[36] convened patient focus groups to identify aspects of care considered important from the patient/user perspective in primary care mental health services. The QOF includes a patient experience domain; however, the indicators in this domain focus on the length of the consultation and the use of a patient survey as opposed to its results. Our patient survey addresses patient-centred aspects of quality including listening and communication, empowerment, respect and dignity and continuity of care, as well as overall satisfaction with care received.

A key difference between our project and the other initiatives considered in this section is that our indicators were developed at a local rather than national level. Advantages of this approach are the increased sense of ownership and the setting of more personalised targets. A disadvantage is that, as the data will only be collected in one geographical area, it will not be possible to benchmark the Bristol DN service against other DN services across the nation.

Generalisability

Developing quality indicators is both time consuming and resource intensive and requires a range of skills. Consequently, there are considerable benefits to generalising quality indicators to other services. The framework we have produced for developing quality indicators should be suitable for use in other community services. Similarly, the indicators themselves could be used to evaluate the quality of other UK DN services. Our indicators may be a useful starting point for other services that encounter the same clinical conditions in similar settings, although care needs to be taken to ensure that they are appropriate to other patient groups and to the level of specialty. The transferability of the indicators to other countries may be limited by variations in professional culture, clinical practice and service organisation;[46] however, other initiatives have been successfully adapted.[39,47]

Implications for research

As a result of our activities we have identified a number of questions for future research. These include:

• How can service users be meaningfully involved, particularly in relation to the more technical aspects of delivering high-quality care?

• What are acceptable, measurable and feasible outcome measures for district nursing, where the emphasis is more on care than cure?

• What difference does the implementation of quality indicators make to quality of care?

• Does the implementation of process indicators lead to improvements in outcomes for DN service users?

Finally, this project was partly frustrated by the lack of research evidence to support the indicators. Studies of community nursing interventions for elderly, housebound patients are urgently needed.

Conclusion

This project has encountered several challenges. However, we were able to design a framework for developing indicators in community services and apply that framework to produce a set of quality indicators for the Bristol DN service. These indicators are now being piloted, together with the patient satisfaction telephone survey and two potential tools identified to capture patient-reported outcomes.

Acknowledgements

Special thanks to Jan Huckle, Professional Lead for District Nursing, Simon Hall, David Pugh, Christine Campbell, Sue Ford, Sara Hall, Bethan Rowlands, Jane Cook, Nikki Ashton and the other members of the District Nursing Strategy Group for their enthusiastic support. Thanks to the Bristol district nurses who responded to the survey and the service users who were interviewed. Thanks to NHS Bristol commissioners, Deborah Lee for commissioning this project and Helen England for further support. Thanks to Helen Lockett and to Susan Field for senior management support from Bristol Community Health, and to all the members of our advisory group for their continued commitment to this project.

References

Funding

This work was supported by a grant from NHS Bristol.

Ethical Approval

Ethical approval was not required.

Peer Review

Not commissioned; externally peer reviewed.

Conflicts of Interest

None.