Quality in Primary Care

Research Paper - (2008) Volume 16, Issue 3

Roles, relationships, perils and values: development of a pathway between practice development and evaluation research

Susan M Carr PhD MSc BA RGN RHV HVL RNT PGCE*

Reader in Public Health and Primary Care, Health Improvement Research Programme, Community Health and Education Studies Research Centre

Monique Lhussier PhD MSc

Senior Lecturer, Health Improvement Research Programme, Community Health and Education Studies Research Centre

Jane Wilcockson PhD BA

Research Associate, Health Improvement Research Programme, Community Health and Education Studies Research Centre, Northumbria University, Newcastle upon Tyne, UK

Lesley Robson RGN NDN CPT

Macmillan Clinical Nurse Specialist

Juli Warwick RGN ONC DN (Cert) BA

Macmillan Clinical Nurse Specialist, Newcastle PCT

Pam Ransom RGN

Macmillan Clinical Nurse Specialist

Isabel Quinn RGN

Macmillan Clinical Nurse Specialist, North Tyneside PCT

Corresponding Author:
Dr Susan M Carr
Community Health and Education Studies Research Centre
Northumbria University, Coach Lane (East), Benton
Newcastle upon Tyne NE7 7XA, UK
Tel: +44 (0)191 2156217
Email: sue.carr@northumbria.ac.uk

Received date: 20 December 2007; Accepted date: 25 March 2008


Keywords

evaluation, innovation, participatory action research, reflection

Introduction

The configuration and focus of primary care have seen much innovation and change in recent years. This includes skill mix developments, extended roles, substitution of care across roles and the transfer of location and sector of provision of care. All of these changes are encapsulated in a drive for enhanced quality, efficiency and effectiveness of care. A key element of the quality agenda is innovation and evaluation. This paper presents a case study of the processes involved in evaluating practice developments in primary care that related to the introduction of an end-of-life care pathway. The issues discussed, however, have relevance and transferability to practice development and evaluation generally, and are not anchored exclusively in palliative care.

Palliative care has recently been the focus of much attention in UK government policy.[1–3] This policy promotes the implementation of an end-of-life framework to streamline care and commits to increased investment in this area. One of the challenges faced by professionals in improving end-of-life care is to transfer palliative care expertise from hospice settings and specialist teams to wider communities of healthcare practitioners. The end-of-life integrated care pathway (ICP) developed at Liverpool by Ellershaw and Wilkinson was one strategy developed to facilitate this process.[4]

This paper focuses on the issues arising from the evaluation of the implementation of this care pathway in two sites in the north of England. In both sites, the implementation was evaluated by the implementation facilitators, together with a university-led commissioned piece of research. While the results are reported elsewhere,[5,6] it is this superposition of two evaluative strands that is reported in this paper. The research team was fully aware of the complexity of this task and the inherent difficulties.[7–10] However, there is a paucity of literature that overtly discusses or analyses the challenges faced in this situation by all the stakeholders involved. Such an analysis is required to allow development of a better understanding of the roles, relationships, perils and value of this approach to maintaining and enhancing quality in primary care. In this paper, we argue that despite recent methodological developments in practice evaluation,[11–13] methodological, interpretive and political tensions persist between practice development and evaluation. Our experiences and reflections led to a maturation of the methodology to fit these realities. It is hoped that this will offer some solutions to others engaged in ensuring quality care is achieved through practice development.

Methods

The research brief was very typical, in that the researchers were asked to evaluate the implementation of the ICP in the two sites, by (1) tracking the proportion of people able to die in their preferred location; (2) analysing the needs of staff as they adopted the ICP and their development of expertise in using the pathway; and (3) analysing the social and organisational contexts in which the pathway was being implemented, and the ways in which staff framed patient needs and whether this altered through the use of the pathway. In both sites, the ICP was implemented following a rigorous and reflective protocol that involved data collection and feedback. This process was organised independently from the research team and set up before their involvement, and enabled the implementations to be responsive and adaptive to local circumstances and demands.

In Table 1 we explain the roles and responsibilities of the different stakeholders in this innovation and evaluation endeavour. For clarity we refer to these as the ‘outsider’ and ‘insider’ evaluations, although we acknowledge that this distinction in itself is problematic.


Table 1: The two concurrent evaluations

Although conscious of the difficulties emerging from the time lapse between the initiation of the implementation process and the commission of the evaluation, the research team adopted a participatory action research methodology,[14] to encourage close collaboration between the researchers and the facilitators. The paradigmatic position adopted was founded in critical theory, so that the approach would be democratic, mutually educative and reciprocal. The design of the project involved three concurrent strands of evidence:

(1) collaborative learning groups; (2) stakeholder interviews with project steering group members, ICP implementation facilitators, healthcare practitioners (community- and hospital-based healthcare staff) and bereaved carers; and (3) tracking of outcomes using the number of deaths in preferred location and ICP variance forms. However, the study details and subject are not the focus of this article. Rather, what the authors wish to concentrate on in this article is the process of conducting commissioned research concurrently with a service development that integrated its own evaluation.

This article is the result of meetings between the researchers and the implementation facilitators purposefully convened to review and reflect on the evaluation process and experience. The initiative came from the research team and was readily adopted by the facilitators. The high levels of mutual understanding, trust and communication developed during the project meant that all participants felt able to engage in a joint reflective exercise about the research process. Both researchers and facilitators were encouraged to share their views on the research process, outcomes, impact on practice, and their expectations of one another’s role. The setting was open enough for all parties to express tensions and dissatisfactions. The overwhelming feeling was one of blame-free, mismatched expectations, together with issues around design, control and power over the research process. The details of the discussions were developed into an initial draft of this paper, and interactive exchange between the authors led to the development of this version.

Results

A mismatch of expectations

Embedded in the implementation models of the ICP were regular feedback loops involving practitioners using the pathway, together with audit of paperwork. In comparison, the report submitted by the researchers could feel remote from the realities of the service development. The competing views of specificity and locality versus globalisation and transferability are illustrated as follows: at the onset of the research project, specific local issues were seen as embedded within wider cultural and contextual considerations. As time moved on, and as they became experts in the development of the ICP in their locality, the views of the implementation facilitators tended to become more strongly anchored locally. However, as the outside research project progressed, the researchers attempted to situate the implementation in the national landscape of end-of-life care and in the broader academic body of knowledge on care pathways and knowledge utilisation. This is in accordance with the recommendations of Fontana, who stresses the importance of contextualising the phenomena under study in the economic, political, historical and social forces that sustain them.[15]

The views of the stakeholders evolved from being consonant at the onset of the project to being dissonant at the end. This is represented in Figure 1, where ‘wider context’ refers to the contextual, cultural and political issues and the academic knowledge base in which the ICP was being implemented. This move towards dissonance is a natural consequence of the differing agenda priorities of the stakeholders involved. Practitioners are driven by policy agendas, such as the Agenda for Change in the UK.[16] Under this agenda, they have become accountable for practice change and improvement, and have to produce evidence for their actions. The implementation facilitators were hoping for the research either to provide them with evidence of a positive practice development, or to highlight gaps in the practice development on which they could focus subsequent efforts. For example, they would have preferred to be shadowed in the early days of the implementation, so that the researchers would acquire more of an ‘insider’ view of the implementation process. This is congruent with the recommendations of Warburton et al to consider evaluation at the planning stage of a service development, rather than as an addition once the service has started.[9] This would have given the whole project a more coherent feel but, due to the time lapse between the initiation of the implementation and the commissioning of its evaluation, it had not been possible.


Figure 1: Progression from consonance to dissonance in the researchers’ and practitioners’ perceptions of the research process

In contrast to this, the researchers were concerned with the production of knowledge that would be both of relevance locally and transferable, and could eventually be publishable. They sought to produce a quality piece of research that could be seen as innovative by the research community and could contribute to the body of knowledge around the use of ICPs for practitioners, service managers and academics.

Design, control and power

When action research sits in a critical theory paradigm, as it did in this project, it assumes that knowledge generation is a political activity.[15] Evaluation research is seen as political in nature because social forces shape its development and affect its dissemination. The researchers found it very difficult at times to get healthcare practitioners to engage with the outside research process. They were busy professionals with multiple pressures on their time, leaving little or no time available for research, and in some cases admitted to feeling ‘over-researched’. The researchers, aware of these feelings, were concerned about how their efforts to engage staff might be perceived, bearing in mind that over years of work with different local agencies they had developed research relationships that they wished to nurture. While the commissioner and implementation facilitators had their interests focused in space and time on these two practice development projects, the researchers had to bear in mind their working relationship with local agencies and practitioners, as this was only part of an extensive portfolio of practice development research. For some practitioners, particularly community staff, the end-of-life ICP was only one of numerous exemplars of practice change that had occurred over recent years. They felt familiar with the concept and use of care pathways from many other areas of practice.

From a commissioning viewpoint, a major concern was to obtain value for money, which was perceived as requiring a comparison of the two models of implementation. In the relatively small world of palliative care in the two localities, this comparison, if presented very explicitly, could jeopardise the anonymity guaranteed to all stakeholders taking part in the evaluation. In addition to the lack of evidence concerning differing effects of the two implementation models, it was felt that the pros of an extensive comparison were outweighed by the cons of failing to protect anonymity.

Discussion

The key problem fuelling the dissonance experienced by both practitioners and researchers appeared to relate to the different understandings of the evaluation scope and process among stakeholders. This had the potential to jeopardise the coherence of the concurrent external evaluation of a practice development initiative. We wish to expose and discuss two paradoxical issues that arose around the researchers’ need to be ‘outsiders’ while using a participatory approach to inquiry, and the relationship between the two ongoing evaluation processes.

Finch et al highlight the complexity of a combined evaluation and implementation, in terms of the stability of the research design necessary and the competing flexibility required for a responsive approach to evaluation.[10] The tensions they highlight centre on defining and measuring clinical practice, and on the relationship between evaluation and service provision. This was reflected in the project reported here, in that the different approvals required for the researchers to get involved with the practice development impeded their progress. Paradoxes arose between the perceived need for researchers to retain an objective and detached stance towards the service development, while using participatory approaches in order to produce locally relevant knowledge. What this article contributes is an argument for the necessity of drawing up terms of reference between all stakeholders at the onset of any such evaluation research. This would include a clarification of the roles of all evaluative activities undertaken, their aims and their modes of integration. In the study reported here, the practitioners were running their own evaluative feedback loops and audits of paperwork, in order to inform the stages of implementation on a very local and practical level. The roles and aims of the university-led research had been negotiated and agreed with the commissioners, with an assumption that these communications had been shared and negotiated with the implementation facilitators. There was an assumption that the ‘outside’ evaluation would be complementary to the concurrent ‘inside’ evaluation. It is important here to stress that we did not enter negligently into these assumptions. Rather, such assumptions tend to be hidden or implicit in practice development evaluation, and we wish to present an argument to surface them and make them explicit if the methodology is to be taken forward. The relationship history at both individual and institutional level allowed us to adopt a reflexive approach to our research practice, with the aim of continual improvement and development. Our mutual conclusion was that there has been insufficient articulation of the detail and complexity of this process for it to achieve effective results.

Another paradox lay in the relationship between the implementation facilitators’ evaluation and auditing processes and the university’s research into both implementation projects. The two types of knowledge generation differed in their ultimate goal. The implementation facilitators’ goal was local and of practical relevance, with immediate implications for practice. In this respect, they saw their service as unique in its form, in the process of its implementation and in the context of implementation. Gerrish and Mawson highlight the blurred boundaries between evaluation research and service evaluation, and between action research, clinical audit and practice development.[17] They described service evaluation, clinical audit and practice development as ‘context specific and employed with the specific intention of informing local decision making or local service provision’.[17] By contrast, evaluation research and action research are concerned with the generation of new knowledge that has a degree of transferability beyond the local setting. This distinction is useful in making sense of the paradoxes encountered in the present study. While the researchers sought to generate new and transferable knowledge out of two local projects, the facilitators were concerned with refining the implementation of their particular care pathway, in their particular context and settings.

Manley and McCormack distinguish two types of practice development activities, one ‘technical’, the other ‘emancipatory’.[18] They differ in the worldview adopted in their implementation, are underpinned by different assumptions, and require the use of different methodologies. In technical practice development the activity is considered as a means to achieve the development of services.[18] They identify this as being congruent with Habermas’ technical kind of knowledge. In this case, when staff development occurs, it is a consequence, rather than a deliberate outcome, of the practice development. In contrast, in emancipatory practice development, the development and empowerment of staff is deliberate and inter-related with creating a specific type of culture in which change can happen. This distinction can explain some of the paradoxes identified in the present study. The implementation facilitators might have been conducting a technical practice development, aimed at introducing an ICP to improve the care of the dying patient. From this perspective, staff development happened as a consequence of this, rather than as an intentional outcome, even though the facilitators were aware of it. On the other hand, part of the researchers’ qualitative analysis focused on the journey that some staff had undertaken, from feeling ‘novice’ to ‘expert’ at using the ICP and caring for people at the end of their life. One of the research findings was that for some staff the expertise was there prior to the implementation of the ICP, whereas for others the care pathway was a source of considerable professional development.

The tension therefore seems to lie in the implementation facilitators conducting one type of practice development, and using this for elements of service evaluation, while the researchers were in part evaluating an emancipatory practice development, and using research evaluation methods. The outcomes of the research were not those expected by the implementation facilitators because the focus of their evaluation and that of the researchers differed. This article suggests that integration of concurrent activities is only possible if there is an initial emphasis on agenda sharing and negotiation.

A research pathway for evaluating practice development

A problem remains with Gerrish and Mawson’s clarifications.[17] This has to do with running a service development alongside its evaluation, and with the practical as well as academic issues around action research methodology. Action research indeed made life more difficult for the practitioners, since they had little choice but to participate in the research process. While participation is often seen as a good thing, it may be that in some cases practitioners would rather contract out the research so that it is taken away from their workload. Finch et al explored the methodological issues arising from integrating an evaluation with the development of a telehealthcare service.[10] They highlighted the difficulties for health professionals in managing the often-competing demands of service provision and evaluation, since research is often not considered a priority when competing with practice. In the case of the project described here, the implementation facilitators conducted their own service evaluation, and might have seen the outside evaluation as an added burden of little practical relevance. The researchers became aware of these complexities as they moved into the project, when they had little choice but to continue as they had started.

Another crucial issue highlighted here is the differing timescales of practice and research. Practice development happens in an evolving environment, to which it needs to be responsive. The administrative and bureaucratic duties of the researcher in health care can impede early involvement and adaptation to changes in practice, even when the commissioning of the service and its evaluation happen concurrently. This introduces a time lapse that evaluation research has to contend with, for example when using retrospective interviewing. Finally, when evaluation research runs alongside service evaluation, the aims and expectations of all parties need to be made explicit at the outset of the project. A pathway enabling this is presented diagrammatically in Figure 2. This would enable the explicit integration of the views of researchers, service developers, commissioners, and ethics and research governance boards. This would produce a more coherent, transparent and integrated research process, which would satisfactorily address the needs of both practice-driven and academically driven interests.


Figure 2: A pathway for the evaluation of small-scale practice development (PD) projects

Conclusion

This article exposes often underlying and unrecognised areas of consonance and dissonance between the views of researchers and practice developers in a context of concurrent practical and academic evaluations. In some cases there is potential to progress from dissonance to consonance. In others, the differing worlds and agendas mean that dissonance will remain, but its existence needs to be acknowledged and worked with, rather than ignored.

References

Conflicts of Interest

None.