Quality in Primary Care Open Access


Research Paper - (2008) Volume 16, Issue 1

Utilising theories of change to understand the engagement of general practitioners in service improvement: a formative evaluation of the Lewisham Depression Programme

Anthony J Riley BA(Hons) MA*

Research Fellow, Lewisham PCT, London, UK

Richard Byng MPH, MBBCL, PhD

Senior Clinical Research Fellow, King's College, Department of General Practice and Primary Care, London, UK

Corinna White BSc MA PGDip

Clinical Governance Interface Facilitator

Steve Smith MBBS BSc DRCOG MRCGP

Clinical Advisor

Clinical Governance Resource Group, Dulwich Hospital, London, UK

Corresponding Author:
Anthony Riley
Lewisham PCT, Cantilever House
Eltham Road, Lee, London SE12 8RN, UK
Tel: +44 (0)20 7206 3216
Email: anthony.riley@selssp.nhs.uk

Received date: 9 July 2007; Accepted date: 5 December 2007


Abstract

Background Mental health issues such as depression are commonly treated within primary care. In accordance with UK National Institute for Health and Clinical Excellence (NICE) guidance, primary care practitioners have increased responsibilities for managing mild depression in primary care. This paper reports on an evaluation of a depression programme's marketing strategy and of the factors contributing to practitioner engagement or non-engagement.

Aims The main aims of the study were: to conduct a programme-specific evaluation whose results were fed back iteratively during the early development of the programme and the engagement of general practitioners (GPs), nurses and their practices in this multifaceted programme; and to investigate decisions to participate, or not, in the depression recognition audit as the first point of engagement, in order to understand better what motivates GPs and nurses to become involved, what prevents involvement, and how potential future barriers to improvement might be addressed.

Methods The methods for this formative evaluation fell into three distinct strands: firstly, the iterative development of 'theories of change' (programme-based assumptions) using ethnographic techniques, which led to a list of predefined theories of change and formed the basis of two questionnaires; secondly, questionnaires sent to engaged GPs and nurses, with a separate questionnaire sent to a matched sample of GPs and nurses who were not engaged; finally, feedback of results on an ongoing basis to inform programme development and evaluation and to produce final theories of change for engagement.

Results The response rate to the questionnaire was 54%. Those involved in the audit reported individual motivation, team working, wider networks and the method of engagement as positively influencing their decision to take part. Those not involved focused on practice organisational issues leading to non-participation, such as not having enough time, being understaffed or being busy with other initiatives.

Conclusions The 'theories of change' method helped explore and shape assumptions around the Lewisham Depression Programme's marketing strategy as a basis for future marketing of programme activity. It also helped to develop a joint programme-evaluation forum through which programme team members were empowered to lead aspects of future research within the programme. There are some key messages for future programme makers seeking to engage GPs and nurses, such as the importance of face-to-face practice meetings with a trained facilitator, the positive engagement of practice managers, and a launch meeting for the programme. The results also support targeted strategies to support poorly performing individuals and practice teams.

Keywords

depression, engagement, evaluation, primary care, quality improvement

How this fits in with quality in primary care

What do we know?

Treatment for depression varies widely in primary care. As more depression is managed within primary care, the need for knowledge and training in this area grows. Programme evaluation has historically been distant from programme delivery, with findings reported some time after they could usefully inform a programme's development. Those working in quality improvement and clinical governance use varied strategies for engagement. General practitioner surgeries are mostly independent small businesses, which results in differences in their organisational context.

There are practices that will always be innovators because of their ethos and size, but what we do not know is how and why practices make decisions, who makes those decisions, and what drives certain practices to become involved in such programmes.

What does this paper add?

‘Theories of change’ is a useful method for programme-focused and relevant evaluation, as it enables improvement while programmes are happening. The theories of change approach allows programme makers to go through a process of agreeing a set of programme assumptions, which is in itself invaluable in the development of any programme.

The reasons people get involved in quality improvement in primary care are wide-ranging and complex, although there are some distinct patterns to responses based on practitioners' level of motivation or their surroundings. Engagement strategies should be multifaceted, often requiring personal face-to-face meetings with a trained facilitator in order for practices to buy into the process. Practice managers have a key role in this process, although this can conflict with their more business-focused roles and can act as a barrier to engagement.

Introduction

Ninety percent of individuals receive their treatment for mental health problems in primary care.[1] The NHS Plan emphasises the importance of treating people with mental health problems in primary care,[2] as does the current National Institute for Health and Clinical Excellence (NICE) ‘stepped approach’ to mental health care.[3] When it comes to depression, general practitioners (GPs) often find it difficult to make a diagnosis;[4] depression is often perceived as difficult to treat, and recognition rates vary greatly between GPs.[5,6] While financial incentives have proved to be a valuable means of changing practice in primary care, engaging general practices in other quality-improvement initiatives is a constant challenge. General practice surgeries are small businesses and heterogeneous with respect to culture, size, decision making and skills. Primary care trusts (PCTs) support clinical governance within general practice surgeries, which are accountable for continuously improving the quality of their services. In order to achieve this, PCTs encourage practitioners to follow and develop clinical guidelines, to promote training and research, and to audit and monitor effectiveness. Some of these methods incorporate local incentivised targets and banding, while others have also attempted more collaborative approaches involving facilitated change.

The Lewisham Depression Programme (LDP) originated from a local mental health forum, concerned about variations in the quality of provision in primary care for people with depression.

‘Theories of change’ (TOC) is a methodology developed for use in social programmes, which engages front-line staff to develop micro-theories about how to bring about changes directed at specific goals.[7] Formative evaluation then refines theories in order to develop a more robust change-management strategy.

Methods

As part of a formative evaluation, two researchers (the research team) and two programme team members (the programme team) worked together over time (programme/evaluation synthesis) during the development and early implementation of a depression programme in primary care. Formative evaluation is a method of judging the worth of a programme while the programme activities are forming or happening.[8] Formative evaluation focuses on the process, which at its most basic is an assessment of efforts prior to their completion for the purpose of improving them.[9] The programme/evaluation synthesis process is shown in Figure 1. The programme was initially loosely defined, and its development was influenced by drivers such as local opinion leaders, Patients as Teachers and key stakeholders, coupled with the interactions and input from a joint management evaluation forum. Once the programme was defined, engagement became the first main area in which theories of change were developed. The evaluation aimed to provide empirical evidence of the programme's successes and failures, through systematic observations about levels of engagement and participation in the programme audit, and findings were fed back to the programme through ‘cycles of learning’[10] and reflection.[11,12] The early aspect of the evaluation concentrated on understanding why practitioners engaged or did not engage in the audit following marketing of the programme to general practices, through a launch meeting, telephone calls and practice visits.

Figure 1: Programme/evaluation synthesis

The evaluation had three distinct phases: firstly, the collection of programme assumptions to create first-order theories of change; secondly, the testing of the programme's theories of change using a questionnaire sent to GPs and nurses who participated in the depression-recognition audit, and to a matched sample of some who did not take part (matched by size of practice and number of GPs and nurses); finally, the amendment of the theories of change in the light of the results, to benefit the development of the programme and wider local and national activity.

Developing theories of change ‘naturalistically’

TOC refers to the processes by which change comes about as a result of a programme's strategies and actions. TOC links programme activities to results: models are used to plan and identify gaps in thinking, develop consensus among stakeholders, set realistic expectations, and continually improve performance. TOC has become popular in the area of community development evaluation and, more recently, in health services research.[13-15] TOC relates to how practitioners believe individual, intergroup, and social or systemic change happens, and how, specifically, their actions will produce positive results. According to Weiss, the purpose of evaluating programmes using TOC is to ‘surface’ theories in specific detail, and to allow evaluators to follow these assumptions.[7] For example, there was a perception among some members of the programme board, captured at key meetings, that GPs in practices actively interested in mental health would be more likely to take part in a programme and, conversely, that GPs from single-handed practices would be less likely to engage – such assumptions could be tested for their validity. For Weiss, TOC makes an evaluation ‘programme focused’ and ‘programme relevant’, enabling further programme development. It also helps programme members to define their position and reach consensus on what they are trying to achieve and, because the evaluation is so closely linked to a set of agreed programme assumptions, such evaluations can influence policy and opinion.

TOC was adopted as a method by which assumptions could be collected and tested, and refined as part of the programme’s development. Using ethnographic techniques to develop relevant programme assumptions naturalistically,[16,17] the researcher worked with the programme team members, observing and participating in meetings, practice marketing visits and the project steering group, and examining the project risk and issue logs to inform the TOC development process. During this activity, notes were openly taken, and were presented for verification to the programme team.[18] These notes were subsequently analysed, and verified independently by another researcher, for content on programme assumptions around GP or nurse engagement or non-engagement. A TOC document was developed and presented to the core programme team members and, after an iterative process to ascertain consensus, the notes were presented to the project steering group for validation and the production of an agreed TOC for the programme.

Testing out theories of change using questionnaires

A list of TOC for engagement and non-engagement in the programme's audit came directly out of the above process. The TOC were tested with GPs and nurses who participated in the audit, and with a selection of ‘matched cases’ of GPs and nurses from local practices similar in size to those that participated, using mailed-out questionnaires. Ten practices had taken part in the audit (33 GPs and 12 nurses), and matching cases came from nine practices with similar numbers of GPs and nurses.
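As an illustrative aside only (no such code formed part of the study), the size-band matching described above might be sketched as follows. The practice names, GP counts and the greedy matching routine are all hypothetical; only the size bands (single-handed or two GPs, three to five, six or more) are taken from the Results section.

```python
# Hypothetical sketch: match non-participating practices to participating
# ones by size band. Names and GP counts are invented for illustration.

def size_band(n_gps):
    """Classify a practice by its number of GPs."""
    if n_gps <= 2:
        return "small"
    if n_gps <= 5:
        return "medium"
    return "large"

participating = {"Practice A": 2, "Practice B": 4, "Practice C": 7}
non_participating = {"Practice X": 1, "Practice Y": 5, "Practice Z": 6}

# Greedy matching: for each participating practice, take the first unused
# non-participating practice in the same size band.
matches = {}
available = dict(non_participating)
for name, n_gps in participating.items():
    for candidate, candidate_gps in list(available.items()):
        if size_band(candidate_gps) == size_band(n_gps):
            matches[name] = candidate
            del available[candidate]
            break

print(matches)
# e.g. {'Practice A': 'Practice X', 'Practice B': 'Practice Y', 'Practice C': 'Practice Z'}
```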

The key TOC were developed into two separate questionnaires, one for those who took part in the audit and one for those who did not. Respondents were asked to rate statements based on the TOC in terms of how each had influenced their decision whether to engage with the audit. The list of TOC was turned into statements with ranges of agreement and disagreement. Questionnaires were anonymised using unique respondent identification numbers, with reminder letters sent to non-respondents at two weeks and six weeks. The data were analysed in terms of the percentages agreeing or disagreeing with each statement or, for some of the questions, respondents' reported knowledge or awareness of the programme.[19]
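By way of illustration only, the tabulation of agreement described above could be computed along the following lines; the response categories and data shown here are invented, and this minimal sketch is not drawn from the study's actual analysis.

```python
from collections import Counter

# Hypothetical Likert-style responses to a single TOC statement
# (categories and data invented for illustration).
responses = [
    "strongly agree", "agree", "agree", "agree", "strongly agree",
    "agree", "neither", "neither", "disagree", "disagree",
]

AGREE = {"strongly agree", "agree"}
DISAGREE = {"strongly disagree", "disagree"}

counts = Counter(responses)
n = len(responses)

# Percentage of respondents agreeing or disagreeing with the statement.
pct_agree = 100 * sum(counts[c] for c in AGREE) / n
pct_disagree = 100 * sum(counts[c] for c in DISAGREE) / n

print(f"agreeing: {pct_agree:.0f}%, disagreeing: {pct_disagree:.0f}%")
# agreeing: 60%, disagreeing: 20%
```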

Feedback to improve programme development

The third phase involved feeding evaluation findings back directly in order to shape the programme. Feedback was iterative during the development of the TOC, through the monthly programme steering group, programme-evaluation forums and individual meetings between the researchers and the programme members. The research team worked alongside the programme team when planning implementation, taking part in decision making about engagement and feeding results directly to support decisions. Another aim of the evaluation was to spread findings across the local health sector in South East London and through the wider NHS.

Results

The following section presents descriptive statistics on the level of individual engagement in the audit, together with the TOC developed to explain engagement or non-engagement.

Process of programme engagement

Overall response to the questionnaire was 54% (n = 49): 58% (n = 27/45) for those who participated in the audit and 48% (n = 22/45) for those who did not. Nine out of ten participating practices (17 GPs and nine nurses) responded to the questionnaire. For those not participating, five nurses and 16 GPs responded, representing all nine of the practices that were sent questionnaires. The participating practices comprised one small (single-handed or two GPs), four medium-sized (three to five GPs) and three large (six or more GPs) practices; among those that did not participate there were three small, three medium-sized and four large practices.

A year after the programme launch and the start of training, 59% of all Lewisham practices were involved in the programme. Figure 2 shows the multifaceted engagement strategy.

Figure 2: Programme’s engagement strategy

Box 1 contains the list of the original programme theories about engagement versus non-engagement, ranked according to the percentage of participants who agreed that each theory contributed to their personal experience of participation.

Box 1: Original programme theories of change about engagement versus non-engagement

Incentives to involvement

Personal beliefs about quality improvement and depression

Personal beliefs positively influenced respondent participation in the following areas: viewing audit as a useful tool (78%), seeing it as a way of improving care for depressed patients (88%), lack of confidence in managing patients (42%), and ‘not having an interest in mental health’ (19%). Twenty respondents who reported the positive influence of wanting to improve the way they work with patients also looked forward to new ways of working; 14 of this group thought undertaking an audit would highlight problems and positively influence decisions; nine were influenced by participation being linked to their personal development plan. Seven of the 14 were also very comfortable working with depressed patients.

Practice contextual factors

Practice context influenced decisions in the following areas: good working relationships with colleagues (93%), being involved in other quality-improvement initiatives (85%), being in a practice that allowed time for improvement activity (77%), prioritising mental health issues (69%), and the respondent’s practice having a clear strategy for improvement (78%).

Sixteen respondents reported both mental health priorities and a desire for improvement in their work as having influenced their participation. Ten of these respondents were also confident as well as comfortable with depressed patients, and seven of the 16 reported that all of the above aspects, in combination, positively influenced their decision.

Programme-related factors

For those respondents involved in the audit, 92% reported having at least some knowledge and understanding of the programme; 67% were completely involved in the decision to take part. Programme-influencing factors included the programme's facilitator meeting with participating practices (73%), the programme launch (58%), and being informed by their practice manager (46%). The programme launch and the facilitator meeting in the practice were a combined positive influence towards participation in 12 cases.

Wider context

Having a relationship with the Clinical Governance Resource Group (CGRG), the organisation in the locality also working with local practices on clinical governance projects, positively influenced the decisions of 65% of participating respondents. Indeed, having positive relationships with both the CGRG and the PCT was a strong influence towards involvement in 13 cases.

Disincentives to involvement

The most frequently cited reason for not becoming involved was lack of time to participate in the audit (73%). Other contextual factors included: inadequate practice staffing (55%), inundation with PCT initiatives (50%), the impact of the new GP contract Quality and Outcomes Framework (QOF) (45%), and the practice already prioritising mental health (32%). Being involved in other quality-improvement activity was a disincentive for 59% of those respondents. Seven respondents felt both being understaffed and being involved in other quality-improvement initiatives contributed towards their failure to participate. Seven of those involved with other quality-improvement initiatives also said they had less time to be involved because of the QOF. In fact, six negative decisions were influenced by being inundated with PCT initiatives and not finding PCT initiatives useful. Half of those respondents not involved in the audit reported having no or vague recollection of the programme, and 41% were either uninvolved or unaware of a decision made regarding participation.

Personal factors also contributed to non-involvement. Fourteen percent admitted that involvement would have highlighted poor performance. One respondent reported that a lack of confidence in working with people with depression contributed, and another admitted that having depression themselves contributed to non-involvement.

Conclusion

The small sample size imposes limitations on this study and its conclusions, and this must be taken into account when interpreting the results. Despite this, there was representation from all but one of the 19 practices that were sent questionnaires.

Methodological issues

Theories of change methodology offered clarification of, and agreement on, a set of programme assumptions; the process itself aligned key people to the main objectives of the programme at its outset. TOC methods enabled the evaluation to be formative, programme specific, immediately relevant, and meaningful and beneficial to future engagement in the programme. TOC included various steps to aid the development and understanding of the programme, including audit feedback, marketing, implementation of training for GPs and nurses, and a follow-on audit offered to those who took part initially. The effectiveness of face-to-face meetings with the facilitator, getting the practice manager on board, and other techniques learned in the first phase were tested in the next phase of development, much to the benefit of the programme. Ethnographic techniques were used in the TOC development. The explicit link between the evaluators and those delivering the programme is unusual, both in research and within service development in the NHS. Towards the end of the evaluation, the programme members were also involved in the evaluation, and are planning to take the lead on future collaborative evaluation work.[20]

The questionnaire sought clinician opinion on factors affecting their individual decision to engage in the programme, as well as factors present in practice or more widely. Interactions between the context of the practitioner, practice or wider policy landscape and the marketing of the programme were also explored.

Implications of findings for practice and future programme delivery

The engagement process was particularly successful in three main aspects: the programme launch, engagement with the practice manager and the intervention of a trained facilitator (particularly in face-to-face meetings). Evidence suggests that facilitation can enhance the ability to carry out a programme of this sort.[21,22] The launch meeting provided the programme with coveted early ‘buy-in’. Practice managers have an important persuasive influence in their practices, although not all will become involved, perhaps because they see this as a clinical issue, because of a focus on practice management and finance, or through lack of personal capacity. Being part of a wider, loosely defined network in some cases made participation a less risky experience for practices. This relates directly to network theory, whereby those with open, looser networks are more likely to be involved than practices with closed, tighter networks.[23] Pitching improvement initiatives in areas that are clinically relevant was also important, and meant that the programme did not start with a ‘top-down’ approach.[24]

In the case of this programme, non-involvement did not necessarily mean respondents lacked interest, as programme assumptions might have suggested. In many cases there were organisational reasons why they could not take part, such as not having the time, lack of staff, or being involved in competing activities. Organisational support is important, and a team-based approach to quality improvement fosters such support. The difficulty is motivating staff who do not see their involvement as important or who feel threatened by the extra work.[25] In addition, heavy workload can lead to stress, which inhibits individual receptiveness to improvement initiatives. These factors are likely to change with time, and engagement may occur at a later date, as became evident as this programme continued.

For those involved in the audit, factors such as the individual characteristics of the practitioner, the method of improvement, its marketing, and the history of the practice's existing networks of support and organisational culture were important. This links with the literature on individual and group motivation, where wider system or environmental influences are important for improvement.[26] Other respondents admitted that it ‘looked good’ to be participating. This might indicate that some practices feel under general scrutiny, in addition to being assessed against more specific targets. The range of reasons indicates a tendency for some to want to be innovators, while others are followers or so-called ‘laggards’.[27] Respondents also reported a range of reasons linked to the positive influence of teamwork.[28-30]

There were some surprises that helped programme members and evaluators reconsider their own assumptions to the benefit of future initiatives, which demonstrates clear benefits of the TOC methodology. For example, programme assumptions anticipated that audit participants would be influenced by their personal interest in mental health; however, this factor did not affect their decisions to any great extent. It was also expected that those lacking confidence in mental health would be less likely to be involved in the audit, and again this proved not to be the case. Strained relationships within a practice did not always lead to non-involvement, which was surprising and somewhat against the literature on quality. In addition, the audit focused on change at an individual rather than an organisational level, and therefore was not dependent on a ‘healthy’ organisational structure. A single practitioner reported personal depression as a reason for non-involvement, and illness among potential participants may have been under-reported. In contrast, some practitioners provided a more positive rationale for non-involvement, such as that the programme did not fit with their strategy or that they already felt confident about depression.

Any programme aiming to involve significant proportions of GPs and nurses, particularly those that are underperforming, needs to address barriers to involvement. It may be that PCTs need to develop generic strategies and programmes to support poorly performing individuals and teams, through mentoring, team building and other means before expecting their involvement in individual programmes.

Acknowledgements

We would like to acknowledge Susan Robinson at the Lewisham Research Unit at Lewisham PCT; all members of the Lewisham mental health steering group, without which there would be no depression programme; GP practices in Lewisham for their patient endeavour and professionalism in taking part in the programme activity; the Department of General Practice at King's College, who co-hosted the research team; and all members of the Clinical Governance Resource Group. We would also like to thank the reviewers and the editor of the journal for helpful and constructive comments.

References

Ethical Approval

Ethical approval for the programme's evaluation was obtained from the Lewisham University Hospitals Local Research Ethics Committee (LREC).

Funding

The Lewisham Research Unit, based at Lewisham PCT, funded the Lewisham Depression Programme evaluation.

Conflicts of Interest

None.