Quality in Primary Care Open Access

  • ISSN: 1479-1064

Research Paper - (2006) Volume 14, Issue 3

Maturity Matrix: a criterion validity study of an instrument to assess organisational development in European general practice

Melody Rhydderch BSc MSc C Psychol*

NHS Primary Care Training Fellow

Adrian Edwards MRCP MRCGP PhD

Professor Department of General Practice, Centre for Health Sciences Research, University of Cardiff, Wales, UK

Martin Marshall MSc MD FRCGP FRCP

Deputy Chief Medical Officer, Department of Health, London, UK

Stephen Campbell BA ME (Econ) PhD

Research Fellow, National Primary Care Research and Development Centre, University of Manchester, UK

Richard Grol PhD

Professor

Yvonne Engels PhD

Health Scientist

Glyn Elwyn BA MB BCh MSc FRCGP PhD

Professor Department of General Practice, Centre for Health Sciences Research, University of Cardiff, Wales, UK

*Corresponding Author:
Melody Rhydderch
Department of General Practice
Centre for Health Sciences Research, Cardiff University
Neuadd Meirionydd, Heath Park CF14 4YS, Wales, UK.
Tel: +44 (0)29 2068 7168
Fax: +44 (0)29 2068 7219
Email: RhydderchM@cardiff.ac.uk.

Received date: 29 January 2006; Accepted date: 2 May 2006


Abstract

Introduction The Maturity Matrix is a self-assessment measure of organisational development designed to be used by general practice teams with the aid of a trained facilitator. To date, its content validity, feasibility and reliability have been studied with UK and European practices. There is increasing interest in combining practice-led assessments with externally led assessments such as professionally led accreditation schemes. The aim of this research is to evaluate the criterion validity of the Maturity Matrix when it is used with another, more established quality improvement instrument known as the European Practice Assessment Instrument (EPA).

Design Criterion validity study.

Sample One hundred and forty-five general practices from five European countries (Germany, The Netherlands, Slovenia, Switzerland, UK).

Methods A mapping process was used to identify which of the 11 Maturity Matrix dimensions were similar to EPA items and could therefore be included in the study. The mapping process revealed that 12 EPA items were similar to eight Maturity Matrix dimensions. The included Maturity Matrix dimensions were clinical data, audit of clinical performance, clinician access to clinical information, human resources management, continuing professional development, risk management, practice meetings and sharing information with patients. The EPA items were assessed by an external assessor using a categorical yes/no response. The Maturity Matrix dimensions were each scored on a 1–8 scale. Mann–Whitney U tests for statistically significant differences between the median scores on the Maturity Matrix and EPA were applied.

Results Twelve analyses were conducted. Analyses from six dimensions revealed statistically significant differences between the median Maturity Matrix scores for yes and no responses on the EPA, five of which were significant at P < 0.01 and one at P < 0.05. These six dimensions were: clinical data, audit of clinical performance, clinician access to clinical information, human resources management, practice meetings and sharing information with patients, although for both the human resources management and sharing of information dimensions the evidence was mixed, with some analyses not being significant. The analyses where no significant differences at all were found related to the following Maturity Matrix dimensions: risk management and continuing professional development. The box plot graphs for each analysis revealed that in some cases practices viewed themselves more positively using the Maturity Matrix than when they were rated on similar EPA items by an external assessor.

Discussion The Maturity Matrix possesses partial criterion-related validity when compared with the EPA. Item wording may have been a factor in some of the analyses that were not statistically significant; the method of assessment (self- vs external assessment) may account for the remainder.

Conclusion Although combining self-assessment with external assessments is desirable to increase practices' ownership of the process, thought needs to be given to the way in which they are used alongside each other.

Keywords

clinical audit, criterion validity, general practice, Great Britain, organisation development, performance

Introduction

The Maturity Matrix is a self-assessment measure of organisational development designed to be used by practice teams with the aid of a trained facilitator.[1] The purpose of the Maturity Matrix is to help teams identify those areas where they can improve the quality of organisation supporting the delivery of health care. To date, its feasibility, face validity and content validity have been studied with practices in the UK as well as Germany, The Netherlands, Slovenia and Switzerland.[1,2] The purpose of this study is to evaluate its criterion validity when used alongside an established measure of organisation known as the European Practice Assessment Instrument (EPA).

The belief about the relationship between effective organisation and good-quality patient care is widely accepted and there is interest in organisational quality improvement tools.[3–5] Professionally led schemes dominate the landscape.[6–9] They typically consist of external assessments against indicators using checklists, questionnaires for staff and patients, and interviews. A recent systematic review of organisational assessments in the international peer-reviewed literature suggests this type of approach to quality improvement can be found in The Netherlands, UK, Australia and New Zealand.[10] Other approaches to quality improvement of organisational aspects of general practice take a more practice-led approach. Examples include Clinical Microsystems,[11] continuous quality improvement (CQI) initiatives,[12] the Multi-method Assessment Process (MAP),[13] and the Maturity Matrix.[1]

However, there is an emerging interest in how practice-led assessment instruments can be combined in a co-ordinated way with instruments that are based on externally led assessments.[14–16] Integrating practice-led assessments with externally led assessments may increase ownership of the quality improvement process, and therefore the motivation to improve.[15]

The aim of this study is to evaluate the criterion validity of the Maturity Matrix as an international measure of organisational development using a European measure of practice development known as the EPA.[17] The objectives are firstly to identify items that measure similar aspects of general practice organisation, and secondly to examine agreement between responses to mapped items within practices.

Method

Overview

Between November 2003 and March 2004, practices from nine countries: Austria, Belgium, France, Germany, Israel, The Netherlands, Slovenia, Switzerland (German-speaking part) and the United Kingdom took part in a European study (the European Practice Assessment study) on practice assessment using an instrument called the EPA. The purpose of this wider study was to develop and pilot an indicator set describing effective organisation and management in European general practices. As part of the study, countries were also offered the opportunity to take part in a Maturity Matrix session with participating practices.

The EPA[17,18]

The EPA is an externally led assessment of a general practice organisation. It consists of questionnaires for the doctors and staff that ask about working conditions, education and training, and work satisfaction. In addition, an observer visits the practice and uses a checklist to assess aspects of physical infrastructure such as facilities for disabled patients, the presence of patient leaflets, the examination space and the doctors' bags. On the same day, the observer interviews the practice manager or lead general practitioner (GP) to ask about accessibility and availability, staff policies, job satisfaction, medical equipment, information management, quality and safety, and health promotion activities. The practice manager or lead GP completes a questionnaire that asks about the appointments systems, staff appraisals and inductions, computer security, finances, patient involvement, and practice profile information such as list size, attending population, arrangements for training juniors, and unfilled vacancies. Finally, the practice offers patients the opportunity to complete practice evaluation questionnaires. This is done ahead of the visit and they are returned to the observer on the day, each practice aiming for a sample of at least 30 completed patient questionnaires.

The Maturity Matrix Instrument[1,2,19]

The Maturity Matrix is summarised in Table 1. It is a formative practice-led instrument and consists of 11 dimensions of organisation, each of which is described by an eight-point ordinal scale. Each point on the scale describes a specific stage of practice development for that dimension. The first page of the Maturity Matrix is shown in Table 2. The facilitator liaises with the practice to arrange for as many members of the practice team as possible to be present. A session typically lasts 1 to 1.5 h. The facilitator introduces the Maturity Matrix, talks about the process and takes any questions or comments. They then give a copy of the instrument to each member of the practice team and ask them to complete the Maturity Matrix individually. It takes approximately 10 min for participants to decide where they think their practice is with regard to each of the 11 dimensions. The facilitator initiates a discussion about each dimension in turn, encouraging participants to move from individual perspectives to reach a team consensus about the practice's existing levels of organisational development and how they would like to improve. At the end of the session, the facilitator summarises the main points and agrees the next steps with the practice.


Table 1: The Maturity Matrix and the role of the facilitator: an overview


Table 2: The first five dimensions of the Maturity Matrix.

Sample and data collection

For the EPA study, each country was asked to recruit a convenience sample of 30 practices spread equally across a sampling frame representing single-handed, dual and three- or more partner practices and also to have a mix of rural and urban practices if possible. The Maturity Matrix was offered to each practice in those countries that agreed to participate. For those prac-tices that also used the Maturity Matrix, the observer arranged a Maturity Matrix meeting on the same day as the EPA visit.

Translation and facilitator training

Participating countries nominated a lead facilitator who attended a training session held in Slovenia in June 2003 and then returned to train a small number (one to two) of other facilitators in their own country. The session consisted of watching a video of a facilitator working with a UK practice team. Watching the video was combined with discussion about best practice and reviewing the manual. The lead facilitator then took responsibility for co-ordinating forward and backward translation of the Maturity Matrix into Dutch, Slovenian or German.

Data analysis

Mapping items

Items were ‘mapped’ to identify a subset of items from EPA that were similar to Maturity Matrix items with regard to content, recognising that the process of assessment for both instruments is different. A two-way mapping process was undertaken. First, the EPA was analysed to develop a list of EPA items that were related to one of the Maturity Matrix items. Second, the process was repeated starting with the Maturity Matrix items. MR undertook the mapping process and this was discussed and agreed with GE and AE.

Scoring process

The scoring processes for mapped items on both instruments were examined. Overall the EPA uses yes/no responses, yes/no/not applicable responses, absolute numbers and string examples. The mapped EPA items were all yes/no items. The Maturity Matrix is different from the EPA with regard to its scoring process. Each of the mapped items is part of a scale of eight items that constitute a dimension. Thus, each practice obtains a score of 1–8 for each of 11 dimensions.
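To make the two scoring schemes concrete, the sketch below shows how one practice's mapped responses could be represented side by side. The record layout, field names and values are hypothetical illustrations, not drawn from the study's data: the EPA contributes a categorical yes/no assigned by an external assessor, while the Maturity Matrix contributes a self-assessed ordinal score of 1–8 per dimension.

```python
# Hypothetical record for a single practice (illustrative only).
practice_record = {
    "practice_id": "P001",  # made-up identifier
    # External assessor's categorical response on a mapped EPA item.
    "epa": {"audit_undertaken": "yes"},
    # Self-assessed ordinal score (1-8) on the corresponding dimension.
    "maturity_matrix": {"audit_of_clinical_performance": 6},
}

mm_score = practice_record["maturity_matrix"]["audit_of_clinical_performance"]
assert 1 <= mm_score <= 8  # every Maturity Matrix dimension uses the 1-8 scale
print(mm_score)
```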

Test for statistical significance

The Mann–Whitney U test was applied. This test compares the number of times a score from one of the samples is ranked higher than a score from the second sample. A box plot graph was also created for each analysis to enable comparisons between the distributions of scores on the Maturity Matrix for those practices that scored yes on a mapped EPA item with those practices who scored no on the same mapped EPA item.
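The comparison described above can be sketched in a few lines of Python using `scipy.stats.mannwhitneyu`. The scores below are made-up numbers for illustration, not the study's data: Maturity Matrix scores (1–8) for one dimension are split into two groups according to the external assessor's yes/no rating on the mapped EPA item, and the test asks whether the two score distributions differ.

```python
# Illustrative Mann-Whitney U comparison (hypothetical data, not the
# study's): Maturity Matrix self-assessment scores grouped by the
# external assessor's yes/no response on the mapped EPA item.
from scipy.stats import mannwhitneyu

scores_epa_yes = [6, 7, 5, 8, 6, 7, 5, 6]  # practices rated "yes" on the EPA item
scores_epa_no = [3, 4, 2, 5, 3, 4, 2, 3]   # practices rated "no" on the EPA item

# Two-sided test: is one group's scores ranked higher than the other's
# more often than chance would predict?
u_stat, p_value = mannwhitneyu(scores_epa_yes, scores_epa_no,
                               alternative="two-sided")
print(f"U = {u_stat}, P = {p_value:.4f}")
```

A box plot of the two groups (e.g. via `matplotlib.pyplot.boxplot`) then makes the overlap between the distributions visible, which is how the study inspected agreement beyond the significance test.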

Results

Sample

Altogether, 273 practices took part in the wider EPA study; 145 practices from five countries took part in the Maturity Matrix study: Germany, 39 (26.9%), The Netherlands, 30 (20.7%), Slovenia, 30 (20.7%), Switzerland, 21 (14.5%) and United Kingdom, 25 (17.2%).

Mapping

Twelve items from eight Maturity Matrix dimensions were mapped to 12 EPA items. The results of the mapping process can be seen in the first four columns of Table 3. The observer checklist, the observer interview and the questionnaire for the practice manager/lead GP were the three sources of mapped items from the EPA.

Statistical analysis

Twelve analyses applying the Mann–Whitney U test were conducted. Six analyses were not statistically significant. Five analyses were statistically significant at P < 0.01, and one analysis was significant at P < 0.05. Table 3 contains a summary of each analysis, describing the:

• EPA subject area, item and source of data

• Maturity Matrix dimension and item

• Mann–Whitney U value (significance level) and sample size.

Statistically significant differences were found between EPA items and the following equivalent Maturity Matrix dimensions:

1 clinical data

2 audit of clinical performance

4 clinician access to clinical information

6 human resources management (partial evidence)

9 practice meetings

10 sharing information with patients (partial evidence).

Table 3: Summary of results.

This means that the median Maturity Matrix score for practices that were assessed as not having achieved an EPA item was significantly different from the median Maturity Matrix score for practices that were assessed as having achieved an EPA item, and indicates criterion validity for these dimensions.

A statistically significant difference was found for one out of the three comparisons made between EPA items and dimension 10: sharing information with patients, of the Maturity Matrix; and for one of the two comparisons made between EPA items and dimension 6: human resources. This indicates partial criterion validity for these dimensions.


Figure 1: Box plots for the statistically significant results.


Figure 2: Box plots for the results that were not statistically significant.

The dimensions where no statistically significant differences between the Maturity Matrix and EPA items were found were dimension 7: continuing professional development; and dimension 8: risk management. Thus criterion validity was not demonstrated for these dimensions. The Maturity Matrix dimensions that could not be included in the analysis due to a lack of overlap with EPA items were dimension 3: use of guidelines; dimension 5: prescribing; and dimension 11: learning from patients. Overall, these data indicate partial criterion validity of the Maturity Matrix as it relates to the EPA assessment.

The box plots for the statistically significant results are contained in Figure 1. The box plot graphs for analyses 1 and 3 suggest that for dimensions 1: clinical data and 4: clinician access to clinical information, there was consistency between the way practices interpreted the Maturity Matrix scale and the score (yes/no) given to them by the EPA assessor. For the remaining four analyses where a significant difference between the median responses was found, the distributions illustrated by the box plot graphs indicated that some overlap existed whereby practices who were assessed by EPA as not having undertaken audit or annual appraisals, or having kept records of meetings, or having a selection of books and videos available for patients, self-assessed themselves more favourably using the Maturity Matrix. The lack of consistency between responses to the Maturity Matrix and EPA was more marked for the six analyses that were not significant (Figure 2).

Discussion

Principal findings

This study found some evidence of criterion validity for the Maturity Matrix dimensions with 6 out of 12 analyses achieving statistical significance for mapping with EPA items. The Maturity Matrix dimensions where a significant difference between the median scores was found were:

1 clinical data

2 audit of clinical performance

4 clinician access to clinical information

6 human resources management (partial evidence)

9 practice meetings

10 sharing information with patients (partial evidence).

In the case of the human resources management and sharing information with patients dimensions, the evidence was mixed with only some analyses suggesting a significant difference.

The Maturity Matrix dimensions where a lack of significant differences between the median scores was found were:

6 human resources management

7 continuing professional development

8 risk management

10 sharing information with patients.

Strengths and weaknesses of the study

The main limitation of this study is that a convenience sample of practices took part in the study and this makes it difficult to generalise based on the findings. In addition, three Maturity Matrix dimensions could not be included in the study due to lack of overlap with EPA items.

The wording of the items may provide a possible explanation for four of the six analyses where a lack of statistical significance was found. In analysis 4, it may be that practices view additional training (EPA) as different from induction training (Maturity Matrix). In analysis 6, it may be that practices do incorporate personal learning plans into appraisal records (EPA), but that they do this for less than 50% of staff. In analysis 7, it may be that practices do analyse critical incidents (EPA item), but that they do not use team meetings as the vehicle for this activity. In analysis 8, it may be that practices do take action on critical incidents, but do not feel that the resulting change is substantial enough to be classed as organisational, and may rather involve a change in a management process. Analyses 10 and 11 are less easy to explain, as the wording of the EPA and Maturity Matrix items is very similar, although the method of assessment is different. An alternative explanation is that practices were self-assessing themselves more leniently using the Maturity Matrix than when they were externally assessed using the EPA.

Existing literature

The Maturity Matrix is a self-assessment measure, while the EPA is based on external assessment methods. Self-assessment of performance is thought, in the primary care literature, to be more susceptible to faking, or 'gaming', where performance data are distorted for the purpose of achieving an incentive or reward.[20] In the absence of any specific reward or punishment, we might have expected practices to 'fake good' by opportunistically assessing themselves more leniently using the Maturity Matrix than when assessed by an external assessor using the EPA. There were some indications of this from the following dimensions: clinical audit; sharing information with patients; continuing professional development; and human resources management. Alternatively, 'faking bad' is also a recognised phenomenon whereby performance is under-reported to achieve extra resources or support. Practices could have assessed themselves more harshly using the Maturity Matrix than when assessed by an external assessor using the EPA. However, in this study, there was no tangible reward or incentive to distort performance data and we did not find evidence of 'faking bad' in the self-assessments of the Maturity Matrix.

Implications for practice

There is an argument that levers for change and improvement often pull practices in different directions. Buetow noted the tension that exists between external assessment and internal development.[16] The more practices are subjected to external assessment, the less ownership and motivation to improve exists. Combining external assessment with self-assessment may restore feelings of ownership of the process by practices and encourage those practices that typically stay away from accreditation processes to engage with quality improvement activities. This study found criterion validity for the limited overlap in content. In addition, both instruments have relatively different purposes (formative vs summative). Given these two features of limited overlap and different purposes, there is likely to be added value from combining the two.

Research

It is important to ask the following research question: How does the combination of external assessment and self-assessment improve on the validity that can be achieved by either method alone? Using more than one method of assessment increases the cost and thus there is a need to understand more fully the potential gain from using the two assessments together. Also, we need to identify whether there is a better order in which to use both assessments. Our study employed a design where both assessments were used on the same day by the same person. What happens if the Maturity Matrix is used before the EPA assessment and by a different person? Will instances of distortion be found? Will practices feel more engaged in the overall assessment process and therefore feel a greater degree of ownership when taking part in the EPA assessment? And what will the yield be for practices participating in this twin form of practice assessment?

Conclusion

This study suggests that the Maturity Matrix possesses partial criterion-related validity when compared with a more established method for assessing the quality of organisation in general practices. It also found some evidence that practices distorted responses to the Maturity Matrix measure. Reasons may include that the assessments took place on the same day and that the EPA assessor was also the Maturity Matrix facilitator. Although combining self-assessment with external assessments is desirable to increase practices' ownership of the process, thought needs to be given to the ways in which incentives and rewards may impact upon different assessment methods, particularly self-assessment.

Conflicts of Interest

None.

References