Quality in Primary Care Open Access


Research Paper - (2004) Volume 12, Issue 4

Feasibility, appreciation and costs of a tailored continuing professional development approach for general practitioners

Sjoerd O Hobma

GP Researcher, Department of General Practice, Centre for Quality of Care Research

Paul M Ram

GP Researcher, Department of General Practice, Centre for Quality of Care Research

Frits van Merode

Professor of Logistics and Operations Management in Health Care, Department of Health Organisation Policy and Economics

Cees PM van der Vleuten

Professor of Educational Development and Research, Department of Health Organisation Policy and Economics

Richard PTM Grol

Professor of Quality in Health Care, Department of General Practice, Centre for Quality of Care Research, University of Maastricht, The Netherlands

*Corresponding Author:
Sjoerd Hobma
Department of General Practice, University of Maastricht
P.O. Box 616, 6200 MD Maastricht, The Netherlands.
Tel: +31 43 388 28 08
Fax: +31 43 367 13 17
Email: sjoerd.hobma@hag.unimaas.nl

Received date: 5 August 2004; Accepted date: 25 August 2004


Abstract

Background There is a growing tendency to develop more complex interventions for continuing professional development (CPD) of physicians in order to enhance effectiveness. Besides their effectiveness, the feasibility of interventions and the appreciation of stakeholders are increasingly regarded as key features for their implementation in daily educational routines.

Objective To study the feasibility and appreciation of a tailored approach to CPD in which general practitioners (GPs) work in small groups to improve demonstrated deficiencies.

Design Cohort study.

Setting General practices in The Netherlands.

Participants Forty-three volunteering GP participants.

Main outcome measures The ability of GPs and supporting staff to perform the intervention; costs per hour; participants' appreciation of (aspects of) the educational intervention.

Results GPs accept and are able to perform a CPD intervention that starts with a needs assessment and subsequently supports the individual self-directed learning process. GPs need on average 22.3 hours for the assessments, small-group meetings and work in their practices. Costs are €117.56 per hour. The mean appreciation is 6.8 on a 10-point scale. Appreciation of and participation in the intervention depend on the topic studied.

Conclusions The approach towards CPD is feasible and acceptable. It requires a context in which sufficient resources are available with respect to budget, educational materials and skilled support staff. Furthermore, GPs must be genuinely interested in the topic studied and probably also in the specific approach.

Keywords

continuing medical education, feasibility, primary healthcare

Introduction

According to current insights, continuing professional development (CPD) of practising physicians should be self-directed and learner-centred, and should focus on the actual needs of physicians or healthcare organisations.[1–8] Translating these demands into an educational intervention requires some initial form of assessment of current competence or performance, preceding the intervention. The assessment results must then be used to define improvement goals, which may differ for each person or healthcare organisation. Learning plans should be directed at these specific goals and should be individualised, depending on differences in personal or local situations and needs. This educational ideal places great demands on both learners and organisers of CPD, since individual learning paths for physicians and the combination of several educational methods both add to the complexity of educational interventions.

In order to implement these interventions successfully, they should be effective and feasible, and physicians must be willing to participate.[9] We studied these aspects in an educational intervention for general practitioners (GPs), based on the principles described above. The intervention consisted of assessments, feedback and a personalised programme aiming at improvement of the care that is actually delivered. This paper focuses on the feasibility and appreciation of the intervention. Feasibility is defined as the extent to which participants and the support staff involved could perform the intervention as planned, and as the time and budget required. Appreciation is defined as the value of the intervention to the participating GPs. We were also interested in which factors related to these primary effect measures. We therefore studied various aspects of the intervention that we expected to be of influence, such as the instruments and procedures used, the contents of the programme and the acceptance of the underlying guidelines. The effectiveness of the intervention has been reported earlier.[10,11]

Methods

Subjects

A letter informed all GPs in the south of the Netherlands (n = 1066) about the study and asked whether they were willing to participate. We aimed at a minimum of 40 participants in the intervention. The questionnaire was returned by 670 GPs (63%), of whom 174 (26%) showed interest in participating, and a maximum of 100 actually subscribed. This paper focuses on the GPs who were randomised to the intervention arm of the study and who participated actively in the intervention (n = 43).

Educational intervention

The intervention (see Figure 1) consisted of assessments, feedback and a personalised programme of self-directed learning, aiming at improvement of daily care. We used assessments to select aspects of care in need of improvement. Three topics were used as examples in the project to study our approach: doctor–patient communication, assessed by video observation during daily surgeries, and competence in the management of diabetes mellitus and of ear, nose and throat (ENT) disorders, assessed by written knowledge tests.[10,12] By relating assessment results to predefined standards, we selected two topics for each GP.[13,14] Following this procedure, GPs were allocated to doctor–patient communication (n = 37), ENT disorders (n = 39) and diabetes mellitus (n = 10). Participants received individual feedback on their test results: a feedback report with their mean scores, the predefined standards and the scores of 10–15 colleagues, including detailed feedback on subitems, to allow identification of specific personal improvement goals. A GP colleague, who had attended a three-hour training session for this specific purpose, visited participants in their own practices, elucidated the written feedback and gave instructions to prepare for the first small-group meeting. Subsequently, participants could attend a series of seven small-group meetings (four to six GPs) of two hours each. Trained GP tutors and a manual supported them through a step-by-step programme aiming at improvement of daily practice. Participants first defined an individual goal for improvement. On the basis of subsequently determined barriers, they made personal development plans (PDPs); these were readjusted if necessary and were evaluated in the second part of the programme.


Figure 1: Flowchart of the educational intervention

Support staff

A total of 50 collaborators were involved in the intervention (see Table 1).


Educational materials

Educational material on the topics, e.g. textbooks and written materials from the Dutch College of General Practitioners based on current guidelines, was supplied if necessary, and support was offered in case of specific needs, e.g. a meeting with an expert.

Time schedule

Assessments were planned over a maximum period of three months and feedback within the following two weeks.[15] Seven months were allowed for the small-group meetings that started in the week after the feedback; participants could tailor these to their demands. Assessments, feedback and the first small-group meetings were planned within a limited timeframe to enhance effectiveness.[15,16] This meant that assessment results of all participants had to be available at the end of the planned assessment period to allow distribution of the participants over small groups.

Credits

As an incentive, four credit points were offered for the assessments and feedback and two credit points for each meeting attended. GPs need 40 credit points per year for recertification.
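
For reference, the maximum number of credit points obtainable through full participation follows directly from these figures; the short sketch below (illustrative only) restates that arithmetic.

```python
# Illustrative arithmetic only: maximum credit points obtainable through full participation,
# based on the figures stated above (4 points for assessments and feedback, 2 per meeting, 7 meetings).
credits_assessment_feedback = 4
credits_per_meeting = 2
number_of_meetings = 7

total_credits = credits_assessment_feedback + credits_per_meeting * number_of_meetings
print(total_credits)  # 18 of the 40 credit points GPs need per year for recertification
```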

Dependent variables

Feasibility was operationalised firstly as the ability of participating GPs and support staff to perform the intervention as planned, secondly as the costs related to the intervention and thirdly as the appreciation of the intervention by the participants.

Performing the intervention

To answer the question of whether the participants and support staff were able to perform the intervention as planned, we investigated participation in the intervention, i.e. the extent to which participants completed the programme and were able to do so within the scheduled timeframe, and the number of drop-outs. As criteria were not available in the literature, we defined the following three criteria for feasibility (a simple check of these criteria is sketched after the list):

• a majority (>50%) of the participating GPs had to be able to complete all stages of the intervention including self-reported improvement on one or two topics

• a limited drop-out of participants due to organisational problems within the planned time schedule (maximum 5%)

• a maximum assessment-and-feedback time of four months and a maximum period of eight months between feedback and the last small-group meeting.
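
As announced above, these three criteria can be expressed as a simple check. The sketch below is ours and purely illustrative; the function name, arguments and example values are assumptions, not part of the study protocol.

```python
# Illustrative sketch of the three feasibility criteria listed above; names, arguments and the
# example values are our assumptions and are not part of the study protocol.

def feasibility_criteria_met(n_participants: int,
                             n_completed: int,                  # completed all stages, incl. self-reported improvement
                             n_organisational_dropouts: int,
                             assessment_feedback_months: float,
                             feedback_to_last_meeting_months: float) -> bool:
    completion_ok = n_completed / n_participants > 0.50               # majority completed
    dropout_ok = n_organisational_dropouts / n_participants <= 0.05   # at most 5% organisational drop-out
    schedule_ok = (assessment_feedback_months <= 4.0                  # assessments and feedback within four months
                   and feedback_to_last_meeting_months <= 8.0)        # last small-group meeting within eight months
    return completion_ok and dropout_ok and schedule_ok

# Purely illustrative call; the values do not reproduce the study data.
print(feasibility_criteria_met(43, 32, 0, 3.5, 7.0))  # True
```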

Costs

Costs were determined as the required investment in euros per hour for every participant. They were running costs and were determined for a unit of the size considered ideal for implementation; the variation in costs for units of other sizes was also calculated.

Costs were calculated as follows. The numerator included the costs of the participant, the time invested by support staff, and the costs of instruments and materials needed in the intervention. All time dedicated to the intervention (assessment, feedback, small-group work and improvement activities in the GP's own practice) was included in the denominator. The ideal unit size was assumed to be six GPs. Variations in costs for other unit sizes were calculated by adding or removing two GPs, assuming shifts in costs due to needs in terms of video equipment, GP tutors, etc.
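
In other words, the cost per hour is the ratio of all monetary costs for one unit of six GPs to all hours those GPs dedicate to the intervention. A minimal sketch is given below; the amounts are purely illustrative placeholders, not the study's cost data.

```python
# Minimal sketch of the cost-per-hour ratio described above.
# All amounts are illustrative placeholders, not the study's actual cost data.

def cost_per_hour(monetary_costs_eur: list[float], participant_hours: list[float]) -> float:
    """Sum of costs for one unit (numerator) divided by total participant hours (denominator)."""
    return sum(monetary_costs_eur) / sum(participant_hours)

unit_costs = [
    6 * 22.3 * 77.0,  # opportunity costs of six participants (mean 22.3 h, valued at €77/h)
    2500.0,           # support staff time (illustrative)
    900.0,            # instruments and materials, annualised (illustrative)
]
unit_hours = [22.3] * 6  # hours dedicated by each of the six participants

print(f"€{cost_per_hour(unit_costs, unit_hours):.2f} per hour")
```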

GP fees (participants, feedback-giving GPs, GP tutors, observers of video consultations and researchers) were assumed to correspond to an hourly rate of €77.[17] The costs of the participants are opportunity costs, as GPs cannot deliver patient care during educational activities; costs for the support staff are true costs. Secretarial costs were calculated on the basis of the current salary scales (€13/h). Fees for installers of video equipment were calculated on the basis of the true costs (€18/h).

The costs of all instruments and materials needed were calculated using the annuity method, assuming a depreciation period of five years for all instruments and materials except the knowledge tests and the accompanying standards, for which a period of three years was assumed. The costs were determined assuming a running period of one year with normal use of the instruments and materials.
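
The annuity method converts a purchase price into a constant equivalent annual cost over the depreciation period, of which one year is charged to the intervention. The sketch below illustrates the method; the discount rate and prices are assumptions for illustration, as the paper does not report them.

```python
# Sketch of the annuity method used for annualising instrument and material costs.
# The 4% discount rate and the prices are assumptions for illustration; the paper does not report them.

def equivalent_annual_cost(purchase_price: float, years: int, rate: float = 0.04) -> float:
    """Spread a purchase price over `years` as a constant yearly amount, accounting for interest."""
    if rate == 0:
        return purchase_price / years
    annuity_factor = (1 - (1 + rate) ** -years) / rate
    return purchase_price / annuity_factor

print(round(equivalent_annual_cost(1500.0, years=5), 2))  # e.g. video equipment, five-year depreciation
print(round(equivalent_annual_cost(600.0, years=3), 2))   # e.g. knowledge tests and standards, three years
```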

Appreciation

Participants’ appreciation of the intervention as a whole was investigated by questionnaire and rated on a scale ranging from one to ten, the traditional Dutch scoring scale in education. We regarded a score of seven or higher as satisfactory, and a score of five or lower as insufficient.

The appreciation of various aspects of the intervention was given on a five-point Likert scale, ranging from 1 (= dislike) to 5 (= high appreciation). These aspects were related to the intervention, i.e. the assessment methods used, the predefined standards for desired performance, the procedure of allocating GPs to topics on the basis of assessment results, the written and personal feedback, and the support in the small-group meetings. Other aspects were related to the clinical contents that were used to study our intervention, i.e. the perceived relevance of the topic for improvement and the agreement with the existing guidelines on the topics used.

Instruments

Questionnaire

Participating GPs completed a questionnaire shortly after the intervention. This questionnaire contained a checklist to investigate the adherence to the programme in the small-group meetings. It also contained open questions concerning the required time-investment for participation in the intervention as well as its general appreciation, and closed questions on the appreciation of various aspects of the intervention. Besides these questions, participants were invited to comment on the intervention and specific aspects.

Adherence to the time schedule

We carefully monitored and registered the performance of each aspect of the intervention up to the first small-group meeting, and compared this to the initial time schedule.

Time registration

All support staff prospectively registered the time required for their tasks to determine the costs of the intervention.

Statistical analysis

Analyses were done using SPSS. Differences between appreciations of different aspects of the intervention (the assessment methods, the predefined standards, the allocation to the selected topics, the perceived relevance of the topic for improvement and the agreement with the existing guidelines on the topics used) were analysed by t-tests.

The relationship between the appreciation of various parts of the intervention (the assessment methods used, the predefined standards for desired performance, the procedure for allocating GPs to topics on the basis of assessment results, the written and personal feedback, the support in the small-group meetings, the perceived relevance of the topic for improvement and the agreement with the existing guidelines on the topics used) and participation was investigated by correlating the appreciations with the presence or absence of self-reported improvement.
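
As an illustration of these analyses, the sketch below shows how such comparisons might be run outside SPSS: a t-test between the appreciations of two aspects and a point-biserial correlation between appreciation and the binary improvement outcome. The data are invented example values, and treating the t-test as paired is our assumption.

```python
# Illustrative sketch only; this is not the study's SPSS analysis and the data are invented.
# A t-test compares appreciation ratings of two aspects; a point-biserial correlation relates
# appreciation (5-point Likert) to the presence or absence of self-reported improvement.
import numpy as np
from scipy import stats

video_appreciation = np.array([4, 5, 3, 4, 4, 5, 3, 4])            # invented ratings
knowledge_test_appreciation = np.array([3, 4, 3, 3, 4, 3, 2, 4])   # invented ratings

t, p = stats.ttest_rel(video_appreciation, knowledge_test_appreciation)  # paired design is an assumption
print(f"t = {t:.2f}, p = {p:.3f}")

improved = np.array([1, 1, 0, 1, 1, 1, 0, 1])  # 1 = self-reported improvement, 0 = none (invented)
r, p = stats.pointbiserialr(improved, video_appreciation)
print(f"point-biserial r = {r:.2f}, p = {p:.3f}")
```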

Results

The questionnaire was returned by 42 of the 43 GPs. Data on all participants’ time schedules and complete time-registrations of the support staff were available.

Feasibility

Performing the intervention

Of the 42 participants, 14 reported successful completion of the programme on two topics and 18 reported completion on one topic, indicating that the majority (74%) were able to perform the approach. There was no drop-out from the study due to organisational problems. The planned period of three months for the assessments and feedback was feasible with regard to the written knowledge tests, but in the video assessment 20 participants exceeded the four months: 11 needed one extra week and the remaining nine GPs exceeded it by 2–6 weeks, mainly due to absence of GPs or increased workload due to absence of colleagues over the holidays. The planned period of seven months for the small-group meetings was sufficient for all groups. Table 2 shows the participation in the different steps of the intervention.


Costs

Besides the time dedicated to assessment, feedback and small-group meetings (mean 13.1 ± 4.4 h), the participants reported having invested a mean of 9.2 ± 9.9 h in implementing the improvement plans in daily practice. Costs per hour are given in Table 3. Total costs were €117.56 per hour, of which 65% were opportunity costs. Diminishing or enlarging the assumed ideal unit of six GPs by two GPs increases the costs by €8.07 per hour (6.8%).
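
For orientation, the reported figures can be combined into a rough cost per participant; the short sketch below only restates numbers given in this paper and rounds the result.

```python
# Back-of-the-envelope check that only combines figures reported in this paper.
hours_total = 13.1 + 9.2                   # assessment/feedback/meetings plus own-practice work ≈ 22.3 h
cost_per_hour_six_gps = 117.56             # reported cost per hour for the ideal unit of six GPs
cost_per_hour_other_units = 117.56 + 8.07  # units of four or eight GPs

print(round(hours_total, 1))                           # 22.3 hours per participant
print(round(hours_total * cost_per_hour_six_gps))      # ≈ €2622 per participant
print(round(hours_total * cost_per_hour_other_units))  # ≈ €2802 per participant
# i.e. roughly €2700 per participant on average, the figure quoted in the Discussion.
```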


Appreciation

The mean appreciation of the intervention as a whole on a 10-point scale was 6.8; 30 GPs (70%) gave an appreciation of 7 or higher and four GPs (9%) gave an appreciation of 5 or lower. Four participants did not give a score for the approach as a whole, as we had asked them to do, but made a distinction between topics; these GPs appreciated doctor–patient communication more than ENT disorders (8.0 vs. 5.0).

The appreciation of various parts of the intervention was investigated and possible differences and effects on the participation were analysed.

Assessment

The video assessment of actual performance was appreciated more than the knowledge tests with regard to the identification of stronger and weaker sides (3.93 ± 0.75 vs. 3.34 ± 0.84; P < 0.005), being an incentive to improve (3.86 ± 0.89 vs. 3.08 ± 0.89; P < 0.001) and being a means to select topics for improvement (3.98 ± 0.79 vs. 3.30 ± 0.84; P < 0.0001). There were no differences in appreciation between the two knowledge tests on these aspects. Despite the substandard scores on doctor–patient communication and ENT disorders, the criterion-referenced standards were accepted by the participants, with no differences between the assessments (3.62 ± 0.89 vs. 3.71 ± 0.75 vs. 3.29 ± 0.93). Participants reported a neutral attitude towards the selection of topics for improvement on the basis of the assessment results. Participants in the groups on doctor–patient communication were, however, more satisfied about their allocation than those on ENT disorders (4.09 ± 1.03 vs. 3.57 ± 1.91; P < 0.05).

Feedback

The participants appreciated the written feedback positively. The oral explanation by a GP colleague was considered to be non-threatening but to contribute little to the information given on paper. Some participants would have preferred feedback from the GP who had done the video observations.

Small-group meetings

Both the tutor and the manual were evaluated positively; the role of the tutor was described as indispensable. As many as 27 participants planned to use the approach of the small-group meetings, focusing on the process of improvement, again in the future, while eight definitely did not want to experience this educational approach again. Their reasons were that it was too laborious and too time-consuming; as one participant stated, 'CME should be relaxing'. Two participants explicitly reported that the CPD approach was too time-consuming, and the total of seven small-group meetings was too many according to 22 participants. The remaining GPs regarded the time needed for the approach as acceptable. Several participants complained that the number of credit points was not proportionate to the hours invested.

Besides the instruments and procedures, the clinical content of the meetings and the underlying guidelines were expected to be of influence. Participants reported ENT disorders to be less appropriate for our approach, compared to doctor–patient communication (P < 0.05). The acceptance of existing guidelines was comparable for all topics.

Various parts of the intervention were expected to influence participation in the intervention. We found a significant correlation between the extent to which the intervention was performed and the perceived suitability of our approach for the topic of interest. This was found for both ENT disorders (r = –0.41; P < 0.05) and doctor–patient communication (r = –0.45; P < 0.01); there were not enough data on diabetes for these calculations. The opinions of participants on the other aspects studied, such as the assessments used and the attitude towards the content of the guidelines involved, showed no significant correlations.

Discussion

The intervention is feasible when assessed against our criteria, costs are €117–125 per hour (adding up to a mean of about €2700 per participant), and a majority of 70% of our participants valued the approach positively. However, our findings must be interpreted carefully and we will therefore discuss these three aspects in more detail.

Our approach appeared to be feasible, as 74% of our participants successfully completed the intervention. This is an important finding, as it shows that GPs are able to pass through a self-directed learning trajectory based on weaknesses revealed by assessments. In our study, a group of doctors interested in this approach was studied in only two learning cycles; repeated use of the approach may increase the number of completed learning cycles. Our study was, however, done in a selected group of GPs interested in our approach. The capability of doctors less interested in this approach may be different, as the approach relies heavily on the individual activities of the participating doctors. Therefore, to understand more about the feasibility of implementing our intervention in educational routines for practising doctors, more research is needed.

With regard to costs, we deliberately did not formulate explicit criteria for acceptability, nor did we perform a cost–benefit analysis. As we intended our approach to be generically applicable to all subdomains of general practice, both the costs of the assessments and the benefits will differ strongly depending on the topic. Therefore, these must be weighed considering the local situation. Whether costs are acceptable also depends on the budget available and on the resources already spent on education. In our study, 65% of the costs are opportunity costs. These opportunity costs are small compared to those of many popular educational conferences, for which GPs are out of practice for days, or sometimes a week. In the latter case, the opportunity costs amount to 40–50 hours, while GPs receive no more than 20–30 hours of education. Furthermore, an important part (69%) of the running costs consists of fees for GP colleagues, who may themselves also benefit from being involved. These effects were, however, outside the scope of our study.

To implement the intervention successfully in daily practice, the micro-economy of the doctors in the target group must be considered. A number of participants complained about the time investment, inside and outside their practice, and about the reward in terms of credit points. The benefits for participating doctors, in terms of work satisfaction, credit points or money, must be made clear, also for the hours spent in practice, as these hours diminish the costs per hour and are necessary for effectiveness.

The majority of our participants appreciated the approach investigated, and only a small minority assessed the intervention negatively. These are the opinions of a selected group. Before recruiting participants for the study, we found that 26% of the GP population were interested in our approach. Whether this reflects a large group with limited interest or a promising group of early adopters is not clear.[18]

We found considerable variation in the appreciation and performance of the intervention between the topics studied.[10,11] This conflicts with our aim to develop a generically applicable approach for all subdomains of general practice. Assessments were used to detect areas of care in need of improvement, as doctors tend to follow courses on topics they are already good at.[19–21] Our findings suggest that the approach we used may not be the final solution to this problem.[5]

Acknowledgements

This work was funded by the Netherlands Organisation for Health Research and Development.

Conflicts of interest

None.

References