Quality in Primary Care


Research Paper - (2010) Volume 18, Issue 1

Effect of computerisation on Australian general practice: does it improve the quality of care?

Joan Henderson BAppSc (HIM) (Hons) PhD (Med)*

Senior Research Fellow

Graeme Miller MBBS PhD FRACGP

Associate Professor, Medical Director

Helena Britt BA PhD

Associate Professor, Director

Ying Pan BMed MCH

Senior Analyst

Family Medicine Research Centre, Discipline of General Practice, School of Public Health, University of Sydney, Australia

Corresponding Author:
Dr Joan Henderson
Family Medicine Research Centre
University of Sydney, Acacia House
Westmead Hospital, PO Box 533
Wentworthville, NSW 2145, Australia
Tel: +61 2 9354 0618
Fax: +61 2 9845 8155
Email: joanh@med.usyd.edu.au

Received date: 1 September 2009; Accepted date: 6 December 2009


Abstract

Background: There is an assumption expressed in the literature that computer use for clinical activity will improve the quality of general practice care, but there is little evidence to support or refute this assumption.

Aim: This study compares general practitioners (GPs) who use a computer to prescribe, order tests or keep patient records with GPs who do not, using a set of validated quality indicators.

Methods: BEACH (Bettering the Evaluation and Care of Health) is a continuous national cross-sectional survey of general practice activity in Australia. A sub-sample of 1257 BEACH participants between November 2003 and March 2005 was grouped according to their computer use for test ordering, prescribing and/or medical records. Linear and logistic regression analyses were used to compare the two groups on a set of 34 quality indicators.

Results: Univariate analyses showed that computerised GPs managed more problems; provided fewer medications; ordered more pathology; performed more Pap smear tests; provided more immunisations; ordered more HbA1c tests and provided more referrals to ophthalmologists and allied health workers for diabetes patients; provided less lifestyle counselling; and had fewer consultations with Health Care Card (HCC) holders. After adjustment, the differences attributable solely to computer use were prescribed medication rates, lifestyle counselling, consultations with HCC holders and referrals to ophthalmologists. Three other differences emerged: computerised GPs provided fewer referrals to allied health workers, detected fewer new cases of depression, and fewer of them prescribed antidepressants. Twenty-three measures failed to discriminate before or after adjustment.

Conclusion: Deciding on 'best quality' is subjective. While the literature and guidelines provide clear parameters for many measures, others are difficult to judge. Overall, there was little difference between these two groups. This study has found little evidence to support the claim that computerisation of general practice in Australia has improved the quality of care provided to patients.

Keywords

clinical computer use, family practice, quality indicators, quality of health care

How this fits in with quality in primary care

What do we know?

There is an assumption that using a computer will improve the provision of care in many areas of the health system, including general practice. To date there is little evidence that this is the case, and US studies have produced little support for this claim.

What does this paper add?

This paper applies a set of quality indicators to nationally representative general practice (GP) activity data to compare the practice behaviour of GPs who incorporate a computer in their clinical activity with those who do not. The results from the Australian setting support those so far reported from the USA: that to date, the use of a computer for clinical activity has done little to improve the quality of primary care.

Introduction

There is an underlying premise apparent from the literature of the past three decades that using a computer will improve the provision of care in many areas of the health system, including general practice. Current claims reference previous work, which paper trails often show to be based on suppositions made some 15 to 20 years earlier; for example, Garrido et al (2005) state that 'Electronic health records reduce uncertainty by providing greater accessibility, accuracy and completeness of clinical information than their paper counterparts', referencing a 1991 General Accounting Office (GAO) report.[1] The GAO report (p. 25) actually concluded that 'automated systems show promise' and that speed of record transfer and accuracy of information 'should improve the quality of care', but added that 'no fully automated medical record system exists, so the strengths and weaknesses of such a system have not been documented and are not clearly understood'.[2]

In Australia, practice computerisation has been encouraged since the 1990s through incentives and accreditation processes, and a variety of clinical software products is available. Computers may be used for a range of functions, from administrative purposes only to full incorporation into all levels of clinical activity.[3,4] The use of computers (at all), and the level of use for clinical activity, is entirely discretionary both between and within practices.

However, in 1999 Richards et al reported that they had found little hard evidence that the general use of computers in Australia improves efficiency at individual practice level or benefits the health sector generally, or that improving outcomes was an aim when designing information systems.[5] This is not just a local trend. Heathfield et al proposed that decision makers in the UK and the USA may be being 'swayed by the general presumption that technology is of benefit to health care and should be wholeheartedly embraced' while the evidence to either support or oppose this supposition is still scarce.[6]

Mitchell and Sullivan (2001) undertook a systematic review of the world literature on primary care computing from 1980 to 1997.[7] Most studies identified some positive effects of computerisation in selected areas, but they found only 17 assessing the impact of computers on patient outcomes, a number they concluded was insufficient to measure the real benefits for patients.[7] While there is some evidence that computer use is associated with individual improvements to the quality of care,[8–10] there is also emerging evidence that the computer, while solving problems in some areas, is causing or accentuating problems in others.[11–14]

Recently, there has been increasing demand for information on health care quality from health economists, policy makers, health professionals and consumers.[15] While this is an international trend, the approach to quality measurement and the capacity to validly assess quality vary widely between countries.[16,17] The use of quality indicators has become accepted as a reasonable approach for assessing quality. The focus has shifted in recent times from process measures, which reflect what was done, to outcome measures, which show the effect of what was done.[18]

Over the past 15 years, computer use by Australian GPs has increased such that over 97% have a computer available at their practice,[19] and it is therefore timely to investigate how the incorporation of the computer into clinical activity affects the quality of care provided by GPs. In a previous study we reported the extent and utilisation of computer use in Australian general practice.[3] This study aims to compare GPs who use a computer in their clinical activity with those who do not, on a range of quality indicators developed for use with primary care data, to determine whether the use of the computer has improved the quality of care provided to patients.

Methods

This study is an analysis of data from the national BEACH (Bettering the Evaluation And Care of Health) program. The BEACH methods are reported in detail elsewhere, but in summary BEACH is a continuous, national, paper-based, cross-sectional survey of general practice activity in Australia. Approximately 1000 GPs participate annually, recruited from a national rolling random sample drawn by the Australian Government Department of Health and Ageing (DoHA). Participating GPs provide demographic information about themselves and their practices, including questions about their computer use, on a GP profile questionnaire. They also provide patient demographics and encounter information for 100 consecutive, consenting, unidentified patients. The age–sex distribution of patients at BEACH encounters is compared with that of all GP encounters claimed through Medicare, Australia's universal healthcare scheme, and shows excellent precision.[20]

The 1319 GP participants who completed the BEACH survey between November 2003 and March 2005 were divided into two groups as follows:

1. Clinical computer users: defined as those who use a computer for clinical functions, e.g. prescribing and/or test ordering and/or medical records, with or without internet and/or email.

2. Non-clinical computer users: defined as those who use a computer for administrative functions and/or internet and/or email only, without using the clinical components available in the medical software (prescribing, test ordering, medical records).

Those GPs who did not use a computer at all were also included in the latter group. Following univariate analysis, the extent to which resulting differences between the two groups were explained by other variables was identified through a series of adjustments using logistic and multiple regression.

Quality indicators

In the absence of an evidence-based model for determining how computers would alter behaviour and affect quality, we approached the problem from the perspective of 'best quality' and compared clinical computer users and non-clinical computer users to see which group performed 'best'. To make this assessment, we measured their behaviour against a set of quality indicators applicable in a primary care setting. A set of 36 quality indicators validated in a previous study using BEACH data[21] was used to compare the practice behaviour of GPs assigned to the two groups.

Hypotheses

Based on the assumption that the use of computers will improve health outcomes, the overall hypothesis was that clinical computer users would provide a 'better' standard of care. The individual hypothesis and rationale for each domain of care were also based on this assumption. Arrows in Tables 1(a) and 1(b) specify the direction hypothesised as 'better' quality for each indicator.

[Table 1(a): Quality indicators compared between clinical computer users and non-users]

[Table 1(b): Quality indicators compared by GPs' computer use for test ordering]

The average length of consultation in minutes was calculated from recorded start and finish times for a sub-sample of patient consultations with GPs in each group. Encounters were designated as either long or prolonged based on their Medicare Benefits item number.[22] Problems managed by GPs were classified according to the International Classification of Primary Care, Version 2.[23] Medications were classified using an in-house system called the Coding Atlas for Pharmaceutical Substances (CAPS).

Statistical analysis

Conventional simple random sample methods were used for the GP-based statistical analyses. Results are reported as proportions when describing events that can occur only once per GP or per patient encounter, but as rates per 100 encounters where events can occur more than once per consultation. As the patient encounters were a cluster-based sample, we adjusted the 95% confidence intervals and P values for the single-stage clustered study design using procedures in SAS version 8.2[24] and STATA version 8.0.[25]
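To illustrate the kind of design-based adjustment involved, the Python sketch below computes an events-per-100-encounters rate with a confidence interval widened for single-stage clustering by GP, using a Taylor-linearised ratio estimator of the sort implemented in the SAS and STATA survey procedures. This is a minimal sketch with simulated inputs, not the study's actual code.

```python
import numpy as np

def clustered_rate_ci(events_per_gp, encounters_per_gp, z=1.96):
    """Rate per 100 encounters, with a 95% CI adjusted for a
    single-stage clustered design (Taylor-linearised ratio estimator)."""
    e = np.asarray(events_per_gp, dtype=float)       # event count for each GP
    n = np.asarray(encounters_per_gp, dtype=float)   # encounters recorded by each GP
    k = len(e)                                       # number of GP clusters
    rate = e.sum() / n.sum()                         # overall events per encounter
    resid = e - rate * n                             # cluster-level residuals
    var = k / (k - 1) * np.sum(resid ** 2) / n.sum() ** 2
    se = np.sqrt(var)
    return 100 * rate, (100 * (rate - z * se), 100 * (rate + z * se))

# Hypothetical example: 1069 GPs, each recording 100 encounters.
rng = np.random.default_rng(0)
events = rng.poisson(8.0, size=1069)                 # simulated per-GP event counts
rate, ci = clustered_rate_ci(events, np.full(1069, 100))
print(f"{rate:.1f} per 100 encounters, 95% CI {ci[0]:.1f} to {ci[1]:.1f}")
```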

We made univariate comparisons of characteristics of the GPs in each group (listed in Box 1), eliminated those highly correlated with others, and used simple logistic regression to identify those associated (P < 0.10) with clinical computer use. We used stepwise procedures in logistic regression analysis to identify characteristics independently related to clinical computer use (P < 0.05). A series of models was built on a hierarchical basis, with predictors fitted depending on the outcome of interest. Predictors included GP and practice characteristics, and patient, morbidity and treatment outcomes. The models used for each outcome are specified in the footnotes to Tables 1(a) and 1(b). Logistic regression was used to analyse categorical outcomes, and linear regression for continuous and ordinal outcomes, after adjusting for potential confounding variables (a sketch of such a model follows Box 1).

[Box 1]
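A minimal sketch of how one of these adjusted comparisons could be set up in Python with statsmodels is given below. The study itself used SAS and STATA, so this is illustrative only: the data are simulated and every column name is hypothetical. Cluster-robust standard errors by GP stand in for the single-stage cluster design adjustment, and exponentiating the coefficient on the computer-use indicator gives an adjusted odds ratio of the kind reported in Tables 1(a) and 1(b).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical GP-level attributes (one row per GP cluster).
gps = pd.DataFrame({
    "gp_id": np.arange(50),
    "clinical_use": rng.integers(0, 2, 50),   # 1 = clinical computer user
    "gp_female": rng.integers(0, 2, 50),
    "practice_size": rng.integers(1, 10, 50),
})

# 100 consecutive encounters per GP, echoing the BEACH design.
df = gps.loc[gps.index.repeat(100)].reset_index(drop=True)
df["patient_age"] = rng.normal(45, 20, len(df)).clip(0, 100)
df["outcome"] = rng.binomial(1, 0.3, len(df))  # placeholder indicator event

# Logistic regression for a categorical outcome, adjusting for confounders.
model = smf.logit(
    "outcome ~ clinical_use + gp_female + practice_size + patient_age",
    data=df,
)
# Cluster-robust covariance by GP approximates the clustered-design adjustment.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["gp_id"]})

print(np.exp(result.params["clinical_use"]))           # adjusted odds ratio
print(np.exp(result.conf_int().loc["clinical_use"]))   # and its 95% CI
```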

Test ordering

The denominator for clinical computer users included GPs who used a computer for any clinical purpose, but a number of GPs in this group did not use the test ordering function of their clinical software. Therefore, we compared test ordering behaviour for the set of all clinical computer users and their counterparts in the first instance, and then repeated the investigation for the eight test ordering quality indicators, with the GPs grouped according to their use of the test ordering function of their software.

Results

Individual computer use was determined for 1257 of the 1319 GPs. There were 1069 GPs in the clinical computer use group (106 900 patient encounters) and 188 in the comparison group (18 800 encounters). There were 901 GPs who reported using computers for test ordering, and 356 who did not. The subsets of consultations with start and finish times recorded included 34 633 consultations with clinical computer users and 6084 consultations with non-clinical computer users. Using the sample sizes of 106 900 and 18 800 encounters, the intra-cluster correlation was calculated as 0.079, giving a variance inflation factor of 8.821. A two-sample comparison of proportions therefore had a power of 0.8002 (80%) to detect a 3.3% difference between estimates, and of 0.8987 (90%) to detect a 3.8% difference between estimates.
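These power figures can be reproduced approximately from first principles. For a cluster size of 100 encounters per GP, the variance inflation factor (design effect) is 1 + (100 − 1) × 0.079 = 8.821, and dividing each group's encounter count by it gives effective sample sizes for a conventional two-proportion power calculation. The Python sketch below illustrates this; the 50% baseline proportion is an assumption made for illustration, since the paper does not report one.

```python
import math
from scipy.stats import norm

m, icc = 100, 0.079                    # cluster size; intra-cluster correlation
deff = 1 + (m - 1) * icc               # variance inflation factor: 8.821

n1, n2 = 106_900, 18_800               # encounters in each group
n1_eff, n2_eff = n1 / deff, n2 / deff  # effective sample sizes

def power_two_proportions(p1, p2, n1, n2, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test of proportions."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return norm.cdf(abs(p2 - p1) / se - norm.ppf(1 - alpha / 2))

print(round(deff, 3))  # 8.821
# With an assumed 50% baseline, a 3.3% difference gives roughly 80% power
# and a 3.8% difference roughly 90%, matching the figures reported above.
print(power_two_proportions(0.500, 0.533, n1_eff, n2_eff))  # ~0.80
print(power_two_proportions(0.500, 0.538, n1_eff, n2_eff))  # ~0.90
```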

GP and practice characteristics

Compared with their counterparts, GPs who used a computer for clinical activity were significantly more likely to: be female (P = 0.001); be younger (P < 0.001); have had fewer years in general practice (P < 0.001); have trained for their primary medical degree in Australia rather than overseas (P = 0.001); be Fellows of the RACGP (P < 0.001); work in larger practices (with five or more GPs; P < 0.001); work in accredited practices (P < 0.001); and have a practice nurse at their major practice address (P < 0.001). They were significantly less likely to: bulk-bill Medicare for all their patients (P < 0.001); work in solo practices (P < 0.001); or work in major cities or in other metropolitan areas (P = 0.0002).

Quality indicators

Results of the univariate and multivariate analyses for the quality indicators are shown in Table 1(a) for clinical computer users and for non-users. Table 1(b) shows the indicators reanalysed according to GPs’ computer use for test ordering.

In total, the GPs who used a computer in their clinical activity differed from the GPs who did not on only seven indicators. The unadjusted regression coefficients showed almost twice as many differences, but for many of these results the adjusted regression coefficients showed that the differences were explained by influences other than the GP's use of a computer. Significant differences attributable to clinical computer use included: consultations with Commonwealth Health Care Card (HCC) holders (adjusted odds ratio 0.83; P = 0.035, results not tabulated); overall prescribing rate; antidepressant prescribing; detection of new cases of depression; referrals to ophthalmologists for diabetes patients; referrals to allied health professionals; and provision of lifestyle counselling.

Tables 2(a) and 2(b) provide an overview of all the quality indicators examined, including the hypothesis for each. They show the indicators that did not discriminate at the univariate or multivariate level of analysis, or both (marked with a single X). For other indicators, a tick (✓) shows where differentiation occurred between clinical computer users and their counterparts: the indicator discriminated and the hypothesis was accepted in the unadjusted results, after statistical adjustment, or both. For some indicators, the hypothesis was accepted at the univariate level (indicated with a tick (✓)) but ultimately rejected following adjustment (marked with a single X). Where the hypothesis was rejected and the outcome was a reversal of the hypothesis, the result is marked with a double X (XX).

[Table 2(a)]

[Table 2(b)]

Discussion

On balance these results suggest that the use of a computer has had little effect on the quality of care provided by the GPs to their patients. After adjustment for other characteristics, clinical computer users performed 'better' on three of 34 quality indicators, and 'worse' on four. There was no difference in their performance over the remaining 29. Where the indicators were used to compare test ordering behaviour through the computer, only one difference emerged; in this instance, those ordering tests through their software performed 'better' than their counterparts. In total, from 44 indicators, clinical computer users performed 'better' on four and 'worse' on four, while no differences were discernible for the remaining 36.

What was different?

Why the groups differed on these particular indicators and not others is not readily apparent. One explanation for the lower overall prescribing rate of clinical computer users is that some clinical software in Australia defaults to the maximum number of repeats allowed under Pharmaceutical Benefits Scheme rules when a prescription is written.[12] Unless the default is manually overridden, patients would be given the maximum number of medication repeats allowable, and would not need to return for prescriptions as frequently. However, why these GPs prescribe fewer antidepressant medications for patients with depression, but do not differ on rates for other medications, is unclear.

Decision support tools may have influenced the computer users to provide more referrals to ophthalmologists for patients with diabetes, yet the referral rate to allied health professionals overall was lower for computerised GPs, and it would seem unlikely that these tools would single out ophthalmologists over other healthcare providers. Added to their higher ordering of HbA1c tests, it could be inferred that the clinical use of a computer results in a GP providing better care for diabetic patients. Electronic reminders are effective in modifying physician behaviour[26] and it might follow that GPs who are exposed to electronic reminders for diabetic patients in their software respond and therefore act differently. However, GPs who do not use the test ordering function of their software would still be exposed to these reminders, so electronic flags alone are unlikely to have caused this difference in test ordering behaviour.

We hypothesised that clinical computer users would detect more cases of depression, yet they detected fewer new cases – a reversal of the hypothesis. However, their overall management rate of depression did not differ; neither did their rate of counselling of patients with depression. Depression is an illness which is not easily detected, particularly in situations where the patient may have difficulty disclosing the full extent of their symptoms.[27] Managing a problem once it has been diagnosed and making a new diagnosis are two different scenarios, and in this case perhaps the division of consultation time between patient and computer, and the diversion of attention from the patient, reduces the opportunity to detect the unspoken signals which GPs often rely on in these situations.

Computer proficiency should also be considered with regard to the length of consultation. The GP groups were identical on this indicator, suggesting no difference in the level of quality given to the patients, but it may also mean that, for less computer-proficient GPs, the extra time taken dealing with the computer leaves less 'quality' time with the patient within a consultation of the same duration.

Similar studies

A similar cross-sectional study in the USA examined the association of electronic health record (EHR) use with 17 ambulatory care quality indicators, with similar results.[28] For 14 of the 17 indicators, there was no difference in performance between visits with or without the use of an EHR. On two indicators the clinicians using EHRs performed 'better', and on one indicator they performed 'worse'. Other US studies examining the relationship between EHR use and quality of care also found no association.[29,30]

Strengths and limitations

This study employs a method for the collection of nationally representative data which has proved to be a valid and reliable approach to providing an accurate picture of the behaviour of Australian GPs.[31] The large number of observations allows good statistical power for most outcomes. We have reported no difference between the GP groups for some of the variables measured, but acknowledge that where differences are very small, there may have been too few cases to reliably accept a null hypothesis. For example, the rate of other investigations (see Table 1(a)) compared 1201 cases in the clinical computer user group with 169 cases in the group of their counterparts. These investigations occurred at the comparatively low rates of only 11 and nine per 1000 patient encounters respectively.

Computer use was self-reported in this survey, and some GPs may have inaccurately reported their level of usage through recall bias or socially desirable responding. However, the questions about computer use were incorporated within a larger set of questions about a variety of GP characteristics, and we have no reason to assume that their responses were inaccurate to a degree that would have compromised this research.

As an entity, quality is difficult to measure. The use of quality indicators is an inexact science at best, and the incorrect application of inappropriate quality indicators cannot produce a valid or reliable result.[32] However, the set of indicators used in this study was designed originally in consultation with the Royal Australian College of General Practitioners (RACGP) National Manager, Quality Care and Research and the RACGP National Standing Committee, Research, drawing on Australian and international guidelines for preventive activities. These included the RACGP 'Red Book', the Canadian guide to clinical preventive health care, and guidelines for National Health Priority areas such as the National Heart Foundation cardiovascular disease guidelines.[21] The quality indicators were validated in previous work done for the RACGP[21] and are suitable for use with the BEACH data source used in this study.

Future implications

One of the difficulties in clearly assessing the relationship between the inclusion of computers in the clinical process and the quality of care is that GPs are not using the computer to its full potential. In many instances Australian GPs do use the computer to print prescriptions, order tests or generate referral letters, but for a variety of reasons do not use the electronic health record function available through their software. Many are still heavily reliant on paper records.[3,4] The situation appears similar in the USA.[33]

We were able to examine GP practice behaviour where computers had been included in the clinical process, but within the computer use group there was wide variation in levels of usage. It may simply be that computer use has so far made little difference to the quality of care because many individuals do not use the computer to its full capacity. The cross-sectional data available through the BEACH method, while applicable to the process measures utilised in this study, cannot provide individual patient outcomes. Complete, longitudinal data would be needed to allow the application of indicators that could provide outcome measures: ironically, information that might become available once GPs use their computers exclusively and comprehensively. At such a time this type of investigation could be repeated, but given the improbability of finding a comparison group of non-clinical computer users, other methods will need to be devised.

Acknowledgements

The GPs and patients who contributed to this study participated voluntarily without remuneration and the authors wish to thank them for their generosity.

References

Funding

The BEACH program is funded by a consortium of Australian government agencies, government-funded quality of care agencies and pharmaceutical manufacturers. During the data collection period for this sub-study, the BEACH program was funded by the Australian Government Department of Health and Ageing, the National Prescribing Service, Abbott Australasia, AstraZeneca (Australia) Pty Ltd, Janssen-Cilag Pty Ltd, Merck Sharp and Dohme (Aust) Pty Ltd, Pfizer Australia Pty Ltd, Sanofi-Aventis Australia Pty Ltd, the Office of the Australian Safety and Compensation Council (Australian Government Department of Employment and Workplace Relations) and the Australian Government Department of Veterans' Affairs.

Ethical Approval

The BEACH program was approved by the Human Research Ethics Committee of the University of Sydney (Reference No. 7185) and the Ethics Committee of the Australian Institute of Health and Welfare.

Peer Review

Not commissioned; externally peer reviewed.

Conflicts of Interest

Organisations fund BEACH through individual research agreements with the University of Sydney, which provide complete research autonomy for the team conducting the BEACH program. Funders provide input into the development of the research design through an Advisory Board; however, the final decisions regarding research design, data collection, analysis and reporting of findings remain with the principal investigators, under the ethical supervision of the University of Sydney and the Australian Institute of Health and Welfare (AIHW). The funding organisations supporting BEACH have no editorial control over any aspect of the resulting papers, including the presentation of results. No financial support for the production of this paper was accepted from the funding organisations. None of the authors of this paper has a financial conflict of interest.