Jane Koziol-McLain PhD RN
Professor of Nursing
Interdisciplinary Trauma Research Centre, Auckland University of Technology, Auckland, New Zealand
Denise Wilson PhD MA (Hons) BA (SocSc) RN
Associate Professor Maori Health, School of Public Health & Psychosocial Studies, Auckland University of Technology, Auckland, New Zealand
Ngaire Rae MPH BEd Applied
Health Promotion Advisor, Manaia Health Primary Health Organisation, Whangarei, New Zealand
Faye Clark MBChB FRNZCGP
Recognition and Response Partner Abuse Trainer in Primary Care
Doctors for Sexual Abuse Care, Auckland, New Zealand
Maori Planning and Funding Manager, Waitemata District Health Board, Auckland, New Zealand
Received date: 10 June 2011; Accepted date: 6 November 2011
Background: Family violence is identified as a significant yet preventable public health problem internationally and in Aotearoa, New Zealand. Despite this, responses to family violence within New Zealand primary healthcare settings are generally limited and ad hoc. Along with guidelines and resources, a systems approach is indicated to support a safe and effective response to those who experience violence in the home.
Aim: To modify an existing United States evaluation tool to guide implementation of family violence intervention programmes within New Zealand primary healthcare.
Methods: Twenty-nine expert panellists, representing diverse family violence prevention and intervention organisations across New Zealand, participated in three rounds of a modified Delphi method to identify ideal primary healthcare family violence response programme indicators. In Round One, tool scope and context issues for New Zealand were identified; in Round Two, expert panellists identified ideal indicators and rated indicator importance; and in Round Three, expert panellists attended a one-day workshop to achieve consensus on tool categories, indicators, scoring and measurement notes. The developed tool was subsequently piloted at six volunteer primary healthcare sites for performance, clarity and usefulness.
Results: The final tool encompasses 143 indicators organised within 10 categories. Pilot sites found the tool and evaluation experience useful in guiding programme development.
Conclusion: The evaluation tool represents a best practice standard enabling focused family violence intervention programme development and quality improvement within primary healthcare settings, and a standardised evaluation tool may be useful in guiding programme development. Future evaluations will enable individual and national benchmarking activities, using category, overall and target scores to measure progress across settings and over time.
Keywords: domestic violence, modified Delphi technique, New Zealand, primary healthcare, quality indicators
Family violence is a significant public health problem both internationally[1–3] and in Aotearoa, New Zealand.[4–6] Despite growing recognition that family violence assessment and intervention are part of general practice work, formal programme responses are limited. The many barriers preventing primary healthcare professionals from asking about violence are well documented.[7–9] The New Zealand Ministry of Health Violence Intervention Programme (VIP) in District Health Boards (DHBs) seeks to reduce and prevent health impacts of violence and abuse through early identification, assessment and referral of victims. Alongside VIP, the Family Violence Intervention Guidelines for Partner Abuse, Child Abuse and Neglect, and Elder Abuse and Neglect, and other general practice resources[12,13] are available to support health professionals in identifying and responding effectively to cases of family violence.
The availability of family violence intervention guidelines, together with institutional support, has been shown to increase the likelihood of system-wide, sustainable change.[14–17] Four studies demonstrate the value of using a standardised Delphi evaluation tool in guiding and monitoring programme development.
Two address hospital-based domestic violence quality improvement efforts,[18,19] while Zink focuses on system supports for primary healthcare clinicians managing family violence. Finally, Koziol-McLain et al.'s longitudinal evaluation of hospital-based violence intervention programmes using modified Delphi tools demonstrates their utility and ability to contribute to sustainable programme growth over time.[20–24]
An external evaluation commissioned by the New Zealand Ministry of Health monitored development of hospital VIP programmes at baseline and five follow-up periods based on established performance indicators[20–24] and has been important in informing ongoing family violence programme development. Yet currently, there is no strategy to systematically monitor and evaluate responsiveness to family violence at the primary healthcare service delivery level. The use of an evaluation tool, informed by the New Zealand context, would support primary health organisations (PHOs) and general practices in implementing system-wide family violence intervention practices. The Family Violence Quality Assessment Tool for Primary Care Offices (FVQA), developed in 2007 in the USA, was the first family violence quality improvement instrument developed for primary care offices. Using a Delphi method, the authors modified the 'Delphi Instrument for Hospital Domestic Violence Programmes' for applicability to primary care, identifying 111 performance items divided into nine categories. The face validity and clarity of the instrument were then tested in 32 primary care offices, noting the need to further test the tool in different types and locations of offices. To our knowledge, this is the only tool addressing quality improvement of family violence intervention efforts in a primary care setting.
This study aimed to modify the FVQA for the primary healthcare context in New Zealand through collaboration with primary healthcare stakeholders. The desired outcome was a best practice standard for PHOs and general practices across New Zealand, allowing for focused development and quality improvement efforts. Development of the tool also aimed to complement hospital responsiveness to family violence efforts, creating a whole healthcare system approach to reducing family violence.
The study applied a modified Delphi technique with expert panellists to identify ideal primary healthcare family violence response indicators. Key stakeholders and nominations for expert panellists were identified by the core research team, the Ministry of Health VIP Portfolio Lead and DHB VIP coordinators. From these nominations, 29 expert panellists were strategically selected by the core research team to ensure broad representation across New Zealand. Inclusion criteria required participants to hold expertise in the area of primary healthcare, family violence or family violence programmes and to be able to contribute knowledge and 'expert' opinion given their position and experience.[27,28] The developed tool was subsequently tested at six pilot sites for performance, clarity and usefulness. Network sampling was used to identify six pilot sites, regardless of their current level of family violence response, which then volunteered to be evaluated. The selected sites were all urban and North Island based. Two Māori (New Zealand indigenous people) health providers were purposively selected to be part of the sample, given the need for the tool to be culturally responsive to Māori sensitivities and following recommendations arising from consultation with Māori.
The Auckland University of Technology Ethics Committee granted low-risk ethical approval for evaluation tool development (08/249), and the New Zealand Health and Disability Multiregional Ethics Committee granted approval for the pilot testing phase (CEN/09/09/060). The study collected site system indicators; no individual-level data were collected, and no abuse experiences were asked about or reported on. Both expert panellists and pilot sites were provided with study information sheets and consent forms which assured that responses would be kept confidential and that aggregate data would ensure participant anonymity. Additionally, participants were given the option to consent to being named as an expert panellist or pilot site in any publication of the results, with the aim of increasing credibility of the findings.[27,28]
Cultural responsiveness was a key consideration in both the research process and development of the evaluation tool. The core research team upheld processes which respected Māori in recognition of the Treaty of Waitangi (an agreement between Māori and the Crown) principles of partnership, participation and protection, as well as all persons and cultures. The core team included two Māori members and partnered with AUT's Kawa Whakaruruhau Komiti (cultural safety committee). Māori and representatives from Māori healthcare provider agencies were invited to participate on the expert panel, and two Māori health providers were selected as pilot sites (mentioned above). Through these means, the study aimed to support a safe and effective response for Māori within primary healthcare family violence intervention programmes.
Modified Delphi procedure
The Delphi technique aims to achieve consensus in a given area of uncertainty or lack of empirical evidence. It uses a series of 'rounds' combined with controlled feedback that seeks to gain the most reliable consensus of opinion of a group of experts.[30,31] Round One aimed to define the field of what was to be measured based on its proposed use. The seven members of the core research team individually reviewed the FVQA developed in the USA for applicability to the primary healthcare context in New Zealand. Research team comments addressed the appropriateness, accuracy and representativeness of the content, including target audience, implementation barriers, funding and planning issues, health structures of care and tool language. Researcher comments were aggregated, with consensus achieved regarding item modification, to produce the first version of the New Zealand tool prior to consideration by the expert panel in the following round (Round Two).
Two further rounds were applied to achieve consensus on family violence response indicators for an 'ideal' family violence intervention programme within primary healthcare (Figure 1). Round Two involved the mail-out (participant choice of electronic or postal) of a confidential questionnaire with two parts to 29 expert panellists. In Part A, panellists were asked to list indicators of an ideal primary healthcare family violence programme (free text), and in Part B they were asked to indicate their level of agreement or disagreement for each of the indicators using a Likert scale (1 = strong disagreement, to 5 = strong agreement). Consensus was defined as an individual indicator score of >3.0 reached by 85% of the expert panellists. Data from Part A of the mail-out results were entered into Microsoft™ Office Word (2007) and qualitatively analysed (content analysis) to identify best practice indicators addressing issues not included in Part B. Data from Part B were entered into SPSS (v.15) and descriptive statistics used to summarise measures of central tendency and distribution. Indicators from Part A of the questionnaire, in addition to the indicators which achieved the consensus cut-off (defined above) in Part B, were then collated, producing the second version of the New Zealand tool for the next round.
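The Round Two retention rule is simple enough to express directly. The sketch below illustrates it; the function name and the example ratings are hypothetical, not the study's actual data:

```python
# Sketch of the Round Two consensus rule: an indicator is retained when at
# least 85% of panellists rate it above 3.0 on the 5-point Likert scale.
# Ratings below are illustrative values, not the study's data.

def meets_consensus(ratings, cutoff=3.0, threshold=0.85):
    """Return True if the share of ratings above `cutoff` reaches `threshold`."""
    above = sum(1 for r in ratings if r > cutoff)
    return above / len(ratings) >= threshold

# Example: 27 panellists (the Round Two respondents) rating one indicator.
ratings = [5, 4, 4, 5, 3, 4, 5, 4, 4, 3, 5, 4, 4, 5,
           4, 4, 5, 3, 4, 4, 5, 4, 4, 5, 4, 4, 2]
print(meets_consensus(ratings))  # 23 of 27 (85.2%) rate above 3.0 -> True
```

Note that ratings of exactly 3 do not count towards consensus, since the cut-off is defined as a score greater than 3.0.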
Round Three involved 23 expert panellists coming together at a one-day workshop to achieve consensus on tool indicators, and collectively consider issues of content validity, compliance with the Treaty of Waitangi and cross-cultural equivalence. As identified by Wilson, the inclusion of a face-to-face meeting, not typically included in a Delphi technique, provided an effective mechanism for panellists to discuss and debate tool issues. Questionnaire results and minority responses, along with new items collected from Part A of the questionnaire, were presented to the expert panel in this round for further consideration. First, using the second version of the tool, small groups of panellists worked to achieve consensus on inclusion of the tool's pre-determined categories and category descriptions. Panellists were also able to propose new categories where required. Following group presentations to the wider panel, each panellist individually rated the importance of each category's contribution to an effective family violence intervention programme using a Likert scale (1 = not important, 10 = very important). Category ratings were entered into Microsoft™ Office Excel (2007), averaged, and standardised to determine category weightings. Averages ranged from 8.2 to 10. Second, groups were provided with a table of indicators for a particular category and asked to consider each indicator in terms of category appropriateness, item wording and measurability. Panellists were encouraged to pass indicators to another group if they were more appropriate for another category, and to reword, delete or merge indicators as necessary. Groups were also asked to phrase indicator measurement notes, including evidence required to achieve the indicator. Third, groups worked to achieve consensus on indicator scoring within each category, first by prioritising indicators in order of importance, and then assigning an appropriate score to achieve a total category score of 100.
Each group then presented the argument for indicator inclusion and scoring back to the wider expert panel for consideration. Finally, a Māori and non-Māori non-New Zealand European caucus finalised cultural responsiveness indicators. Round Three resulted in the third version of the New Zealand tool in preparation for piloting.
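The category-weighting step in Round Three (mean importance ratings standardised into weightings) can be sketched as follows. The exact standardisation formula is not specified in the text; dividing each category mean by the sum of means is one plausible reading, and the category names and mean values below are hypothetical examples:

```python
# Sketch of the Round Three category-weighting step: per-category mean
# importance ratings (1-10 Likert scale) are standardised into relative
# weights. Normalising by the sum of means is an assumption; the study
# does not state its exact formula. Means below are illustrative only.

def category_weights(mean_ratings):
    """Normalise mean importance ratings into weights that sum to 1."""
    total = sum(mean_ratings.values())
    return {category: mean / total for category, mean in mean_ratings.items()}

means = {
    "Governance and Leadership": 10.0,  # hypothetical mean rating
    "Education": 10.0,                  # hypothetical mean rating
    "Resourcing": 8.2,                  # hypothetical mean rating
}
weights = category_weights(means)
for category, weight in weights.items():
    print(f"{category}: {weight:.3f}")
```

Under this reading, categories rated more important contribute proportionally more to an overall programme score.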
Pilot testing procedure
Following the three modified Delphi rounds, the third version of the tool was administered during one-day site visits at six primary healthcare settings, including a general practice and its associated PHO. An identified site liaison person managed the evaluation processes and participation, including relevant practice and PHO representation. Site liaison persons were provided with the evaluation tool prior to the visit, and asked to collect evidence of indicators. During the evaluation visit, one evaluator reviewed indicators and accompanying evidence, while another noted discussions addressing accuracy of tool measurement notes and indicator wording, scoring and relevance. Both evaluators recorded scoring and indicator evidence, resolved scoring contradictions during the evaluation, and noted where indicators needed further clarification.
In an evaluation 'debrief meeting' with participants at the end of the day, preliminary scores, programme strengths and recommendations were discussed. Site liaison persons were asked to provide written feedback on the evaluation tool and process. Draft reports were sent approximately three weeks after the evaluation to the site liaison persons to correct or clarify scoring and interpretations. Incorporating feedback, finalised site reports were sent to PHO and practice senior management and evaluation participants. PHO chief executive officers and practice managers were asked to comment on the usefulness of the evaluation and report in guiding programme development by responding to an evaluation form that included open-ended questions. Following the pilot study, data were aggregated and presented to the core research team. Consensus was achieved on making final tool revisions, particularly regarding indicator clarity and measurement notes, resulting in the final, fourth version of the tool. Evaluation comments were analysed using content analysis to identify tool usefulness (or lack of it) and suggestions for future tool application.
Round One results were analysed collectively by the core research team to produce a modified version of the FVQA, the New Zealand Primary Health Care Family Violence Responsiveness Evaluation Tool. This tool version included 96 indicators.
The response rate for Round Two was 93% (n = 27). In Part A, respondents identified 112 indicators of an ideal programme that were conservatively considered potentially unique from the 96 items listed in Part B. In Part B, a high level of consensus (mean scores ranging between 3.7 and 5.0) was achieved for retaining the original 96 indicators. Fourteen indicators with the lowest mean scores (between 3.6 and 4.5) were presented to the expert panel during Round Three to review clarity, wording and consensus on inclusion in the tool. The 208 (112 + 96) indicators were organised within nine categories: Collaboration, Policies & Procedures, Resourcing, Documentation, Physical Environment, Workplace Culture, Education, Routine Inquiry/Assessment and Quality Improvement (Table 1), with one potential new category, 'Governance and Leadership', advanced to Round Three for consideration by the expert panel.
Twenty-three (79%) expert panellists participated in Round Three. Panellists agreed the scope of the evaluation tool should include partner abuse, child abuse and neglect, elder abuse and neglect, and sexual assault. Owing to the variability between PHOs and general practices in size, resources and independence, panellists agreed the tool should be promoted as a best practice standard to work towards. The new category, 'Governance and Leadership', was identified as necessary to address senior-level programme governance and support, and Treaty of Waitangi concerns. Panellists rated the 'Education' and 'Governance and Leadership' categories most important, and 'Quality Improvement' the least (Table 1). Categories were ordered purposefully within the tool to guide a phased approach to programme development, beginning with 'Governance and Leadership' and ending with 'Quality Improvement'. The Māori and non-Māori non-New Zealand European caucus amended 11 indicators and added three new indicators. Round Three resulted in 133 indicators organised in 10 categories.
Overall family violence programme scores ranged from 12 to 56, with a median score of 39. The distribution of scores across and within the 10 categories is demonstrated in Figure 2. 'Physical Environment' was the highest scoring category (median score = 60), followed closely by 'Governance and Leadership' (median score = 58). 'Quality Improvement' was the lowest scoring category (median score = 0). There was generally wide variation in scores within categories. The core research team collaborated to address queries and suggestions raised during site visits regarding tool measurement notes, indicator wording, scoring and relevance. This resulted in selected indicators being merged and additional indicators added. The final tool included 143 indicators within 10 categories (Table S1).
All six sites provided feedback on the evaluation. The evaluation tool and processes were considered useful by all; two sites specifically commented on how the tool could be used as a development guide for their programme, providing good direction as to next steps. One site suggested the tool could be improved by identifying who was responsible for various indicators, such as the Ministry of Health, the PHO or the general practice. This issue had arisen across the pilot sites, and the evaluators had resolved it to some extent by focusing on the service provided to clients rather than who provided the service. Four sites commented that they found the evaluation report to be accurate and thorough, and that it would inform future development. One practice senior manager felt 'despondent' on recognising the significant effort required to reach the best practice standard. Based on this feedback, the core research team recognised a need to communicate evaluation aims and processes more clearly with senior management.
This study used a modified Delphi technique to develop an evaluation tool with 143 performance indicators across 10 categories. The indicators represent extensive stakeholder agreement, with established measurement notes for each indicator and category to ensure consistency. It is important to note that the tool is one resource to support the development and implementation of a systems approach to responding to family violence within primary healthcare. Additionally, national programme and resource development, such as policy and business case templates, standardised electronic clinical practice forms, a national network of coordinators and programme funding, would support a systems approach to responding appropriately and safely to family violence within the primary healthcare setting. Yet, until evidence exists on the effectiveness of documenting primary healthcare family violence assessment and intervention, the knowledge and experience of experts provided an appropriate mechanism to guide programme and practice developments.
An important strength of this study was the diversity of experts that participated in identifying indicators of an ideal primary healthcare response to family violence. They included, for example, general practitioners and practice nurses, clinic managers, family violence (child protection, partner abuse, sexual abuse, elder abuse) specialists, police, refuge (shelter) and community agency representatives, policy makers and acute healthcare family violence intervention coordinators. This group recognised the need for safe, effective care provided within the context of individual primary healthcare practices and communities. The third face-to-face round, not generally included in Delphi rounds, was an efficient mechanism that offered the opportunity to discuss and debate dilemmas, while working towards the goal of consensus. Finally, testing the tool in six sites provided an opportunity to identify and correct points of tool weakness, generally focusing on indicator meaning and measurement.
The New Zealand tool includes several improvements over the original FVQA tool that may be generalisable. The addition of a 'Governance and Leadership' category highlights the importance of senior management and organisational commitment to achieving system change. Ordering the categories in a phased approach to guide programme development was also considered important by the panellists and pilot sites. There is also greater attention in the New Zealand tool to cultural responsiveness, addressing the need for health systems and programmes to address the appropriateness and quality of services for disadvantaged and marginalised groups.
There are several limitations to consider with regard to this study. Importantly, it focused on the current healthcare delivery system in New Zealand. Alternative health delivery models and other contexts may require further tool modification. It is also important to consider that the tool reflects best knowledge and experience at the time of the study. As new knowledge becomes available and systems learn from experience, indicators may become obsolete or need modification.
Finally, the pilot test was conducted in six volunteer urban-based primary healthcare settings in the North Island of New Zealand that were keen to focus their attention on family violence system developments. The evaluation tool and processes would likely be challenged if the evaluation were extended further. These sites varied widely in characteristics (such as independence and funding), and all were in the early stages of programme development, beginning to address the 'Governance and Leadership' and 'Resourcing' categories, while 'Routine Inquiry/Assessment' and 'Quality Improvement' category development was absent.
Evaluation of sites using a standardised tool facilitates benchmarking activities. Benchmarking has two key uses: first, PHOs and general practices can be compared against one another at a high level with the aid of aggregate quality scores and a pre-determined target score; and second, individual quality scores can be used for continuous quality improvement within each PHO and general practice.[34,35] Scores for each category are aggregated, enabling comparison across the PHOs and with an optimal target score, arbitrarily set at 70 out of 100. This enables PHOs to work towards the target score through quality improvement activities, with progress reviewed at a later date. Future follow-up evaluations at the pilot sites will enable benchmarking activities and further test tool performance as indicators are achieved over time. In addition, while site liaisons were not required to complete a self-evaluation prior to the onsite evaluation during this study, this process could be implemented to test tool performance.
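The benchmarking arithmetic can be illustrated as follows. The use of panel-derived category weights to combine the per-category scores (each out of 100) into an overall score is an assumed reading of the aggregation step, and the weights and scores shown are hypothetical examples rather than figures from the study:

```python
# Sketch of the benchmarking calculation: each category is scored out of 100,
# category scores are combined (here, weighted by assumed panel-derived
# weightings) and compared with the target score of 70 out of 100.
# Weights and scores below are hypothetical, not figures from the study.

TARGET = 70  # optimal target score, arbitrarily set at 70/100

def overall_score(category_scores, weights):
    """Weighted overall programme score out of 100."""
    return sum(category_scores[cat] * weights[cat] for cat in category_scores)

weights = {  # assumed weightings summing to 1
    "Governance and Leadership": 0.4,
    "Physical Environment": 0.3,
    "Quality Improvement": 0.3,
}
scores = {  # hypothetical category scores out of 100
    "Governance and Leadership": 58,
    "Physical Environment": 60,
    "Quality Improvement": 0,
}
total = overall_score(scores, weights)
print(f"overall = {total:.1f}, target met: {total >= TARGET}")
```

A site in this position would be directed by the category breakdown, not just the overall gap to 70, towards the categories (here 'Quality Improvement') where improvement activity would yield the most progress.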
This study resulted in the Primary Health Care Responsiveness to Family Violence Evaluation Tool. The tool was developed for application within the New Zealand primary healthcare context to support family violence intervention programme development. While some tool indicators may require modification for use beyond the New Zealand context, the phased approach of the tool could be used to support effective, sustainable programme implementation elsewhere. It is hoped that the tool will coincide with other New Zealand developments, including review and updating of national guidelines, standardised primary healthcare electronic record family violence forms, and other resources contributing to capacity and capability building in responding to individuals, families and communities who have for too long suffered the burden of violence without healthcare assistance.
Thanks to Jo Adams for her initial work in planning this project, and to Sue Zimmerman and the expert panellists and pilot sites involved in the study.
Evaluation tool development was funded by the New Zealand Ministry of Health.
Evaluation tool Delphi study ethics approval granted by: Auckland University of Technology Ethics Committee (08/249); pilot testing ethics approval granted by: New Zealand Health and Disability Multiregional Ethics Committee (CEN/09/09/060).
Not commissioned; externally peer reviewed.