The Impact Areas Questionnaire (IAQ):
A Music Therapy Service Evaluation Tool
Abstract
Service evaluation is a professional requirement for music therapy practitioners and organisations. Yet service evaluation findings are rarely published within the professional literature, and there is limited documentation of the processes and methods of such evaluations, including the rationale, dilemmas, and challenges encountered. This is perhaps due to the perceived status, methodological weaknesses, and context-specificity of service evaluation work. Drawing on our engagement with service evaluation in diverse settings, we have become aware of its potential beyond its typical current uses in the field, as well as of the need for open discussion and debate about the service evaluation tools that are available. This is the aim of this paper: to introduce a service evaluation tool, the Impact Areas Questionnaire (IAQ), alongside the studies that led to its construction. Developed originally through a review of 27 individually designed service evaluation projects, the questionnaire comprises a set of impact areas. Reflecting an ecological perspective, these areas refer to music therapy’s perceived impact not only on service users, but also on families/carers/friends, staff, and the organisational context in its entirety. Following its original development within Nordoff Robbins England and Wales, the questionnaire was tested in the context of Nordoff Robbins Scotland with the aim of exploring its applicability and transferability to other music therapy settings. In addition to presenting the findings of this testing, we discuss the potential use of the IAQ, which is included as an appendix to this article, in other settings and its relevance for knowledge and policy making in the field.
Date received: 29 April 2019
Date accepted: 4 June 2020
Publication date: 1 July 2020
Introduction: Towards service evaluation
Evaluation is a crucial component of any effective, ethical, and accountable service provision – and this is equally applicable to all arts and health practices, including music therapy. Service evaluation assesses a service and its impact in relation to its aims. As highly context-dependent work, service evaluation is shaped by multiple factors, including the evaluation brief, the target audience, and the available resources. While its context-specific nature is recognised, increasing attention has been paid to positioning service evaluation outcomes within the broader evidence base in the field and to understanding how such outcomes may differ across client groups and settings. Balancing the need to meet the specificities of the context within which each service is provided and the wish to produce meaningfully comparable findings across different services and contexts is a real challenge. Context-sensitive initiatives of music therapists and organisations have led to diverse service evaluation systems over time, but the need to develop more coherent and transferable evaluation frameworks for services has emerged in recent years (Daykin, 2016; Tsiris et al., 2014a; Tsiris & Hartley, 2014).
This paper introduces the Impact Areas Questionnaire (IAQ; see Appendix 1), a music therapy service evaluation tool developed at Nordoff Robbins in the UK. In addition to presenting the core components of this tool, we outline the processes that informed its development and testing. First, we discuss three themes of consideration within and around music therapy which lay a foundation for understanding the potential role of service evaluation.1 These themes, as discussed below, pertain to a critical engagement with the prevailing evidence-based practice movement, an emphasis on client or service user involvement, and the distributed impact of music therapy.
Theme I: Critical engagement with the evidence-based practice movement
In line with an integral understanding of evidence in music therapy (Abrams, 2010), the value and relevance of different evidence pathways and of different methodologies depend on the area and the aim of each investigation. This way of thinking challenges traditional assumptions around hierarchies of evidence and has promoted contextual responses to questions focusing not only on what counts as evidence but also on how we assess the quality of evidence (DeNora & Ansdell, 2014; Stige et al., 2009; Wigram & Gold, 2012). The National Health Service (NHS Health Research Authority, 2013) and some music therapy publications (e.g., Tsiris et al., 2014a) have outlined the value and different functions of research alongside other evidence pathways, such as audit, clinical assessment, and service evaluation projects. Although there are no universally accepted definitions of each pathway, a distinct characteristic of service evaluation is its focus on the music therapy service as a whole. This is in contrast to clinical assessment where the focus is on the individual client (Spiro & Tsiris, 2016).
Although evaluation is a professional requirement (e.g., HCPC, 2013), its relatively recent entry into the professional and disciplinary discourse of music therapy has been met with various critiques. These critiques often pertain to methodological issues and perceived flaws associated, for example, with the double role of the music therapist as evaluator, the construction and validity of evaluation questionnaires, and the sampling criteria and dissemination methods of evaluation findings. For some, these issues may constitute reasons for disregarding service evaluation findings as a legitimate source of evidence and disciplinary knowledge.
Some evaluators try to respond to these critiques by changing their evaluation methodologies accordingly. Conversely, others argue that service evaluation should be considered as distinct from research, and that its quality should therefore not be judged according to research quality criteria (Levin-Rozalis, 2003). The latter reflects our own position: while proposing that evaluation can be informed by research methodologies, we argue that service evaluation is a distinct activity. In either case, however, it is crucial for evaluators to be transparent about the evaluation process and its limitations, as well as about their own assumptions and biases.
Equally, when reviewing the research literature, one needs to remain alert and question the underlying assumptions and belief systems of different paradigms. Taking music therapy in palliative care as an example, the conclusion of a Cochrane review (Bradt & Dileo, 2010, p. 2) that there is “insufficient evidence of high quality” to support music therapy’s effect in palliative care needs to be understood within the context of the review’s methodological approach. Within that approach, the lack of ‘masking’, or concealment of group allocation from participants, assessors, and service providers, is treated as a risk of bias and thus as undermining the quality of research outcomes. Given the highly interpersonal and context-sensitive nature of music therapy practice, however, such methodological criteria effectively predetermine that music therapy research cannot achieve high-quality ratings for the measurement of subjective outcomes (O’Callaghan et al., 2015). This seems particularly relevant within sensitive care contexts, such as palliative care, where research “ideals” may be unachievable. Among the issues raised are the ethical dilemmas of randomising dying patients, the opposition to randomisation by patients and their referral sources, and the sensitivities around collecting data from dying patients and their caregivers (McWhinney et al., 1994).
The valuing of human experience in context and in action as a valid source of knowledge has long stood as a counterpoint to objectivist research. Debates in music therapy have highlighted some of the tensions between these different positions (Ansdell, 2006; DeNora, 2006; Wigram, 2006), while some relatively recent perspectives suggest a more integral understanding (Abrams, 2010). These debates can inform the emerging dialogues around service evaluation methodology and its value in music therapy. Service evaluation, in our view, aligns itself more naturally with research approaches that foster context-specific explorations and value people’s opinions and narratives.
Theme II: Increased emphasis on service user involvement in the planning, delivery and development of healthcare services
Over the past three decades, there has been an increased emphasis on client or service user involvement in the planning, delivery and development of healthcare services as well as in research and evaluation (Brett et al., 2014; Omeni et al., 2014). Highlighting the benefits of client involvement, research has shown that such involvement can lead to improvements in the accessibility of and information about services, the coordination of care and the relationships between professionals and clients. Furthermore, service user involvement has been associated with positive clinical outcomes, such as improved self-esteem and confidence (Crawford et al., 2002; Omeni et al., 2014; Storm et al., 2011). At the same time, however, some difficulties have been observed. Studies show, for example, that service users can find it difficult to influence service providers and to have a real impact on decision-making across all levels of service delivery. Generally, service user involvement seems to be progressing faster at the level of individual treatment than at a wider organisational level (Kent & Read, 1998; Sargeant et al., 2007). Documenting people’s preferred place of care and death, for instance, is a simple yet important form of service user involvement as part of advance care planning in palliative care.
This emphasis on service user involvement has been associated to some degree with a broader movement towards empowerment of service users and decolonisation, which has been witnessed not only in practice development and improvement, but also in teaching and research (McLaughlin et al., 2014; Minogue et al., 2005). In music therapy, this turn to service user perspectives is reflected to an extent in the development of participatory research studies (e.g., Rickson, 2009) and of resource-oriented approaches to music therapy (Rolvsjord, 2010). McCaffrey (2018), for example, stressed the need for acquiring experiential knowledge of music therapy through service user evaluation. Promoting the concept of “expertise by experience,” McCaffrey’s evaluative work resonates with Baines’s (2014) work on music therapy as an anti-oppressive practice, and both provide a useful framework for understanding and positioning the role of service evaluation in music therapy.
More broadly, Bradt (2018) argued that service users’ perspectives can “play a powerful role in examining and enhancing the impact and quality of music therapy services, securing continued funding for music therapy services, enhancing understanding of music therapy as a healthcare service” (p. 1) and improving the impact, relevance, and applicability of research findings. This view was shared by Geretsegger (2019), who highlighted that the involvement of service users in research has become a common requirement of many funding bodies and is supported by developments in citizen science. In recent years, some music therapy publications have focused, for example, on service user perspectives in neuro-rehabilitation settings (Tsiris et al., 2018), in mental health services (McCaffrey, 2018), and in community settings for older people, including those with dementia (Powell, 2006).
Theme III: Growing awareness of, and interest in, music therapy’s distributed impact
Alongside the emergence of community music therapy (Pavlicevic & Ansdell, 2004a; Stige & Aarø, 2012; Stige et al., 2010; Wood, 2015), there has been an increased interest in the ripple effect of music therapy’s impact (Pavlicevic & Ansdell, 2004b). This highlights the expansion of our awareness of music therapy’s impact beyond the individual client or service user (namely the direct beneficiary) to consider indirect beneficiaries, such as family members, carers, staff, or other bystanders. The ripple effect also hints at an expanded focus beyond the music-making moment to consider the music therapy service as a whole (for instance, including consideration of music therapists’ multifaceted contribution to multidisciplinary meetings, and the overall life of the organisation; see also Ledger, 2010). Studies have documented this ripple effect in relation to music therapy practices and settings, such as music therapy in several care homes in the UK (Pavlicevic et al., 2015), and diverse music therapy settings in Israel, England, Norway and South Africa (Stige et al., 2010). This expanded way of practising and understanding music therapy, however, can be relevant to any context of care. A UK survey of music therapists working in palliative and end of life care (Graham-Wisener et al., 2018)2 found that most practitioners perceived music therapy’s reach to extend beyond impacting clients to support relationships between clients, families, and staff, as well as to support palliative care staff. These findings resonate with those of other studies exploring multidisciplinary perspectives of music therapy in adult palliative care (O’Kelly & Koffman, 2007; Tsiris et al., 2014b).

This perceived distributed impact of music therapy is supported by research findings. For example, O’Callaghan and Magill (2009) found that oncology staff members who had witnessed music therapy on the hospital wards were often indirectly supported by the sessions and consequently perceived that their care of patients had improved. Canga and colleagues (2012) explored the impact of environmental music therapy on alleviating compassion fatigue and stress in oncologists, nurses, and other healthcare professionals in a cancer care setting. Likewise, Hilliard (2006) found that hospice staff improved in team building when experiencing either free-form or structured music therapy sessions. Examining the use of and satisfaction with music therapy services in a home-based paediatric palliative care programme, Knapp and colleagues (2009) found that primary caregivers were more likely to report satisfaction with the hospice care when patients received complementary therapies such as music therapy. Similar findings are also reported in terms of music therapy’s impact on bereaved caregivers of cancer patients (Magill, 2009), caregivers of people with dementia (Brotons & Marti, 2003; Clair & Ebberts, 1997), as well as family members of children with learning disabilities (Kaenampornpan, 2015). A study exploring music therapy for young adults with severe learning disabilities, for example, highlighted the indirect impact of music therapy on the parents of the young adults, supporting them in the formation of friendships and social relationships (Pavlicevic et al., 2014).
The aforementioned considerations regarding our critical engagement with the prevailing evidence-based practice movement, the importance of service user involvement, and the distributed impact of music therapy prepare the ground for engaging with impact evaluation of music therapy practice within different settings. These considerations, alongside our theoretical underpinnings of improvisational music therapy (Tsiris et al., 2018), have informed the work that led to the development of the Impact Areas Questionnaire (IAQ) which was tested and used within a range of contexts.
The Impact Areas Questionnaire (IAQ)
Stages of development
Drawing on its service evaluation work between 2009 and 2017, Nordoff Robbins England and Wales (NREW) developed a service evaluation system. The development of a questionnaire was at the heart of this system and is the focus of this paper. However, this questionnaire was part of a wider service evaluation process – from planning to dissemination (see Tsiris et al., 2014a). This wider process includes other data sources, such as comment slips eliciting feedback from relevant parties, and case studies documented by music therapists and/or researchers, alongside monitoring information such as service users’ attendance, presenting features, and referral reasons.
The questionnaire development was organic, responding to local need and building on experience with the process. This development can be understood in four stages:
Stage 1. This five-year stage included the development of bespoke questionnaires for each NREW service evaluation project. Adopting a bottom-up approach, these questionnaires were designed in close collaboration with the practising music therapist in each workplace and their manager. Over time we identified some key information – such as client group, format of music therapy sessions offered, and reasons for doing the evaluation – that was needed in order to develop context-specific questionnaires. This eventually led to the creation of a planning form where all such information was recorded.
From the start, and while being informed by sociocultural and ecological approaches to music therapy3, all projects considered music therapy’s impact not only on service users, but also on their families as well as on staff and the workplace. Equally, we tried to include a range of participant groups, i.e., service users (where possible), families/carers/friends, staff and the music therapist in each workplace. To this end, and in addition to the standard questionnaire, we developed bespoke easy-read questionnaires using simpler English for service users where needed. The questions on both questionnaires were tailored to each participant group and therefore were not necessarily aligned.
Stage 2. This second stage focused on revisiting our service evaluation experiences up to that point and drawing implications for future developments. This led to the publication of a guide to service evaluation (Tsiris et al., 2014a) where the nuts and bolts of doing evaluation were presented in five phases. In addition, we conducted a retrospective analysis of the 27 service evaluation projects that took place between 2011 and 2014.4 This analysis looked for emerging patterns and themes with regard to different areas of music therapy’s perceived impact by analysing the findings across all the projects as well as the bespoke questionnaires from each project separately. The identification of commonalities in participants’ responses, as well as in the designs and foci of the questionnaires, informed the development of a new questionnaire. The rationale behind its creation lay in its potential use across all workplaces within which NREW was providing music therapy services. This questionnaire included a set of impact areas in relation to music therapy’s impact on service users (12 impact areas), families/carers/friends (6 impact areas), staff (5 impact areas) and the workplace (3 impact areas).
Stage 3. In line with our bottom-up approach, this stage focused on checking the extent to which the impact areas identified in Stage 2 were relevant and comprehensive (Spiro & Tsiris, 2017). Through an online survey in 2015, the NREW music therapists (n=32) ranked the importance of each impact area for different types of workplace (n=10) and client groups (n=13), drawing on their experience of working at NREW over the previous three years. The music therapists were also able to suggest the inclusion of new impact areas or to indicate that certain areas may not be applicable. The survey outcomes highlighted the perceived importance of all impact areas, and no missing or additional impact areas were indicated. Respondents’ comments, however, helped to refine the wording of some of the impact area descriptors.
Stage 4. The last stage of the questionnaire development concerned the further refinement of its impact areas and ongoing checks of their relevance. To this end, we examined the dataset from all service evaluation projects which had used the standard questionnaire, including checking for patterns in which questions tended to be skipped by participants. We also sent a follow-up survey to music therapists inviting them to comment on the relevance or irrelevance of each impact area in relation to the different workplaces within which they were working. In parallel, we checked how the existing impact areas related to NREW’s evolving strategic vision and its focus on musical participation in itself as an outcome of music therapy work. As a result of this work, we added an impact area (for service users, families/carers/friends, and staff) regarding providing opportunities to experience music. We also generated some main themes/research questions (according to the NREW mission) under which we grouped the impact areas. Apart from slightly changing the order in which some impact areas were presented within the questionnaire, these changes had no influence on the service evaluation process or the use of the questionnaire.
An internal consultation about the service evaluation process, including the standard and easy-read questionnaires, was conducted in December 2016. This involved feedback from NREW music therapists, researchers and managers. This process led to updates in relation to some procedural elements, such as the administration of the questionnaire and the format of the final evaluation report. In addition, some changes to the questionnaires were implemented to enhance the accessibility of the easy-read questionnaire and its alignment with the structure of the standard questionnaire. The layout of the standard questionnaire was also refined. Finally, an English for Speakers of Other Languages (ESOL) version of the standard questionnaire was introduced in response to feedback from music therapists.5
Domains of impact and participant groups: A four-by-four approach
The IAQ takes a four-by-four approach, with four domains of impact and four participant groups (Table 1). This approach allows for collection, analysis, and representation of a range of relevant people’s perceptions of the potential direct and indirect impact of music therapy on its beneficiaries.
Beneficiaries are the people or organisation that may benefit from the music therapy service provision. Our four groups of beneficiaries are service users, families/carers/friends, staff, and the organisation. We distinguish between direct beneficiaries – the people who are referred to music therapy sessions – and indirect beneficiaries – those who might be involved or affected indirectly by music therapy. Service users and, in some contexts, their families are the direct beneficiaries, whereas staff and the organisation in its entirety are indirect beneficiaries.
We use the term service user to refer to a direct and intended beneficiary of music therapy. This term has been criticised by some for implying that music therapy is actively provided by an expert professional and passively used, experienced, or received by a service user (Bennett, 2017). Although it may not appear to fit well with music therapy as an improvisatory, creative, and participatory practice, this generic term is used in the IAQ given the questionnaire’s use within different settings where different words, such as clients, patients, or residents, are used to describe music therapy participants. Equally, the term service user is increasingly relevant to our understanding of music therapy within broader organisational and policy contexts (Bradt, 2018; McCaffrey, 2018; Solli et al., 2013). For these reasons, this term is used throughout the paper, ensuring continuity of language and highlighting our evaluation focus on music therapy as a service.
Participant groups are the groups of people who can complete the questionnaire: service users, family/friends/carers,6 staff, and music therapists (as a separate group of professionals given the evaluation focus). It is clearly important in evaluating the impact of any intervention to collect the views and experiences of service users as the primary intended direct beneficiaries of music therapy. This may not always be easy or even possible in contexts where service users’ ability to complete a questionnaire is limited, but their views and experiences should always be sought and facilitated as far as is practicable. Family members, friends, and others who care for service users – whether attending music therapy sessions with a service user or not – can have important perspectives to share on the impact of music therapy. As such, they are considered a second relevant group of participants. Staff members (whether paid or voluntary) at partner organisations where music therapy takes place also work with those service users and may have perspectives on music therapy’s impact, again whether they have been present in music therapy sessions or not. Finally, music therapists themselves have important information to contribute to the evaluation of impact in music therapy. The same questionnaire, in online or paper form, is completed by people in each participant group.
Table 1
The questionnaire is organised in relation to the four domains in Table 1. These four domains emerged from grouping 29 impact areas that summarise distinct ways in which music therapy might have positive or negative effects across a range of settings. The impact areas were identified by music therapists reflecting on their work and then collated by the research team. As discussed above, such areas were not limited to those aspects of impact that would directly affect service users. This perspective on impact fits with the view of music therapy as having effects that ripple out from the central instances of music-making. From this perspective, music therapy is seen as possibly having positive (if sometimes subtler) impacts in a wider context, including family relationships, the work-related stress of staff members who may be in or around music therapy, or the general atmosphere of a hospital, school, or care home within which music therapy is offered. It was considered important to attempt to capture information about impact in these areas, though, as can be seen in the distribution of impact areas across the domains, a proportional emphasis remains on the impact areas that relate to service users.
Some impact areas have overlapping but not identical foci and they are differentiated according to their target group/beneficiaries. For example, the focus on communication skills can relate to language or eye-contact for service users (IA1: Develops communication skills), whereas the same focus can relate to offering ideas and skills in communicating with relatives for families (IA14: Enhances communication skills and understanding). The focus of other impact areas however is unique to specific beneficiaries. Music therapy’s impact on work-related stress, for example, is specific to staff. Therefore, the ratings of different impact areas across the four domains are not grouped in their reporting.
The four-by-four approach recognises the importance of participants’ perceptions in each participant group about each domain. The questionnaire therefore gathers data about how service users, family/friends/carers, staff, and the music therapist each understand and experience the impact of music therapy for service users, for their families/carers/friends, for staff members at the organisation in which music therapy takes place, and for the organisational environment.
Using a five-point Likert scale (from 5 = “very positive impact” to 1 = “very negative impact”), all four participant groups are asked to respond to statements regarding music therapy’s impact in relation to each impact area (Appendix 1). For each statement, there is also a “not applicable” option. Results are then collated and analysed, with the number of participants from each group reported, together with further details, such as job titles for staff, where possible and appropriate.
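To make this data structure concrete, the following is a minimal sketch of how such ratings could be recorded and collated; the record fields, example values, and function names are our own illustrative assumptions rather than part of the IAQ or of the actual Nordoff Robbins analysis tooling.

```python
from collections import Counter

# Illustrative (hypothetical) response records: one entry per participant per impact area.
# Rating codes follow the IAQ scale described above: 5 = "very positive impact" down to
# 1 = "very negative impact", with "NA" for "not applicable".
responses = [
    {"participant_group": "service user", "impact_area": "IA2", "rating": 5},
    {"participant_group": "staff", "impact_area": "IA2", "rating": 4},
    {"participant_group": "family/carer/friend", "impact_area": "IA2", "rating": "NA"},
    {"participant_group": "music therapist", "impact_area": "IA13", "rating": 5},
]

def collate(responses, impact_area):
    """Tally the ratings given for one impact area, across all participant groups."""
    return Counter(r["rating"] for r in responses if r["impact_area"] == impact_area)

def collate_by_group(responses, impact_area):
    """Tally the ratings for one impact area, broken down by participant group."""
    by_group = {}
    for r in responses:
        if r["impact_area"] == impact_area:
            by_group.setdefault(r["participant_group"], Counter())[r["rating"]] += 1
    return by_group

print(collate(responses, "IA2"))           # e.g. Counter({5: 1, 4: 1, 'NA': 1})
print(collate_by_group(responses, "IA2"))  # per-group breakdown of the same ratings
```

In practice any spreadsheet or statistics package could serve the same purpose; the point is simply that each completed item yields one rating (or “not applicable”) per participant per impact area, which can then be tallied within and across participant groups.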
Questionnaires are distributed by the music therapist at the partner organisation in digital and paper formats. The standard questionnaire under discussion here, and reproduced as an Appendix to this article, is the default questionnaire, and has undergone several minor changes in response to user feedback and review of systems by the research team. NREW has also developed two other versions, ESOL and easy-read. The ESOL version is identical in structure to the standard questionnaire, with language redrafted by an experienced ESOL teacher so as to be clearer and easier to understand for individuals in any participant group for whom English may not be a first language. The easy-read version (Appendix 2) was developed to facilitate independent completion by service users such as young children, children with special educational needs, or adults with learning difficulties. In Domain 1, a question about each of the 13 impact areas is asked in simple language, followed by a row of five faces with simple expressions, corresponding to the Likert scale in the standard questionnaire. There is some evidence from completed easy-read questionnaires that the scales appear to have been understood (for example, participants adding extra, even more pronounced smiley faces to the scale). Given that participants in these groups would most likely be service users, and that attention spans may be shorter in those for whom the easy-read questionnaire is appropriate, only the questions relating to Domain 1 are included; however, there is no reason in principle that participants completing an easy-read questionnaire could not contribute relevant opinions on impact areas in Domains 2, 3, and 4.
Reporting
The numeric and narrative findings resulting from the use of the IAQ are the central part of Nordoff Robbins service evaluation reports. These findings are presented alongside supplementary material and information from other sources, including monitoring information (such as attendance records, numbers of sessions and of unique attendees, and referral reasons), vignettes written by the music therapist detailing aspects of their work, and photographs.
In some cases, we grouped different sets of impact areas into four key themes: engagement in music; quality of life and well-being; interaction, communication and/or relationships; the organisation’s atmosphere. The first theme, for example, included two sets of impact areas: IA12: Provides a positive/creative experience, and IA13: Provides an opportunity to experience music. For Nordoff Robbins, these groupings offered a summarised overview of all service evaluation findings in relation to strategic priorities of the organisation. Other organisations could consider different groupings depending on their priorities.
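By way of illustration, such groupings could be represented as a simple mapping from themes to impact areas. In the sketch below, only the “engagement in music” entry reflects the example given above; the remaining lists are placeholders that an organisation would populate according to its own priorities, and the summary function is one possible (assumed, not prescribed) way of rolling findings up by theme.

```python
# Hypothetical theme-to-impact-area groupings. Only the "engagement in music" entry is
# taken from the example above (IA12 and IA13); the other lists are placeholders that an
# organisation would populate according to its own strategic priorities.
theme_groupings = {
    "engagement in music": ["IA12", "IA13"],
    "quality of life and well-being": [],                    # placeholder
    "interaction, communication and/or relationships": [],   # placeholder
    "the organisation's atmosphere": [],                     # placeholder
}

def summarise_theme(positive_percentages, impact_areas):
    """One possible summary: average the 'positive or very positive' percentage
    across the impact areas grouped under a theme."""
    rated = [positive_percentages[a] for a in impact_areas if a in positive_percentages]
    return sum(rated) / len(rated) if rated else None

# Example with made-up percentages per impact area:
positive_percentages = {"IA12": 92.0, "IA13": 96.0}
print(summarise_theme(positive_percentages, theme_groupings["engagement in music"]))  # 94.0
```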
Testing the IAQ
The IAQ, as the core component of the NREW service evaluation system, was trialled at Nordoff Robbins Scotland (NRS). This was the first time that the IAQ was used within another organisation outside the context within which it was originally developed. Although the IAQ is not necessarily Nordoff Robbins specific, NRS was an obvious place for testing the IAQ given the existing partnership between NRS and NREW and some of their shared theoretical and practice underpinnings.
The aim of this project was to explore how the IAQ could be implemented in other contexts of work, taking NRS as a case. As such, the project explored the applicability and transferability of the IAQ and its relevance to NRS’s contexts of work. By doing so, this study aimed to identify potential improvements in the IAQ prior to making it available to the wider music therapy community.
Procedures and participants
This research project included two phases. Phase A focused on replicating and implementing the IAQ across all NRS services. For the purposes of this project, and to facilitate comparison (as and when appropriate), NRS replicated the questionnaire and adopted NREW’s processes of data collection and analysis. The questionnaire was disseminated in electronic and print formats to all participant groups (i.e., service users, families/friends/carers, staff, and music therapists) as appropriate across all NRS services (33 services; 330 completed questionnaires). Other aspects of the broader NREW evaluation system – such as monitoring information regarding music therapy attendance or other information such as vignettes and case studies – were not included as part of the project.
In line with the original NREW data analysis process, data were gathered and analysed descriptively using frequencies and percentages per impact area. Free-text responses were thematically grouped according to each impact area as appropriate, to offer further understanding of participants’ ratings. Table 2 outlines the number of workplace types, service evaluation projects and participants involved in Phase A. This overview offers a summary of our dataset. Comparative analysis of the findings between participant groups and/or workplaces, however, is beyond the scope of this paper.
Table 2
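A minimal sketch of the kind of descriptive analysis described above (frequencies and percentages per impact area) is given below; the data layout, function name, and the choice of denominator are our own assumptions for illustration, not a reproduction of the NREW/NRS analysis procedure.

```python
from collections import Counter

def describe_impact_area(ratings):
    """Frequencies and percentages for one impact area.

    `ratings` is a list with one value per completed questionnaire: 5 down to 1 on the
    Likert scale, or "NA" for "not applicable". Whether "NA" (or "unable to rate")
    responses are included in the denominator is a reporting decision; here, for
    illustration only, all returned ratings are included.
    """
    counts = Counter(ratings)
    total = len(ratings)
    percentages = {value: round(100 * n / total, 1) for value, n in counts.items()}
    positive = round(100 * (counts[5] + counts[4]) / total, 1)  # "very positive" + "positive"
    return {"counts": dict(counts), "percentages": percentages, "positive_or_very_positive": positive}

# Example with ten hypothetical ratings for a single impact area:
print(describe_impact_area([5, 5, 4, 4, 4, 3, 3, "NA", 5, 4]))
```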
Phase B focused on exploring the extent to which the implementation of the IAQ and of the related data collection and analysis processes (Phase A) is applicable and transferable beyond NREW, in this case within the NRS context. All available NRS music therapists (four could not take part) participated in one of two focus group discussions (Glasgow, n=5; Edinburgh, n=7). In these discussions, they provided feedback regarding the IAQ and its perceived fit with their work. Following the focus groups, all music therapists completed an online survey in which they indicated the perceived relevance of the impact areas across different work settings and made suggestions for new impact areas. The survey was co-designed by the NRS and NREW research teams to ensure the relevance of the questions to both organisations.
Twelve music therapists completed the questionnaire about the relevance/irrelevance of impact areas for the type of service they were providing in each different setting. The majority of the therapists had worked in more than one setting since 2016: 11 worked at Nordoff Robbins premises, nine in education, six in mental health, four in residential care, four in a hospital, two in day centres, and one in each of the criminal justice, social care and hospice settings. Respondents’ work in multiple settings led to each impact area being assessed either 28 or 39 times.7
Targeted points of the focus group discussions were transcribed and analysed thematically according to their topic: impact areas and broader issues relating to service evaluation. Within a wider theme of “questionnaire administration”, for example, a code named “anonymity: importance and challenges” was included. This code drew on quotes pertaining to the difficulties and benefits of maintaining anonymity of the questionnaire responses, such as “you wanted to give somebody a chance to answer it [questionnaire] anonymously and to say what they wanted to say” and “some people actually would like their name to be included [in reports] but we can't” (quotes from focus group; FG1a). Likewise, survey data were analysed descriptively, both in terms of numeric overviews (frequencies and percentages) and through thematic coding.
Ethics
Ethical approval for this project was granted by the Nordoff Robbins Research Ethics Committee on 5th March 2017. Given that the study focused on a tool developed by Nordoff Robbins, and that the research team members, participating music therapists and some research ethics committee members were employed by NREW or NRS, a number of measures were in place to mitigate potential conflicts of interest. The ethics committee included external expert members, participation in the study was voluntary, and participants could withdraw at any given time with no implications for their existing relationship with NREW and NRS. Anonymity and confidentiality were ensured throughout the process.
Findings
Combining findings from both Phase A and Phase B of the project, this section focuses on findings relating to impact areas and on wider considerations regarding the overall service evaluation process.
Impact areas findings
On the whole, service evaluation participants (across all participant groups) rated most of the impact areas across all domains highly, and a ceiling effect8 was observed. Likewise, most participating music therapists indicated the relevance of most impact areas across different work settings. Below we outline the findings according to each domain of impact areas. In each case we report the total ratings across all participant groups, offering a basis for exploring the applicability and transferability of the IAQ.
Domain 1: Impact areas relating to service users
Impact of music therapy (Phase A). All 13 impact areas for service users were reported to have had “positive” and “very positive” impacts by between 82.7 % (for IA8: Reduces symptoms/negative behaviours) and 97.9 % (for IA13: Provides an opportunity to experience music) of the participants. Only three impact areas were considered very positive by less than 50 % of the participants (IA5: Develops physical skills, IA8: Reduces symptoms/negative behaviours and IA11: Supports learning skills; Figure 1). Negative impacts were reported by small numbers of participants for some impact areas, and by single participants for six impact areas. Some participants felt that certain impact areas were not applicable (N/A) to their situation or setting. IA8: Reduces symptoms/negative behaviours, for example, was reported as not applicable by 8.2 % of the participants. Generally, the most positively rated impact areas for service users were IA2: Enables social skills and interaction, IA4: Supports relaxation, IA6: Enhances quality of life, IA12: Provides a positive/creative experience, and IA13: Provides an opportunity to experience music. The impact areas rated less positively, as neutral, or as less applicable to participants’ contexts were IA5: Develops physical skills and IA8: Reduces symptoms/negative behaviours. These ratings reflect some possible trends in people’s perceptions of the impact of music therapy on service users. These perceptions, which are shaped by different organisational and other factors, highlight the prioritisation of social and musical aspects of music therapy over symptom-led and physical changes.
Relevance of the impact areas (Phase B). Overall, during Phase B, the music therapists considered the impact areas for service users (IA1-IA13) relevant to their practice. Between 71.8 % and 100 % of them reported individual impact areas to be relevant across different settings of work (Figure 2). Four of the impact areas were considered relevant by all respondents (IA2: Enables social skills and interaction, IA6: Enhances quality of life, IA10: Increases motivation, and IA13: Provides an opportunity to experience music), and this resonates closely with the service evaluation results where three of the most positively rated impact areas were IA2: Enables social skills and interaction, IA6: Enhances quality of life, and IA13: Provides an opportunity to experience music.
Only five impact areas were considered irrelevant, and only by a small number of respondents. Again, the results relate to the service evaluation findings, where IA5: Develops physical skills and IA8: Reduces symptoms/negative behaviours were the least highly rated impact areas. These areas were indicated as “irrelevant” by 15.4 % and 5.1 % of the therapists respectively.
The results for individual workplaces highlight how the relevance of each impact area varies between settings. IA5: Developing physical skills, in particular, was considered irrelevant by 50 % of the respondents working in mental health and hospital settings (n=6 and n=4 respectively). IA11: Supporting learning skills was also assessed as irrelevant by 50 % of those working in residential care (n=4) and 17 % of those practising in mental health (n=6). IA9: Providing a distraction/everyday life experience and IA8: Reducing symptoms/negative behaviours, on the other hand, seemed less relevant to education, residential care and NRS settings.
Two respondents commented on IA8: Reducing symptoms/negative behaviours, suggesting potential rewording. One person initially marked “relevant/irrelevant” for this impact area, as they felt that “negative behaviours” could be considered part of the therapeutic process in response to the given opportunity to express emotions: “exploring negative behaviours (which in some cases are for a very appropriate reason) and having the safe space to do this can be an important part of the therapy” (Respondent 1).
Another respondent questioned the use of the word “negative” and, similarly to respondent 1, commented:
No behaviours are ‘negative’, all behaviour is expressive of some aspect of the client’s being and it may be positive that certain behaviours […] are being shown in the music therapy setting, with a view to working therapeutically with these. (Respondent 3)
Similarly, a focus group member commented: “Would that be a positive thing in some cases? Increased negative behaviours are part of therapy, part of a process” (FG2b). This perhaps highlights a fundamental challenge in evaluating and measuring impact in music therapy, or any therapeutic process where working through potentially difficult thoughts, feelings and behaviours can challenge conventions around “positive” and “negative” impact.
In relation to IA13: Providing an opportunity to experience music, one focus group member observed that it might be useful to include an additional impact area “about musical skills as such being developed” and shared that “a lot of positive stuff [was] happening there” (FG1b).
Domain 2: Impact areas relating to families/carers/friends
Impact of music therapy (Phase A). As shown in Figure 3, just about half of the participants (50.3 %) felt that they were unable to rate the impact areas for families/carers/friends. Of the remaining participants, positive and very positive impacts were reported by between 79 % (for IA16: Provides emotional support) and 93.2 % (for IA19: Provides a positive/creative experience) of participants. Small numbers of participants reported negative impacts only in relation to IA16: Provides emotional support (1.2 %), IA15: Improves relationships with relatives, and IA17: Supports relaxation (single participants – 0.6 % each). The three impact areas most often indicated as N/A were IA16: Provides emotional support (8.5 %), IA18: Provides a distraction/everyday life experiences (8.5 %), and IA14: Enhances communication skills and understanding (7.3 %). Generally, while IA19: Provides a positive/creative experience and IA20: Provides an opportunity to experience music were the most positively rated impact areas, the differences between the impact areas were less pronounced than for the impact areas pertaining to service users.
Relevance of the impact areas (Phase B). In the music therapists’ survey (Phase B), the impact areas for families/carers/friends (IA14-IA20; Figure 4) were considered less relevant than the impact areas for service users. In the focus groups, their relatively low relevance was commonly attributed to the limited communication between families/friends/carers and the music therapist. One music therapist explained that for education, mental health, residential care and hospital settings, in particular, “no parents/carers have been present for the music therapy input – only staff members” (Respondent 10). Similarly, another respondent noted that “in many services there is no contact with family/carers/friends” and that “when there’s no direct interaction with music therapy most areas are irrelevant” (Respondent 1). This is also perhaps the reason why the impact areas for families/carers/friends were generally considered more relevant to NRS’s own premises than to other workplaces where NRS music therapists work.
Only IA14: Enhancing communication skills and understanding was considered relevant by the majority of the music therapists (61.5 %). Despite its general perceived relevance across different settings, this impact area was assessed as irrelevant by 67 % of music therapists in relation to their work in mental health settings. IA17: Supporting relaxation was most often perceived as irrelevant (41 %). As a music therapist expressed in the focus groups, this is perhaps connected to confusion regarding to whom this area refers.
Domain 3: Impact areas relating to staff
Impact of music therapy (Phase A). Just over 40 % of the participants indicated that they felt unable to rate the impact areas for staff (Figure 5). Of the remaining participants, between 61.9 % (for IA23: Reduces work-related stress) and 91.1 % (for IA25: Provides a positive/creative experience) reported positive and very positive impacts. The only four negative ratings related to IA23: Reduces work-related stress (1 %, n=2), IA21: Enhances communication skills and understanding (0.5 %, n=1) and IA22: Improves relationships (0.5 %, n=1). IA23: Reduces work-related stress – the area rated least positively – was rated as neutral (by 27.9 % of the participants) and N/A (by 7.6 % of the participants).
Relevance of the impact areas (Phase B). The perceived relevance of impact areas for staff covered a wide range (Figure 6). On the one hand, IA21: Enhances communication skills and understanding, IA22: Improves relationships, IA25: Provides a positive/creative experience, and IA26: Provides an opportunity to experience music were rated as relevant by between 75 % and 89.3 % of the respondents. On the other hand, IA23: Reducing work-related stress and IA24: Improving motivation and productivity were considered irrelevant by 14.3 % and 32.1 % of the respondents respectively. However, these ratings varied dramatically from setting to setting. For example, IA23: Reducing work-related stress was not considered relevant by any respondent in relation to mental health settings. Despite its overall neutral or irrelevant ratings, however, this impact area was considered relevant by 75 % of the music therapists in relation to their work in residential care settings. Similarly, IA24: Improving motivation and productivity seemed less relevant to mental health than to other workplaces, but generally it was most often rated as neutral (42.9 %). These variations are potentially connected to different factors, including the clarity of meaning of each impact area. A focus group member, for example, understood IA23: Reducing work-related stress as different to the other impact areas in Domain 3, which “felt very much in relation to the client” (FG1b).
Domain 4: Impact areas relating to the partner organisation
Impact of music therapy (Phase A). Overall, 40 % of the participants felt that they were unable to rate the impact areas for the organisation (Figure 7). Of the remaining participants, between 86.4 % (for IA28: Improves interactions between people) and 95.3 % (for IA27: Changes the atmosphere) reported positive and very positive impacts. Only one participant reported a negative impact, and this pertained to IA28: Improves interactions between people (0.5 %). This impact area was rated as neutral more often than the other two impact areas in Domain 4 (by 10.6 % of the participants).
Relevance of the impact areas (Phase B). Impact areas for the organisation were considered relevant by the majority of respondents, with IA29: Fits in with the organisation’s ethos achieving the highest rating for relevance (96.4 % of the respondents; Figure 8). The importance of this impact area was highlighted by various focus group members: “I was thinking about values […] I really want to know what their ethos is” (FG1b); “In the places where we work, we have a particular interest to know how does it [music therapy] compromise and interact with other services and increase the organisation's services provision” (FG1b).
Interestingly, none of the impact areas for the organisation were rated as irrelevant.9 IA28: Improves interactions between people was assessed as neutral by 42.9 % of the respondents. Some focus group members suggested that IA28: Improves interactions between people required clarification in terms of its phrasing, especially with regard to whom the impact area referred.
In relation to IA27: Changes the atmosphere, another focus group member voiced doubt about whether it was desirable for music therapy to affect the outside environment: “What’s happening in the music therapy room should not have impact on what’s happening outside, because that could be actually destabilising” (FG1b). This comment highlights music therapists’ view that, although they see music therapy as embedded within the broader organisation, certain music therapy experiences and situations need to be contained within the music therapy room, depending on client needs and the focus of the work at any given time.
Overall, there seemed to be relatively small variation between settings, with impact areas considered neutral rather than relevant slightly more often for mental health than for other workplaces. Again, this variation related to different potential factors, including, as one respondent commented, staff’s engagement with the music therapy service: “Relevance often depends on whether staff members sit in on sessions or have viewed video work” (Respondent 1).
Considerations regarding the overall service evaluation process
Focus group discussions with NRS music therapists considered a number of different areas. Some of these pertained to the content and format of the questionnaire, while other areas related to more general aspects of the service evaluation process, such as the administration of the questionnaire and sampling procedures. Overall, we identified four themes: Theme 1: Experience, scope and impact of the IAQ; Theme 2: Questionnaire content and format; Theme 3: Participant recruitment; Theme 4: Questionnaire administration.
Theme 1: Experience, scope and impact of the IAQ
On a basic level, focus group members mentioned that evaluation participants valued the opportunity to give feedback and to have their voices heard. Most music therapists confirmed that the scope of the questionnaire was appropriate, covering major, although not all, aspects of their music therapy practice. Depending on the engagement of each service evaluation participant with the questionnaire, completion time could be longer than ten minutes, but overall people felt that the questionnaire allowed for sufficient depth of information: “It's not necessarily a complete picture […] I don't think it's going to necessarily reflect all of the work” (FG2b); “[The questionnaire] allowed […] to reach quite a bit of depth without the need of much words from the person who was filling the form” (FG1a).
It was also highlighted that the overall evaluation process could raise the profile of music therapy within an organisation and have a positive impact on the organisation's perception of the role of the music therapist. In some cases, this also contributed to securing funding. This was highlighted as something positive not only for practitioners employed by Nordoff Robbins, but also for other music therapy providers as well as freelance music therapists.
Made the role look more professional and highlighted that we're part of a bigger organisation, which is thorough about how we assess the work that we're delivering, so the impact on my role within the organisation was for the good. (FG2a)
Other focus group member comments included: “[The organisation] had the opportunity to contribute to the service evaluation themselves which was right in line with the funding criteria for that particular project” (FG2a); “For freelance therapists […] trying to secure funding to continue their post, there's masses of value in this” (FG2b). In line with these considerations, focus group members stressed the need to consider the evaluation’s timing: “It's a question of timing, when the questionnaire is administered, what is happening in the environment of the setting – is the service continuing? Is it just about to end?” (FG2a).
Also, broader factors that influence the music therapy service and the organisation as a whole were discussed. Such factors include contract and funding deadlines, as well as the organisational perception of what counts as evaluation. One National Health Service (NHS) setting, for example, would frame the service evaluation process as “collecting feedback” in order to distinguish it from their internal service evaluation systems, which focused on pre- and post-clinical assessments of individual clients: “This was not necessarily the tool to evaluate in a way that NHS want to evaluate a service” (FG1b).
Theme 2: Questionnaire content and format
Focus group members commented on the questionnaire content and format. This led to the identification of various detailed suggestions around the wording and formatting of questions. Some focused more on content, while others focused on layout and readability. For example, people debated the appropriateness of including the therapist’s name on the easy-read questionnaire to describe the service. In some cases this seemed important given the evaluation participants’ understanding of what music therapy means, whereas in other cases people felt it gave too personal a tone: “[The evaluation participants] might not call it ‘music therapy’ because of their understanding but they can relate because of the name of the person [music therapist] they have done it with” (FG1b).
People also debated the wording of impact areas and to what extent they could be more neutral. Overall, people appreciated the balance between closed and open questions – and the boxes for open feedback.
Focus group members also commented on the need to translate some impact areas into different contexts. “Reducing symptoms,” for example, can have a very different meaning when referring to clients within a mental health context compared to clients in a special needs school. Also focus group members commented that “reducing symptoms” is not necessarily a desired outcome of the therapeutic process.10
The choice between online and paper versions was appreciated, as was the opportunity to use the easy-read version – these offered useful options to adapt the evaluation to the needs of individual settings and clients: “Having both paper and online option was good from my point of view because certainly in dementia setting you would need paper copies” (FG2a).
Equally, focus group members appreciated the easy-read version of the questionnaire. This version empowered more people to engage with the evaluation and have their voice heard and taken into account for the service development: “I really liked the accessible copy. I liked the level of engagement that the clients were able to have, particularly the younger clients, it was really positive” (FG2a).
The IAQ was developed primarily for music therapy services provided within partner organisations where clients were referred to music therapy, usually by a professional. Other client groups, such as clients who self-referred to a music therapy clinic, had not been the focus, and focus group members questioned whether some questions would feel “awkward” or “patronising” to such clients.
Theme 3: Participant recruitment
Focus group members appreciated the flexibility of the service evaluation process, which allowed a degree of adaptation to the context of each music therapy service. For example, sampling criteria and questionnaire administration processes were largely determined by what was considered appropriate and possible in each context (see Theme 4). While acknowledging the need for a generic evaluation tool to have such flexibility, focus group members discussed the repercussions of each individual practitioner making participant recruitment decisions. Some practitioners, for example, invited any staff member from the organisation to complete the questionnaire, whereas others invited only those with some kind of experience of music therapy (e.g., those who had observed at least one session). The former led to a higher number of participants indicating that they were unable to rate music therapy’s impact in relation to families/friends/carers (Domain 2) and staff (Domain 3). This observation fed into a broader discussion about the relevance of staff participating when they had no direct experience of music therapy within the organisation.
The lack of predefined sampling criteria led to music therapists making intuitive, ad hoc decisions about who was selected to participate. In some instances, such decisions were influenced by each music therapist’s established relationships within the organisation. This included the music therapists’ perception of the therapeutic process of each client and the appropriateness of them completing a questionnaire at a given time: “I don't think I had any criteria in mind […] I am aware that I wasn't very thoughtful about selecting who is this form [the IAQ] going to and why” (FG1a); “The way that I approached different settings depended on previous relationships and how established I was there” (FG2a); “There were clinic [NRS] clients who I didn't put the form out to because we were too early in the therapy journey and […] we were wondering if that was going to be helpful” (FG1a).
In cases of short-term outreach music therapy services, challenges around recruitment were reported. Similar challenges were observed within schools, and these appeared to be due to school staff’s limited time availability and perhaps their perception of the music therapy service evaluation as an extra-curricular activity. In all cases, the need to document the decisions made in terms of sampling within each context was highlighted. Such documentation enabled transparent reporting of the evaluation processes and outcomes.
Theme 4: Questionnaire administration
Similar to participant recruitment, there were no strict guidelines for the administration of the questionnaire. Music therapists were encouraged to administer the questionnaire as they deemed appropriate within each organisation. In most cases, the music therapists themselves handed out the questionnaires and in some instances – especially with clients with limited mental capacity – the music therapists or another professional supported the evaluation participants by writing their spoken answers: “It's a very tricky area, as we're talking about people with additional needs. You can’t just simply ask somebody impartial to ask the questions. You need somebody that knows them [the clients]” (FG2b).
This flexibility came with challenges arising from the music therapists’ dual role as practitioner and evaluator. Focus group members discussed these challenges both in terms of ethical implications and potential bias. Some reported that their dual role led to some clients seeing the completion of the questionnaire as an opportunity to communicate therapy-related matters directly to them. Equally, some music therapists found it difficult to separate the evaluation from the therapeutic process – especially if they were still working with a client.
I was the person administering the evaluation form and I was the person collecting them as well, and I wonder about bias and whether it would be possible in the future for that to be separate, so for somebody else to handle the forms [questionnaires]. (FG2a)
Given the small sample of participants in some organisations, anonymity was difficult to maintain, and the evaluation report had to be written carefully. In some cases, this involved avoiding the use of direct quotes or participants’ professional titles: “It's quite hard to keep responses anonymous when your sample size is so small” (FG1a).
Focus group members discussed possible ways to further separate therapy from evaluation. Recognising the potential for bias, participating music therapists seemed to prefer not to be the contact person for the evaluation and, where possible, for an external professional to administer questionnaires. In that case, it was recognised that the external person would need to receive sufficient guidance, especially with regard to the use of the easy-read version of the IAQ: “Really highlighting the attitude that [the staff] should have while helping the client fill the form, like […] not be manipulating and being as [neutral] as possible” (FG1a).
Focus group members mentioned that evaluation participants would often want to give “good,” as opposed to “honest,” feedback. This seemed to be connected to a number of factors, including people’s misunderstanding of the evaluation’s purpose, the sampling and administration processes (outlined above), and the fact that commenting on the service was experienced as commenting on the particular music therapist onsite. The latter gave a more personal tone to the process, which may have discouraged some people from reporting what they perhaps perceived as “negative” feedback: “It's always done for the best intentions: 'we want to give you really good feedback'… No, we want honest feedback! Really tricky!” (FG1a); “It was perhaps more that they [staff] felt that they were feeding back to me about something to do with the quality of my work” (FG2a).
Discussion
Service evaluation is a vital component of providing a music therapy service – whether in an employed or freelance capacity. Despite its necessity, service evaluation has not been fully embraced within the wider professional and disciplinary community. Balancing the need to meet the contextual specificities of each service on the one hand (e.g., client needs, service aims, and strategic priorities of the organisation), and to produce meaningfully comparable findings across different services and contexts on the other, is a real challenge to be negotiated by practitioners, managers and researchers.
Aiming to advance the dialogue around service evaluation in music therapy, this paper has introduced the Impact Areas Questionnaire (IAQ), a music therapy service evaluation tool developed at Nordoff Robbins in the UK. We have presented the core components of this tool, the processes that informed its development, and a study that tested its applicability and transferability. This study showed that the impact areas rated consistently positively were: IA12: Provides a positive/creative experience and IA13: Provides an opportunity to experience music among impact areas for service users; IA19: Provides a positive/creative experience and IA20: Provides an opportunity to experience music among impact areas for families/carers/friends; IA25: Provides a positive/creative experience and IA26: Provides an opportunity to experience music among impact areas for staff; and IA29: Fits in with the organisation’s ethos among impact areas for the organisation. Conversely, the impact areas rated consistently less positively than others were IA5: Develops physical skills and IA8: Reduces symptoms/negative behaviours among impact areas for service users, and IA23: Reduces work-related stress among impact areas for staff. No impact areas were rated significantly less positively than others among the impact areas for families/carers/friends or for the organisation. These findings show certain trends and, alongside the music therapists’ comments regarding the relevance/irrelevance of the impact areas and the overall service evaluation process, have led to a multi-layered exploration of the IAQ.
Looking ahead, there are both internal and external implications of our findings. By testing the applicability and transferability of the original NREW service evaluation system and its relevance to NRS’s contexts of work, this study has offered a firm grounding for the use of the IAQ. This grounding comes with an awareness of the strengths and limitations both of this tool and of the study itself. Our interpretation of the findings is also informed by the observed ceiling effect in the service evaluation results and the relatively small number of participants. Nonetheless, the study offered a platform for an informed use of the IAQ as well as for the ongoing review of the tool and its responsiveness to each music therapy context.11 The findings of the study presented here were, for example, incorporated into an annual review of the service evaluation process within Nordoff Robbins, which sought comments from music therapists and regional managers about the process. The availability of options (paper and online versions, as well as standard and easy-read versions) was commended, and, in response to comments from music therapists, an ESOL (English for Speakers of Other Languages) version of the standard questionnaire was created, as explained earlier. The easy-read questionnaire was revised in response to feedback and the order of questions was brought in line with the standard questionnaire.
To sum up, this article has outlined key aspects of the processes of developing and testing the IAQ over a 10-year period (2009-2019). It situated the IAQ work in relation to the broader service evaluation and research work of Nordoff Robbins in the UK. The resources, opportunities and constraints within the charity shaped the direction of our service evaluation work over time. For example, the position of Nordoff Robbins as a music therapy organisation which employs a large number of music therapists and sustains a research team has allowed resources to be dedicated to the development of a service evaluation process that is research-informed and supported by feedback from music therapists and music therapy researchers at every stage. At the same time, some areas of work relating to music therapy provision and its support were not dealt with by the researchers. For example, the cost-effectiveness of provision in any particular context has not been a key consideration of the service evaluation process as developed here, because the charity’s organisational structure meant that such concerns were dealt with elsewhere within the organisation. The organisational structure and operational priorities of the charity have, to a large extent, directed the course of the development process and the shape of the service evaluation protocol itself. Clearly, service evaluation protocols in other situations may understandably need to include assessment of other factors, such as cost-effectiveness, as central priorities and may focus on other areas of practice and different means of data collection and analysis.
Although the IAQ has been developed and used within the charity’s context, we do not perceive its use as limited to similar contexts. The questionnaire, for example, can be used alongside other service evaluation tools and approaches such as interviews, SWOT analyses (strengths, weaknesses, opportunities, and threats), cost-effectiveness analyses, or social impact measurement tools. Furthermore, service evaluation can be complemented by other activities that focus on the effectiveness of music therapy interventions (rather than the music therapy service), such as clinical assessment tools and outcome measures (see Cripps et al., 2016; Jacobsen et al., 2019; Spiro et al., 2018).
Despite the advantages of developing context-responsive data collection tools and retaining a practice-sensitive stance, the parallel aim of the IAQ to be applicable and adaptable to various settings may limit the variation and range of information collected. While recognising the contextual diversity of music therapy and the different needs of evaluations, we are aware that producing meaningful information about the evaluation of music therapy services is crucial for the profession, and we hope the publication of the IAQ contributes in this direction.
In all projects, the music therapist onsite distributed and sometimes administered questionnaires, and participants may have been aware of the potential link between the evaluation outcomes and practical matters such as funding needs and the continuation of the music therapy service. Such evaluation practices raise concerns regarding bias and unrealistic expectations arising from the evaluation findings. At the same time, evaluations need to elicit information that is as rich as possible, often in workplaces where music therapists receive little or no evaluation support. We are aware that music therapy service evaluation is often conducted with minimal organisational support and less availability of research resources than in our case.
Looking beyond the immediate context of the IAQ’s development and its use within Nordoff Robbins, this study has some broader implications for the music therapy profession. The study outcomes offer an evidence base regarding the IAQ, its potential usefulness for evaluating music therapy services in general, and its contribution to the existing knowledge base around evaluation in the field. To this end, we hope that this questionnaire might prove to be a useful and adaptable resource for music therapists and organisations beyond Nordoff Robbins.
Of course, the use of the IAQ (as well as of other tools) in practice brings up a range of broader considerations regarding questionnaire dissemination and ethics in relation to conflicts of interest and potential bias in data collection (see, for example, Daykin, 2016; Tsiris et al., 2014a). The particularities of music therapy’s varied contexts may mean that not all participant groups will be well represented in every evaluation report. Some contexts, such as secure units, may mean that contact with families/carers/friends is diminished or restricted. Some service users may have very limited ability to complete a survey even with assistance, leading to difficult choices for those tasked with data collection. This bears particularly upon issues of conflicts of interest. A music therapist assisting a service user with completing a questionnaire may be best placed to capture their opinions accurately through familiarity with their means of communication, but may be at greatest risk of conflicts of interest and bias; conversely, data collection by an independent person may arguably be more “objective” but lack the person-specific expertise that would give the best chance of faithfully representing a service user’s perspectives. Recruiting people to voluntarily take time to complete a questionnaire is an issue in any methodological design which seeks to gather data in this way, and this perhaps bears particularly on the participation of busy staff members with high levels of work responsibilities and stress. Fully addressing these considerations is beyond the scope of this paper, and they may apply in situations beyond the use of music therapy service evaluation questionnaires.
Conclusion
To our knowledge, the IAQ is one of the first tools to be published with an explicit focus on service evaluation for music therapy. Most published tools in the field focus on diagnosis, clinical assessment, and outcome measurement. This study expands the focal lens to consider the music therapy service as a whole. As a result, some of the differences between assessment and service evaluation emerge and the dialogue around service evaluation becomes more transparent. By giving an open account of the IAQ’s construction and of the challenges and dilemmas met along the way, we hope to begin a discussion around the nuts and bolts of questionnaire construction and validation in music therapy; a crucial methodological aspect which is rarely discussed.
In our attempt to reposition service evaluation, we argue that questions of evidence, impact and evaluation are ever-present and increasingly important in music therapy practice (Ledger, 2010; Tsiris et al., 2018). We hope this paper contributes to this questioning by reflecting on real-life challenges around constructing, implementing, testing and refining a service evaluation tool.
About the authors
Giorgos Tsiris, PhD, is Senior Lecturer in Music Therapy at Queen Margaret University and Arts Lead at St Columba’s Hospice, Edinburgh. He is the editor-in-chief of ‘Approaches: An Interdisciplinary Journal of Music Therapy’ and currently serves as the Chair of the ISME Commission on Special Music Education and Music Therapy.
Neta Spiro, PhD, is Research Fellow in Performance Science at the Royal College of Music and an honorary Research Fellow at Imperial College London.
Owen Coggins, PhD, is a Leverhulme Early Career Fellow in the Department of Social & Political Sciences at Brunel University London. He is Secretary of the International Society for Metal Music Studies and trustee of record label and registered charity Oaken Palace. His monograph, Mysticism, Ritual and Religion in Drone Metal, published by Bloomsbury, was awarded the 2019 book prize of the International Association for the Study of Popular Music.
Ania Zubala, PhD, is a Research Fellow in the Institute of Health Research and Innovation at the University of the Highlands and Islands in Scotland. Her research focuses primarily on arts therapies and their role for remote and ageing communities of the Nordic countries and beyond.
Notes
[1] These themes are also discussed in Tsiris and McLachlan (2019).
[2] This survey is framed as a service evaluation by the authors. From our perspective, as communicated in this paper, this framing is inaccurate given that the focus is not on a particular music therapy service and its perceived impact.
[3] For further details regarding the underpinnings of our music therapy and service evaluation approach, see Tsiris et al. (2014a) and Tsiris et al. (2018).
[4] These projects took place in collaboration with diverse workplaces including schools and neurorehabilitation settings. The richness of their findings and their potential for knowledge generation in the field is discussed in a separate paper (Tsiris et al. 2018).
[5] The early development of the IAQ was led by Mercédès Pavlicevic who served as the NREW Director of Research between 2006 and 2015. Over the years, a number of different researchers contributed to the aforementioned developments with Giorgos Tsiris and Neta Spiro being involved in the ongoing review and design of the service evaluation systems since 2009 and 2011 respectively.
[6] We use the term carer in the second participant group to apply to people who care for service users in a non-professional context. In some cases, the term carer is used to describe the role of some healthcare professionals; these individuals would normally come under our third participant group as staff.
[7] Impact areas 21 to 29 were not relevant to Nordoff Robbins premises as a setting and were therefore assessed only 28 times.
[8] The ceiling effect (see also Michalos, 2014) refers to the situation in which participants’ responses to the different impact area Likert scales were clustered toward the high end (positive impact) of the IAQ.
[9] The impact areas for staff and the organisation were by default treated as irrelevant for music therapists’ work within NRS’s own premises.
[10] The relevance of reducing symptoms as a therapeutic focus or outcome has recently featured within the broader professional literature (see Bieleninik et al., 2017; Gold & Bieleninik, 2018; Turry, 2018).
[11] The two charities, NREW and NRS, merged in October 2018 and since then they have been following a unified service evaluation framework influenced by the work presented here. Ongoing review of the IAQ has led to minor edits many of which pertain to the service evaluation process (e.g., administration of the questionnaire and sampling) rather than the construction of the IAQ.
References
Abrams, B. (2010). Evidence-based music therapy practice: An integral understanding. Journal of Music Therapy, 47(4), 351-379, https://doi.org/10.1093/jmt/47.4.351.
Bieleninik, Ł., Geretsegger, M., Mössler, K., Assmus, J., Thompson, G., Gattino, G., Elefant, C., Gottfried, T., Igliozzi, R., Muratori, F., Suvini, F., Kim, J., Crawford, M., Odell-Miller, H., Oldfield, A., Casey, Ó., Finnemann, J., Carpente, J., Park, A. L., Grossi, E., & Gold, C. (2017). Effects of improvisational music therapy vs enhanced standard care on symptom severity among children with autism spectrum disorder: The TIME-A randomized clinical trial. Journal of the American Medical Association, 318(6), 525-535, https://doi.org/10.1001/jama.2017.9478.
Bradt, J. (2018). Involving service users in music therapy evaluation. Nordic Journal of Music Therapy, 27(1), 1-2, https://doi.org/10.1080/08098131.2018.1398973.
Bradt, J., & Dileo, C. (2010). Music therapy for end-of-life care. Cochrane Database of Systematic Reviews, 2010(1), CD007169, https://doi.org/10.1002/14651858.CD007169.pub2.
Brett, J., Staniszewska, S., Mockford, C., Herron‐Marx, S., Hughes, J., Tysall, C., & Suleman, R. (2014). Mapping the impact of patient and public involvement on health and social care research: A systematic review. Health Expectations, 17(5), 637-650, https://doi.org/10.1111/j.1369-7625.2012.00795.x.
Brotons, M., & Marti, P. (2003). Music therapy with Alzheimer's patients and their family caregivers: A pilot project. Journal of Music Therapy, 40(2), 138-150, https://doi.org/10.1093/jmt/40.2.138.
Canga, B., Hahm, C. L., Lucido, D., Grossbard, M. L., & Loewy, J. V. (2012). Environmental music therapy: A pilot study on the effects of music therapy in a chemotherapy infusion suite. Music and Medicine, 4(4), 221-230, https://doi.org/10.1177/1943862112462037.
Clair, A. A., & Ebberts, A. G. (1997). The effects of music therapy on interactions between family caregivers and their care receivers with late stage dementia. Journal of Music Therapy, 34(3), 148-164, https://doi.org/10.1093/jmt/34.3.148.
Crawford, M. J., Rutter, D., Manley, C., Weaver, T., Bhui, K., Fulop, N., & Tyrer, P. (2002). Systematic review of involving patients in the planning and development of health care. BMJ, 325(7375), 1263, https://doi.org/10.1136/bmj.325.7375.1263.
DeNora, T. (2006). Evidence and effectiveness in music therapy: Problems, possibilities and performance in health contexts. British Journal of Music Therapy, 20(2), 81-99, https://doi.org/10.1177/135945750602000203.
DeNora, T., & Ansdell, G. (2014). What can’t music do? Psychology of Well-Being: Theory, Research and Practice, 23(4), 1-10, http://www.psywb.com/content/pdf/s13612-13014-10023-13616.pdf.
Geretsegger, M. (2019). Resonating research – What is needed to make music therapy research and implementation more relevant, meaningful, and innovative? Keynote presentation at the 11th European Music Therapy Conference, 26-30 June 2019, Aalborg, Denmark. https://www.musictherapy.aau.dk/emtc19/keynotes/#357990.
Gold, C., & Bieleninik, Ł. (2018). Authors’ response. Nordic Journal of Music Therapy, 27(1), 90-92, https://doi.org/10.1080/08098131.2018.1398988.
Graham-Wisener, L., Watts, G., Kirkwood, J., Harrison, C., McEwan, J., Porter, S., Reid, J., & McConnell, T. H. (2018). Music therapy in UK palliative and end-of-life care: a service evaluation. BMJ Supportive & Palliative Care, https://doi.org/10.1136/bmjspcare-2018-001510.
Health and Care Professions Council (HCPC). (2013). The standards of proficiency for arts therapists. https://www.hcpc-uk.org/standards/standards-of-proficiency/arts-therapists/.
Hilliard, R. (2006). The effect of music therapy sessions on compassion fatigue and team building of professional hospice caregivers. Arts in Psychotherapy, 33, 395-401, https://doi.org/10.1016/j.aip.2006.06.002.
Kaenampornpan, P. (2015). The inclusion of the family members as primary carers in music therapy sessions with children in a special education centre: How does this help the child and the carer? (Doctoral dissertation). Anglia Ruskin University. https://arro.anglia.ac.uk/550334/.
Kent, H., & Read, J. (1998). Measuring consumer participation in mental health services: Are attitudes related to professional orientation? International Journal of Social Psychiatry, 44(4), 295-310, https://doi.org/10.1177/002076409804400406.
Ledger, A. (2010). Am I a founder or am I a fraud? Music therapists’ experiences of developing services in healthcare organizations (Doctoral dissertation). University of Limerick, Ireland. https://ulir.ul.ie/handle/10344/1131.
Magill, L. (2009). Caregiver empowerment and music therapy: Through the eyes of bereaved caregivers of advanced cancer patients. Journal of Palliative Care, 25(1), 68, https://doi.org/10.1177/082585970902500114.
McCaffrey, T. (2018). Evaluating music therapy in adult mental health services: Tuning into service user perspectives. Nordic Journal of Music Therapy, 27(1), 28-43, https://doi.org/10.1080/08098131.2017.1372510.
McLaughlin, D., Barr, O., McIlfatrick, S., & McConkey, R. (2014). Service user perspectives on palliative care education for health and social care professionals supporting people with learning disabilities. BMJ Supportive & Palliative Care, 5(5), 531-537, https://doi.org/10.1136/bmjspcare-2013-000615.
McWhinney, I. R., Bass, M. J., & Donner, A. (1994). Evaluation of a palliative care service: problems and pitfalls. BMJ, 309(6965), 1340-1342, https://doi.org/10.1136/bmj.309.6965.1340.
Minogue, V., Boness, J., Brown, A., & Girdlestone, J. (2005). The impact of service user involvement in research. International Journal of Health Care Quality Assurance, 18(2), 103-112, https://doi.org/10.1108/09526860510588133.
NHS Health Research Authority. (2013). Defining research. https://researchsupport.admin.ox.ac.uk/sites/default/files/researchsupport/documents/media/defining-research.pdf.
O'Callaghan, C., & Magill, L. (2009). Effect of music therapy on oncologic staff bystanders: A substantive grounded theory. Palliative & Supportive Care, 7(2), 219-228, https://doi.org/10.1017/S1478951509000285.
O'Kelly, J., & Koffman, J. (2007). Multidisciplinary perspectives of music therapy in adult palliative care. Palliative Medicine, 21(3), 235-241, https://doi.org/10.1177/0269216307077207.
Omeni, E., Barnes, M., MacDonald, D., Crawford, M., & Rose, D. (2014). Service user involvement: Impact and participation: A survey of service user and staff perspectives. BMC Health Services Research, 14(1), 491, https://doi.org/10.1186/s12913-014-0491-7.
Pavlicevic, M., O’Neil, N., Powell, H., Jones, O., & Sampathianaki, E. (2014). Making music, making friends: Long-term music therapy with young adults with severe learning disabilities. Journal of Intellectual Disabilities, 18(1), 15-19, https://doi.org/10.1177/1744629513511354.
Pavlicevic, M., Tsiris, G., Wood, S., Powell, H., Graham, J., Sanderson, R., Millman, R., & Gibson, J. (2015). The ‘ripple effect’: Towards researching improvisational music therapy in dementia care homes. Dementia, 14(5), 659-679, https://doi.org/10.1177/1471301213514419.
Powell, H. (2006). The voice of experience: Evaluation of music therapy with older people, including those with dementia, in community locations. British Journal of Music Therapy, 20(2), 109-120, https://doi.org/10.1177/135945750602000205.
Rickson, D. (2009). Researching one’s own clinical practice: Managing multiple roles in an action research project. Voices: A World Forum for Music Therapy, 9(1), https://voices.no/index.php/voices/article/view/364, https://doi.org/10.15845/voices.v9i1.364.
Sargeant, A., Payne, S., Gott, M., Small, N., & Oliviere, D. (2007). User involvement in palliative care: Motivational factors for service users and professionals. Progress in Palliative Care, 15(3), 126-132, https://doi.org/10.1179/096992607X196060.
Solli, H. P., Rolvsjord, R., & Borg, M. (2013). Toward understanding music therapy as a recovery-oriented practice within mental health care: A meta-synthesis of service users' experiences. Journal of Music Therapy, 50(4), 244-273, https://doi.org/10.1093/jmt/50.4.244.
Spiro, N., & Tsiris, G. (2016). Assessment and evaluation in music therapy: Is there a difference? Nordic Journal of Music Therapy, 25(sup1), 70-71, https://doi.org/10.1080/08098131.2016.11783620.
Spiro, N., Tsiris, G., & Cripps, C. (2018). A systematic review of outcome measures in music therapy. Music Therapy Perspectives, 36(1), 67-78, https://doi.org/10.1093/mtp/mix011.
Stige, B., Malterud, K., & Midtgarden, T. (2009). Toward an agenda for evaluation of qualitative research. Qualitative Health Research, 19(10), 1504-1516, https://doi.org/10.1177/1049732309348501.
Storm, M., Knudsen, K., Davidson, L., Hausken, K., & Johannessen, J. O. (2011). “Service user involvement in practice”: The evaluation of an intervention program for service providers and inpatients in Norwegian Community Mental Health Centers. Psychosis, 3(1), 29-40, https://doi.org/10.1080/17522439.2010.501521.
Tsiris, G., Spiro, N., & Pavlicevic, M. (2018). Repositioning music therapy service evaluation: A case of five Nordoff-Robbins music therapy service evaluations in neuro-rehabilitation. Nordic Journal of Music Therapy, 27(1), 3-27, https://doi.org/10.1080/08098131.2016.1273966.
Turry, A. (2018). Response to effects of improvisational music therapy vs. enhanced standard care on symptom severity among children with autism spectrum disorder: The TIME-A randomized clinical trial. Nordic Journal of Music Therapy, 27(1), 87-89, https://doi.org/10.1080/08098131.2017.1394902.
Wigram, T. (2006). Response to Tia DeNora. British Journal of Music Therapy, 20(2), 93-96, https://doi.org/10.1177/135945750602000203.