Program Directors’ Perceptions of the CBMT Exam
Abstract
Forty-one academic program directors completed a survey eliciting their perceptions of the Certification Board for Music Therapists (CBMT) board certification exam. Survey questions concerned the meaningfulness and utility of the exam in evaluating safe and competent practice; reasons students might fail the exam; and exam preparation methods; open-ended questions allowed participants to express specific concerns about the exam, if they had any. On average, program directors perceived the exam to be “neither effective nor ineffective” in evaluating clinical competence, and open-ended responses suggested that the majority of these faculty had a range of concerns about the exam. After these concerns are categorized and defined, reflective comments serve to stimulate discussion about the meaningfulness and utility of the exam as it is currently constructed.
Study Context
This article emerged from our experiences as program directors of two music therapy training programs in the United States. In preparing students for internship and professional life, we observed inconsistencies in our students’ abilities to pass the Certification Board for Music Therapists (CBMT) board certification exam. Students whom we evaluated as strong entry-level clinicians repeatedly had difficulty passing the exam, whereas other students, whom we evaluated as less clinically competent based on their performance in fieldwork and internship, passed the exam on their first attempt. The main difference between these students appeared to be their competence as test takers—that is, students who were better at taking timed, multiple-choice tests tended to be more successful at passing the CBMT exam, even though this did not always correlate with our evaluations of their clinical competence.
We were also (and remain) concerned about the declining pass rates for the CBMT exam, and what this suggests about the relationship between academic training programs and the evaluation of entry-level competence. In its latest communication about our students’ pass rates (November 2019), CBMT reported that only 70% of students passed the exam on their first attempt, and that students who did not pass on their first attempt had only a 59% chance of passing the exam thereafter. Given the 4½-year commitment students make to professional preparation, this has become an ethical concern for us when evaluating applicants to our academic programs: should we accept students who self-identify as struggling with multiple-choice tests, especially those that are timed?
These concerns are compounded by our struggles to understand the extent to which these kinds of evaluations are suitable for determining which students can practice competently and safely. We do not have, to the best of our knowledge, any publicly available data that affirms the construct or predictive validity of the CBMT exam, making it difficult to understand whether these declining pass rates are a product of poor academic preparation, a disconnect between academic preparation and exam content, a disconnect between the exam and clinical practice competence, a combination thereof, or something else altogether.
Furthermore, surprisingly little has been written about the CBMT exam, particularly about the relationship between academic preparation, exam scores, and clinical competence. This is further compounded by the accreditation requirements of the National Commission for Certifying Agencies (NCCA), which accredits the CBMT certification program. NCCA does not permit music therapy faculty to serve on the CBMT exam committee, primarily because doing so may create unfair advantages for some academic programs. However, this requirement also creates a further disconnect between exam construction and academic programs.
This article, which identifies and describes AMTA-approved academic program directors’ perceptions of the CBMT exam, is—we hope—a helpful step in opening a scholarly dialogue about the exam and its relationship to clinical competence. Forty-one program directors responded (58% of program directors in the United States), providing a total of 152 written responses to our open-ended questions in addition to answering nine Likert and Likert-type questions. This suggested to us that the topic was important to those who responded, and that their responses were worthy of consideration as part of a larger dialogue about the exam.
We now invite you into this dialogue. In doing so, we would like you to consider the following. First, although you will find that we take a critical stance in relation to the exam, particularly in the Reflective Comments, we are not advocating for the exam to be discontinued. We believe the exam serves an important purpose in advancing the profession in the United States, particularly given the ways that healthcare delivery is changing. Rather, we are asking that faculty voices be heard, the data be considered, and dialogue ensue. Second, we can imagine that some faculty perceptions about the exam may be inaccurate, given that faculty do not have first-hand knowledge of the exam. If this is the case, we hope the reader will see this as an important part of the dialogue, illustrating the barriers faculty may experience in understanding the exam, its construction, and its relationship to clinical competence. Finally, we share this article in the spirit of advancing the profession by engaging in difficult but necessary conversations related to education and training, and the impact this has on student preparation.
Introduction
The purpose of this study was to illuminate and summarize the perceptions of music therapy academic program directors regarding the Certification Board for Music Therapists (CBMT) board certification exam. Recent changes to the cut score for the CBMT exam have affected music therapy students’ abilities to pass the exam, with first-time pass rates falling from 84% to 70% over the last decade (Wylie et al., 2017). This significant drop in pass rates is reflected in academic programs: between 2005 and 2015, the proportion of academic programs with a 90% average first-time pass rate fell from 43% to 15%, while the proportion of programs with a pass rate lower than 70% increased from 10% to 47% (Schneck, in Wylie et al., 2017). Factors contributing to this decline have not been adequately explored, although discussions about the effectiveness of undergraduate educational programs in preparing students for professional practice are ongoing (Hsiao et al., 2020; Wylie et al., 2017). This article describes academic program directors’ perceptions of the relevance and meaningfulness of the exam and, through an examination of these findings, seeks to encourage discussion about the exam and its relationship to competent clinical practice.
Contextualizing the CBMT Exam
CBMT and the Board Certification Exam
The Certification Board for Music Therapists (CBMT) was established in 1983, and its current mission is to ensure “a standard of excellence in the development, implementation, and promotion of an accredited certification program for safe and competent music therapy practice” (CBMT, n.d.-a). According to Aigen and Hunter (2018), one of the purposes of establishing the CBMT credential (MT-BC) was to “determine who was qualified to practice music therapy based on a national examination” (p. 186), and to promote music therapy reimbursement for members of the two music therapy associations that existed at the time (National Association of Music Therapy and the American Association of Music Therapy). In doing so, the goal was to create higher professional standards by requiring board certified music therapists to participate in ongoing continuing education, something not previously required.
Subsequent to the unification of these two associations, and the creation of the American Music Therapy Association (AMTA) in 1998, CBMT became the credentialing body for all students completing AMTA-approved programs. The CBMT certification program is accredited by the National Commission for Certifying Agencies, and CBMT is a charter member of the Institute for Credentialing Excellence (CBMT, n.d.-a).
The Board Certification Exam
Candidates for board certification have successfully completed the academic and clinical training requirements for music therapy, or its equivalent, as established by AMTA (CBMT Candidate Handbook, 2019). The board certification exam consists of 150 multiple choice questions, completed in 3 hours, of which 130 questions are graded (20 are non-scored experimental questions). These questions are distributed among four domain areas, as follows:
- Referral, Assessment and Treatment Planning – 40 items
- Treatment Implementation and Termination – 70 items
- Ongoing Evaluation and Documentation of Treatment – 10 items
- Professional Development and Responsibilities – 10 items
According to the CBMT (Wylie et al., 2017), the exam cut score (the number of questions that the candidate must answer correctly in order to pass) is currently 95, and 70% of students pass this exam on their first attempt (2015–2017 [partial] data). Candidates who pass the exam are entitled to call themselves Board-Certified Music Therapists (MT-BC).
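As a point of orientation, the figures above can be checked with simple arithmetic. The short sketch below is our own illustration, not CBMT’s official scoring procedure: it confirms that the four domains account for the 130 scored items and expresses the cut score of 95 as a proportion of those items.

```python
# Illustrative check of the exam figures described above; this is not
# CBMT's official scoring method, only simple arithmetic on the
# numbers reported in the Candidate Handbook summary.

domain_items = {
    "Referral, Assessment and Treatment Planning": 40,
    "Treatment Implementation and Termination": 70,
    "Ongoing Evaluation and Documentation of Treatment": 10,
    "Professional Development and Responsibilities": 10,
}

scored_items = sum(domain_items.values())   # 130 scored questions
cut_score = 95                              # correct answers needed to pass

print("scored items:", scored_items)
print(f"cut score as a share of scored items: {cut_score / scored_items:.1%}")
# -> roughly 73% of the scored questions must be answered correctly
```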
The CBMT Exam and Undergraduate Music Therapy Curriculum
The relevance and meaningfulness of the CBMT exam have been informally discussed among music therapy faculty since the exam’s inception, but recently these discussions have become more focused, stimulated in part by the creation of the Master’s Level Entry (MLE) subcommittee (AMTA, 2011), and in part by the CBMT Executive Director’s presentation to the MLE subcommittee and faculty in November 2017, in which CBMT reported a steady decline in the percentage of first-time test-takers passing the exam—from 84% (2005–2010) to 70% (2015–second quarter of 2017; Wylie et al., 2017, pp. 10–11).
At the time of the MLE report, CBMT suggested a number of reasons for the decline in pass rates, all based on anecdotal evidence. These included perceived inconsistencies across programs and internships, anxiety about the MT-BC requirement for employment, time between internship and taking the exam, limited experience with multiple choice exams, application and analysis of knowledge vs. memorization, poor study skills and/or test taking skills, and a possible increase in international students for whom English is a second language (Wylie et al., 2017, p. 10). In support of CBMT’s observations, Hsiao et al. (2020) found that while 85.6% of the survey participants taking the CBMT exam passed on the first attempt, that number dropped to 50% for participants for whom English was a second language.
Of particular concern to educators was CBMT Executive Director Schneck’s perception of academic training programs, which she connected to declining first-time pass rates at the 2017 AMTA national conference. According to Schneck, “[W]hile music therapy clinical practice is advancing, as reflected in the practice analyses and increased cut scores; [sic.] the change is not being driven by music therapy education as indicated in the declining pass rates” (in Wylie et al., 2017, pp. 10–11). Although the CBMT Executive Director does not clarify this statement further, the message she appears to be communicating is that academic programs (some or all) are not keeping up with clinical practice. This assumption, that the CBMT exam represents current clinical practice and that academic training programs are not keeping up, was, in part, the stimulus for this research study. The authors felt it was important to understand how other program directors perceived the problems associated with declining first-time pass rates, and what kinds of solutions they envisioned, if they perceived these to be necessary.
Schneck (in Wylie et al., 2017) also expressed concerns regarding the AMTA Professional and Advanced Competencies documents, citing job tasks identified in the CBMT Board Certification Domains that do not appear in the AMTA Professional Competencies document, or that are identified only in the AMTA Advanced Competencies document. This discrepancy has two important implications: first, it suggests that AMTA and CBMT are not in agreement about what constitutes professional competence; second, it suggests that AMTA-approved academic training programs, which are required to teach the AMTA Professional Competencies to their students, may not be teaching all of the CBMT Domains identified in the Candidate Handbook.
However, other factors associated with exam outcomes emerged from Hsiao et al.’s (2020) survey of 662 music therapists who completed the exam between 2012 and 2017. Using data from music therapists who completed the exam after 2015, they found that self-reported cumulative Grade Point Average (GPA) and test anxiety scores were significant predictors of passing the board certification exam on the first attempt. Specifically, they found that a) a one-unit increase in GPA multiplied the odds of passing the exam on the first attempt by 4.386, and b) for every unit increase in anxiety as measured by the Westside Test Anxiety Scale, the odds of passing the exam on the first attempt decreased by 53.6%. These findings suggest that academic standing and test anxiety contribute significantly to exam success.
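Because odds ratios are easily misread as percentage changes, the minimal sketch below shows how the odds ratios reported by Hsiao et al. (2020) translate into changes in odds and probability. This is our own illustration: the baseline pass probability is an assumed value chosen only for demonstration and does not come from their study.

```python
# Hypothetical illustration of how the reported odds ratios behave.
# The baseline probability below is an assumption for demonstration only.

def odds(p):
    return p / (1 - p)

def prob(o):
    return o / (1 + o)

baseline_p = 0.70                  # assumed baseline first-attempt pass probability
or_gpa, or_anxiety = 4.386, 0.464  # odds ratios reported by Hsiao et al. (2020)

# A one-unit increase in GPA multiplies the odds of passing by 4.386
gpa_odds = odds(baseline_p) * or_gpa
print(f"GPA +1: odds {odds(baseline_p):.2f} -> {gpa_odds:.2f} "
      f"(probability {prob(gpa_odds):.2f})")

# A one-unit increase in anxiety multiplies the odds by 0.464,
# i.e., a 1 - 0.464 = 53.6% decrease in the odds of passing
anx_odds = odds(baseline_p) * or_anxiety
print(f"Anxiety +1: odds decrease by {1 - or_anxiety:.1%} "
      f"(probability {baseline_p:.2f} -> {prob(anx_odds):.2f})")
```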
Certainly, there may be other reasons that contribute to the decrease in pass rates as well, though we have no published data reporting these factors. For example, the increase in failures might indicate that professional standards are becoming higher, and that the exam serves the purpose it should—to act as a gateway to ensure competent practice and prevent under-prepared MT-BCs from entering the profession. Or it may be that inflated academic grades, or the acceptance of students who do not develop as expected during the course of the program, contribute to student performance that leads to lower pass rates. However, findings from Hsiao et al.’s (2020) research suggest that grade inflation may not be a contributing factor, as students with higher GPAs were more successful in passing the exam on their first attempt. Therefore, one must consider whether increased failures might be related to other factors, such as exam validity. At present, however, we simply do not know why students are passing at a much lower rate in recent exams.
Academic Challenges Preparing Music Therapy Students
Concerns regarding an overburdened undergraduate curriculum and expanding scope of practice have existed for decades. As early as 1962, Braswell advocated for major curriculum changes that would give precedence to the unique requirements of a music therapy program, as opposed to a music program with a therapy component (Braswell, 1962). Similarly, in 1989 Bruscia argued that the profession was at a crossroads, with a burgeoning undergraduate curriculum that was no longer able to competently prepare students for professional practice. As a solution to this problem, Bruscia (1989) proposed three levels of competencies, for the bachelor’s, master’s and doctoral levels, defining the breadth and depth of music therapy practice at each level while doing so.
In the same year, Dileo Maranto (1989) published a summary from the California Symposium on Education and Training, wherein educators sought solutions to curriculum problems. Primary among these recommendations was a proposal for different levels of certification in music therapy, with the CBMT exam occurring after completion of the first level of training. Educators at this symposium proposed a variety of routes to achieve advanced clinical practice, with clear delineations between types and content of practice at each level. Over a decade later, Groene and Pembrook (2000) identified additional educator concerns regarding curriculum, with faculty suggesting specific subject areas that could be reduced to create more room for music therapy-specific content in the undergraduate curriculum.
Two recent articles point to the continued and urgent concerns of educators to address what is becoming, for some faculty members, an untenable problem (Ferrer, 2018; Lloyd et al., 2018). In her in-depth interviews of music therapy faculty and professional members, Ferrer (2018) found patterns of concerns regarding the undergraduate music therapy degree, which included:
- General agreement that the number of requirements set by AMTA and NASM is too high,
- Concerns regarding general education requirements in undergraduate programs, and the extent to which this supports or obfuscates the degree focus,
- Concerns that students leave academic programs feeling overwhelmed, unable to integrate information in a way that gives them a comprehensive understanding of professional practice, and
- Concerns that students are just not developmentally ready to work as music therapists, in that “they have not had the life experiences necessary to fully empathize with individuals facing complex situations” (Ferrer, 2018, p. 90).
A similar concern was expressed by Aigen and Hunter (2018). Citing the MLE report (Wylie et al., 2017) and declining CBMT exam first-time pass rates, they concluded that the body of knowledge required for entry-level practice may be too large to be taught in four years, and that consideration should be given to master’s-level entry as a necessary step in addressing professional preparation (p. 192).
In an effort to understand more about the challenges that faculty face teaching in undergraduate music therapy programs, Lloyd et al. (2018) interviewed music therapy faculty about the internal and external challenges they face in addressing music therapy competencies. While a number of their findings were similar to Ferrer’s (2018), Lloyd et al.’s (2018) observations of, and insights into, addressing AMTA competencies in academic programs appear very relevant to the current discussion. This is reflected in a concern expressed by one participant (first quotation) that these authors then contemplate in terms of student preparedness (second quotation):
[H]ow many of these [AMTA competencies] are appropriate to be evaluat[ed]? To what extent should students be achieving these competencies? To what extent and where should they be demonstrated? Is 70% okay? Is 70% okay in multicultural areas but not ethics?
Though the competencies served as a guide for course content and program structure, program directors and faculty were left to further define those terms, making determinations as to what actually constituted competence, and how to create an environment that allowed students to reach those goals. (p. 113)
Lloyd et al.’s (2018) insights add another important dimension to the CBMT exam discussion—the role of AMTA guidance and oversight. AMTA appears to provide little guidance about minimally competent practice other than providing a list of professional competencies. AMTA also undertakes an evaluation every 10 years, through the Academic Program Approval Committee, of the extent to which these competencies can be verified as being addressed in each academic program, but this does not include evaluation of CBMT domains and related competencies. While the competencies and domains provide an essential foundation for competent educational preparation, this organizational approach appears to leave program directors (and program faculty) with decisions about which competencies, if any, receive emphasis in their academic program. These decisions, at an individual program level, then shape the ways in which students are prepared for the CBMT exam and professional life. The advantage of this approach is that it allows the values, beliefs and experiences of individual program directors and their faculty to shape the development of their students in ways that they believe prepare them for competent practice. The disadvantage is that this preparation may not align with the way the CBMT exam evaluates preparedness, even though the academic program has been approved by AMTA.
This incongruency leads us to an important juncture, in which the following dimensions interact, illuminating a complex constellation of challenges impacting academic training programs and their relationship to student preparation, as measured by the CBMT exam:
- CBMT exam first-time pass rates have steadily declined over the last decade.
- The CBMT Executive Director states that the scope of music therapy practice is changing and expanding, as defined by the CBMT practice analysis, but that “the change is not being driven by music therapy education” (Wylie et al., 2017, p. 10).
- Faculty in academic training programs have expressed, for a number of years, concerns regarding the undergraduate music therapy curriculum, and their ability to prepare students for professional practice.
The relationship between CBMT exam test scores and academic program preparation therefore reflects a complex constellation of factors that are worthy of examination. This study addresses one dimension of this complex dynamic: academic program directors’ perceptions of the meaningfulness and utility of the CBMT exam as a measure of clinical competence. The following research questions guide this purpose:
According to academic program directors,
- How effectively does the CBMT exam evaluate students’ abilities to practice competently?
- To what extent, if at all, is there a relationship between students’ overall academic and clinical competence and their CBMT exam performance?
- What factors, if any, contribute to students’ exam failures?
- If music therapy academic program directors provide CBMT exam preparation, how effective do they perceive this preparation to be?
- What changes, if any, should be made to the CBMT exam?
Method
Participants
Seventy-two program directors at AMTA-approved academic training programs were invited to participate in this study, identified from the American Music Therapy Association organization directory. In order to ensure anonymity, no descriptive information was collected from participants.
Survey Design
Specifically designed for this study, the survey comprised 14 questions: nine Likert and Likert-type questions and five open-ended questions. See Appendix A for the complete survey. After initial construction, the survey went through six rounds of revision and included feedback from four music therapy experts and one music therapy educator with knowledge in one or more of the following areas: survey design, question construction, knowledge of academic training programs, and data analysis procedures for survey research.
Procedures
Potential participants were sent an email invitation describing the study, along with a hyperlink to the survey itself. A follow-up email was sent to all potential participants 10 days after the initial invitation. The survey was housed in SurveyMonkey, and individual responses were stored in the secure SurveyMonkey portal and then downloaded to the researchers’ password-protected personal computers in summary form after the close of the survey. The survey included a consent form, which participants could review and accept prior to starting the survey or leave the survey portal if they did not wish to proceed.
IRB Approval
The study was reviewed by the Shenandoah University Institutional Review Board and determined to be exempt.
Data Analysis
Two forms of data analysis were undertaken:
1. Summary data for the nine Likert and Likert-type questions were calculated as percentages and averages and presented in table or descriptive written form. In doing so, we followed guidelines provided by Boone and Boone (2012) and Harpe (2015). The authors treated Likert and Likert-type response items (individual items) as interval level measurement (e.g. effective to ineffective; agree to disagree, etc.), assuming the distance between each item was equal. Further, we used criteria provided by Harpe (2015) when considering responses in Table 1 (faculty perceptions of the effectiveness of the CBMT exam in evaluating student competence in the CBMT domains) as a Likert scale response, calculating a total average score across all items.
2. Data from the open-ended questions were analyzed qualitatively, using analytical procedures consistent with qualitative content analysis (Ghetti & Keith, 2016; O’Callahan, 2016). These procedures were as follows:
- The researchers read responses to each open-ended survey question in their entirety to understand these responses as a whole.
- After reviewing each answer, Eyre took primary responsibility for data analysis, organizing answers to each survey question into categories based upon their similarities. The number of responses in each category was also noted, and a summary table was created.
- All of these categories of response were examined together, and further organized into broader themes that subsumed related categories.
- These themes were then shaped into a written narrative that expressed the dimensionality of the theme, using category data to do so.
- Once completed, narrative summaries were reviewed and verified by the co-researchers, with all theme and category data verified against the original responses to ensure the integrity of the analytic process.
After completing this series of procedural steps for each question, these narrative summaries were arranged sequentially by survey question. In order to maintain clarity and transparency with regard to qualitative responses in the Results section, qualitative codes in each narrative question were transformed into quantitative representations for statistical description. This approach of data transformation is appropriate in mixed-methods studies for purposes of “paradigmatic corroboration,” where both qualitative and quantitative responses examine a similar data set (Saldaña, 2016, p. 26).
Because of the complexity of the answers to each question and the overlap in responses between questions, a further analytic stage was undertaken to represent the data in ways that a) capture meta-themes, and b) reflect the responses of program directors as a whole. In doing so, this process integrated themes across questions, removed redundancies in participant responses between questions, and created an overall (meta) perspective of participant responses. This was undertaken as follows:
- The researchers read the narrative summaries for each question to understand them as a whole.
- These narratives were reorganized into cross-question themes and sub-themes based upon the research questions.
- Redundancies were removed and narrative descriptions created that included: categories (headings), themes (subheadings) and subthemes (topical paragraphs) that reflected an overall conceptualization of participant responses.
- These newly formed narrative descriptions were verified independently by the co-authors, who checked each other’s categories, themes and subthemes against the original written responses (raw data).
The Discussion section reflects the final written product of this analytic process.
Methods for Ensuring Trustworthiness
The integrity of the data analysis process was ensured primarily by documenting each stage in the analytic process in detail so that each researcher could independently verify the co-researcher’s work. As such, researcher triangulation occurred through 1) numerous discussions of the data, coding procedures, and theme development, 2) independent examination of the process through which categories, themes and subthemes were developed, 3) independent verification of the number of coded items in each category, and 4) independent analysis of the narratives created to form the Discussion section. In these ways, each co-researcher was held accountable to the other, and each stage of the data analysis process was transparent and available for verification. Readers are invited to review all the original responses to the open-ended questions in Appendix B and to review these in relation to the co-researchers’ data analysis processes (presented in this document).
Results
Forty-one program directors at AMTA-approved academic training programs participated in this study, a response rate of 58%. Survey results are divided into two sections. In the first section, summary data are provided regarding program directors’ responses to the nine Likert and Likert-type questions, followed by section two, an analysis of responses to five open-ended questions contained within the survey.
Summary of Responses to Likert and Likert-type Questions
Tables 1 and 2 summarize program directors’ responses to two questions addressing the effectiveness of the CBMT exam in evaluating clinical competence. Descriptive statistics were calculated by assigning a numerical value to each rating (1 = ineffective to 5 = effective) and calculating the average ratings accordingly.
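To illustrate this calculation, the minimal sketch below uses invented placeholder ratings (not actual survey data) to show how item-level averages and a total scale average are computed from 1–5 ratings, following the approach described in the Data Analysis section (Boone & Boone, 2012; Harpe, 2015).

```python
# Minimal sketch of the Likert summary approach; the ratings below are
# invented placeholder data, not responses from this study.

from statistics import mean

# Each inner list holds one respondent's 1-5 ratings of the four
# CBMT domain items (1 = ineffective ... 5 = effective).
responses = [
    [3, 4, 3, 4],
    [2, 3, 3, 5],
    [4, 3, 2, 4],
]

# Item-level averages: each item treated as interval-level data
item_means = [mean(item) for item in zip(*responses)]

# Scale-level average: all four items combined as a single Likert scale
scale_mean = mean(mean(r) for r in responses)

print("Item means:", [round(m, 2) for m in item_means])
print("Total average across all items:", round(scale_mean, 2))
```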
Table 1
Table 1 summarizes faculty perceptions regarding the effectiveness of the CBMT exam in evaluating the four domain areas derived from the CBMT practice analysis. Average responses varied for each item, from 3.27 (treatment implementation and termination) to 3.61 (professional development and responsibilities). When responses to these questions were treated as a Likert scale response (Boone & Boone, 2012; Harpe, 2015), the total average response was 3.34. Scores in the 3 to 4 range are rated as “neither effective nor ineffective” (a rating of 3) to “somewhat effective” (a rating of 4).
Table 2
Table 2 provides a summary of faculty perceptions regarding the relationship between clinical competence, academic grades and the CBMT exam. When faculty rated the relationship between clinical competence and exam performance, the average of these responses was 3.32. When faculty rated the relationship between academic grades and exam performance, the average of these responses was 3.78. Scores in the 3 to 4 range are rated as “neither related nor unrelated” (a rating of 3) to “somewhat related” (a rating of 4).
Table 3 provides a summary of the reasons program directors perceive students “could or might fail the exam” (1 = disagree to 5 = agree). Average responses to each question varied from 2.40 (Unable to provide breadth and depth of coursework) to 3.47 (I do not teach to the exam). Of particular interest were responses concerning academic preparation for the exam (I do not teach to the exam; I am unable to provide the breadth and depth of coursework; inconsistent philosophy) and perceptions related to the validity of the exam (the exam is not valid; some of the exam is irrelevant to competent and/or safe clinical practice). Regarding academic preparation, the average response to the item “unable to provide breadth and depth of coursework” was 2.4, where scores in the 2 to 3 range reflect “somewhat disagree” to “neither agree nor disagree.” When program directors were asked about “inconsistent philosophy,” the average of these responses was 2.63, where scores in the 2 to 3 range reflect “somewhat disagree” to “neither agree nor disagree.” Finally, when program directors were asked to rate the extent of agreement regarding “I do not teach to the exam,” the average of these responses was 3.47, where scores in the 3 to 4 range reflect “neither agree nor disagree” to “somewhat agree.”
Table 3
Further, when program directors were asked about their perceptions of the validity of the exam, responses to these two questions were as follows. When asked to respond to the statement “the exam is not valid,” the average of these responses was 2.8, where scores in the 2 to 3 range reflect “somewhat disagree” to “neither agree nor disagree.” When asked to respond to the statement “some of the exam is irrelevant to competent and/or safe clinical practice,” the average of these responses was 3.28, where scores in the 3 to 4 range reflect “neither agree nor disagree” to “somewhat agree.”
Program directors were also asked to indicate their level of agreement with a statement made by CBMT Executive Director Schneck that was included in the final report of the Master’s Level Entry subcommittee: “While music therapy clinical practice is advancing, as reflected in the practice analysis and increased cut scores; [sic.] the change is not being driven by music therapy education as indicated in the declining pass rate” (in Wylie et al., 2017, pp. 10–11). When these responses were examined as a whole, 18% (7) of faculty disagreed, 15% (6) somewhat disagreed, 15% (6) neither agreed nor disagreed, 23% (9) somewhat agreed, and 30% (12) agreed, with the average of these responses being 3.33. Scores in the 3 to 4 range are rated as “neither agree nor disagree” (a rating of 3) to “somewhat agree” (a rating of 4).
Finally, program directors were asked if they, or their faculty colleagues, provided specific preparation for the CBMT exam: 59% said they did, whereas 41% said they did not. When those faculty who said they provided specific preparation for the CBMT exam (n = 28) were asked how effective they perceived their preparation activities to be, 3% (1) reported they were ineffective, none found them somewhat ineffective (0), 8% (3) described them as neither effective nor ineffective, 41% (16) described them as somewhat effective, and 21% (8) described them as effective.
Responses to Open-Ended Questions
Participant responses to the five open-ended questions are summarized below (Tables 4–8) and described in detail in the Discussion. This includes comments related to the exam (Table 4), why program directors believe students may be failing the exam (Table 5), how academic programs support student preparation for the exam (Table 6), perceived barriers to effective preparation (Table 7), and what changes, if any, program directors would make to the exam (Table 8). In providing this summary data, respondents and responses are differentiated. The term respondents denotes the total number of program directors who responded to a particular question, whereas responses reflects the number of comments made for each theme or category of response. Thus, the total number of responses is often larger than the number of respondents, as many of the program directors gave detailed written responses that were included in the analysis for multiple themes. Percentage calculations in each of these categories are based on the number of respondents who made a statement about a particular theme, compared to the total number of respondents for the question.
Comments related to the CBMT exam
Program directors were given the opportunity to describe what they perceived the CBMT exam evaluates, based on their experiences of preparing students to take the exam. Thirty-nine program directors responded, providing 52 responses that are summarized in Table 4. These responses are as follows: 1) 54% (21) of respondents questioned or criticized the exam’s ability to evaluate clinical competence, 2) 41% (16) perceived the exam to evaluate test-taking abilities, 3) 31% (12) perceived the exam to evaluate a student’s ability to practice competently and/or safely, and 4) 8% (3) were unsure what the exam evaluated.
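As a worked check of the respondents-versus-responses distinction described above, the short sketch below reproduces the Table 4 percentages from the counts just reported (our illustration; the theme labels are paraphrased).

```python
# Worked check of the Table 4 figures: percentages are calculated against
# the number of respondents (39), while responses (52) can exceed that
# total because one answer may be coded under more than one theme.

respondents = 39
theme_counts = {
    "questioned or criticized ability to evaluate clinical competence": 21,
    "exam evaluates test-taking abilities": 16,
    "exam evaluates competent and/or safe practice": 12,
    "unsure what the exam evaluates": 3,
}

for theme, n in theme_counts.items():
    print(f"{theme}: {n}/{respondents} = {n / respondents:.0%}")

print("total responses:", sum(theme_counts.values()))  # -> 52
```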
Table 4
Comments Related to the MLE Report
Program directors were given an opportunity to respond to CBMT Executive Director Schneck’s (in Wylie et al., 2017, p. 10) comments related to academic training and exam pass rates. Twenty-nine program directors responded, providing a total of 49 responses, which are summarized in Table 5. These include concerns regarding: 1) external validity (62%; the ability of the exam to measure entry-level competency); 2) erroneous assumptions on the part of CBMT regarding the reasons for increased failures (24%); 3) the inability of the undergraduate curriculum to expand to reflect the breadth and depth of developing practice (21%); and 4) insufficient communication by and between CBMT, AMTA, and the National Association of Schools of Music (NASM) (21%).
Table 5
CBMT Exam Preparation
Program directors were asked if they provided specific kinds of exam preparation for their students, and if they did, to describe this preparation. Twenty-eight program directors (59%) reported that they (or their faculty colleagues) provided specific exam preparation, of which the three main methods were: 1) formal instruction (32%), 2) instructional methods within a specific course(s) (25%), and 3) specific exam preparation outside of class (21%). These responses are summarized in Table 6.
Table 6
Program directors were also asked if they experienced any barriers in preparing students for the exam. Thirty-four program directors responded, providing a total of 54 responses, which are summarized in Table 7. Although the majority of the responses (n = 24; 71%) reflect a belief that the exam preparation they provided students was effective or somewhat effective, the most common barriers reported were: 1) a lack of knowledge about exam content and lack of experience with the exam (32%), 2) the perception that the undergraduate curriculum is insufficient in providing the breadth of knowledge and skills required to prepare students for competent practice (27%), 3) students’ weak analytical and critical thinking skills (9%), and 4) students waiting too long after internship to take the exam (9%). All responses are summarized in Table 7.
Table 7
Changes to the CBMT exam
Finally, program directors were asked what changes, if any, they would make to the exam. Thirty-four program directors responded, providing a total of 41 responses. A wide variety of changes were suggested, which will be presented in detail in the Discussion.
Table 8
Discussion
The purpose of this study was to examine academic program directors’ perceptions of the meaningfulness and utility of the CBMT exam in measuring clinical competence. This purpose was guided by a series of research questions that also sought to understand the relationship between academic and clinical competence and exam performance, reasons students “could or might fail the exam,” and any changes program directors would make to the CBMT exam. In this section we combine responses to the Likert and Likert-type questions with our analysis of the narrative responses (open-ended questions) to provide a comprehensive picture of participants’ perceptions of the exam. We begin by addressing the effectiveness of the exam, then address reasons students could or might fail the exam, and conclude the Discussion by summarizing the recommendations these program directors made for changing the exam.
Summary data from the Likert and Likert-type questions (Tables 1–3) suggest that program directors do not, on average, perceive the exam to be either effective or ineffective in evaluating student competence to practice safely and effectively (x̄ = 3.34; Table 1), and that, on average, they viewed clinical competence and academic grades as neither related nor unrelated to CBMT exam performance (x̄ = 3.32 and x̄ = 3.78; Table 2). These averages were clarified in the written responses (open-ended questions), where the majority of the faculty (54%; Table 4) questioned the exam’s capacity to evaluate clinical competence. When reporting what they perceived the CBMT exam evaluates, 41% of respondents indicated that they perceived it to be a test of a student’s ability to take a standardized multiple-choice test.
These faculty perceptions conflict somewhat with a recently completed study investigating certificants’ perceptions and experiences of the board certification exam, including identifying predictors of exam success. Hsiao et al. (2020) found that GPA and test anxiety were two strong predictors of exam success, with GPA showing an odds ratio of 4.386 (p = .001), suggesting that a unit increase in GPA multiplies the odds of passing the exam on the first attempt by 4.386. Additionally, anxiety showed an odds ratio of 0.464 (p < .001), suggesting that for every unit increase in anxiety, the odds of passing the exam on the first attempt decrease by 53.6%.
Significantly, of 192 respondents who provided comments in the Hsiao et al. (2020) study in response to the question, “Would you agree that the [board certification] examination reflects your competence as a music therapist?,” almost all stated reasons for their disagreement. The following three reasons were given: 1) the exam did not fully reflect the test takers’ actual educational and clinical experiences (n = 64; 33.3%), 2) the testing format addressed only content knowledge and favored one type of test taker (n = 43; 22.4%), and 3) the test was subjective due to its bias toward certain theoretical approaches and philosophies and its US-centrism (n = 10; 5.4%). This appears to align with the findings from this study, in which faculty expressed concerns about the construction of the exam.
Construct and Content Validity
Concerns regarding content and construct validity were expressed throughout the survey, particularly in the open-ended questions (see Table 5). The leading problem that program directors had with the exam was its concreteness, which they viewed as disregarding the complex clinical contexts in which knowledge and skills are applied. From this perspective, clinical interventions are dependent upon the context in which they occur, whereas the exam accepts only one correct answer, which is often correct only within one particular theoretical approach (e.g. behavioral) and/or clinical context. Thus, from their perspective, the exam does not adequately reflect clinical practice, revealing instead a disconnection between the “correct” answer and the myriad ways clinicians actually think about their work with clients.
Some educators also reported that their clinical philosophy and way of thinking about and teaching music therapy was not consistent with the exam format or content, nor was the exam able to evaluate the kinds of clinical practice skills they identified as central to competent practice. For example, these educators believed that students who reflected deeply about the clinical context of a question were more likely to choose incorrect answers, because they were aware of the multiple ways one might respond to a client depending on the context. In addition, other educators observed that test questions often contained distractors that made it difficult for students to recognize what knowledge a question was testing. Overall, for these respondents, the exam was perceived to be a measure of test taking (i.e. understanding what is being tested) rather than of the sound clinical decision-making skills that are foundational to safe and competent practice.
Evaluating Entry-Level Practice
Program directors also expressed a range of opinions as to whether the current exam is reflective of entry-level practice. Reasons for this perceived lack of content validity were premised upon concerns that some questions included in the exam may require knowledge that extends beyond what one can reasonably be expected to know at the bachelor’s level. These concerns were connected to the construction of the exam, which is derived from the Practice Analysis. The Practice Analysis is undertaken every five years (the last was in 2019) to generate a list of job tasks that are used to generate and define Board Certification Domains. These domains are then used in the construction of the exam, which is managed by the testing firm Applied Measurement Professionals (AMP; CBMT, n.d.-b).
While the involvement of AMP ensures psychometrically sound procedures, many faculty remained doubtful that the exam solely tests bachelor’s level competency. These concerns were based on the perception that the exam may include questions formulated from 1) the knowledge and experience of graduate equivalency professionals (master’s degree), which they do not perceive as comparable to bachelor’s level entry (BLE) professionals, 2) professionals who have specialized training (e.g. NICU, NMT etc.), 3) professionals working in a specialized environment performing duties that require an advanced breadth and depth of practice (beyond BLE), and 4) professionals who carry out duties that extend beyond the job description of a music therapist (e.g. activity director, recreational therapist or case manager). Thus, these faculty believed that the Practice Analysis process results in an exam that extends beyond the scope and training for a BLE music therapist as defined by AMTA for entry into the profession (see Table 5).
Faculty also questioned whether it was preferable to have an exam that evaluates entry-level practice at the bachelor’s level, or to require a master’s degree for entry into the profession. These comments were related to the MLE Subcommittee report and the AMTA Board of Directors’ decision about MLE, which was not yet known at the time of this survey. A frequent comment was that undergraduate education and training is not adequate to teach entry-level skills and knowledge, while the current exam tests beyond entry level. The response of one educator succinctly captures the perspective of a number of these faculty members:
“[Music Therapy] is not an evolving undergraduate profession, we have huge burn out because jobs are being created for graduates with undergraduate level training who cannot and should not be exposed to the deeper, advanced clinical work required from graduate level training. We are in desperate need to re-think this as it impacts so much more than internal decisions. This one exam is impacting our profession and field as it is known to the public, job market, and any kind of potential career trajectory right up into leadership positions in [administration] where we need [graduates] to be heading to support and sustain the future of the field.”
CBMT, AMTA and NASM
Throughout the narrative responses, program directors expressed a range of concerns about the exam that they viewed to be a result of the relationship between CBMT, AMTA, and NASM. The primary theme expressed through these statements was the disconnect between the Professional Competencies of AMTA and the CBMT Domains, which faculty perceived as being qualitatively different. Further, the distribution of credits required by NASM for music therapy degree programs was another concern, particularly the core music credits that many perceived as having little application to the music skills required for music therapy practice. Finally, some faculty felt that the exclusion of educators during exam construction exacerbated the perceived disconnect between AMTA and CBMT competencies, making it difficult for educators to create a curriculum based on AMTA requirements that also addresses the CBMT domains.
The majority of program directors also felt that there was a lack of communication between CBMT, AMTA and educators, and that this posed a number of problems in preparing students for the exam. Notable among these concerns were the following: 1) faculty lack accurate knowledge about exam content and are likely to have no recent experience taking the exam, 2) there is no systematic mechanism through which faculty are informed about changes to cut scores, 3) there is no way for educators to identify where the focus of the exam will shift from cycle to cycle based on the practice analysis, and 4) there are no examples of actual test questions that mirror the current exam. When taken as a whole, this appears to suggest that educators perceive significant obstacles in preparing students for the exam.
Comments Related to Decreased Pass Rates
Program directors’ responses to Executive Director Schneck’s statement regarding educational preparation and clinical competence (“While music therapy clinical practice is advancing, as reflected in the practice analyses and increased cut scores; [sic] the change is not being driven by music therapy education as indicated in the declining pass rates”) may be viewed from two different perspectives. When Likert and Likert-type responses are examined, they suggest widely divergent points of view: 18% (7) of faculty disagreed with the Executive Director’s statement, 15% (6) somewhat disagreed, 15% (6) neither agreed nor disagreed, 23% (9) somewhat agreed, and 30% (12) agreed. When the average of these responses was calculated, the mean was 3.3. Scores in the 3 to 4 range are rated as “neither agree nor disagree” (a rating of 3) to “somewhat agree” (a rating of 4). However, written responses from program directors provide a different picture, with most faculty members responding in ways that suggest increased failures were the result of systemic problems in the construction of the exam (already described in this section).
Some educators also drew attention to the fact that there is no evidence that the recent decline in the pass rate is associated with inadequate academic preparation. Logically, they pointed out, if clinical practice is advancing, how can CBMT conclude that education is not driving this advancement? Two educators took this logic further, suggesting that improvements in education may be causing lower scores, as students are being taught more advanced skills and concepts than those represented in the exam—which requires a more cognitive, cause-effect clinical perspective.
Responsibilities of Faculty
While most faculty members perceived that increased failures were due to systemic problems, two program directors suggested that educators themselves may be at fault, in that they may not be keeping abreast of evidence-based practice, or may not be covering an adequate range of clinical approaches and philosophies, thus failing to prepare students. Several faculty members also stressed that it was the educator’s job to prepare students to take this kind of test by developing test-taking skills throughout their undergraduate or equivalency training. Recent successful exam candidates in Hsiao et al.’s (2020) study found academic coursework to be helpful or very helpful (65%), but they requested additional support in terms of: 1) provision of an overview of the exam process, 2) provision of current resources to prepare for the exam, 3) assistance in developing the skills needed to take standardized tests, and 4) the inclusion of exam questions in academic coursework similar to those used in the board certification exam. Certainly, the apparent discrepancy between the level of effectiveness with which faculty believe they prepare students for the exam (the majority of responses reflect a belief that the exam preparation they provide students is effective or somewhat effective) and the continued decline in first-time pass rates suggests that faculty should give increased attention to the preparation methods and materials they provide for their students.
Responsibilities of Students
Some faculty also recognized that students had a role to play in passing the exam, noting that while some students had excellent clinical skills and an instinctive awareness when working with clients, they did not possess the analytical and critical thinking skills needed to pass the exam. Others believed that the timing of the test was a factor, since students who take the exam too long after internship and/or neglect to ask for help in preparing for the exam have more difficulty passing. Finally, some program directors observed that students for whom English is a second language had much greater difficulty passing the exam because of the ways in which language is used in the exam, a perception consistent with Hsiao et al.’s (2020) findings.
Recommendations for Changes to the CBMT Exam
While 5 of the 34 respondents (15%) to the question “What changes, if any, would you make to the CBMT exam?” believed that no changes are needed, 29 program directors recommended changes. These include 1) organizational and curriculum changes, 2) exam content changes, 3) alternate exam formats, and 4) entry level and advanced practice exams. Each will be briefly summarized below.
Organizational and Curriculum Changes
Program directors suggested a number of solutions that address their perception that there is a lack of communication and coordination between CBMT, AMTA, and faculty. For example, one program director suggested that CBMT could be more forthcoming in helping academic programs to identify where their students are having problems so that these can be addressed. Another suggested that AMTA perform a complete review of their educational competencies and realign them to more accurately reflect current practice, as defined by the CBMT scope of practice.
Two specific solutions related to an over-extended undergraduate curriculum were also suggested. These were: 1) addressing the percentage of music therapy courses in the undergraduate curriculum required by NASM and AMTA, and 2) reviewing the content of the NASM requirements with a particular focus on musical skills development. Program directors concerned with music skills requirements believe that by reducing the applied music and ensemble requirements, students could focus more on clinical music skills, which are ever-expanding and demanding, though often underestimated by applied music faculty. Such a focus on clinical music skills would be of great benefit in preparing students for their profession as music therapists, while also addressing burdens associated with an over-extended undergraduate curriculum.
Exam Content
Program directors suggested a number of solutions to address their concerns regarding exam content. These included:
- Removing questions pertaining to theoretical orientations and models that require institute training (as they believe this level of knowledge reflects advanced practice)
- Reducing the scope of practice, particularly where it pertains to knowledge that is not based on music therapy
- Removing specialized medical terminology from exam questions
- Eliminating questions that are reflective of private practice (e.g. termination; billing)
- Aligning CBMT and AMTA competencies

Educators also suggested developing resources to help with exam preparation. This included creating study guides, providing more access to retired test questions, and providing more resources for international students. This need for resources that help students to prepare for the exam, as well as CBMT resources that more accurately mirror the exam, is supported by Hsiao et al.’s (2020) study of recent exam takers.
Alternative Exam Formats
Because of the challenges they perceived some students to have with the exam, a number of program directors favored exploring alternative ways of evaluating clinical competence. These included adding a live clinical component to the exam, focusing more exam questions on clinical practice, and providing alternative test-taking formats for students with disabilities. Some respondents in Hsiao et al.’s (2020) study also suggested that CBMT might enhance the validity of the exam by considering other forms of testing, such as essay questions and experiential components.
Entry-Level and Advanced Practice Exams
Finally, several program directors suggested creating different levels of exams, which they believed would address issues related to the scope of practice and an over-burdened undergraduate curriculum. These solutions included: 1) creating entry level and advanced practice exams, and 2) creating a tiered exam system similar to nursing. Such a system would reflect different levels of training and expertise, allowing for both undergraduate and graduate levels of examination.
Study Limitations
Three limitations are acknowledged when considering the findings from this study. The first is the sample size: although the sample reflects the responses of over half of all eligible program directors, a larger sample of academic faculty may have yielded different categories and distributions of responses. Second, as no identifying information was gathered on participants, comparisons between different groups of respondents were not possible. Gathering data such as program philosophy, recent first-time program pass rates, years as an educator, and region may have provided additional information that clarified responses and distinguished categories of response. Finally, as program directors are not permitted on the CBMT Exam Committee, nor do they have access to the exam, their survey responses reflect perceptions about the exam and, as such, may vary in their accuracy and/or depth of understanding.
Reflective Comments
The findings from this survey reveal a broad range of perspectives among program directors about the meaningfulness of the CBMT exam in measuring clinical competence. Summary data from the Likert and Likert-type questions suggest that program directors do not, on average, perceive the exam to be either effective or ineffective in evaluating competence to practice safely and effectively, and that, on average, they view clinical competence and academic grades as neither related nor unrelated to CBMT exam performance. Written responses from participants provide a clearer picture, with 54% of respondents critical of the exam’s ability to evaluate clinical competence, and 41% suggesting the exam evaluates test-taking abilities. These concerns were further expressed in the comments related to external validity, with 62% of respondents to that question expressing concerns about the ability of the exam to measure undergraduate entry-level job tasks and competence.
While these findings suggest that program directors have a number of concerns about the CBMT exam, some faculty also appear to believe the exam is a relevant and meaningful measure of clinical competence (see Table 1). One might ask, how is this possible? How can it be that some faculty are supportive of the exam, perhaps even strongly so, whereas other faculty are critical of the exam, perhaps equally strongly so?
These conflicting perspectives may serve as an important starting point for a larger discussion about how music therapy is defined and practiced, and may bring us closer to addressing the core concerns about the exam expressed in this survey. For example, perhaps those program directors who expressed support for the exam do so because they are aligned philosophically with it. That is, they believe that the exam evaluates the “correct” or “right” way of thinking about music therapy; that there are “first” and “best” clinical decisions that can be made outside of the clinical context in which they occur; and that music therapy is best understood causally. From this perspective, it follows that music therapy interventions can be understood objectively, with clients behaving in predictable ways in relation to musical stimuli, and that we can therefore predict the ways groups of clients respond to a music experience (whether this be a specific music element, activity, or experience). When this clinical perspective is taken, the CBMT exam appears to make sense, and may well be a reliable way of measuring clinical competence.
What happens, though, if you don’t believe music therapy works this way? Or if you were not trained to think this way because your instructors taught you a different way of thinking about music therapy? What happens if you believe that the benefits of music are not causal, and that music experiences evoke myriad reactions from clients, both conscious and unconscious, which are best addressed within the context of the unique therapeutic relationship between the client and their music therapist? From this perspective, each therapeutic process may be different, even when working with clients who have the same diagnosis and clinical goals, and therefore deciding the “first” and “best” response to a client can only occur within the specific context of that client or session.
While we can understand music as a stimulus, in which the specific elements of music evoke specific responses from clients, this is only one way of understanding music. We can also understand music as a symbol, a metaphor, a cultural marker, an energy system, and a portal to the spirit world, just to name a few such perspectives. Such perspectives reflect more than philosophical differences in individual music therapists’ approaches to work with clients. They reflect equally valid ways of thinking about and practicing music therapy, and as such are equally important ways of understanding clinical competence.
Herein lies a core concern about the CBMT exam, and perhaps one way of drawing together the plethora of concerns that program directors have regarding it: whereas music therapy can be practiced in a wide variety of ways, each of which has its own integrity, the CBMT exam may only evaluate one way of thinking about music therapy clinical practice.
Students’ struggles with the exam, especially in the last decade, may therefore reflect two important things: 1) their exam performance may be an indication of the extent to which their academic program is philosophically aligned with the ways in which the CBMT exam defines music therapy clinical practice, and 2) the drop in first-time pass rates may reflect a deepening and differentiation of clinical practice knowledge that no longer aligns with the fundamental premises of the exam. That is, educators are advancing clinical practice, and one way of doing so is to develop the sophistication of their own theoretical perspective. If this perspective does not align with the philosophical premises of the CBMT exam, as reflected in the ways exam questions ask students to think about therapy, then students in these programs may well do poorly.
A second and related concern has to do with the relationship between the academic training program, the internship, and the CBMT exam. As any client would hope, music therapy students must pass three levels of evaluation before they can work clinically: academic, internship, and exam. First, the academic training program “approves” the student for internship; that is, it vouches for the student by verifying to the internship director that the student has met all the academic and clinical training competencies necessary to start internship. Second, the internship director, at the end of a successful internship, vouches for the student. Through their final evaluation, the internship director says, in essence: “This student is ready to work as a music therapist.” That is, prior to being eligible to take the exam, the student has passed two levels of evaluation that verify the student’s competence. How is it that, even with these two levels of verification, students are not able to practice because they cannot pass the exam? Would it not be equally plausible to say that the exam is not measuring the student’s competence, especially if the internship director, who has observed the student working clinically for 6 months (approximately 1,000 hours), says that the student is competent?
These clinical practice problems are compounded by AMTA and CBMT’s definitions of music therapy, as characterized by the Professional Competencies and Board Certification Domains. From these perspectives, music therapy has cognitive, communicative, emotional, musical, physiological, psychosocial, sensorimotor, and spiritual benefits (CBMT Domain I.B.3) that can be addressed behaviorally, developmentally, humanistically, psychodynamically, neurologically, and medically (CBMT Domain II.A.4). Also included in CBMT’s treatment approaches and models are holistic, culture-centered, community music therapy, and improvisational approaches (CBMT Domain II.A.4). How much of each of these theories are students expected to know, and, even more importantly, how much clinical practice knowledge should students have about each model and treatment approach in relation to each clinical setting? None of this is defined, and yet students are being evaluated on these competencies.
Such a broad definition of music therapy has significant clinical practice implications. For example, should we expect a 22-year-old new graduate to work psychotherapeutically with a 54-year-old man with testicular cancer who has just been told his disease is terminal and he should “get his house in order”? According to both AMTA and CBMT, this newly board-certified music therapist has met the competency requirements to practice with this client (AMTA Professional Competencies 10.3, 10.5, 13.5, and 13.13; CBMT Domains I.B.3.c and II.A.4.i).
Further, how should this student’s competence to practice be evaluated prior to starting work? Does the CBMT exam evaluate minimal competence to practice when the music therapist is working psychodynamically with an adult addressing emotional goals? We propose that a majority of psychodynamically trained music therapists would argue that the CBMT exam does not evaluate this kind of clinical competence, even though the student has the designated credential (MT-BC) to practice.
Finally, we acknowledge concerns expressed by some faculty that some academic programs may not be “keeping up.” We believe this is an important topic for discussion, especially in light of the increased concerns expressed by many faculty about their students’ mental health, an overburdened undergraduate curriculum, and the financial pressures many students experience in completing an undergraduate degree. But we also suggest that any such discussions be carefully considered, especially if “keeping up” is measured only by first-time pass rates for the CBMT exam. The findings from this survey present a much more complex picture, in which it is equally important to ask: “Is the CBMT exam keeping up with clinical practice?” And, perhaps more importantly: “What kind of clinical practice is the CBMT exam evaluating?”
Acknowledgement
The authors would like to thank Dr Audra Gollenberg for her assistance with the analysis and interpretation of the survey data.
About the authors
Anthony Meadows is the Director of Music Therapy at Shenandoah University (Virginia, USA). He has more than 20 years of clinical experience working with both children and adults, and 18 years of experience as an educator. Anthony has served in a wide range of positions, including the Assembly of Delegates (AMTA) and the MAR-AMTA Research Committee, and as Editor of Music Therapy Perspectives (2011–2018). He has published broadly, including research in cancer care and the edited volume Developments in Music Therapy Practice: Case Study Perspectives (2011).
Lillian Eyre is an accredited music therapist (MT-BC), a licensed professional counselor (LPC, Pennsylvania), and a Fellow of the Association for Music & Imagery (FAMI). She is a visiting associate professor at Temple University, USA. Prior to joining Temple, Eyre was Associate Professor and Director of Music Therapy at Immaculata University, USA. In 1995, she founded music therapy programs in psychiatry, dialysis, and long-term care in the McGill University Health System, Canada, where she worked until 2006. She co-founded Le groupe Musiart, a performing arts group and choir for persons with serious mental illness. She serves on the editorial review boards of Music Therapy Perspectives and the Canadian Journal of Music Therapy. In addition to article and chapter publications, she edited Guidelines to Music Therapy Practice in Mental Health (2013, Barcelona Publishers).
References
Aigen, K., & Hunter, B. (2018). The creation of the American Music Therapy Association: Two perspectives. Music Therapy Perspectives, 36(2), 183–194. https://doi.org/10.1093/mtp/miy016
American Music Therapy Association (AMTA). (2011). Master’s level entry: Core considerations. http://www.musictherapy.org/assets/1/7/Masters_Level_Entry_Core_Considerations.pdf
American Music Therapy Association (AMTA). (2013). American Music Therapy Association Professional Competencies. https://www.musictherapy.org/about/competencies/
Boone, H., & Boone, D. (2012). Analyzing Likert data. Journal of Extension, 50(2), 2TOT2. https://www.joe.org/joe/2012april/tt2.php
Certification Board for Music Therapists. (n.d.-a). About CBMT. Retrieved May 11, 2019, from https://cbmt.org/about-cbmt/
Certification Board for Music Therapists. (n.d.-b). What is a practice analysis? Retrieved August 1, 2019, from https://www.cbmt.org/frequently-asked-questions/
Certification Board for Music Therapists. (2019). Candidate handbook. CBMT. https://www.cbmt.org/candidates/certification/
Dileo Maranto, C. (1989). A letter from the president. Music Therapy Perspectives, 6, 7–9. https://doi.org/10.1093/mtp/7.1.7
Ferrer, A. (2018). Music therapy profession: An in-depth analysis of the perceptions of educators and AMTA board members. Music Therapy Perspectives, 36(1), 87–96. https://doi.org/10.1093/mtp/miw041
Groene, R., & Pembrook, R. (2000). Curricular issues in music therapy: A survey of collegiate faculty. Music Therapy Perspectives, 18(2), 92–102. https://doi.org/10.1093/mtp/18.2.92
Harpe, S. (2015). How to analyze Likert and other rating scale data. Currents in Pharmacy Teaching and Learning, 7, 836–850. https://doi.org/10.1016/j.cptl.2015.08.001
Hsiao, X., Tang, J., & Chen, M. (2020). Factors associated with music therapy board certification examination outcomes. Music Therapy Perspectives, 38(1), 51–60. https://doi.org/10.1093/mtp/miz017
Lloyd, K., Richardson, T., Boyle, S., & Jackson, N. (2018). Challenges in music therapy undergraduate education: Narratives from the front lines. Music Therapy Perspectives, 36(1), 108–116. https://doi.org/10.1093/mtp/mix009
Wylie, M., Borling, J., Borczon, R., Briggs, C., Creagan, J., Furman, A., Hairston, M., Hughes, M., Hunter, B., Kahler, E., Kaplan, R., Montague, E., Neugebauer, C., & Snell, A. (2017). A question of degree: Final report of the Master’s Level Entry (MLE) subcommittee. https://www.musictherapy.org/assets/1/7/MLE_11-30-17_Part_I.pdf