Music Technology Tools – A Therapist-in-a-box? Human–Computer Interaction and the Co-Creation of Mental Health
Abstract
The purpose of this paper is to contribute to the discussion of technology in music therapy and public health, focusing on human–computer interaction and the co-creation of mental health. Foundational theory explaining the possible therapeutic dynamics that can occur when people engage with digital technology is presented, along with two case vignettes that illustrate how adolescents interact with digital music technology to promote mental health and wellbeing. The discussion includes reflections concerning actor-network theory, agency, and affordance theory, and it argues that the iPad should be considered a valuable co-agent in the agency network that functions to promote adolescents’ mental health.
Date received: 20 May 2020
Date accepted: 5 May 2021
Publication date: 1 July 2021
Introduction
Portable music technology has changed the way we make music in the 21st century. Tablets and smartphones are considered powerful computational assets in music production; they are used to record, edit, and process sound, and they contribute to the creation of musical artifacts, primarily in the popular music business. Moreover, the extensive, worldwide use of smartphones as personal computers, and the range of apps offered by all sorts of companies, organizations, and institutions, make this technology both highly accessible and remarkably comprehensive, given the diverse tools available. This development deeply impacts our daily lives.
Among other things, music is now in our pockets. We listen to music in different situations and environments, and we know that music is important for people’s health. For instance, we know that music affects our brain (Brean & Skeie, 2019), our emotions (Juslin & Sloboda, 2010), and our health and wellbeing (MacDonald et al., 2012) in many ways. According to a recent report from the World Health Organization (WHO), a meta-analysis showed that activities such as making and listening to music are associated with stress management and prevention, lower levels of biological stress, and lower daily anxiety (Fancourt & Finn, 2019). In addition, such activities build self-esteem, self-acceptance, confidence, and self-worth, which are considered protective against mental illness. Moreover, the technology offers a portable music studio right in our pockets. We have a music studio-in-a-box;1 it affords a unique opportunity to engage in creative music-making activities, not only to produce an artifact, but also as a potential co-creator of mental health: a therapist-in-a-box.
In this paper, I present foundational theory explaining the possible therapeutic dynamics that can occur when people engage with digital technology. To exemplify this, I present two case vignettes that illustrate how adolescents interact with digital music technology to promote mental health and wellbeing. I also present the iPad® as a co-agent working with music and wellbeing, and I discuss how the iPad can become a co-creator of mental health.
Background
The Co-agent
The term co-agent is inspired by the recent philosophical perspective of an ethical and dialogical approach to mental health care (Bøe & Thomassen, 2017). In recent decades, mental health care has developed into a term that covers diverse practices, but with the same goal: improving people’s mental health. Within this context, not only health professionals, such as psychiatrists or psychotherapists, but also other professionals with a range of expertise are considered part of a person’s recovery. A multitude of humanistic agents can work together with health professionals to promote mental health.2 A music producer, for instance, can use the expertise and knowledge of music production to work with music to promote people’s mental health and wellbeing, as one of many possible resources for recovery. My background as a musician, music producer, and popular musicologist working on various technological platforms makes me a humanistic agent in this context. Although I am not a trained therapist, I take part in a person’s recovery by offering my expertise together with other health professionals. The humanistic, resource- and empowerment-oriented health approach understands health not as the absence of illness (a pathogenetic perspective), but from a holistic and salutogenic perspective (Antonovsky, 1987, 1996), focusing both on factors that promote health and on a state of health and wellbeing that balances body, mind, and soul (Cappelen & Andersson, 2014). This means that health is a subjective, experienced condition, existing as a continuum that can be influenced by our actions, participation, and self-actualization (Bruscia, 1998; Stensæth, 2010). A professional music producer can subsequently influence this subjective, experienced condition by offering knowledge of practice and access to technological tools that enable users to take part in health-promoting activities. Consequently, the music producer becomes a co-agent in interdisciplinary teamwork, promoting mental health and wellbeing.
The iPad as Co-agent
In addition to human co-agents, several nonhuman co-agents are also present as resources in mental health care. One notion is the nonhuman co-therapist—a concept used to explain the role of objects as cooperative elements in mental health treatment. This concept is known from eHealth (Federici & Scherer, 2018; Smaradottir et al., 2015; van Velsen et al., 2013), nature-based or outdoor therapy (Berger & McLeod, 2006), and music therapy (Aigen, 2005; Rolvsjord, 2013; Weissberger, 2014; Wärja & Bonde, 2014). These disciplines share a basic idea of the interactive, nonhuman co-agent as an active partner in the recovery process in mental health care.
Moreover, the development in computer technology towards artificial intelligence (AI) has enabled deep learning technology that interacts and co-creates on a whole new level. In music production, computer-generated music-making, randomizers, generative or adaptive music generators, and automated audio mastering services are examples of how digital technology is more than just a passive tool. Whether the aim is to make a musical artifact or to use music-making as a means to obtain other objectives, computers can be co-agents in these processes. The iPad, being a versatile computer with numerous possibilities for musical interaction within the various apps available, is consequently a potential co-agent, both in making music and in turning the creative music-making process into a resource for recovery and treatment in mental health care.
For the latter to happen, co-creation (Eide, 2014; Stensæth, 2018) is emphasized as a vital quality for promoting positive experiences of musicking (Small, 1998). The iPad does not make music, nor does it promote mental health and wellbeing all by itself; it is a co-agent, meaning it needs to collaborate with other co-agents. However, the co-agent perspective on technology means that the focus shifts from controlling the interface to motivating social interaction, musicking, and co-creation (Cappelen & Andersson, 2014). In a therapeutic setting, many agents are involved in a patient’s recovery and wellbeing. These agents (both human and nonhuman) are engaged in a socially constructed agency network that relies on co-creational skills and mutual interaction (Ansdell, 2014). For my research, working with the iPad as a co-agent in mental health, this notion suggests that the human-technology interaction requires further examination in order to better understand the role of the iPad within this interaction and its potential health benefits.
The Human–Computer Interaction
The attempt to “humanize” the iPad and expand its status from being merely a tool to becoming a co-agent within an agency network is founded on interdisciplinary discussions dating back to the early days of computing. According to Brey (2005), who discusses human–computer interaction (HCI) from a computational science perspective, the primary epistemic relation between humans and computers, as information processing and problem-solving tools, has in recent years been supplemented by an ontic relation. This ontic relation simulates virtual and social environments that extend the interactive possibilities found in our physical environment. When the computer functions as an enhancement of human cognition, humans and computers become a hybrid cognitive system, unlike any other human-technology interaction:
[…] the computer is a special cognitive artifact that is different from others in that it is capable of autonomously performing cognitive tasks and is able to engage in symbiotic relationships with humans to create hybrid cognitive systems. (Brey, 2005, p. 393)
This hybrid cognitive system is part human, part artificial, integrating the cognitive functions of the two parts to cooperate in performing cognitive tasks. From this perspective, the computer is considered both autonomous and part of an agency network, capable of engaging in creative, co-creational relationship with humans.
In music production, such co-creational relationships exist on several levels. Humans and computers cooperate to perform cognitive tasks, but the relationship also has practical consequences, affecting how musicians work, and aesthetic consequences, shaping the sound and structure of the music (Bennett et al., 2006; Frith & Zagorski-Thomas, 2016; Hepworth-Sawyer et al., 2019; Zagorski-Thomas et al., 2020). Just as Brey (2005) argues for an evolution in HCI towards a boundless virtual reality, Moorefield (2005) describes an analogous evolution in music recording, also driven by the underlying mechanism of technological development. He claims that “recording’s metaphor has shifted from one of the ‘illusion of reality’ (mimetic space) to the ‘reality of illusion’ (a virtual world in which everything is possible)” (p. xiii). This “virtual musical reality” that Moorefield refers to is co-created by human-technology interaction, within a “symbiotic relationship” between human and machine, where technology operates as a co-agent. Without crossing the borders of technological determinism, one might say that these metaphors suggest a notion of “humanized” technology (or technology that possesses “human” skills and qualities) that actively participates in co-creating artwork and making “decisions,” thereby stretching the boundaries of reality. Brown (2016) puts it in the following way when explaining the computer as a musical partner: “Coding enables automated behaviours that take on a life of their own” (p. 179).
This co-agent perspective on technology challenges current thinking and historic misconceptions about incorporating technology into music therapy. In a reflection about interactive musical media, Stensæth (2018) leads us away from the notion of technology as an object or tool, and instead suggests “human–computer interaction” as a frame for examining the relationship between technology and the user. Drawing on experience from the RHYME project (Stensæth, 2014a), she argues that the media became active agents of co-creation when they were “transformed from intermediaries into technical and musical actors that were able to learn, memorize and respond to the inputs the user made” (Stensæth, 2018, p. 314). Stensæth looked outside the music therapy discourse to substantiate this point of view: both Cooren et al. (2006; organizational communication) and Brown (2016; music technology) suggest that objects can do things and that they afford or allow different types of behavior on our part. These nonhuman agencies can act and hence produce a change or transformation in the chain of actions, making them agents of (or contributors to) the emergence of social processes. Agency is not a trait that is reserved for human beings only. Inspired by the anthropologist Alfred Gell, Brown suggests that
[…] artefacts and machines can have a relational agency that depends upon their situatedness within an intentional cybernetic system; machines can be co-creative with humans working within cultural settings. In addition, computers and the software they run are technologies, human constructions imbued with latent intentionality; either deliberately, implied or interpreted. (Brown, 2016, p. 180)
This relational agency within HCI, described by Brown, offers a perspective on technology that embraces the ability of machines to be co-creative. This perspective resembles the symbiotic relationship described by Moorefield (2005) and the hybrid cognitive system described by Brey (2005), in that they all acknowledge technology as an autonomous and integrated co-agent. In music production, this means that machines can act and participate in a creative relationship with other agents to co-create music. The consequence is a virtual world where everything is possible, as Moorefield (2005, p. xiii) implies. If we adopt this idea in a therapeutic setting, as suggested by Stensæth (2018), then the concept of co-agency opens additional possibilities. The virtual world enables dialogue between users and technology that could lead to meaningful experiences detached from the basic human relationship (for instance, between patient and therapist). By offering new ways in which to communicate and interact, the technological co-agent facilitates new perspectives on music-making and emotional expression that are potentially health-promoting. Used as a therapeutic tool, technology might become a co-agent for mental health and wellbeing: a therapist-in-a-box. Before I discuss this any further, I present two case vignettes that illustrate this co-agency.
Case Vignettes
The case vignettes are derived from a research project conducted with adolescents receiving outpatient services from a mental health institution in Norway. They were experiencing a range of mental health conditions, and their treatment consisted of regular conversational therapy with a psychotherapist from the institution, which continued in parallel with our music sessions. The participants were offered a weekly 45–60-min individual session with me and an iPad over a period of four months, and they had the iPad at their disposal throughout the whole period. The iPad was equipped with a carefully selected music app library,3 intended to inspire the users by affording a variety of sounds and interface designs offering diverse musical expressions. While some apps were virtual instrument emulations that looked much like hardware instruments with knobs, buttons, and faders, others emphasized the tapping and swiping technique unique to the multitouch screen of the iPad. The latter consequently offered options for musical expression that were especially inspiring for users without any prior experience of music-making. The purpose of the sessions was to work on an instrumental piece that would end up as a completed and well-produced piece of music. Structured and improvised music-making, the technological processing of samples, and the arranging and mixing of the music were performed at various levels during the session period. The research project was approved by the Regional Committees for Medical and Health Research Ethics (REK) and followed their guidelines for informed consent, confidentiality, and design. To ensure confidentiality and protect privacy, the names given here are pseudonyms.
Case Vignette 1: Karen
Karen was a shy, introverted, socially anxious, 17-year-old girl who struggled with identity issues due to a demanding family situation. According to her therapist, she expressed fear of losing track of who she was. This uncertainty and instability led to a lack of confidence and a fear of trying new things. However, to challenge herself, she agreed to participate in the project. When we first met, she acted nervous and restless and avoided eye contact. Moreover, when I introduced her to some of the apps, she barely touched the iPad, afraid of doing something wrong. She had low self-esteem and said she was afraid of not performing well enough, having no prior experience of electronic music-making. Based on these observations, the therapeutic goal for our sessions was to build confidence and strengthen her identity.
We began to explore the iPad’s possibilities by experimenting with the apps in a very open-minded way. This unstructured and free introduction to the sonic world of the iPad was a way for her to find what she liked and identified with. It became clear that she favored the calm, atmospheric soundscapes of apps such as Bloom, Trope, and TC-11. These apps provided a starting point for our musical piece. We recorded long, improvised sections, which we later edited and used to structure the piece. The apps were operated by tapping or swiping the full-screen multitouch performance area, and the timing and movement of the fingers controlled the sound. By observing her playing these apps, I discovered similarities between the way she moved her fingers around the touch screen and dance. I knew that Karen was a very dedicated dancer. Now she was “dancing” on the iPad. We used that metaphor to incorporate other dance-related sounds, and by doing so, we also connected the music to her already existing identity as a dancer. The opening of the piece is one example of this approach. She recorded dance steps with the iPad using AudioShare (a sample recording and filing app), and we edited and processed the sounds using sample processing apps such as Samplr and Borderlands before bringing them into the GarageBand project file. By bringing her dance world into the music, identification was strengthened, and the personal samples expressed through the music became a statement: “This is who I am.” Her ownership of the music, the mood, the personal samples, and the overall sonic expression, and the way we built on her interest in dance to reinforce her identity, gave her a new tool. She said that music-making on the iPad provided her with “a new color in life.”
Moreover, it gave her confidence. Her engagement with the iPad and the mobile technology developed enormously. In the beginning, she barely looked at the iPad, almost afraid to touch it. After four months, she took full control in the final mixing session, doing all the editing, volume and panning, automation, effect processing and use of plugins, and the final arranging of the song. She owned it. The process empowered her and gave her the courage to show the music to friends and family, something she also reflected on in the research interview following the project.
Case Vignette 2: Daniel
Daniel had a long history of complicated mental health problems, including depression, anxiety, aggression, difficulties with anger management, and suicidal thoughts. His short temper, mood swings, and destructive mind-rush resulted in social maladjustment and destructive behavior, such as self-cutting, drug abuse, and criminal behavior. In addition to regular sessions with his psychotherapist, he had a history of short-term hospitalizations due to his unstable condition.
When we started working on our music piece, we did not have a single, specific therapeutic goal. However, as Daniel learned to use the iPad, certain beneficial features became evident. In sessions, the iPad became the focus of his attention, and he managed to concentrate for long periods of time. Moreover, he participated in and was activated by the music-making on the iPad, despite having seemed both indifferent and uninterested when we first met. This engagement made the iPad a valuable tool at home as well, either as a time-filler or as a way to eliminate his destructive mind-rush. He said that he could sit for hours “…and just try out a bunch of stuff.” Coming home from a session, he typically went up to his room and continued making music until bedtime. He also reported episodes of insomnia due to mind-rush, where the iPad became a distraction that turned his attention towards music-making and away from the destructive thoughts that kept him awake.
Daniel’s use of apps was very different from Karen’s. While Karen preferred ambient and reverberant soundscapes, Daniel preferred more energetic musical expressions, with intense beats and distorted sounds. He discovered an app called Blocs Wave that matched his musical preferences and that enabled him to create beats using samples. Blocs Wave is a sample and loop sequencer that uses either samples from internal sample packs or personal samples imported from other apps or recorded directly into Blocs Wave. The samples can be used in live improvisation, or they can be combined and organized into different sections that can be played back as a song. Daniel made several sections at home by combining samples. Moreover, we used the recording function in Blocs Wave to make personal samples that we integrated with the beat. These sample combinations were then used to structure the musical piece into sections that were recorded into GarageBand and further developed by adding tracks from other apps.
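To make this workflow more concrete for readers unfamiliar with loop sequencers, the following is a minimal Python sketch of the general idea: loops (pack samples or personal recordings) are grouped into named sections, and sections are chained into a song structure. The class names, method names, and example loops are my own illustration and do not reflect Blocs Wave’s actual implementation or interface.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Loop:
    name: str   # e.g. a pack sample ("kick_pattern") or a personal recording
    bars: int   # length of the loop in bars

@dataclass
class Section:
    name: str                                        # e.g. "intro" or "drop"
    loops: List[Loop] = field(default_factory=list)

    def add(self, loop: Loop) -> "Section":
        """Layer another loop into this section."""
        self.loops.append(loop)
        return self

@dataclass
class Song:
    sections: List[Section] = field(default_factory=list)

    def arrange(self, order: List[str]) -> List[Section]:
        """Chain sections in a chosen order, as when structuring a piece."""
        by_name: Dict[str, Section] = {s.name: s for s in self.sections}
        return [by_name[name] for name in order]

# Combining pack samples with a personal recording, then structuring the piece.
intro = Section("intro").add(Loop("ambient_pad", 4))
drop = Section("drop").add(Loop("kick_pattern", 2)).add(Loop("personal_scream", 2))
song = Song([intro, drop])
print([s.name for s in song.arrange(["intro", "drop", "intro"])])  # ['intro', 'drop', 'intro']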
Daniel was able to make music because of the affordances of the apps. He did not know how to play a keyboard or make a drumbeat. However, he made a personal piece of music by using a sequencer and organizing samples into a musical structure of his choice. Moreover, he had ideas and suggestions about the musical expression, mood, and feeling of the music that I could help realize by introducing him to relevant apps. One example was when he played me a rap song from one of his Spotify playlists, featuring a male artist screaming phrases in Russian. Daniel liked the raw energy and the distorted sound. We included this element in our music by using a vocal sound generator app called VoxSyn to add spectral changes and sound modifications to a recording of a male voice screaming the song title. The recording was made directly into the VoxSyn app, and the sound manipulations were triggered by placing fingers on virtual pads spread out on the iPad screen. The outcome was an intense and distorted sound, adding personality and identity to the music.
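For readers curious about what spectral change and distortion involve at the signal level, the following is a rough Python/NumPy sketch of two generic manipulations of this kind: ring modulation (which shifts spectral energy to new frequencies) and hard clipping (which adds the harsh harmonics heard as distortion). It illustrates the general techniques only, under my own assumptions, and does not describe VoxSyn’s actual processing.

import numpy as np

SR = 44100                              # sample rate in Hz
t = np.arange(SR) / SR                  # one second of time

# Stand-in for a recorded voice: a tone with a little vibrato.
voice = np.sin(2 * np.pi * 220 * t + 0.5 * np.sin(2 * np.pi * 5 * t))

def ring_modulate(signal: np.ndarray, carrier_hz: float, sr: int = SR) -> np.ndarray:
    """Multiply by a sine carrier, moving energy to sum and difference frequencies."""
    carrier = np.sin(2 * np.pi * carrier_hz * np.arange(len(signal)) / sr)
    return signal * carrier

def hard_clip(signal: np.ndarray, drive: float = 8.0) -> np.ndarray:
    """Boost the waveform and clip it, producing a raw, distorted sound."""
    return np.clip(signal * drive, -1.0, 1.0)

processed = hard_clip(ring_modulate(voice, carrier_hz=90.0))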
Discussion
The case vignettes illustrate that the co-creative music-making activity established the iPad as a co-agent and a co-creator. The activity of co-creating music with the iPad was the nexus of the project, and the digital music technology tool became a vehicle for creating music, for meeting therapeutic goals, and for enhancing musical relationships. The participants highlighted portability and ease of use as the most prominent practical advantages of the iPad. They used it on the bus, in the car, in their homes, at school, and in connection with leisure-time activities. The iPad was part of their portable “toolkit”4 during the whole project, and it facilitated creative music-making without limitations of time and place. Accordingly, their musicking was not dependent on a time-limited, scheduled appointment with me or a therapist, nor on requirements concerning specially equipped music rooms or ideal acoustic recording conditions, such as in a studio facility. The portable toolkit afforded them an opportunity to make music anywhere and anytime.
This was conditioned by the iPad’s ease of use. The way the iPad facilitated straightforward sound recording, editing, and processing of samples, in addition to the easy access to virtual instruments and sounds, was emphasized as important by the participants.
Altogether, the iPad’s portability and ease of use affected creativity and enabled the participants’ experiences of environmental inspiration, which subsequently affected their music. Karen pointed out that the apps inspired her to create. She improvised a fair amount and recorded many sketches based on the affordances of the apps. The iPad “challenged me to make something,” she said; it was the source of her inspiration. This inspiration led to creative activity and a sense of “flow”—a concept introduced by Csikszentmihalyi (1975), describing “a type of consciousness where a person is powerfully engaged in a gratifying activity” (Silverman et al., 2016, p. 1332). This condition is considered rewarding and beneficial in relation to wellbeing and life satisfaction, and music-based activities can facilitate such conditions.5 The state of flow was evident both during co-creation in our sessions and as part of the individual work in between sessions.
Moreover, the case vignettes illuminate how the co-creative music-making activity influenced the participants’ experiences of the activity in relation to mental health and wellbeing. The introduction of the iPad as a co-agent challenges the human-technology relation and questions the role of the professional human agent, whether that is a music therapist, a music producer, or another professional agent. The following discussion focuses on issues concerning the human-technology relation and aims to examine the role and value of the iPad as a co-agent and a co-creator of adolescents’ mental health and wellbeing.
Actor-Network Theory
The cases illustrate that the iPad facilitated HCI that resulted in distinctive musical artifacts. The role of the iPad in this process could be analyzed simply as a tool controlled by the music maker, operating the interface in a causal series of events. However, this way of thinking about technology in music-making is not consistent with the technology’s autonomous and co-creational properties put forward both by modern HCI research and by research on technology in music production and music therapy, as outlined in the background section. Technology is not merely an “instrumental” tool or a medium for expressing musical ideas; rather, it should be considered an “organic” musical partner that contributes to the creative process of making music. This view is consistent with theories on creativity in popular music, discussing the systems approach to creativity and creative practice (Csikszentmihalyi, 2014; McIntyre, 2012, 2020), a genetic model of creativity, as Warner (2009) suggests, and perspectives on technology as a compositional tool (Eno, 2004; Théberge, 1997; Thompson, 2019).
My research supports this perception, based on the way in which the participants used and talked about the iPad. Musical relationships, therapeutic goals, inspiration, and creativity were triggered and developed by the iPad and its position as the nexus of the project. Furthermore, apps such as Bloom, Trope, and TC-11 were engaged as improvisational instruments, and because of their generative interface attributes, they acted as co-creating musical partners, enabling participants to make music that would never have been possible without the creative contributions made by the technology itself. In the interviews, the participants highlighted this when they described their dynamic relationship with the iPad as one of “musicking partners.” This demonstrates how the iPad facilitated HCI and became a co-agent in the process of creating music.
To further explain and understand the role of technology in such symbiotic relationships, it would be necessary to consider the social context and the sociology of technology. Zagorski-Thomas (2014) examined this human-technology relationship, arguing that if music is a process and not a thing, as Small (1998) suggests, then musical activity (i.e., musicking) should be analyzed as a social process involving people, technology, and the environment. According to Zagorski-Thomas, this ecological perspective informs a notion of technology as a co-agent, providing “affordances” (Gibson, 1979) that have a tremendous impact on how musicians work, how the music sounds, and how it is perceived by the environment.
As a social theory, actor-network theory (ANT; Latour, 2005) explores these relational ties between different agents within a network with the purpose of explaining how they interact. Since ANT insists on the capacity of nonhumans to be actors or participants in networks and systems, and emphasizes the equality between all relevant actors (“mediators”) in the network (whether human or nonhuman), it offers a framework to explain how human and nonhuman participants (“actants”) configure each other through their perception of and action upon affordances. This configuration, moving back and forth between the “actants,” occurs when they start acting as a whole. When my participants found apps that they liked, felt attached to, or believed were applicable to our musicking activity, one finger on the iPad’s surface would initiate a series of sonic events (or actions), often unpredictable and “uncontrollable,” that led to a reaction from the human actor. The “actants” consequently began to act as a whole, developing symbiotic relations that, among other things, led to experiences of flow. Moreover, this HCI (or “configuration”) led to musical dialogue, joint human–computer improvisations, and a co-creational creativity between humans, technology, and the environment—equivalent to an actor-network system. The role of the iPad, as an entity that explicitly made a difference in this co-creational network, accentuates my point that, according to ANT, the iPad must be considered a relevant actor and a “mediator” in this mode of computational music-making. The iPad makes a difference; the benefits are inspiring ways of musical interaction, co-creation, and new virtual possibilities that stretch the boundaries of reality, where technology is capable of taking on “a life of its own.”
The intention of this brief analysis of the iPad’s role and value in my project is to position the iPad as a co-agent within a socially constructed actor-network system. The cases illustrate that the human-technology interaction enhances the music-making process and that people, technology, and the environment mutually benefit from the process. In the light of this, the following question then arises regarding the value of the iPad in terms of mental health and wellbeing: Apart from being a co-agent of music-making, could the iPad also be a co-agent and co-creator of mental health?
Agency
The cases illustrate that the iPad facilitates agency in the participants. Agency can be simply defined as the ability to produce an (intended) effect; it is the capacity to influence one’s actions and to exercise control over one’s thought processes, motivation, and affect. One of the most influential socio-cognitive psychologists, Albert Bandura (2001, 2002), highlights the important health-promoting causal effects of strong personal agency. Being in control, and exercising that control by being (pro-)active, is empowering. According to Bandura, agency is strengthened through “intentionality and forethought, self-regulation by self-reactive influence, and self-reflectiveness about one’s capabilities, quality of functioning, and the meaning and purpose of one’s life pursuits” (Bandura, 2001, p. 1).
In this respect, the iPad is a strong facilitator and co-agent for strengthening agency. The immediate sonic effect of an intended action exercised towards a music app provides immediate feedback about the choice of action and the consequences of that choice, because the sound and the musical context change. The human actor consequently applies a self-reflective value judgement to the consequences of a decision—asking questions such as, 1) Do my actions provide the desired effect? and 2) Do they improve the music or not?—and he or she is held responsible for that decision. This self-reflectiveness regarding the causal effects of an action is an experience that, according to socio-cognitive theory, enhances consciousness about one’s capabilities and quality of functioning. Vulnerable adolescents who suffer from mental illness often experience a lack of agency in general, and music could be a resource to strengthen that agency. “Small steps in ‘musical agency’ (e.g., understanding how music impacts me, how I would want it to impact me, and how I can achieve that) may lead to further steps in agency in general” (Saarikallio, 2019, p. 95). The interactive communication between the human agent and the co-agent (in this case, the iPad) is essential, and it exemplifies the role of the nonhuman agent within this HCI.
Several of the participants reported a strong feeling of agency and control, especially at the end of the project. This was confirmed by my observations: all participants demonstrated an increased belief in their own ability to influence their actions and to exercise control over those actions when working with music on the iPad. A prime example would be the way Karen took control of the mixing process, making artistic choices, executing technical changes and sound editing, and reflecting on the aesthetic result. I suggest that the ease of use and the multitude of possibilities and choices offered by the apps are major advantages in turning the iPad into a powerful tool for strengthening agency and experiencing empowerment.
From a health-promoting perspective, empowerment is about recognizing people’s self-understanding and competence in their own life, and it facilitates patient participation as a valuable resource for recovering from mental illness. This process involves dialogue, participation, and the mobilization of resources, with the goal of increasing patients’ capacity to take control of the factors that affect wellbeing and enabling them to make beneficial changes (Tveiten, 2017, pp. 48–49). I suggest that the iPad strongly facilitates this participation. First, tablets and smartphones are familiar tools for adolescents, which means that these individuals already possess strong agency in relation to the tools, and the technological threshold is consequently low. Although some of my participants reported fear of the music apps at the beginning of the project, the iPad itself (and particularly GarageBand) was a familiar starting point for exploring new possibilities for musical expression. Furthermore, the iPad contributes to leveling the underlying power structure between patient and therapist, and it shifts the role of the expert towards the patient. This renegotiation of power, where the individual’s lived experience and skills become a source of shared expertise, is highlighted in recovery and resource-oriented music therapy literature as a basic value and premise that empowers the patient (Anthony, 1993; Deegan, 1997, 2001; McCaffrey et al., 2018; Rolvsjord, 2010; Solli, 2012, 2015; Stensæth, 2014a). This is also strengthened by the co-creation structure of the project, where my role as participant and co-creator was to share my expertise, empower the participants, and strengthen their agency by uncovering resources that they could use in their recovery process. My research suggests that the multifaceted, mostly intuitive apps invited the participants to engage in creative activities that, for all of them, revealed new resources they did not know they had, hence mobilizing resources that could also be part of their recovery.
The previously described core features of agency enable people to participate in their recovery. Bandura (2001) suggests that activities that promote these features (such as this iPad project) consequently enhance self-efficacy. From this perspective, active participation in one’s own recovery seems to be crucial in promoting wellbeing. However, participation is dependent on agency: people need both an opportunity to act and a willingness to do so, based on prospects of a beneficial outcome; they need to believe that their actions matter. Therefore, efficacy beliefs “are the foundation of human agency. Unless people believe they can produce desired results by their actions, they have little incentive to act” (Bandura, 2001, p. 10).
In my experience, the co-creation model used in this project facilitated motivation from the co-agents (human and nonhuman), which contributed to the participants’ motivation and beliefs. The relational and dialogical attitude, as well as the amount of time spent together, seemed equally important. This was also highlighted by the participants’ therapists as a decisive quality of the project: they suggested that having an expert all to themselves for several months provided the individuals with a stable and secure environment for experimentation and reflection. In addition, I suggest that establishing a clear goal at the beginning of the project, and then following it by making an artifact—a personal piece of music—was equally important for motivation. This is consistent with some of the literature on music technology and therapeutic songwriting that highlights the benefits of recording (Kirkland & Nesbitt, 2019; McFerran et al., 2019; Sadovnik, 2014; Viega, 2018, 2019; Weissberger, 2014) and discusses the role of the artifact (Baker, 2015, pp. 22–23; Rolvsjord, 2010, pp. 194–196):
The permanency of the artifact may assist the songwriter to experience a sense of accomplishment and self-esteem as he reflects on the tangible object, a product of his creativity and a synthesis of a process he has experienced […] The song provides concrete evidence that he can successfully complete tasks in life and serves as a reminder of his ability and capacity for achievement, irrespective of disability. (Baker, 2015, p. 23)
Affordance and Appropriation
To strengthen personal agency, one must be exposed to opportunities for goal-directed action. Activities that promote creativity and interaction, such as music-making with tablets or smartphones, are carriers of musical affordance that provide such opportunities. One perspective that examines this relationship between music and humans is the theory of affordance.
The concept of affordance dates back to the ecological psychologist James J. Gibson (1977, 1979), and it was later adopted by many scholars in other research fields to understand why we act the way we do and how we interact with our environment.6 Most relevant for this paper is analyzing how technological affordance influences the way we engage with music and how this in turn can benefit mental health and wellbeing. My cases illustrate that the interface technology of the iPad affords specialized, diverse, and often straightforward means of musical interaction that meet an adolescent’s individual needs, independent of former knowledge or skills. This can, in turn, be used to promote health.
Mooney (2010) suggests a model in which tools of music-making (so-called “frameworks,” including both physical and conceptual tools) are viewed in terms of what they allow us to do (their “affordances”). These frameworks and the affordance model allow us to see the impact that tools have on the creative process and the resulting music, based on how we engage with the tools to achieve musical outcomes. Mooney suggests that because every framework requires both knowledge and skills to be appropriated by a human agent, there will be individual differences both in perception and in the ability to realize the affordances. Some of them are easier to achieve than others, and by ordering the affordances from “easiest” to “most difficult,” we obtain the “spectrum of affordance” for that framework: “Put simply, every tool has a range of things it allows us to do, and some of those things can be done more easily than others” (Mooney, 2010, p. 146).
Although Mooney applies the model to music-making and education, the model may also help us to understand the benefits of music technology in relation to health. First, Mooney highlights that the choice of frameworks has an obvious influence on the musical result, since the tools shape the product. The choice of instrument, technical equipment, or concepts (such as notation style, tuning, harmonic modes, or structural principles) will provide a set of affordances, as well as a set of constraints. The iPad’s affordances (or, more simply, what it offers) are a diversity of individually designed apps that offer various ways in which to make, process, and organize sound. These apps naturally influence the music that is made, both formally and expressively; however, in contrast to a piano, which has a limited spectrum of affordances, the iPad’s range of affordances is almost limitless. The iPad’s versatile qualities for making sound, ranging from environmental samples and acoustic instrument emulations to electronic soundscapes and drumbeats, offer different affordances compared to other frameworks. Working with music-making and mental health, the iPad offers countless options for expressing mood, emotions, and identity that can be individually adapted. My research suggests that the participants appropriated these affordances in a highly individual manner, resulting in a personal piece of music towards which they expressed a high degree of ownership, pride, and recognition. The music resembled their personality and was considered an artifact that gave tangible form to the process they underwent. The iPad facilitated this process in a versatile and dynamic way, strengthening agency, empowerment, and self-efficacy.
A second point discussed by Mooney is that the process of making music is closely related to the spectrum of affordances—the tools also shape the process. It seems important that the affordances are perceptible and that they are not experienced as too difficult to use. As Gaver (1991) points out, this is partly a matter of design:
Perceptible affordances are inter-referential: the attributes of the object relevant for action are available for perception […] What is perceived is what is acted upon […] From this point of view, interfaces may offer perceptible affordances because they can offer information about objects which may be acted upon. (Gaver, 1991, p. 81)
This implies that there is a causal connection between design, perception, the spectrum of affordance, and the way we act. Furthermore, this affects the product as well, as discussed above.
On the iPad, each app can be considered a framework of its own, with a spectrum of affordances. Some apps are organized as sequencers, playing rhythmic patterns and musical loops organized around a metric beat and a pre-selected number of bars, while others offer more expressive and improvisational ways of making soundscapes free from the metric system. A third group of apps take the form of virtual music instruments or synthesizer emulations, with piano keys, strings, or drumheads, while others are virtual effect-units or sound-processing devices with knobs, buttons, and faders. The design of the apps, the sounds, and the possibilities for interaction provide the spectrum of affordance.
My research suggests that the interaction between the user and the apps is strongly dependent on the individual perception of the affordances of the apps. Since the iPad has physical and tactile limitations because of the touchscreen, app design becomes important in making affordances perceptible. The apps that were most intuitive and that responded immediately with perceptible auditory changes were the most appealing to my participants. However, they did not choose the same apps; given their individual musical preferences, prior experience, and skills, they all chose different methods of engagement. They rapidly left the apps they did not understand in favor of the ones they mastered. Moreover, they expressed frustration in relation to affordances that were too difficult to appropriate, and they expressed surprise when I demonstrated new features of the apps that they had missed on their own. This exemplifies the spectrum of affordance and how important it seems that an app’s design and technical interface present a manageable affordance for the individual user in order to be considered useful.
My experience was that the iPad’s affordance for music-making, especially when working with patients experiencing apathy and depression, was dependent on the co-creation model. My expertise and experience with music technology and music-making was consequently important to enable appropriation for the participants. However, as soon as they were presented with some opportunities, the iPad facilitated experimentation, empowerment, and mastery, and it functioned as a versatile and flexible co-agent in the process of creating music.
Co-creation of Mental Health
Co-creation means creating something together. It thus involves a process of creative activity (creating), socially contextualized as a collaboration between several agents (together), that consequently moves towards a goal (something). In addition to spontaneous and random creative factors, such as playing, listening, exploring, and composing, Eide (2014) recognizes that co-creation also involves more deliberate collaboration, where people are socially motivated to act towards a common goal: the creation of “something third” (p. 125). It is natural to assume that concerning music-making, “something third” refers to the artifact (a piece of music). However, music therapists also describe “something third” as an intersubjective moment of meeting, something that is both invented and discovered and that exists on its own terms (Eide, 2014, p. 125). Co-creation can hence result in meaningful meetings between people, changes in relationships, shared experiences of discovering personal resources and potential, and changes concerning our experience of ourselves. If we, according to the humanistic health approach, consider health as an experience of wellbeing, then we can assume that this healthy experience can be facilitated by being co-creative. The co-created third’s duality as both invented and discovered enables experiences that are potentially health-promoting. In other words, we might say that creating something together, being an agent in a co-creative process, can potentially result in experiences of wellbeing, and we could consequently talk about the co-creation of mental health.
I have argued that this co-creational activity takes place within a network of human and nonhuman agents, and my research suggests that the iPad’s affordances play a crucial role in facilitating the creation of “something third.” Furthermore, I suggest that for the participants, this co-creative process has resulted in meaningful meetings between people, where new personal resources and potential have been discovered, perspectives have changed, and agency has been strengthened, leading to health-promoting experiences of wellbeing. Such experiences of wellbeing, facilitated by co-creation between humans and technology, demonstrate that the iPad’s affordances and role in this process are crucial not only to co-create music, but also to co-create mental health.
Similar findings are presented in the RHYME project (Stensæth, 2014a). The co-creative tangibles studied in this project are recognized as actors in the process of co-creation; they establish relations with the other actors and are considered equal partners and co-agents in the collaboration, which leads to health-musicking and health-promoting co-creation. In other words, co-creation implies health-musicking (the process of continuously promoting health), a strengthening of agency and mastery, and empowering interaction both with the tangible and with other people (Stensæth, 2014b, p. 74). However, in the editor’s foreword, to further promote healthy interaction between humans and technology, Stensæth calls for device flexibility, universal design, interdisciplinarity, and a common ground for understanding. Accordingly, she asks, “Who knows, perhaps our future home environments will have musical and interactive media that can operate as agents of health promoting co-creation?” (Stensæth, 2014a, p. xiii).
I argue that music-making with tablets and smartphones is one answer to that question. These computers—being in our pockets and following us wherever we go—are not only part of our homes, but also a significant part of our lives. Furthermore, the flexibility of the iPad, the apps, and the affordances brought forward by sonic and visual design, leading to musical artifacts of great personal and artistic integrity, demonstrate that the iPad vastly surpasses traditional musical instruments in its interactivity. The iPad therefore possesses great potential to enable concrete and tangible health-promoting co-creation.
Conclusion
The purpose of this paper was to describe how adolescents interact with music technology to promote health and wellbeing. The case vignettes illustrate the ways in which the iPad serves as a co-agent and co-creator of mental health.
I argue that technology can be a co-agent and a beneficial component for adolescents’ mental health and wellbeing. This paper suggests that the iPad facilitates musical creativity, participation, engagement, and motivation by affording specialized, diverse, and often straightforward means of musical interaction that meet each adolescent’s individual needs. This flexibility is afforded without limitations of time and place; it can be accessed wherever and whenever one needs it—it is in one’s pocket. This activity consequently empowers the adolescents, strengthens their agency, and promotes self-efficacy.
However, the beneficial qualities of music technology are not pre-given. Instead, they reveal themselves in a cultural and social context. In Norway, 69% of the population have access to a tablet and 95% to a smartphone (Statistics Norway, 2020). Easy access to mobile technology is a crucial condition for appropriating these tools as co-agents for mental health. Even though there are 3.5 billion smartphone users worldwide (Statista, 2020), the privilege of access described in the Norwegian context is not shared globally. Moreover, young people’s appropriation of musical affordances is shaped by contextual factors; environmental conditions; and the values, roles, and beliefs of adults and peers. Collaboration and interaction between patient, expert, and technology is thus crucial to enable every co-agent to participate beneficially in the activity and to secure a health-promoting outcome. This highlights the importance of a dialogical approach, where co-creation and a resource-oriented focus are integrated into the human-technology interaction.
Finally, music technology tools, such as tablets and smartphones, are potentially health-promoting, and this paper demonstrates how they can participate in the co-creation of mental health. In this sense, the therapist-in-a-box metaphor can be useful to describe the potential of this technology in therapy, not as a replacement for human beings, but as a valuable co-agent.
Glossary
- Adaptive music generator: software that uses algorithms to automatically generate interactive musical inputs.
- Applications (apps): software for tablets and smartphones.
- Artificial intelligence (AI): non-biological intelligence, using deep learning to provide computers and computer programs with optimized, intelligent responses to different tasks.
- Automation: having a DAW automatically perform tasks over time by playing back the recorded and edited movements of faders, knobs, and switches to create changes to volume, pan, and other track settings.
- Beats: a term used in contemporary songwriting to signify a song track that consists of rhythmic, harmonic, and melodic elements from samples and music software, produced by a so-called beat maker. The lead melody and lyrics are added by a topliner.
- Coding: writing the underlying instructions (code) that run a computer program or an application.
- Deep learning: computer-generated learning based on large quantities of data that enables the computer to adapt to certain patterns of “behavior.”
- Digital audio workstation (DAW): software for music production.
- Generative interface: software that uses input signals to generate potentially infinite music, either by itself or in interaction with a user.
- Mixing: describes a process where all the edited and processed tracks of a musical piece are organized and combined into a final mix.
- Music instrument emulations (or virtual music instruments): software reproductions of hardware music instruments.
- Panning: the distribution of a sound signal into a stereo or multi-channel sound field. In popular music production, panning is used to create space in a mix by positioning sounds in the left-to-right spectrum of a stereo image (see the sketch following this glossary).
- Plugin: a software program that works inside a host DAW, such as effects or virtual instruments implemented to add or enhance audio-related functionality.
- Randomizer: a tool that randomizes sound inputs or automatically changes different qualities of the sound during playback.
- Samples: recorded sounds, either cut from existing music or taken from a field recording. The sample is often trimmed, re-arranged, or processed to fit the musical context of the piece.
- Sequencer: an analog or digital tool used to organize multiple sound inputs into rhythmic patterns (or sequences) on a metric grid (so-called steps).
- Sound editing: describes a process where samples or programmed and recorded tracks are individually edited (cut, trimmed, split, faded, equalized, etc.) to produce quality sounds for processing and mixing.
- Sound processing: describes a process where analog or digital tools are used to manipulate sounds by adding effects (such as reverb, delay, modulation, frequency filters, distortion, etc.) or other modifications that transform the original input into the desired output of that sound (see the sketch following this glossary).
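As a simple illustration of the panning and sound processing entries above, the following is a minimal Python/NumPy sketch, assuming a mono signal stored as an array: an equal-power pan into a stereo field and a basic feedback delay effect. It shows the underlying ideas only; DAWs and plugins implement these processes far more elaborately.

import numpy as np

SR = 44100
signal = np.sin(2 * np.pi * 440 * np.arange(SR) / SR)   # one second of a 440 Hz tone

def pan(mono: np.ndarray, position: float) -> np.ndarray:
    """Equal-power pan: position -1.0 = hard left, 0.0 = centre, 1.0 = hard right."""
    angle = (position + 1.0) * np.pi / 4.0               # map [-1, 1] to [0, pi/2]
    left = np.cos(angle) * mono
    right = np.sin(angle) * mono
    return np.stack([left, right], axis=1)               # shape (n_samples, 2)

def delay(mono: np.ndarray, time_s: float = 0.25, feedback: float = 0.4) -> np.ndarray:
    """Mix delayed copies of the signal back in, each echo quieter than the last."""
    out = np.copy(mono)
    offset = int(time_s * SR)
    echo = mono * feedback
    while offset < len(mono) and np.max(np.abs(echo)) > 1e-4:
        out[offset:] += echo[: len(mono) - offset]
        echo *= feedback
        offset += int(time_s * SR)
    return out

stereo = pan(delay(signal), position=-0.3)               # echoing tone, slightly left of centre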
About the Author
Kjetil Høyer Jonassen is a Cand. Philol. in music and an assistant professor in music at Ansgar University College in Kristiansand, Norway. He is currently pursuing a PhD in popular music at the University of Agder, Faculty of Fine Arts, Department of Popular Music, where he is doing research on mobile music technology in mental health care. This research is done in cooperation with Sørlandet Hospital (SSHF), Department of Child and Adolescent Mental Health (ABUP) in Kristiansand, Norway. Additionally, he works as a keyboard player, live and session musician, arranger, and composer.
Notes
[1] The term in-a-box carries connotations of digital music programs that, during the past 30 years of digitalization, have emerged as powerful tools for music-making. These programs offer everything one needs in one place, making outboard equipment redundant. Band-in-a-Box®, for instance, plays popular music accompaniments and chord progressions in any genre, making the band superfluous. Moreover, all the major digital audio workstations (DAWs) on the market allow individuals to mix and master their audio productions using the computer as their only tool—often referred to as mixing in-the-box.
[2] This is also discussed in music therapy, concerning the position of the music therapist profession in health care, whether as institutionalized music therapy, community music, or public health. See, for instance, Ansdell & DeNora (2012), Daykin (2012), and Ruud (2012).
[3] The app library contained 70 music apps, including GarageBand, iMaschine 2, DM1, Patterning, Figure, Blocs Wave, Loopy HD, Scape, GrainProc, Fugue Machine, Cassini, nave, Thor, Animoog, iDensity, AUM, Final Touch, SP Link Edition, Audiobus 2, Bloom, TC-11, Trope, Thicket, AudioShare, Borderlands, and Samplr.
[4] I have borrowed this term from Patricia Deegan, an American psychologist living with schizophrenia and a central figure in the mental health recovery movement. She explains the toolkit as a collection of recovery strategies and self-care skills that helps her to cope with life (Deegan, 2001, pp. 11–14).
[5] We find similar perspectives in other resource-oriented approaches, such as positive psychology (Seligman & Csikszentmihalyi, 2000; Snyder & Lopez, 2002), the empowerment philosophy (Dalton et al., 2001; Zimmermann, 2000), and resource-oriented music therapy (Rolvsjord, 2010). Furthermore, recent affective neuroscience emphasizes that playfulness, curiosity, and joy provide powerful health affordances associated with the focus on health resources (Burgdorf & Panksepp, 2006; Panksepp, 2010).
[6] For example, in addition to the literature examined here, we find discussions of the theory of affordance in close relation to actor-network theory (Latour, 2005), the sociology of music (DeNora, 2000), music therapy (Daykin et al., 2017; Rolvsjord, 2010; Stensæth, 2014a), music psychology (Krueger, 2011), and popular musicology (Zagorski-Thomas, 2014), which are the disciplines most relevant to this paper.
References
Anthony, W. A. (1993). Recovery from mental illness: The guiding vision of the mental health service system in the 1990s. Psychosocial Rehabilitation Journal, 16(4), 11–23. https://doi.org/10.1037/h0095655
Antonovsky, A. (1996). The salutogenic model as a theory to guide health promotion. Health Promotion International, 11, 1–18. https://doi.org/10.1093/heapro/11.1.11
Bandura, A. (2001). Social cognitive theory: An agentic perspective. Annual Review of Psychology, 52(1), 1–26. https://doi.org/10.1146/annurev.psych.52.1.1
Berger, R., & McLeod, J. (2006). Incorporating nature into therapy: A framework for practice. Journal of Systemic Therapies, 25(2), 80–94. https://doi.org/10.1521/jsyt.2006.25.2.80
Brey, P. (2005). The epistemology and ontology of human-computer interaction. Minds and Machines, 15, 383–398. https://doi.org/10.1007/s11023-005-9003-1
Brown, A. R. (2016). Performing with the other: The relationship of music and machine in live coding. International Journal of Performance Arts and Digital Media, 12(2), 179–186. https://doi.org/10.1080/14794713.2016.1227595
Burgdorf, J., & Panksepp, J. (2006). The neurobiology of positive emotions. Neuroscience and Biobehavioral Reviews, 30, 173–187. https://doi.org/10.1016/j.neubiorev.2005.06.001
Cooren, F., Thompson, F., Canestro, D., & Bodor, T. (2006). From agency to structure: Analysis of an episode in a facilitation process. Human Relations, 59(4), 533–565. https://doi.org/10.1177/0018726706065373
Daykin, N., de Viggiani, N., Moriarty, Y., & Pilkington, P. (2017). Music-making for health and wellbeing in youth justice settings: Mediated affordances and the impact of context and social relations. Sociology of Health & Illness, 39(6), 941–958. https://doi.org/10.1111/1467-9566.12549
Deegan, P. E. (1997). Recovery and empowerment for people with psychiatric disabilities. Social Work in Health Care, 25(3), 11–24. https://doi.org/10.1300/J010v25n03_02
Deegan, P. E. (2001). Recovery as a self-directed process of healing and transformation. Occupational Therapy in Mental Health: A Journal of Psychosocial Practice & Research, 17, 5–21. https://doi.org/10.1300/J004v17n03_02
Gaver, W. W. (1991). Technology affordances. In S. P. Robertson, G. M. Olson, & J. S. Olson (Eds.), CHI91: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 79–84). Association for Computing Machinery, New York, NY. https://doi.org/10.1145/108844.108856
Kirkland, K., & Nesbitt, S. (2019). The therapeutic value of recording in music therapy for adult clients in a concurrent disorder inpatient treatment facility. Voices: A World Forum for Music Therapy, 19(2). https://doi.org/10.15845/voices.v19i2.2636
Krueger, J. W. (2011). Doing things with music. Phenomenology and the Cognitive Sciences, 10, 1–22. https://doi.org/10.1007/s11097-010-9152-4
McCaffrey, T., Carr, C., Solli, H. P., & Hense, C. (2018). Music therapy and recovery in mental health: Seeking a way forward. Voices: A World Forum for Music Therapy, 18(1). https://doi.org/10.15845/voices.v18i1.918
Mooney, J. (2010). Frameworks and affordances: Understanding the tools of music-making. Journal of Music, Technology and Education, 3(2/3), 141–154. https://doi.org/10.1386/jmte.3.2-3.141_1
Seligman, M., & Csikszentmihalyi, M. (2000). Positive psychology: An introduction. American Psychologist, 55(1), 5–14. https://doi.org/10.1037/0003-066X.55.1.5
Silverman, M., Baker, F. A., & MacDonald, R. A. R. (2016). Flow and meaningfulness as predictors of therapeutic outcome within songwriting interventions. Psychology of Music, 44(6), 1331–1345. https://doi.org/10.1177/0305735615627505
Smaradottir, B., Gerdes, M., & Fensli, R. (2015). User-centered design of a COPD remote monitoring application: Experiences from the EU-project United4Health. International Journal on Advances in Software, 8(3/4), 350–360. http://hdl.handle.net/11250/2381422
Solli, H. P. (2012). Med pasienten i førersetet. Recovery-perspektivets implikasjoner for musikkterapi i psykisk helsearbeid [With the patient in the driver’s seat: Implications of the recovery perspective for music therapy in mental health care]. Musikkterapi i psykiatrien online, 7(2), 23–44. https://www.researchgate.net/publication/261522215_Med_pasienten_i_forersetet_Recovery-perspektivets_implikasjoner_for_musikkterapi_i_psykisk_helsearbeid
Solli, H. P. (2015). Battling illness with wellness: A qualitative case study of a young rapper’s experiences with music therapy. Nordic Journal of Music Therapy, 24(3), 204–231. https://doi.org/10.1080/08098131.2014.907334
Statista. (2020, August 20). Number of smartphone users worldwide from 2016 to 2021. Retrieved August 29, 2020, from https://www.statista.com/statistics/330695/number-of-smartphone-users-worldwide/
Statistics Norway. (2020, May 19). Fakta om internett og mobil [Facts about internet and mobile phones]. Retrieved August 8, 2020, from https://www.ssb.no/teknologi-og-innovasjon/faktaside/internett-og-mobil
Stensæth, K. (2010). Å spele med hjartet i halsen [To play with one’s heart in one’s mouth]. In K. Stensæth, A. T. Eggen, & R. S. Frisk (Eds.), Musikk, helse, multifunksjonshemming [Music, health, multiple disability] (Vol. 3, pp. 105–128). NMH-publications 2010, 2, Series from the Centre for Music and Health.
Stensæth, K. (2014b). Potentials and challenges in interactive and musical collaborations involving children with disparate disabilities. A comparison study of how Petronella, with Down syndrome, and Dylan, with autism, interact with the musical and interactive tangible ‘WAVE’. In K. Stensæth (Ed.), Music, health, technology and design (pp. 67–96). Series from the Centre for Music and Health, Vol. 8. NMH-publications 2014:7.
Stensæth, K. (2018). Music therapy and interactive musical media in the future: Reflections on the subject-object interaction. Nordic Journal of Music Therapy, 27(4), 312–327. https://doi.org/10.1080/08098131.2018.1439085
Viega, M. (2018). A humanistic understanding of the use of digital technology in therapeutic songwriting. Music Therapy Perspectives, 36(2), 152–160. https://doi.org/10.1093/mtp/miy014