Interview: Sound Mind installation – visualising your response to music
Providing an insight into the inner state of the human brain when listening to music, and presenting a creative audio/visual spectacle to boot, Sound Mind is a neurofeedback installation like no other. We caught up with its builders to learn more…
We all know how music makes us feel inside, but what if we could peer inside our heads and see, visually, how our brain waves are reacting to different forms of music? Part scientific exercise and part artistic installation, Sound Mind has been designed to lift the lid on precisely this all-important interaction.
Conceived by dBs Institute graduate Mark Doswell, and serving as the major project for his Innovation in Sound MA, Sound Mind ‘paints’ the activity in the human brain via LED lighting arrayed across a large dome-like structure. Based in Bristol, Mark enlisted teammates Rory Pickering and Jim Turner to construct this futuristic dome.
Doswell crafted a slick signal chain, beginning with a consumer-grade electroencephalogram (EEG) headset, running through a brainwave-organising application, into Ableton Live and then out to light-mapping software, which triggers different LED lighting states across the structure.
Keen to dig further into this fascinating foray into brain/music interfacing, we spoke to Mark and his team to find out more…
AMI: Hi guys, firstly what was the starting point for Sound Mind, and had the world of neurofeedback been of interest to you generally?
Mark Doswell: There were a few starting points, really. I was surprised to find out that there were consumer-grade EEGs available on the market; there was one that was used to aid meditation and therapy. After that I discovered a third-party app called Mind Monitor, which allows you to send OSC (Open Sound Control) messages that you can then pick up inside software like MaxMSP or Max for Live (inside Ableton Live). Both of these things were quite exciting to me.
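For the curious, here is roughly what that OSC hookup looks like outside of Max. This is a minimal Python sketch, not part of the Sound Mind build itself, using the python-osc library; the /muse/elements/*_absolute addresses follow Mind Monitor’s documented scheme, but the port and exact address list should be checked against the app’s settings.

```python
# Minimal receiver for Mind Monitor's OSC band-power stream.
# Assumes python-osc is installed (pip install python-osc) and that
# Mind Monitor is configured to stream to this machine on port 5000.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_band_power(address, *values):
    # e.g. address = "/muse/elements/alpha_absolute";
    # values = one reading per sensor, roughly -1..1 on a log scale
    band = address.rsplit("/", 1)[-1]
    print(f"{band}: {values}")

dispatcher = Dispatcher()
for band in ("delta", "theta", "alpha", "beta", "gamma"):
    dispatcher.map(f"/muse/elements/{band}_absolute", on_band_power)

server = BlockingOSCUDPServer(("0.0.0.0", 5000), dispatcher)
server.serve_forever()  # Ctrl+C to stop
```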
I’d built a biosensor before to use on plants, in the hopes of making music with them. I used it on myself at one point, and then started to wonder what other biosignals you could use in a musical context. I played around with my heartbeat and galvanic skin response before I thought it’d be cool to scan human brainwaves.
AMI: At what point did the Sound Mind project find its feet then, and how did the team come together?
Mark: I met Rory at Hackspace, and Jim is an old friend of mine. Hackspaces are cool creative places equipped with laser cutters and 3D printers; they’re great for facilitating ideas. I started talking to Rory about my idea of illuminating a brain via EEG, and he explained how he typically makes light installations. Then we became collaborators.
Rory Pickering: I’d been building a few things using LEDs and I’d always wanted to do something with music. I heard Mark’s idea and just thought it sounded very cool. For quite a while we were talking about building a literal brain that sits above somebody’s head; over time we realised it didn’t need to be quite so literal. It’s more an abstract representation.
Mark: Studying at dBs forces you to get stuff done, and the fact that we had this deadline, as it became my major project, meant we had a motivating force. The Innovation in Sound course was great, and it was really useful for showing me what MaxMSP was capable of.
Rory: I’d never heard of dBs before getting involved with this project, but they were very encouraging, and facilitated our mad idea. I was quite impressed by the space and the people.
AMI: So what are we seeing when we’re watching the colours light up? Are they representing emotional responses?
Rory: We had five channels of incoming data (corresponding to different brainwaves), and the hardest part was mapping these to visual parameters. The data stream that indicates excitement, say, we might map to a visual parameter that is indicative of that state of mind – like a strobe effect, or the speed at which some kind of LFO in the visuals is scaled. We used several different lighting programs per track, and we’d change the mapping for different songs, so you get quite interesting results. It also varies depending on the person.
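Rory’s mapping idea is straightforward to sketch in code. The snippet below is an illustrative Python sketch, not the team’s actual Max for Live patch: it remaps one incoming band-power stream to one visual parameter and smooths it so the lights don’t twitch on every noisy sample. All the ranges and names here are hypothetical.

```python
# Hypothetical mapping from one brainwave channel to one visual parameter.

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly remap value from one range to another, clamped."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

class SmoothedMapping:
    """One data stream -> one visual parameter, with exponential smoothing
    so the lights don't flicker on every noisy sample."""
    def __init__(self, out_lo, out_hi, smoothing=0.9):
        self.out_lo, self.out_hi = out_lo, out_hi
        self.smoothing = smoothing
        self.state = 0.0

    def update(self, band_power):
        # Mind Monitor's absolute band powers sit roughly in -1..1
        target = scale(band_power, -1.0, 1.0, self.out_lo, self.out_hi)
        self.state = self.smoothing * self.state + (1 - self.smoothing) * target
        return self.state

# Per-song mappings: the same streams can drive different parameters.
strobe_hz = SmoothedMapping(0.5, 12.0)  # an "excited" channel -> strobe speed
lfo_rate  = SmoothedMapping(0.05, 2.0)  # a "relaxed" channel -> LFO speed
```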
Mark: At the moment, we do know that alpha waves are more active during a music-listening session, or during relaxation or meditation, so we can demonstrate this. It’s also true that gamma waves are more likely to appear when stressed. We were focusing on emphasising this, but then we realised that the best approach was to balance the science with the art. Ultimately, we wanted to make it a creative installation.
Sound Mind is not yet mapped to brain regions. So, if you’re processing a certain element of music, like rhythm, the left hemisphere of your head should probably be the most active. This is something we’re looking at doing for the next iteration, though.
AMI: So Rory and Mark were responsible for the concept and technical set-up, and Jim was tasked with building the structure itself?
Jim Turner: Yeah, I designed the structure of it. I was throwing ideas out to Mark and Rory over a weekend. The whole thing was made on a very low budget, so we had to be creative to make it look impressive, and give it an angularity that displayed the ideas we had. Overall it took three to six months.
Mark: Over half of that time was deciding where to go with the structure. We didn’t want to do anything that had been done before shape-wise, which made it quite challenging.
AMI: What was the first test? And I guess a big question is: how do participants interface with it?
Mark: So we use the Muse headband; it’s designed for meditation but it’s a four-channel EEG, and it’s surprisingly reliable – there are a lot of academic papers written on it. So we used that as our brain-scanner. It sends data to my phone, which runs an app called Mind Monitor that renders the incoming EEG data. That’s sent via OSC to Ableton Live to automate some Max for Live devices, and from there to the video-mapping and light-projection suite MadMapper.
Rory: It was Mark’s girlfriend who first tried it out. She recorded her brainwaves into Ableton Live, so then we had a recording to work with. Even though we were bending the DAW to a new purpose, it became our main way of organising the control data, whereas the visuals were determined by MadMapper, taking the MIDI from Live.
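In the installation that glue lived inside Live as Max for Live devices, with MadMapper listening for MIDI. As a rough stand-in for what that layer does, here is a Python sketch using the mido library to turn a normalised brainwave value into a MIDI CC; the port name below is hypothetical, and this is not the team’s actual routing.

```python
# Rough stand-in for the Live/Max for Live glue: take a normalised
# brainwave value and forward it to MadMapper as a MIDI CC message.
# Assumes mido and python-rtmidi are installed; the port name is a
# hypothetical example - use whatever virtual MIDI port MadMapper sees.
import mido

out = mido.open_output("IAC Driver Bus 1")  # example macOS virtual port

def send_brain_cc(value, control=1, channel=0):
    """value: a 0.0-1.0 brainwave reading, mapped to a 0-127 MIDI CC."""
    cc = max(0, min(127, int(value * 127)))
    out.send(mido.Message("control_change",
                          channel=channel, control=control, value=cc))

send_brain_cc(0.72)  # e.g. smoothed alpha power -> CC1 -> a MadMapper effect
```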
AMI: Were there any big surprises, and how responsive was it?
Mark: One caveat to using Sound Mind was that you had to close your eyes; any eye movements would make little jumps or artefacts in the data. I had a conversation with Alan Harvey, a neuroscientist who did a great TED talk called ‘Your Brain on Music’, which was very inspiring. He told us to make sure the subject’s eyes were closed.
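The team’s fix was simply to keep the subject’s eyes closed, but to illustrate the problem: blinks show up as sudden amplitude jumps that dwarf genuine changes in band power. A naive software gate might look like the hypothetical Python sketch below, holding the last clean value whenever a jump exceeds a threshold; the threshold figure is made up for illustration.

```python
# Naive blink/artefact gate: hold the last clean value when a
# sample-to-sample jump looks like eye movement rather than a genuine
# change in band power. Threshold is an arbitrary example figure.
class ArtefactGate:
    def __init__(self, max_jump=0.5):
        self.max_jump = max_jump
        self.last_clean = None

    def filter(self, sample):
        if self.last_clean is None or abs(sample - self.last_clean) <= self.max_jump:
            self.last_clean = sample  # plausible reading: pass it through
        return self.last_clean        # suspect jump: repeat the last clean value
```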
It differs from conventional neurofeedback, because usually you’d be getting that data back in real time and you’d learn to control your brainwaves. With Sound Mind the participants get it later; the audience is watching it happen in real time and getting an insight into what’s going on in the subject’s brain.
AMI: What kinds of music were you playing?
Mark: Jim made a track, I made a track and one of Rory’s friends made something. One was music for study, one was relaxing and trance-like, then mine was a mixture of emotions, a bit of a breakcore track. It had speed and pitch variations and dissonant tones. I did that to try and play around with the reactions and trigger some interesting illuminations.
AMI: Are you thinking of developing this concept further on both the artistic and scientific fronts?
Mark: I’d like to. I’d like to do two variations that cover both the artistic and the scientific sides; it might be two different structures. I want to use the neurofeedback concept more, so people could be creating music using Sound Mind. A way of doing it with eyes open would be great.
Rory: I think it’d be fascinating to see things the other way, so the brainwaves are affecting the music. It’s something we tried to do at the end, but we ran out of time. I think this project was great in that it highlights how music changes what’s happening in your brain and your emotional state. To make it a full loop would be interesting.
AMI: Do you think we’re going to be seeing a lot more human brain interfacing applications? Is it the future of musical control?
Mark: I think generative music is becoming a lot more mainstream, and the fact that EEGs have become consumer-grade and affordable opens doors. It’s quite exciting. We will definitely be seeing more of it.
Rory: EEGs started out as a medical technology, but they’re now commercially available. I think the creative uses haven’t really been explored yet; it hasn’t been in people’s hands long enough. But it’s only a matter of time…