My quadrant

While narrowing down the scope of my studies, I found it useful to draw this figure of music production models. I borrow the X-axis from my friend and colleague Jan-Olof Gullö (2014), Sonic Signature Aspects in Research on Music Production Projects, ESSA/Aalborg University, Aalborg (https://www.diva-portal.org/smash/get/diva2:781178/FULLTEXT01.pdf). Gullö describes two approaches to making music productions:

The recording is either documentation or production. An example of the documentation approach is a classical concert that is recorded with the objective to make it sound as similar as possible to the actual concert. In contrast, with a production there is no requirement to make the recording sound like a genuine acoustic event. With the production strategy the objective is to create reality, not to record it.

On the Y-axis, I’ve chosen “linear” and “adaptive”. Linear represents recorded music as we normally know it – songs on Spotify or film music. Adaptive refers to music in interactive media like computer games or VR. The two upper quadrants are well defined by Gullö, and the lower right covers most existing game music, where the objective is to create reality and make it adaptive to the game. The question arises: what to do with the lower left quadrant? Is it possible to take a documentative approach to the recording and still make it very adaptive? And if so, how can that be done? What challenges will we meet? Can it compete with a generative approach, or will human-composed and performed music in interactive environments become a historical monument from the period when music sounded good but wasn’t that adaptive?

I quite like the lower left quadrant. I’ll run with it.

First international presentation

Last week I visited Innovation in Music 2017 – a great conference at the University of Westminster, London, where brilliant researchers and innovators in the field of music shared ideas, thoughts, results and visions. I also took the chance to present my study of the making of iMusic – my JavaScript framework for playing back music in interactive environments. The focus is on what differentiates my design from similar frameworks and the reasoning behind it. iMusic was developed as a teaching and testing tool for web pages and interactive exhibitions and has now been used in 50+ bachelor and master productions at the Royal College of Music in Stockholm, including the beautiful Nobel Creations.

On my way to London, I couldn’t stop myself from solving a problem I’ve struggled with for a long time: the jumping between PowerPoint, YouTube, web pages and interactive applications when people present interesting things. I thought I’d better come up with a JavaScript solution to the problem, so here we go: have a look at my interactive, multimedia presentation, directly in your browser.

Interactive Music Software – interactive presentation

Abstract: Interactive Music Software
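For the curious, here is a minimal sketch of the idea behind the presentation (not the actual code – the class name “slide” is just an illustration): every part of the talk, whether text, a YouTube embed, a web page or an interactive demo, lives in the same browser window as an HTML element, and the arrow keys simply step between them.

```js
// Minimal sketch: each "slide" is an HTML element – text, a YouTube
// iframe, a web page or an interactive canvas – and the arrow keys
// step between them. The class name "slide" is hypothetical.
const slides = Array.from(document.querySelectorAll('.slide'));
let current = 0;

function show(index) {
  current = Math.max(0, Math.min(index, slides.length - 1));
  slides.forEach((slide, i) => {
    slide.style.display = i === current ? 'block' : 'none';
  });
}

document.addEventListener('keydown', (e) => {
  if (e.key === 'ArrowRight') show(current + 1);
  if (e.key === 'ArrowLeft') show(current - 1);
});

show(0);
```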

Refining

I sat down with my excellent co-supervisor, Per Mårtensson, the other week and got some good, hard questions to answer. It became clear that I wasn’t clear at all about what I’m doing. Finally I saw the missing piece in what I was trying to describe: the thing that was simply too obvious to me.

Sometimes when you spend time with experts in a subject, you might forget that it’s your subject as well. In my case, I have been teaching game music, programming basics, web design and project management at the Royal College of Music in Stockholm for years. In the company of my music production expert colleagues Jan-Olof Gullö, Juhani Hemmilä and Hans Gardemar, I feel like a web hacker at a music college, but really, my background is music production. I’ve spent so much time producing music with hopeless technology. I’ve tried to make a great vocal track with an old spring reverb. I’ve dug out the SysEx code for my Roland MT-32 to turn off the reverb on the bass drum, and I’ve lost hours and hours of work when my Alesis MMT-8 lost its track data. I was also part of SourceForce with Mats Liljedahl and Bjarne Nyquist, developing the most advanced MIDI-Xtra (Sequence-Xtra) for Macromedia Director, and I have spent a lot of time developing interactive music pedagogy tools.

Therefore, it’s quite natural that my perspective on what I will explore in my research is really a music producer’s perspective. It’s not about interactive composition, the function of music in games, music theory or interactive live performance, even if much of the work will relate to them.

The question is really about something like “how music production technology for interactive applications can be improved to support musical expressions that are currently not supported”.

Well… I know, it isn’t that definite yet. I’ll be back. Refining.

How?

How will I find answers to my questions? How will I know if I have discovered something meaningful and valuable to people?

The first part of my study will focus on scanning journals and conference proceedings for papers in my research area to see where much has already been done and where there are still questions to ask. I will interview composers and producers of game music and document their processes to identify obstacles and challenges. I will also examine and compare existing middleware used to integrate music into games to find areas where the music is limited by the technology.

I will keep on developing my own interactive music framework (iMusic – more on that in future posts) to test different solutions for integrating music into interactive applications. I will use iMusic to build an interactive, audiovisual survey where I can collect feedback from (hopefully) lots of users and evaluate their responses to the musical experience. My aim is to get a better understanding of how different technical solutions affect the listening experience for different listeners.
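To give an idea of how lightweight the data collection could be, here is a minimal sketch (the endpoint URL and field names are hypothetical): once a musical variation has played, the listener’s rating is posted to a server together with an identifier for the technical solution that produced it.

```js
// Minimal sketch of how the survey could report back.
// The URL and field names are hypothetical, not an existing API.
function submitResponse(solutionId, rating) {
  return fetch('https://example.com/api/responses', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ solutionId, rating, timestamp: Date.now() })
  });
}

// e.g. the listener rates the variation 'crossfade-2' with 4 out of 5:
submitResponse('crossfade-2', 4).catch(console.error);
```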

This is my current strategy and method. It might well be refined along the way. If any of you have ideas, or if you want to get connected and involved in any way, please don’t hesitate to contact me.

What?

Many people ask me: “what is your research question?”, “what is your subject?”, “what exactly will you do?” and other valid and good questions.

The broad answer is “more knowledge in the area of music in interactive applications”, but there is of course much more to say. I also wouldn’t be totally surprised if the target moves a bit when I start trying to catch it, but here is a glimpse of what can be expected:

  • I will survey the research done so far on the subject
  • I will network with other nerds around the world searching for answers
  • I will evaluate how different technical solutions (for producing and integrating music into interactive applications) affect the end result for different listeners.
  • I will focus on the challenges in the process of making (live) performed, traditional music for interactive applications rather than computer-generated, experimental music.
  • I will continue building my interactive music framework to solve some yet unmet needs in this area.

Why?

It’s good to know WHAT you do and WHY you do it in order to find out HOW to do it. In research, art and industry alike. I will try to answer these three questions regarding my own research in three blog posts. Please feel free to comment and share your thoughts; they will be valuable input for my future texts.

WHY am I doing this study?

I recently heard that kids growing up in Sweden today listen to more music through games (on smartphones, tablets and gaming consoles) than they do through more “traditional” channels like Spotify and YouTube. Ancient formats like the CD seem to be completely outdated.

There are many indications that “Virtual Reality” will become the next big thing in the entertainment industry, and there has been a huge trend on the web where static, informational web pages are turned into social, interactive experiences. We also see a trend in museums, exhibitions and even concerts where interactivity, feedback and participation from the visitor/audience/consumer are more and more an expected part of the experience.

The combination of audio and visuals in an interactive environment requires new technical solutions and skills which, at the moment, leave most trained musicians, composers and producers outside.

You can also argue that the currently available technology for integrating music into games and other interactive applications heavily restricts how the music can be used.

I’m passionate about music, musicians’ ability to communicate and the joy of interaction. To keep all of this possible in a new age where interactive applications are a primary way for people to experience music, a lot of new knowledge, technologies and methods are needed. I hope my research can be part of the answer to that need.

Music With Me

Music With Me is a piece of music where the visitor controls the changes of sections, the intensity of the piano, bass and cajón, and the synth lead phrases. It was recorded by Hans Lindetorp in a standard, linear manner, with the A-section played with the intensity going from low to high across 16 bars. The B-section was then played with the intensity decreasing across another 16 bars.

The aim was to identify some challenges and solutions in the process of turning a linear recording into a non-linear production where the visitor is given the tools to control parts of the music. My plan was to perform the music on piano and record it into my standard DAW, Logic. After adding bass and cajón, my intention was to cut the music into bars and then use my own JavaScript framework to integrate it into the web page.
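To give a feel for the technical side, here is a minimal sketch of the playback idea (this is not iMusic’s actual API, and the tempo is hypothetical): bar-length audio buffers are scheduled back to back with the Web Audio API so that a section loops seamlessly and can switch at the next barline.

```js
// Minimal sketch (not iMusic itself): bar-length buffers scheduled
// back to back so a section loops seamlessly and switches at barlines.
const ctx = new AudioContext();
const BPM = 100;                          // hypothetical tempo
const BAR = (60 / BPM) * 4;               // seconds per 4/4 bar

// decoded AudioBuffers, one per bar, filled elsewhere via decodeAudioData()
const sections = { A: [], B: [] };
let currentSection = 'A';                 // switched by the visitor's input
let nextBarTime = ctx.currentTime + 0.1;  // when the next bar should start
let barIndex = 0;

function scheduleNextBar() {
  const bars = sections[currentSection];
  const source = ctx.createBufferSource();
  source.buffer = bars[barIndex % bars.length];
  source.connect(ctx.destination);
  source.start(nextBarTime);              // sample-accurate start at the barline
  nextBarTime += BAR;
  barIndex++;
}

// schedule a little ahead of time so playback never starves
setInterval(() => {
  while (sections[currentSection].length &&
         nextBarTime < ctx.currentTime + 0.3) {
    scheduleNextBar();
  }
}, 100);
```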

  • The first challenge I encountered is related to all music production where you have to follow a metronome. I’m quite used to it, but it still doesn’t feel great to stick to a steady beat all the time, especially when going from one intensity level to another. To make the recording work this time, I had to quantize my piano playing quite thoroughly, especially near the barlines, to make it loop well.
  • The second challenge was the lack of lead-ins between the different intensities. I had to play the music without leading into the next level to make the loops work, and therefore I couldn’t ramp either up or down before going to a new level. This challenge is not solved in my demo.
  • The third challenge was to make the synth phrases fit the harmonies in the different sections. I found a way of labelling my files that made them lock to the correct section (see the sketch after this list). Another solution would have been to use even fewer notes so that all phrases fit all chords. This challenge will be important to solve for future implementations, though: both for different sections and for different bars, or even smaller fractions of a bar, to make sure the motifs are always selected according to the current harmony.
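Here is the sketch of the labelling idea mentioned in the third challenge (the naming convention below is a hypothetical variant of mine, not necessarily the one I actually used): each phrase file carries the section it fits in its file name, so the player only ever picks phrases that match the current section’s harmony.

```js
// Minimal sketch of section-locked phrase selection.
// The naming convention 'lead_<section>_<nn>.wav' is hypothetical.
const phraseFiles = [
  'lead_A_01.wav', 'lead_A_02.wav',  // phrases that fit the A-section harmony
  'lead_B_01.wav', 'lead_B_02.wav'   // phrases that fit the B-section harmony
];

function phrasesForSection(section) {
  return phraseFiles.filter((name) => name.split('_')[1] === section);
}

function pickPhrase(section) {
  const candidates = phrasesForSection(section);
  return candidates[Math.floor(Math.random() * candidates.length)];
}

// e.g. while the B-section is playing:
console.log(pickPhrase('B'));  // -> 'lead_B_01.wav' or 'lead_B_02.wav'
```

The same scheme could be extended with a bar or beat tag in the file name to select motifs per bar, or even per fraction of a bar.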

This example is part of a doctoral study at the Royal College of Music and the Royal Institute of Technology in Stockholm, Sweden. More information, papers, presentations, demos etc. will be published at hans.arapoviclindetorp.se. If you are interested, don’t hesitate to contact me: hans.lindetorp@kmh.se