Music Mind Map

http://momdev.se/hans_lindetorp/musicmindmap2

I’ve been touching on linear vs. non-linear music in earlier posts, and even if I argue that music is always linear when we hear it, we also know there is an element of non-linearity in music for games, VR and other environments where the music needs to adapt to interactions. Through my teaching at the Royal College of Music in Stockholm, my students and I have talked about alternative views on music production. One idea that seemed strong was a Music Mind Map, where we could have an overview of the themes, tracks and parts in a production, with musical transitions between them as we navigate through the music.

No sooner said than done: I decided to create a prototype and evaluate it with my students and some professional game music composers. Henrik Lindström contributed wonderful music, and I used my interactive music JavaScript library, iMusic, to synchronise all the audio. It’s not a working tool for making music, but it gives an idea of what such a tool could be like. Please have a go, and don’t hesitate to give me feedback through the evaluation form: https://goo.gl/forms/ASOUpjJDLTBVO4Y93
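
To give a concrete idea of the kind of synchronisation involved, here is a minimal sketch of the core mechanism built directly on the Web Audio API. All names below are invented for this post; they are not the actual iMusic API.

```javascript
// Each node in the mind map is a looping theme with its own tempo.
// Hypothetical names and files, for illustration only.
const ctx = new AudioContext();

const themes = {
  intro:  { url: "intro.mp3",  bpm: 90,  buffer: null },
  battle: { url: "battle.mp3", bpm: 140, buffer: null },
};

let current = null; // { name, source, startTime }

async function loadTheme(name) {
  const data = await (await fetch(themes[name].url)).arrayBuffer();
  themes[name].buffer = await ctx.decodeAudioData(data);
}

// Length of one 4/4 bar in seconds.
function barDuration(theme) {
  return (60 / theme.bpm) * 4;
}

// The next bar boundary after "now", relative to when the theme started.
function nextBarTime(theme, startTime) {
  const bar = barDuration(theme);
  const elapsed = ctx.currentTime - startTime;
  return startTime + Math.ceil(elapsed / bar) * bar;
}

function play(name, when = ctx.currentTime) {
  const source = ctx.createBufferSource();
  source.buffer = themes[name].buffer;
  source.loop = true;
  source.connect(ctx.destination);
  source.start(when);
  current = { name, source, startTime: when };
}

// Navigating the mind map: the old theme stops and the new one starts
// exactly on the next bar line, so the transition stays musical.
function navigateTo(name) {
  if (!current) { play(name); return; }
  const when = nextBarTime(themes[current.name], current.startTime);
  current.source.stop(when);
  play(name, when);
}
```

The essential design choice is that navigateTo() never switches immediately: the change is scheduled on the next bar line, which is what makes the transition musical rather than mechanical.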

Musical Design

MUDEC

I went to Milan this weekend, one of the world’s centres of design. We saw the Triennale design museum, fantastic interior design shops, the beautiful MUDEC museum and the Leonardo3 museum. It was amazing to see all the fantastic shapes, lights, colours and materials playing together like instruments in an orchestra.

Every now and then I stopped and listened. Listened to the beautiful sounds of people, and to lots of terrible sounds: the scream of an approaching train, beeping ticket machines, lo-fi speakers at museums playing different music simultaneously. I asked myself: what would this chaos of sounds look like if it were visual? And a more pleasing thought: what would all the beautiful visual design sound like if it were translated into music?

I strongly believe in making public places more peaceful, creative and positive through design. And musical design would be an important part of it. 

The general or the specific

I find the question of what kind of knowledge we create more and more engaging. Learning to do research in collaboration with the Royal Institute of Technology (KTH) and the Royal College of Music (KMH) puts me in an exciting landscape between looking for general knowledge through lots of data, numbers and statistics, and searching for the more specific by digging deeper through interviews and interactions. Luckily, I have found myself in a very exciting workgroup at KTH with lots of experience in exactly this area, and I’m realising there are many reasons for using mixed methods, combining insights from different approaches to get a better picture of the problem.

This week I’m planning a study where I want to gain more knowledge about how music producers would respond to a new interface for music production applications. It involves prototyping, testing and evaluation, and I realise this is not the last time I will do something like it. The question is: what general or specific knowledge is there to find, and how do we find it? How general can we be in our studies before the result is not interesting at all? How specific and personal can the result be and still be of common interest?

Is there non-linear music?

Without time, there is no music. Therefore, “non-linear music” is a confusing term. Live-performed music is linear, even if improvisation loosens up the form a bit. Even loop-based, produced music is linear, if only within smaller blocks. In music for games we talk about “adaptive”, “dynamic” or “non-linear” music, but is it really non-linear?

In adaptive music, the final musical form may not follow the preconceptions the composer might have had, but it is still linear when we hear it.

If we want to build an adaptive music engine that supports performed music better, we can probably use much of the theory developed for improvised music, as well as the editing practices of classical record production. This insight will guide me further into my studies of Adaptive Music Production.
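
To make the editing-practice idea a bit more concrete, here is a small sketch of how a documented performance could be prepared for adaptive playback: the recording is split at musically valid edit points, and each segment lists which segments may follow it. The segment names, times and the simple selection policy are purely illustrative, not part of any existing engine.

```javascript
// A recorded performance split at musically valid edit points, in the
// spirit of classical record production. Times are seconds into the
// original recording; "canFollow" lists segments that may follow
// without breaking the musical flow. Illustrative data only.
const segments = {
  exposition:  { start: 0.0,   end: 35.2,  canFollow: ["development", "coda"] },
  development: { start: 35.2,  end: 92.8,  canFollow: ["recap", "development"] },
  recap:       { start: 92.8,  end: 130.0, canFollow: ["coda"] },
  coda:        { start: 130.0, end: 151.5, canFollow: [] },
};

// Choose the next segment from the allowed follow-ups, letting the
// interactive context (here a hypothetical "calmDown" flag) steer the form.
function nextSegment(currentId, calmDown) {
  const options = segments[currentId].canFollow;
  if (options.length === 0) return null; // the piece has ended
  if (calmDown) {
    // Prefer the shortest follow-up when the game asks the music to wind down.
    return options.reduce((a, b) =>
      segments[a].end - segments[a].start <= segments[b].end - segments[b].start ? a : b);
  }
  return options[0];
}
```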

Community

To me, one of the most beautiful things about research is the culture of sharing knowledge and building communities. I’m lucky to have lots of time for reading what my fellow colleagues around the world have discovered, and I feel blessed to follow in the footsteps of many great thinkers and practitioners. I have also made it a habit to write a personal email to say ”thank you” when I read something insightful and helpful for my research. As a result, I already have colleagues near and far, all devoted to contributing knowledge to the wider community of interactive music producers.

Here are some of the articles I’ve read over the last week. A big thank you to all authors!

Michael Liebe – Interactivity and Music in Computer Games

Anthony Prechtl – Adaptive Music Generation for Computer Games

Axel Berndt – Musical Nonlinearity in Interactive Narrative Environments

Axel Berndt, Simon Waloschek, Aristotelis Hadjakos, Alexander Leemhuis – AmbiDice: An Ambient Music Interface for Tabletop Role-Playing Games

Tracy Redhead – The Interactive Music Producer

Charles P. Martin, Kai Olav Ellefsen, Jim Torresen – Deep Predictive Models in Interactive Music

Happy researching!

Consumer or Creator based design?

In many industries and sectors, it’s a no-brainer to have a “consumer-based” design, focus or strategy. I have noticed this is true even for research and development of music technology for computer games. That probably seems to make sense to most people, developers and gamers alike, but it is often good to stop and think about the consequences.

Is the focus on the consumer always good? Does it differ between industries? Are art in general and music in particular different in this respect? What happens to music when our focus as composers, producers and musicians moves from what we express to what the listener hears? What happens to a performance when it is edited until it has lost its original qualities? What happens to our souls when artificial intelligence satisfies our need for music?

What do we hear when we listen to AI-made music? Is it music? Or is it just vibrations in the air that tickle our souls with frequencies very similar to music?

HiFi

What happened to HiFi? https://en.wikipedia.org/wiki/High_fidelity

”High fidelity” means good sound quality without added noise or distortion. The term was employed by audio manufacturers in the 1950s to describe records and equipment with ”faithful sound reproduction”. When I was a teenager in the 80s, all of us wanted a good HiFi system for playing back our records.

But now?

My kids and their friends seem to enjoy music through their mobile phones’ speakers, which suggests they don’t care much about frequencies below 1000 Hz.

Maybe HiFi is of less interest now because there is no ”high fidelity” in the way most modern popular music is produced. In the 1950s, the task of music production and playback was to reproduce a real moment. Today we more often create the reality virtually, which might make HiFi an obsolete term.

What trends do we see now? Is there any new interest in HiFi? Will we look for ”high fidelity” in computer games and VR? In what sectors of our lives will created or generated music productions dominate, and where will we rather listen to music productions documenting a musical moment with real musicians? (See my previous blog post: https://hans.arapoviclindetorp.se/2018/01/24/my-quadrant/)

My quadrant

While narrowing down the scope of my studies, I found it useful to draw this figure of music production models. I borrow the X-axis from my friend and colleague Jan-Olof Gullö (2014), Sonic Signature Aspects in Research on Music Production Projects, Aalborg: ESSA/Aalborg University (https://www.diva-portal.org/smash/get/diva2:781178/FULLTEXT01.pdf). Gullö describes two approaches to making music productions:

The recording is either documentation or production. An example of the documentation approach is a classical concert that is recorded with the objective to make it sound as similar as possible to the actual concert. In contrast, with a production there is no requirement to make the recording sound like a genuine acoustic event. With the production strategy the objective is to create reality, not to record it.

On the Y-axis, I’ve chosen “linear” and “adaptive”. Linear represents recorded music as we normally know it – songs on Spotify or film music. Adaptive refers to music in interactive media like computer games or VR. The two upper quadrants are well defined by Gullö, and the lower right covers most existing game music, where the objective is to create reality and make it adapt to the game. The question arises: what to do with the lower left quadrant? Is it possible to take a documentary approach to the recording and still make it very adaptive? And if so, how can that be done? What challenges will we meet? Can it compete with a generative approach, or will human-composed and human-performed music in interactive environments become a historical monument to the period when music sounded good but wasn’t that adaptive?
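
Since the figure itself isn’t reproduced in this post, here is a rough text version of the four quadrants as described above:

```
              Documentation                   Production
Linear     |  recorded classical concert   |  songs on Spotify, film music
Adaptive   |  documentary + adaptive:      |  most existing game music
           |  the open, lower left corner  |
```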

I quite like the lower left quadrant. I’ll run with it.

First international presentation

Last week I visited Innovation in Music 2017, a great conference at the University of Westminster, London, where brilliant researchers and innovators in the field of music shared ideas, thoughts, results and visions. I also took the chance to present my study of the making of iMusic, my JavaScript framework for playing back music in interactive environments. The focus is on what differentiates my design from similar frameworks, and the reasons behind it. iMusic was developed as a teaching and testing tool for web pages and interactive exhibitions and has now been used in 50+ bachelor and master productions at the Royal College of Music in Stockholm, including the beautiful Nobel Creations.

On my way to London, I couldn’t stop myself from solving a problem I’ve struggled with for a long time: the jumping between PowerPoint, YouTube, web pages and interactive applications when people present interesting things. I thought I’d better come up with a JavaScript solution to the problem, so here we go: have a look at my interactive, multimedia presentation, directly in your browser.

Interactive Music Software – interactive presentation

Abstract: Interactive Music Software
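
For the curious, the core of such a browser-based presentation can be surprisingly small. Here is a minimal sketch of the general approach (not the actual code behind the presentation above), where every slide is a <section> element in a single HTML page:

```javascript
// Minimal slide engine: arrow keys move between <section> slides, so
// video, web demos and live code all run in the same browser window.
// Illustrative only.
const slides = [...document.querySelectorAll("section")];
let index = 0;

function show(i) {
  index = Math.max(0, Math.min(i, slides.length - 1));
  slides.forEach((s, n) => {
    s.style.display = n === index ? "block" : "none";
  });
}

document.addEventListener("keydown", (e) => {
  if (e.key === "ArrowRight") show(index + 1);
  if (e.key === "ArrowLeft") show(index - 1);
});

show(0);
```

Because everything lives in one page, slides, video and interactive music examples share the same runtime, and nothing needs to be switched between during the talk.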

Refining

I sat down with my excellent co-supervisor, Per Mårtensson, the other week and got some good, hard questions to answer. It became clear that I wasn’t clear at all about what I’m doing. Finally I saw the missing part in what I was trying to describe: the thing that was simply too obvious for me to see.

Sometimes when you spend time with experts in a subject, you might forget that it’s your subject as well. In my case, I have been teaching game music, programming basics, web design and project management at the Royal College of Music in Stockholm for years. Alongside my music production expert colleagues Jan-Olof Gullö, Juhani Hemmilä and Hans Gardemar, I feel like a web hacker at a music college, but really, my background is music production. I’ve spent so much time producing music with hopeless technology. I’ve tried to make a great vocal track with an old spring reverb. I’ve discovered the SysEx code for my Roland MT-32 to turn off the reverb on the bass drum, and I’ve lost hours and hours of work when my Alesis MMT-8 lost its track data. I was also part of SourceForce, together with Mats Liljedahl and Bjarne Nyquist, developing the most advanced MIDI-Xtra (Sequence-Xtra) for Macromedia Director, and I have spent a lot of time developing interactive music pedagogy tools.

Therefore, it’s quite natural that my perspective on what I will explore in my research is really a music producer’s perspective. It’s not about interactive composition, the function of music in games, music theory or interactive live performance, even if a lot of my work will relate to those areas.

The question is really something like: “How can music production technology for interactive applications be improved to support musical expressions that are currently not supported?”

Well… I know, it isn’t that definite yet. I’ll be back. Refining.