Perfectly in sync with my thoughts about linear and non-linear music, I'm attending a new course at the Royal Institute of Technology (KTH) today. The course is called “Interdisciplinary Perspectives on Rhythm”, and today's seminar is about temporality. We're excited to meet Martin Scherzinger over Skype to discuss deep philosophical questions about absolute time, circular time, rhythm and related topics. When reading his text on temporalities from the book “The Oxford Handbook of Critical Concepts in Music Theory”, I came across one shocking fact that I didn't have a clue about. You might know it already, but if you don't:

The rotation speed of the earth varies all the time!

  • The time it takes for one rotation differs by 3 minutes 56 seconds depending on whether we refer to the sun or the stars.
  • It varies by more than 30 seconds across the year.
  • It slows down by about 2 ms per century.
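The “3 min 56 s” figure can actually be derived with a few lines of arithmetic: over one year, the Earth completes one extra rotation relative to the stars, so a sidereal day is shorter than a solar day. A quick sketch in JavaScript:

```javascript
// Rough arithmetic behind the "3 min 56 s" figure: over one year the
// Earth completes one extra rotation relative to the stars, so a
// sidereal day is shorter than a solar day by a factor of roughly
// daysPerYear / (daysPerYear + 1).
const solarDay = 86400;               // mean solar day in seconds
const daysPerYear = 365.2422;         // mean solar days per year
const siderealDay = solarDay * daysPerYear / (daysPerYear + 1);
const diff = solarDay - siderealDay;  // difference in seconds

console.log(siderealDay.toFixed(1)); // ≈ 86164.1 seconds
console.log(`${Math.floor(diff / 60)} min ${(diff % 60).toFixed(0)} s`); // 3 min 56 s
```

(The variation across the year and the slowdown per century are separate effects, outside this simple calculation.)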


Obviously the rhythm of the universe is not quantized!


Music Mind Map


I’ve been touching on linear vs non-linear music in earlier posts, and even if I argue that music is always linear when we hear it, we also know that there is an element of non-linearity in music for games, VR and other environments where the music needs to adapt to interactions. Through my teaching at the Royal College of Music in Stockholm, my students and I have talked about alternative views on music production. One idea that seemed strong was a Music Mind Map, where we could have an overview of themes, tracks and parts in a production, with musical transitions between them as we navigate through the music.
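The idea could be sketched as a small graph structure: themes as nodes, musical transitions as edges. This is a minimal sketch of my own, not the actual prototype, and every name and field in it is hypothetical:

```javascript
// A hypothetical Music Mind Map: themes as nodes, transitions as edges.
const mindMap = {
  nodes: {
    intro:  { track: "intro.mp3" },
    themeA: { track: "themeA.mp3" },
    themeB: { track: "themeB.mp3" }
  },
  transitions: [
    { from: "intro",  to: "themeA", type: "crossfade", beats: 4 },
    { from: "themeA", to: "themeB", type: "fill",      beats: 8 }
  ]
};

// Look up the musical transition to play when navigating
// from one node to another; null if no transition is defined.
function findTransition(map, from, to) {
  return map.transitions.find(t => t.from === from && t.to === to) || null;
}

console.log(findTransition(mindMap, "intro", "themeA").type); // "crossfade"
```

Navigating the map then becomes a matter of looking up the transition between the current node and the next, and letting the audio engine schedule it musically.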

No sooner said than done: I decided to create a prototype and evaluate it with my students and some professional game music composers. Henrik Lindström contributed wonderful music, and I used my interactive music JavaScript library iMusic to synchronise all the audio. It’s not a working tool for making music, but it gives an idea of what it could be like. Please have a go, and don’t hesitate to give me feedback through the evaluation form: https://goo.gl/forms/ASOUpjJDLTBVO4Y93

Musical Design


I went to Milan this weekend, one of the world’s centres of design. We went to see their design museum, Triennale, fantastic interior design shops, the beautiful MUDEC museum and the Leonardo3 museum. It was amazing to see all the fantastic shapes, lights, colours and materials playing together like instruments in an orchestra.

Every now and then I stopped and listened. Listened to the beautiful sounds of people. And to lots of terrible sounds: screeching sounds from an approaching train, beeping ticket machines, lo-fi speakers at museums playing different music simultaneously. I asked myself: what would this chaos of sounds look like if it were visual? And a more pleasing thought: what would all the beautiful visual design sound like if it were translated into music?

I strongly believe in making public places more peaceful, creative and positive through design. And musical design would be an important part of it. 

The general or the specific

I find the question of what kind of knowledge we create more and more engaging. Learning to do research in collaboration with the Royal Institute of Technology (KTH) and the Royal College of Music (KMH) puts me in an exciting landscape between looking for general knowledge through lots of data, numbers and statistics, and searching for the more specific by digging deeper through interviews and interactions. Luckily I have found myself in a very exciting workgroup at KTH with lots of experience in exactly this area, and I’m realising there are many reasons for using mixed methods, combining insights from different approaches to get a better picture of the problem.

This week I’m planning for a study where I want to gain more knowledge about how music producers would respond to a new interface for music production applications. It involves prototyping, testing and evaluation and I realise that this is not the last time I will do something like that. The question is: what general or specific knowledge is there to find and how do we find it? How general can we be in our studies before the result is not interesting at all? How specific and personal can the result be and still be of common interest?

Is there non-linear music?

Without time, there is no music. Therefore “non-linear music” is a confusing term. Live-performed music is linear, even if improvisation loosens up the form a bit. Even loop-based, produced music is linear, if only within smaller blocks. In music for games we talk about “adaptive”, “dynamic” or “non-linear” music, but is it really non-linear?

In adaptive music, the final musical form may not be linear according to any preconceptions the composer had, but it is still linear when we hear it.

If we want to build an adaptive music engine that better supports performed music, we can probably use many of the theories developed in improvised music, as well as the editing practices of classical record production. This insight will guide me further into my studies of Adaptive Music Production.
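One core mechanism such an engine needs is deferring a requested transition to the next musical boundary (a bar line, say), so that the result still sounds linear when we hear it. A hedged sketch of that idea, with illustrative names only, not taken from any real engine:

```javascript
// Defer a transition request to the next bar line, so the music
// stays on the grid. All names here are illustrative.
function nextBarTime(now, bpm, beatsPerBar, startTime = 0) {
  const barLength = (60 / bpm) * beatsPerBar;  // one bar, in seconds
  const elapsed = now - startTime;
  const barsPlayed = Math.ceil(elapsed / barLength);
  return startTime + barsPlayed * barLength;   // next bar boundary
}

// At 120 BPM in 4/4 a bar lasts 2 s, so a transition requested
// at t = 3.2 s is deferred to the bar line at t = 4 s.
console.log(nextBarTime(3.2, 120, 4)); // 4
```

An engine would then schedule the new material to start at the returned time instead of immediately, which is what makes the seams musical rather than abrupt.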


To me, one of the most beautiful things about research is the culture of sharing knowledge and building communities. I’m lucky to have lots of time for reading what my fellow colleagues around the world have discovered, and I feel blessed to follow in the footsteps of many great thinkers and practitioners. I’ve also made it a habit to write a personal email to say “thank you” when I read something insightful and helpful for my research. As a result, I’ve already got colleagues near and far, all devoted to contributing knowledge to the wider community of producers of interactive music.

Here are some of the articles I’ve read over the last week. A big thank you to all authors!

Michael Liebe – Interactivity and Music in Computer Games

Anthony Prechtl – Adaptive Music Generation for Computer Games

Axel Berndt – Musical Nonlinearity in Interactive Narrative Environments

Axel Berndt, Simon Waloschek, Aristotelis Hadjakos, Alexander Leemhuis – AmbiDice: An Ambient Music Interface for Tabletop Role-Playing Games

Tracy Redhead – The Interactive Music Producer

Charles P. Martin, Kai Olav Ellefsen, Jim Torresen – Deep Predictive Models in Interactive Music


Happy researching!

Consumer or Creator based design?

In many industries and sectors it’s a no-brainer to have a “consumer-based” design/focus/strategy, etc. I have noticed this holds even for research and development of technology for music in computer games. That probably seems to make sense to most people – developers and gamers alike – but it is often good to stop and think about the consequences.

Is the focus on the consumer always good? Does it differ between industries? Is art in general, and music in particular, different in this respect? What happens to music when our focus as composers/producers/musicians moves from what we express to what the listener hears? What happens to a performance when it is edited so that it has lost its original qualities? What happens to our souls when artificial intelligence satisfies our need for music?

What do we hear when we listen to AI-made music? Is it music? Or is it just vibrations in the air that tickle our souls with frequencies very similar to music?


What happened to HiFi? https://en.wikipedia.org/wiki/High_fidelity

”High Fidelity” – representing good sound quality without added noise or distortion. The term was employed by audio manufacturers in the 1950s to describe records and equipment with ”faithful sound reproduction”. When I was a teenager in the 80s, all of us wanted a good HiFi system for playing back our records.

But now?

My kids and their friends seem happy to enjoy music through their mobile phones’ speakers, which means they don’t seem to care much about frequencies below 1000 Hz.
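To illustrate why the low end disappears in tiny speakers, here is the magnitude response of a simple first-order high-pass filter. Real phone speakers are far more complex, and the 500 Hz cutoff is just an assumed example value, but the shape of the roll-off is similar:

```javascript
// Gain in dB of a first-order high-pass filter with cutoff fc,
// as a stand-in for a small speaker's bass roll-off (an assumption,
// not a measurement of any real phone).
function highpassGainDb(freq, fc) {
  const ratio = freq / fc;
  const gain = ratio / Math.sqrt(1 + ratio * ratio); // |H(f)| first-order HP
  return 20 * Math.log10(gain);
}

// With a hypothetical 500 Hz cutoff, the bass is heavily attenuated:
[100, 250, 500, 1000].forEach(f =>
  console.log(`${f} Hz: ${highpassGainDb(f, 500).toFixed(1)} dB`)
);
```

At the cutoff itself the signal is already about 3 dB down, and each octave below that loses roughly 6 dB more, which is why a bass line simply vanishes.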

Maybe HiFi is of less interest now because there is no ”high fidelity” in the way most modern popular music is produced. In the 1950s, music production and playback had the task of reproducing a real moment. Today we more often create the reality virtually, which might make HiFi an obsolete term.

What trends do we see now? Is there any new interest in HiFi? Will we look for ”high fidelity” in computer games and VR? In what sectors of our lives will created or generated music productions dominate, and where will we rather listen to music productions documenting a musical moment with real musicians? (See my previous blog post: https://hans.arapoviclindetorp.se/2018/01/24/my-quadrant/)

My quadrant

In the work of narrowing down the scope of my studies, I found it useful to draw this figure of music production models. I borrow the X-axis from my friend and colleague Jan-Olof Gullö (2014), “Sonic Signature Aspects in Research on Music Production Projects” (Aalborg: ESSA/Aalborg University, https://www.diva-portal.org/smash/get/diva2:781178/FULLTEXT01.pdf). Gullö describes two approaches to making music productions:

The recording is either documentation or production. An example of the documentation approach is a classical concert that is recorded with the objective to make it sound as similar as possible to the actual concert. In contrast, with a production there is no requirement to make the recording sound like a genuine acoustic event. With the production strategy the objective is to create reality, not to record it.

On the Y-axis, I’ve chosen “linear” and “adaptive”. Linear represents recorded music as we normally know it – songs on Spotify or film music. Adaptive refers to music in interactive media like computer games or VR. The two upper quadrants are well defined by Gullö, and the lower right covers most existing game music, where the objective is to create reality and make it adaptive to the game. The question arises: what to do with the lower left quadrant? Is it possible to take a documentary approach to the recording and still make it very adaptive? And if so, how can that be done? What challenges will we meet? Can it compete with a generative approach, or will human-composed and human-performed music in interactive environments be a historical monument belonging to the period when music sounded good but wasn’t that adaptive?
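Since the figure itself isn’t shown here, a plain-text rendering of the quadrants as described above (documentation on the left, production on the right; linear above, adaptive below):

               Documentation              Production
  Linear       recorded concerts          songs on Spotify, film music
  Adaptive     the open question          most existing game music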

I quite like the lower left quadrant. I’ll run with it.

First international presentation

Last week I visited Innovation in Music 2017 – a great conference at the University of Westminster, London, where brilliant researchers and innovators in the field of music shared ideas, thoughts, results and visions. I also took the chance to present my study of the making of iMusic – my JavaScript framework for playing back music in interactive environments. The focus is on what distinguishes my design from similar frameworks and the reasons behind it. iMusic was developed as a teaching and testing tool for web pages and interactive exhibitions and has now been used in 50+ bachelor and master productions at the Royal College of Music in Stockholm, including the beautiful Nobel Creations.

On my way to London, I couldn’t stop myself from solving a problem I’ve struggled with for a long time: the jumping between PowerPoint, YouTube, web pages and interactive applications when people are presenting interesting things. I thought I’d better come up with a JavaScript solution to the problem, so here we go: have a look at my interactive, multimedia presentation, directly in your browser.

Interactive Music Software – interactive presentation

Abstract: Interactive Music Software