Without time, there is no music. Therefore, “non-linear music” is a confusing term. Live-performed music is linear even if improvisation loosens up the form a bit. Even loop-based, produced music is linear, if only within smaller blocks. In music for games we talk about “adaptive”, “dynamic” or “non-linear” music, but is it really non-linear?
In adaptive music, the final musical form may not be linear according to any preconceptions the composer had, but it is still linear when we hear it.
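To illustrate the point, here is a minimal sketch of how an adaptive engine might re-sequence pre-produced blocks at runtime. The block names and the game-state rule are hypothetical, not taken from any real engine; the point is only that whatever the engine chooses, the listener still hears one linear sequence.

```python
import random

# Hypothetical music blocks, each standing in for a short pre-produced loop.
BLOCKS = {
    "calm": ["calm_a", "calm_b"],
    "tense": ["tense_a", "tense_b"],
}

def render_session(game_states):
    """Pick one block per game state; the result is a strictly linear playlist."""
    playlist = []
    for state in game_states:
        playlist.append(random.choice(BLOCKS[state]))
    return playlist

# Two play-throughs of the same game can yield different playlists,
# but each playlist is itself linear in time when heard.
print(render_session(["calm", "calm", "tense", "calm"]))
```

The "non-linearity" lives only in the branching before playback; the rendered result is always one timeline.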
If we want to build an adaptive music engine that supports performed music better, we can probably use many of the theories developed in improvised music, as well as the editing practices of classical record production. This insight will guide me further into my studies of Adaptive Music Production.
To me, one of the most beautiful things about research is the nature of sharing knowledge and building communities. I’m lucky to have lots of time for reading what my fellow colleagues around the world have discovered, and I feel blessed to follow in the footsteps of many great thinkers and practitioners. I have also made it a habit to write a personal email to say ”Thank you” when I read something insightful and helpful for my research. As a result, I’ve already got colleagues near and far, all devoted to contributing knowledge to the wider community of producers of interactive music.
Here are some of the articles I’ve read over the last week. A big thank you to all authors!
In many industries and sectors it’s a no-brainer to have a “consumer-based” design/focus/strategy etc. I have noticed this is true even for research and development of technology for music in computer games. That probably seems to make sense to most people – developers and gamers alike – but it is often good to stop and think about the consequences.
Is the focus on the consumer always good? Is it different in different industries? Is art in general, and music in particular, different in this respect? What happens to music when our focus as composers/producers/musicians moves from what we express to what the listener hears? What happens to a performance when it is edited so that it has lost its original qualities? What happens to our souls when artificial intelligence satisfies our need for music?
What do we hear when we listen to AI-made music? Is it music? Or is it just vibrations in the air that tickle our souls with frequencies very similar to music?
”High Fidelity” – representing good sound quality without added noise or distortion. The term was employed by audio manufacturers in the 1950s to describe records and equipment with ”faithful sound reproduction”. When I was a teenager in the 80s, all of us wanted a good HiFi system for playing back our records.
My kids and their friends seem to enjoy music through their mobile phones’ speakers, which means they don’t seem to care that much for frequencies below 1000 Hz.
Maybe HiFi is of less interest now because there is no ”high fidelity” in the way most modern popular music is produced. In the 1950s, music production and playback had the task of reproducing a real moment. Today we more often create the reality virtually, which might make HiFi an obsolete term.
What trends do we see now? Is there any new interest in HiFi? Will we look for ”High Fidelity” in computer games and VR? In what sectors of our lives will created or generated music productions dominate, and where will we rather listen to music productions documenting a musical moment with real musicians? (See my previous blog post: https://hans.arapoviclindetorp.se/2018/01/24/my-quadrant/)
While narrowing down the scope of my studies, I found it useful to draw this figure of music production models. I borrow the X-axis from my friend and colleague Jan-Olof Gullö (2014), Sonic Signature Aspects in Research on Music Production Projects, Aalborg: ESSA/Aalborg University (https://www.diva-portal.org/smash/get/diva2:781178/FULLTEXT01.pdf). Gullö describes two approaches to making music productions:
The recording is either documentation or production. An example of the documentation approach is a classical concert that is recorded with the objective to make it sound as similar as possible to the actual concert. In contrast, with a production there is no requirement to make the recording sound like a genuine acoustic event. With the production strategy the objective is to create reality, not to record it.
On the Y-axis, I’ve chosen “linear” and “adaptive”. Linear represents recorded music as we normally know it – songs on Spotify or film music. Adaptive refers to music in interactive media like computer games or VR. The two upper quadrants are well defined by Gullö, and the lower right covers most existing game music, where the objective is to create reality and make it adaptive to the game. The question arises: what to do with the lower left quadrant? Is it possible to take a documentary approach to the recording and still make it very adaptive? And if so, how can that be done? What challenges will we meet? Can it compete with a generative approach, or will human-composed and performed music in interactive environments be a historical monument belonging to the period when music sounded good but wasn’t that adaptive?
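As a rough sketch of the figure, the model can be expressed as two independent axes; the example I attach to each quadrant is my own reading, not part of Gullö’s original model:

```python
# X-axis (Gullö): "documentation" vs "production" approach to recording.
# Y-axis (mine): "linear" vs "adaptive" playback context.
QUADRANTS = {
    ("documentation", "linear"): "a recorded classical concert",
    ("production", "linear"): "a produced pop song on Spotify",
    ("production", "adaptive"): "most existing game music",
    ("documentation", "adaptive"): "the open question: performed music, made adaptive",
}

for (approach, playback), example in QUADRANTS.items():
    print(f"{approach:13} / {playback:8} -> {example}")
```

The last entry is the lower left quadrant the post is asking about.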
I quite like the lower left quadrant. I’ll run with it.
I sat down with my excellent co-supervisor, Per Mårtensson, the other week and got some good, hard questions to answer. It became clear that I wasn’t clear at all about what I was doing. Finally I saw the missing part in what I was trying to describe: I didn’t see the thing that was just too obvious to myself.
Sometimes when you spend time with experts in a subject, you might forget that it’s your subject as well. In my case, I’ve been teaching game music, programming basics, web design and project management at the Royal College of Music in Stockholm for years. In the company of my music production expert colleagues Jan-Olof Gullö, Juhani Hemmilä and Hans Gardemar, I feel like a web hacker at a music college, but really, my background is music production. I’ve spent so much time producing music with hopeless technology. I’ve tried to make a great vocal track with an old spring reverb. I’ve discovered the sys-ex code for my Roland MT-32 to turn off the reverb on the bass drum, and I’ve lost hours and hours of work when the Alesis MMT-8 lost its track data. I was also part of SourceForce with Mats Liljedahl and Bjarne Nyquist, developing the most advanced MIDI-Xtra (Sequence-Xtra) for Macromedia Director, and I have spent a lot of time developing interactive music pedagogy tools.
Therefore, it’s quite natural that my perspective on what I will explore in my research really is a music producer’s perspective. It’s not about interactive composition, the function of music in games, music theory or interactive live performance, even if a lot will relate to them.
The question is really something like: “How can the music production technology for interactive applications be improved to support musical expressions currently not supported?”
Well… I know, it wasn’t that definite. I’ll be back. Refining.
How will I find answers to my questions? How do I know if I discover something meaningful and valid to people?
The first part of my study will focus on scanning journals and conferences for papers in my research area, to see where much has been done and where there still are questions to ask. I will interview composers and producers of game music, documenting their processes to identify obstacles and challenges. I will examine and compare existing middleware used to integrate music into games, to find areas where the music is limited by the technology.
I will keep developing my own interactive music framework (iMusic – more on that in future posts) to test different solutions for integrating music into interactive applications. I will use iMusic to build an interactive, audiovisual survey where I can collect feedback from (hopefully) lots of users and evaluate their responses to the musical experience. My aim is to get a better understanding of how different technical solutions affect the listening experience for different listeners.
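To give an idea of the kind of data such a survey could yield, here is a hypothetical sketch of a response record and a simple comparison between two integration techniques. The field names and technique names are placeholders of my own, not the actual iMusic design:

```python
from dataclasses import dataclass

@dataclass
class SurveyResponse:
    """One listener's rating of one music-integration technique (hypothetical schema)."""
    listener_id: str
    technique: str   # which integration technique the listener heard
    rating: int      # experience rating on a 1-5 scale

def mean_rating(responses, technique):
    """Average rating for one technique, or None if nobody rated it."""
    scores = [r.rating for r in responses if r.technique == technique]
    return sum(scores) / len(scores) if scores else None

responses = [
    SurveyResponse("u1", "crossfade", 4),
    SurveyResponse("u2", "crossfade", 5),
    SurveyResponse("u1", "hard_cut", 2),
]
print(mean_rating(responses, "crossfade"))  # 4.5
```

Even a simple aggregation like this would let me compare how different technical solutions are experienced across many listeners.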
This is my current strategy and method. It might well be refined along the way. If any of you have ideas, or want to get connected and involved in any way, please don’t hesitate to contact me.
Many people ask me: “What is your research question?”, “What is your subject?”, “What exactly will you do?” and other valid and good questions.
The broad answer is “more knowledge in the area of music in interactive applications”, but there is of course much more to say. I also wouldn’t be totally surprised if the target moves a bit when I start trying to catch it, but here is a glimpse of what can be expected:
I will scan research done so far in the subject
I will network with other nerds around the world searching for answers
I will evaluate how different technical solutions (for producing and integrating music into interactive applications) affect the end result for different listeners.
I will focus on the challenges in the process of making (live) performed, traditional music for interactive applications rather than computer generated, experimental music.
I will continue building my interactive music framework to solve some yet unmet needs in this area.
It’s good to know WHAT you do and WHY you do it in order to find HOW to do it. This holds for research, art and industry alike. I will try to answer these three questions regarding my own research in three blog posts. Please feel free to comment and share your thoughts. They will be valuable input for my future texts.
WHY am I doing this study?
I recently heard that kids growing up in Sweden today listen to more music through games (on smartphones, tablets and gaming consoles) than they do through more “traditional” channels like Spotify and YouTube. Ancient formats like the CD seem to be completely outdated.
There are many indications that “Virtual Reality” will become the next big thing in the entertainment industry, and there has been a huge trend on the web where static web pages with information are turned into social, interactive experiences. We also see a trend in museums, exhibitions and even concerts where interactivity, feedback and participation from the visitor/audience/consumer are more and more an expected part of the experience.
The combination of audio and visuals in an interactive environment requires new technical solutions and skills, which at the moment leaves most trained musicians, composers and producers outside.
You can also argue that the currently available technology for integrating music into games and other interactive applications heavily restricts how the music can be used.
I’m passionate about music, about musicians’ ability to communicate, and about the joy of interaction. To keep this possible even in a new age where interactive applications are the primary way for people to experience music, lots of new knowledge, technologies and methods are needed. I hope my research can play a part in answering that need.