WebAudioXML makes it possible to build Web Audio applications using XML to create audio nodes, manage audio signal routing, and map external variables to audio parameters. The project is part of my PhD and is available at https://github.com/hanslindetorp/WebAudioXML. The DEMO above is one of many available at CodePen. The keyboard in the example is implemented using Web Audio Controls.
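To give a feeling for the idea, here is a hypothetical sketch of what such an XML document might look like. The element and attribute names are only illustrative (they mirror Web Audio API node names, with nesting suggesting signal routing) and are not verified against the library's actual schema, so please consult the GitHub repository for the real syntax:

```xml
<!-- Illustrative sketch only, not the verified WebAudioXML syntax. -->
<!-- A sawtooth oscillator routed through a gain node. -->
<Audio>
  <GainNode gain="0.5">
    <OscillatorNode type="sawtooth" frequency="220"/>
  </GainNode>
</Audio>
```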
Please feel free to use it, contribute and participate in the development. The code cannot be bought or licensed, and it comes with no warranties or support, but don't hesitate to contact me via LinkedIn or my research page on Facebook if you share my passion for sound and music computing.
There is also a ten-lesson tutorial available on YouTube to help you get started:
Thanks to Simon Eidenskog for initial feedback, Kjetil Falkenberg for supervision and co-writing the paper, Peter Schyborger, Mattias Sköld, Mattias Peterson and my colleagues at KTH for valuable feedback.
I'm on my way back from Glasgow where I just visited CHI2019. The conference is known for being the biggest research conference in the Human-Computer Interaction community. This year I was honored to be a presenter together with my brilliant colleague, Emma Frid. If you are interested in interactive multimodal installations, please have a look at our paper "Sound Forest – an exploratory evaluation of an interactive installation". I will write another post about the paper, so let's get to the point:
First of all: CHI is huge! I was told we were 3,800 delegates from all over the world. The biggest companies in the field were there, including Google, Facebook, Adobe, Spotify, Mozilla and many more. From Monday to Thursday there were over 20 seminars in parallel, adding up to something like 1,000 presentations! The schedule is found at https://confer.csail.mit.edu/chi2019/schedule, and I went immediately to the filter and searched for "music". I didn't find that many talks, so I added "sound", which together with "tactile" made most of the days busy enough.
Besides the talks, there was also a big hall where big companies shared the space with new startups and lots of researchers eager to share their thoughts and results with others. One of the most interesting posters to me was Jacob Harrison's "Accessible Instruments": http://instrumentslab.org/research/accessible-instruments.html.
Looking at the conference as a whole, I would say the trends definitely point towards artificial intelligence and virtual reality. Even different aspects of "robots" seem to attract a lot of attention. One aspect of CHI I really liked was the playful attitude towards design. You could find anything from very useful tools for people with disabilities to more provocative studies on what an Internet for dogs would look like.
Zooming in a bit more on my own topic, Sound and Music Computing, I left with some thoughts:
It seems to me that there is more interest in haptics than in sound, and more interest in sound than in music. This more or less leaves music out of interaction design, and when someone does involve "music" it tends to be sine waves, white noise or MIDI notes controlled by an interactive system. The result is very rarely something I would consider "music".
My conclusion is that there is a huge area still to be explored when it comes to integrating more "normal" music into interactive environments. So, let's roll up our sleeves and see what we can do to contribute to this area. It's too big to be left unexplored!
In our division at KTH, Media Technology and Interaction Design, we have a nice tradition of going away for a few days at the beginning of every semester to focus on writing. This time, we are visiting the beautiful "Wiks slott" north of Stockholm. The place has a long history, and I quote their own web page:
Wik Castle is unique. It was constructed in the late 1400s and is Sweden’s best-preserved medieval castle. Its roots, on the other hand go at least another 200 years back in time. Solid walls and moats made the castle impregnable. In the Middle Ages, the castle was one of the strongest fortresses in the Mälar valley, and Gustav Vasa once besieged it for over a year without getting inside the walls.
It was a bit scary to prepare for the writing camp, and I wonder why. The people are nice, the place is beautiful, the food is great, and even if there are lots of rumors about ghosts in this area, I've been sleeping like a prince (or maybe better). So what makes it scary? I think it comes down to the fact that it is scary to expose your thoughts, your structure (or lack thereof) and your language skills to someone else and be ready to be criticized for them. I also realize that if I, a fifty-year-old man, still feel a bit intimidated by letting someone read and criticize my text, then how hard must it be for our students when they expose their songs, lyrics and productions for evaluation?
On the other hand, what a gift to future generations if we could build a community of trust where no-one is ashamed and everyone dares to expose their inner self without the risk of being dismissed.
I'm currently taking a course on "Interdisciplinary Perspectives on Rhythm" given by my supervisor Andre Holzapfel. It's great. And it's provoking.
I got a wonderful quote from N T Wright perfectly describing my feeling after reading four articles about speech rhythm for today’s session: “I’m still confused – but at a much higher level…”
This week I moved from the state of “not being aware” to “being aware of my unawareness”. I didn’t know there’s a whole academic community looking for and discussing speech rhythm. And I never thought about how difficult it is to define what a rhythm is. Most people I’ve asked think there is “rhythm” in speech. Interestingly enough, researchers still haven’t found specific rhythm patterns for different languages. There are bigger variations between different people and different moods within a language than there are between different languages.
The notion of “rhythm” is actually very hard to describe. Wikipedia tells us:
But there are no regular motions in speech and still there is rhythm. Looking at music, it’s not too different. Lots of music has rhythm without being periodic, pulse based or regular.
Maybe we can get some help from research in folk music. Sven Ahlbäck gives us some useful tools where he organises rhythm into:
“Gestalt” – a rhythmic gesture, phrase or motif
“Periodicity” – rhythms relating to pulse, meter, periods etc
Using Sven's terminology, speech normally follows the "gestalt" approach, while music often uses rhythm relating to "periodicity". But when we read poems with a meter, or say something in unison, we tend to bend the speech rhythm towards "periodicity". And when the music is more "free", it uses more "gestalt". Would this do? Are there more rhythms out there?
(Warning: this blog post might contain content unsuitable for artists, music lovers and others. It includes some quite nerdy ideas related to music and research.)
Last week I came across an extraordinary way of describing rhythm. When I first saw it I got a bit upset, but when I realised what a beautiful way of describing a rhythm it is, I almost fell in love with it. OK, it has its limitations, but as long as you stick to a three-note rhythm, it works beautifully.
The concept is as simple as it is genius. One way of understanding it is this:
Take this rhythm:
The relation between the three note lengths can be expressed as
1 : 1 : 2
If we say that the total length of the notes is 100%, we can also express the rhythm as
25% : 25% : 50%
Now, let's draw this rhythm using a triangle where the length of each side represents 100% of the total length of the rhythm:
In this way, we can represent any three-note rhythm in this graph, making it possible to study, for example, other divisions than our notation-based system of quarter notes, eighth notes, sixteenth notes, etc. It has also proven to be very useful when we want to study and visualise rhythmic perception and performance.
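The mapping from note lengths to a point in the triangle can be sketched in a few lines of Python. This is a minimal illustration under my own assumptions (durations in arbitrary units, corners of an equilateral triangle placed at convenient coordinates):

```python
import math

def rhythm_proportions(durations):
    """Normalize note durations so they sum to 1 (i.e. 100%)."""
    total = sum(durations)
    return [d / total for d in durations]

def triangle_point(p1, p2, p3):
    """Map three proportions to a 2D point inside an equilateral triangle
    using barycentric coordinates (each corner = one note taking 100%)."""
    corners = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
    x = sum(p * cx for p, (cx, _) in zip((p1, p2, p3), corners))
    y = sum(p * cy for p, (_, cy) in zip((p1, p2, p3), corners))
    return x, y

# The rhythm above: 1 : 1 : 2  ->  25% : 25% : 50%
props = rhythm_proportions([1, 1, 2])
print(props)  # [0.25, 0.25, 0.5]
print(triangle_point(*props))
```

Note that a rhythm with three equal notes lands exactly in the centre of the triangle, while a point drifting towards a corner means one note is swallowing the whole pattern.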
Perfectly in sync with my thoughts about linear and non-linear music, I will attend a new course at the Royal Institute of Technology (KTH) today. The name of the course is "Interdisciplinary Perspectives on Rhythm", and today's seminar is about temporality. We're excited to meet Martin Scherzinger over Skype to discuss deep philosophical questions about absolute time, circular time, rhythm and related things. When reading his text on temporalities from the book "The Oxford Handbook of Critical Concepts in Music Theory", I came across one shocking fact that I didn't have a clue about. You might know it already, but if you don't:
The rotation speed of the earth varies all the time!
The time it takes for one rotation differs by 3 minutes 56 seconds depending on whether we measure relative to the sun or the stars.
It varies by 30+ seconds across the year.
It slows down about 2ms per century.
Obviously the rhythm of the universe is not quantized!
I've been touching on linear vs. non-linear music in earlier posts, and even if I argue that music is always linear when we hear it, we also know that there is an element of non-linearity in music for games, VR and other environments where the music needs to adapt to interactions. Through my teaching at the Royal College of Music in Stockholm, my students and I have talked about alternative views on music production. One idea that seemed strong was a Music Mind Map, where we could have an overview of themes, tracks and parts in a production, with musical transitions between them as we navigate through the music.
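One way to think about such a Music Mind Map is as a directed graph: the parts of a production are nodes, and the allowed musical transitions are edges. A minimal Python sketch of the idea (all section names here are hypothetical examples, not from an actual production):

```python
# Hypothetical sections of a production and the transitions between them.
mind_map = {
    "intro":  ["verse"],
    "verse":  ["chorus", "bridge"],
    "chorus": ["verse", "outro"],
    "bridge": ["chorus"],
    "outro":  [],
}

def can_transition(a, b):
    """Is there a musical transition from section a to section b?"""
    return b in mind_map.get(a, [])

print(can_transition("verse", "chorus"))  # True
print(can_transition("intro", "outro"))   # False
```

An interactive system navigating this graph would pick the next node based on user input, and the "musical transitions" on each edge would be where the compositional craft comes in.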
I went to Milan this weekend, one of the world's centres of design. We went to see their design museum, the Triennale, fantastic interior design shops, the beautiful MUDEC museum and the Leonardo3 museum. It was amazing to see all the fantastic shapes, lights, colours and materials playing together like instruments in an orchestra.
Every now and then I stopped and listened. Listened to the beautiful sounds of people. And to lots of terrible sounds. Screaming sounds from an approaching train, beeping ticket machines, LoFi-speakers at museums playing different music simultaneously. I asked myself: What would this chaos of sounds look like if they were visuals? And a more pleasing thought: What would all the beautiful visual design sound like if it was translated into music?
I strongly believe in making public places more peaceful, creative and positive through design. And musical design would be an important part of it.
I find the question about what kind of knowledge we create more and more engaging. Learning to do research in collaboration with the Royal Institute of Technology (KTH) and the Royal College of Music (KMH) puts me in an exciting landscape between looking for general knowledge through lots of data, numbers and statistics, and searching for the more specific by digging deeper through interviews and interactions. Luckily, I have found myself in a very exciting workgroup at KTH with lots of experience in exactly this area, and I'm realising there are many reasons for using mixed methods, combining insights from different approaches to get a better picture of the problem.
This week I’m planning for a study where I want to gain more knowledge about how music producers would respond to a new interface for music production applications. It involves prototyping, testing and evaluation and I realise that this is not the last time I will do something like that. The question is: what general or specific knowledge is there to find and how do we find it? How general can we be in our studies before the result is not interesting at all? How specific and personal can the result be and still be of common interest?
Without time, there is no music. Therefore, "non-linear music" is a confusing term. Live-performed music is linear, even if improvisations loosen up the form a bit. Even loop-based, produced music is linear, if only within smaller blocks. In music for games we talk about "adaptive", "dynamic" or "non-linear" music, but is it really non-linear?
In adaptive music, the final musical form is not linear according to whatever preconceptions the composer might have had, but it is still linear when we hear it.
If we want to build an adaptive music engine that supports performed music better, we can probably use a lot of the theories developed in improvised music, as well as the editing practices from record production of classical music. This insight will guide me further into my studies of Adaptive Music Production.