An important part of research is communication. The challenge for many researchers is that it requires a very different set of skills compared to running experiments and analysing the results. The traditional output of research is written papers and articles. A few years ago I would never have thought I would say it, but I have to admit: it is very satisfying to read and understand a well-written paper on an important subject. It is clearly formulated, the question is easy to understand, the method is well chosen and the study itself is well performed, analysed and discussed. It is often super-nerdy, and it sort of has to be, because most of the audience are fellow researchers.
For the rest of the world, the interesting thing is what implications the research results might lead to, and this is an important question. Who communicates the research to people outside the research community who lack the nerdy knowledge of, and interest in, the subject that the author of the paper has? My conviction is that this is a super-important question, maybe more now than ever. There is a big temptation for anyone to grab a result from a study and use it for their own interest. It might apply to commercial interests, journalists and Twitter users alike. It’s easy to call anything “scientifically proven” without asking all the difficult and critical questions we have to ask.
Every now and then it’s good to stop and pay attention to what has led you to where you are.
Last week I found this flyer in our basement. It reminded me of some very important friends and years that eventually led to where I am now and to my current research:
Bjarne Nyquist, Mats Liljedahl and I formed a company called ”Source Force”. Twenty years ago we built one of the very first online tools for music creation – ”WebJam”. We were inspired by eJay – a music application where you could arrange loops to make your own mix – and built WebJam as a web-based, super-lightweight, collaborative and cloud-based music application.
WebJam was built with “Macromedia Director” (the predecessor to Adobe Flash) and our own plugin “SequenceXtra” (later acquired by Sibelius Ltd). It came with beautiful MIDI loops by Rune Fränne and cool graphics from a company I unfortunately can’t remember the name of. Apart from the fairly traditional aspects of a music-making application, it had a very cool feature: an animated character that danced along with the music you arranged with the loops. The more intense the loops, the more intense the dancing. And it was all in sync with the music.
Today, when reflecting on what we achieved within the limitations of that time (if I remember correctly, the whole application, loops and all, was less than 400 kB), I get quite impressed and realize that a lot of what we did actually has a lot of bearing on my current research: interaction, music, design, sonification, multisensory perception. All made with web technologies. Similar ideas and perspectives. New platforms and solutions.
Without The SourceForce Team and the WebJam project my life wouldn’t have been the same. I’m truly thankful to Mats and Bjarne and for what we achieved together.
WebJam is gone, but there are still some digital ruins to visit:
WebAudioXML makes it possible to build Web Audio applications using XML to create audio nodes, manage audio signal routing and map external variables to control audio parameters. The project is part of my PhD and is available at https://github.com/hanslindetorp/WebAudioXML. The demo above is one of many available at CodePen. The keyboard in the example is implemented using Web Audio Controls.
Please feel free to use it, contribute and participate in the development. It’s not possible to buy or license the code nor to expect any warranties or support, but don’t hesitate to contact me via LinkedIn or my research page on Facebook if you share my passion for sound and music computing.
There is also a ten-lesson tutorial on YouTube to help you get started:
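To give a feel for the idea of declaring audio nodes in XML, here is a hypothetical sketch of a small signal chain. The element and attribute names below are assumed to mirror the Web Audio API node names; check the repository for the exact current syntax.

```xml
<!-- Hypothetical sketch, not guaranteed to match the current
     WebAudioXML syntax: a sawtooth oscillator routed through a
     low-pass filter and a gain node. -->
<Audio>
  <OscillatorNode type="sawtooth" frequency="220"></OscillatorNode>
  <BiquadFilterNode type="lowpass" frequency="800"></BiquadFilterNode>
  <GainNode gain="0.5"></GainNode>
</Audio>
```

The appeal of this approach is that the audio graph is described declaratively, much like HTML describes a page, while external variables (from the interface or sensors) can be mapped to the parameters.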
Thanks to Simon Eidenskog for initial feedback, Kjetil Falkenberg for supervision and co-writing the paper, Peter Schyborger, Mattias Sköld, Mattias Peterson and my colleagues at KTH for valuable feedback.
I’m on my way back from Glasgow where I just visited CHI2019. The conference is known for being the biggest research conference within the Human Computer Interaction community. This year I was honored to be a presenter together with my brilliant colleague, Emma Frid. If you are interested in interactive multimodal installations, please have a look at our paper “Sound Forest – an exploratory evaluation of an interactive installation“. I will write another post about the paper, so let’s get to the point:
First of all: CHI is huge! I was told there were 3,800 delegates from all over the world. The biggest companies in the field were there, including Google, Facebook, Adobe, Spotify, Mozilla and many more. From Monday to Thursday there were over 20 seminars in parallel, adding up to something like 1,000 presentations! The schedule can be found at https://confer.csail.mit.edu/chi2019/schedule and I went immediately to the filter and searched for “music”. I didn’t find that many talks, so I added “sound”, which together with “tactile” made most of the days busy enough.
Besides the talks, there was also a big hall where big companies shared the space with new startups and lots of researchers eager to share their thoughts and results with others. One of the most interesting posters to me was Jacob Harrison’s “Accessible Instruments”: http://instrumentslab.org/research/accessible-instruments.html.
Looking at the conference as a whole, I would say the trends definitely point towards artificial intelligence and virtual reality. Even different aspects of “robots” seem to attract a lot of attention. One aspect of CHI I really liked was the playful attitude towards design. You could find anything from very useful tools for people with disabilities to more provocative studies on what an Internet for dogs would look like.
Zooming in a bit more on my own topic, Sound and Music Computing, I left with some thoughts:
It seems to me that there is more interest in haptics than in sound, and more interest in sound than in music. This more or less leaves the topic of music out of interaction design, and when someone does involve “music” it tends to be sine waves, white noise or MIDI notes controlled by an interactive system. The result is very rarely something I would consider “music”.
My conclusion is that there is a huge area still to be explored when it comes to integrating more “normal” music into interactive environments. So, let’s roll up our sleeves and see what we can do to contribute to this area. It’s too big to be left unexplored!
In our division at KTH, Media Technology and Interaction Design, we have a nice tradition of going away for a few days at the beginning of every semester to focus on writing. This time, we are visiting the beautiful “Wiks slott” north of Stockholm. The place has a long history, and to quote their own web page:
Wik Castle is unique. It was constructed in the late 1400s and is Sweden’s best-preserved medieval castle. Its roots, on the other hand go at least another 200 years back in time. Solid walls and moats made the castle impregnable. In the Middle Ages, the castle was one of the strongest fortresses in the Mälar valley, and Gustav Vasa once besieged it for over a year without getting inside the walls.
It was a bit scary to prepare for the writing camp, and I wonder why. The people are nice, the place is beautiful, the food is great, and even though there are lots of rumors about ghosts in this area, I’ve been sleeping like a prince (or maybe better). So what makes it scary? I think it comes down to the fact that it is scary to expose your thoughts, your structure (or lack thereof) and your language skills to someone else and be ready to be criticized for them. I also realize that if I, a fifty-year-old man, still feel a bit intimidated by letting someone read and criticize my text, then how hard must it be for our students when they expose their songs, lyrics and productions for evaluation?
On the other hand, what a gift to future generations if we could build a community of trust where no-one is ashamed and everyone dares to expose their inner self without the risk of being dismissed.
I’m currently taking a course on “Interdisciplinary Perspectives on Rhythm” given by my supervisor Andre Holzapfel. It’s great. And it’s provocative.
I got a wonderful quote from N T Wright perfectly describing my feeling after reading four articles about speech rhythm for today’s session: “I’m still confused – but at a much higher level…”
This week I moved from the state of “not being aware” to “being aware of my unawareness”. I didn’t know there’s a whole academic community looking for and discussing speech rhythm. And I never thought about how difficult it is to define what a rhythm is. Most people I’ve asked think there is “rhythm” in speech. Interestingly enough, researchers still haven’t found specific rhythm patterns for different languages. There are bigger variations between different people and different moods within a language than there are between different languages.
The notion of “rhythm” is actually very hard to describe. Wikipedia tells us:
But there are no regular motions in speech and still there is rhythm. Looking at music, it’s not too different. Lots of music has rhythm without being periodic, pulse based or regular.
Maybe we can get some help from research in folk music. Sven Ahlbäck gives us some useful tools where he organises rhythm into:
“Gestalt” – a rhythmic gesture, phrase or motif
“Periodicity” – rhythms relating to pulse, meter, periods etc
Using Sven’s terminology, speech normally follows the “gestalt” approach, while music often relies on rhythm relating to “periodicity”. But when we read poems with a meter, or say something in unison, we tend to bend the speech rhythm towards “periodicity”. And when the music is more “free”, it uses more “gestalt”. Would this do? Are there more rhythms out there?
(Warning: this blog post might contain content unsuitable for artists, music lovers and others. It includes some quite nerdy ideas related to music and research.)
Last week I came across an extraordinary way of describing rhythm. When I first saw it I got a bit upset, but when I suddenly realised what a beautiful way of describing a rhythm it is, I almost fell in love with it. OK, it’s got its limitations, but as long as you stick to a three-note rhythm, it works beautifully.
The concept is as simple as it is genius. One way of understanding it is this:
Take this rhythm:
The relation between the three note lengths could be expressed as
1 : 1 : 2
If we say that the total length of the notes is 100%, we could also express the rhythm as
25% : 25% : 50%
Now, let’s draw this rhythm using a triangle where the length of each side represents 100% of the total length of the rhythm:
In this way, we can represent any three-note rhythm in this graph, making it possible to study, for example, divisions other than our notation-based system of quarter notes, eighth notes, sixteenth notes etc. It has also proven very useful when we want to study and visualise rhythmic perception and performance.
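The mapping above can be sketched in a few lines of code. This is a minimal sketch using barycentric coordinates, where each corner of an equilateral triangle corresponds to one of the three notes taking up 100% of the cycle; the vertex placement here is one common convention, not necessarily the one used in the original figure.

```python
import math

def rhythm_to_point(d1, d2, d3):
    """Map a three-note rhythm to a point inside an equilateral
    triangle: normalize the durations to fractions of the total,
    then use them as barycentric weights on the three corners."""
    total = d1 + d2 + d3
    f1, f2, f3 = d1 / total, d2 / total, d3 / total  # sum to 1
    # Triangle corners, one per note
    ax, ay = 0.0, 0.0
    bx, by = 1.0, 0.0
    cx, cy = 0.5, math.sqrt(3) / 2
    return (f1 * ax + f2 * bx + f3 * cx,
            f1 * ay + f2 * by + f3 * cy)

# The 1 : 1 : 2 rhythm above (25% : 25% : 50%)
x, y = rhythm_to_point(1, 1, 2)
print(round(x, 3), round(y, 3))  # → 0.5 0.433
```

A perfectly even 1 : 1 : 1 rhythm lands exactly in the centre of the triangle, and any deviation from evenness moves the point towards the corners, which is what makes the plot useful for visualising performed timing.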
Perfectly in sync with my thoughts about linear and non-linear music, I will attend a new course at the Royal Institute of Technology (KTH) today. The name of the course is “Interdisciplinary Perspectives on Rhythm” and today’s seminar is about temporality. We’re excited to meet Martin Schertzinger over Skype to discuss deep philosophical questions about absolute time, circular time, rhythm and related things. When reading his text on temporalities from the book “The Oxford Handbook of Critical Concepts in Music Theory” I came across one shocking fact that I didn’t have a clue about. You might know it already, but if you don’t:
The rotation speed of the earth varies all the time!
One full rotation differs by 3 minutes 56 seconds depending on whether we measure it relative to the sun or to the stars.
The length of a day varies by 30+ seconds across the year.
It slows down by about 2 ms per century.
Obviously the rhythm of the universe is not quantized!
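The 3 minutes 56 seconds figure can be sanity-checked with a quick back-of-the-envelope calculation (a sketch with approximate constants): relative to the stars, the Earth completes one extra rotation per year, so a sidereal day is shorter than a solar day by roughly one part in 366.25.

```python
# Back-of-the-envelope check of the solar vs. sidereal day difference.
# The Earth makes about 366.25 rotations relative to the stars in the
# 365.25 days of a year, so each solar day the stars gain one part in
# 366.25 of a full day on the sun.
SOLAR_DAY = 24 * 3600            # seconds in a mean solar day
ROTATIONS_PER_YEAR = 366.25      # sidereal rotations per year (approx.)

diff = SOLAR_DAY / ROTATIONS_PER_YEAR   # seconds of difference per day
minutes, seconds = divmod(diff, 60)
print(f"{int(minutes)} min {seconds:.0f} s")  # → 3 min 56 s
```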
I’ve touched on linear vs. non-linear music in earlier posts, and even if I argue that music is always linear when we hear it, we also know that there is an element of non-linearity in music for games, VR and other environments where the music needs to adapt to interaction. Through my teaching at the Royal College of Music in Stockholm, my students and I have talked about alternative views on music production. One idea that seemed strong was a Music Mind Map, where we could have an overview of themes, tracks and parts in a production and have musical transitions between them when navigating through the music.
I went to Milan this weekend – one of the world’s centres for design. We went to see their design museum, Triennale, fantastic interior design shops, the beautiful MUDEC museum and the Leonardo3 museum. It was amazing to see all the fantastic shapes, lights, colours and materials playing together like instruments in an orchestra.
Every now and then I stopped and listened. Listened to the beautiful sounds of people – and to lots of terrible sounds: screaming sounds from an approaching train, beeping ticket machines, lo-fi speakers at museums playing different music simultaneously. I asked myself: what would this chaos of sounds look like if it were visual? And a more pleasing thought: what would all the beautiful visual design sound like if it were translated into music?
I strongly believe in making public places more peaceful, creative and positive through design. And musical design would be an important part of it.