Time and Configuration

Online DEMO: https://hanslindetorp.github.io/WebAudioXML/

Since I first began formulating an XML language for audio configuration, which eventually became the WebAudioXML project, I have come across many interesting perspectives on music, audio and programming. One of the most profound recent thoughts is definitely about time and configuration, which I mentioned briefly in the post about envelopes and compositions. WebAudioXML started out as a language for audio configuration, but it became clear that as soon as you introduce an envelope object, or even a time-related property such as a frequency, you are in the business of dealing with time.
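To make that concrete, here is a minimal sketch of mine in plain Web Audio JavaScript (not WebAudioXML syntax): even a seemingly static configuration value like a frequency is secretly a statement about time, and an envelope is nothing but an explicit schedule of values over time.

```js
// A minimal sketch in plain Web Audio JavaScript (not WebAudioXML syntax).
const ctx = new AudioContext();
const osc = ctx.createOscillator();
const amp = ctx.createGain();
osc.connect(amp).connect(ctx.destination);

// A "configuration" value that is secretly about time:
osc.frequency.value = 440; // i.e. a period of 1/440 ≈ 2.27 ms

// An envelope: explicitly a schedule of values over time.
const now = ctx.currentTime;
amp.gain.setValueAtTime(0, now);                  // start silent
amp.gain.linearRampToValueAtTime(1, now + 0.05);  // attack: 50 ms
amp.gain.linearRampToValueAtTime(0.6, now + 0.3); // decay to sustain level
amp.gain.setTargetAtTime(0, now + 1, 0.1);        // release after 1 s

osc.start(now);
osc.stop(now + 2);
```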

I have been looking at WebAudioXML and iMusicXML as two different projects, where iMusicXML deals with musical structures (arrangements, tracks, regions, loops, motifs, lead-ins and so on) and WebAudioXML takes care of the configuration side of the music production (volumes, filters, reverbs and the like). But it doesn't work. I will have to merge them. There are no clear borders between them.

An illustration of this is the latest default variable I added to WebAudioXML: currentTime. It represents the time since the Web Audio API was initialized in the web page and is constantly updated. By mapping this variable to pitches and frequencies using the <var> element, we are suddenly making music with numbers in WebAudioXML. Oh no! (or: Hurray!) I found myself trapped in a Pure Data / Max/MSP approach to music…
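For readers who prefer JavaScript, here is roughly what such a mapping does, expressed with the plain Web Audio API rather than WebAudioXML's <var> element. The scale and the mapping formula are my own illustration, not the library's actual behavior.

```js
// Sketch: map elapsed audio-context time to pitch.
// (Most browsers require a user gesture before audio will start.)
const ctx = new AudioContext();
const osc = ctx.createOscillator();
osc.connect(ctx.destination);
osc.start();

const pentatonic = [0, 3, 5, 7, 10]; // semitone offsets from the root

function update() {
  const t = ctx.currentTime;                                      // seconds since init
  const step = pentatonic[Math.floor(t * 4) % pentatonic.length]; // 4 notes per second
  osc.frequency.value = 220 * 2 ** (step / 12);                   // root at A3
  requestAnimationFrame(update);
}
update();
```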

Presence

Photo: Kristine Arapovic Lindetorp

I talked to my supervisor the other day about media presence as a researcher. There is a tension between diving deep and focusing on the details on one hand, and reaching out, communicating and being present on the other. And I guess this tension is here to stay. Without the narrow focus, nothing new will be found or developed, and without the communication, none of it will be known to the rest of the world.

So. As a part of my media presence, I asked my wife to take a picture of me doing something related to my research. And this is the result.
The details of my research might not be totally clear, but at least I look happy 😉

What is the difference between an envelope and a composition, really?

I have been working on two different XML languages for some time, WebAudioXML and iMusicXML, with two well-distinguished targets. Or so I thought. Until now. So my current question is: is there really any good reason for treating a composition as a different type of object than an envelope? Or even a single cycle of a waveform? They are all carriers of information in the sound domain, even if they span quite different ranges of time. A cycle of a waveform might be as short as 1 ms, an envelope around one second, and a composition more like one minute. But can anyone draw a distinct line between them? And what do we gain if we manage to?

This relates to what music actually is, and I know people have been thinking about that for a long time, including John Cage with his famous composition of 4 minutes and 33 seconds of silence. So what is my point in coming back to the topic?

Well, it actually makes a difference when you are making up a language, as I am. Is it one language or is it two? Should "arrangement", "envelope" and "oscillator" be terms of the same language or of different ones? I'm currently leaning towards the former, and I think there is great potential in doing so. If so, I am arguing that there is no real difference between an automation on a track in a DAW and an ADSR envelope in a synthesizer. A melody and a vibrato are variations of the same phenomenon rather than something vastly different.
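The point can be made concrete with code. In the Web Audio API, vibrato and melody can be expressed with the very same primitives, just at different timescales; this is a sketch of mine, assuming nothing beyond the standard API.

```js
// One primitive, several timescales (plain Web Audio API sketch).
const ctx = new AudioContext();
const osc = ctx.createOscillator();
const amp = ctx.createGain();
osc.connect(amp).connect(ctx.destination);
const t0 = ctx.currentTime;

// "Vibrato": pitch variation on a millisecond timescale (6 Hz LFO).
const lfo = ctx.createOscillator();
const depth = ctx.createGain();
lfo.frequency.value = 6;
depth.gain.value = 5;                      // +/- 5 Hz of depth
lfo.connect(depth).connect(osc.frequency); // sums with the scheduled values

// "Melody": pitch variation on a timescale of seconds,
// using the very same AudioParam and the very same kind of call.
[220, 247, 262, 294].forEach((f, i) =>
  osc.frequency.setValueAtTime(f, t0 + i * 0.5));

lfo.start(t0);
osc.start(t0);
```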

It’s all a composition. Simple or complex. Long or short.

Minor and Major with WebAudioXML

Since the advent of WebAudioXML, I've had the vision of a fully flexible language for describing relations between variables and audio parameters. The only way to achieve complex mappings with web audio technology so far has been to write your own application in JavaScript. For creators without coding skills, that often leads to no application at all. My vision is to create a language that defines how variables are mapped and transformed to make (musical) sense in an interactive application. While working on the design, I realised that it is actually quite similar to a spreadsheet application like Excel. Where Excel offers a way for finance people to connect data through formulas to visual graphs, this language makes it possible for the interactive audio artist to connect data through formulas to sound. Enough words. Have a try. You can play on your phone or on your computer. No installation required:

https://hanslindetorp.github.io/WebAudioXML/demos/FlexibleScale/index.html

And if you care, the XML behind the demo looks a bit like HTML, but for sound: two variables construct a scale and control the pitch of an oscillator, making it play major if the phrase goes up and minor if it goes down. Happy playing!
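For anyone curious about the logic rather than the syntax, here is my paraphrase of the demo's idea in plain JavaScript. The actual WebAudioXML source can be viewed in the linked demo; none of these names come from it.

```js
// Sketch of the demo's mapping idea (my paraphrase, not the XML source):
// choose scale degrees from the major scale when the melody moves up
// and from the minor scale when it moves down.
const major = [0, 2, 4, 5, 7, 9, 11]; // semitone offsets
const minor = [0, 2, 3, 5, 7, 8, 10];

let lastStep = 0;
function frequencyFor(step) {
  const scale = step >= lastStep ? major : minor; // direction picks the mode
  lastStep = step;
  const degree = ((step % 7) + 7) % 7;
  const octave = Math.floor(step / 7);
  return 261.63 * 2 ** (octave + scale[degree] / 12); // root at C4
}
```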

The day after a deadline

Very few things compete with the feeling of waking up the day after a submission deadline. There is a special kind of relief when you can do nothing more. The PDF is in the database. The submission system is closed. You are in the hands of the peer-reviewer. Time for contemplation. And rest.

It's amazing how much energy and productivity a deadline can generate. Focus, activity, collaboration, ideas and output. It all flows from the concept of a set point in time after which you can do nothing more about it, combined with the fact that you want it done. Such a good pair of concepts for making things happen!

What if I could trick my brain and tell myself I have a very important deadline before I really have to do something about the thing I want to get done? A point in time before I really have to do it, after which I can do nothing more about it. But to act upon a deadline I need to believe it is real, and if I have made it up myself, I know it is made up and that nothing happens when it passes. Just that: nothing happens.

Maybe this is one of the most important strengths of the collective. If we want to live, work and act together in the world, we need to agree on deadlines, before which things have to happen. Otherwise it is too late, and something else happens instead. And since that is what we don't want, we stick to the deadline.

I love deadlines. Especially the day after.

Communicating research

An important part of research is communication. The challenge for many researchers is that it requires a very different set of skills compared to running tests and analysing results. The traditional output of research is written papers and articles. A few years ago I would never have thought I would say this, but I have to admit: it is very satisfying to read and understand a well-written paper on an important subject. It is clearly formulated, the question is easy to understand, the method is well chosen, and the study itself is well performed, analysed and discussed. It is often super-nerdy, and it sort of has to be, because most of the audience are fellow researchers.

BUT

For the rest of the world, the interesting thing is what implications the research results might have, and this raises an important question: who communicates the research to people outside the research community, people without the nerdy knowledge of and interest in the subject that the author of the paper has? My conviction is that this is a super-important question, maybe more now than ever. There is a big temptation for anyone to grab a result from a study and use it for their own interests. This applies to commercial actors, journalists and Twitter users alike. It is easy to call anything "scientifically proven" without asking all the difficult and critical questions we have to ask.

THEREFORE

It made me so happy when the Swedish public service broadcaster Utbildningsradion (UR) asked me and my colleagues from Kungl. Musikhögskolan (KMH) to record a series of talks where we present our research and special interests, as part of celebrating the 250th anniversary of the Royal Swedish Academy of Music. I brought some music production gear to the recording session in the Royal Hall at KMH and talked about human beings, music, history, musical instruments, computers, the game industry, AI and the future. I will announce when it goes live on UR Play, but it might require some training for my non-Swedish followers 😉

Gratitude#1 – SourceForce/WebJam

Every now and then it's good to stop and pay attention to what has led you to where you are.

Last week I found this flyer in our basement. It reminded me of some very important friends and years that eventually led to where I am now and to my current research:

Bjarne Nyquist, Mats Liljedahl and I formed a company called "Source Force". Twenty years ago we built one of the very first online tools for music creation: "WebJam". We were inspired by eJay, a music application where you could arrange loops to make your own mix, and built WebJam as a web-based, super-lightweight, collaborative and cloud-based music application.

WebJam was built with "Macromedia Director" (the predecessor to Adobe Flash) and our own plugin "SequenceXtra" (later acquired by Sibelius Ltd). It came with beautiful MIDI loops by Rune Fränne and cool graphics from a company whose name I unfortunately can't remember. Apart from the fairly traditional aspects of a music-making application, it had a very cool feature: an animated character that danced along with the music you arranged with the loops. The more intensity in the loops, the more intense the dancing. And it was all in sync with the music.

Today, reflecting upon what we achieved with the limitations of that day (if I remember correctly, the whole application, loops and all, was less than 400 kB), I am quite impressed and realize that a lot of what we did actually has a lot of bearing on my current research: interaction, music, design, sonification, multisensory perception. All made with web technologies. Similar ideas and perspectives. New platforms and solutions.

Without the Source Force team and the WebJam project, my life wouldn't have been the same. I'm truly thankful to Mats and Bjarne for what we achieved together.

WebJam is gone, but there are still some digital ruins to visit:

https://web.archive.org/web/20021130035802/http://webjam.sourceforce.nu:80/

WebAudioXML

After many hours of JavaScript coding and scientific paper writing, I can finally present my latest invention: WebAudioXML. I will present it at SMC2020 in June this summer, but here is a preview. Feel free to try it out and let me know what you think.

See the Pen Simple Synth With Mixer by Hans Lindetorp (@hanslindetorp) on CodePen.

WebAudioXML makes it possible to build web audio applications using XML to create audio nodes, manage audio signal routing and map external variables to control audio parameters. The project is part of my PhD and is available at https://github.com/hanslindetorp/WebAudioXML. The demo above is one of many available at CodePen. The keyboard in the example is implemented using Web Audio Controls.
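To give a feel for what those three operations mean in practice, here they are in raw Web Audio JavaScript. WebAudioXML expresses the same things declaratively in XML; see the repository for its actual element names, since this sketch does not use them.

```js
// The three operations in raw Web Audio JavaScript (illustrative sketch).
const ctx = new AudioContext();

// 1. Create audio nodes.
const osc = ctx.createOscillator();
const filter = ctx.createBiquadFilter();
const gain = ctx.createGain();

// 2. Manage audio signal routing.
osc.connect(filter).connect(gain).connect(ctx.destination);

// 3. Map an external variable (here: mouse X) to an audio parameter.
window.addEventListener("mousemove", (e) => {
  const x = e.clientX / window.innerWidth; // normalized 0..1
  filter.frequency.value = 100 + x * 5000; // 100..5100 Hz
});

osc.start();
```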

Please feel free to use it, contribute and participate in the development. You cannot buy or license the code, and there are no warranties or support, but don't hesitate to contact me via LinkedIn or my research page on Facebook if you share my passion for sound and music computing.

There is also a ten-lesson tutorial available on YouTube to help you get started:

Thanks to Simon Eidenskog for initial feedback, Kjetil Falkenberg for supervision and co-writing the paper, Peter Schyborger, Mattias Sköld, Mattias Peterson and my colleagues at KTH for valuable feedback.

CHI Needs Music!

I'm on my way back from Glasgow, where I just visited CHI2019. The conference is known for being the biggest research conference within the Human-Computer Interaction community. This year I was honored to be a presenter together with my brilliant colleague, Emma Frid. If you are interested in interactive multimodal installations, please have a look at our paper "Sound Forest – an exploratory evaluation of an interactive installation". I will write another post about the paper, so let's get to the point:

First of all: CHI is huge! I was told we were 3,800 delegates from all over the world. The biggest companies in the field were there, including Google, Facebook, Adobe, Spotify, Mozilla and many more. From Monday to Thursday there were over 20 seminars in parallel, adding up to something like 1,000 presentations! The schedule can be found at https://confer.csail.mit.edu/chi2019/schedule. I went immediately to the filter and searched for "music". I didn't find that many talks, so I added "sound", which together with "tactile" made most of the days busy enough.

Besides the talks, there was also a big hall where the big companies shared the space with new startups and lots of researchers eager to share their thoughts and results with others. One of the most interesting posters to me was Jacob Harrison's "Accessible Instruments": http://instrumentslab.org/research/accessible-instruments.html.

Looking at the conference as a whole, I would say the trends definitely point towards artificial intelligence and virtual reality. Different aspects of "robots" also seem to attract a lot of attention. One aspect of CHI I really liked was the playful attitude towards design. You could find anything from very useful tools for people with disabilities to more provocative studies on what an Internet for dogs would look like.

Zooming in a bit more on my own topic, Sound and Music Computing, I left with some thoughts:
It seems to me that there is more interest in haptics than in sound, and more interest in sound than in music. This more or less leaves music out of interaction design, and when someone does involve "music" it tends to be sine waves, white noise or MIDI notes controlled by an interactive system. The result is very rarely something I would consider "music".

My conclusion is that there is a huge area still to be explored when it comes to integrating more "normal" music into interactive environments. So let's roll up our sleeves and see what we can contribute to this area. It's too big to be left unexplored!

Writing c(r)amp

In our division at KTH, Media Technology and Interaction Design, we have a nice tradition of going away for a few days at the beginning of every semester to focus on writing. This time we are visiting the beautiful "Wiks slott" north of Stockholm. The place has a long historical background, and I quote their own web page:

Wik Castle is unique. It was constructed in the late 1400s and is Sweden's best-preserved medieval castle. Its roots, on the other hand, go at least another 200 years back in time. Solid walls and moats made the castle impregnable. In the Middle Ages, the castle was one of the strongest fortresses in the Mälar valley, and Gustav Vasa once besieged it for over a year without getting inside the walls.

It was a bit scary to prepare for the writing camp, and I wonder why. The people are nice, the place is beautiful, the food is great, and even though there are lots of rumors about ghosts in this area, I've been sleeping like a prince (or maybe better). So what makes it scary? I think it comes down to the fact that it is scary to expose your thoughts, your structure (or lack thereof) and your language skills to someone else and be ready to be criticized for them. I also realize that if I, a fifty-year-old man, still feel a bit intimidated by letting someone read and criticize my text, how hard must it be for our students when they expose their songs, lyrics and productions for evaluation?

On the other hand, what a gift to future generations if we could build a community of trust where no-one is ashamed and everyone dares to expose their inner self without the risk of being dismissed.