Friday, January 27, 2012

Blender and Csound

Allow me to show you a video I made a few days ago:
What you probably saw was a nine-second 3D computer animation of a purple cube striking a gold platform of sorts. If you had your speakers on, you would also have noticed a ringing sound every time the cube touched the platform.
The video itself is pretty lame and boring, but the idea behind it is something I've been thinking about doing for a while.
What is happening
The video above has two separately created components: a video track and an audio track. The audio track was rendered using Csound, which read instructions on what notes to play from something called a score file. The same set of instructions was then read and interpreted by another piece of software called Blender, which produced the visual part you saw above.

The steps
While there is probably a much more efficient way of going about this, here is the process I used to produce this animation. My computer programming knowledge is pretty sparse, so I go with what I know will work.
1. Write/generate a Csound score. (In this case, the score was generated with information about instrument number, start time, duration, pitch, and amplitude.)
2. Render an audio file from the score file with Csound. Make sure the sample rate is set to 48 kHz.
3. Format the score file so that there are only i-statements. In other words, make the score file as simple as possible. (This was done using a short python script)
4. Parse the reformatted score file and generate a new file readable by Blender. In this case, it is a file with a bunch of lines saying dobounce(x, y), where x is the time of the strike and y is the release. The parser was written in C to take advantage of the fscanf function.
5. Paste these new instructions in a Python script, which will be able to read them and directly work with Blender.
6. Run the Python script + generated score instructions inside of Blender. This will automatically create the necessary keyframes for the animation.
7. Render the animation to a video file.
8. Take the video file and the audio file and sync them together using mencoder. Since the audio is at 48 kHz, the audio and video should (hopefully) sync up perfectly.
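To make steps 3 through 5 concrete, here is a rough sketch of the score-to-dobounce conversion in Python. The post's actual parser was written in C with fscanf; this is only the idea, not that code. The p-field layout (instrument number, start time, duration, pitch, amplitude) is an assumption based on step 1, and the i-statement handling is deliberately minimal.

```python
# Sketch of steps 3-5: read i-statements from a simplified Csound score
# and emit the dobounce(x, y) lines that get pasted into the Blender script.
# Assumed p-field layout: i <instr> <start> <dur> <pitch> <amp>

def parse_i_statements(score_text):
    """Return (start, end) pairs for every i-statement in the score."""
    events = []
    for line in score_text.splitlines():
        line = line.strip()
        if not line.startswith("i"):
            continue  # skip comments, blank lines, non-i-statements
        fields = line.lstrip("i").split()
        start, dur = float(fields[1]), float(fields[2])
        events.append((start, start + dur))  # y = release time
    return events

def to_dobounce_calls(events):
    """Format events as dobounce(x, y) lines for the Blender script."""
    return ["dobounce(%g, %g)" % (x, y) for x, y in events]

# A tiny made-up example score using the assumed layout:
score = """\
; instr start dur pitch amp
i1 0   0.5 8.00 0.7
i1 1.5 0.5 8.02 0.7
"""
print("\n".join(to_dobounce_calls(parse_i_statements(score))))
```

The output lines can then be pasted straight into the Blender-side Python script, which only needs to define a dobounce function that sets the keyframes.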

And that is all there is to it!
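The final mux in step 8 can be sketched as a one-liner. The filenames below are placeholders, and the mencoder flags shown (-audiofile, -oac copy, -ovc copy) just copy both streams without re-encoding; the command is built but left commented out rather than actually run.

```python
# Sketch of step 8: mux the rendered video with the Csound audio via mencoder.
# Filenames are hypothetical placeholders.
import subprocess

def mux_command(video="render.avi", audio="score.wav", out="final.avi"):
    """Build the mencoder command line; run it with subprocess to mux."""
    return ["mencoder", video,
            "-audiofile", audio,   # attach the Csound-rendered audio
            "-oac", "copy",        # copy the 48 kHz audio stream as-is
            "-ovc", "copy",        # copy the video stream as-is
            "-o", out]

cmd = mux_command()
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually run mencoder
```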

Why this is so cool


This is a really cool breakthrough for me for a number of reasons. The biggest reason is that I finally figured out a way to incorporate visuals into my music in a simple but powerful way. Unlike most visualizers, which simply take amplitude information from an audio file, mine took individual note information and parameters to render visuals. I used only start time and duration as animation parameters, but the possibilities are virtually endless! I believe visuals are especially important for 21st-century music. Many times the music is so out of touch with the average listener that it can be helpful to have a video explain what the music is doing, in order to gain a further appreciation of what is happening.

The second reason why I like this project so much is that it uses entirely free and open source software: Csound, Python, Blender, and the C toolchain are all available under open source licenses. If there is anything that I want to stress in this blog it is this: the open source platform is the future of digital audio technology. I will repeat myself: the open source platform is the future of digital audio technology.
