Friday, June 15, 2012

music21 v.1.0 released!

Dear music21 users,

I’m extremely proud to announce that Version 1.0 (not alpha, not beta, not even omicron or omega) of music21 has been released. The toolkit has undergone five years of extensive development, application, documentation, and testing, and we are confident that the fundamentals of the software will be sufficient for computational musicology and digital humanities needs both large and small.

This release marks a milestone in our software development, but it also reflects wider changes in the scope of digital humanities and computational music research. Over the past few years, digital humanities has grown from the small province of a few geeks who were also interested in the arts and humanities into a field whose tools are recognized as fundamentally important for all researchers. This welcome change brings an obligation for those of us who create such tools: to make them accessible and easy to use for people whose programming skills are non-existent or just developing, while not crippling advanced features for expert users. MIT’s cuthbertLab hopes that music21 has met these goals, but we know that it’s the community that will let us know how we’re doing.

Music21 is based on computational musicology research principles first developed in the work of Walter Hewlett, David Huron (author of the amazing Humdrum toolkit, from which we take our inspiration), Michael Good (creator of MusicXML), and many others. We stand on the shoulders of giants and know that a third generation of research tools will someday make music21 obsolete; we look forward to that time, but hope that the work we’ve done will push that day into the quite distant future.

I want to thank the contributors on this list and the developers we’ve had at MIT and elsewhere. Of particular note are the students who have contributed, including (hoping I’m not leaving anyone out): Thomas Carr, Nina Young, Amy Hailes, Jackie Rogoff, Jared Sadoian, Jane Wolcott, Jose Cabal-Ugaz, Neena Parikh, Jordi Bartolomé-Guillen, Tina Tallon, Beth Hadley, Lars Johnson, Chris Reyes, Daniel Manesh, Lawson Wong, Ryaan Ahmed, Carl Lian, Varun Ramaswamy, and Evan Lynch. Thanks also to MIT’s Music and Theater Arts Section (Janet Sonenberg, Chair) and the School of Humanities, Arts, and Social Sciences (Deborah Fitzgerald, Dean), and to the NEH and the other organizations participating in the Digging into Data Challenge Grant. A special note of gratitude goes to the Seaver Institute, the earliest supporter of music21, whose generous contributions over the past three years have made everything possible.

The release of v. 1.0 also marks a turnover in the staff of music21. Christopher Ariza, a prominent composer, music technologist, and music theorist, has been Lead Programmer and Visiting Assistant Professor at MIT since summer 2009. In addition to his incredible vision for music21, Chris has brought new directions in music technology to MIT. We are sad to lose him, but we wish him the best in his future work in Python for industry and hope to see his occasional contributions as Lead Programmer Emeritus for the project. At the same time, we welcome Ben Houge as the new Lead Programmer for 2012-13. Ben is a composer of acoustic and electronic music, primarily for video games, and brings a wealth of knowledge about embedding musical systems within large, complex computer systems.

Version 1.0 brings the following features added since beta 0.6.3 in April:
  • Improved documentation for Humdrum users at http://web.mit.edu/music21/doc/html/moduleHumdrum.html
  • Serialization/freezing/unfreezing (still beta): store complete Streams with all their data as cPickle (local) or JSON (alpha; interchangeable) via the Stream.freeze() and Stream.unfreeze() commands.
  • Variant objects: store ossia and other variant forms of a Stream. Stream.activateVariants lets you move between variant and default readings.
  • Improvements to FiguredBass (including pickup measures) and Braille output (thanks Jose!)
  • Support for .french, .dutch, .italian, and .spanish fixed-do solfège names (.german was already there). Scale.solfeg() gives the English relative-do solfège syllable (see the sketch after this list)
  • Better MIDI import. Fixed import of MusicXML files produced by PDFtoMusic.
  • Much more powerful web applications (see the paper being presented this week at the Digital Humanities conference in Hamburg, and the webapps folder). (thanks Lars)
  • Works with MRJob for parallel processing on Amazon Web Services Elastic MapReduce (thanks Beth)
  • TheoryAnalyzer objects help find (or eliminate) musical occurrences such as passing tones, neighbor tones, parallel fifths, etc. (thanks Beth)
  • .show('vexflow') will render simple scores in VexFlow (thanks Chris "8-ball" Reyes!)
  • MusicXML now outputs audible dynamics. Improved ornament handling.
  • ABC files with omitted barlines at the ends of lines are now handled (a common ABC variant)
  • Improved handling of Roman numerals, including automatic creation from chords (see the sketch after this list).
  • Huge speedup to chordify() for large scores (N.B.: this version also works with PyPy for even more speed). Faster corpus searches.
  • Grace notes all work!
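
To make two of these concrete, here’s a minimal sketch of the new solfège support and the automatic Roman-numeral creation. Treat it as illustrative rather than definitive; the printed values are what I’d expect, but check the module documentation for exact signatures:

    from music21 import chord, key, pitch, roman, scale

    # fixed-do note names in several languages
    p = pitch.Pitch('E-4')
    print(p.german, p.french, p.italian, p.spanish, p.dutch)

    # relative-do solfege within a scale: E-flat is the tonic of E-flat major
    print(scale.MajorScale('E-').solfeg(p))   # expected: 'do'

    # a Roman numeral created automatically from a chord and a key
    rn = roman.romanNumeralFromChord(chord.Chord(['C4', 'E4', 'G4']), key.Key('C'))
    print(rn.figure)   # expected: 'I'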


So with v. 1.0 we can say that the project is done and no more features will be added, right? Hardly! This summer we continue work on a number of things: automatically creating variants from two or more versions of the same Score; automatically finding repeating or similar sections in scores; hugely expanding the corpus of 14th- to 16th-century music; support for medieval notation; better VexFlow and JavaScript support; cached Features for quick machine learning on the corpus; and tons of improved docs.

Thanks again to everyone who has made the project possible. If you have examples of work you’ve done with music21, please share them with me or the list. Shameless plug: it’ll be a lot harder to keep working on music21 if I’m unemployed, and your testimonials will help me with the tenure process next year.

     -- Myke

Thursday, June 7, 2012

Process Music with music21

Last July, I was watching this gorgeous video showing the waves created by a set of pendulums of different (but simply related) lengths after they're simultaneously released:


(Video embedded here, courtesy of the Harvard Science Center demonstrations.)

I thought about ways that such a demonstration could be recreated musically using simple processes.  After writing the music21 code to do so, I discovered that several other composers have done similar things, so I don't claim it's absolutely original, but I wanted to share the possibilities.  First, the opening of the score and the recording:



(Audio player embedded here.)
Here's the music21 code used to make it (an unhighlighted copy lives in the music21-tools GitHub repository under composition.phasing):
    # music21 is imported at module level so that the scale.OctatonicScale
    # default argument below can be evaluated
    from music21 import scale, pitch, stream, note, chord, clef, tempo, duration, metadata

    def pendulumMusic(show=True,
                      loopLength=160.0,
                      totalLoops=1,
                      maxNotesPerLoop=40,
                      totalParts=16,
                      scaleStepSize=3,
                      scaleType=scale.OctatonicScale,
                      startingPitch='C1',
                      ):
        # run just past the stated number of loops so the final realignment sounds
        totalLoops = totalLoops * 1.01
        jMax = loopLength * totalLoops

        p = pitch.Pitch(startingPitch)
        if isinstance(scaleType, scale.Scale):
            octo = scaleType        # an already-built scale instance
        else:
            octo = scaleType(p)     # a scale class: build it on the starting pitch

        s = stream.Score()
        s.metadata = metadata.Metadata()
        s.metadata.title = 'Pendulum Waves'
        s.metadata.composer = 'inspired by http://www.youtube.com/watch?v=yVkdfJ9PkRQ'

        # four staves from very high to very low; notes are routed by register below
        parts = [stream.Part(), stream.Part(), stream.Part(), stream.Part()]
        parts[0].insert(0, clef.Treble8vaClef())
        parts[1].insert(0, clef.TrebleClef())
        parts[2].insert(0, clef.BassClef())
        parts[3].insert(0, clef.Bass8vbClef())

        for i in range(totalParts):
            j = 1.0
            while j < (jMax + 1.0):
                # choose a staff by pitch-space number (middle C = 60)
                ps = p.ps
                if ps > 84:
                    active = 0
                elif ps >= 60:
                    active = 1
                elif ps >= 36:
                    active = 2
                else:
                    active = 3

                # quantize the attack to a 32nd-note grid...
                jQuant = round(j * 8) / 8.0
                # ...and merge simultaneous attacks into a single chord
                establishedChords = parts[active].getElementsByOffset(jQuant)
                if len(establishedChords) == 0:
                    c = chord.Chord([p])
                    c.duration.type = '32nd'
                    parts[active].insert(jQuant, c)
                else:
                    c = establishedChords[0]
                    pitches = list(c.pitches)   # list() in case .pitches is a tuple
                    pitches.append(p)
                    c.pitches = pitches

                # voice i plays (maxNotesPerLoop - totalParts + i) evenly spaced notes
                # per loop, so each successive "pendulum" swings slightly faster
                j += loopLength / (maxNotesPerLoop - totalParts + i)
                # j += (8 + (8 - i)) / 8.0
            p = octo.next(p, stepSize=scaleStepSize)

        parts[0].insert(0, tempo.MetronomeMark(number=120, referent=duration.Duration(2.0)))
        for i in range(4):
            # pad each staff to a common length, fill gaps with rests, clean up notation
            parts[i].insert(int((jMax + 4.0) / 4) * 4, note.Rest(quarterLength=4.0))
            parts[i].makeRests(fillGaps=True, inPlace=True)
            parts[i] = parts[i].makeNotation()
            s.insert(0, parts[i])

        if show:
            # s.show('text')
            s.show('midi')
            s.show()
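
One detail worth calling out: the merging step above uses Stream.getElementsByOffset() to check whether a chord has already been placed at the same quantized offset, and folds the new pitch into it if so. Here's that pattern in isolation (the offsets and pitches are made up for illustration):

    from music21 import stream, chord, pitch

    part = stream.Part()
    part.insert(1.5, chord.Chord(['C4']))

    # anything already starting at offset 1.5?
    existing = part.getElementsByOffset(1.5)
    if len(existing) == 0:
        # no: insert a fresh chord at that offset
        part.insert(1.5, chord.Chord(['E4']))
    else:
        # yes: merge the new pitch into the existing chord
        c = existing[0]
        c.pitches = list(c.pitches) + [pitch.Pitch('E4')]
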
One nice thing that you can do is call pendulumMusic with different arguments, such as:
        pendulumMusic(show=True,
                      loopLength=210.0,
                      totalLoops=1,
                      maxNotesPerLoop=70,
                      totalParts=64,
                      scaleStepSize=1,
                      scaleType=scale.ChromaticScale,
                      startingPitch='C1',
                      )
(Audio player embedded here.)

which gives a denser score with parts that sound like Nancarrow. Or this version:
        pendulumMusic(show=True,
                      loopLength=210.0,
                      totalLoops=1,
                      maxNotesPerLoop=70,
                      totalParts=12,
                      scaleStepSize=5,
                      scaleType=scale.ScalaScale('C3', '13-19.scl'),
                      startingPitch='C2',
                      )
(Audio player embedded here.) This version produces a 19-tone rendering of the same piece.

Happy process music composing!

(Updated September 2021 to replace the 2012 Flash links with HTML5 and to point to the correct GitHub repository.)