Tuesday, September 1, 2015

Music21 v.2.0.8 (beta) released

The newest release of the music21 v.2 (beta) development improves the stability and performance of the system.
The biggest change in this version is the movement of all experimental modules into a new "alpha" sub-package; in the future, all additions that bring something useful to music21 but are not yet well tested or fully documented will begin in this folder. They may graduate into the main music21 namespace at some later point, remain in "alpha", or be removed. The modules moved include: webapps (a system for running music21 as a WSGI service-oriented architecture), trecento (fourteenth-century musical analysis), theoryAnalysis (common-practice error detection), counterpoint/species (first-species counterpoint generator), medren (miscellaneous pre-1600 applications; this will return soon after refactoring), contour (contour analysis), chant (Gregorian chant generation), and analysis.search (finding scales inside Streams).
The "demos" directory has also been reorganized. The next release will focus on making this directory easier to use.
Bug fixes and improvements:
  • search.lyrics -- a new module for searching within lyrics while retaining position information about matches.
  • Interval objects have been improved to have additional properties.
  • .priority changes will automatically re-sort Streams. This change will make .priority more useful.
  • Stream.elementsChanged() is a new method that can trigger a cache clear for Streams.
  • Stream.remove() gains a recurse option.
  • Lilypond color works again (thanks Ringw)
  • Accidental becomes a SlottedObject -- pro: much faster. con: arbitrary attributes cannot be added to Accidentals.
  • MuseScore 2 is now discovered automatically in python3 configure.py.
  • Tremolo support (including in MusicXML)
  • Unlikely bugs in Chord fixed.
  • MIDI support for > 16 channels output.
  • Fixes for PIL/Pillow support of more versions
  • More places take advantage of exact fractions for music21 offsets and durations that previously relied on floating-point kludges.
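As a quick illustration of why the exact-fraction work matters, here is a plain-Python sketch (standard library only, not music21 code itself) of how floating-point tuplet durations drift while Fractions stay exact. Imagine dividing a quarter note into ten equal parts, each with quarterLength 0.1:

```python
from fractions import Fraction

# Ten equal divisions of a quarter note should sum to exactly 1.0
# quarter-note beat, but binary floats cannot represent 0.1 exactly.
float_lengths = [0.1] * 10
frac_lengths = [Fraction(1, 10)] * 10

print(sum(float_lengths))  # 0.9999999999999999 -- accumulated rounding error
print(sum(frac_lengths))   # 1 -- exact
```

Errors like this used to make offset comparisons unreliable for tuplets; storing durations as exact fractions avoids the problem entirely.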

Tuesday, June 16, 2015

Parallel Computing with music21

First we start the cluster system with ipcluster start, which on this six-core Mac Pro gives me 12 threads. Then I'll start the IPython Notebook with ipython notebook.

In [1]:
from __future__ import print_function
from IPython import parallel
clients = parallel.Client()
clients.block = True
clients.ids
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]

Now I'll create a view that can balance the load automatically.

In [2]:
view = clients.load_balanced_view()

Next let me get a list of all the Bach chorales' filenames inside music21:

In [3]:
from music21 import *
chorales = list(corpus.chorales.Iterator(returnType='filename'))
chorales[0:5]
['bach/bwv269', 'bach/bwv347', 'bach/bwv153.1', 'bach/bwv86.6', 'bach/bwv267']

Now I can use the view.map function to run a function, in this case corpus.parse, on each element of the chorales list.

In [4]:
view.map(corpus.parse, chorales[0:4])
[<music21.stream.Score 4467044944>,
 <music21.stream.Score 4467216976>,
 <music21.stream.Score 4465996368>,
 <music21.stream.Score 4465734224>]

Note, though, that the overhead of returning a complete music21 Score from each processor is high enough that we don't save much time, if any, by parsing on each core and returning the Score object:

In [5]:
import time
t = time.time()
x = view.map(corpus.parse, chorales[0:30])
print("Multiprocessed", time.time() - t)
t = time.time()
x = [corpus.parse(y) for y in chorales[0:30]]
print("Single processed", time.time() - t)
Multiprocessed 1.7093911171
Single processed 2.04412794113

But let's instead just return the length of each chorale, so we don't need to pass much information back to the main server. First we need to import music21 on each client:

In [6]:
clients[:].execute('from music21 import *')
<AsyncResult: finished>

Now we'll define a function that parses a chorale and returns how many pitches it contains:

In [7]:
def parseLength(fn):
    c = corpus.parse(fn)
    return len(c.flat.pitches)

Now we're going to see a big difference:

In [8]:
t = time.time()
x = view.map(parseLength, chorales[0:30])
print("Multiprocessed", time.time() - t)
t = time.time()
x = [parseLength(y) for y in chorales[0:30]]
print("Single processed", time.time() - t)
Multiprocessed 0.59440112114
Single processed 2.97019314766

In fact, we can do the entire chorale dataset in about the same amount of time as it takes to do just the first 30 on a single core:

In [9]:
t = time.time()
x = view.map(parseLength, chorales)
print(len(chorales), 'chorales in', time.time() - t, 'seconds')
347 chorales in 5.31799721718 seconds

I hope that this example gives some sense of what might be done with a cluster setup in music21. If you can't afford your own Mac Pro or you need even more power, it's possible to rent an hour of cluster computing time at Amazon Web Services for just a few bucks.