Dear music21 users,
I’m extremely proud to announce that Version 1.0 (not alpha, not beta, not even omicron or omega) of music21 has been released. The toolkit has undergone five years of extensive development, application, documentation, and testing, and we are confident that the fundamentals of the software will be sufficient for computational musicology and digital humanities needs both large and small.
This release marks a milestone in our software development, but it also reflects wider changes in the scope of digital humanities and computational music research. Over the past few years, digital humanities has gone from being the small province of a few geeks with an interest in the arts and humanities to being recognized as a source of fundamentally important tools for all researchers. This welcome change has brought with it an obligation for those of us who create such tools to make them more accessible and easier to use for people whose programming skills are non-existent or just developing, while at the same time not crippling advanced features for expert users. MIT’s cuthbertLab hopes that music21 has met these goals, but we know that it’s the community that will let us know how we’re doing.
Music21 is based on computational musicology research principles first developed in the work of Walter Hewlett, David Huron (author of the amazing Humdrum toolkit, from which we take our inspiration), Michael Good (creator of MusicXML), and many others. We stand on the shoulders of giants and know that a third generation of research tools will someday make music21 obsolete; we look forward to that time, but hope that the work we’ve done will push that day into the quite distant future.
I want to thank the contributors on this list and the developers we’ve had at MIT and elsewhere. Of particular note are the students who have contributed, including (and I hope I’m not leaving anyone out): Thomas Carr, Nina Young, Amy Hailes, Jackie Rogoff, Jared Sadoian, Jane Wolcott, Jose Cabal-Ugaz, Neena Parikh, Jordi Bartolomé-Guillen, Tina Tallon, Beth Hadley, Lars Johnson, Chris Reyes, Daniel Manesh, Lawson Wong, Ryaan Ahmed, Carl Lian, Varun Ramaswamy, and Evan Lynch. Thanks also to MIT’s Music and Theater Arts Section (Janet Sonenberg, Chair) and the School of Humanities, Arts, and Social Sciences (Deborah Fitzgerald, Dean), and to the NEH and the other organizations participating in the Digging into Data Challenge Grant. A special note of gratitude goes to the Seaver Institute, the earliest supporter of music21, whose generous contributions over the past three years have made everything possible.
The release of v. 1.0 also marks a turnover in the staff of music21. Christopher Ariza, a prominent composer, music technologist, and music theorist, has been Lead Programmer since summer 2009 and a Visiting Assistant Professor at MIT. In addition to his incredible vision for music21, Chris has brought new directions in music technology to MIT. We are sad to lose him, but we wish him the best in his future work in Python for industry and hope to see his occasional contributions as Lead Programmer Emeritus for the project. At the same time, we welcome Ben Houge as the new Lead Programmer for 2012-13. Ben is a composer of acoustic and electronic music, primarily for video games, and brings a wealth of knowledge of embedding musical systems within large, complex computer systems.
Version 1.0 brings the following features, added since beta 0.6.3 in April:
- Improved documentation for Humdrum users at http://web.mit.edu/music21/doc/html/moduleHumdrum.html
- Serialization/freezing/unfreezing (still beta): store complete streams with all their data as cPickle (local) or JSON (alpha; interchangeable) via the Stream.freeze() and Stream.unfreeze() commands (see the first sketch after this list).
- Variant objects: store ossia and other variant forms of a Stream. Stream.activateVariants() lets you move between variant and default readings.
- Improved FiguredBass (including pickup measures) and Braille output (thanks, Jose!)
- Support for .french, .dutch, .italian, and .spanish fixed-do solfège names (.german was already there). Scale.solfeg() gives the English relative-do solfège syllable (see the second sketch after this list)
- Better MIDI import. Fixed MusicXML import from PDFtoMusic.
- Much more powerful web applications (see the paper being produced this week for presentation at the Digital Humanities conference in Hamburg). See the webapps folder. (thanks, Lars)
- Works with MRJob for parallel processing on Amazon Web Services Elastic MapReduce (thanks, Beth)
- TheoryAnalyzer objects help find (or eliminate) musical occurrences such as passing tones, neighbor tones, parallel fifths, etc. (thanks, Beth)
- .show('vexflow') will render simple scores in Vexflow (thanks, Chris "8-ball" Reyes!)
- MusicXML output now includes audible (playback) dynamics. Improved ornament handling.
- ABC files with omitted barlines at the ends of lines now parse correctly (a common ABC variant)
- Improved handling of Roman numerals, including automatic creation from chords (see the third sketch after this list).
- Huge speedup to chordify() for large scores (N.B.: this version also works with PyPy for even greater speedups). Faster corpus searches.
- Grace notes all work!
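Here’s a quick sketch of the freeze/unfreeze workflow. Since the feature is still in beta, treat the bare no-argument calls below as an assumption; the exact arguments (file path, cPickle vs. json) may differ, so check the module documentation:

    from music21 import corpus

    # Parse a chorale, freeze it for storage, then unfreeze it to work
    # with it again. The no-argument calls are illustrative only; exact
    # arguments and defaults may change while the feature is in beta.
    bach = corpus.parse('bach/bwv66.6')
    bach.freeze()     # serialize the complete stream (cPickle or json)
    bach.unfreeze()   # restore the stream for normal use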
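And a taste of the language and solfège support (the exact strings returned are guesses in the comments; your output may differ slightly):

    from music21 import pitch, scale

    p = pitch.Pitch('E-4')
    print(p.german)    # German name for E-flat: should be 'Es'
    print(p.french)    # French fixed-do name: something like 'mi bemol'

    # Relative-do solfege: E is the second degree of D major
    sc = scale.MajorScale('D')
    print(sc.solfeg(pitch.Pitch('E')))   # expected: 're'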
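Finally, a sketch combining the faster chordify() with automatic Roman-numeral creation (assuming the new function is roman.romanNumeralFromChord(); the key is supplied by hand here, since bwv66.6 is in F-sharp minor):

    from music21 import corpus, key, roman

    bach = corpus.parse('bach/bwv66.6')
    chords = bach.chordify()   # collapse all parts into one chordal stream
    k = key.Key('f#')          # F-sharp minor, supplied by hand

    # Label each chord with its Roman numeral in the given key
    for c in chords.flat.getElementsByClass('Chord'):
        rn = roman.romanNumeralFromChord(c, k)
        print(rn.figure)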
Download at http://code.google.com/p/music21/downloads/list.
So with v. 1.0 we can say that the project is done and no more features will be added, right? Hardly! This summer we continue work on a number of things, such as automatically creating variants from two or more versions of the same Score; automatically finding repeated or similar sections in scores; hugely expanding the corpus of 14th-16th-century music; support for Medieval notation; better Vexflow and Javascript support; cached Features for quick machine learning on the corpus; and tons of improved docs.
Thanks again to everyone who has made the project possible. If you have examples of work you’ve done with music21, please share them with me or the list. Shameless plug: it’ll be a lot harder to keep working on music21 if I’m unemployed, and your testimonials will help me with the tenure process next year.
-- Myke