OK, the title of this post may seem like a cheap pun, since it refers to the previous post about the “Segment Grid” approach. But in a way it is quite appropriate, because that approach allows us to split our workflow into segments just as arbitrary as those the segment grid imposes on the score.
As described in the previous post, we have sliced our score into pieces: more concretely, a grid of segments, each containing one “voice” for the length of one rehearsal mark. Any segment that has been entered is automatically placed in the score, replacing a previously empty spot. I have already mentioned a few of the advantages this approach provides:
- It’s straightforward to distribute work among any number of contributors.
- The complexity of the file an individual contributor has to deal with at a time is dramatically reduced.
- The contributor doesn’t have to worry about maintaining a consistent stream of music throughout the whole part (i.e. there is no risk of breaking anything when editing the individual file).
- If timing errors are introduced they are immediately noticed (when compiling a whole part or the score) and can be easily tracked down and fixed.
But of course there is much more to it than that, particularly because we also have version control at our disposal.
Constant Peer Review
It is not possible to produce a score of such dimensions without making errors. We can also see that from the original score, in which numerous errors have survived until today. And even the most scrupulous contributor will be faced with musical issues in the score that he or she wouldn’t want to decide alone. Deferring everything to the “one grand proof-reading” after music entry is completed looks like hara-kiri, and it turns out that two decisions have been extremely valuable: implementing an annotation infrastructure and integrating permanent peer review into our workflow. The first item will get its own post; today I’ll talk about the latter.
As usual in versioned workflows, all work happens in the context of working branches. That means there is an “official” state of the project, represented by the master branch, and a contributor’s work is only included in this official state once it has been completed. So the public surface of the project is always in a consistent state, although it doesn’t reflect all the latest work that has already been done.
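To make this concrete, here is a minimal, self-contained sketch of the working-branch model, played out in a throwaway repository. Branch and file names are invented for illustration and are not the project’s actual ones.

```shell
# Sketch: segment entry happens on a working branch, master stays clean.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b master
git config user.email contributor@example.com
git config user.name Contributor

# The "official" state of the project lives on master.
echo '% score skeleton' > score.ly
git add score.ly
git commit -q -m 'Initial score skeleton'

# A contributor enters one segment on a working branch...
git checkout -q -b ks/oboeI-16-21
echo '% oboe I, rehearsal marks 16-21' > oboeI-16-21.ly
git add oboeI-16-21.ly
git commit -q -m 'Enter oboe I, marks 16-21'

# ...while master stays untouched until the work is reviewed and merged.
git checkout -q master
ls
```

The final `ls` shows only the skeleton file: the entered segment exists on the working branch but is invisible in the official state until it is merged.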
To improve quality control we have established a workflow of continuous peer review. Anyone who has finished some work is not allowed to merge it into master alone; instead, someone else has to review the material first. This guarantees that everything in the official score has been seen by at least two pairs of eyes, and it has become clear that this paradigm is much easier to manage and maintain when peer review goes on constantly, in digestible chunks of 1–10 segments each.
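The “nothing reaches master unreviewed” rule can be sketched in a throwaway repository as follows. The branch name follows the review/ naming convention used in this project; the exact merge options shown (a non-fast-forward merge to keep a merge commit) are my assumption, not necessarily the project’s literal commands.

```shell
# Sketch: a reviewer, not the author, merges finished work into master.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b master
git config user.email reviewer@example.com
git config user.name Reviewer

echo '% score skeleton' > score.ly
git add score.ly
git commit -q -m 'Initial score skeleton'

# Finished work waits on a review/ branch...
git checkout -q -b review/ks/oboeI-16-21
echo '% oboe I, marks 16-21' > oboeI-16-21.ly
git add oboeI-16-21.ly
git commit -q -m 'Enter oboe I, marks 16-21'

# ...and a *different* contributor checks it out, compiles and inspects
# the result, and only then merges it into the official state.
git checkout -q master
git merge -q --no-ff -m 'Merge reviewed oboe I, marks 16-21' review/ks/oboeI-16-21

# History now holds three commits: skeleton, segment entry, merge.
git log --oneline
```

Keeping a merge commit per reviewed chunk has the side benefit that each review remains visible as a unit in the history.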
The peer review process has also proven to be an appropriate place to discuss musical issues with the original score. We are not commissioned to do a scholarly review (so far), but it became apparent that it is practically impossible to copy a flawed score literally. People tend to spot many of the issues with the manuscript, the more apparent and the less apparent alike. The level of detail and attention varies, of course, but I have been very positively surprised by the quality and number of annotations that have been made to the score, by contributors only some of whom are musicians or musicologists! As said, I’ll go into detail about the annotations in a later post, but annotations are inserted in the textual input file (of course) and are highlighted by coloring the affected items in the score. That way it is natural to take a second look at them during peer review, so many of these annotations can be approved, discussed or rejected at a very early stage, before the respective material is even merged into the master branch.
Keeping Cool With Project Progress
Doing constant peer review and having the music entry encapsulated in working branches not only improves the quality of the musical text we produce, it also makes us much more comfortable with the progress we make. As said, entering the music in segments takes the pressure out of keeping the “document” consistent, but doing so in branches as well is a highly efficient safety net for the project management as a whole. An open branch indicates work in progress, and the list of open branches tells us exactly what is being done at the moment. Just as anyone can pick any segment of music to enter, anyone can also pick some work to review, and we don’t have to worry about missing anything. We have imposed a rule that any branch that is put up for review is renamed to start with review/, so listing all open branches with that prefix tells us what is available for review:
```
$ git branch -a | grep origin/review | sed 's/remotes\/origin//'
  /review/am/violin1-65-69
  /review/bp/vc1-51-60
  /review/bp/vc1-71-82
  /review/bp/vc2-33-40
  /review/bp/vc2-45-48
  /review/bp/vc2-51-56
  /review/bp/vc2-71-73,81,82
  /review/bp/vc3-46-48
  /review/bp/vc3-51-56
  /review/bp/vc4-46-48
  /review/dl/trombones+tuba-20
  /review/dl/trombones+tuba-21
  /review/dl/trombones+tuba-24
  /review/dl/trombones+tuba-25
  /review/dl/trombones+tuba-26
  /review/dl/trombones+tuba-27
  /review/ks/fluteI-52-56
  /review/ks/fluteI-58-60
  /review/ks/fluteII-53-56
  /review/ks/fluteII-58-60
  /review/ks/fluteIII-60
  /review/ks/oboeI-16-21
  /review/ks/oboeI-25-26
  /review/ks/oboeI-27-32
  /review/ks/oboeI-34-41
  /review/ks/oboeI-45-49
  /review/ks/oboeI-53-56
  /review/ks/oboeI-58-60
  /review/ks/oboeI-II-04
  /review/ks/oboeI-II-10
  /review/ks/oboeI-II-12
  /review/ks/oboeII-16-21
  /review/ks/oboeII-25-26
  /review/ks/oboeII-27-32
  /review/ks/oboeII-36-41
  /review/ks/oboeII-45-49
  /review/ks/oboeII-58-60
  /review/ks/vn1-1_22-31
  /review/mo/violas-76-82
  /review/pc/segment67-86_remove-empty
  /review/pc/segment76
  /review/pc/segment77
  /review/pc/segment78
  /review/pc/segment79
  /review/pc/segment80
  /review/pc/segment81
$ git branch -a | grep review | wc -l
46
```
So we know that currently there are 46 tasks ready for review. Some more Git trickery discloses that they contain more than 300 segments of music, so obviously review is currently lagging somewhat behind. This is because reviewing is felt to be a task with more responsibility, so the majority of contributors prefer entering music. I have always done more review than music entry, but recently I have been quite busy programming Python tools for our project (which hopefully will be generically usable for future projects too), so there has been some accumulation of review tasks. But the point is that this is no reason to worry, since everything is reliably stored and organized in branches. Anyone can pick any task to shorten the list, and when it’s done it will be neatly included in master. The most severe “downside” of the current situation is that the master branch proceeds more slowly than the actual work could warrant.
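For the curious, here is a hypothetical example of the kind of “Git trickery” alluded to above: summing the distinct files that the open review branches would add relative to master, assuming one input file per segment (the real project layout may differ). It is demonstrated in a throwaway repository with three fake review branches.

```shell
# Sketch: count the segment files waiting on review/ branches.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b master
git config user.email pc@example.com
git config user.name PC
echo '% skeleton' > score.ly
git add score.ly
git commit -q -m 'Skeleton'

# Fake three review branches, each adding one new segment file.
for seg in 76 77 78; do
  git checkout -q master
  git checkout -q -b "review/pc/segment$seg"
  echo '% notes' > "segment$seg.ly"
  git add "segment$seg.ly"
  git commit -q -m "Enter segment $seg"
done
git checkout -q master

# For each review branch, list the files it changes relative to the
# merge base with master (three-dot diff), then count the distinct ones.
git branch --list 'review/*' --format='%(refname:short)' |
  while read -r branch; do
    git diff --name-only "master...$branch"
  done | sort -u | wc -l
```

The same pipeline run against the real repository (over `origin/review/*`) would yield the segment count quoted above.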
Demonstrating the Stability of the Approach With a Bug
Let me finish this post with an incident we experienced. It was a bug, but ironically it is perfectly suited to demonstrate the reliability of our segmented approach.
One day a contributor noticed that the score didn’t compile correctly anymore. All of a sudden LilyPond threw tons of error messages about failed bar checks at us, but the actual PDF didn’t show more than one duplicated rehearsal mark and three extra “end” barlines on the last page. Thanks to the Git history it was easy to determine when this issue was introduced, but it was very hard to see why, because nothing in that place seemed to be related to such an issue at all. It took us about 24 hours to track down the problem, and it turned out to actually be a LilyPond bug. In newer versions of LilyPond you can enter “unpitched notes”, that is, plain rhythmic values to be used with percussion instruments. We discovered that such notes can – in combination with ties and manual breaks – cause an error in the timing, which caused our score to break without any wrong input that we could have identified. We reported the issue as a bug to the LilyPond developers, and one day later it was already fixed.
- 2014-09-24 14:56 – [initial identification of the offending issue (unpitched notes)](http://lists.ursliska.de/pipermail/das-trunkne-lied/2014-September/000381.html)
- 2014-09-24 17:56 – [Minimal Working Example exposing the LilyPond bug](http://lists.ursliska.de/pipermail/das-trunkne-lied/2014-September/000391.html)
- 2014-09-24 20:55 – [Official bug report](http://lists.gnu.org/archive/html/bug-lilypond/2014-09/msg00078.html)
- 2014-09-25 (N/A) – [Patch uploaded for review](https://codereview.appspot.com/150920043)
- 2014-10-02 (N/A) – [Patch pushed, included in LilyPond 2.19.16](https://code.google.com/p/lilypond/issues/detail?id=4130#c5)
While this can serve as a nice example of efficient bug tracking and really snappy “support” and fixing, that’s not the main point in telling the story. The miracle is that the whole episode did not affect the work on music entry and review at all. Everybody who wasn’t directly involved in finding the bug could simply continue to work as usual. I admit it has been a very long time since I worked with graphical score editors, but I’d never want to imagine the impact of such a situation if our score were being developed in a huge Finale or Sibelius document.
Next time I will tell you about our system of in-file annotations. It is very much a work in progress, and I would love to develop it much further, but it already serves us really well in its present state.