Something just occurred to me; something that I've been unknowingly missing in Synfire.
I would love a function or perhaps an entirely different editor to aid in matching up the grid to a performance played without a click. The closest I can think of would be something like Time Warp in Cubase, if anyone is familiar.
I never feel terribly comfortable recording to a click, but I'll never leave the comfort of snap grids.
Wed, 2009-03-18 - 16:19 Permalink
I've never used Cubase (to any great extent), but I'm assuming that the Cakewalk Sonar equivalent is "Set Measure/Beat at Now."
You basically invoke the command and input the measure and beat that the current event (or transport position) should fall on, and a tempo map entry is created to align the time grid with the events relative to the previous and/or next tempo entry in the map.
I'd imagine that with Synfire's vector-based approach to nearly everything, something similar to this could be implemented in its current state.
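For illustration, the math such a command has to solve is tiny. Here's a rough sketch in Python (the names are mine, not Sonar's actual API): find the tempo that makes the chosen beat land exactly on the current event.

```python
def tempo_for_segment(prev_time_s, prev_beat, now_time_s, target_beat):
    """Constant BPM that makes `target_beat` land exactly at `now_time_s`,
    given the previous tempo-map anchor (prev_time_s, prev_beat)."""
    beats = target_beat - prev_beat        # musical distance to cover
    seconds = now_time_s - prev_time_s     # wall-clock time available
    if beats <= 0 or seconds <= 0:
        raise ValueError("target must lie after the previous anchor")
    return beats / (seconds / 60.0)        # beats per minute

# Previous anchor: beat 0 at 0.0 s. The event under the transport sits at
# 2.0 s and should fall on beat 4 (measure 2, beat 1 in 4/4):
print(tempo_for_segment(0.0, 0, 2.0, 4))  # 120.0 -- the new tempo entry
```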
Wed, 2009-03-18 - 18:16 Permalink
Yep. Marking a span in the figure and asking Synfire to take that as an example for one measure is very easy to implement.
However, human performances tend to constantly change tempo. A better feature would be to analyse an entire take and have it suggest a constant tempo and compensate all tempo changes automatically. Not trivial.
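The simple case really is just arithmetic -- a quick sketch, assuming the user declares how many beats the marked span contains:

```python
def tempo_from_span(span_seconds, beats_per_measure=4):
    """If `span_seconds` should occupy exactly one measure, the implied
    tempo is beats_per_measure beats in that span."""
    return beats_per_measure / (span_seconds / 60.0)

print(tempo_from_span(2.4))  # 4 beats in 2.4 s -> 100.0 BPM
```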
Wed, 2009-03-18 - 18:37 Permalink
[quote]Yep. Marking a span in the figure and asking Synfire to take that as an example for one measure is very easy to implement. However, human performances tend to constantly change tempo. A better feature would be to analyse an entire take and have it suggest a constant tempo and compensate all tempo changes automatically. Not trivial.
I really don't need or want Synfire to try to give me a constant tempo. I just want the grid to match the performance as played, by altering the tempo map to fit it. The music should define the tempo.
I would use this not on a part-by-part basis, but to quickly lay out the base on which to build. The rest of the instruments would be recorded to that performance and the synchronized click.
This would save time and aggravation while composing and it seems like it would be rather simple to implement.
Sonar's function works quite well. If a constant tempo is desired, all one needs to do is flatten the tempo map, but like I said, I would rarely want that, and I would never want it done automatically.
Wed, 2009-03-18 - 19:06 Permalink
Maybe a temporary workaround could be to create a "click track" similar to what one creates in Sonar to use the "Fit Improvisation" command, but import that click track into Synfire as static pitches along with the tempo. Then copy that imported tempo into whatever project needs it? It is kind of a long workaround (if it works), but it was all I could think of at the moment.
Wed, 2009-03-18 - 20:43 Permalink
Here is a video of Cubase's Time Warp
I have two scores due on the 12th for Microsoft. I haven't used Synfire on a serious project and I'm very hesitant with this one.
My current plan is to use Cubase to lay out the tempo map to visual cues, placing markers on key points and then experimenting with thematic material to figure out how the time in between the markers should be filled and stretched -- as per usual. After that I would potentially take that into Synfire.
In other words I am going to be doing most of the prototyping in Cubase. I must say, that reality is a bit of a drag. It also means that the tempo functions in Synfire are all rendered useless.
The problems of aligning the grid to an instrumental performance and aligning the grid to visual cues are essentially the same.
As an extension to this idea, it would be nice if this kind of functionality were expanded beyond Cubase's implementation in Synfire. For instance, in the middle of a composition, I could open a "recording editor," play in a part at any tempo I like, adjust the rhythmic interpretation and then drop it into the composition. This would allow users to record difficult parts at a comfortable tempo without the pressure of a click track.
Think of it as a phrase library that is created on the fly -- parts created out of context of the composition, with varying keys, time signatures and tempos that get dropped into the composition for unexpected results.
Wed, 2009-03-18 - 22:03 Permalink
[quote]Think of it as a phrase library that is created on the fly
Good point. And a nice concept.
The issue with tempo maps is that they are useless for libraries. Phrases in a library need to be normalized wrt tempo (figures as such are even normalized wrt harmony, btw). Otherwise they could not be freely combined and reused.
Hence, the only viable way to go is to take the human performance, guess the tempo and its changes and move all the symbols to their intended positions to get a "normalized" phrase and provide the extracted tempo map as a separate parameter that the composer may choose to ignore or reuse.
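To make that concrete, here is a rough sketch of the normalization step, assuming the tempo guessing has already produced a list of (start-second, BPM) segments (the data layout is just for illustration, not how Synfire stores things):

```python
def seconds_to_beats(t, tempo_map):
    """Convert a performed onset time (seconds) to a beat position.
    tempo_map: [(start_s, bpm), ...] sorted by start_s, first entry at 0."""
    beats = 0.0
    for i, (start, bpm) in enumerate(tempo_map):
        end = tempo_map[i + 1][0] if i + 1 < len(tempo_map) else float("inf")
        if t <= end:
            return beats + (t - start) * bpm / 60.0
        beats += (end - start) * bpm / 60.0   # accumulate full segment
    return beats

tempo_map = [(0.0, 90.0), (4.0, 110.0)]       # guessed from the take
performance = [0.0, 0.7, 1.33, 4.5]           # onsets in seconds
normalized = [seconds_to_beats(t, tempo_map) for t in performance]
print(normalized)  # beat positions; slight off-grid offsets survive intact
```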
Tempo changes, and I mean the more subtle ones, are more on the performance side of the composition vs. performance scale. On the one hand, subtle human variations are not essential to the process of pure composition; on the other hand, they make a performance more lively and credible. When possible, I would recommend postponing these subtleties until the late stages and leaving all phrases at "normalized speed" during the composition.
If you have to fill a certain span of time with music, calculate the approximate tempo and number of measures (admittedly, this should be supported by the software), fill a container with the desired music -- all in stiff, normalized time -- and then play with the tempo map until it perfectly fits the picture.
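The calculation the software should support is small. A sketch, under the assumption that you aim for a preferred tempo and let the measure count fall out of it:

```python
def fit_measures(span_seconds, preferred_bpm=120.0, beats_per_measure=4):
    """Pick the whole number of measures closest to the preferred tempo,
    then return the exact tempo that makes them fit the span precisely."""
    approx_beats = span_seconds * preferred_bpm / 60.0
    measures = max(1, round(approx_beats / beats_per_measure))
    exact_bpm = measures * beats_per_measure / (span_seconds / 60.0)
    return measures, exact_bpm

# 13.7 s to fill, aiming near 120 BPM in 4/4:
print(fit_measures(13.7))  # (7, ~122.63) -- 7 measures at ~122.63 BPM
```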
Thanks to everyone, btw, for your helpful input. I see a new tempo mapping feature crystallizing in front of my eyes already ;-)
Wed, 2009-03-18 - 23:30 Permalink
[quote]The issue with tempo maps is that they are useless for libraries. Phrases in a library need to be normalized wrt tempo (figures as such are even normalized wrt harmony, btw). Otherwise they could not be freely combined and reused.
Sorry, that was stated poorly. I meant the composer's practical use of the library (something separate from the main composition that is brought in and adjusted to the context), not the literal implementation of Synfire's libraries. It would not be a figure that would automatically be added to a library, but constructed almost as on a painter's mixing palette in a separate window, outside of the context and timeline of the main composition.
Past that, yes, I see that the tempo would have to be separate from the stored figure if it were to be added to the library. That shouldn't prevent us from adding the tempo back in the context of the composition in which it was created.
This was more of a separate feature that would benefit from the base functionality of the time warping, but not my main point.
[quote]Hence, the only viable way to go is to take the human performance, guess the tempo and its changes and move all the symbols to their intended positions to get a "normalized" phrase and provide the extracted tempo map as a separate parameter that the composer may choose to ignore or reuse. Tempo changes, and I mean the more subtle ones, are more on the performance side of the composition vs. performance scale. On the one hand, subtle human variations are not essential to the process of pure composition; on the other hand, they make a performance more lively and credible. When possible, I would recommend postponing these subtleties until the late stages and leaving all phrases at "normalized speed" during the composition.
I'm sure it should be the composer's preference whether or not he requires subtle tempo changes in the prototype. Why should we discard subtleties in a performance only to manually, and awkwardly, add them again later with a mouse? Of course we always have the option to flatten out the tempo again.
Though we are talking about two different aspects of the same issue. My need is to be able to correctly interpret the rhythmic content of an unguided performance. I'm more than fine doing the interpretation manually, but it is distinctly different from simply quantizing the notes to where they need to be.
I do think the tempo of that performance should be available for use in the composition.
[quote]If you have to fill a certain span of time with music, calculate the approximate tempo and number of measures (admittedly, this should be supported by the software), fill a container with the desired music -- all in stiff, normalized time -- and then play with the tempo map until it perfectly fits the picture.
This is the biggie for me. The scoring work I do requires many cues to be hit in rapid succession and with frame accuracy. Sometimes there are 20, 30, 40 cues in a minute. It is not at all a question of filling a span of time with music, but weaving music tightly into the action.
To do that by hand would be jumping back a good decade. It is hardly creative work and it is very time consuming -- software can and should handle it.
Calculating the approximate tempo, number of measures, time signatures etc. is totally backwards. The whole goal is to derive those parameters creatively and flexibly from the unchangeable amount of time available between cues.
I can see where this could be done with lots of short one-second containers, but the organizational hazard would make that kind of silly.
I do see the goal of enforcing complete fluidity of structure in Synfire, but there are many practical applications where some constraints are necessary; not having them would make it an impractical tool for scoring, especially as many projects aren't afforded the time for a second pass.
Thu, 2009-03-19 - 00:58 Permalink
[quote]It would not be a figure that would automatically be added to a library, but constructed almost as on a painter's mixing palette in a separate window, outside of the context and timeline of the main composition.
Ok, I see. I would open a blank arrangement window for that and then copy the results into the main composition. One of the strengths of Synfire is that it allows for multiple open documents and unlimited copy & paste.
[quote]Why should we discard subtleties in a performance only to manually, and awkwardly, add them again later with a mouse?
That's not what I meant. Of course the subtleties, aka "unquantized" human performances, should be preserved, and Synfire can handle imprecise data very well. However, the performance needs to be at least roughly in sync with the time grid. Otherwise the figures would not work with other figures.
Tempo detection makes a figure match the time grid without forcing a quantization, keeping manual subtleties. That is, you will still find symbols placed slightly off the grid, but the overall beat matches the musical content.
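A tiny sketch of the distinction (my own illustration, not Synfire internals): after tempo detection, a note's position decomposes into a nearest grid slot plus a preserved human offset. Quantizing would discard the offset; tempo detection keeps it.

```python
GRID = 0.25  # sixteenth-note grid, in beats

def split_position(beat_pos, grid=GRID):
    slot = round(beat_pos / grid) * grid   # nearest grid line
    offset = beat_pos - slot               # the "subtle" deviation, kept
    return slot, offset

for pos in [0.98, 2.03, 3.52]:             # detected beat positions
    slot, off = split_position(pos)
    print(f"grid {slot:.2f}, offset {off:+.3f}")
# Quantization would zero out `offset`; tempo detection preserves it.
```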
[quote]The scoring work I do requires many cues to be hit in rapid succession and with frame accuracy. Sometimes there are 20, 30, 40 cues in a minute. It is not at all a question of filling a span of time with music, but weaving music tightly into the action.
So you make the music follow the dancer rather than the other way round ;-)
[quote]Calculating the approximate tempo, number of measures, time signatures etc. is totally backwards.
Agreed. My suggestion was meant as a temporary workaround. However, for what you do, it would not help.
I would be interested in learning more about your workflow. Adding a SMPTE timeline with cues to the ruler should not be an issue. However, the features around it should be wisely selected and designed to best support the most common workflows.
Andre
Thu, 2009-03-19 - 03:08 Permalink
This is the result of greatly neglected work.
The blank arrangement window will work great.
"Otherwise the figures would not work with other figures. "
At the point that this is relevant, there are no other figures. The figure wouldn't be brought into the composition without its relation to the musical grid; it would be contextualized manually. The first layer of a composition would benefit greatly from the ability to define the initial tempo and signature in performance at the piano. This is not the same as preserving the small-scale, humanistic variations within a rough grid.
As far as manual recognition goes, I understand that "manual" is a heretical concept at Cognitone. The current automatic recognition will work fine for projects written to the click. If the automatic process can't function without a relative grid, then a manual approach is preferable to not having the option of using a recorded performance effectively.
Here is the proposed workflow:
Open Synfire.
Record a basic piano part, expressively, with rubato, tempo changes and signature changes -- things I consider necessary to have in place when composing and already present in the take (though not defined).
Stop the recording; a window pops up that allows a grid to be manually stretched around the locked notes. This would let one define the tempos and set the time signatures of any performance, regardless of content.
Perform figure recognition.
At this point, the tempo can be smoothed or adjusted, but in any event you're 90% of the way there, and the default case of keeping your tempo as played is enabled.
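The math under that grid-stretching window would be simple. A sketch, assuming the editor hands back the user's barline positions in seconds (all names hypothetical):

```python
def tempo_map_from_barlines(barline_times_s, beats_per_measure=4):
    """The notes stay locked in real time while barlines are dragged over
    them; each pair of adjacent barline times implies a per-measure tempo.
    Returns one (start_seconds, bpm) entry per measure."""
    entries = []
    for start, end in zip(barline_times_s, barline_times_s[1:]):
        bpm = beats_per_measure / ((end - start) / 60.0)
        entries.append((start, bpm))
    return entries

# Barlines dragged to taste around a rubato take:
print(tempo_map_from_barlines([0.0, 2.1, 4.0, 6.3]))
# [(0.0, ~114.3), (2.1, ~126.3), (4.0, ~104.3)] -- the grid now follows
# the performance; figure recognition can run against this map.
```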
What are the alternatives?
Define the tempo map in advance, which is counterintuitive, prohibitive to improvisation and less precise than what is in the performance naturally.
Play parts unnaturally to a static click, disregarding all nuance and organic feel.
Take the take and manually move the symbols around until everything fits, which is very time consuming, error prone, unnecessary, etc.
All of these are slow, tedious and old fashioned. I hope you will consider it.
As for the film scoring work flow:
Scoring is doing one's best to keep up with a deaf dancer. Scores come in with picture lock; they are not going to change the edit for lil old me... Even the projects that I edit myself rarely get re-edited to accommodate music. I enjoy the erratic imposition of narrative structure on the time, so it's all fine by me.
Currently my work flow goes:
I import the picture-locked video into Nuendo and watch it on loop, placing markers on elements that will need to land on a beat. In Nuendo I never place the video at the start of the timeline, so I will easily be able to pull in measures as I move the grid around.
I start setting Time Warp markers at the cues. I place an initial TW marker at an important cue. Next I pull the grid to the next cue, keeping an eye on the BPM. Depending on how I pull the grid, the BPM in between the two cues changes to accommodate. With a couple of swipes I can figure out the tempo, based on the time signature and the number of measures in between the two cues (for example, if I squish 8 measures into the sixteen seconds between the cues at 4/4, the BPM will be 120; if I pull that out so only 4 measures are inside, then the BPM would be 60). I continue this process, working my way out.

There is never music throughout a video, so what's in between the larger-scale musical cues grid-wise usually doesn't matter -- this aspect of the Nuendo workflow isn't ideal, and I feel Synfire's containers are definitely preferable for placing and moving long cues.
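That swipe arithmetic is just this (a sketch; the numbers match the example above):

```python
def bpm_between_cues(cue_gap_seconds, measures, beats_per_measure=4):
    """The time between two cues is fixed, so choosing how many measures
    sit between them determines the BPM (and vice versa)."""
    return measures * beats_per_measure / (cue_gap_seconds / 60.0)

# 16 s between cues in 4/4:
print(bpm_between_cues(16.0, 8))  # 120.0 -- squish 8 measures in
print(bpm_between_cues(16.0, 4))  #  60.0 -- stretch to 4 measures
```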
Once I have my tempo map done, or a large section of it, I begin to compose.
Compositionally I tend to write counterpoint, working as long as I can before settling in on a harmonic progression.
Nuendo's method isn't perfect. It is great for getting the initial tempo map from both film cues and the rhythm of a live performance, but it is not very good at freeing itself from those constraints after the fact, and it doesn't do well with altering the tempo map once there is music written. An ideal solution would build upon the Time Warp workflow by allowing the number of measures to change in between the cues late in the composition process, with a number of options as to how to compensate (if measures are pushed to the other side of a cue they might clip at the cue, merge with the bar after the cue, shift it, etc.).