So., 26.10.2025 - 16:49 Permalink
Yes, it's easier to write one polyphonic figure than to get multiple monophonic lines to work together as expected. You see all voices in one place, and they are intuitively pitched because they share the same pitch range. The difficulty with multiple lines is that instruments have individual pitch ranges, so the octave sometimes wraps around where you don't expect it.
Try setting the same typical pitch in their playing ranges, or set them exactly octaves apart. That makes things more predictable.
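To make the octave-wrapping problem concrete, here is a small sketch (my own illustration, not Synfire's actual algorithm) of what happens when the same line is transposed octave-wise into two playing ranges that are not an exact number of octaves apart:

```python
# Hypothetical sketch: fit a MIDI pitch into an instrument's playing range
# by transposing in whole octaves. When ranges are not exact octaves apart,
# the nearest in-range octave can "wrap" a note to the other end of the line.

def fit_to_range(pitch: int, low: int, high: int) -> int:
    """Transpose pitch by octaves until it falls inside [low, high]."""
    while pitch < low:
        pitch += 12
    while pitch > high:
        pitch -= 12
    return pitch

cello_low, cello_high = 36, 57    # roughly C2..A3 (illustrative range)
violin_low, violin_high = 55, 88  # roughly G3..E6 (illustrative range)

line = [60, 62, 64, 53]  # C4 D4 E4 F3 -- the line drops at the end
cello = [fit_to_range(p, cello_low, cello_high) for p in line]
violin = [fit_to_range(p, violin_low, violin_high) for p in line]
print(cello)   # [48, 50, 52, 53] -- C4..E4 dropped an octave, F3 stayed put
print(violin)  # [60, 62, 64, 65] -- F3 was pushed up an octave instead
```

In both cases the final note, which originally fell below the others, ends up above them: the melodic contour flips. With typical pitches set exactly octaves apart, every note wraps by the same amount and the contour survives.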
So., 26.10.2025 - 17:24 Permalink
I agree. Parameter aliases are a big deal for me as well. I can see this working well with custom automation, even rhythmic parameters.
Have we ever considered an envelope parameter that would influence the direction of the entire composition, or of individual instruments?
So., 26.10.2025 - 17:31 Permalink
One workaround I found to get multiple monophonic lines to work together is to create a separate container with Interpretation on the global track, as a child container. It "sums up" the lines so they work better together. I do this all the time.
I'm thinking: what if we made voice leading a separate parameter with more control over the highest and lowest voice, among many other things?
So., 26.10.2025 - 18:10 Permalink
There is one envelope parameter already: Dynamics (multiplies Velocity). Using that very simple envelope is already difficult to handle. Simple math can have unintended consequences when the value ranges aren't well balanced.
At some point we might train specialized AI models to do such meta things. On the other hand, I find it more intuitive to just play the desired dynamics on a keyboard and only use the recorded Velocity (you can even use Record Parameter here). Sometimes processing parameters directly is more straightforward than tinkering with indirect modulation.
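A quick sketch of the "simple math, unintended consequences" point (names and ranges are my assumptions for illustration, not Synfire's internals): a multiplicative dynamics envelope scales quiet and loud notes by the same factor, and the result has to be clamped back into MIDI's velocity range.

```python
# Illustrative only: scale MIDI velocities by a dynamics factor and clamp.
# Multiplication plus clamping can silently flatten the loudest accents.

def apply_dynamics(velocity: int, dynamics: float) -> int:
    """Scale a MIDI velocity (1..127) by a dynamics factor, clamped to 1..127."""
    return max(1, min(127, round(velocity * dynamics)))

velocities = [30, 60, 90, 120]
print([apply_dynamics(v, 0.5) for v in velocities])  # [15, 30, 45, 60]
print([apply_dynamics(v, 1.5) for v in velocities])  # [45, 90, 127, 127]
```

At 1.5x the two loudest notes both hit the ceiling and their relative accents are lost; at 0.5x the spacing between quiet notes shrinks in absolute terms. Unless the value ranges are well balanced, the envelope reshapes more than just the overall level.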
Mo., 27.10.2025 - 14:34 Permalink
I actually meant "envelope" as in the direction of the entire piece. Or, as I later suggested, a better term would probably be "voice leading." So the idea is to extract the voice-leading portion of the Interpretation and make it its own parameter: it would control the direction of a specific instrument, or it could be applied globally, influencing the overall direction separately from the Interpretation.
I could see it being very helpful in gluing a piece together that has multiple monophonic voices.
Di., 28.10.2025 - 12:37 Permalink
There will be an experimental feature that synchronizes typical pitch among instruments in the same group. This should make the interaction of multiple voices more predictable.
So if you have a section of strings in group A (e.g. four instruments) and check that box, altering a typical pitch will set all typical pitches in the group to the same pitch class (e.g. Bb). Pitch calculations within the group are then more in sync with each other.
Only the Limit Strictly setting for Interpretation can force segments into a narrow range, which might break this promise (it's not recommended anyway).
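A sketch of how the described sync might behave (my assumption of the mechanics, not the actual implementation): when one typical pitch is changed, every instrument in the group could receive the pitch of the same pitch class that lies closest to the centre of its own playing range.

```python
# Hypothetical illustration of group-wide typical-pitch synchronization:
# each instrument keeps its own octave, but all share one pitch class.

def sync_typical_pitch(pitch_class: int, low: int, high: int) -> int:
    """Pick the pitch of the given class (0=C .. 11=B) nearest the range centre."""
    centre = (low + high) / 2
    candidates = [p for p in range(low, high + 1) if p % 12 == pitch_class]
    return min(candidates, key=lambda p: abs(p - centre))

# Four string instruments in group A; Bb (pitch class 10) chosen as anchor.
# Ranges are illustrative MIDI note numbers, not Synfire presets.
ranges = {"violin": (55, 88), "viola": (48, 79), "cello": (36, 69), "bass": (28, 55)}
for name, (lo, hi) in ranges.items():
    print(name, sync_typical_pitch(10, lo, hi))
# Every typical pitch now shares the pitch class Bb, in different octaves.
```

Since all typical pitches now sit an exact number of octaves apart, octave wrapping happens at the same points for every voice in the group.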
Fr., 31.10.2025 - 17:49 Permalink
DAWproject export is already very usable. I had some great fun today. It saves many hours of work when a completed Synfire arrangement is moved over to audio production. Plug-in presets and insert effects are preserved. Colors, clips, names, markers: all preserved.
When I see a non-trivial large arrangement exported to a DAW, it reminds me of the (hard) old days when I was arranging statically. It's great to do the mixing in a DAW, but for the crazy creative dynamics I will never go back.
There are still limitations:
- Only VST3 and VST plug-ins are supported for now. I have no idea why AudioUnits are not yet supported by the format. It should be straightforward.
- Some I/O routings must be selected manually in the DAW (multi-timbral rack modules only).
A pop song exported to Studio One:
Do., 06.11.2025 - 21:44 Permalink
We are wrapping up now. It is unfathomable how much work hides in the technical bureaucracy around a product (build scripts, installers, docs, preferences, testing, cross-platform quirks, versioning, updates, back-end integration, etc.). Even after most programming and testing was done, this still took quite a long time.
Looking forward to extensive testing. A nice opportunity to make music again ;-)
Do., 06.11.2025 - 22:14 Permalink
Can we contribute to testing?