
why?

Sun, 2013-11-10 - 19:44 Permalink

If you are talking about Maschine ... you need to read up on v2.0. It's now just about 'unlimited.' Multitimbral instruments all have individual outputs in the new mixer, and you can have as many tracks or scenes as you like.

 

The point is that it is hosting and transmitting both audio and MIDI within the DAW. How can that be 'caged'?

 

And it can also be run freestanding.

 

I can run a project in Cubase, Ableton, Reaper, Acid Pro and Pro Tools. Every one of them has both greater routing flexibility and much easier setup than Synfire. If I want to use a former project as a template, I am never told that a MIDI port/channel or instrument is "unavailable."

 

Your recent 'newbie' experience post points all this up clearly.

 

Where ... as JBones asked ... would Synfire be today in prototyping if all the energy and headaches Andre has invested in audio had been focused strictly on Synfire's MIDI generation? Undoubtedly further down the road toward his compositional vision, if not his sonic vision.

 

To be a supporter of a developer does not mean you agree with all their choices and decisions about development direction. I am a supporter! But I do believe audio has unnecessarily overcomplicated Synfire and interfered with what it uniquely offers.

 

I constantly hear frustration from individuals who have been diligently working at learning Synfire for three years or more ... and it is almost entirely about the same issues of audio and device configuration.

 

I have nothing against those who want to use those things ... and if promised, then certainly Andre has a responsibility to deliver. But I resent every moment spent configuring things when all I want is to create music when inspiration strikes ... and this is not directed solely at Cognitone; there are elements of it in all the software I use.

 

Since I know there exists a platform ... a MIDI VSTi ... that could easily deliver at least 16 'instruments' of prototyped AI, and that is stable, easily configurable and seamlessly integrated with a DAW, I want it. Is that too much to ask?

 

 

Sun, 2013-11-10 - 20:03 Permalink

Remember, not all DAWs support VSTs, and some don't allow MIDI out of plugins. Also, you would still need to configure the instrument types, playing ranges etc. within Synfire.

 

I should have said 'virtual instruments.' Yes, there are three main formats: VST, AU and now AAX. Many of these instruments also run standalone and can simply pipe MIDI over virtual cables directly to the host. This option would also permit a 'stripped down' version without all the audio engine functions.
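
To make the 'virtual cable' idea concrete, here is a rough sketch in Python using the mido library (with the python-rtmidi backend); the port name is my own invention. A standalone MIDI generator simply opens a virtual output port that the DAW selects as a track input, no audio engine involved:

```python
import mido  # pip install mido python-rtmidi

# Open a virtual output port the DAW can pick up as an input.
# (Virtual ports need the rtmidi backend; Windows has no native
# virtual MIDI ports, so a loopback driver like loopMIDI is used there.)
out = mido.open_output('Prototyper Out', virtual=True)

out.send(mido.Message('program_change', channel=0, program=0))   # GM Acoustic Grand
out.send(mido.Message('note_on',  note=60, velocity=100, channel=0))
out.send(mido.Message('note_off', note=60, velocity=0,   channel=0))
```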

 

Yes, you would always need to 'define' the nature of the MIDI output, but you could conceivably just create presets for your own use, without having to redefine everything every time. A preset would just carry the GM patch number to define the instrument, set the range, and then select the parameter that defines the nature of the instrument, e.g., rhythm, melody, solo, etc. ... just as we already do.
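
As a rough sketch of what such a preset might carry (the field names here are my own assumptions, not anything Cognitone defines):

```python
from dataclasses import dataclass

@dataclass
class VoicePreset:
    name: str        # e.g. "Solo Violin"
    gm_program: int  # GM patch number 0-127 (40 = Violin)
    low_note: int    # bottom of the playing range (MIDI note number)
    high_note: int   # top of the playing range
    role: str        # nature of the instrument: "rhythm", "melody", "solo", ...

# Defined once, then reused in every project:
violin = VoicePreset("Solo Violin", gm_program=40, low_note=55, high_note=100, role="melody")
```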

 

However, this way we create a voice once and for all, for use in any project. With the preset you'd select your MIDI port and channel, define the instrument by selecting your preset, and the merry MIDI would be generated and piped to your selected hardware or softsynth sound. Articulations would be a breeze, as you could use another MIDI track routed to the same sound source to handle the keyswitching. CC modulation and all those other wonderful MIDI capabilities are right there to be 'tweaked' and 'massaged' in the DAW.
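
To illustrate the keyswitching point: that extra MIDI track only has to fire short notes outside the playable range at the same sound source. A hedged sketch; the keyswitch mapping and port name are invented, since every sample library defines its own:

```python
import mido

out = mido.open_output('To Sampler')  # hypothetical port name

# Articulation keyswitches: short notes below the instrument's range.
# (This mapping is made up; check your library's manual.)
KEYSWITCH = {'legato': 24, 'staccato': 25, 'pizzicato': 26}

def switch_articulation(name, channel=0):
    ks = KEYSWITCH[name]
    out.send(mido.Message('note_on',  note=ks, velocity=1, channel=channel))
    out.send(mido.Message('note_off', note=ks, velocity=0, channel=channel))

switch_articulation('pizzicato')  # subsequent notes on this channel play pizzicato
```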

 

Well, I'm probably preaching to a very small choir here ... but those choir members get my sermon!

Sun, 2013-11-10 - 20:07 Permalink

Hi prado,
Who knows if it is too much to ask?

 

Jan, my friend ... I hope you know this is a 'rhetorical' question.

Sun, 2013-11-10 - 21:04 Permalink

Hi prado,
I am curious to see if your ideas will be implemented by Cognitone. So keep on going with this!

Sun, 2013-11-10 - 23:00 Permalink

Nobody should be worried about time being wasted. The setup got a lot easier already (see here: https://users.cognitone.com/content/synfire-174). Development will focus on the genuine prototyping things now.

Concerning the VST discussion, I truly think Harmony Navigator LE would make a great VST. Whether that would make sense economically is another question.

Of course I can imagine a stripped-down version of Synfire as a VST. I just doubt the advantages would compensate for the disadvantages. And still, many things would be no different from what they are now. Things I envision for the near future of SFP would probably no longer work.

There's a use case where Synfire takes the role of an embedded sidekick, generating partial content for a couple of tracks, while the main focus stays on editing static content in the DAW. Although that does not fit well with the prototyping idea, where everything is fluid until the song is composed, I acknowledge that quite a few users take this "middle out" approach (adding content to a static frame built outside Synfire). That's what the drones are made for. Their setup is not much different from what would be necessary in a VST-based Synfire anyway.

Since composition always affects /all/ tracks' contents, and even the timeline and structure, I can imagine little besides vocal or drum recordings and FX processing that would suggest a DAW be present all the time. In most cases, the DAW would be merely a shell around one Synfire instance, most of its native tracks empty (or populated with drones, as now).

As always, your input is much appreciated. I still owe you more videos showing the genuine and unique things that Synfire can do (and only a standalone Synfire can do). Expect some of this to happen soon.

Mon, 2013-11-11 - 06:29 Permalink

There's a use case where Synfire takes the role of an embedded sidekick, generating partial content for a couple of tracks, while the main focus stays on editing static content in the DAW. Although that does not fit well with the prototyping idea, where everything is fluid until the song is composed, I acknowledge that quite a few users take this "middle out" approach (adding content to a static frame built outside Synfire).

 

Andre, I cannot agree with your conceptualization.

 

Consider two scenarios: 1) a version of SFP/E without the audio engine; and 2) a version of SFP/E as a MIDI virtual instrument. The first would seemingly require very little work and, as presently developed, would accomplish everything that SFP/E can. The second would potentially have the limitation of only 16 instruments/tracks ... although this seems arbitrary, as Kontakt currently provides 64 separate patches in one instance. I actually believe the second scenario is more suited to SFE than Cognitone LE.

 

Consider this please.

 

a. Please name one single thing that SFP/E does that could not equally be accomplished by simply piping MIDI within a DAW? Can they both play Vienna Symphony Library instruments? Yes. Do they both require definition according to GM standards and SFP/E parameter settings to inform SFP/E's AI of the nature of the instrument in the arrangement, so it can interact with the other defined instruments? Yes. The only difference I possibly see is the need to create device descriptions, which PREVENT me from using the same patch twice in an arrangement because it is bound to a device, and I can ... as best I understand ... only use the same device ONCE in any arrangement. Further, if I'm not careful, I may have difficulty or complications using that same device in a later arrangement. In my DAW I can open, CPU and RAM permitting, multiple instances of the same instrument patch to route as I please, no?

b. What is necessarily static about this? Why do you conceptualize this as "partial content for a couple tracks"? Who is to say that one wouldn't want all the tracks generated from SFP/E contained in the arrangement hosted in the DAW? Piped MIDI in a DAW does not need to be recorded until one is totally satisfied with the production. Any professional DAW can host as many tracks of MIDI as SFP/E, or more. Why would it be any less "fluid" to generate the MIDI and send it through to the hardware and/or software synths hosted by a DAW than to send that same MIDI within SFP/E to the exact same hardware and/or software? This makes no sense.

c. I have created with SFP/E the orchestral masterpiece of the 21st century. Beethoven is stirring in his grave. Is Deutsche Grammophon now going to take my masterpiece as the stereo output of SFP/E and master it for their recordings? No! Were they to agree to use virtual instruments, they would want every track available as an individual audio recording, so that their engineers can massage and adjust the EQ, dynamics and other parameters of each audio track, mix those tracks as they see fit, and then master them to create an audio production worthy of the creation. The point is, anything worth professional release is going to end up as audio tracks in a DAW prior to publication regardless. So what possible advantage is there to making this more difficult to begin with?

 

Perhaps if I had stumbled across SFP/E and knew nothing of DAWs or audio production, I would happily use it exclusively. But in the real world that is not the case.

Mon, 2013-11-11 - 09:53 Permalink

I'm sure Andre can argue his own point, but let me add a few of my own.

I'm assuming your points are argued as if Synfire were running as a plugin ... if not, please ignore me.

 

a) Generate MIDI in Logic ... I don't think this is allowed by that DAW?

Extract instrument and playing ranges from the VSTi ... Synfire still needs this info to work its magic.

b) You can do all of what you suggest now, but I assume Andre was referring to the case where you have some tracks in the DAW that will not change, such as an audio recording.

c) You can still use your DAW to record the individual audio tracks with the current Synfire.

If all you want is MIDI out of Synfire, place MIDI drones on each DAW track. Use the current version of Synfire linked to those MIDI drones. Set up a global rack that uses some of those drones and your preferred VSTi/AU for the preview, and make that DAW tune the default template in your DAW. If your DAW can't do templates, complain to the DAW manufacturer rather than have Cognitone rewrite their app.

Now when you come to make a new tune, load the DAW template, load extra DAW MIDI drones and route as required (or include enough of them in the DAW template), then load up Synfire. I can't see how the workflow composing with the current Synfire would be any different from a VST version. You don't have to use MIDI drones; you can use virtual MIDI cables or a VSTi hosted in drones.

However, I only use MIDI drones for the external hardware and the Ableton Live instruments that can't be loaded into a drone, but my workflow seems pretty simple, especially after the latest update. Normally I use the drones to host the VSTi. Also, I don't use GM instruments.

You aren't forced to use the audio engine, but I find it great for a quick hack to test ideas out, and if it goes anywhere, a relocate-to-DAW works wonders to carry on in the DAW.

Let me turn your question round the other way: what would a VST version of Synfire give me that the current version doesn't, and how would that be worth all the development time taken to produce it?

 

Mon, 2013-11-11 - 14:36 Permalink

The point is, anything worth professional release is going to end up as audio tracks in a DAW prior to publication regardless.

Absolutely. However, SFP was designed for creating the intellectual property that is supposed to be produced later in a DAW. Using it for final production is not recommended anyway.

SFP's highly dynamic content is volatile and complex. With each new release, Synfire may possibly render notes slightly differently. Not a big deal, but once you have gotten used to your song, you will notice it. At some point, you will want to see the final notes without all the dynamic magic and extra payload. Sooner or later, the composition needs to be imported into a DAW or notation software as static MIDI to make it truly persistent and ensure reliable long-term conservation of your work.

Who is to say that one wouldn't want all the tracks generated from SFP/E contained in the arrangement hosted in the DAW?

That was my point. If all tracks are in SFP, the DAW tracks are empty. Then what would the DAW be contributing to the composition? For dealing with pre-recorded audio or MIDI tracks, which I was referring to as "static", the drones are fine.

Please name one single thing that SFP/E does that could not equally be accomplished by simply piping MIDI within a DAW?

Many things. For instance, tracks would always need to be added and configured twice: once in the DAW and then in SFP. And the user must keep track of their communication manually. Same for removing an instrument. Only one fixed-size window. How would one deal with palettes and libraries? Multiple monitors? Each project is isolated in a single DAW file, tied to a hard-wired configuration. Copy containers and drop phrases from other projects? No more. Hosting global sounds in a DAW is a pain, as one cannot run multiple DAWs/projects at the same time (at least not on Windows). These are only a few limitations that come to mind now.

We get a lot of positive feedback from users who already use SFP as their main go-to composition environment, putting their DAWs aside. The Engine has proven to be a good move. You still have the option to go without it. And nobody needs to use it for final production.

Nevertheless, I understand there is demand for a VST, so we'll keep it on the wishlist. It's time to focus on the genuine prototyping features now!

Mon, 2013-11-11 - 18:04 Permalink

That was my point. If all tracks are in SFP, the DAW tracks are empty. Then what would the DAW be contributing to the composition? For dealing with pre-recorded audio or MIDI tracks, which I was referring to as "static", the drones are fine.

What the DAW would be contributing is all of the DAW's routing advantages in the studio, plus the fact that once the SFP/E project was finished to the creator's satisfaction, the hardware and virtual instruments used could immediately be tracked to audio in the DAW for mixing and the creation of a mastering-ready product. The alternative is to start all over at some future time: export MIDI from the finished SFP/E arrangement, set up a DAW project, assign all the instruments to the MIDI tracks, and so on. With my method the project is integrated from the beginning. The other way is twice the work.

 

When you wrote "embedded sidekick to the DAW," it suggested to me that somehow you see this usage as demeaning to SFP/E. Is Vienna Symphony Library somehow less brilliant because it is dependent upon a DAW? I don't think so. I would prefer to consider SFP/E a 'partner' to my DAW.

 

 

 

Nevertheless, I understand there is demand for a VST, so we'll keep it on the wishlist. It's time to focus on the genuine prototyping features now!

VST or 'virtual instrument' aside, why do you not also address my suggestion to provide a standalone version of SFP/E absent audio and devices? I cannot believe that it would require any significant development time to simply assign the GM voice, playing range and instrument role parameter directly to a specific port/channel, absent the audio device. The code is obviously already in place, or the audio devices could not work. A virtual instrument is convenient, but the essence of what I hope for is pure MIDI generation. While both would be nice, either would be wonderful.

 

For instance, tracks would always need to be added and configured twice: once in the DAW and then in SFP.

Not true. A DAW template needs to be configured only once. I've done it. I set up 16 MIDI tracks on the 16 channels fed from a GM-defined device on the port SFE uses. I then set up an additional 16 muted MIDI tracks from the same port, but with the patch changes filtered out in the template. I then let Synfire play. If I have an idea for a better voice, I mute the track on the GM channel and unmute the GM-filtered track on the same channel while auditioning different versions of the instrument voice. Done once, forever.
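
For what it's worth, the "patch changes filtered out" step is trivial in code terms. A minimal sketch with Python's mido library (both port names are hypothetical) that passes everything through except program changes:

```python
import mido

inp = mido.open_input('From Synfire')   # hypothetical port names
out = mido.open_output('To Sampler')

for msg in inp:                          # blocks, yielding messages as they arrive
    if msg.type == 'program_change':
        continue                         # drop GM patch changes; keep the auditioned voice
    out.send(msg)
```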

 

 

And the user must keep track of their communication manually. Same for removing an instrument.

I don't understand this. If a DAW template always uses the same port and channels and is GM patch responsive, I simply change the instrument/definition in SFP/E and it continues out on its same channel to the DAW. I haven't touched the DAW ... only SFP/E, which already has to be changed 'manually' anyway.

 

Only one fixed-size window. How would one deal with palettes and libraries? Multiple monitors? Each project is isolated in a single DAW file, tied to a hard-wired configuration. Copy containers and drop phrases from other projects? No more.

All this is based upon the assumption that I must be working in the DAW. That is entirely unnecessary during the prototyping stage, other than possibly changing a patch from time to time. I continue to work in Synfire and accomplish all the things you describe.

 

Synfire can keep the focus unless and until I want to select a different voice through the DAW routing. Move a container, etc. ... the instrument is still generating MIDI on the same port/channel configuration. My sounds routed through my DAW continue uncomplaining. When all my SFP/E arrangements are based upon the same DAW template, they would all continue to play perfectly well when opened/selected in SFP/E, as the routing is unchanged.

 

 

Hosting global sounds in a DAW is a pain, as one cannot run multiple DAWs/projects at the same time (at least not on Windows). These are only a few limitations that come to mind now.

 

I disagree that hosting global sounds is a pain. As I have already said, it requires a one-time setup of a template in the DAW for use with SFP/E. As for running multiple projects at the same time: while true, it is irrelevant to this discussion. Why? Because if I can run multiple projects in SFP/E and my DAW responds as a GM sound module, the DAW never needs to run multiple projects ... just the one, responding to the instructions from whatever project is currently playing in SFP/E.

 

Readers can draw their own conclusions, but with respect, Andre, I do not think you have yet illustrated one thing that can be accomplished with your audio devices that I cannot equally accomplish with MIDI routed to my DAW.

Mon, 2013-11-11 - 18:34 Permalink

I'm assuming your points are argued as if Synfire were running as a plugin ... if not, please ignore me.

Blacksun, I could never ignore you! You have far too much to offer to these discussions. While I'm still waiting to hear whether it's dark in the daytime in the land of black sun, I will, in the meantime, clarify.

 

No, all my points are argued from the viewpoint of Synfire running without audio devices. I have detailed this further in my response to Andre. A VST or 'virtual instrument' is one useful approach, but equally a pure-MIDI standalone version of Synfire could do the job. I do want to comment on some of your statements, however.

 

Generate MIDI in Logic ... I don't think this is allowed by that DAW?

I'm not a Mac or Logic user, but I don't think this is correct. I understand RapidComposer has released an AU version of that program's MIDI-generating virtual instrument.

 

Extract instrument and playing ranges from the VSTi ... Synfire still needs this info to work its magic.

I'm a little confused by the word "extract." I think of it as "assigning" the information you note to the selected VSTi and patch so the AI can use it. Yes, this would still be necessary. As I suggested in an earlier post, this could easily be accomplished by a preset system, where presets with that information could be selected and "assigned" to the specific MIDI port/channel SFP/E uses in the arrangement, to ensure the voice in the sound module is compatible.

 

Let me turn your question round the other way: what would a VST version of Synfire give me that the current version doesn't, and how would that be worth all the development time taken to produce it?

Ah, clever man and excellent rhetoric! It would give me my preferred workflow, one that is increasingly 'the norm' for MIDI-generating/prototyping programs used in a DAW. It would ultimately lower the entry learning curve for SFP/E. And it would obviate the seemingly endless workarounds that one user or another needs for setting up audio devices and drones in their DAW of choice.

 

I subscribe to every single post on these forums, and I see the same experienced users flummoxed over and over again by the same issues. Andre has said things are simplified as much as possible. That may be true, but it does not mean they are simplified enough for the many potential users intimidated at even jumping into Synfire.

 

I do not like having to learn things over and over again. I want a consistent interaction with my software. It may be that a new automobile with the driver position in the middle, passengers on either side, the brakes on a lever by the steering wheel and the accelerator placed for the left foot would have certain advantages over current designs. But I don't want to buy it. I am used to what works now. I think the vast majority of people feel the same way. I was willing to learn to drive such a car because it had such outstanding features. After several years I keep asking: does the car (Synfire) really need to be driven that way to have those features?

 

My conclusion is that it doesn't. I love the features and will keep driving that car ... but I dearly wish it would be made simpler to drive and conform to what others do.

Mon, 2013-11-11 - 21:03 Permalink

I must confess that I assumed you were talking about a VST Synfire.

I cannot believe that it would require any significant development time to simply assign the GM voice, playing range and instrument role parameter directly to a specific port/channel, absent the audio device.

There wouldn't be any effort at all. That's what we already have. Just disable or ignore the engine and set up your sounds as shown in the 3-steps video. That's basically exactly the procedure you describe.

Mon, 2013-11-11 - 21:09 Permalink

It's time to focus on the genuine prototyping features now!


I didn't miss this ... and greatly appreciate it. While Andre is under no obligation to do things as I might wish, I cannot accept 'straw man' arguments in response to my points. Best to you, Jürgen!

Mon, 2013-11-11 - 21:18 Permalink

There wouldn't be any effort at all. That's what we already have.

Yes, good old Synfire version 1.2. I still have it, haha... :D

 

 

Mon, 2013-11-11 - 21:20 Permalink

I must confess that I assumed you were talking about a VST Synfire.

I cannot believe that it would require any significant development time to simply assign the GM voice, playing range and instrument role parameter directly to a specific port/channel, absent the audio device.

There wouldn't be any effort at all. That's what we already have. Just disable or ignore the engine and set up your sounds as shown in the 3-steps video. That's basically exactly the procedure you describe.

 

 

I am sorry I did not communicate it well, but I thought my post began with the point that what I was saying applied equally to a standalone Synfire without audio devices or to a VST/virtual instrument edition. At any rate, that is what I intended.

 

But to put closure on this recent discussion, I still contend that all of your objections to a VST, or more precisely to a 'sidekick' in a DAW ... other than the distraction of the development work, which I assume is considerable ... do not in the end prevent the use of Synfire prototyping as you intended, even if hosted by a DAW.

 

Finally, apropos of recent discussions of issues arising after installation of the audio engine with the 32- or 64-bit devices, why can't the installer ask the user whether or not they want to install the audio engine, and if so, which one?

 

Thanks for listening.

 

 

Tue, 2013-11-12 - 19:49 Permalink

The new video "3-Steps Sound Assignment" is really very straightforward and clear. You should make sure that every new user watches this video before starting to play around with the software.

 

This brings me to an idea: you could integrate a startup window in Synfire that offers direct access to all the important help resources. Steinberg introduced this with Cubase 7: at startup you get a so-called "Steinberg Hub" that offers access to news and tutorials (at least in theory; it does not work very well for me). Even the notation software Finale, really not known for an outstanding user experience, has such a startup window with links to tutorials and videos (see pic). How about this?

 

Tue, 2013-11-12 - 22:13 Permalink

Good idea.

The Help menu already points to the tutorial videos, though.

Wed, 2013-11-13 - 06:38 Permalink

Logic with its AUs doesn't allow processed MIDI out ... and I don't think Apple gives a #$%# about it. A serious drawback, as far as I'm concerned. But seeing as I've used the program for about 20 years now, I'm not going to change.

 

Real Guitar is a great guitar plugin, with picking and strumming. They have their own decent library of string sets to choose from. But the VST puts out the MIDI data for all six strings, and you can patch that into another guitar or any type of instrument. Unfortunately, according to MusicLab, there is really no easy way to implement that in a Logic AU.

 

Oh well.