Posted
I don't get drones. The documentation (and some forum discussions) implies that I can host DAW virtual instruments using drones - presumably so that MIDI data can flow to the DAW and trigger virtual sound modules there.
But all I can see assignable in the drone track in the DAW are effects, not VIs. I'm using Digital Performer, but that shouldn't be relevant. Everything else seems just fine - I can sync the DAW and SF etc. What am I missing conceptually?
Thanks in advance.
Mon., 11.02.2013 - 10:49 Permalink
On a Mac, the drones should appear as AudioUnits.
You may need to scan for installed plug-ins: in the Audio & MIDI Setup window, go to the Host menu and re-scan all plug-ins (with the reset option).
Tue., 12.02.2013 - 01:02 Permalink
SF would always be first in your chain ... unless you tried to feed midi output from a daw or other sequencer into SF to enter a phrase, as you might otherwise do by recording keyboard midi input. While you could do this, it would be very unlikely ... since whatever midi file you had in your daw you could also import directly into SF.
So, for all practical purposes SF is always first.
MIDI is always second, but ... pay attention ... it can be internal, when SF hosts a VSTi instrument and uses its own audio engine; or it can be external, by routing midi to a daw or directly to hardware.
The only way the prototyping gets out of SF is via midi.
What you do next is up to you. There are several choices. It also depends upon where you are ultimately headed. Are you trying to use SF for performance, or are you trying to record audio for production? In the first case you don't need a daw ... but you will need some midi-triggered sound module, either hardware or software hosted in SF or in a daw.
In my view ... others may disagree ... the main purpose of SF is to generate harmonically correct arrangements.
My preference is to create a song in SF and then export a midi file to import into my daw.
This is important to me for two main reasons. One, I cannot do all the midi editing and manipulation that I want to do in SF. There are midi editing tools and midi VSTs I can use in my daw to refine things. Two, I find it easier and more intuitive to assign different voices to the instruments in my daw than using the drones or other procedures within SF. I do not want to spend a lot of time building devices.
So I just use a GM source, hardware, to play the arrangement in SF. I know I want something to be strings, so I use one of the GM strings. But when I export the file to my daw, I have many different types of strings I can replace the GM sound with.
That's my approach. While there is usually more than one way to use a creative tool ... as far as I can see, SF will always be first in your chain.
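As a side note: if you want to double-check what the exported midi file actually contains before importing it into the daw, a small script can list which channels carry which GM program numbers. This is only a rough sketch, not anything built into SF or a daw; it assumes Python with the mido library installed, and "arrangement.mid" is just a placeholder name for whatever file you exported.

import mido

# Rough sketch: list which GM program numbers each channel uses in an
# exported MIDI file. "arrangement.mid" is a placeholder for the SF export.
programs = {}
mid = mido.MidiFile("arrangement.mid")
for track in mid.tracks:
    for msg in track:
        if msg.type == "program_change":
            programs.setdefault(msg.channel, set()).add(msg.program)

for channel, numbers in sorted(programs.items()):
    # e.g. GM program 48 (counted from 0) is "String Ensemble 1"
    print(f"channel {channel + 1}: GM programs {sorted(numbers)}")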
Tue., 12.02.2013 - 01:23 Permalink
With HALion 4 you can choose the sound you like in Synfire Pro/Express.
So I just use a GM source, hardware, to play the arrangement in SF. I know I want something to be strings, so I use one of the GM strings. But when I export the file to my daw, I have many different types of strings I can replace the GM sound with.
There are no restrictions on which sounds you can use in Synfire with HALion 4.
You can use a combination of GM and native HALion 4 sounds and other VST plugin sounds directly in your arrangement in Synfire.
All of this can be exported to drones in your DAW, which can be complicated ... to take the arrangement further in your DAW.
Ideally you do the same in Synfire Pro/Express and your DAW ... using the same sounds.
Doing the work of the arrangement in Synfire and using the audio engine there is the easiest workflow.
Tue., 12.02.2013 - 01:32 Permalink
But the expectation is what ...
SF -> MIDI -> DAW ?
or
DAW -> MIDI -> SF ?
I.e., I'm confused about what the drone does/enables.
Thanks
Tue., 12.02.2013 - 02:52 Permalink
Janamdo wrote: Doing the work of the arrangement in Synfire and using the audio engine there is the easiest workflow
LOL ... you mean the easiest after you spent almost 2 months and about 100 e-mails trying to get help to set it up?
It may be easy now, but it is certainly not the easiest way to get started.
Tue., 12.02.2013 - 09:52 Permalink
:) Yes, it was a complicated issue for me, because I started with Harmony Navigator and that was a big hassle too, with the sound assignment system.
Then I switched to Synfire ... also difficult to master the sound assignment, and Cognitone was also changing their software during this learning process, and there is minimal documentation.
With the setup videos you could learn to control the sound assignment, but Cubase is not covered in the videos.
Other DAWs like Ableton, Logic, etc. work differently than Cubase, which means you must translate this knowledge to Cubase (and that demands a good understanding of how a DAW operates).
But my point is that to export your sound from Synfire Pro/Express to a DAW, you first need to master the sound assignment locally in Synfire, and then take the next step to the DAW and export (relocate) the VSTi from Synfire to your DAW with the drones.
So I concluded that this is the easiest workflow: stay in Synfire (which does not mean that the sound assignment is easy).
Locally in Synfire you can combine a hardware midi sound module and VSTis, so GM sounds mixed with a hardware midi sound module and VSTis in one shared rack is possible.
The first step to mastering a drone (or midi drone) in Synfire is to first completely master Synfire's own sound assignment system.
There were other issues, like having two audio drivers active at the same time (Synfire and Cubase open at the same time), and HALion 4 gives problems with exporting to the DAW.
That is also covered in the e-mails, but the actual sound assignment trouble took fewer e-mails.
Indeed, a nightmare to deal with all of this, because it becomes a trial-and-error issue ... the worst learning scenario to master.
It may be easy now, but it is certainly not the easiest way to get started
It is trial and error ...
Tue., 12.02.2013 - 10:15 Permalink
The way I use SF in Cubase works fine for me, but is not necessary.
The easiest work flow is to create in SF using GM with software or hardware and then export the arrangement as a midi file to your daw.
I know many people prefer to use the sounds they ultimately want from SF using drones or hosted VSTi ... but for me that never seemed worth the time to build all the devices.
But I'm sure if that is important to you and you have succeeded, it is very rewarding.
The first thing is to simply get SF working ... then one can decide what options are most important.
Tue., 12.02.2013 - 11:24 Permalink
:yeah: I think you are right: the easiest setup is with a GM sound module in Synfire, without working with the drones.
The easiest work flow is to create in SF using GM with software or hardware and then export the arrangement as a midi file to your daw.
Combining a GM sound module with a hardware midi sound module + other VSTis is also possible in Synfire, done as a shared rack device description (is this correct?), and not a big deal to set up once you master the sound assignment.
You save your setup as a rack & setup ... (shared rack (VSTis) + midi devices).
No drones are involved here ...
Tue., 12.02.2013 - 11:49 Permalink
Hi Prado
Why limit yourself to GM sounds only in Synfire, when it is possible to use any sound you like from VSTis or hardware midi sound modules?
You can already prototype your arrangement with your desired sounds in Synfire.
Note: the hardware midi sound module is probably more difficult to get working in Synfire? ... is there a multitimbral mode?
You first need a device description for the hardware sound module for GM multitimbral mode.
With the soft synth/sampler HALion, the included standard internal GM synth can be used ... HALion 4 must be set to GM mode (load a multi program into it) ... then it plays imported GM files in Synfire.
The Tyros 4 GM multitimbral hardware sound module can probably also be used, by @Mark Styles.
Tue., 12.02.2013 - 14:15 Permalink
Well, I think what dpart actually wanted to know is what the Drones are used for.
The short answer is: Drones are used to link SF with your DAW, so you can do the audio mixing in the DAW while editing your score in Synfire.
Since SF can host your plug-ins in its own Engine, you do not need the Drones, unless you really want to use your DAW, for example, to host vocal recordings.
Tue., 12.02.2013 - 18:56 Permalink
Since SF can host your plug-ins in its own Engine, you do not need the Drones, unless you really want to use your DAW, for example, to host vocal recordings.
Or track and record your SF output?
Tue., 12.02.2013 - 19:01 Permalink
Yes thank you supertonic, I just wanted to know what drones do and what behavior to expect.
Interestingly, I find that since the SF+DAW connection needs the IAC driver (on Mac) anyway, and my DAW then allows me to host *any* VI, there is no value gained in using a drone (that I can see). Just SF -> IAC Driver MIDI -> DAW (with TRANSPORT for sync) is great.
(Also, I find that using DP + MachFive3 is a great workflow with SF. So for those who are not familiar with this software, check it out.)
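For anyone who wants to sanity-check the IAC side of that routing outside of SF and the DAW, here is a rough sketch. It assumes Python with the mido library and its rtmidi backend installed; "IAC Driver Bus 1" is the default bus name and may differ on your machine.

import mido  # assumes the python-rtmidi backend is installed

# List every MIDI output port the system exposes; an enabled IAC bus
# (Audio MIDI Setup -> IAC Driver) should show up alongside hardware ports.
for name in mido.get_output_names():
    print(name)

# Anything sent to the IAC port is received by whatever track the DAW has
# armed on that bus; this is the same path SF uses when its MIDI output is
# routed to the IAC driver instead of a drone. Uncomment to send a test note:
# with mido.open_output("IAC Driver Bus 1") as port:  # default name, may differ
#     port.send(mido.Message("note_on", note=60, velocity=96))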
Tue., 12.02.2013 - 19:12 Permalink
Prado, you can listen to the output from the synfire engine, mix the different instruments in the sf mixer, apply vst/au effects in sf and save the output to a wav file. You don't need a daw unless you have other audio sources, or if you want more control, or if you are just more comfortable doing it manually.
Btw I always use a daw...
Dpart, using the drones gives sample-accurate timing.
Tue., 12.02.2013 - 20:49 Permalink
Hi - I started out thinking I could do it all in sfp, but then I ran into huge problems getting live midi input recorded from my keyboard into sfp. I just couldn't get it to work (win8 64-bit). As soon as I opened an audio host in sfp, it would kill my midi input.
I had to change my workflow. Now I fire up reaper with drones, then sfp with no audio engines and configure the instruments as midi drones and with midi input from reaper. You can mix as you go and it works great.
I would still rather work with NO DAW UNTIL EXPORT. Completely with sfp and the sfp audio engines...it's just so much faster to sketch out stuff. With drones, you sort of spend some time thinking about setting up stuff.
I hope someday it will be fixed (maybe it's not broken?), but it seems like audio engines with live midi input is a systemic problem on Windows.
If you use drones then you must start your work on bar 2, which is understandable, but it's a real drag for setting up the chords in the progression editor, because you want no chords in the first bar. It's all very doable but not ideal. Maybe have a fake or intro bar with no information on it, called BAR ZERO, so you could hide it?
Tue., 12.02.2013 - 21:16 Permalink
Thanks, blacksun ... but no tracking, right? You just end up with a single stereo output of your 'mix.'
This is not satisfactory for me. I individually track all my virtual instruments, including even each drum piece, i.e., kick, snare, hats, toms, etc., to separate audio tracks for complete mixing and mixdown control.
Tracking gives you almost as much flexibility as midi. You can effect different parts, effect the same part differently, ride faders and automate, create sub-sections, etc., etc.
This is another reason why I'm not enthused about the audio engine ... unless or until it is a full multitrack recorder. A single stereo mix file, regardless of the quality, has very little chance of becoming a professional recording.
Tue., 12.02.2013 - 21:55 Permalink
problems getting live midi input recorded from my keyboard into sfp
This issue is on the list. It is probably simple to fix. Looks like a midi driver conflict.
I would still rather work with NO DAW UNTIL EXPORT. Completely with sfp and the sfp audio engines...it's just so much faster to sketch out stuff.
That is how SF is intended to work best for most users.
Maybe have a fake or intro bar with no information on it, called BAR ZERO, so you could hide it?
Bar zero is also on the list ;-)
In the meantime, use a container and move it to bar one. Put your progression in there to keep it consistent and easy to edit.
Tue., 12.02.2013 - 22:03 Permalink
I don't think this can happen, as SF will always require the latency inherent in running the algorithms to produce the prototyped midi output.
That's why I setup my DAW as a virtual synth rack. I send out the GM voices from SF with virtual midi cables and then filter the channels in my DAW so each track contains a single channel. Those tracks are routed to a good GM module.
I have a second set of 16 midi tracks initially with output 'not connected,' but with the same virtual midi cable input from SF. Those tracks also have program changes filtered so that there is no patch reset.
Once I'm basically happy with the SF arrangement with GM I start rerouting the midi channels to the second set of tracks and then picking preferred instruments and tracks for the arrangement. I can now work back and forth between SF and the DAW embellishing the arrangement.
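For what it's worth, the channel-splitting and program-change filtering I do with DAW tracks can also be sketched as a small script, just to illustrate the idea; this is not how DP does it. It assumes Python with the mido library, and "arrangement.mid" / "arrangement_split.mid" are placeholder file names.

from collections import defaultdict
import mido

# Sketch of the idea: split an exported MIDI file so each output track holds
# a single channel, and drop program changes so the DAW's instrument choices
# are not reset by the file. File names are placeholders.
src = mido.MidiFile("arrangement.mid")
out = mido.MidiFile(type=1, ticks_per_beat=src.ticks_per_beat)

events = defaultdict(list)      # channel -> list of (absolute_time, message)
for track in src.tracks:
    now = 0
    for msg in track:
        now += msg.time         # msg.time is a delta in ticks
        if msg.is_meta:
            continue            # tempo/markers skipped for brevity
        if msg.type == "program_change":
            continue            # no patch resets on the DAW side
        if hasattr(msg, "channel"):
            events[msg.channel].append((now, msg))

for channel in sorted(events):
    track = mido.MidiTrack()
    out.tracks.append(track)
    last = 0
    for when, msg in sorted(events[channel], key=lambda pair: pair[0]):
        track.append(msg.copy(time=when - last))    # back to delta times
        last = when
    track.append(mido.MetaMessage("end_of_track", time=0))

out.save("arrangement_split.mid")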
When I've gone as far as I can go in midi, I export the midi file from SF and then sync it to the tracks/channels I've been monitoring with. I'm now ready to record or render audio.
This may sound complicated, but I use a 'private device' in SF and a template in the DAW. So once it is set up the first time it literally takes a couple of minutes to start a new project.
I've tried recording the SF input directly to the DAW, but for whatever reason the timing is not as tight as when I export the midi file.
This is a long-winded 'work-flow' way of agreeing with you, Proto.
Wed., 13.02.2013 - 00:33 Permalink
Thanks Supertonic - That sounds great about the possible fix for midi input.
Prado - Yeah, I think it's definitely possible! <live low-latency midi input in sfp only> because I somehow tracked it in on win8 64-bit with the sfp demo using the audio engines - version 1.6.5b3.
Wed., 13.02.2013 - 01:36 Permalink
Prado - Yeah, I think it's definitely possible! <live low-latency midi input in sfp only> because I somehow tracked it in on win8 64-bit with the sfp demo using the audio engines - version 1.6.5b3.
Prototyping midi latency is not a midi input issue ... it's a midi output issue. I've never had any issue with keyboard inputting midi into SF for recording. There is no prototyping at input.
The drones are part of the midi output system.