Quote: @Supertonic
I can't get the nanoKEY working for the DAW and Synfire at the same time. Either the DAW gets it, or Synfire does. So connect it to Cubase if you use the drones, or to Synfire if you use the Engine. You cannot run both at the same time.
This Windows audio is a mess.
Synfire and Cubase seem to be a bad combination on Windows 7, even when Cubase is not open.
- The nano2 keyboard is not working now; even when Cubase is not open, the MIDI port is taken away from Synfire.
So even when working with drones in Cubase and Synfire, there is no keyboard available in Synfire?
That would be a dead end.
Does that mean Cubase must be removed to get the nano2 keyboard working?
Perhaps I am missing something?
-----------------------------------
I was following the thread from André about harmonising a vocal, which was interesting, and tried to do the same, learning from his workflow in Logic and SFE.
But this setup is not possible with Windows/SFE/Cubase: no keyboard for playing along with the voice.
So this aspect of composing with audio vocals in SFE/Cubase is not possible on Windows.
What is left for composing with Cubase 6.5 and Synfire Express? @Juergen gives an example of drones and VST Expression, where you can improve your arrangement/instruments in Synfire with the articulations feature (and perhaps the Note Expression feature too?) in Cubase 6.5.
What else can be done with the drones?
Thu, 2012-10-18 - 14:53 Permalink
Try setting the Engine(s) to Windows Audio or DirectSound instead of ASIO. Use ASIO for Cubase only. This way you can use both at the same time. There will be more latency for the Engine, but that's a minor problem.
Perhaps with Steinberg's multiclient driver?
Why not. Please let us know if it works for you.
Fri, 2012-10-19 - 08:49 Permalink
You are right. I once had a working setup with SFE (internal VSTi) + Cubase audio vocal + nano2 keyboard, but I cannot reproduce it anymore.
So this contradicts my post above, where I said it was not possible?
-----------------------------------------------------------------------
When SFE with the audio engines and Cubase are both open, you effectively have two DAWs, and Windows cannot handle the audio for both.
The audio vocal runs in Cubase and the VSTi in SFE at the same time, while I improvise a melody line over the vocal.
Strange; it seems I had the desired setup working on Windows in the past?
Perhaps with Steinberg's multiclient driver? A server that connects the same ASIO audio driver to different audio applications.
It is a mystery setup, haha. Well, the audio engineers can deduce how exactly it all works; not me.
Fri, 2012-10-19 - 08:50 Permalink
OK, it works for me!
I set up the ASIO multi client for 32-bit only, because a 64-bit version does not seem to be available; there is no 64-bit "ASIO multi client" to choose.
As the audio driver I chose one from Steinberg: the Generic Low Latency ASIO Driver.
So Cubase and Synfire share the same audio driver, and it seems to work normally.
Furthermore, I disabled the nano2 keyboard in Cubase and enabled it in Synfire (32-bit only).
Recording also works normally, and I can now record in SFE while listening to an audio vocal in Cubase (I have loads of them), with the transport in sync.
Hopefully it is stable and gives good sound quality; other audio drivers are probably also possible.
For now, only 32-bit SFE with 32-bit Cubase works with the Steinberg ASIO multi client setup (although it is not officially supported by Steinberg).
I will also try out the setup suggested by @supertonic. To be continued.
Fri, 2012-10-19 - 09:51 Permalink
Janamdo ... you know, there is an entirely different and simpler way to go: Don't use audio in Synfire Express!
If you don't know about them, google "Toby's loopbe virtual midi cables." Get them ... they are free ... and install them ... or any virtual midi cable ... but I like Toby's the best since they also work in x64, unlike some of the old standbys.
Send Synfire out on a virtual MIDI port using GM. Open Cubase and load up as many MIDI tracks as you have channels ... up to your max of 16 on SFP.
Set your Cubase MIDI tracks to monitor, match the channel settings up with the GM instrument in SFP, and add Halion ... or whatever sound you like, on a track-by-track basis. You can now monitor everything played in SFE directly through Cubase ... or record the MIDI when you are ready, without exporting a MIDI file from SFE.
Now you only need one audio engine and presumably can run your setup with a x64 OS, Cubase 64 and SFE whatever ... as Cubase couldn't care less whether it's receiving MIDI from a 32-bit or 64-bit program.
It is such a simple but powerful way to work. Keep on composing in SFE till you're satisfied while monitoring through Cubase with your instruments of choice. Once the composition is right, record the midi in Cubase, polish it off and then render to audio for mixing.
I'm doing this right now on my laptop with HN2 and Cubase! Sounds like I want it to sound since I can use all my VSTi instruments as I choose.
Forgot to add that you will need to insert a Transformer MIDI plugin on each channel and set it up to limit the track input to a single channel ... otherwise all channels will come through to each MIDI track. That's the only tricky part, but you do it once, make a template, and you're all set.
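For anyone curious what that per-track filter amounts to, here is a minimal sketch in Python with the mido library ... purely an illustration, not part of the Cubase setup itself; the port name is an assumption, check mido.get_input_names() for yours:

```python
# Illustration of the per-track channel filter (what the Transformer insert
# does in Cubase). Requires: pip install mido python-rtmidi
# 'LoopBe Internal MIDI' is an assumed name for your virtual cable.
import mido

WANTED_CHANNEL = 0  # mido counts channels 0-15; this is "channel 1" in a DAW

with mido.open_input('LoopBe Internal MIDI') as inport:
    for msg in inport:
        # Only channel messages carry a .channel attribute (clock etc. do not)
        if hasattr(msg, 'channel') and msg.channel == WANTED_CHANNEL:
            print('passed through to this track:', msg)
        # Messages on all other channels are silently dropped, so the
        # instrument behind this "track" hears only its own channel.
```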
Prado
Fri, 2012-10-19 - 11:16 Permalink
Thanks!
Janamdo ... you know, there is an entirely different and simpler way to go: Don't use audio in Synfire Express!
Not using audio in Cubase while working in SFE limits my composing with vocals, and adding audio tracks can be a source of inspiration.
As a bonus I can use Groove Agent One or LoopMash for drums.
When SFE is synchronised with Cubase via ReWire, SFE acts as the transport master (ruler).
---------------------------------------------------------------------------
Interesting setup: composing in SFE, monitoring it in Cubase, and then recording it in Cubase. But this has nothing to do with audio; there is no audio involved.
Perhaps audio could be added to this setup? :) Switch ReWire transport on in SFE and add an audio track? :) Then improvise a melody line in SFE over the vocal ... see the videos from André.
Have you seen the video from @Juergen where, working with drones, there is interactivity between SFE and Cubase?
In Cubase I can change the articulation (maybe also Note Expression) for the SFE phrase.
I am curious to see your setup in practice too, and to set it up myself. So if you could be more precise and describe the setup workflow, as @supertonic and @juergen do, it could benefit more Cubase users.
Note: with this setup I need the LoopBe loopback drivers from HN2; they are not included in the SFE installer anymore.
I have already asked about this on the forum.
Could Cognitone add LoopBe back to SFE?
Fri, 2012-10-19 - 13:58 Permalink
I think the simple LoopBe is available for free; if my memory serves me right it only does one device, so it is limited to 16 channels, but I might be wrong.
Whilst using a virtual MIDI cable works (it was the only way Synfire worked prior to the drones being introduced), I believe you get better sync using the drones.
Fri, 2012-10-19 - 18:33 Permalink
I mixed my apples and oranges.
LoopBe is from Nerds.de, but I think it is two-port.
(http://www.nerds.de/en/loopbe1.html)
Toby Erickson of Toby's Midi Bag fame has loopMIDI. This one is definitely multi-port. He also has a free LAN MIDI repeater for networks for the Mac.
They are both free and you can run them both at the same time if you like.
Since I don't have Rewire with HN2, I cannot check out whether you can still do transport sync when SFP/ SFE doesn't access an audio driver ... but I don't see why it wouldn't work.
I cannot actually think of any reason why the drones would sync better for pure MIDI, as opposed to audio streaming. With an audio stream coming through a drone, which I now understand to be a type of VSTi version of SFP/SFE loaded in the DAW, there would be Cubase or other DAW plug-in delay compensation to help with sync ... but for a pure MIDI stream, I can't see why it would be an issue.
As far as the setup goes, think of it this way. It is nothing more or different than having SFP/SFE function as a MIDI controller like a keyboard. Except you have as many keyboards sending input as you have instruments running in SFP/SFE.
As I'm only just getting into this, there are a few things I haven't worked out yet on the HN2 side, namely using fixed instead of dynamic midi channels so I can specify what channel an instrument plays on for easy setup in my DAW.
As I said, different DAWs will deal differently with the port MIDI data as it hits the DAW's MIDI track. In Cubase you need to insert a MIDI Transformer plugin on each track and use it to filter out all channels but the one you want to monitor. The Input Transformer in the Inspector only filters what will be recorded in the track, but it comes later in the path, so it will not prevent all MIDI data ... all channels ... from hitting the sound module receiving output from the track.
Other DAWs may work differently, but they all will be able to be set up this way.
Prado
Fri, 2012-10-19 - 18:46 Permalink
Janamdo wrote: Interesting setup: composing in SFE, monitoring it in Cubase, and then recording it in Cubase. But this has nothing to do with audio; there is no audio involved.
Well, there is audio from whatever is hosted in Cubase! Which is exactly the point. There is no necessity of having audio in SFP/ SFE.
Look at it like this: whatever SFP/ SFE or HN do with their internal prototyping algorithms, at the moment the music is made they are sending midi notes to the various channels of the attached ports.
Those notes don't care if they go SFP/ SFE house and let Halion play them there or if they go to Cubase's house and let Halion play them there. It is all the same to the notes!
And if you like how Halion plays the notes, you will like them whether Halion played them at SFP/ SFE's house or at Cubase's house.
It is really all that simple. It will always be less complicated to have Halion or another band/orchestra with all the members playing in the same house than to have some in one place and some in another, trying to move them back and forth and keep them all coordinated. Don't you think?
When you try to run two audio engines and sync them ... what you have been struggling with and complaining about ... it is very complicated and prone to 'issues.'
So why not just run one?
Prado
Fri, 2012-10-19 - 23:38 Permalink
Thanks!
When you try to run two audio engines and sync them ... what you have been struggling with and complaining about ... it is very complicated and prone to 'issues.'
So why not just run one?
In the multiclient ASIO setup I examined, I used one 32-bit SFE audio engine (because there is no 64-bit ASIO multiclient to choose) and the same audio driver for both SFE and Cubase (no conflicting drivers anymore).
In this setup I don't have to work with drones anymore, and the nano2 keyboard works normally.
So to answer your question: yes, I now run one 32-bit SFE audio engine and 32-bit Cubase (perhaps 64-bit is also possible).
Note: I don't know whether it is better, or even possible, to use a 32-bit audio driver in a 64-bit DAW.
Perhaps it would be possible to use both audio engines if there were a 64-bit version of the multiclient ASIO setup, but on Windows it seems you should strive to use only one ASIO audio driver.
Perhaps the 64-bit engine accepts the 32-bit ASIO multiclient audio driver? Then everything (32/64-bit SFE and Cubase) would run on the same ASIO driver, which does not seem to conflict with Windows.
If there is an advantage to using the 64-bit SFE audio engine with a 32-bit audio driver, you could try replacing the 32-bit SFE audio engine.
So I am satisfied with the setup achieved with the ASIO multiclient driver (no drones necessary), although there is no interactivity between SFE and Cubase. In your setup I can record the SFE phrases in Cubase; in a setup with drones I can change the articulation in Cubase, which affects the phrase in SFE when played. But perhaps recording is also possible there? I have to test this setup.
LoopBe is also available with 8 ports, but with two ports you already get 32 MIDI channels.
Sat, 2012-10-20 - 12:35 Permalink
@Prado:
I cannot actually think of any reason why the drones would sync better for pure MIDI, as opposed to audio streaming.
The drones receive MIDI data from Synfire ahead of time and join it with the audio stream at sample-precision timing. The best timing you could get. Pure MIDI always has latency and jitter.
Well, there is audio from whatever is hosted in Cubase! Which is exactly the point. There is no necessity of having audio in SFP/ SFE.
Running an Engine in addition to the DAW project is helpful, because you can host a shared rack on the Engine that is always ready to play library phrases, imported files, etc. Doing this in a DAW is more complicated. Often a track can not be heard unless it is armed for recording. You will constantly have to care about switching track monitoring in the DAW. There is also additional latency when playing live input.
The Engine cares about all that automatically.
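To make the timing argument concrete: a plugin host pulls audio from a drone in fixed-size blocks, so an event whose position is known ahead of time can be pinned to an exact sample inside the right block. A rough sketch of that arithmetic (not Cognitone's code; sample rate, block size and tempo are arbitrary assumptions):

```python
# Conceptual sketch of sample-accurate event placement, not Cognitone's code.
# The host asks the drone for audio in fixed-size blocks; MIDI received ahead
# of time can be pinned to an exact sample inside the right block.
SAMPLE_RATE = 44100   # assumed session sample rate
BLOCK_SIZE = 512      # assumed audio buffer size
TEMPO_BPM = 120.0     # assumed tempo

def event_position(beat):
    """Map a beat position to (block index, sample offset within the block)."""
    seconds = beat * 60.0 / TEMPO_BPM
    sample = round(seconds * SAMPLE_RATE)
    return divmod(sample, BLOCK_SIZE)

# A note on beat 1.5 lands 0.75 s into the stream:
print(event_position(1.5))  # -> (64, 307): block 64, sample 307 of that block

# Live MIDI, by contrast, arrives whenever the OS delivers it and can only be
# placed at the start of the next block, hence latency and jitter.
```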
Sat, 2012-10-20 - 13:53 Permalink
I am glad now that SFE works directly with the multiclient ASIO setup!
For more backup setups, I will also try the setup with an ASIO driver in SFE and Windows Audio/DirectSound in the DAW. I am curious to see whether this is a working setup too (perhaps the hassle with the nano2 keyboard not connecting is gone as well?).
Sat, 2012-10-20 - 18:57 Permalink
@Supa-T
I cannot actually think of any reason why the drones would sync better for pure MIDI, as opposed to audio streaming.
The drones receive MIDI data from Synfire ahead of time and join it with the audio stream at sample-precision timing. The best timing you could get. Pure MIDI always has latency and jitter.
And so too does audio have latency. What you say can only be true if there is an internal MIDI buffer inside of SFP and some type of plugin delay compensation (PDC). I believe this is only 'the best timing you could get' with respect to running audio and MIDI together ... or more precisely, streaming audio from SFP.
Forget audio for the moment. I don't understand why MIDI internally generated by SFP and sent internally to hosted VSTi and then sent as audio to the DAW could have less latency than MIDI sent directly to the DAW. I know MIDI is serial and that there can be irregularities, with dropouts or slight timing fluctuations, if there is too much traffic.
If you are saying that there is an extra layer of MIDI buffering within SFP to keep all channel streams hitting the instruments with proper timing, I would be impressed. However, even so, there will be variations in how hardware and software process that MIDI and the return of audio, either through SFP or in the DAW. Even if SFP also has an additional audio buffer so that when sending on drones they are synced, due to latencies in processing ... basically its own PDC ... that process will still be repeated with PDC in Cubase, at least.
Which is why, I can't see how sending pure midi and no audio from SFP to a DAW could result in less midi latency or poorer midi syncing when taking into account the entire trip. Help me see my error?
If you are talking about some audio coming from SFP and other audio and midi generated in the DAW simultaneously being played back, I can conceive of the difficulty ... but not with straight midi from SFP to the DAW.
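For what it's worth, the 'latency and jitter' side of this could be put into numbers with a crude round-trip test through a virtual cable. A sketch with the Python mido library (the port name is an assumption; any loopback driver will do):

```python
# Crude round-trip jitter test through a virtual MIDI cable, for illustration.
# Requires: pip install mido python-rtmidi; the port name is an assumption.
import time
import mido

PORT = 'LoopBe Internal MIDI'
N = 100

with mido.open_output(PORT) as out, mido.open_input(PORT) as inp:
    trips = []
    for _ in range(N):
        t0 = time.perf_counter()
        out.send(mido.Message('note_on', note=60, velocity=64))
        inp.receive()                     # blocks until the note comes back
        trips.append(time.perf_counter() - t0)

mean_ms = sum(trips) / N * 1000
jitter_ms = (max(trips) - min(trips)) * 1000
print(f'mean round trip {mean_ms:.2f} ms, jitter {jitter_ms:.2f} ms')
```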
Running an Engine in addition to the DAW project is helpful, because you can host a shared rack on the Engine that is always ready to play library phrases, imported files, etc. Doing this in a DAW is more complicated. Often a track can not be heard unless it is armed for recording. You will constantly have to care about switching track monitoring in the DAW. There is also additional latency when playing live input.
More complicated to set up the DAW with the midi filtering and arming/ monitoring as you say ... but you only have to set it up once. I already have a template in Cubase where I can do exactly this with HN2. I am immediately playing phrases for audition since I have my basic 16 midi tracks all set up for monitoring and running to a GM module via Cubase. So I hear the shared rack.
Within the DAW I can simultaneously route MIDI channels to more MIDI tracks directly from the HN2 instruments set to MIDI, if I want other than GM voices. I have the convenience of quickly switching the voice/patch within Cubase at the MIDI track output selection level. I can't see how that could be any more difficult than switching devices for an instrument in SFP ... and I don't ever have to worry about setting up and routing the device.
Prado
Sat, 2012-10-20 - 19:17 Permalink
I don't understand why MIDI internally generated by SFP and sent internally to hosted VSTi and then sent as audio to the DAW could have less latency than MIDI sent directly to the DAW. I know MIDI is serial and that there can be irregularities, with dropouts or slight timing fluctuations, if there is too much traffic.
If you are saying that there is an extra layer of MIDI buffering within SFP to keep all channel streams hitting the instruments with proper timing, I would be impressed.
The buffer is in the drones. I think the buffer is loaded with new MIDI data the moment you press the start button in Synfire. Once the MIDI buffer is loaded, the drone runs on its own. You can export the MIDI data from the drone (drag it to a MIDI track in the DAW) even when Synfire is closed.
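A loose analogy for why the data survives Synfire being closed (hypothetical events, ordinary mido file writing, nothing to do with the drone's internals): once timed MIDI messages sit in a buffer, they are self-contained and can be written out as a standard MIDI file:

```python
# Loose analogy, not the drone's internals: once timed MIDI data is buffered,
# it can be written out as a standard file on its own. Hypothetical events.
import mido

TICKS_PER_BEAT = 480
events = [(0.0, mido.Message('note_on',  note=60, velocity=90)),
          (1.0, mido.Message('note_off', note=60)),
          (1.0, mido.Message('note_on',  note=64, velocity=90)),
          (2.0, mido.Message('note_off', note=64))]   # (beat, message), sorted

mid = mido.MidiFile(ticks_per_beat=TICKS_PER_BEAT)
track = mido.MidiTrack()
mid.tracks.append(track)

last_tick = 0
for beat, msg in events:
    tick = round(beat * TICKS_PER_BEAT)
    track.append(msg.copy(time=tick - last_tick))   # MIDI files use delta ticks
    last_tick = tick

mid.save('drone_export.mid')   # usable in any DAW, source program closed or not
```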
Sat, 2012-10-20 - 20:20 Permalink
Exactly. So the latency takes place loading into the drones. Why then would this latency be any less or different from MIDI sent directly out of the SFP MIDI ports to the DAW?
Of course, midi sync is much more important than midi latency when the midi is generated from a single source. But even inexpensive simple little programs like ChordPulse can generate 5 or 6 track/ channel midi without any sync issues.
This could be tested easily enough: record the MIDI from the drones into your DAW. Remove all drones, send the same song out directly over MIDI from SFP or HN2, and then compare the MIDI data.
The only thing that is not clear to me is whether SFP permits multiport MIDI out. If it does, then even more complex pieces that exceed the 16-channel port limit could also be sent directly out via MIDI to the DAW.
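If anyone wants to run that test, something like this would line up the two takes note by note (Python with mido; the file names are placeholders for the two exported recordings):

```python
# Sketch of the comparison: line up note-on times from the two recordings.
# File names are placeholders for your two exported takes.
import mido

def note_on_times(path):
    """Absolute time in seconds of every note-on (mido applies the tempo map)."""
    t, times = 0.0, []
    for msg in mido.MidiFile(path):
        t += msg.time                     # iterating a MidiFile yields delta seconds
        if msg.type == 'note_on' and msg.velocity > 0:
            times.append((msg.note, t))
    return times

drones = note_on_times('from_drones.mid')
direct = note_on_times('direct_midi.mid')
for (n1, t1), (n2, t2) in zip(drones, direct):
    print(f'note {n1:3d}: drone {t1:.4f} s, direct {t2:.4f} s, diff {t1-t2:+.4f} s')
```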
Prado
Sat, 2012-10-20 - 21:41 Permalink
We all always want more!
Anyway ... 16 channels/ instruments is a lot ... especially if you use my streaming midi method.
Why? Well, in my DAW I can stack voices and send the same instrument to 3, 4 or more different MIDI tracks, each one routed to a different instrument. And in Cubase, as you probably know, you can manipulate that incoming MIDI. So the MIDI data can be modified from the track input to the track output in a lot of ways, including using arps or transposing the MIDI notes up or down plus or minus 24 semitones ... and then setting the MIDI Modifier to the key of the song so that those notes remain in harmony.
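As a toy version of that transpose-then-stay-in-key trick (my own sketch, not Cubase's MIDI Modifier algorithm; the scale table is an assumed C major):

```python
# Toy version of transpose-plus-key-constraint, not Cubase's actual algorithm.
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}    # pitch classes of the song's key (assumed)

def transpose_in_key(note, semitones, scale=C_MAJOR):
    """Shift a MIDI note, then nudge it down until it sits on a scale tone."""
    n = max(0, min(127, note + semitones))
    while n % 12 not in scale:
        n -= 1
    return n

print(transpose_in_key(60, 13))  # C4 up 13 -> C#5 (73), snapped down to C5 (72)
```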
I'm pretty sure I can get totally lost with 16 instruments.
Prado
Sat, 2012-10-20 - 21:44 Permalink
The only thing that is not clear to me is whether SFP permits multiport MIDI out.
Yes, Synfire and HN support multiport MIDI out. You can connect each device description to up to 4 different MIDI ports (see enclosed pic), and for each sound that you define for the device you can select the desired port (see the right side of the picture). But don't confuse that with the 16-instrument limit for an arrangement in SFE.
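In code terms, multiport output is just a port lookup per sound. A small sketch with the Python mido library (port and sound names are made up):

```python
# Sketch of multiport MIDI out: each sound is mapped to one of several ports,
# the way a device description assigns sounds to up to 4 ports. Names made up.
import mido

PORTS = {name: mido.open_output(name)      # 4 ports x 16 channels = 64 targets
         for name in ('Port A', 'Port B', 'Port C', 'Port D')}
SOUND_TO_PORT = {'strings': 'Port A', 'piano': 'Port B', 'drums': 'Port D'}

def note_on(sound, note, velocity=100, channel=0):
    """Send a note to whichever port the sound is assigned to."""
    PORTS[SOUND_TO_PORT[sound]].send(
        mido.Message('note_on', note=note, velocity=velocity, channel=channel))

note_on('piano', 60)
```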
Sat, 2012-10-20 - 22:00 Permalink
For SFE it is 16 instruments; I would like to see more instruments, say 24 or 32 (best).
SFP has 260 instruments. How many MIDI ports does that take? I would guess it also has just 16 MIDI channels, but that is not likely.
I would like to see multiport MIDI out for both SFs; it is a reasonable request.
Probably there are 64 for SFP, so 32 for SFE sounds reasonable (SFE is also half the price of SFP).
The only thing that is not clear to me is whether SFP permits multiport MIDI out. If it does, then even more complex pieces that exceed the 16-channel port limit could also be sent directly out via MIDI to the DAW.
Sat, 2012-10-20 - 22:10 Permalink
Prado, sorry, you missed the point with latency. The MIDI latency with the drones is zero. Audio and MIDI play in perfect sync.
You have some latency when playing the keyboard live while the DAW is stopped, though. Once the playback is running, there is no more latency.
Sat, 2012-10-20 - 22:17 Permalink
Prado, sorry, you missed the point with latency. The MIDI latency with the drones is zero. Audio and MIDI play in perfect sync.
Unfortunately, it won't be the last time!
OK ... can a Drone be a pure midi pipe, i.e., not routed to any SFP audio device, but streaming midi straight within the DAW? That would interest me.
Prado
Sat, 2012-10-20 - 22:23 Permalink
A drone can be switched to MIDI mode; that is, if it is a VST, its own MIDI output can be routed inside the DAW.
SFP is never streaming audio. The Engine does, or the drones do inside the DAW. Audio and MIDI are streamed from the drones to the DAW they are hosted in. Zero latency.
Sat, 2012-10-20 - 22:41 Permalink
Thank you. I see I have a bit to learn about the drones, but in broad strokes, does this mean a VSTi must be loaded in the SFP drone, or simply that the drone is identified in SFP as a drone device, regardless of whether it is actually connected to any VSTi?
I still struggle with whether this is an either/or or a both/and. When you say SFP never streams audio to the DAW, you are referring to the fact that SFP is a separate module from the audio engine you provide, correct?
I had the impression that your audio engine was then connected via ReWire to the DAW, correct?
So in the end, your audio engine is syncing the SFP MIDI input, but your engine's output will still have to be synced by the DAW if the DAW has other audio tracks or hosted VSTi for any reason, correct?
Or, the Cognitone engine is master of MIDI from SFP routed to hosted VSTi/devices, but the DAW will still be clock master of the final output, correct?
I'm still ... I know, I'm hard headed, as we say ... not convinced how MIDI originally generated by SFP, sent to your engine through its buffers, and then provided as MIDI output to the DAW can be faster than MIDI directly from SFP to the DAW. The MIDI is transmitted in either case, and the drones require one more step.
Perhaps we are talking about issues of recording midi data in the DAW while the drones playback a song?
I'm otherwise puzzled as it simply doesn't make sense that adding an extra step creates less latency.
Prado
Sat, 2012-10-20 - 23:25 Permalink
When you play the audio in your DAW and use the audio engines directly, then SFP/SFE is not streaming audio via ReWire.
ReWire is only for transport; this keeps SFP and the audio in sync.
The same story goes for drones and SFP/SFE.
Audio and SFP are separate.
Correct me if I am wrong.
---------------------------------
I am interested to see whether @Prado can find his best Cubase setup with SFE.
We are trying to reason it out here.
Note: I recently switched to an SSD hard disk; it works very conveniently (fast).
I'm otherwise puzzled as it simply doesn't make sense that adding an extra step creates less latency.
I can imagine that you are confused by this, but could it be that the drone improves the MIDI signal (if that is possible)? Or perhaps this happens before the signal reaches the drone?
Speculation; the people who know this are the two software engineers at Cognitone.
Cheers.
Sat, 2012-10-20 - 23:31 Permalink
Jan ... your English is 'difficult' to understand. No offense intended; it's just that it is sometimes confusing as to exactly what you are trying to say. I would do much, much worse in Dutch or Flemish!
Anyway, ReWire is absolutely not just for transport, unless you mean in SFP and its 'Engines.' I should probably shut up about what I haven't yet used. My discussion has been prompted by questions in my own mind about whether I need to use drones or the 'Engine' at all. I must say I haven't heard compelling reasons. Reported advantages are still, in my mind, offset by the complexity of setup.
If the issue of MIDI transmittal is one of latency, or the 'travel time' from SFP to anywhere, including its own 'Engine,' then your comment
could it be that the drone improves the MIDI signal (if that is possible)?
does not seem possible. How could it make the midi travel faster than it already does inside the computer? If the Cognitone 'gods' have invented 'turbo midi,' then they should certainly be peddling it to every other developer of software and manufacturer of hardware in the world. Actually I think we are not far from Heisenberg's 'Uncertainty Principle.'
Prado