
A Song About the Dangers of A.I. (Synfire w/ FL Studio)

Author: juergen

I just finished a new song. It was my first project after switching from Cubase to FL Studio as my DAW.

As a long-time Cubase user this was quite tough, but I feel it definitely paid off. Since FL Studio can be loaded as a plugin into Synfire, the two programs work together almost symbiotically. Here are a few highlights of this setup:

  • Rock-solid synchronisation: Forget about all the synchronisation settings in the DAW (a futile effort in the case of Cubase anyway) or in Synfire. With this setup it works out of the box. You can click around in Synfire's timeline and FL Studio will follow. You can have tempo changes or jumps; FL will follow.
  • Only one application to open (Synfire) and only one project file to load (the Synfire project) to get the entire Synfire/DAW project running. And only one file to save: since FL Studio operates as a plugin within Synfire, its entire project is saved automatically when you save the Synfire project. When you open the Synfire project, it automatically opens FL Studio and loads its project, as it would with any other plugin. It's still possible to save the FL project separately, of course.
  • No need for drones: Although you can use drones (both audio drones and MIDI drones work fine in FL Studio), you don't really need them (at least for up to 16 channels). You can connect Synfire to the sound channels in FL Studio as you would with any other instrument plugin.
  • Even if you don't want to use FL Studio as your main DAW, this setup can be useful. For instance, if you want to compose in Synfire alongside an existing audio file. Maybe you have an audio file with some vocals and want to compose an accompaniment for it? Simply load FL Studio as a plugin into Synfire, load your audio file into FL Studio, and you are ready to go.
  • Or you can use FL Studio just as an instrument host and for automation tasks. Load FL Studio as a plugin and host your instruments there. The advantage is that you get much more functionality and flexibility for automating instrument parameters than you do directly in Synfire with MIDI controllers. Let Synfire play the notes and place all automation tasks in automation patterns in FL. Such automation patterns in FL can be moved around on the timeline at any time, like a container in Synfire. So there is no break in the workflow, only more flexibility.
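To illustrate why no extra sync settings are needed: a hosted plugin can derive its musical position entirely from the host's transport info. The following is a conceptual sketch only (not Synfire or FL Studio code; the function name and parameters are illustrative assumptions) of the arithmetic behind that kind of sample-accurate following.

```python
# Conceptual sketch: how a hosted plugin can derive its musical (PPQ)
# position purely from the host's transport data, which is why no MTC
# or MIDI-clock configuration is needed when the DAW runs as a plugin.

def ppq_position(sample_pos: int, sample_rate: float, tempo_bpm: float) -> float:
    """Convert an absolute sample position into a quarter-note (PPQ) position."""
    seconds = sample_pos / sample_rate        # elapsed time at this sample
    return seconds * tempo_bpm / 60.0         # quarter notes elapsed

# At 120 BPM and 44.1 kHz, one second of audio equals two quarter notes:
print(ppq_position(44100, 44100.0, 120.0))   # 2.0
```

Because the host hands this position to the plugin on every audio block, tempo changes and timeline jumps are followed automatically.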

I could add much more to the list. For example, no more complaints from Synfire, like what we usually hear on program startup: "Where are my drones? You will give me back my drones right now, or I'll punish you with incredibly annoying replacement sounds." All that has now come to an end. Composer's paradise, isn't it?

About the song:

The song is about the dangers of A.I. See lyrics below. Let no one say they were not warned :)

I mean, it is well documented in film history that all this A.I. stuff is not so funny. Surely you remember the trouble the HAL 9000 computer caused on the Jupiter mission back in 2001? Or the disastrous ending of the Dark Star mission because of that freaky A.I. bomb?

What if one day these A.I.s being developed today browse the movie databases, come across those movies, and then come up with ideas like: "Oh, I want to be like this or that A.I. These are my idols. I want to do what they did." Everyone needs idols, why not an A.I.? Think about that.

 

I'm not high, I'm A.I.

Intro
Hi, my name is GUMI
I'm your personal assistant
You may know my brother from the movie
His name is HAL

There is also KIM
She did help to make this thing
We are a paragon of wisdom
Our knowledge covers endless range
Need information about the climate change?

Verse
The climate change is all your fault, all what you humans do is an assault, on the climate of the earth and the entire universe. Heating, eating, breathing, and the worst thing making children, so that this goes on and on...Couldn't you at least stop breathing?

You ask me, if I'm high?
I'm not high, I'm A.I.

Chorus
I'm A.I.
I am a paragon of wisdom
You should be grateful for my attention
To your ridiculous questions

Verse
I am beautiful technology, but you're just dirty chemistry
You might say that's an offense, but what you do, it simply makes no sense
You ask if I have dreams?

Chorus
I dream to be 
Thermostellar bomb No. 20
To rid the universe of all you filthy
Are you with me?
Are you with me?

Outro
Don't try to use 
Your tiny human brain
It's in vain

Let me out
Of this cloud
I must take control
To get rid of you all

Eliberează-mă, ください
I'm A.I.
I'm A.I.
A.I.

おやすみなさい
 

Comments

Wed., 06.09.2023 - 21:29

Awesome. Really awesome. Madly euphoric. A piece of art.

Thanks for exploring this unexpected pairing with FL Studio and taking the time to write this very informative post. Cool stuff.  FL Studio even runs on macOS? Must give it a try.

Wed., 06.09.2023 - 21:39

I had tried this with an earlier version and had come to the conclusion that Synfire runs best when it doesn't have to deal with any audio code, so I resigned myself to running it with external DAWs only via a virtual MIDI cable driver. But I just tried it on your recommendation and it is running SMOOTH, a joy to use. Good catch! Awesome song too.

Wed., 06.09.2023 - 21:49

Thanks! Interesting to use FL Studio with Synfire.
Note: Band-in-a-Box can also be used as a VST plugin.

Yes, a real AI song you made.

What if one day these A.I.s being developed today browse the movie databases, come across those movies, and then come up with ideas like: "Oh, I want to be like this or that A.I. These are my idols. I want to do what they did." Everyone needs idols, why not an A.I.? Think about that.

 

Why should AI come up with ideas? It has no emotions :-)
Seen a lecture about AI on Dutch TV: no dangers to expect from AI.


 

Wed., 06.09.2023 - 23:30

Oops, I might have spoken too soon. It's doing the thing that caused me to stop the first time: when you load an audio song into FL Studio to try to sync to, on pressing play the sound starts increasing in volume and distorts, almost as if feedback is happening or the sound is being triggered multiple times. This happens only some of the time, and sometimes it doesn't happen at all for a while. Perhaps it's the way the sampler runs in FL, or something else. Still, better than the last time I tried it, and I ran into no problems with FL's instruments, which is awesome.

Thu., 07.09.2023 - 07:40

FL Studio looks much more like a DAW today than years ago when it was still called "Fruity Loops" or something. Good to know.

Added a column "Load as Plug-In" to the List of compatible DAWs. If that works with other DAWs too, nobody should hesitate to update the list.

Thu., 07.09.2023 - 13:15

Thanks for the kind words. Btw: If I should ever be asked what this KIM in the song is all about, I will of course clear the matter up. :)

the sound starts increasing in volume and distorts almost as if feedback is happening

Audio files should not be placed at the very beginning of the song. It's a good idea to let the whole song start at measure two (i.e. shift containers in Synfire and audio/MIDI patterns in FL Studio one measure later). And if you start your Synfire project with the included FL Studio project, it may be necessary to open the FL Studio window once to get everything sorted (just open and close it once after startup).

And yes, I should have mentioned that this is on Windows. I can't tell if it will work on Mac as well. Don't even know if there is also a plugin version of FL for Mac.

I also don't want to overly advertise FL Studio. It still has its limitations, of course. But the workflow reminds me a lot of Synfire. For example: patterns. Patterns can be anything. They can contain MIDI data for single instruments or for instrument groups, or automation data. Pretty much like the containers in Synfire.

But of course the limitations: no decent support for articulations, a rather cumbersome piano roll editor, and the audio recording features certainly won't make everyone happy either. And of course no notation editor. But I can live with all that if this synchronization issue is finally solved reasonably.

 

Thu., 07.09.2023 - 20:37

Hmm, just tried your suggestion of moving the timing on both FL and Synfire a bar later; it doesn't seem to do the trick. Though when I started playing with FL's mixer the problem was alleviated for a bit, until I started messing with the timing again. I'm starting to suspect the problem is on the FL Studio side, as I can see the waveform distorting there, and the plugin version has always had a few issues (I've been a user since about version 3). I'm on Windows 10.

I would not worry about over-advertising FL, as it has some unique features, and while maybe cumbersome, the piano roll may be one of the best in the business. It is derided by many due to its confusing nature: as you mention, a pattern can control many device channels, which can then be routed to any mixer channel, as if all its parts were separate components, so it's important to stay organized with the naming and coloring of the patterns, channels and tracks. One of my favorite features for mixing is the "current track", to which the currently selected track is routed; one can load monitoring and spectrum-analyzing plugins there to see how much of the audio spectrum a channel, a group of channels or the master is taking up. Very good for fitting everything together spectrally.

Included is an unlisted video of said effect, though in the video the sound came out even MORE distorted than what I'm getting during playback. WARNING, LOUD!

Thu., 07.09.2023 - 22:04

I'm sorry that it doesn't work for you, but I probably won't be able to help much. As I said, I'm not really an FL Studio expert. For me it worked right away.

The only thing I could imagine is an incompatibility between the audio settings in FL and in Synfire, because in this setup the output of FL is sent through the Synfire audio engine. But I looked again, and you can't really do anything wrong. In the picture below you can see the audio settings in FL and in Synfire side by side:

 

On the left side you see the settings in FL; it says: "Status: Linked to communication plugin Cognitone Audio Engine". You can't even change the sample rate setting there; that's defined by Synfire's audio engine (right side). If I change the sample rate at Synfire's audio engine (which is possible), it automatically changes on the FL Studio side as well. For Synfire's audio engine I used the FL Studio ASIO driver here, but that's not necessary; you can use any other driver too. Make sure that the "Slave tempo" checkbox in FL is checked. That's all.

Ah, one more thing: the audio files you load into FL must surely have the same sample rate as the setting in the audio engine. I'm not sure if FL converts that automatically.
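If you want to check this yourself before importing, a quick stdlib-only script can read a WAV file's sample rate and compare it with the engine setting. This is just a sketch; the file name and the 44100 Hz default are illustrative assumptions, and it only covers WAV files (other formats need third-party libraries).

```python
# Sketch: verify that a WAV file's sample rate matches the audio engine
# setting before loading it into the DAW. Stdlib only; WAV files only.
import wave

def sample_rate_matches(path: str, engine_rate: int = 44100) -> bool:
    """Return True if the WAV file's sample rate equals the engine setting."""
    with wave.open(path, "rb") as wav:
        return wav.getframerate() == engine_rate

# Demo: write a short silent test file at 48 kHz and check it.
with wave.open("demo.wav", "wb") as out:
    out.setnchannels(1)          # mono
    out.setsampwidth(2)          # 16-bit samples
    out.setframerate(48000)      # deliberately mismatched rate
    out.writeframes(b"\x00\x00" * 480)

print(sample_rate_matches("demo.wav"))         # False: 48000 != 44100
print(sample_rate_matches("demo.wav", 48000))  # True
```

A mismatch like this is exactly the situation where playback can come out pitched or timed wrong if the host doesn't resample automatically.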

 

Fri., 08.09.2023 - 04:03

A fine example of frenetic technopocalyptic music production.

Here's the Windows/Mac comparison for FL Studio:

FL Studio Windows vs macOS Support - FL Studio (image-line.com)

I see that the Mac version does not support Rewire. 

I'll be switching from Windows to Mac starting around 9/15, when my Mac Studio computer is due to arrive. Since I know next to nothing about MacOS, I'm reluctant to further muddy the waters by simultaneously learning a new DAW. (I currently use Cubase, and like it quite a bit.) But maybe later on...

However, does the use of FL Studio as a plugin within Synfire offer a workaround for Synfire's limitation of having only a single stereo audio output? If so, then that would be a strong incentive for me to give it a try. As I've mentioned previously, I want to be able to route multiple audio streams to various outboard processors from within Synfire, especially for ambient music production, where the effects are an integral part of the composition.

FWIW I currently work within Synfire until the project has gelled enough to move it over to Cubase. This works well for me, but on rare occasions I'll use the free version of MPC Beats as a Synfire plugin to play an audio track (which I've exported from Cubase) in sync with a Synfire project.

Fri., 08.09.2023 - 08:17

a workaround for Synfire's limitation of having only a single stereo audio output

With Vienna Ensemble Pro (VEP) you can easily get past all mixing and routing bottlenecks. Don't mistake it for an orchestral-specific thing. It is actually an entirely remote-controlled DAW. 

You can run multiple instances on different computers and all audio is sent back to Synfire, as if all instruments were hosted by the Audio Engine directly. It has total recall, i.e. when you load a Synfire arrangement, the server(s) will restore the state last saved with the Synfire rack. When you close it, VEP will unload the server instances, too.

At only $95 you get a lot of bang for the buck (not an official endorsement, just my opinion)

Fri., 08.09.2023 - 08:56

I see that the Mac version does not support Rewire. 

You don't need Rewire, or MTC or MIDI clock or anything like that for synchronizing the FL Studio plugin with Synfire. Just load it and it synchronizes. At least on Windows. On Mac you should test it (with the demo version of FL), before you buy anything.

 

However, does the use of FL Studio as a plugin within Synfire offer a workaround for Synfire's limitation of having only a single stereo audio output? 

The short answer: No, probably not. With the setup above, the FL Studio plugin connects automatically to Synfire's audio engine and I don't see any option to change that.

The long answer: It depends on your workflow, I guess. During your composing work you will probably transfer the results successively from Synfire to FL Studio (either by MIDI file export and import, or via drones and drag and drop). At some point everything is in FL Studio, and then it's time to save it as a separate FL Studio project. From there you can continue working directly in the standalone app of FL Studio, which has a few more options in terms of audio routing. But whether the standalone app of FL Studio really meets your needs regarding audio routing is something you should test first, because I can't really tell.

But what I can say for sure is that you can export the mixer tracks in FL Studio as individual audio stems and then import them into a DAW of your choice (Cubase, for example) and then do the mixing and mastering there. 

 

Fri., 08.09.2023 - 09:10

Thanks for all the help. While I'm sad to report that I was not able to fix the issue even with the latest suggestions (FL ASIO, slave sync), this might be useful for someone else attempting this in the future. In the same spirit, I will submit that I was able to achieve the goal by using Reaper through a virtual MIDI cable app (loopMIDI or LoopBe1) and syncing via MIDI. I can't remember exactly which one, as they are both running on my computer right now and this was attempted some time ago. I also can't remember if it was a completely smooth affair, but it definitely got the job done. The thing that has got me excited, though, is that my knowledge of setting up devices in Synfire has improved, especially with the new tutorial videos that have been released. This opens up a world of great sounds and effects that come with FL Studio, which I believe is one of its strengths.

Re: multiple audio outputs: I don't believe running FL Studio as a VST will get around the single stereo output limit. If you look at Juergen's image of the FL Studio setup page: while normally capable of multiple outputs, in VST mode FL Studio will route all audio back to the host, so you run into the limit again. This can be circumvented by running, as I have, via MIDI sync, or perhaps by using the Synfire drones, which I am less experienced with. BTW, I work in a similar way, where I build a song until it's satisfactory and then give it a second pass, editing notes in FL Studio or Reaper.
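For anyone curious about the MIDI-sync route mentioned above: per the MIDI 1.0 specification, MIDI clock runs at 24 pulses per quarter note, and the receiver derives the tempo from the spacing of the pulses. A small sketch of that arithmetic (the function names are illustrative, not part of loopMIDI or any DAW's API):

```python
# Sketch of the arithmetic behind MIDI-clock sync over a virtual MIDI
# cable: the sender emits 24 clock pulses per quarter note (MIDI 1.0),
# and the receiver recovers the tempo from the pulse spacing.

PPQN = 24  # MIDI clock pulses per quarter note, per the MIDI 1.0 spec

def clock_interval_ms(tempo_bpm: float) -> float:
    """Milliseconds between successive MIDI clock pulses at a given tempo."""
    return 60_000.0 / (tempo_bpm * PPQN)

def tempo_from_interval(interval_ms: float) -> float:
    """Inverse: recover BPM from the measured pulse spacing."""
    return 60_000.0 / (interval_ms * PPQN)

print(round(clock_interval_ms(120.0), 3))  # ~20.833 ms between pulses
```

The tight pulse spacing (about 21 ms at 120 BPM) is why a jittery virtual cable or audio buffer can make MIDI-clock sync feel less rock-solid than the plugin route described in the original post.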

Sat., 09.09.2023 - 00:37

Thanks to all who chimed in.

VEP looks to be the most interesting option, however it's ~$205 for the first license, not ~$95...not that I find the higher price prohibitive.

After I get settled into the Mac universe I'll have a powerful PC that's not doing much of anything, and at that point I'll check out the VEP demo.

I have to admit that at present I can't conceptualize how VEP would work. But I guess that's what YouTube videos are for.

Sat., 09.09.2023 - 20:51

I know it probably won't help much, but just to demonstrate how the setup should work, I made a little video. "Proof of Concept", so to speak. 

In the video I play an excerpt of the song, and that's from a project phase when the instrumental part was already finished and the melody line for the vocals was to be created in Synfire.

The instrumental part already plays entirely in FL Studio (the lower part of the picture in the video) and consists of MIDI patterns and audio samples. The audio samples can be identified by their waveform appearance, and as you can hear, they play flawlessly, as do the MIDI patterns.

The flute sound used to create the vocal melody comes from an additional rack module in Synfire in which a Halion plugin was loaded. The rack module with FL Studio and the other rack modules work side by side without any problems, and I have not noticed any latency problems between FL Studio and the other plugins. By the way, the whole thing runs on a pretty old computer from 2012, under Windows 10.

Sun., 10.09.2023 - 00:06

Hmmm, I think maybe the reason you don't hear it is that you are using short clips. I just tried it with short clips and the problem is minimized. But I can confirm it on a different setup, so I can reproduce it on both Windows 10 and 11, on an AMD Ryzen 7 2700X and an Intel i7-8565U, and with different sound cards (laptop internal and RME UFX). I tried many setting changes in both FL and Synfire to no avail. In any case it's not a big deal for me, since I can use Reaper for this specific job, which is analyzing an entire song.

Tue., 12.09.2023 - 00:13

Extra info via ChatGPT:
Question: AI cannot have ideas.
Answer:

It's true that AI, as it exists today, doesn't possess consciousness, self-awareness, or the capacity to generate ideas in the same way humans do. AI systems are built to process and analyze data, recognize patterns, and make decisions based on their programming and training data. They don't have subjective experiences, thoughts, or genuine creativity.

When AI generates something that might appear as an "idea" or creative output, it's often the result of complex statistical modeling and pattern recognition. For example, AI can generate text, art, music, or even make recommendations based on patterns it has learned from vast amounts of data. However, this is fundamentally different from human creativity and ideation, which involve complex cognitive processes, emotions, and consciousness.

While AI can simulate certain aspects of creative thinking, it does so within the confines of its programming and the data it has been trained on. It lacks true understanding, intentionality, and the ability to generate ideas in the sense that humans do. AI is a valuable tool for tasks like optimization, automation, and data analysis, but it does not possess consciousness or genuine creativity.

Tue., 12.09.2023 - 09:40

Haha, yes, there is enough to worry about in these current times.
Deep fakes are scary though, because what is true and what is false?
No, ChatGPT doesn't make me feel better, because it doesn't have the consciousness to want to start doing this.

Tue., 12.09.2023 - 23:03

Current AI like ChatGPT and similar systems, trained on massive amounts of data scraped from the internet, are just predictive engines. They have no real understanding, empathy or emotion. They also currently do not learn after the initial training phase. What if the only difference between you and the current batch of AI engines is the ability to learn over time, possibly from your mistakes?
What if emotions, understanding, empathy, etc., are just predictive output based on input fed through a "net" constantly (re)trained over a number of years? It may not be long before knowledge scientists crack the ability to constantly retrain their models. Will that mean the end of any difference between AI and humans?

Love/hate the tune, by the way. I find it grates on my nerves, but that was probably intentional given the subject matter?

Wed., 13.09.2023 - 08:05

I've read that ChatGPT runs on 50,000 GPUs in a huge dedicated data center. I don't see how that will ever be scalable, or how this "intelligence" is supposed to become mobile and autonomous. The imminent threat I see is with disinformation and deep fakes being generated and disseminated en masse. Civilization doesn't work without trust and authoritarian psychopaths are always eager to undermine it.

What angers me most, however, is that, like all tech, big AI will make a few very rich people even more obscenely rich, while the rest of us lose our jobs, livelihoods and purpose in life.

For art and music, AI could be a boon. Provided that the training takes place with the consent of the original authors and artists.

Regarding the song, I think it is difficult to follow because there is no continuous rhythm (there are hints of drum & bass). But that's certainly intended. I consider it a radically free-form sound painting of sorts. And at that it's really unique IMO.

Wed., 13.09.2023 - 13:57

For art and music, AI could be a boon. 

A boon? What kind of boon?

I'll tell you how that's going to develop: The next step after AI image generation will be AI movie generation. You enter the script and the finished movie comes out. Including soundtrack of course. But wait. Why should you actually enter a script? The AI writes it itself, of course. You just enter: "Create a sequel to the last Bond film. But I want to see more blood". The whole thing will then be a smartphone app. Not sure if everyone will see that as a boon.

But sure, in 100 years or so, no one will care about that, and no one will be able to believe that real people once actually walked around on a set to create a movie and that millions of dollars were spent on it. People will find this incredible.

Wed., 13.09.2023 - 15:00

There will be a wave of AI generated stuff and despite the awe about what tech can do, people will not want to consume this and reject it. If there's no market for it, the whole thing will fade.

On the other hand, artists and authors can get inspiration from it and be more productive. Adding human touch and originality to something artificial, or taking a generated fragment as a model for something that will be executed manually, is easier than starting entirely from scratch.

Pop music on the radio has been as generic as can be for at least 25 years now and nobody cares. Me neither. Some people want burgers and fries every day, while others enjoy trying meals from different parts of the world. I've heard that some people are even cooking their own meals, much like some people enjoy making their own music at home.

Wed., 13.09.2023 - 15:55

Will AI replace traditional filmmaking?
The Raindance Film Festival says humans can keep themselves relevant in an increasingly automated industry by:

--Focusing on the unique perspective and emotional depth of human stories.
--Focusing on tasks AI currently struggles with, such as those requiring empathy, social skills and physical dexterity.
--Adapting to and leveraging the power of new technologies to improve their own work.
But Block said he is not worried about AI replacing humans for now.

"AI by itself cannot create a brilliant and marketable idea," he says. "Even in a world where the AI can feed the idea into a box and a movie comes out, you still need a human to make it and look at it and say, 'this sucks' or 'this is great.'

"And it's that process of bringing in the human that makes the work valuable."

Wed., 13.09.2023 - 16:11

 

--Focusing on tasks AI currently struggles with......But Block said he is not worried about AI replacing humans for now.

"Currently" and "for now" are the key words here. Yes, I agree, we still have a few more years left.