
Experimenting With Generative Features


As summarized in a blog post published today (which I'd like to promote here), we are experimenting with potential new features that help you create your own unique phrases and parameters more quickly and easily.

In this thread, I hope to share some preliminary results in the near future. Feel free to add your comments and suggestions here.


Sat., 24.02.2018 - 14:44

First of all, thanks Andre. I have four wishes for the new version.

1. Hardware integration. (I know MSB/LSB program changes are possible for all keyboards, but building a sample player with the Sound editor is quite complex.)

2. Drag and drop of MIDI into Synfire.

3. Articulation notes used with a VST. I know this is possible, but it is sometimes quite complex for a VST. For example, with Ample Guitar's strumming patterns it is a problem to capture an articulation in a single figure.

4. Figure editing and combining made simpler than it is now, because combining several phrases from a phrase library into a single one is not easily achieved.

 

Thanks a lot again.

Mon., 26.02.2018 - 22:22

Hi Andre,

I would like to have a bass generator that works independently of the chord progression ("Orgelpunkt" (pedal point), pump bass, etc.).

And it could be useful to give the chord layers more freedom in the selection of chords. For jazz and classical styles it is sometimes useful to work with dissonant clusters.

I would also like more possibilities in the variation creator; the three presets are often too raw.

A generator for unison tracks would be nice. Example: a bass track and a melody track play exactly the same notes at a preselected interval.

Wed., 07.03.2018 - 11:24

A good point, akem!

For "Orgelpunkt" (pedal point) or ascending/descending basslines, it would be very helpful if "Live Chord Detection" could determine the correct bass notes.

Example: if you play a C major chord and then change to F major with C in the bass (a pedal point), "Live Chord Detection" shows the correct F major chord, but with "Bass 1 (F)", which is definitely not correct. Maybe there are settings to change this behaviour, but I have not found them yet.

A "Live Chord Detection" that considers the bass notes correctly could really improve intuitive playing with chords and basslines.

Many chord progressions work in such a way that only one note of the chord moves up or down while the other notes remain where they are. If "Live Chord Detection" determined the chords with this behaviour in mind, it would be much easier for "Phrases" and "Figures" to interpret the progression as intended.
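The slash-chord behaviour described above can be sketched in a few lines. This is a purely hypothetical illustration (not Synfire's actual detection logic; the `TRIADS` table and `detect` function are invented): treat the lowest sounding note as the bass, match the remaining pitch classes against simple triad templates, and report "chord/bass" when bass and root differ.

```python
# Hypothetical slash-chord detection sketch (names and logic invented):
# the lowest note is the bass; pitch classes are matched against triad shapes.
TRIADS = {"maj": {0, 4, 7}, "min": {0, 3, 7}}
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def detect(midi_notes):
    bass = min(midi_notes)
    pcs = {n % 12 for n in midi_notes}
    for root in range(12):
        for quality, shape in TRIADS.items():
            if {(root + i) % 12 for i in shape} == pcs:
                name = NAMES[root] + ("" if quality == "maj" else "m")
                if bass % 12 != root:
                    name += "/" + NAMES[bass % 12]  # bass differs from the root
                return name
    return None

print(detect([48, 65, 69, 72]))  # C2 under an F-major triad -> "F/C", not "Bass 1 (F)"
```

A real implementation would of course need sevenths, sustained bass notes over time, and more, but it shows the principle: the detected bass is reported relative to the chord root.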

More freedom for chord layers and variations!

Sat., 21.07.2018 - 23:48

I just read your post about an interface to generators (a few months late). I have been using (and designing) algorithmic composition software for over 20 years, so this sounds rather interesting to me. Has there been any development since your post that you would like to share?

Best,

Torsten 

Sun., 22.07.2018 - 02:17

EDIT: Please disregard what I wrote below: the MIDI export is actually better than I remembered – I just trialled it with some algorithmically generated material. :)

The main problems for me personally are certain rhythmic restrictions (e.g., tuplets are seemingly not always correctly recognised, and complex rhythms seem to be quantised). In particular, the export of tuplets is restricted, but such problems are not related to the post below.

 

If Synfire implemented some sort of API, e.g., a text-based data format for specifying Synfire figures, users could algorithmically generate figures in various ways in their preferred algorithmic composition environment. Relatively widely used computer-aided composition systems are, for example, Opusmodus (http://opusmodus.com/, currently my tool of choice), Max with the Bach library (https://cycling74.com), Open Music (http://repmus.ircam.fr/openmusic/home), PWGL (http://www2.siba.fi/pwgl/), and Common Music (https://sourceforge.net/projects/commonmusic/).

To an extent, importing individual figures generated in a computer-aided composition environment is already possible today: export an algorithmically generated MIDI file and then import it (or a part of it) as a figure etc. into Synfire. However, such an approach requires further editing of the imported MIDI data in Synfire (as does any MIDI input). It would be more interesting to be able to generate material where users could algorithmically determine the anchor of a segment, or specify that certain symbols in the figure are, e.g., supposed to be bass notes.

What would be particularly useful is a way to import an algorithmically generated longer section that is already subdivided into individual segments, i.e., some API support for marking boundaries between segments (e.g., using nested data structures like Smalltalk arrays). Such an interface could smoothly connect both worlds: the algorithmic generation of raw material in an algorithmic composition application on the one hand, and its manual revision at a high level of abstraction and its harmonisation in Synfire on the other.
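As a purely hypothetical illustration of what such a text-based figure format could look like (Synfire offers no such API; all field names here are invented), segment boundaries could be explicit and symbol times given relative to each segment's anchor:

```python
import json

# Invented, illustrative figure-sequence format; NOT a real Synfire format.
# Times are in beats, relative to each segment's anchor.
figure_sequence = [
    {   # segment 1: anchor on beat 0
        "anchor": 0.0,
        "symbols": [
            {"type": "bass",   "offset": 0.0, "length": 1.0, "pitch": 36},
            {"type": "melody", "offset": 0.0, "length": 0.5, "pitch": 60},
            {"type": "melody", "offset": 0.5, "length": 0.5, "pitch": 64},
        ],
    },
    {   # segment 2: anchor on beat 2, with a pickup symbol before the anchor
        "anchor": 2.0,
        "symbols": [
            {"type": "melody", "offset": -0.5, "length": 0.5, "pitch": 67},
        ],
    },
]
print(json.dumps(figure_sequence, indent=2))
```

The point is only that segment boundaries and anchor-relative offsets are explicit in the data, so an importer would not have to guess them from linear MIDI.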

All the above-mentioned systems happen to use or support Lisp syntax and to run Lisp code. (Most of them are implemented in the programming language Common Lisp, and Common Lisp is part of their user interface. Even for Max, which is not based on Lisp, there are extensions for running Common Lisp code. Only Common Music is meanwhile based on Scheme instead.) Translating a static data structure from Lisp syntax into Synfire's Smalltalk would be rather straightforward. If Synfire offered some documented data input format for figure sequences in, say, its native Smalltalk syntax, or a widely used simple format like JSON for better integration with even more systems, I could provide a Lisp interface that outputs this format, which would allow all the above-mentioned systems to export figure sequences to Synfire relatively easily.
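Reading such Lisp data from a non-Lisp host is indeed straightforward. A minimal sketch (in Python, purely illustrative; the figure vocabulary shown is invented) that parses an S-expression into nested lists, which can then be serialized to JSON:

```python
import json

def parse_sexpr(text):
    """Parse a simple S-expression into nested Python lists (ints and strings)."""
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()

    def read(i):
        if tokens[i] == "(":
            node, i = [], i + 1
            while tokens[i] != ")":
                child, i = read(i)
                node.append(child)
            return node, i + 1  # skip the closing paren
        tok = tokens[i]
        return (int(tok) if tok.lstrip("-").isdigit() else tok), i + 1

    node, _ = read(0)
    return node

# A hypothetical Lisp-side figure sequence (vocabulary invented), ready for JSON:
sexpr = "(figure (segment (anchor 0) (notes 60 64 67)))"
print(json.dumps(parse_sexpr(sexpr)))
```

The reverse direction (emitting such a format from Common Lisp) is equally mechanical, which is why the loose coupling proposed here would be cheap to support on both sides.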

Of course, that would be a very loose connection to algorithmic composition systems, and a tighter coupling (where figures can be algorithmically generated and transformed on the fly) would be more flexible. But I figure (pun intended) that a static "figure API" as proposed above is much easier to implement, and its appeal is that it could form a bridge from many computer-aided composition systems to Synfire.

Best,

Torsten

 

Mon., 23.07.2018 - 09:59

Hi Torsten,

Thanks for your suggestions. As said, import/export of vector data is already on our agenda. Whether that will be XML, JSON or some Lisp-based format remains to be seen.

Figures, however, are structured vectors with different classes (symbol types) and groupings (segments), where time is expressed relative to the anchor symbol (+/-). This is very specific to Synfire and therefore unlikely to be supported by third-party software anytime soon.
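To illustrate the anchor-relative timing described above (field names are invented, not Synfire's internal representation), converting absolute note start times into +/- offsets around a segment anchor might look like this:

```python
def make_segment(notes, anchor):
    """Convert (pitch, absolute_start_beats) pairs into a segment whose symbols
    are timed relative to the anchor; negative offsets fall before it."""
    return {
        "anchor": anchor,
        "symbols": [
            {"pitch": p, "offset": t - anchor}
            for p, t in sorted(notes, key=lambda n: n[1])
        ],
    }

seg = make_segment([(60, 4.0), (64, 3.5), (67, 4.5)], anchor=4.0)
print([s["offset"] for s in seg["symbols"]])  # [-0.5, 0.0, 0.5]
```

The symbol 0.5 beats before the anchor is what makes this representation different from plain linear MIDI, where every event only has an absolute time.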

So your best bet is to have third-party tools export linear data, like MIDI or CC, and let Synfire convert that according to your preferences.
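Exporting such linear data from one's own tools needs no special libraries. A minimal sketch of writing a Type-0 Standard MIDI File with only the Python standard library (one channel, no CC, fixed velocity; a simplified illustration, not production code):

```python
import struct

def write_midi(path, notes, ppq=480, tempo_bpm=120):
    """Write a minimal Type-0 MIDI file from (pitch, start_beats, dur_beats) tuples."""
    def vlq(n):
        # Variable-length quantity encoding used for MIDI delta times.
        out = bytearray([n & 0x7F])
        n >>= 7
        while n:
            out.insert(0, 0x80 | (n & 0x7F))
            n >>= 7
        return bytes(out)

    events = []  # (absolute tick, raw message bytes)
    for pitch, start, dur in notes:
        events.append((round(start * ppq), bytes([0x90, pitch, 100])))        # note on
        events.append((round((start + dur) * ppq), bytes([0x80, pitch, 0])))  # note off
    events.sort(key=lambda e: e[0])

    track = bytearray()
    # Tempo meta event (FF 51 03) at tick 0: microseconds per quarter note.
    track += vlq(0) + b"\xff\x51\x03" + struct.pack(">I", 60_000_000 // tempo_bpm)[1:]
    last = 0
    for tick, msg in events:
        track += vlq(tick - last) + msg  # delta time, then the message
        last = tick
    track += vlq(0) + b"\xff\x2f\x00"  # end-of-track meta event

    with open(path, "wb") as f:
        f.write(b"MThd" + struct.pack(">IHHH", 6, 0, 1, ppq))  # header chunk
        f.write(b"MTrk" + struct.pack(">I", len(track)) + bytes(track))
```

For example, `write_midi("out.mid", [(60, 0.0, 1.0), (64, 1.0, 1.0)])` yields a file that Synfire's MIDI import (or any DAW) should read.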

Or go and write your own tools, of course, which are fully aware of Figure semantics.

Sun., 26.08.2018 - 15:04

Thanks Andre for the insights into the directions you are thinking in.

Having the option to influence the composition at all levels of abstraction is imho important. If I come up with a certain concept for the song, I want that to be respected. And if I come up with a melody phrase I find cool, I want the program to take it as specified and weave the rest around it. (That is still one of the points I sometimes struggle with in Synfire: making it play some melodies exactly as I specified them.)

But the option to define ourselves the "language" a generator uses to create notes, chords, phrases, sets and even songs, which can then be shaped with varying levels of parameters, randomness or "provided examples", also sounds highly interesting. It would be composition on another level. Not necessarily better or worse, just different.

As long as we still have the option to override any decision made by the algorithm, and the algorithm can adapt to these manual overrides to make everything coherent again. Such a system is presumably much more challenging to build, but it is also what would make such software really great: not replacing the composer, but giving him or her a sparring partner for a creative dialogue, throwing ideas back and forth between person and machine!

Wed., 26.06.2019 - 20:43

Sorry to bump this thread again, but I thought it fits best here:

The other day I stumbled across this: (https://generated.space/sketch/rough-apparatus-client/) and found that it nicely visualizes how generators based on parameters tend to create variations of always the same thing. Once you have seen two or three of them, you have seen them all. If this were somehow translated into the music domain, you'd have a generator that creates variations of a style (actually: of someone else's work).

If, for example, you had a generator for 12-bar Blues, you'd get a thousand variations of blues riffs, all of which would sound much like you've already heard them decades ago. Why is that? Because the style itself is what bears the qualities of an original work, not some variation of it. We instantly recognize "12-bar Blues" as an expression of a bygone historic era. I'd say 99% of listeners can't even tell the difference between the countless interpretations and variations that have been around since that time (and are still created today).

The "Rough Apparatus Client" algorithm is the original work. The many variations it generates are merely renderings of that.

So software that is supposed to help you 'compose' would ideally help you develop a new style. Something that is instantly recognizable and refreshingly different. Generating variations of that style is the easy part: create one original style and 12 variations of it, and your new album is complete ;-)


Wed., 26.06.2019 - 22:47

Nice link! And well put: the unique (but not random!) ideas are what make a piece special, not the 101st variation of a known phrase!

For some pop songs, it sounds as if the audio variant of such a permutation algorithm had been used, with the biggest common similarity between the top 100 chart songs of the last few years as the template, removing everything that made the individual songs special and great. Then the marketing machine is heated up to convince people that the product is supposed to be liked...

Thu., 27.06.2019 - 10:47

For some pop songs it sounds as if the audio variant of such a permutation algorithm was used

In these cases, it's the production values, the artist's performance and the fortunes invested in hyping the act that make the difference. But that's a different business: show business.

As a composer or songwriter, your job is to come up with melody, rhythm and harmony that can speak for themselves, until eventually some artist enhances them with their performance (unless it's instrumental). Pop is full of stereotypes indeed, but the styles of successful acts are still distinctive enough to be recognized on a solo piano. There is more to it than just four chords.

Even in the extremely stereotyped world of Pop, the original style of an artist builds on attributes that couldn't easily be implemented as parameters of some generic, all-encompassing 'Pop' style. One would probably need a choice of 50 distinctive Pop styles alone. And then along comes a disappointed user who was looking for style #51 ...

I'm 100% convinced of (and committed to) tools that support creating your own style. Sure, it means more initial work than merely fiddling with random parameters, but the reward is tremendously higher. And there are no limits to what can be created.