On Intelligent Generators

Artificial intelligence has been around for decades, but only recently have neural networks, and deep learning in particular, become a hot topic again. I wanted to share a bit of insight into Cognitone's perspective on this and what we are currently doing in the way of new intelligent features for Synfire.

One of Synfire's unique strengths is that you can harvest any MIDI material, including your own takes, to collect a wide variety of musical expressions (phrases) for arbitrary re-use, transformation and combination. Playing around with the example phrases that ship with Synfire is already fun, but the prospect of building original music from your own breed of expressions is, of course, more compelling.

Once you get the hang of creating your own phrases, it quickly becomes a straightforward routine. Above all, because you work with your own material (wherever it came from), your output is not limited by some hard-wired concept of musical style and structure, or by a random generator that does magical stuff you can't control.

Admittedly, to enjoy this freedom, you need to invest a little bit of work up front. Wouldn't it be nice if this were an assisted and largely automated workflow? Imagine the productivity, fun, instant reward and lucky surprises this would entail.

Motivated by this prospect, Cognitone is currently researching new technology that would help with creating original phrases, harmony progressions, parameters and possibly even entire songs by introducing assisted and automated elements into the workflow (don't hold your breath for that complete random song feature, though).

Generators, anyone? Well, not so fast. In order to be effective, every music generator has to make automated choices, whether those concern generated notes (e.g. from fractals or functions) or blends of previously trained patterns (e.g. the output of a convolutional neural network). Beyond the latest neural network craze, there are also traditional generative concepts in AI, such as generative grammars, finite automata, inference rules and more. All of these could be utilized for generating musical data. A toy example of the grammar approach is sketched below.
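To illustrate, here is a minimal sketch of a generative grammar that expands abstract symbols into scale degrees. The rules and vocabulary are invented for this post and have nothing to do with Synfire's internals; the point is merely that the only randomness lies in choosing which rule to apply, so every output still obeys the grammar.

    import random

    # A toy generative grammar for melodic phrases. Rules and vocabulary are
    # invented for illustration; this is not how Synfire represents phrases.
    # Non-terminals (upper case) expand into sequences; terminals are scale degrees.
    GRAMMAR = {
        "PHRASE":  [["MOTIF", "MOTIF", "CADENCE"]],
        "MOTIF":   [["1", "3", "5"], ["5", "4", "3"], ["1", "2", "3"]],
        "CADENCE": [["2", "1"], ["7", "1"]],
    }

    def expand(symbol):
        """Recursively expand a symbol into a flat list of scale degrees."""
        if symbol not in GRAMMAR:              # terminal: a scale degree
            return [symbol]
        rule = random.choice(GRAMMAR[symbol])  # the only random choice: which rule
        return [degree for part in rule for degree in expand(part)]

    print(expand("PHRASE"))  # e.g. ['5', '4', '3', '1', '2', '3', '7', '1']

Randomizing the rule choice yields endless variety, yet every phrase still ends on a cadence, simply because the grammar says so.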

In general, however, randomness only works to some extent, because music isn't simply a series of notes. The often boring and by no means catchy output of countless random music generators out there has proved this for years.

Music is a Language

The saying goes that music is a language that is understood everywhere. In fact, it really is a full-blown language with a vocabulary, grammar and semantics. This also explains why randomness isn't cutting it: imagine randomizing the words or letters of a book. Not many people would want to read that. In order to generate meaningful results, a generator needs to account for the inherent rules of a language.

Interestingly, what good composers have always done is define their very own language and build all their works from expressions of that language. Every work is basically made up of words, sentences and paragraphs in that language. This is what makes up a recognizable style. And it's what makes a composer successful.

This is also why making elements of one particular language permanent parts of the user interface (e.g. Coda & Reprise, Question & Answer) might not be the best idea, unless the tool is designed to cover a specific range of musical genres and eras. What if my music follows different ideas?

Therefore, any truly universal generator requires a customizable definition of a language (style/genre), the many parameters of which could then be randomized. The downside of this clean approach is that Cognitone (or any other developer, for that matter) can't possibly create hundreds, if not thousands, of such language definitions in advance (cost aside, no single developer could likely understand and define all the intricacies of every style out there). Even if we did, chances are most composers still wouldn't find what they are looking for.

What Cognitone is doing right now, therefore, is experimenting with technology that, if possible, lets users define their own musical language elements and structure, which Synfire would then use to generate phrases, parts and possibly entire songs. Ideally, these languages could be shared and modified, just like arrangements and other files.
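Purely to make that idea more tangible, here is a rough sketch of what a shareable language definition with randomizable parameters could look like. Every field name and value below is hypothetical; this is not Synfire's file format or a committed feature.

    import random

    # Hypothetical, user-editable style definition (all names and values are
    # made up for illustration; this is not Synfire's actual format).
    STYLE = {
        "name":        "My House Style",
        "scales":      [[0, 2, 4, 5, 7, 9, 11],   # major
                        [0, 2, 3, 5, 7, 8, 10]],  # natural minor
        "phrase_lens": [4, 8],                     # bars per phrase
        "rhythms":     [[1, 1, 2], [2, 1, 1], [1, 2, 1]],  # relative durations
        "structure":   ["A", "A", "B", "A"],       # section order
    }

    def sample_parameters(style):
        """Pick one concrete parameter set by randomizing within the style's limits."""
        return {
            "scale":      random.choice(style["scales"]),
            "phrase_len": random.choice(style["phrase_lens"]),
            "rhythm":     random.choice(style["rhythms"]),
            "structure":  style["structure"],  # fixed by the style, not randomized
        }

    print(sample_parameters(STYLE))

A generator built on top of this would randomize only within the limits the definition allows, which is what keeps the output recognizable as a style rather than noise.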

Neural Networks

While we are at it, I should mention that I spent the better part of 2017 experimenting with convolutional neural networks (CNNs), coming to the conclusion that they don't suit the purpose well. First, these networks were primarily invented for recognition rather than generation. Second, training these networks for all conceivable styles our users might want to compose in is an impossible task.

And third, since nobody wants to end up with variations of the same stuff thousands of others are also generating, a composer will want to modify and extend the styles, if not create their very own. Unfortunately for CNNs, this requires extensive AI knowledge, special hardware and huge databases of pre-edited and pre-tagged music examples.

So, however hip that deep learning thing currently is, it is only of limited use for the purpose of composing original music. Oh, and if you happen to have that many examples of the elements you envision using at hand anyway, why not import them into Synfire and start right away?

Outlook

Although there is no estimate yet as to if and when some of this will surface as a new feature, I thought it would be nice to let you know what's going on behind the scenes here, in addition to the day-to-day software engineering that grinds its way forward along our agenda.

(comments are welcome in this thread)
