The era of “integrated music APIs”

This post was written for my client Spectralmind and initially appeared on their blog.

In a recent blog post, music analyst Mark Mulligan muses about a “Music start-up strategy 2.0”.

In essence, he asks whether or not a music startup necessarily needs to obtain music licenses from record labels. This is a question we discussed quite seriously at Spectralmind as well. Why would a music tech startup need music licenses?

Of course, we need large music catalogs to analyze; the larger, the better. Currently we work with sample libraries in the range of 100k items. What we need is temporary access to music inventories in order to run the tracks at high speed through our analysis software, without altering them in any way. But analysis alone does not necessarily require acquiring music distribution licenses. This seems to be in line with what Mulligan says:

“…. in the more immediate term start-ups should look at ways to deliver their experiences without licenses.  No I’m not advocating the Groove Shark approach, but instead leveraging the content licenses of digital music services that are pursuing ambitious API strategies.  Music start-ups should think hard about whether they really need to own music licenses themselves to deliver a great user experience, or at least whether they need to right away ……. In the era of integrated music API’s it is no longer crucial for a music service to have its own licenses.  An investor wouldn’t expect a mobile app developer to own Android, iOS or Windows Mobile so they need not expect a music service to own music licenses.”

Mulligan addresses the era of “integrated music APIs”. In fact, there is a range of companies out there, some of them startups themselves, striving to fuel a new wave of music applications by granting access to music, music metadata or other music-related information.

In other words: the scope of upcoming music apps goes far beyond the creation of just another download storefront or just another streaming portal. Playback of music is certainly a central use case, but much more is possible with music. The ways music consumers interact with their content are manifold, and with the broadening of digital listening experiences (e.g. through smartphones, cars, connected homes), new needs emerge for contextual services that improve discovery, search or social interaction around music. This new breed of music apps does not only accommodate consumer needs; it also helps to create differentiators for the established digital music distributors, each of them struggling to extend their footprint, if for nothing else than to generate the returns needed to cover the upfront payments for music licenses.

Search Ain’t Misbehavin’

This post was written for my client Spectralmind and initially appeared on their blog.

Searching for music outside the mainstream can be tedious. Recently I fell for a particular jazz piano genre called “Harlem Stride Piano” while listening to a radio broadcast. Stride piano developed in the 1920s and 1930s in New York as an advancement of ragtime. It is characterized by a rhythmic left-hand play in which the pianist alternates a bass note or octave on the first and third beats with chords on the second and fourth beats, while the right hand plays the melody line. This causes the left hand to leap great distances across the keyboard, often at breakneck speed. Back then, pianists like Fats Waller, James P. Johnson and Eubie Blake were famous stride virtuosos.

Louis Mazetier introduces Harlem stride piano

Today, only a few pianists are capable of playing stride, and I was curious to find out about contemporary “Harlem Stride Piano” interpreters and recordings.

The textual search for “Harlem Stride Piano” in iTunes led to zero results. Even in the advanced search of iTunes, you can only search for artists and interpreters, titles or track names, but not for genres. A search for just “stride piano” brought up one album, which fortunately carried both terms in its title. Similarly, Spotify’s search for “Harlem Stride Piano” did not match anything, whereas a search for “stride piano” returned a few albums because of the use of the terms “piano” and “stride” in their titles or tracks.

Still unsatisfied, I continued the search for contemporary stride players on Google, YouTube and Wikipedia and found artists like Louis Mazetier, Günther Straub and Bernd Lhotzky. Knowing their names finally helped me find the desired tunes in iTunes and Spotify.

This little research clearly depicts the limits of text-based music search. Its results depend largely on the coincidental presence of the chosen search terms in the title or artist name. If you have nothing but a tune, search is often impossible. What’s missing is a search for music based on the sound of a sample track.

Chasing contemporary “Harlem Stride Piano” records through Spectralmind’s audio intelligence platform, I certainly would have used Fats Waller’s “Ain’t Misbehavin’” as the seed track. For sure, a sound-similarity search would have brought up more and better results in far less time.
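To illustrate the idea (a toy sketch, not Spectralmind’s actual technology): a sound-similarity search can be framed as ranking a catalog by the closeness of precomputed acoustic feature vectors to those of a seed track. All titles and feature values below are invented for the example.

```python
import math

# Hypothetical catalog: each title maps to a precomputed acoustic
# feature vector (e.g. tempo, brightness, percussiveness), scaled
# to comparable ranges. All values are invented.
catalog = {
    "Stride Album A": [0.82, 0.61, 0.74],
    "Ambient Album B": [0.10, 0.25, 0.05],
    "Stride Album C": [0.75, 0.50, 0.65],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def rank_by_similarity(seed, catalog):
    """Catalog titles sorted from most to least similar to the seed vector."""
    return sorted(catalog, key=lambda t: cosine_similarity(seed, catalog[t]),
                  reverse=True)

# Seed: an invented feature vector standing in for "Ain't Misbehavin'".
seed = [0.80, 0.60, 0.72]
print(rank_by_similarity(seed, catalog))
```

Unlike text search, nothing here depends on the words in a title: the stride-like albums rank ahead of the ambient one purely because their (hypothetical) acoustic descriptions resemble the seed’s.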

Music – and How Computers Hear It

This post was written for my client Spectralmind and initially appeared on their blog.

Spectralmind works with music. But what is “music”? A look at Wikipedia gives some helpful clues about music and, unwittingly, even about Spectralmind:

“Music is an art form whose medium is sound and silence. Its common elements are pitch (which governs melody and harmony), rhythm (and its associated concepts tempo, meter, and articulation), dynamics, and the sonic qualities of timbre and texture.”

In fact, these elements of music are the ingredients Spectralmind uses for the creation of music tech products. Music is the base material we explore, analyze and extract information from:

Algorithms, packaged into software, “listen” to music. What the algorithms “hear” are musical properties, including rhythm, timbre and many others.
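As a deliberately naive illustration of how software can “hear” a property of sound, the snippet below computes the zero-crossing rate, a crude brightness/timbre descriptor, directly from raw audio samples. The synthesized sine tones stand in for real recordings; real analysis systems use far richer features.

```python
import math

def synth_tone(freq_hz, duration_s=1.0, sample_rate=8000):
    """Generate a pure sine tone as a list of samples (a stand-in for real audio)."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate) for i in range(n)]

def zero_crossing_rate(samples):
    """Fraction of consecutive sample pairs where the signal changes sign.
    Brighter or noisier sounds tend to cross zero more often."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0))
    return crossings / (len(samples) - 1)

low = synth_tone(110)   # A2, a low bass note
high = synth_tone(880)  # A5, a high note
print(zero_crossing_rate(low), zero_crossing_rate(high))
```

The higher tone yields a markedly higher rate: the program never “hears” a note the way a listener does, it only counts sign changes, yet that count reliably separates bright material from dark.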

Of course, a computer does not perceive music the way humans do. Computers just calculate; they cannot take into account the cultural heritage, emotions and interpretations that human listeners feel or are aware of.

“The border between music and noise is always culturally defined—which implies that, even within a single society, this border does not always pass through the same place; in short, there is rarely a consensus … By all accounts there is no single and intercultural universal concept defining what music might be.” (musicologist Jean-Jacques Nattiez, quoted in Wikipedia)

Applying a uniform algorithmic evaluation across a large number of music titles creates an objective mathematical description of each piece of analyzed music and, derived from that, a basis for comparability. We call it “music intelligence”. Such intelligence can be exploited in various ways, such as identifying music, determining similarities between music titles or organizing music. Still, there will always remain a gap between “human understanding” and “machine understanding” of music, just as there will always be gaps in the understanding of music between human listeners.
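A minimal sketch of that comparability, assuming each title has already been reduced to a small feature vector by a uniform analysis pass (all names and values here are invented): titles whose descriptors lie close together are treated as similar, which is one simple way to organize a catalog.

```python
import math

# Invented descriptors: each title maps to its "objective mathematical
# description", a feature vector produced by the same analysis for all titles.
descriptors = {
    "Title A": [0.90, 0.20, 0.40],
    "Title B": [0.85, 0.25, 0.35],
    "Title C": [0.10, 0.80, 0.90],
    "Title D": [0.15, 0.75, 0.95],
}

def distance(a, b):
    """Euclidean distance: smaller means more alike under this description."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_neighbor(title, descriptors):
    """The most similar other title — a building block for organizing a catalog."""
    others = (t for t in descriptors if t != title)
    return min(others, key=lambda t: distance(descriptors[title], descriptors[t]))

for t in descriptors:
    print(t, "->", nearest_neighbor(t, descriptors))
```

Because every title is measured the same way, the pairings fall out of pure arithmetic; no cultural knowledge is involved, which is exactly where the gap to human understanding remains.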

“The creation, performance, significance, and even the definition of music vary according to culture and social context.”

The ever-increasing sophistication of algorithms and availability of computational power let us apply the music intelligence approach to large catalogs of music, thus eliminating much of the cost and manual labor of classifying large music inventories.