Make Music, Not Tone-deaf Conjectures about Robots

June 14, 2019

“I actually think 10 years from now, you won’t be listening to music.”

--venture capitalist Vinod Khosla, as reported in Techcrunch

 

One hundred years ago, people gathered in parlors to sing sentimental ditties from sheet music or centuries-old ballads from long memory--seriously, the Child Ballad “Lord Lovel” was still a big rural hit in parts of the South and Midwest in the post-WWII era. They were, to garble the historical perspective, producing their own customized soundtracks to their lives, by singing and playing them themselves.

 

If you have never done anything like this, or never done it with any real gusto, maybe this kind of music making feels like a relic, a chunk of the past that is no longer all that relevant. Maybe you want to get totally crazy here, just for a minute, and posit that algorithmically generated music will somehow replace that experience with a customized passive soundtrack tailored to each listener. No more listening, no more music!

 

But what happens when two people, say, decide to blend and coordinate their personalized AI-generated soundtracks? Because you know they will. It’s too fun not to.

 

Oh. Wait. Hey, they are making music. That’s also a customized soundtrack of their lives. Just like those old-timey folks around the parlor piano singing about the tragic exploits of lovers long, long dead. Well, what do you know? (This is the kind of activity encouraged by cool startups like Endlesss, which you can hear more about on our podcast from MIDEM here.)

 

This one-off goofy statement by a VC points to something greater: There is a surprising tone-deafness among many in tech--not all, but too many--to music. They seem to have never experienced the intimate transformation that humans feel as they organize sound, shaping it with their bodies and minds simultaneously, molding it as a group or exploring it alone. In the good moments of this process, the sound and the maker converge and then merge. You are so with the beat that it’s you. Your voice blends so perfectly with a tone, it disappears. This is everyday stuff for music makers and lovers.

 

Yet this seems to be remarkably rare knowledge among those tossing around the millions. Without this basic understanding of what music does in certain use cases, so to speak, you just don’t get it. All the smarts in the world won’t help you make sense of why, say, the opposite of this AI music-driven future is just as likely: a sharp reaction against machine-generated music that focuses on the artisanal sides of sound. With all shades of possibility in between.

 

Not hearing this aspect of musical experience--or any aspect beyond slapping in your earpods and turning on some bland electronic swishing to better focus on disrupting whatever--means you fall face first into ridiculous Jetson-esque futurism. You pursue the linear projection of a narrowly envisioned technology, but with more robots, and whoops, no more music listening. Let’s not get carried away because we’re at the very beginning of AI music’s technological evolution.

 

Let’s listen to what music actually is and does, in all its multifaceted glory. We often frame technology as changing everything, completely and utterly, yet social continuity, like inertia, is a powerful, powerful force. Ten thousand years of musical practice won’t disappear in a decade, and technology usually elicits this kind of wild projection, as the history of reactions to photography and sound recording demonstrates.

 

Just looking at Spotify usage, you see a diverse set of patterns, with people making playlists, listening to radio-like features, and checking out curated playlists, not just doing one thing like, say, binge-watching, as they do on Netflix. People use streamed recorded music in radically different ways, and this is but a subset of the musical options out there. Because music isn’t just one thing, and it has always been customized to fit the listener’s life, in the moment, using available technology.

 

So, even if there are robots, we humans will listen to music. Because we will be making music. We’ll still be singing and composing songs about mythical loves from other times and shores, doing it together, and longing to hear more, from our own voices, hands, and speakers, and from our own and others’ hearts. 


Conference

Music Tectonics: at the epicenter of music and technology

October 28-29, 2019 | Skirball Cultural Center | Los Angeles

Podcast

The Music Tectonics podcast goes beneath the surface of the music industry to explore how technology is changing the way business gets done. The podcast includes news roundups, interviews, and more. Our host is Dmitri Vietze, CEO of PR firm rock paper scissors.