Is Music Making Up for Grabs?
- Evan Nickels
- 15 min read
What if every fight over music technology throughout history has actually been the same fight, and we’re just now facing a version of it we’ve never seen before?
In this special episode, Dmitri shares a keynote he gave at the Algo Rhythms conference last month called “Is Music Making Up For Grabs?” Drawing on four hundred years of disruption in music, from the harpsichord to amplification, Dmitri traces the pattern of how every generation has fought over new tools and every generation has been wrong about what those tools would destroy.
But this episode isn't just a history lesson. It's a live argument, complete with the Kalyuka, the WARBL, and a few sounds you won't expect. Along the way, the stories of T-Pain and Blanco Brown show exactly where the pattern holds and where it finally breaks. Because the question Dmitri lands on is one no generation before us has had to answer: not what counts as an instrument, but whether the creator is still human.
The news
Listen wherever you pod your casts:
Looking for Rock Paper Scanner, the newsletter of music tech news curated by the Rock Paper Scissors PR team? Subscribe here to get it in your inbox every Friday!
Episode Transcript
Machine transcribed
[00:00:00] Dmitri: Is music making up for grabs? Who gets to decide?
Thank you, and thank you Alan and IU and all the participants here. It's great to be here at Algo Rhythms. This is one of the earliest forms of music making. Just a simple reed, no finger holes, just the breath. The Kalyuka, or overtone flute, traditionally was made only in the springtime, when the center, the pith of the reed, was soft enough to pull out. And long before email, let me put this away.
Before I hurt somebody. Long before email or doomscrolling, shepherds would use the Kalyuka to communicate from hillside to hillside across valleys, and by the end of the day the reed would dry up, wither away, and the flute was gone. There's no software update, no platform, no algorithm, no offense, just a human making sound for a moment. Just like we're here in the springtime, just for a moment.
It's very ephemeral. This group of people is only here right now. Flute, music, sound, all of it disappears in moments. And for most of history, when music changed, so did the instrument. In the 1600s, lute players were pissed at harpsichords. You know why? They spent all their lifetime learning how to play the fast plectrum strumming, the notes, et cetera.
This is 400 years ago, to play the sounds of the masterful works that were meant to be played like that. So what happened? Automation. Harpsichords are literally automation. You don't think of a wooden harpsichord as automation, but that's what it is. 10,000 hours of practice, just wasted. In the late 1800s, piano players feared phonographs.
You know that the original record was actually sheet music, and if you wanted to listen to a song, you had to play it yourself, right? That's how it went down. And the music establishment at the time was concerned. These were teachers and publishers, sheet music publishers. Basically, they were concerned that an entire generation would become illiterate in music.
John Philip Sousa called these new players of phonograph records a menace. The pattern repeats itself. In the 1960s, purists criticized overdubbing, the process of recording one instrument and then adding another one and so forth, because real bands, they record all in one take, right? And in the 1980s, orchestras protested against synthesizers and drum machines.
How could a, here's the keyboard coming back to ruin somebody's life, right? How could a keyboard player play the power and the feeling of an entire string section, or the entire funk of a horn section? In the nineties, opera singers fought amplification at festivals in San Francisco. In the 1990s, not the 1890s.
In the 1990s, amplification was being fought because you must sing from your chest. That is the craft. You have to sing and project all the way out to the back, right? Then things get really crazy, but the pattern still continues. In 2009, Christina Aguilera donned this shirt.
On Broadway, musicians protested the use of DJs. They're not musicians. DJs protested the use of auto-sync software because it allowed newcomers to DJing to beat match without using their ears. Right? They're not DJs. This is the point where things really start to blur.
I've been talking about instruments having turf wars with each other. And now we're talking about what is an instrument? Tape machines, record players, autotune. But even as the instruments shifted from physical to digital and music making went from originals to remixes, the creator was always human. DJs are human.
Okay. These technological changes swing between disrupting how music is made, the lute player versus the harpsichord, and how we experience music as listeners, the player piano versus the record player. And over time a lot of the inventions blurred the line between creating music and experiencing music. We talk about listeners a lot.
I think we're gonna start talking about experiencing music. There's something in between playing and listening that's happening a lot. Playing a record is only one step away from scratching or remixing. And making a playlist is only one step away from being a DJ at a party, or maybe at a venue.
And each generation used new technology to redefine who gets to define what is music, what is style, what is genre, what is cool. So there's this knee-jerk reaction that we've seen historically for literally hundreds and hundreds of years around new technology and creating music. But I think we can't just look at technology as inherently destructive. We could ask some more specific questions around creative impact, cultural impact, and market impact. With each technological shift, how is creativity impacted? Harpsichords, record players, tape machines, overdubbing, synthesizers, drum machines, amplification.
Sure. You could argue that they're a threat. They create fear for people who are the incumbent players of a sound or a genre. But in each case, new scenes and new genres and artists emerge. They speak to a new generation, to a different region. Lemme tell you a story. There was a boy in Tallahassee, Florida, an 8-year-old who loved music.
He was going to be making music no matter what. And one day his dad came home with a keyboard he found in the neighbor's trash, covered in ants. And the kid took off all the ants, disinfected it, started playing the keyboard. He learned by listening to the radio. He wanted to learn all the basslines from the pop hit songs that he loved on the radio.
And one day he heard a Jennifer Lopez song and he was like, there's some effect in there that I love. Because he realized, as a rapper he was pretty good. He was getting known in his local, regional area, but he wasn't sure that he would be able to distinguish himself from all the other rappers that were also getting good there.
He's like, I need something. I need a little something else. It took him two years to find that vocal effect. It was autotune. And this is who that little boy was: Tallahassee Pain, T-Pain, is his name. And he put out this record called Rappa Ternt Sanga, because he was trying to figure out, how do I be more than just a rapper?
Let's hear what it sounded like, because he changed the whole sound of music.
So you might think that autotune was for correcting people who were singing out of pitch, and maybe it was, that's what it was invented for. But once again, creative musicians, human beings, tweaked the technology to do something different with it. That warble sound became known as the T-Pain effect, and it took over the entire scene.
So eventually it became known for and well loved within disco, Cher, thank you very much, rap, hip hop, and now Afrobeats. You practically can't hear an Afrobeats song, a genre which is exploding, without hearing that warbly sound. So that's autotune. You can't deny the human creativity of scratching on turntables, turning a record player into a song maker, time traveling the same way that Stravinsky did through genre mashups.
That's human creativity. Some of you might think DJing is not making music because it's playing other people's music. You think that until you realize DJs are also playing the crowd, using a unique collage of sounds to transform the energy of a dance floor. Of course, for a lot of them DJing is their entry drug, right?
What they end up doing is actually making beats, remixes, scratching, all that other stuff as well. So sure, it's derivative at first, but it's creative. And playing other people's songs is the backbone of folk music all over the world. So here we are, our DJs, they're not real musicians? Go back hundreds of years.
What is folk music? It's literally cover songs. That's what it is. And what we love about cover songs is how each individual artist makes it their own, right? Which is what DJs are doing. Let's listen to a modern day folk song.
Okay. I can't play too much of that in school. Like, that's not allowed. The next set of words, you're not gonna wanna hear in the classroom necessarily. Some of you heard this, the modern day folk song, and you're like, oh, that's Tom Tom Club. I love Tom Tom Club. They're the spinoff band from the Talking Heads, from when David Byrne went and did his solo career.
And other people are like, that's Latto. That's Big Energy. That came out a few years ago. That was a huge hit. It is both. In fact, Genius of Love by Tom Tom Club has been sampled over 180 times. You may have also heard it in Mariah Carey's Fantasy. That song put the members of Tom Tom Club's kids through college, because they were attributed, it was licensed, they were paid.
So even though it's derivative, right, Latto literally made a song on the back of this beautiful melody, but it ended up being a time traveling song in collaboration with a band from decades earlier. So in these examples, from autotune to DJing to samples, humans are still in charge. They're in charge of those artistic and creative directions and decisions.
And again, the definition of cool changes, the definition of who gets to define music changes, and they're still people, humans, playing sound, composing new works. Are you following me? You with me here? Yeah. So let's ask, do these shifts make people like music more or less? This is where I ask, what's the cultural impact here?
We talked a little about the creative impact. Now let's talk about what this means for the rest of us. If harpsichords, record players, tape machines, overdubbing, synthesizers, drum machines, amplification, autotune, and DJing create new artists, new genres, new styles, that doesn't make the fans of prior music like that prior music less, does it?
If you're over there listening to your classical music and you're like, nope, nope, don't wanna hear her Big Energy, that's too much energy for me, you still like your classical music. Right? And in fact, if the fans of earlier styles of music dwindle, isn't it good for music that new artists, new styles, and new scenes emerge?
I mean, if music was a species, this would be the evolution and the survival of the lineage, right? This is good for music: growth, rejuvenation, springtime, the overtone flute, it's all coming together, guys. That's the impact on creativity and culture. But let's talk about the financial impact.
Do these shifts grow the market? Historically, these shifts do tend to expand the opportunity. Let me explain what I mean. Before amplification, remember those opera singers, the largest venue you could perform to was about 2,000 people. You might, with really good architects, get to like three, four, 5,000.
You're not getting past 5,000. Those cheap seats, they're not seats, they're the bathroom. They're like not even there. But last month, over 400,000 people came to see Shakira play live in person in Mexico City. Because of amplification. That's an opportunity to sell a lot more tickets, I would say. And those orchestral players who were a little bit threatened by those synth players that can now play an entire string section, 'cause it might take away opportunities for them?
The third-chair violist can now conduct the entire symphony, and she could go ahead and do game sound design or film scores as well. There are new opportunities for the individual artists, even though it might look threatening at first. Across the board, new technologies turn listeners into composers and players into conductors and producers, increasing access for more and more musical humans across economic class and geographic lines.
Technological innovations often grow the market, but that's not the whole story. While we're fighting over which instruments make art, we're missing the true battle for human agency. Some technologies change the tool; others change the creator. We've got expansion technologies, which help expand our creativity.
We've been hearing about some of those at the conference here, and we've got replacement technologies, which it's been kind of fun to try to avoid talking about at the conference, which remove the human from making creative choices. The question as a society that we're asking right now, it's bigger.
It's, where does human creativity begin and end? Where does human agency begin and end? I grew up banging on desks, turning everything around me into sound and music. I was one of those kids. You got a leaf, you can pop it. You got a blade of grass, you can blow it.
You got one of these and you can do this.
The jaw harp,
thousands of years old. The jaw harp is traditionally made out of bamboo. This one is made from a bullet casing left over from the Vietnam War. Materials and engineering change. They change instruments, but they don't fundamentally change who the musician is or how creativity works. Every music technology disruption before now changed the instrument.
The workflow, the format. But what's happening now is different. This may be the first time the creator changes. So we had a lot of conversation about, oh yeah, there's always somebody who's upset about a shift in technology, how it disrupts, da da da da. That's true, but it's not the whole story. This may be the first time the creator changes. Not who, but what. Generative AI is a molecular shift.
I think of pre-AI song DNA as all of our life experiences. Our relationships, love, heartbreak, war, famine, what's going on in the streets, fashion, food. All those things are what make up our embodied experiences that lead to us creating songs. With unlicensed generative AI, the melodies, lyrics, and songs of thousands of people get poured into a vast cauldron, and those platforms learn patterns from vast amounts of that music and generate new outputs that emulate those patterns based on probability.
They adapt them to the text prompt, but they're not having that heartbreak. They're not having all that experience. They're having this thing around probability, of figuring out, well, what makes sense to go next? Humans see new things, experience new things. Models see old things, right? Models are not happening in real time.
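[Editor's note: the "what makes sense to go next" idea can be sketched with a toy model. This is not how commercial generative AI platforms work internally; it's a minimal illustration, using invented note sequences, of learning transition probabilities from existing music and sampling new output from them.]

```python
import random
from collections import defaultdict

def train(melodies):
    """Count how often each note follows each other note in the training melodies."""
    counts = defaultdict(lambda: defaultdict(int))
    for melody in melodies:
        for prev, nxt in zip(melody, melody[1:]):
            counts[prev][nxt] += 1
    return counts

def generate(counts, start, length):
    """Sample a new melody: at each step, pick a next note weighted by
    how often it followed the current note in the training data."""
    note, out = start, [start]
    for _ in range(length - 1):
        nexts = counts[note]
        if not nexts:
            break  # the model has never seen anything follow this note
        choices, weights = zip(*nexts.items())
        note = random.choices(choices, weights=weights)[0]
        out.append(note)
    return out

# Two invented training melodies; every "new" output is recombined old patterns.
melodies = [["C", "E", "G", "E", "C"], ["C", "E", "G", "A", "G"]]
model = train(melodies)
print(generate(model, "C", 6))
```

Note the point the model makes concrete: the generator can only ever emit transitions it has already seen, which is the sense in which "models see old things."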
They're happening kind of based on something that came in the past, and we're taking all of the history of music and compressing it into those models, so it kind of gets flattened, the same way somebody talked about music getting flattened earlier. So is the song DNA. We went from playing a note to programming a drum machine or sequencing a song in a DAW, but even then, a person is structuring the order of the notes and choosing the words.
A person. And with generative AI, we have a totally different molecule of music creation. Sure, there's the text prompt, but there's also the model trainer, the dataset cauldron, the model itself, the user interface built on top of it, maybe even the credit card platform that's used inside the website, the tokens, how you buy access to creating music.
All that stuff is in there. So that's why I keep saying it's not who is the author, but what is the author. So many influences are smashed together in this process that it's nearly impossible to truly identify the exact sources, the sonic and lyrical DNA, that went into each algorithmically created song.
DJs did not claim that they owned the songs that they played or remixed. They're crate diggers. They're looking for stuff that's already out there, and they knew they were engaged in the art of collage. But with this new technology, we're scraping away the authors and the performers of the song, with a model where songs are used as training data with no system in place to link back to the original authors, composers, lyricists, and owners of the songs.
Attribution is near impossible to recreate. This idea that unlicensed models will sort themselves out later? It's gonna be messy at best. Many of the songwriters, performers, record labels, and publishers of the songs ingested will likely not get paid equitably, if they get paid at all.
This handsome fellow goes by Breaking Rust. Let's listen to this song he put together last fall.
Some of you who were at the Music Tectonics conference last year might have met Blanco Brown, who was there. I got a chance to talk to Blanco and interview him. And when this song came out, his phone started blowing up. You may remember Blanco's song called The Git Up.
It was one of the first TikTok viral dances. His song led the viral dance. So there were kids all over the world who were dancing to this song. He became an amazing celebrity. He's kind of like an early country rap,
some sometimes call it trailer trap, style of music. And he's such a cool guy, very positive, optimistic, religious, very engaged with his community. And his phone started blowing up 'cause people were like, hey, there's this white dude who's trying to sound like you.
He didn't know about the song. But it's his belief that the models were trained on his music and his songs without his permission, and that the person, or the people, behind this song were trying to emulate his next hit song without him being involved with it at all. This is the photo shoot, by the way.
That's Blanco. He's always styling. He wears nice pants, trust me. This is the shoot first, ask questions later mentality. Oftentimes breakthroughs with internet technology break norms around attribution, ownership, and ultimately remuneration of intellectual property. Unlicensed generative AI changes authorship at the molecular scale, in that we no longer ask who is the author, and instead ask what is the author.
And the answer's complicated, because lots of people like Blanco Brown had models trained on their songs without their permission, without them knowing. So it's not any one person. Sure, we know song splits are complicated in the hip hop and R&B genres, and in most pop songs at this point.
But this is a whole new level. And it's not just people, it's technologies and corporations inserting themselves as rights holders of music, thanks to their technological inventions, not musical ones.
And with the stripping of attribution, who wrote what, who owns what, it means it'll be impossible to put this all together once the dust settles and new behaviors are fully adopted and accepted. Callback: YouTube, Facebook, Spotify. Behaviors changed. How are we gonna unwind this knot?
Generative AI song making is frictionless. Usually in tech, friction is considered bad, right? 'Cause it reduces efficiency. But with culture making, friction is actually becoming scarce. People taking the time to compose and record a song increases the value of the song. When a person makes a song, it indicates that it might actually have been worth making. Plus, the song's connected with a person we want to know about, which then creates meaning in our lives.
It changes how we feel about ourselves. Let me put it this way: when music becomes frictionless to create, it becomes frictionless to ignore. I think that's what Blanco meant when he told me that that fake-ass Marlboro Man copycat missed the soul of his music.
He actually said the soul of God was missing from the music. That's how Blanco saw it. Do you know what Blanco did? He made a cover song of the AI song that used his music without his permission, and so he totally flipped the script on what could happen with AI and attribution by saying, that's my voice, that's my style.
I'm gonna make it better. I'm gonna put my soul into this song again. So now he has a cover song of that AI song, which, I don't know legally or ethically what exactly happens with that, but it certainly flips the script and changes the conversation about who owns what. So people, artists, humans, they're starting to figure out the lack of attribution, and some issues of fraud that were talked about in some of the other sessions.
Unlicensed generative AI is not the only path technology can take. Even now there's a path where technology can expand what humans do. Let's see if this works. And that's the most applause I've ever gotten for any performance ever. I'm very amateur. This is a MIDI wind controller called the WARBL that responds to breath and movement. You may have noticed that tilt and rotation and breath all shape the sound in real time. We started with breath, we're ending with breath.
This is very different than the acoustic reed flute that I played at the beginning of the talk, because as technology increases, it becomes more expressive. There's more human in it: the musician is still making every decision. It's not replacing human creativity, it's expanding it. Some technologies replace human creativity.
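[Editor's note: a small sketch of how a breath-and-motion controller of this kind might translate sensor readings into standard MIDI messages. The specific mappings and sensor ranges below are illustrative assumptions, not the WARBL's actual firmware behavior; CC2 and the 14-bit pitch-bend center are standard MIDI conventions.]

```python
def breath_to_cc(pressure: float) -> tuple[int, int]:
    """Map breath pressure (0.0-1.0) to MIDI CC2, the breath controller.

    Returns (controller number, 7-bit value 0-127).
    """
    value = max(0, min(127, round(pressure * 127)))
    return (2, value)

def tilt_to_pitch_bend(tilt_degrees: float) -> int:
    """Map tilt (assumed -45 to +45 degrees) to a 14-bit pitch bend value.

    8192 is the MIDI pitch-bend center (no bend).
    """
    clamped = max(-45.0, min(45.0, tilt_degrees))
    return round(8192 + (clamped / 45.0) * 8191)

# Every sensor reading becomes an explicit, continuous musical decision:
print(breath_to_cc(0.5))        # moderate breath, CC2 near the middle
print(tilt_to_pitch_bend(0.0))  # level instrument, centered pitch
```

The design point this illustrates is the talk's distinction: here the human supplies every value in real time, and the technology only widens the expressive range of those choices.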
Others expand it. The question is not just what can technology do, but what role can we as humans play in it? So with the original reed, there was no software, no dataset, no platform, just breath. And now we're here, where music can be generated at scale from a prompt. So the question isn't just, is music making up for grabs?
It's, what do we want music making to be? Do we want a world where music is infinite, frictionless, generated instantly, but no one knows who made it or why it matters? Or do we want a world where music is expressive, embodied, imperfect, and deeply human? Because this isn't a technical question, it's a cultural one.
Technology doesn't decide what music is. We do, and we've done it before. Every generation has redefined what counts as music, who counts as a musician, and what's worth listening to. This is just the first time we're deciding whether the creator is still human. So, is music making up for grabs? Only if we let it be.
Thank you.
Let us know what you think! Find us on LinkedIn, and Instagram, or connect with podcast host Dmitri Vietze on LinkedIn.
The Music Tectonics podcast goes beneath the surface of the music industry to explore how technology is changing the way business gets done. Weekly episodes include interviews with music tech movers & shakers, deep dives into seismic shifts, and more.