Eric Doades

Star Series: Jonas Norberg of Tuned Global & Spencer Mann of Moises

This week, Tristra sits down to discuss the transformational role of AI in music.

Dive into a riveting discussion with Jonas Norberg, the head of AI at Tuned Global. Jonas paints a candid picture of how AI is not just making music more accessible but also sparking an explosion of creativity in the industry.


We also have the insightful Spencer Mann, VP of Growth at Moises, weighing in on this exciting topic. Moises, with its impressive community of over 35 million musicians, is harnessing the power of AI to unlock unprecedented avenues of artistic expression. Spencer shares fascinating instances of how AI is empowering both musicians and their audience, including how it's playing a game-changing role in areas like brain injury rehabilitation.


What are the challenges facing AI music – how do we define quality? What implications does this technology have for copyright and employment in the industry? Find out in this week’s episode.

Looking for Rock Paper Scanner, the newsletter of music tech news curated by the Rock Paper Scissors PR team? Subscribe here to get it in your inbox every Friday!


Join the Music Tectonics team and top music innovators by the beach for the best music tech event of the year:


Listen to the full episode here on this page, or wherever you pod your favorite casts.



Episode Transcript

Machine transcribed


0:01:02 - Dmitri

Welcome back to Music Tectonics, where we go beneath the surface of music and tech. I'm your host, Dmitri Vietze. I'm also the founder and CEO of Rock Paper Scissors, a PR firm that specializes in music, tech, and innovation. Excitement is heating up for the Music Tectonics conference, October 24th to 26th, and our regular host Tristra Newyear Yeager has been chatting with more of the leaders in music and innovation who are most deeply involved with the conference. This week we're exploring the topic on so many music minds this year: how AI is changing how we make and experience music, both for fans and artists.


I'm sure you've read some scary headlines about the topic, but today you'll hear nuanced conversations with pioneers who have been working with AI for years to unleash greater creativity and engagement. First up, Tristra sat down with Jonas Norberg to get a high-level look at artificial intelligence and music, informed by Jonas' long experience in the field. Then Tristra goes deep into groundbreaking use cases of AI for musicians, healthcare, and more with Spencer Mann, VP of Growth at Moises. Are you ready for that deep dive? It'll be fun. First up is Jonas Norberg, head of artificial intelligence at Tuned Global, a leading provider of B2B streaming technology solutions that power some of the world's most successful streaming services. Jonas is a perfect guide to the shifting landscape of AI right now. As the founder of Pacemaker, an award-winning AI-driven DJ app acquired by Tuned Global earlier this year, he's worked with machine learning and AI for over a decade, long before it was cool. Take it away, Tristra and Jonas.


0:02:39 - Tristra

Hi, I'm speaking right now with Jonas Norberg of Tuned Global, and Jonas has a very interesting background in AI. You founded Pacemaker, which started out, I think, as an AI app for DJs, correct, Jonas?


0:02:57 - Jonas

Yeah, it's kind of an AI app for non-DJs, for non-DJs to play DJ. The AI is for making it easy for more people to do DJing, basically.


0:03:18 - Tristra

But you took it from a consumer-facing app to other, more B2B applications, and you were working with AI as part of experiences and music listening and curation and all that for quite some time. Things are really crazy right now. They're changing really fast. There's a lot of excitement bleeding into hype around artificial intelligence. I'm wondering, as someone who's been in the field for quite some time, how is all of this feeling to you? Is it feeling like, finally, people are getting it? Or like, whoa, everyone, this is a breakthrough, but not quite as big a one as you think? How are you interpreting this moment?


0:03:58 - Jonas

I mean, I think all development tends to follow this S-curve, where it starts off slowly, then there's a very quick phase, then it kind of slows down, and then it goes into a new S-curve, and so on. I think we're in that fast phase right now. AI is developing very quickly, and there are so many new things you can do with this disruptive new technology, which is known as generative AI or large language models. And there are a couple of things. For people in general, access to information is now so much easier. You used to have to Google for information and look at many different websites, etc. But now you have this AI that has sort of done that for you already, and it can give you clear answers (not always truthful answers, that's sort of the downside), but I think that will make it possible for people to learn things much more quickly. And then another incredibly cool thing with AI now is that it's really empowering people. Things that have been almost impossible for many people to create are now within reach.


And that goes for music. Of course, for DJing, we were early on using machine learning techniques to make DJing easier, but now also music creation. You also see image creation and art creation, there's video creation coming, and it's the same with writing books. Learning and creation have become much better, if you like: learning quicker, creating much more easily. I think this is going to have an impact on everything in our lives, really. So, yeah, I'm just very excited about the future. Of course, I'm also a little bit concerned about copyright and jobs and all of those kinds of things, but overall, very excited.


0:07:27 - Tristra

Well, you've dealt a lot with how AI can enhance the experience of listening. Maybe not necessarily even as you started out, thinking about the craft of DJing and curation and putting tracks together, but you ended up thinking a bit more about what could happen in the background so that music could feel more seamless, could flow more. I'm curious what you think. I mean, there's been a lot of talk, a lot of excitement about generative AI because it has this big wow factor, but I think in some ways AI may end up playing a much bigger role in a sort of behind-the-scenes way for a lot of our entertainment experiences, whether we're talking audiovisual or just audio. I'm curious where you see AI having an impact now, and maybe we can also talk a little bit about where you hope things might go in the next three or four years.


0:08:30 - Jonas

Yes. So if we just look at creation today: experiences need to be created before they can be experienced. Today the creation phase is quite long, because you have to understand these very complicated tools and lots of theory, etc. And then, eventually, if you push through, which very few people do, you will have an experience in the end. And this goes for books, music, video. All experiences need to go through a creation phase first.


What's happening now is that the creation phase is much quicker. It's more complicated than just pushing a button; there's a lot of text-to-music, text-to-image, text-to-video, text-to-book, text-to-text and so on. So it's not just pushing a button, it's finding the right prompt, and there's going to be more emphasis on curation. You do something, the creation is quick, and you get an experience as a result of this text prompt (or it might also be knobs and buttons, etc.), but you can quickly get something you can look at, and then you go: no, it should be more like this. And then you go back and repeat. All the time there's an end result, or something close to an end result, that you can look at and feel. So creation is still a big part of this, but curation becomes more of the thing.


I had this discussion with a friend who was just blown away by this Oppenheimer movie that's out now. He is a very intelligent and kind of nerdy person, and he watched the movie and was so blown away that he went home, read up on it, on how it was produced and all the little bits and nuances of the movie, and then he watched it again, and he never watches movies twice. Then he explained to me how Nolan kind of manages the artists and everything, and he gave this very elegant example. I can't remember the name of the actor that plays Oppenheimer.


0:11:58 - Tristra

I can't either.


0:12:01 - Jonas

Yeah, he came in very, you know, hard in the scene, and then Nolan told him: Oppenheimer isn't a boxer, he's more like a chess player.


So what Nolan is doing there is, you know, he's giving direction to the actor, and then the actor understands. And then eventually, after having that discussion and redoing the take and everything, you know, probably, I don't know, lots of time passes here.


Then he's going to see the result of giving that extra direction, and then he can curate. Now, with large language models and generative AI, giving that direction and then getting a new result, a new experience, is so quick. Nolan obviously has a very good curation feeling. He knows when a scene is where it's supposed to be, and throughout the years, of course, he's practiced, so he understands, these scenes, this is good, and he kind of knows what he wants to create; from experience he has learned, this is what I want to create. And then his tools are, you know, talking to the artists and the camera shots and everything, which he has also learned throughout the years. But I think there are many more people that kind of have that good, you know, curation, curator... what do you say?


0:13:56 - Tristra

properties or you know that instinct for creation and how to put things together to make a meaningful statement.


0:14:07 - Jonas

Yeah, exactly. There are a lot of people who can say, you know, this is good, etc. But there are a lot fewer people who are prepared to go through the process to get to the experience. Nolan, you know, has done that many, many times.


So I think this kind of means that this new and quick creation is going to lead to many more, and possibly also better, experiences for us, because there's a new group of people who had the ability but weren't really ready to put in the effort that was required. So now it's a bigger group, and therefore the output should be more, but the quality also higher.


0:15:13 - Tristra

Yeah, no that makes a ton of sense.


And as you were speaking, you know, taking it back to music, I was thinking about the traditional studio recording process, where often you did have a producer or someone else being like, hey, we need to do another take. Can you try to hold back a little bit until we get to the bridge (I'm thinking of a vocalist here) and then just go for it, or whatever.


Or, you know, we need to hear a bit more of your hi-hat or something like that. But it meant you had to do a fresh take, right? Or sometimes you could punch in a couple of bars, but that's really difficult to do well. So in some ways AI could unleash that iterative capacity and allow us to do more with fewer inputs, right? So instead of me re-recording my vocals for the entire song, or for that entire section of the song, we could adjust certain things to make it more intense or less so, I don't know. You're making me think that it can really change the way people approach making music and let them iterate more quickly until they reach something that's really compelling and effective emotionally.


0:16:26 - Jonas

For sure, and I mean, you're kind of touching on the thing where you can use AI to just make it better instead of re-recording, and that applies to everything. So I was on this panel with a group of people, and there was this musician and producer, I can't remember his name, but he was explaining his process, and he really wanted to explore AI in creation. He's been doing it for quite some time, even though the tools have been kind of blunt and so forth. And then I asked him: okay, so with these new AI technologies and this empowerment, will your sound be more or less unique? And he was like, immediately: more unique.


Because with AI he can create, you know, alien sounds that he would never have come up with. And that just blew my mind, because that's such an interesting future: okay, so here you have completely new sounds. And that also reminded me of a completely different thing, which is kind of the same thing but in a completely different field: mechanics. So, for example, for a car or a rocket, they are today using AI to develop optimal designs, and those designs look very organic. Think butterflies, or shells, or those kinds of things. So it's mechanical design that looks organic, but not really from our world; it's more like something from the Alien movies. It's incredibly cool, and when he said that his sounds are now completely new, it made me really think: yeah, those are alien sounds.


0:19:08 - Tristra

Yeah, you were making me think, when you were talking about these new sounds, of how AI is being used to develop pharmaceuticals or to come up with new proteins. Basically, all these interesting forms and configurations are really difficult for a human to sit down and work out: okay, now what if we stick this thing over here, or what if we fold this protein this way, what would it be like? In that way, you can just imagine this audio future where all sorts of really cool things come up. There's some concern that with generative music there's going to be a sea of sameness, but it sounds like there are also going to be these mountainous islands of uniqueness that will be truly groundbreaking and exciting and just sound very different from everything that's come before.


0:19:59 - Jonas

Yeah, I think you're probably going to have both. I'm kind of convinced that, as I said before, we will have more people creating, and that's going to lead to higher quality, the peaks, but I think we will also have a lot of things that are kind of the same. So if we look at AI for music today, generative (and generative is kind of a weird term; I understand what it means), if you have something that generates different sounds based on input from a music artist, that is generative AI. But it's not creating an entire track, it's creating new sounds, etc. That's kind of the professionals, and that's kind of where the tools are today, except for Boomy and a few services like Boomy, and there's more coming there as well.


But, you know, for the professionals, they've had the tools. They can take the results into their DAWs and continue to work on them, just like they've done for a very long time, and it's going to mean their creations will be more interesting and so forth, because they're ready to put in the extra effort. But then we have Boomy and those kinds of services, which is probably a lot of the same. And then, if we push that all the way, you will have generative AI that is licensed, and you can say, give me a Taylor Swift track, boom, and you're going to get one. Then it's going to be a lot of the same. This latter one, I think, is more about fan engagement rather than creation. It's kind of a new way for people to engage with their favorite artists, and this is an idea, this is a new way, and


I completely agree with it. So it's not really creation, it's engagement.


0:22:42 - Tristra

Awesome, I love it. Well, Jonas, this has been a really fun chat. Thanks so much for taking the time, and we'll see you, or we'll see someone from Tuned Global, I know, at Music Tectonics.


0:22:54 - Jonas

It's always a pleasure to chat with you.


0:22:57 - Tristra

Yeah.


0:22:58 - Jonas

Yeah, it's always great fun. Thanks for having me.


0:23:01 - Dmitri

That was great, Tristra. In just a minute, Tristra will be back with Spencer Mann of Moises to dive into how they're blazing trails for enhancing creativity in music with AI. The Music Tectonics conference is coming up so fast. We're gathering with thinkers like these October 24th to 26th in Santa Monica, California. It's going to be awesome. Speakers from Spotify, Tidal, LANDR, MIDiA Research, Splice, Riot Games, leading investment firms, and so much more will be there to map out the future of the music industry. And I want to make sure you get your ticket before the regular-price tickets expire on October 16th. Right now, tickets are a mere $350 for three days of kick-ass keynotes, scintillating sessions, and noteworthy networking with music innovators, but after October 16th you'll pay the walk-up rate: 450 buckaroos. Claim your spot at the conference, grab your ticket, and check out our speakers and session topics at MusicTectonics.com. Let's get ready for the future of music together on the beach. And we've just added a new opportunity for AI music companies.


At the Music Tectonics conference the last week of October, demo your AI music app on stage for all to see and hear. Then take over a spot at the AI Innovation House and the AI Alley at our Music Tech Carnival at the Carousel on the Santa Monica Pier. We're looking for one supernova and several asteroid partners to support and be a part of this effort. We make Music Tectonics an experience way beyond a traditional conference, and you can be part of the fun while getting the music business into your AI app. Call Shayli (S-H-A-Y-L-I) at rockpaperscissors.biz to find out how your AI music company can get on board.


Now Tristra is back to dig into the state of AI-powered creativity. She's speaking with Spencer Mann, VP of Growth at Moises. With tens of millions of users, Moises is the musician's app that turns AI into real tools to make better music and audio. With a wide range of carefully crafted ethical features, it's now offering its tools to other music innovators, letting AI-curious companies incorporate AI features quickly and easily. But I'm going to let Spencer tell you more about all the cool stuff they're doing. Over to you, Tristra and Spencer.


0:25:26 - Tristra

Hi everybody. This is Tristra Newyear Yeager, Chief Strategy Officer at Rock Paper Scissors, and today I'm talking with Spencer Mann, who is the VP of Growth at Moises. Moises is a really interesting company. We talk to a lot of folks in AI, and a lot of them are doing things more on the back end, or with more of a B2B model, but Moises does both: music-maker-facing products as well as more B2B services for companies who want to incorporate AI into what they already do. Spencer can probably explain this way better than I can. So the great thing about this is, Spencer, you can give us some insights into what's going on, both with the people who are directly making stuff with AI and the people who are building businesses around AI stuff. That's a very technical way to put it, but thanks for joining me today.


0:26:27 - Spencer

Yeah, so happy to be part of this. One of the things that drew me to the company, and that I love about it, is that we have over 35 million musicians using our platform, and what that means is the creative potential for these artists has been unlocked and they're able to do a lot more than they could previously. So we hear about very interesting use cases, like from Headway East London, where musicians are using this in programs for people with brain injuries, allowing them to experience music in new ways.


0:27:02 - Tristra

Can I ask you exactly how? What are they doing with music to make it more amenable to treating folks who have a brain injury?


0:27:10 - Spencer

Yes. So what they do is, they describe it as making a cake, right? They're able to take all the ingredients of a song and look at them as parts. They can stem-separate and listen to just the vocals, hear just the drums, hear just the guitar: how are those things working together, how do those elements fit? And then they're able to have the class, people with these brain injuries, participate, right? Okay, we're going to take out the drums, let's come up with our own drum version of this. And they're creating something new and novel and participating in music in a way that they couldn't before.
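To make the stem separation Spencer describes more concrete, here is a minimal sketch using the open-source Spleeter library. Moises doesn't publish the models behind its app, so treat this purely as an illustration of the general technique; Spleeter and the file paths are assumptions for the example, not Moises' actual stack.

    # Four-stem separation with the open-source Spleeter library.
    # This illustrates the "ingredients of a cake" idea above: split a mix
    # into vocals, drums, bass, and everything else, then mute or solo stems.
    from spleeter.separator import Separator

    # The pretrained "4stems" model separates vocals, drums, bass, and other.
    separator = Separator('spleeter:4stems')

    # 'song.mp3' is a placeholder path; stems are written to
    # stems/song/{vocals,drums,bass,other}.wav
    separator.separate_to_file('song.mp3', 'stems/')

Muting the drums stem and playing back the rest recreates the "come up with our own drum version" exercise from the Headway East London example.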


0:27:49 - Tristra

That's really cool. That's really really wonderful. So you have a lot of insights into how these millions, or tens of millions, of musicians are using Moises. What are some other things you've seen in recent months? I mean, ai has really exploded. A lot more people are aware of it and how they might use it to do various interesting creative things. What have you all been seeing?


0:28:15 - Spencer

Yeah, one of my favorite use cases is people who go to band practice. They have the song they've written, they're practicing for rehearsal, and the drummer doesn't show up. Okay, well, what do you do? Do you cancel your practice? Well, using Moises, they take the song that they've previously recorded, they have just the drums play, and now their drummer's back, right? They're able to play along and still have a productive practice. So it's a tool that enables a lot of different use cases like that.


We have a guy named Danny Mo at Berklee College of Music who loves Moises, and he tells this great story: in class they were studying a piece of music, and they were trying to understand what the bass player was doing at this really unique and specific part. By isolating it and slowing it down, the instructor was able to work with students and say, this is what's happening, this is why these things matter, see the subtle things they're doing with the beat and the chords. And they're able to experience music in a deep way and isolate the part they're really interested in.


0:29:31 - Tristra

That's really, really cool. I mean, that's always been sort of the thorn in the heel of a lot of music education: trying to analyze parts, especially complex parts or things that come really fast, and break them down. Back in my day it was through things like transcription, which is not everybody's forte, and a lot of us are aural learners, right? We learn by listening. So it's really cool to now be able to basically get the kind of analytical discernment you get from looking at a score by listening to these lines separated out and transformed, so that you can really bring out whatever quality you're trying to learn from. That's super interesting. All right, but are you seeing any shifts in behavior on your app? In some ways, you guys have some really interesting potential data. Are you seeing people using one AI tool more than another? Are you seeing more adoption in different parts of the world? See if you can give us a quick around-the-world-in-a-minute from the perspective of Moises and its AI tools.


0:30:46 - Spencer

Yes. Well, this has been really fun to see, because we are a global tool. We are in every country in the world, translated into 30-something languages, and so we see users from all over, and it's interesting to see where you get this heavy adoption. Brazil is one of our core areas. It's an incredibly musical group of people. They're passionate about it. They love it. I recently went through and looked at what percent penetration we have in each of these countries, and I was surprised to see that in the US we have a user account for about 3% of the population,


which is insane. And you go to the United Arab Emirates, which I never would have expected: it's 5%. So there are certain countries where you see this really incredible adoption, and they use the product and get a lot of value out of it, and it's not always what you'd expect. So it's been really fun to see where you get that penetration and adoption, and the things that they're creating with it.


0:31:48 - Tristra

That's going to be really interesting to see in the next couple of years too, how that evolves and how it impacts local music scenes. Everyone talks a lot about the sort of localizing dynamic that's going on right now with recorded music, and I could imagine tools like Moises or other AI tools being really helpful for people taking that to the next level, where they can maybe add a translated vocal line to an existing track, or add maybe even a local instrument to an existing track. It could be pretty impactful.


0:32:29 - Spencer

Yes, yes. Another use case that I love to see is when people take a song that they love, take out the instrument that they play, and then do their own version of it, right? Now they're playing along with their favorite bands, but they're creating something new, and it's this magical experience. Even as a beginner guitarist, I can take my favorite bands and have those magical moments where I learn the song with them, and then I'm playing and I have that insight of: now the lead vocalist is looking at me and he's nodding, because I'm part of this song and I'm doing okay. It's this really magical moment, and so for me, that's something we want to give all musicians, right? This opportunity to participate in the music that they love.


0:33:22 - Tristra

That's such a great sentiment. And sorry, just to be a little silly, you're kind of making me think that this is a different flavor of AI hallucination, where you get to hallucinate that you are shredding with your favorite musicians or throwing down some bars with your favorite MC, or something like that.


0:33:43 - Spencer

Yes. Well, you know, that's the thing I love about music. I think there's something magical about it. Whenever anyone is in the car and they're listening to music, they're tapping along, they're singing along. It's something that we all want to participate in actively, and to enable people to do that... the closer you get to that experience, like you said, it's the AI hallucination of: now I'm part of this thing, I'm that part of that band that I always dreamed of.


0:34:15 - Tristra

Yeah, amazing. All right, so now that we've had a wonderful, very quick whirlwind tour of all the amazing stuff that people can do with AI tools on the creative side, let's talk business. Sorry, let's get a little deeper into the music business side of things. So Moises also works with companies to help them create their own sort of proprietary tools, or spaces where they can have a little sandbox and goof around with AI. Because, you know, I understand it's tough: nobody wants to build their own model if that's not what they do, right? And getting things like compute lined up can be really difficult in this day and age. There's just a lot to consider, and it's all a big risk. So how are you seeing companies, existing companies that may have a very different product or service or approach to music or creativity, starting to weave some of the tools that Moises provides into their existing business?


0:35:21 - Spencer

Yeah, that's a great question. One of the important parts about making this technology available to all these businesses is scale, right? As soon as you go from consumers to businesses, you need to have scale and a really robust infrastructure. Moises has processed one billion minutes of audio through its consumer base, which is incredible.


And in doing so, we had to build an infrastructure that is very strong, stable, and scalable. So what we're doing is making that infrastructure available to all these other businesses, for whom it may not make sense to build out all these different modules and all these different solutions in addition to the ones they specialize in.


So as we partner with them, we collaborate to create an output that's better. They use parts of what we can provide at scale, combined with their own unique value, to make it more efficient. So, for example, let's say you had a giant catalog of songs that you wanted to get the lyrics for, right? Well, that's a great use case for AI. We can help get those lyrics transcribed and provided, and then the human touch can come in and optimize them, tweak them, take them to the next level, and make sure everything's perfect. So it's not there to replace the individuals who are doing all the work; it's to replace the tedious part of that job, right?


And to help those businesses scale and grow much quicker.
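As a concrete picture of the "AI draft plus human touch" workflow Spencer describes for lyrics, here is a rough sketch of batch transcription using the open-source Whisper model. Moises' own API isn't detailed in the episode, so the library choice, folder names, and output format are assumptions for illustration only.

    # Batch lyric transcription sketch: machine drafts first, human editors after.
    # openai-whisper stands in here for whatever production service a catalog owner uses.
    import json
    from pathlib import Path

    import whisper  # pip install openai-whisper

    model = whisper.load_model("medium")   # larger models tend to handle sung vocals better

    catalog = Path("catalog")              # hypothetical folder of audio files
    drafts = Path("lyric_drafts")          # machine drafts for editors to review
    drafts.mkdir(exist_ok=True)

    for track in sorted(catalog.glob("*.mp3")):
        result = model.transcribe(str(track))
        draft = {"text": result["text"], "segments": result["segments"]}
        (drafts / f"{track.stem}.json").write_text(json.dumps(draft, indent=2))

A human editor then reviews each draft against the recording before it ships; the tedious first pass is automated, the judgment calls are not.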


0:36:51 - Tristra

So are you seeing more interest from the creator-tool side of things? Would you say the companies you're seeing interested in AI are folks that haven't incorporated music into what they're doing before, but AI kind of makes it more possible? Or are they pretty solidly in the music lane, looking to do what you just mentioned: to work with catalog, or to kind of enhance some of their operations or increase their efficiency doing things they've already been trying to do? What sort of general trends are you seeing?


0:37:26 - Spencer

Yeah, no, it's really interesting, because to solve the use case of music, the quality has to be really high, and that enables you to solve a variety of other use cases. So we had an interesting situation where someone came to us, and they had been recording podcasts for years and years and years, and they had some background music through all of them.


Okay, and they realized that background music was a problem for them. They didn't have the licenses for it, and it was a huge liability. So they had to either re-record all that content, or they needed a way to use an AI API to strip it out at scale.


And so that was an interesting use case where he came in, we allowed him to take that background music out, put in new background music, and do it at scale, and instantly they save their catalog of all these podcasts. So that's an example of how some of these tools can go beyond music and serve other use cases. In a lot of ways, music is really one of the most challenging, because the quality matters so much, and that's why, as we think about developing things like voice synthesis and voice models, it's important for us to partner with the music industry, because that's the way you get the highest quality content, right? When rights holders and artists get paid, the side effect is that you also get good quality content to train models that are going to be the most valuable to users as well. So there's really a lot of partnership and collaboration that happens in creating the best possible product.


0:39:09 - Tristra

So how do you define quality? That's a hard, hard question, right? And I want to give you a little bit of a lifeline here by saying I don't necessarily mean, you know, objective or subjective ideas of this is good music versus bad music. How does Moises define quality? As we move forward and start to understand how AI models might work in the music business, what parameters should we start setting? What should people be thinking about in terms of quality?


0:39:48 - Spencer

Yeah, I heard a great podcast the other day on this. They were talking about, what was it, if we set all the ethical pieces of voice synthesis aside, their complaint was that the quality of the voice-synthesized models didn't sound right. So when you took The Weeknd's voice or Drake's voice and created these things, the problem was it didn't sound good. That was the core problem. There are ethical things, which are important and critical, but the quality has to be good as well. When you hear it, your brain has to not think something is wrong with this. It needs to pass that uncanny valley of voice synthesis, and you need to arrive on the other side and say, oh wow, this is a viable replacement.


Now, the only way to get there is to have really high-quality training material, clean examples of that voice, and it's not five seconds of audio, it is hours of audio to really match it. Then you have to understand that when you create this, you're not actually replicating that person's voice. That voice includes all sorts of things, like their persona, values, beliefs. What you're replicating is the timbre of their voice. It's one aspect of what makes a voice. Now, it's an important aspect, absolutely, and it can create incredible products. But in order to get that high-quality product, not only do you have to have that great, high-quality timbre model, you also need a talented artist that you can wrap that voice around.


So somebody still has to sing, someone has to create the foundation, and so you're not taking artists out of it. You're changing the artist's experience by giving them some new tools that allow it to be more flexible.


0:41:50 - Tristra

That's a really interesting point that I don't think many people make when they talk about voice clones or voice synthesis: just like with any other part of the AI process, it's garbage in, garbage out. So if you're a terrible singer, your voice clone is also going to sound terrible. It'll just kind of sound like Bad Bunny, right? That's the thing.


0:42:10 - Spencer

And that's exactly right, and I've had this experience firsthand as I've played with our own voice synthesis model. I sat in front of my microphone and practiced singing, and then I'd wrap these incredible artist voices around it, and it solved some of my voice problems, but 80% of my problems remained, right?


0:42:31 - Tristra

So the talent has to be there to make it work. Wow. I think that's something that's so important to remember. Everyone was kind of wowed by things like Heart on My Sleeve, just to use the most obvious example, but if the songwriting isn't good and, as you said, if the performance behind the model isn't compelling, you just have a gimmick, and maybe a really crappy-sounding gimmick. So that's really interesting to think about.


0:42:59 - Tristra

So quality, in some ways... AI music should be judged by the same parameters that we use to judge other kinds of creative expression. Does it move us? Does it stick to certain norms, wherever we may put our musical norms, because those vary around the world? But that's a really, really cool point. All right, so are there any other interesting things that make you go, wow, in five years we're going to be looking at a very different industry? Are there any unexpected moments, when you all at Moises are talking about the future of music and AI technology, that you're really excited about and that you're not really hearing a lot of other people talk about? Any kind of cool hidden future trends you think everyone should start thinking about?


0:43:53 - Spencer

Yeah, I would say a couple of things. One, we are all amazed at how quickly it's going. It's a flood of new tools, and what's going to differentiate them is their quality and the relationships they have with the rights holders and artists. So that's what we're leaning on heavily: how do we make sure we're producing a high-quality output, and that we're doing it in the right way? I think that's going to be a trend you're going to see across all of AI: quality matters, it's always mattered, and doing it in the right way makes for a sustainable solution. I think the other part that impresses me is that this is a disruptive technology that is changing not just music, but changing art, changing writing, changing all of these different pieces, and we have some interesting precedents for technology doing this. I remember, well, I don't remember, I wasn't around, but when photography came out... Exactly.


Everyone was like, well, what's this going to do to the artist? I can take a picture of things and now have this perfect replication of it. Well, high-quality artists, we still buy their art. We still want it. We want that personal connection, we want value, something that's creative and high quality, and that will always matter. That will always matter throughout this whole thing.


0:45:22 - Tristra

I think you're so right and I just recently came across some of the writings of this guy, Nadar, who was a French critic and early advocate of photography, and he would get up in hot air balloons and take the first aerial shots and did all sorts of really creative, quirky things with photography.


But he said basically everyone was just unsure even how to process it. It was like this blinding flash of novelty that people couldn't figure out how to wrap their brains around, let alone use effectively. So we don't know who out there is going to take AI and get up in the equivalent of a hot air balloon, or do other things that we haven't even imagined yet. But in some regards, I think we've got to go back to what you were saying earlier, which is that there will always be deep humanity at the center of the most successful AI experiments, and that's a really important thing for us all to keep in mind before we completely freak out and go throw our shoes into the escalator, or whatever it is, or get too crazy about the disruption. Because sometimes disruption is actually... we need a better word, because disruption sounds just so angry, aggressive, destructive. As opposed to, maybe, AI could bring lots of eruption.


0:46:52 - Spencer

I think you're touching on an excellent point here, which is that I think we lack the vocabulary to describe AI in a meaningful way. When people hear AI, a lot of the time they're thinking generative AI, they're thinking replacing humans, and I think that is only a narrow aspect of it. I think the smart companies are thinking about collaborative AI. How do we enable creative potential? How do we take out and replace the tedious parts of a job?


How do we make musicians better at what they do? So I think in the next year or two there's going to be this effort to establish the vocabulary to accurately describe AI, so we can communicate about it more effectively, because I think we lack it today.


0:47:39 - Tristra

That's a great point. Thank you so much, Spencer, for talking about AI and music and all the interesting new use cases you're seeing pop up.


0:47:50 - Spencer

Yes, happy to. Thank you for having me on.





Let us know what you think! Tweet @MusicTectonics, find us on LinkedIn, Facebook and Instagram, or connect with podcast host Dmitri Vietze on LinkedIn, Twitter, and Facebook.

The Music Tectonics podcast goes beneath the surface of the music industry to explore how technology is changing the way business gets done. Weekly episodes include interviews with music tech movers & shakers, deep dives into seismic shifts, and more.
