
NAMM 2026, pt 1: AI’s Inflection Point (ft LANDR and Yamaha)

  • Writer: Evan Nickels
  • 5 days ago
  • 24 min read

For the next two weeks we’ll be talking about all things NAMM and getting into some of the exciting innovations we spotted on the show floor, along with some trends that we noticed this year, starting with AI's inflection point in the music industry.


Last year, the conversation around AI was tense, with creators expressing fear of being replaced. This year, we saw more AI tools designed to support creators, in some cases integrated directly into the software and hardware that musicians already know and use. In this episode, you'll hear two conversations from NAMM that capture this shift.


Daniel Rowland from LANDR discusses how creator-first AI tools are evolving, LANDR's new Layers feature that adds real musician performances through AI, and why the technology is becoming less about replacement and more about expanding creative possibility. (Recorded in the John Lennon Educational Tour Bus).


Jun Usui from Yamaha demos a prototype that integrates Boomy's AI sample generation directly into Yamaha's Seqtrak hardware, showing a glimpse into a future where AI lives in your instruments, not just the cloud.






Listen wherever you pod your casts:



Looking for Rock Paper Scanner, the newsletter of music tech news curated by the Rock Paper Scissors PR team? Subscribe here to get it in your inbox every Friday!




Episode Transcript

Machine transcribed


[00:00:11] Dmitri: Hey, welcome back to Music Tectonics, where we go beneath the surface of music and tech. I'm your host, Dmitri Vietze. I'm also the founder and CEO of Rock Paper Scissors, a marketing and PR firm that specializes in music innovation. For the next two weeks, we'll be talking about all things NAMM and getting into some of the exciting innovations we spotted on the show floor, along with some trends that we noticed this year.

One of the biggest trends we noticed was that AI seems to have hit an inflection point in music. Last year, the conversation around AI was tense, with creators expressing fear over being replaced by AI. This year we saw more AI tools designed to support creators, in some cases integrated directly into software and hardware that musicians already know and use.


In this episode, you'll hear two conversations from NAMM that capture the shift. First, I sat down with Daniel Rowland from LANDR to talk about how creator-first AI tools are evolving and what changed in just 12 months. Then I got an exclusive demo from Jun Usui at Yamaha, who showed me a prototype that integrates AI directly into their hardware.



First up, we're sharing a conversation with Daniel Rowland, the VP of Strategy and Partnerships at LANDR. For over a decade, LANDR has been building music creator tools, starting with AI-powered mastering and expanding into samples, plugins, distribution, and generative AI features. We talk about how creators are now using AI tools, LANDR's new Layers feature that adds real musician performances through AI, and why this technology is becoming less about replacement and more about expanding creative possibility.

This conversation was fun. We recorded it on the John Lennon Educational Tour Bus.


Hey, I'm here with Daniel Rowland with LANDR.


[00:16:04] Daniel: What's up, dude?


[00:16:04] Dmitri: Great to see you. We're here at NAMM.


[00:16:06] Daniel: Always good to see you. Always good to be at NAMM. Always good to hang with you, D.


[00:16:08] Dmitri: Yeah, likewise. Saw you at CES. Now we're here.

We're sitting in the John Lennon bus. That's pretty fun. They're doing something cool here.


[00:16:14] Daniel: I mean, there is not a cooler place to be than this bus. It blows my mind every time I come on here.


[00:16:18] Dmitri: It's the quietest place at NAMM.


[00:16:19] Daniel: It's by far worth the cost of admission just to get away from everything here.


[00:16:24] Dmitri: Thanks for carving out the time. Yeah. Uh, you're VP of Strategy and Partnerships at LANDR. Yep. Kind of a LANDR mascot, I would say, man.


[00:16:32] Daniel: For about 11 years I've been the mascot running around, yeah, talking about all this stuff. Yeah.


[00:16:36] Dmitri: Now, the Music Tectonics audience mostly knows about LANDR, but let's say we have some new listeners.

What's the soundbite? How do you describe LANDR now?


[00:16:42] Daniel: Cool. LANDR is kind of the broadest ecosystem for music creator tools, right? The idea being that there are a bunch of amazing companies out there doing amazing stuff if you wanna make music, but it's all very fragmented and there are all these different subscriptions. So we kind of take everything from coming up with an idea through, you know, samples and plugins and mastering and distribution, and put it under one subscription.

So it's an easy way to get in and, uh, get everything you need and then tack on all the other cool stuff as needed.


[00:17:04] Dmitri: What's the first thing people usually do when they enter the LANDR ecosystem?


[00:17:08] Daniel: That's a good question. I mean, we're, we're most known for mastering and have been for a long time, so people are always kind of toying around with that.

But really, a lot of our AI tools that we've released recently, some of which are in beta and some of which have been out, people are really in there playing with those to kind of get a vibe on where things are heading, you know?


[00:17:23] Dmitri: And you're not just like a random guy doing this. You're a bit of a producer and engineer yourself.

Tell us a little bit about your story and then we'll dive in with some cool stuff.


[00:17:31] Daniel: Sure. I mean, I came up as a guitar player and, uh, since I was 15, music has been my only job, right? Until I joined LANDR, maybe 10 years ago. So yeah, I still produce and engineer a lot of music, you know, Emmy-winning stuff, Academy Award-winning stuff.

I've got a project up for a Grammy with Star Wars this year.


[00:17:46] Dmitri: Nice. Congrats.


[00:17:47] Daniel: So, yeah, it's awesome. I can keep a foot in that world, right, and also do this fun tech stuff. And as you know, I'm a college professor at MTSU and have been for about 15 years. So I get to dip my toes in all of that.


[00:17:57] Dmitri: I don't know how you do it all, but.


[00:17:58] Daniel: Yeah. But LANDR is the main thing, and I'm really super passionate about, you know, kinda the mission of the company over the past, yeah, 10 or so years.


[00:18:06] Dmitri: So AI is the topic everywhere: at CES, at NAMM, in the rags.


[00:18:10] Daniel: I'll be happy when it's not the topic.


[00:18:11] Dmitri: I know.


[00:18:11] Daniel: Exactly. Okay, what's the next thing?


[00:18:13] Dmitri: But I am curious to hear how LANDR, how you think about AI today. You've got a very artist-oriented perspective. Tell me what your view and philosophy is there.


[00:18:23] Daniel: Yeah, I mean, I tend to approach it with a glass-half-full sort of view, right? And admittedly, for everybody, it's been a bumpy couple of years on the AI side of things.

When you have the Silicon Valley-style innovation thing come into our space, right, where it's move fast, break things, don't license, all that kind of stuff, it really put a bad taste in a lot of people's mouths on the AI side of things, when there's a bunch of really cool stuff you can do with this.

Right? And we are seeing the resolution of a lot of that stuff, right, with the deals with the major labels and the lawsuits against Suno and Udio going away. We're still at the cusp of a lot of that stuff getting resolved. But, um, part of what we're gonna chat about today is, I think, especially on generative AI, people still have this aversion to it a little bit, right?

Because you think of prompting and then, I'm not needed anymore, this thing just exists that I didn't really have much agency over. The reality is, now companies like ours, and there are other ones too, can come kind of backfill that extreme version of what gen AI is with things that actual creators want, like what do I want to use, what do I want this to be?

We can do that now. We just need creatives to be involved in the decision-making process of the tools that get built, and I think that's what was kind of missing from some of those early gen AI things that kind of drove us all nuts.


[00:19:28] Dmitri: So don't throw the AI baby with six fingers out with the bath water.


[00:19:32] Daniel: That's exactly how I would frame it.


[00:19:34] Dmitri: Yeah, yeah, yeah. So like there's this phase of kind of disruption, wild wild west, scraping everything, we're gonna replace artists, making music is boring, nobody wants to make music, that whole perspective.


[00:19:46] Daniel: No.


[00:19:46] Dmitri: And now it's saying, well, not all of that is true. It doesn't have to be that way, but what are tools that creators might actually want?


[00:19:53] Daniel: Yes. Like what fits into the way that people currently make music now that also will take them to the next stage of how they're gonna make music? We are in a bit of a transition period about how we're all gonna engage with this technology. So it's not totally eschewing, you know, some of what can be done; it's more how do you incorporate it into the way that we all work, as opposed to forcing us into some way that we're not used to.

It's uncomfortable, honestly.


[00:20:15] Dmitri: Yeah, you were talking about this a little, but I wanted you to go into it a little more about what's changed over the last year. Like, what have you seen that has gone from that kind of fear, disruption, taking sides, to something that feels like a more mature, more palatable, more useful perspective and implementation of AI?


[00:20:34] Daniel: It's fascinating. I was on a panel about on-device AI here with some awesome people. And one of the things is, it was in the same room where I was on a similar panel last year, right? And the temperature is so dramatically different. I was talking mad crap about Suno last year, because that's when that quote had just come out.

Oh yeah, the CEO, of course, about, you know, people not wanting to make music, and I understand what he was trying to say, but he was also talking to investors when he said that, right? He was pitching to investors, subtly, but I get kind of the subtext of that. The audience was like, yeah, screw Suno back then.

But this year, much different. I mean, so many of the songwriters that I talk to and work with now, they use Suno all the time, right? Not to text-prompt music into existence, but to iterate on ideas in different genres. And I always use the Childish Gambino, Donald Glover quote of "make mistakes faster," right?

Mm-hmm. The way AI can be good, whether it's us or other companies, is that depending on what you're doing in the music industry, you have to work quick, right? And there's high-volume output. I mean, if you're in K-pop, as an example, I was on a panel yesterday with a K-pop producer and he's like, man, we have to put out so much content so fast, we don't have time to experiment anymore.

And that really hurts the creative process. AI taking over some of the tasks and helping us iterate allows us to experiment more and really kind of evolve our production, as opposed to being stagnant just because we have to put out so much content. Anyway, there's a lot of different aspects to it, I think.


[00:21:55] Dmitri: And I agree with you. Last year, everybody that was on the music creation side was like,


[00:21:59] Daniel: Anti.


[00:22:00] Dmitri: AI is going on. Yeah. And it wasn't just the replacing-the-job part, it was also the scraping of the content with no attribution, no monetization, which is still an open question in a sense. It's starting to get addressed, there are licenses happening, but it hasn't been fully implemented enough to really know it's gonna happen.


[00:22:15] Daniel: Yeah, those models exist with scraped data right now, right? They haven't included, uh, all these deals that have kind of been done with the majors and whatnot. But, you know, that's why, and every year you could say this, but the next six to eight months will be very interesting as those models come back to people trained on clean data.


[00:22:29] Dmitri: Clean up that relationship, clean up those attributions. Like, can you do it retroactively after it's been scraped, for example?


[00:22:34] Daniel: Well, can you, and those models are gonna sound different now.


[00:22:36] Dmitri: Mm-hmm.


[00:22:36] Daniel: Right? Because you can't use everything under the sun now. You have a much more limited data set.

It also, I think, creates some interesting opportunities for startups to come participate and innovate, in a way that it felt like there were really just two major players before. And now, you know, I think there's room to do other cool stuff.


[00:22:52] Dmitri: Yeah. Yeah. It's interesting too when we talk about it in the context of LANDR, because your mastering tool, the sort of automated mastering, was disruptive at the time, and certain people in the industry were feeling similar ways, but then everyone.


[00:23:06] Daniel: I have the scars to prove it, man. I was like, good lord.


[00:23:09] Dmitri: But then new use cases for mastering emerged. Yes. Like A/B testing. With mastering, you could still use a traditional mastering engineer, but first run it through to be like, well, how are we gonna describe this sound? Yeah. You know, what are we gonna pull up?

What are we gonna deprecate? You know, maybe we're doing some songwriting at the last minute as well, because once we've heard a quick A/B test of mastering, it's a whole different ball game. And it actually could help even traditional mastering engineers.


[00:23:36] Daniel: Yeah, I mean it's, I think anytime you build technology, it's fascinating to see how people actually use it versus what your intent was.

And for mastering, you know, at LANDR, I came on at the first-year anniversary of the company, so I didn't build it from scratch, right? I influenced it a lot over the years, but people have used it for all sorts of things that don't completely cut the mastering engineer out of the equation.

And I use this analogy all the time, and it could pertain to a lot of what's happening in gen AI too, right? 13 years ago, when LANDR launched, it was gonna be the end of the music industry, right? The sky was gonna fall, whatever. And what we've seen, and we've got data to back this up now over a decade, is that it hasn't impacted the mastering industry pretty much at all, right?

What it's done is exposed a whole new group of users to what mastering is, right? They're getting something back that sounds better than what they put in, and they don't have the skills or the money to pay somebody to do that, right? And some of those people move on through their music production journey.

Some are, you know, kind of casually dipping a toe. But if even a percentage of those people that we widen the funnel for come into the more pro audio sector, right, buy a microphone, hire a mastering engineer, become a mastering engineer, wow, become a mix engineer, that, I think, ultimately benefits all of us, and it's proven to.

The same could be the case for gen AI, where instead of playing a video game, I'm gonna mess around making some music, which I wouldn't normally do, because it's so crazy accessible now, even if it's just kind of for fun, to put something on social media. But if you catch that bug, right, and a percentage of those people kind of matriculate on, um, I think it ultimately could be good for the industry. And that's my glass-half-full take on it.


[00:24:59] Dmitri: Yeah, yeah.


[00:24:59] Daniel: But, uh, I want more people involved in making music, period, and expressing themselves through that. And I think they will learn an instrument. It's gonna resonate with people beyond just creating meme music, you know?


[00:25:10] Dmitri: Yeah, yeah. I mean, I think you're right. I think there's something interesting happening with the market growing with some of this new technology, which is super, super intriguing. Um, you were talking about kind of a shift, a shift of temperature, basically, in the field around AI. Let's talk about some real-world examples from LANDR.

How are you guys shifting product and features to respond to those more positive, creator-first uses of AI?


[00:25:33] Daniel: Yes. Okay, so there are good examples here, 'cause that's kind of my job, to go source what I think, and from talking to a bunch of people, what they would want to exist in the world, right? So we have two things that we launched here.

Not shilling for LANDR, but just, you know, two things that we launched. One is called Blueprints, and it's Suno-esque in the way that it's an idea generator, right? If you're just stuck and you wanna click some buttons, pick a genre, pick a vocalist style and come up with some music ideas, maybe put in some lyrics.

You can do that. It's not for releasing music that way, right? It's so that you get, you know, stemmed-out components of a track that you can take and do whatever you want with, which is cool. The thing that I'm most excited about is this new release called Layers. And what it does is, you have to contribute something to it, right?

So you have to give it a beat you're working on, or a singer-songwriter thing, whatever, right? And you pick from real musicians, right? These are actual people, real names, you can go look them up, if you want a guitar player or trumpet player or whatever, and select the range in your song. You can add a trumpet solo, or you can tell the guitar player to play a rhythm part for you, or an ambient part, or a solo part.

You can dictate if they're fingerpicking, you can dictate how they're playing and the dynamics of their playing, and it generates an actual audio file as if a studio musician actually played it. I think that's fascinating, right? It takes kinda the idea of virtual instruments and sampling and takes it to a new place.

But the thing that's the coolest, and I think this is the thing that will be adopted by the most people, is you can take any sample or loop. 'Cause, you know, sampling is a huge thing, right? Using loops is a huge thing, but loops are very fixed things, right? You take it, you loop it out, you know, music's kind of boring that way.

It's not dynamic. It doesn't breathe the way a human would perform it. This will take that loop and it'll actually look at your song and match it harmonically and rhythmically to your track, and it will change over time. So it's like you gave that loop to a real musician and said, I want you to replay this, but actually make it fit my song and have it change.

So in that sense, it's a use of AI where music becomes more musical than what people were doing with these very rigid samples. And that's the future of the music industry when it comes to DAWs and the production we all do: contextually aware music and loops, where everything, every piece of audio and MIDI, will know what happens before it, what happens after it, and what happens vertically in an arrangement, and can adjust to that, whether you recorded it or you got it from LANDR.


[00:27:45] Dmitri: Yeah, I love it. I feel like this NAMM does mark an inflection point.


[00:27:50] Daniel: I agree.


[00:27:50] Dmitri: Uh, I'm seeing that with, uh, other conversations.

Even the hardware. There are a couple of hardware companies that have AI chips built into prototype hardware. So you guys are doing it in software, on the web and stuff like that, and there's a hardware component where it's like, oh wow, I roll into Neutone and there's very cool stuff.

Yamaha has a sequencer that has some AI too. Yeah, there you go. So yeah, I'm hoping to go check those out later today as well. But it feels like an inflection, and it kind of felt like it had nothing to do with the hardware industry before.


[00:28:18] Daniel: Oh, yeah.


[00:28:18] Dmitri: Gear. But now it's like onboard AI is interesting, because it doesn't feel like it's out in the wild west.

It's really much more controlled. The models are specific.


[00:28:27] Daniel: Yep.


[00:28:27] Dmitri: You know where it's coming from and then you kind of own it when you make it 'cause it's on an instrument as opposed to in the cloud.


[00:28:32] Daniel: Yeah, it feels very different, right? And the way this is all gonna be successful, ultimately: the only reason so much AI stuff is in the cloud right now is 'cause, A, our computers don't have the processing to support those models, and B, those models are quite large, right? Well, the models are getting smaller, and the GPU power in our computers is getting more robust.

Exactly. So it's all gonna end up on device, and that's where you have low latency, this stuff can work in real time, it's more secure. There are so many advantages to that, right? Including how much this stuff costs right now if you want to generate a piece of audio, just talking about generative audio for a second.

Those models run on GPU clusters, and that costs money, right? So every time you press generate, you have to pay. There has to be a subscription model for that, 'cause you have to pay for it every time. And what's bizarre right now is when you press that button, you've paid for it before you've even gotten to hear it.

Whereas with a sample, if I find a piano loop I like, okay, I spend a credit, I get that loop, I can use it. Yeah. Generative AI doesn't work that way. You can't hear the output until you've actually used, you know, GPU processing, which costs money. So it's backwards from the way we're even used to working, you know? That's interesting.


[00:29:39] Dmitri: Economics are shifting. So if it's on device as well, you're paying for power, but that's device power.


[00:29:44] Daniel: Well, and then it's the way we're used to using everything. Like when I use a virtual instrument, I don't pay every time I play the virtual instrument. That's preposterous, right?

So the big leap for me, and this is gonna happen in the coming few years, is when we start to see these models move on device and it becomes just the way that we're used to working, inside our DAW, inside the creative space that we live in, and not something that's external to that.


[00:30:03] Dmitri: Yeah. Cool. All right, one last question as we close out, a bit more future-facing.

Looking ahead, where is this all heading? What should, uh, creators be paying attention to? What risks do companies face if they don't engage seriously with AI and creator trust?


[00:30:19] Daniel: Yeah.


[00:30:20] Dmitri: I'm curious. Let's end with a little bit of that. Yeah. What should we be keeping an eye on?


[00:30:23] Daniel: Well, the creator trust thing is huge, right?

You know, lots of companies would love to dip a toe more into the space, but they're really concerned about disrupting their current business. So people understandably have some anxiety and concerns around a lot of this technology, because there are companies who've done really bad stuff, right? Yeah.

So of course they should have that, myself included. But I think what you're gonna see in the future, and I've had so many meetings with DAW companies here as an example, right, is that the technology will just be transparent to you. It will exist in the environment you're in.

We won't even be calling it AI; it's just software that runs inside other software, right? All this branding crap is gonna go away. Things will just work more fluidly, and you can engage with some of that stuff, or you can't, or you won't, right? But it will all live where you work.

And I think that's the hurdle that we need to get over, so that, you know, it doesn't feel like this separate thing. And that's coming in the next, you know, let's say three to five years maximum. But every DAW that we use within the next few years will have substantial, even generative, AI components to it.


[00:31:23] Dmitri: Yeah.


[00:31:24] Daniel: Pretty wild. It's gonna be table stakes, right? Everyone has to, as soon as somebody does it, everyone else has to kinda level up and match that.


[00:31:29] Dmitri: Yeah. So there could be some creative music and new genres and new access, dude, and all sorts of cool things happening.


[00:31:35] Daniel: I find that so interesting.

I mean, I know this is a cliche, but if you have taste, right, and I've been saying this for almost 10 years, curation is creation. Look at DJ culture, right? I don't have to make something from scratch to be able to impart my taste on something. Just selecting music and presenting that to an audience can move them, right, and it expresses something for myself.

AI kind of is that, right? It opens up the ability for people who would never dream of making music, but do have something to say, to participate in that. And like I said earlier, some of those people are gonna go on to be well-known artists. They're gonna go on to be musicians, even if they didn't start that way, in a way that I think is gonna have a positive impact on the music industry after we get through all this BS that we've been dealing with over the past few years.


[00:32:15] Dmitri: Yeah. Amazing. Well, Daniel, thanks for taking time out at NAMM, great to see you. Always happy to hear kind of where your thinking is. I think you always push me forward when I have these conversations.


[00:32:24] Daniel: Ditto, same. So great to see you, man. Appreciate it as always. See you, Music Tectonics. Yes, mostly, definitely. Of course.




[00:32:31] Dmitri: Next up, we have a special demo from Jun Usui at the Yamaha booth. Jun showed me a prototype that integrates Boomy's AI sample generation directly into Yamaha's Seqtrak instrument. It's a glimpse into a future where AI isn't just in the cloud, but embedded in the instruments that musicians use every day.

You can also see this demo on our YouTube.


[00:32:52] Jun: Hi, I'm Jun, uh, from Yamaha. So today I'd like to introduce, um, Seqtrak with Boomy integration. Boomy is an AI sound generation company, and, uh, they have, um, a loop magic feature that is a sample and loop generation AI, very high quality, so, uh, we are collaborating on integrating the Boomy function into the hardware product.

So, um, what we wanna do is, uh, kind of expand the creativity of producers. Yamaha is seeking ways to expand the creativity and imagination of users and producers. So, uh, I think generating sounds using prompts can expand the creativity and the way of creation.

So I'd like to show how it works. So here's Seqtrak. This is a kind of groove box. There are tons of sounds inside, but I think still, you know, everyone wants the sounds they like. Yeah. So here's the prototype of the AI sample generator. It's not available in the market, but you know, it's a project.

Yeah. Here. Um, there is a single mode and a pack mode. Single mode generates single sounds like kick, bass, claps, snare, something like that. Pack mode generates all kinds of samples for a Seqtrak project. So for example, here I'm prompting something like, uh, a pack with a dark, functional character, then press create.

Now it's generating the sounds.

And yeah, so this prompt gets broken down into these nine sounds. So, for example, a kick with a soft punch and a gritty front edge, a snare with a dry crack, that kind of texture and body. So, um, a kind of AI agent generates each prompt here, and now it's generating, like this. And if you don't like the sounds, you can edit these prompts here and regenerate.

Yeah. Awesome. It's generated, so we can preview it here.


[00:35:24] Dmitri: So you were previewing each individual sound, and then it puts the whole loop together. So you can both look at the individual stems as well as the full loop.


[00:35:33] Jun: So this is kind of just a preview of, you know, how it works, how it combines the sounds.


[00:35:38] Dmitri: Oh.


[00:35:39] Jun: So, um, if you like a sound, for example, I like this kick sound, you can name it here, "GK" for example, and add it to the list. So now you can use the sound. So here's the kind of sound pool here. Yeah, these are sounds that I generated. So here, a punchy, punchy kick.

Yeah. This one. These ones I generated, and press load, and the sound is loaded into this product. So now I can use the sound. Okay, here's the editor. The sound is in there now, and you can play it like this.


[00:37:08] Dmitri: Yeah. Yeah. That's very cool. Love it. Yeah. So what do you think is the use case for this prototype? Mm-hmm. Uh, normally people would maybe go to Splice or another platform and download specific sounds. Mm-hmm. Then they have to load them into the device, and they can only choose from what's already available in the catalog, versus this is prompt-based and it goes directly in, is that right?


[00:37:32] Jun: Yeah. So it's integrated into this product, and also, uh, you can get whatever kind of nuance you want. So if you'd like, you know, a little bit more lo-fi, you just add lo-fi to the prompt, or whatever prompt you want. If the generated sound is a little bit different, you can just, you know, add to the prompt.

So it's a little bit different from, you know, just selecting sounds; you can generate what you want, and also it can kind of accidentally generate something different from, you know, your inspiration maybe. Yeah. But it offers a kind of, how to say, possibility to expand, you know, your source of inspiration maybe.


[00:38:16] Dmitri: Yeah.


[00:38:16] Jun: So that's, uh, also interesting, to use kind of AI for creation.


[00:38:22] Dmitri: Yeah. And, um, to me this is like an inflection point at NAMM, where you have onboard, almost onboard


[00:38:30] Jun: right,


[00:38:30] Dmitri: uh, AI models with physical hardware. Mm-hmm. Uh, do you also see that as a shift from what we've seen up till now with AI music?


[00:38:39] Jun: Yeah. So, um, yeah, I think, you know, it requires a lot of computational power to, you know, use the AI model, basically. But this is a kind of good combination of software and hardware, where we can utilize the power of the computer and the kind of companion app. And also, yeah, of course we'd like to, you know, integrate it into this kind of, you know, standalone hardware.

But I think this is a kind of good way to, um, integrate AI technology, and we can understand what kind of feeling we can get, how we can feel using AI with the hardware. So I think, yeah, that's very interesting to see, using AI in the hardware.


[00:39:29] Dmitri: Yeah.


[00:39:29] Jun: Yeah. So I think that that's very interesting.


[00:39:32] Dmitri: What's interesting to me about it is, when it's out there in the cloud and you're typing prompts, you're not sure where the AI models came from, whether they've been, uh, attributed, whether the creators of what the training data was trained on know about it. Yeah. It sort of feels like the Wild West.

It's just out there. You don't know whose music you've created on top of, and you don't really know if you own it as well.


[00:39:57] Jun: Right.


[00:39:57] Dmitri: And this is interesting. Once it's on gear, where it feels like it's a closed model, uh-huh, it changes how you feel about what you're making with it as well. It feels more like mine, in a way.

Mm-hmm.


[00:40:07] Jun: Mm-hmm. Yeah. So, you know, this is kind of not a whole song but samples, so you can, you know, use your own combination of the samples. I think it's a little different from generating a whole song. And also Boomy is, um, training a model using very high quality sounds and also very ethical data.

So that is why we can now collaborate with Boomy.


[00:40:32] Dmitri: Yeah.


[00:40:33] Jun: And, um, yeah, the sounds sound amazing.


[00:40:35] Dmitri: Yeah. What happens after the prototype? This is a prototype, it's only available here in this studio at NAMM. Yeah. What happens next?


[00:40:43] Jun: Yeah, so, um, yeah, this is just, uh, you know, we just built a prototype.

So I'd like to, you know, hear about how people feel, how producers feel. And, uh, what's the best way to integrate? Is it okay for just software and hardware integration, or is it necessary, you know, to integrate it in the hardware? Or from the functional point of view, is it enough, or, you know, maybe they need more kind of additional features.

So we are now getting feedback, and, uh, I can't say, you know, when it's available, but yeah, I'd like to continue this prototype project.


[00:41:23] Dmitri: Yeah. So, um, not specific to this prototype or this partnership. If you just had to guess, how many years off are we from shipping a device, a controller, a sequencer, a synth that has the model built in on a chip that's not even connected to a computer?


[00:41:39] Jun: Yeah, of course it depends, you know, on the function, how complicated the processing required is, but you know, now that kind of paradigm is changing, you know, from cloud models to edge computing. So, yeah, I think, you know, it's difficult to say when, but in the near future, I think more kind of edge computing, hardware AI things will come.


[00:42:06] Dmitri: Yeah. Yeah. Cool. Awesome. Jun, thank you so much for the demo. Congratulations on the prototype. I can't wait to see what happens next.


[00:42:14] Jun: Thank you so much.


[00:42:17] Dmitri: Thanks for listening to Music Tectonics. If you like what you hear, please subscribe on your favorite podcast app. We have new episodes for you every week.




Did you know we do free monthly online events that you, our lovely podcast listeners, can join? Find out more at musictectonics.com. And while you're there, look for the latest about our annual conference and sign up for our newsletter to get updates. Everything we do explores the seismic shifts that shake up music and technology, the way the earth's tectonic plates cause quakes and make mountains. Connect with Music Tectonics on Twitter, Instagram and LinkedIn.

That's my favorite platform. Connect with me, Dmitri Vietze, if you can spell it. We'll be back again next week, if not sooner.






Let us know what you think! Tweet @MusicTectonics, find us on LinkedIn, Facebook and Instagram, or connect with podcast host Dmitri Vietze on LinkedIn, Twitter, and Facebook.

The Music Tectonics podcast goes beneath the surface of the music industry to explore how technology is changing the way business gets done. Weekly episodes include interviews with music tech movers & shakers, deep dives into seismic shifts, and more.
