The Self-Recording Band

206: A.I. Talk: The Tools We're Using, Thoughts On The Future Of A.I. In Music Production, And Why We're Not Afraid Of It

January 28, 2024 Benedikt Hain / Malcom Owen-Flood Season 1 Episode 206

Here are three next steps for you to take:

1. Get our free video training & checklist,
"Standout Mixes - The DIY Musician's Guide To Exciting Mixes That Stand Out And Connect":
theselfrecordingband.com/standoutmixes
-
2. Apply for The Self-Recording Syndicate, our personalized coaching program!

It all starts with a free clarity call where we talk about your music, give you feedback and a step-by-step roadmap that you can then implement on your own, or together with us.

Best case: We end up working together and completely transform the sound of your music forever.

Worst case: You get an hour of free coaching, feedback and recommendations for what to do next.

Sounds fair? Cool. Apply now and book your free clarity call:
theselfrecordingband.com/call
-
3. Join the free Facebook Group ("The Self-Recording Band Community"):
theselfrecordingband.com/community
--

Episode show notes:

This is a purely speculative episode on the current state and future of A.I. (artificial intelligence) in (rock) music. 

We're talking about what it can and can't do (yet), where we think things might be going and whether or not we all need to be afraid of it.

Again, we don't know nearly enough about this subject and the tech under the hood to make any serious predictions here. Hardly anyone does. 

But we are already using A.I. tools on a daily basis. In our personal lives, in our businesses and when working on music or other audio & video projects, like this podcast. So we have some experience and have done enough research and testing to at least have an opinion. 

And we're hoping for certain tools to be released soon, so that artists, producers and engineers can be even more creative with even less friction.

Let's dive in!

PS: Please join the conversation by leaving a comment, a rating and review, or a post inside our free Facebook community.

--

For links to everything we've mentioned in this episode, as well as full show notes go to: https://theselfrecordingband.com/206
--

If you have any questions, feedback, topic ideas or want to suggest a guest, email us at: podcast@theselfrecordingband.com

Speaker 1:

And some person still had to tell it what to do, what they wanted it to be like, what they wanted it to feel like. They have to make a decision if it's good or not, and I never viewed it as a bad thing if someone was able to do it faster and quicker. I always view it as a cool thing. So how awesome would it be if people could just talk about or write about their feelings or what they want to hear from the song or whatever, and then it would just happen? That's what we're trying to do anyways. We're just limited by what we have available right now, but that's what we're trying to do. We just put our emotions into a song and make them come out of the speakers, and we use whatever advantage we can get to do that, and so I actually think it would be awesome to just have a machine buddy who would help you with that. This is the Self-Recording Band Podcast, the show where we help you make exciting records on your own, wherever you are. DIY style. Let's go. Hello and welcome to the Self-Recording Band Podcast. I am your host, Benedikt Hain. If you are new to the show, welcome. So glad to have you. If you're already a listener, welcome back. We appreciate you listening again.

Speaker 1:

Today we're going to start talking about a thing that people have been talking about constantly for a year and a half or so now, so it's a no-brainer kind of episode. It's about AI and music. I don't even know if we ever talked about it. Maybe we had an episode, but if we had, it was just scratching the surface, so we kind of avoided that topic for a while. But now it's time to do that episode, because everyone's talking about it, they won't stop, and it's also gotten to a point where it has become really impressive and powerful. And I'm mainly interested in hearing my friend and co-host Malcolm's thoughts on this, because I clearly didn't do as much research as you did, Malcolm. So, yeah, welcome to the show, Malcolm. Glad to be doing this with you again, and I can't wait to dive into the whole AI topic today.

Speaker 2:

Yeah, it's been a long time overdue, actually. I think I'm more excited about AI than most people on this planet. I'm pretty deep in right now. I've been using AI a lot, every single day, for weeks and weeks now, and I use it for everything, whereas I think a lot of people can't figure out how to use it at all. And that's not a bad thing; that's exactly where I was when I first went to it as well. I was like, okay, cool.

Speaker 2:

It's a chatbot. You know, it answers the stuff I type into it. But how is it actually useful? It's taken me quite a while, and a lot of experimentation, to figure out what it can be useful for, but also how to get it to do what you want. There's a learning curve to using it as a tool. But slowly, as I've experimented with it, my brain started getting better at recognizing ways that it can help me solve problems, essentially, or help me be more efficient, and also at visualizing ways it could be useful in the future as well, which is the whole thing about this topic of the future of AI in music and what it could potentially be doing for us not too long from now. And I am so excited by those possibilities.

Speaker 2:

I think it's going to transform the entire recording industry massively. I think it's going to remove the barriers that are stopping a lot of people from being able to record themselves. Yeah, I'm really excited about it, and personally, Benny and I are going to get into this, but I'm not threatened by it at all musically. I know people are going to make some really bad AI-generated music, but I'm not worried about it, and we'll get into that too. There's a whole thing to this, 100%.

Speaker 1:

Yeah, I've got to say that I use AI all the time as well. I just haven't used it as much in music or mixing, right?

Speaker 2:

Yeah, yeah, exactly.

Speaker 1:

Yeah, like you said, my brain got way better at noticing things that could be done by AI versus having to do them myself, so I use it all the time. I just haven't looked into it much when it comes to music and art in general. The music space especially, aside from plugins.

Speaker 1:

But we'll get into this; there are a couple of things I've used. So what we're going to be talking about is the current state and future of AI in music, and rock music in particular, because I think there's a difference right now, at least: a gap between what it already does in other genres versus what it does in rock music, or how many people apply it there. What it can and can't do, where things are going, or where we think they might be going (we cannot predict the future, but we can speculate), and whether or not we need to be afraid of it. Like you said, Malcolm, we're going to discuss that as well. So what do we start with? Should we talk about where it is right now, or should we talk about our general thoughts on it and whether or not we're afraid of it?

Speaker 2:

Maybe general thoughts, because I think it is so polarizing, and I totally understand why. There are already some terrible things that people are using AI for.

Speaker 2:

I can't even. I'm just like, wow, those people suck, why would they do that? But that's the same with anything, I guess, you know. And then, of course, the music industry is literally the most notoriously scared section of people when it comes to advancements in technology. We can look back all the way to radio coming out.

Speaker 2:

When radio became a thing and they started wanting to play songs on the radio, the musicians didn't want it. That was the common stance of musicians everywhere: no, because then nobody's going to come see me play live, live music will die. What did they know? They just couldn't see that the radio was going to make them famous, you know, and actually give them an industry. Radio was a good thing. Even recording in general: they were scared of being captured, and that it would take away from their jobs, which obviously is the entire point.

Speaker 2:

Then multi-track recording, the arrival of digital recording: there was huge pushback and delay on embracing that. Pitch correction, editing, all of these things that musicians seem to naturally be scared of, and then eventually realize, oh, this actually makes things better, we should probably get on board. And streaming and downloading music online.

Speaker 2:

That's the best example, because I think people still think it's bad, and I understand why. But it's only bad because we were too slow to adopt it and figure out that we should own it, and Apple did it for us and stole it. Well, I mean, LimeWire did it first, and we were all like, oh, stop LimeWire. Meanwhile, Sony or some record label should have just made iTunes for themselves, not set the price to 99 cents a song, and made a fortune. We could have been so much better off if musicians had been in charge of this tech, instead of just being scared of it and then letting some tech giant do it, you know? And now we're making less than a penny, a lot less than a penny, for a stream, you know.

Speaker 1:

And also, what we have to say here, to be fair, is that all of these new, advanced technologies, when they came out, all of them were abused at some point.

Speaker 1:

It took a while for people to figure out how to use them properly, which added to the fear, and which sort of was "proof," quote unquote, for those people that it's a bad thing. So when multi-track recording came out, even just stereo, people didn't really know what to do with it. They put all the drums on one side and the rest on the other side and stuff like that, and it took a while for them to figure out how to properly use it. Same with any sort of surround format, or early digital, or vocal tuning: people abused the vocal tuning thing until it became a thing, a genre, a style, and then they did it on purpose. Every single time, it was abused, or it took a while for people to realize how to use it.

Speaker 2:

And so that added to it.

Speaker 1:

And it will be the same thing here, like there will be terrible AI music, but then there will be people who will figure it out.

Speaker 2:

How to use it in a cool way, yeah.

Speaker 1:

And it's the same thing as they always say in business as well: AI is not going to take your job, but someone who knows how to use AI will. That's the thing, right? So it still takes skill, and a person, to operate it properly and engineer the right prompts and have the taste to actually be able to tell if the result is good or not. That won't go anywhere anytime soon. But there will be people who make stuff faster, more efficiently, and maybe even more creatively, for various reasons, than people who refuse to use it.

Speaker 2:

So, absolutely, yeah. And I'm going to plug my YouTube channel here, because this is something I'm going to be exploring deeply on there, and already have been, namely in my latest video, where I talk about how I think it's actually going to make recording more natural and more human than it has been in a long time. I'll explain that statement a little bit later, when we're in that section of this chat.

Speaker 2:

But how we use it is going to kind of determine everything. And there are definitely people listening to this right now who are strongly opposed to AI, and that's totally fine. I am trying to convince you otherwise, but this is going to be a debate, and people should have different opinions, you know. And there are probably really good reasons why I should be more cautious about AI that I haven't been exposed to yet. So I just want that disclaimer out there: this is our opinion, but there's so much more to this than what we know, of course. So be open to what we're saying, but unfortunately we can't hear you talking back to us on a podcast.

Speaker 1:

Yeah, exactly. And if you have any thoughts on this, as always, feel free to comment below, send us an email, or post in our Facebook community. We'd love to hear your thoughts on this, because we can really only speculate, and we can share our experience with what is available right now, but I'm happy to have a discussion there. We're not telling you how this all works and where it's all going to go. We're just telling you what we think it could do and what we have done so far with it. That's all. And so I think we have to separate two different kinds.

Speaker 1:

I don't know if you agree, Malcolm, but I view it as two different kinds of AI tools that are available. One is AI tools that have some sort of machine learning implemented, some sort of intelligent reaction to whatever the input is, and then make a decision based on that. And then there's generative AI, where you tell it what to do and it creates something from nothing.

Speaker 1:

These are two different things to me. So there are various AI tools and plugins that I use frequently, that I love, but they don't create anything for me. They just help me do my job quicker and better. And then there's generative AI, which many of you have probably played around with, like ChatGPT or image generation, where you enter a text prompt and it creates something from nothing. That's also impressive and awesome, but it's a different thing.

Speaker 2:

Yeah, that's a really great distinction. And I think most people are scared of that generative AI in the music space. I got a comment on my video about this: aren't you scared about people being able to just type a few prompts and make good music, like, that's a hit? And I'm not. Well, I think that will be technically possible, like, write a song in the style of Drake or something, and it spits you out a song that sounds like that. People are already doing that. There are millions of people that are just stealing people's beats and rapping something else over it and keeping the hook, you know. It's rampant already, and this is just going to be more unique than that, because it's going to create something original, at least.

Speaker 1:

Yeah.

Speaker 2:

And you know what?

Speaker 1:

The whole question of "aren't you afraid of that?" is kind of weird. No, it's not weird; I understand where it's coming from. But I think there's a mindset problem there as well, because even if that would happen, even if it was possible to just add a few prompts and have it create a really awesome song, why would I be afraid of that? That would actually be really cool to me.

Speaker 1:

So I'm never afraid of great results and more music and art out there, and I don't necessarily care how the person or the AI created it, because some person still had to tell it what to do, what they wanted it to be like, what they wanted it to feel like.

Speaker 1:

They have to make a decision if it's good or not, and I never viewed it as a bad thing if someone was able to do it faster and quicker. I always view it as a cool thing. So how awesome would it be if people could just, you know, talk about or write about their feelings or what they want to hear from the song or whatever, and then it would just happen? That's what we're trying to do anyways. We're just limited by what we have available right now, but that's what we're trying to do. We try to put our emotions into a song and make them come out of the speakers, and we use whatever advantage we can get to do that. And so I actually think it would be awesome to just have a machine buddy who would help you with that.

Speaker 2:

Right. I guess the point I could see being made is that no human is on the recording, you know. And in that area, I actually think people are overestimating how good AI is.

Speaker 2:

Will it be able to make a song that does what you asked and sounds good? Yes. But will you be able to tweak it to the degree that an actual musician is going to want? Not really. You're not going to be able to get that guitar tone you want in your head. It's going to give you a cool guitar tone, probably, but you can't tweak it. You can't grab the knobs of your amp, you can't play it differently and decide, okay, let's build into that chorus. It isn't actually better than the human ability to play.

Speaker 1:

And that's not even just the tones, let alone the feeling, the playing. Even if you could nail the tone, every single guitar player on this planet is so different and has their own unique thing that they put into the music that AI could never do. It can do something awesome, but it will never be you. That's the thing. It might be as good as you, but different; it will never be you and your emotions, your feelings. So this is why I'm absolutely not afraid of it: because it doesn't have your brain. Even if it is as advanced as a human, it will still not be the exact same human.

Speaker 2:

Totally, 100%. And at a certain point it actually won't even be a faster tool, for how granular people go when they're making their recordings and their songs. It'd be slower. It'll be able to spit out a song way faster than you, but as far as the individual decisions go, that's not going to be a real solution, in my opinion.

Speaker 1:

Cool. So yeah, go ahead, Malcolm, tell us about some interesting tools you've recently discovered and use.

Speaker 2:

Yeah, so the fear of AI has been discussed.

Speaker 2:

But now let's talk about AI that already exists, and this is, I think, a good point for AI not being that scary: the tools that already exist aren't really doing anything that we couldn't do before, but they're doing it better now, and faster. The main two that I use on a regular basis are both by Waves: Clarity Vx and Clarity Vx DeReverb. One is noise removal: so if I'm recording a chat like this but I'm not in a studio, and there's traffic noise and stuff like that, it removes that from the dialogue recording in a really efficient and quite clean way, without it sounding bad. And the DeReverb one takes the room out of the equation: if I record in an echoey room, I can combat that. Is it perfect? No, but it's pretty darn good, and it's better than the tech I had before from iZotope, for example, which is also really good. And iZotope has its advantages; I'm not trying to crap on them.

Speaker 1:

Absolutely, absolutely.

Speaker 2:

But this is a new version of a tool that I already had. I already had tools that do this. Now I've got these AI ones that do it usually a little better, in my opinion, and definitely much easier; it's like one knob I turn. So has that taken away from the human element of this podcast, if we use the tool to clean up the audio? Not at all, in my opinion. It's just made it quicker, which is going to let us do more human things, like keep chatting and having more episodes. So that is one area of this: AI just doing a better job of the things that we were already doing.
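(The Clarity plugins are proprietary, so we can't show what they actually do, but the one-knob noise removal concept is easy to picture. Here's a minimal sketch using the open-source noisereduce package as a stand-in: spectral gating rather than Waves' actual model, with a placeholder file path.)

```python
# Minimal one-knob denoising sketch, using the open-source noisereduce
# package as a stand-in for a commercial tool (spectral gating, not
# Waves' actual model). Assumes a mono WAV file at a placeholder path.
import soundfile as sf
import noisereduce as nr

audio, rate = sf.read("dialogue.wav")

# prop_decrease is effectively the single "amount" knob: 0.0 = off, 1.0 = max.
cleaned = nr.reduce_noise(y=audio, sr=rate, prop_decrease=0.8)

sf.write("dialogue_cleaned.wav", cleaned, rate)
```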

Speaker 2:

Another one is this tool that Benny and I were both exposed to in Germany, called Moises AI, which will separate the stems out of a song. So you can upload a WAV file and it'll give you back a WAV file for the drums, a WAV file for the bass, the guitars, the vocals. It separates them all and does a remarkably good job at it. But this is also something that existed before AI; it's just that AI seems to be making it happen a little better, again. Is there anything more human about me doing it with worse results using the old software I had, versus this new tool? I don't think so. It's still me clicking the button and trying to get a result, and I'm getting a better result, which is gonna let me do more creative things with those results. So those are the two main ones that exist for me right now as current tools. Is there anything that you can think of, Benny, that you've seen or tried?

Speaker 1:

Not really. I also used the Waves Clarity thing. I completely overlooked the DeReverb one, like I said before we recorded this, but the Clarity Vx, I used that. I tried Moises. I tried a few other of these separation tools. I'm not necessarily sure which ones of these are AI, or implement some sort of AI, versus which don't. Maybe all of them use some version. I don't know enough about the tech under the hood to be able to tell. I just know that Moises does it, and it's awesome.

Speaker 1:

Then what else have I used? I'm sure I've used a bunch of plugins. You could argue about some plugins; I don't know if they fall into the category of AI, but the smart kind of plugins, like Gullfoss EQ and stuff. It's not necessarily AI, I think, but to me this is kind of the transition between what we had before and the intelligent tools we have now.

Speaker 1:

There are some plugins that are in between. Soothe started it in a way, and then Gullfoss and a couple of these, and then iZotope Neutron, or whatever it's called; these sorts of intelligent plugins. I've used all of these. And then there are a few things I wish existed. For example, and we talked about this in Germany as well, I can't wait for someone to implement into a DAW something like smart suggestions, where it's not really doing something to the audio, but something like: hey, you've been doing this same thing 36 times this week, how about automating it this way? And it creates a macro for you, right, and just helps you improve the workflow, like a little efficiency buddy. That would be awesome.

Speaker 1:

Like, not really doing anything to the audio, but some workflow improvements that I could totally see happening. Some of these tools I wish I had, I don't have yet.
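(To our knowledge no DAW offers this yet, but the core idea is simple to sketch: watch the action log for repeated sequences and offer to turn them into a macro. Everything below, action names and thresholds included, is invented for illustration.)

```python
# Toy sketch of the "efficiency buddy" idea: watch a stream of editor
# actions and suggest a macro once the same short sequence has repeated
# enough times. All action names and thresholds are invented.
from collections import Counter

def suggest_macros(action_log, seq_len=3, threshold=6):
    """Return action sequences of length seq_len seen at least threshold times."""
    counts = Counter(
        tuple(action_log[i:i + seq_len])
        for i in range(len(action_log) - seq_len + 1)
    )
    return [seq for seq, n in counts.items() if n >= threshold]

log = ["select_clip", "strip_silence", "crossfade"] * 6 + ["save"]
for seq in suggest_macros(log):
    print("You've done", " -> ".join(seq), "a lot. Want a macro for it?")
```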

Speaker 2:

I've got a whole bunch of conceptual ideas that I think are so exciting. That would just be so good.

Speaker 1:

It's just a matter of time until it happens, I think, yeah.

Speaker 2:

But I do want to give one kind of example that I think helps people wrap their heads around it. Even though we're a music recording podcast, not a dialogue recording podcast, I think we're all familiar with how voices work, so it's a really easy example. Picture a de-esser (you've probably used a de-esser by this point, most people have, anyways), and that is just going to take the Ss and turn those down. Now, all of these plugins right now (I think all of them, anyways) don't actually know what an S is. They're just listening to a frequency range that gets louder when an S happens, and then turning that down. So it's not able to understand the language that's being spoken or sung into it. It's just a frequency range compressor, essentially. That's all a de-esser is.

Speaker 2:

An AI version of that would be able to understand the whole language that it's hearing, and then know: well, that word has an S noise, so I'm going to turn down that section of that word, and that section. And then, if you move that to a breath: well, the AI isn't going to be looking for a certain frequency to turn a breath down. It's going to be like, that was a breath, I understand the audio I'm hearing, that's a breath, I'm turning down the breath. You know, only the breath. It's accurate, as accurate as a human would be going through and manually automating the breaths down, right? Because we can tell what a breath is with 100% accuracy, and an AI that understands what a human voice sounds like in the same way can also do that. Whereas a frequency range is going to have some errors. Even a de-esser, if there's a lot of high end on a certain word, might duck that, even though it isn't an S, you know.
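(To make the conventional side of that comparison concrete, here's a rough sketch of the "dumb" de-esser Malcolm describes: a detector band-limited to the sibilance range that ducks whenever that band gets loud. All parameters are illustrative, and for simplicity this ducks the whole signal, where real de-essers usually duck only the band.)

```python
# Rough sketch of a conventional (non-AI) de-esser: it has no idea what
# an "S" is; it just ducks whenever energy in a sibilance band (~5-9 kHz
# here) crosses a threshold. All parameters are illustrative.
import numpy as np
from scipy.signal import butter, sosfilt

def naive_deesser(audio, rate, lo=5000.0, hi=9000.0,
                  threshold=0.02, reduction=0.4, win=256):
    sos = butter(4, [lo, hi], btype="bandpass", fs=rate, output="sos")
    sibilance = sosfilt(sos, audio)  # detector hears only the S-range
    out = audio.copy()
    for start in range(0, len(audio), win):
        band = sibilance[start:start + win]
        # Loud detector band = "probably an S" (sometimes wrongly!)
        if np.sqrt(np.mean(band ** 2)) > threshold:
            out[start:start + win] *= reduction
    return out
```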

Speaker 1:

All those cleanup and editing tools, basically, are what I would really be excited about. And honestly, now that I'm saying that, I realize I actually use more of them than I thought, because when I edit this podcast (right now, these last couple of episodes, I've edited myself, before we get a new podcast editor soon here), what I'm using is a tool called Descript. It's an AI-based editing tool, and I actually use it all the time without thinking of it, and what it does is exactly these things that you just mentioned. It just does them for dialogue and not for music, but it could totally be applied to music. It automatically detects filler words and removes the ums and uhs in one click. It detects how long the pauses are and then shortens them, you know; it improves the flow of the podcast, basically, with the click of a button. It's text-based editing.

Speaker 1:

I can edit the transcript, and then the video and audio follow these edits, and I can even replace things: you can remove a word and type in a different word, and it generates my voice. It regenerates my voice based on what it learned about my voice, and then I could make it say things that I never said. I don't do it, because it doesn't work for video right now; it only works for audio, and we have a video and an audio version of the podcast. So everything you're hearing is actually something we said. But if it was audio only, I could remove an entire word or whatever and put in something else and correct mistakes that way.

Speaker 1:

Which is kind of insane if you think about it. It's wild. But you could use those things in music, totally.
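(Descript's engine is proprietary, but the core text-based-editing trick is easy to sketch: a speech-to-text model gives you word-level timestamps, and deleting a word in the transcript just means cutting its span out of the audio. The transcript data and file paths below are made up.)

```python
# Sketch of Descript-style text-based editing: given word-level
# timestamps from any speech-to-text model, deleting a word from the
# transcript means cutting its time span out of the audio.
import numpy as np
import soundfile as sf

words = [  # (word, start_sec, end_sec), as a transcription model returns
    ("so", 0.00, 0.21), ("um", 0.21, 0.55), ("let's", 0.55, 0.80),
    ("talk", 0.80, 1.05), ("about", 1.05, 1.30), ("uh", 1.30, 1.62),
    ("AI", 1.62, 1.95),
]
FILLERS = {"um", "uh"}

audio, rate = sf.read("podcast.wav")
keep = [audio[int(s * rate):int(e * rate)]
        for word, s, e in words if word not in FILLERS]
sf.write("podcast_tight.wav", np.concatenate(keep), rate)
```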

Speaker 2:

I think, like, de-breath and stuff like that, 100%. Yeah, you just have to kind of start considering AI as your human computer friend.

Speaker 1:

As weird as that is. Yeah, the studio assistant who is editing stuff for you, basically, just quicker.

Speaker 2:

The more I treat and communicate with AI as a human, as an equal, the better results I get and the more I'm able to understand what it is able to do.

Speaker 2:

And so, I mean, it's a lot to absorb if you're not familiar with it or haven't tried it yet. And of course, there aren't tools yet that will allow you to do some of the things that you could think up, which is probably where we should go right now: into the conceptual stuff. Oh, a few more features that do exist: RipX Pro is one we just recently got put on to, and I haven't even gotten to try it yet, but it seems to be able to do the stem separation thing. But then you could say, okay, the vocal melody, let's change that to a guitar line, and it'll generate a guitar playing that melody, you know. So that's another form of generative AI, and that could be cool, because now you can mix that with the vocal that you recorded and just have a lift, real easy. That's awesome.

Speaker 1:

Dude, there's so much more that I think of now that you're talking about it, that I totally didn't realize. I actually tried one more thing in terms of actual generative AI, where it worked great. What I used to do, when someone sent me a song to mix and it only had a lead vocal but no harmonies: I would often have to duplicate the lead vocal and create fake harmonies out of that, you know. And I still do that, too, if we want the chorus to be bigger and sometimes the artist just doesn't know how to come up with a harmony. They can't do it, and then they ask me for some wizardry to make it happen. So what I did was, yeah, make fake harmonies with tuning and stuff like that.

Speaker 1:

But I recently tried generative AI where you can tell an AI to sing a certain melody based on MIDI notes that you put in, or notes you tell it, or a chord progression, whatever. You can say: hey, the song is in this key, I have these chords, I need backing vocals, I need a choir or whatever. And instead of downloading a sample that a hundred other people, or a million other people, have used as well, you can create that small part of harmonies or backing vocals, download it and import it into your session. And this is awesome. To me, this is really great. I mean, it's not an artist, not a human, doing that, but it's also not the focal point. It's nothing that is important for the emotion or whatever; it just supports the thing that has the emotion. It makes it even more powerful, and if that is the only way I can fix that, or make the vocal even better and the song even better, then I'm happy, and I'm all for that.

Speaker 1:

So I actually used this, and it's quite impressive. And one of our coaching students actually made a song where he created an entire hook with it. It was a hip-hop track: there were rap vocals, and then there was a hook in between, a glitchy, processed, tuned type of lead vocal, and he generated that with some AI tool. And I mean, you can think about that what you want, but if he hadn't told me, I wouldn't have noticed. It sounded like a hip-hop track with a hook on top of it, and I totally thought it was a human. So, without having an opinion on that, you have to decide if you want to do that or not. But it's possible. And for supportive stuff, like backing vocals, harmonies, stuff like that, why not? I mean, 100%.

Speaker 2:

Like, we're already doing it. We're already doing it without AI. It's just going to do it better and sound more human, again; it actually sounds more realistic than the non-AI version that we're making work, you know, where I'm just pitch-shifting somebody around like crazy and tuning them really hard and stuff. This could be a brand new voice added to the mix, potentially, which would sound even better. Anything that helps people make the music they want to make, I'm fully down for. And again, there are going to be people that just use it to make all the decisions and aren't actually being artistic at all; it's going to be AI-generated garbage. That's going to happen, but I don't think those people are going to stand out. I don't think people are going to connect with that. And there's an argument, of course.

Speaker 1:

People say: well, but what about the, you know, studio musicians? There's a whole industry of people singing backing vocals on tracks; nobody will hire them anymore. I think the people who now use those AI tools to do it wouldn't have hired these people anyways. The majority of people using them are the ones on a tight budget who want to do it themselves. They wouldn't hire a professional to record backing vocals or harmonies. And the professional productions with the budget? They will still hire people to do it. And maybe, if those jobs disappear, the people providing the backing vocals or harmonies have to adapt too, and use tools to make it faster, quicker, whatever. Things are always changing. That's just the reality of it.

Speaker 1:

And again, I'm not saying it's right or wrong; it's just the way it is, and I can fight it and I can be against it, but I can't change it. So I can also just embrace it and work with it. And whenever an artist comes to me and they have a budget for real people, I'm all for that. I love that. The more people are involved in the project, the better it is. Let's do it. But if there's no way we can make it work, and the options are either not making it at all or using the AI tool, you know I'm gonna use the AI tool.

Speaker 2:

Totally. That's the reality. It will shake up some people's jobs; it's just an unfortunate reality. But I don't think it's gonna erase the jobs that I've thought of so far. The session guys are still gonna be the session guys. Yes, totally. And then, final things to mention for the stuff we know AI can already do, and is now able to do really, really well: key detection, tempo detection (put in a song and it tells you what key it's in; put in a song and it generates a click track for you), lyric transcription. You can auto-detect the parts of a song, so now you've got timestamps: this is the chorus, this is the verse. It can do all that stuff. Okay, I think that more or less covers what we know AI can do in music so far.
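(That detection side of things is established enough that you can try it today with open-source tools. A rough sketch with the librosa library follows; the commercial tools likely use fancier models, and the key estimate here is a deliberately crude chroma correlation.)

```python
# What "put in a song, get tempo/click track/key" looks like with the
# open-source librosa library. The file path is a placeholder.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("song.wav")

# Tempo and beat positions, then render a click track aligned to the song.
tempo, beats = librosa.beat.beat_track(y=y, sr=sr)
click = librosa.clicks(frames=beats, sr=sr, length=len(y))
sf.write("click.wav", click, sr)

# Crude key estimate: correlate averaged chroma against the 12 rotations
# of the Krumhansl-Schmuckler major-key profile.
chroma = librosa.feature.chroma_cqt(y=y, sr=sr).mean(axis=1)
major = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
scores = [np.corrcoef(np.roll(major, k), chroma)[0, 1] for k in range(12)]
notes = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
print(f"~{float(tempo):.0f} BPM, best major-key guess: {notes[int(np.argmax(scores))]}")
```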

Speaker 1:

One more thing! There is a thing that used to be very bad but is now also getting pretty good, and that is AI mastering. Again, a really good mastering engineer is the best thing you can do in terms of mastering. But there are things like Maastr. I actually have an affiliate link: if you go to theselfrecordingband.com/master, it will take you there. I think you even get a 10% discount, if I'm correct there, and we get compensated for that, because we partnered with them. And I only did that because I really believe in the tool.

Speaker 1:

I tried it against a couple of other software tools out there, and Maastr always killed it. By the way, you spell it M-A-A-S-T-R: Maastr. And it's really, really good. Do I think it beats a world-class mastering engineer? No. But again, if you have a lot of songs that you want to finish, or a lot of demos that you just want to get loud and compare to other stuff, it's a quick and very affordable way to do that. It can teach you a lot about mastering, because you can use those results as a reference; same with iZotope Ozone and all these other things. You can use the AI to train yourself, because the goal is then to beat the AI, and you have a reference, sort of, which is a very, very fun thing to do.

Speaker 1:

And then one more thing: even if it nails it and it sounds awesome, there's more to mastering that AI just can't do, which is the sequencing of the record, the spacing. There are a couple of things that a mastering engineer does that can only be done through communicating with the artist. There's a certain feel to the flow of a record that someone has to shape. There's quality control; someone still has to listen to it and be like, yep, that's good, or no, that's not good. So again, it can be something that speeds up the workflow but doesn't replace the engineer, right?

Speaker 2:

But I think there's one more thing we need to add here, because it got to a point where it really works well. As much as I love Maastr, there is one use of AI mastering that I think is actually really cool, that I saw somebody do. This band had a shared account on one of these (it wasn't Maastr, it was a different one, I can't remember which), and they were all making demos and uploading them there, and it was automatically just giving them a rough master, so that when they listened, all the tracks were kind of equally loud. So they could go through all their collective song ideas at a uniform volume, because none of them were good enough to know how to do even a rough master; they were just barely managing to get recordings done. So it kind of created this shared platform of: hey, this is listenable, let's go. Which is good.
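(The "everyone's demos at a uniform volume" part of that is essentially loudness normalization, which you can sketch with the open-source pyloudnorm package. The obvious caveat: real mastering, AI or human, does far more than level matching. File names and the target level are placeholders.)

```python
# Rough "equally loud demos" sketch: measure each file's integrated
# loudness with the open-source pyloudnorm (ITU-R BS.1770 metering)
# and normalize everything to one target level.
import soundfile as sf
import pyloudnorm as pyln

TARGET_LUFS = -14.0  # a common streaming-style reference level

for name in ["demo1.wav", "demo2.wav", "demo3.wav"]:  # placeholder files
    audio, rate = sf.read(name)
    meter = pyln.Meter(rate)
    loudness = meter.integrated_loudness(audio)
    matched = pyln.normalize.loudness(audio, loudness, TARGET_LUFS)
    sf.write(name.replace(".wav", "_rough.wav"), matched, rate)
```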

Speaker 1:

Yeah, again, and such a good teaching tool.

Speaker 1:

I use it in our coaching program all the time.

Speaker 1:

So we have a mastering action plan there, and we teach people to do kind of a DIY quick-mastering thing, because mastering is really advanced, not the first thing you should learn as a self-recording artist, but we help people get their stuff release-ready, right?

Speaker 1:

So that's part of it, and what we always do is this: when our students submit their first masters, I will always take their mix, send it through an AI mastering tool, and send it back to them, and we'll see if they were able to beat the AI master; and if they weren't able to, then the goal is to beat it. Then we have this reference, and instead of me having to master all these songs all the time for all the students (which would be impossible; it's a coaching program, and they're not hiring me as a mastering engineer), I can quickly run all those things through the AI, see if it reveals any problems with the mix, see what it sounds like once it gets loud, and help them get a reference for their own masters. It's such a great teaching tool as well. Totally, 100%.

Speaker 2:

And yeah, back to Moises, with the stem separation, where you can put in a mix and it gives you back, not all the multi-tracks, but multi-track stems. Then you can listen to just the drums that, you know, John Bonham laid down, and you're gonna learn way more about how those drums actually sound in that context. Or the guitars. It gives you a way to look deeper into a mix and learn more about those individual parts, and you'll discover all sorts of stuff. It's pretty cool.
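(Moises is a hosted service, but the same stem-separation idea is available open source, for instance via Meta's Demucs, which splits a mixed WAV into drums, bass, vocals, and "other" from the command line after a pip install. This just shells out to its CLI.)

```python
# Stem separation along the lines of what Moises does, via the
# open-source Demucs model (pip install demucs). Writes stems to
# ./separated/htdemucs/song/{drums,bass,other,vocals}.wav.
import subprocess

subprocess.run(["demucs", "-n", "htdemucs", "song.wav"], check=True)
```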

Speaker 2:

Okay, now the conceptual stuff. These are the things that, to the best of our knowledge, don't even exist yet. But that's the most exciting part, because, like I said, music is gonna be notoriously slow to adopt this. Too slow. And honestly, let me say this a little differently: the first DAW that can really showcase that they're investing heavily into implementing AI into their DAW framework, I'm switching immediately. If that's not Pro Tools, I'll be moving to whoever it is, as long as they're still an audio-first DAW, not an electronic-music-first DAW, because that's my world.

Speaker 1:

But the DAW implementation shouldn't be gimmicky. It should still be a DAW with all the features that we're used to, plus the AI stuff, because the danger is that they make something gimmicky that's nice but not really useful, because you don't have all the other stuff anymore, right?

Speaker 2:

100%. Yeah, I'm not looking for the first DAW to implement an AI tool. I'm looking for the first DAW that's able to communicate to me that that is their future, that they're building that in a serious way. That's what I'm looking for, and that'll become clear. And there's one more thing about AI that a lot of people don't know, that we need to explain first, and that is that AI can now be communicated with verbally, not just through text, so you can talk to it like a human being and it will give you back the information that you're asking for, essentially.

Speaker 2:

ChatGPT is the one that I've used that has implemented this: on my phone, I can click a button and it starts a voice chat with it. And I can't really explain how good this is for me, because I'm the worst chef in the world. I'm a really bad cook, and I don't know how to explain it, but the way recipes are communicated online, with food blogs, is the opposite of my learning style. I can't make a recipe from a webpage. I'm scrolling all over the place; it's not formatted in a way that works for me. It's literally one of the most stressful things for me, trying to cook from a recipe online.

Speaker 2:

This, though: having my phone sitting on the countertop where I can ask it questions and communicate with it, just like it's my wife helping me cook something beside me (because Beth is a much better cook than I am), it's a total game changer. It's literally like I have this little chef beside me just talking me through it, and I communicate with it in a totally human way, which is so insane. It's so, so insane. And I am making some damn good food, guys, some damn good food. I'm becoming a way better cook.

Speaker 2:

It's transforming me in the kitchen, and it's all just this little chatbot. But it's interfacing verbally; that is the difference, right? If I had to stop and type in a question and wait for it to populate and then read that, it would be the same as the food blog. But the fact that I can talk to it and it talks back to me, that's my learning style, I guess; that's way more effective for me. And the reason I'm bringing this up (because I can see how you're all wondering what me cooking has to do with music) is that I think verbal interfacing with our DAW is the most exciting thing in the future of music recording, and I think that is what's gonna change. That's what's gonna get pros on board, I think, and then that's gonna trickle down. Because if I am recording guitar myself, and I don't have an engineer, and I have to run Pro Tools and play my guitar and stay in tune all at the same time, and then comp and stuff, it's really hard. You end up in a spaghetti mess, your laptop slides off your lap, it's super tricky, and then you end up with a sore back because you're leaning in all these different directions.

Speaker 2:

But if I could just say, hey, do you mind going back to the start of the song and punching me in there? I just wanna try that again. All right, that was pretty good. Let's try punching in on the verse now, like I'm just talking to my DAW as if there's an assistant engineer in the chair.

Speaker 2:

That is gonna be the biggest game changer for anyone, because not only do I not need to know how to do that, necessarily, if I can communicate with it that way; a lot of people don't know how to punch in. There's learning how to punch in, how to set up pre-roll, how to keep the part that you wanted but punch in only on the part that you didn't want, how to find the old takes. There are all of these skills that are basic but really hard until you know them. Yeah, 100%: that entire barrier gets removed if your DAW is as intelligent as you need it to be to communicate with it verbally. It's gonna be beyond massively transformative. Anybody's gonna be able to do it: if they can play their instrument and plug in a mic and not screw up the gain (and it could also fix the gain thing), they're gonna be off to the races. And I can't wait for that. That'll be amazing.
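(No DAW we know of ships this today, but the plumbing is imaginable: speech-to-text turns the request into text, and something maps that text onto transport commands. Here's a toy sketch of just the mapping step; the command set and phrasing rules are entirely made up, and a real version would use a language model rather than keyword matching.)

```python
# Toy sketch of the "talk to your DAW" idea: turn a transcribed request
# into a transport command. The DAW command vocabulary here is invented
# purely for illustration.
import re

def parse_command(text):
    text = text.lower()
    if "punch" in text:
        m = re.search(r"(verse|chorus|bridge|start)", text)
        return ("punch_in", m.group(1) if m else "start")
    if "back to the start" in text or "top" in text:
        return ("locate", "start")
    if "record" in text:
        return ("record", None)
    return ("unknown", text)

print(parse_command("do you mind going back to the start of the song?"))
print(parse_command("let's try punching in on the verse now"))
```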

Speaker 1:

Yes, 100%. Absolutely. This is what I'm looking forward to as well. I think we can sum it up by saying that we're definitely not afraid of it. There are a lot of really cool tools helping us already that you can absolutely use without taking away from the art or the emotion or anything like that. We gave you a couple of use cases, a couple of do's and don'ts (and it still requires skills), and we're looking forward to the future of this very much. So I think what we should do is put a couple of these tools in the show notes, to help you find them and play around with them yourself. And if you have any that we haven't mentioned, please let us know in the comments, please reach out, please post in our community. Yeah, absolutely.

Speaker 2:

If you want more conceptual ideas of where I think AI could go, I've got a video on my YouTube channel (which is my name, Malcolm Owen-Flood; that's the channel) where I did a pretty deep dive on all of these different ideas that I think could come up. Too deep to go into with the time we've got available right now, unfortunately, but I think this gives you a really good overview of where AI is at and how it could be used.

Speaker 1:

Yeah, and maybe we can do a part two at some point in the future as well. Yeah, I would love that, yeah.

Speaker 2:

And it's changing so rapidly, we'll have entirely different things to say by then.

Speaker 1:

Yeah, exactly. All right, thank you so much for listening. Talk to you next week.

Speaker 2:

Thank you all, bye.