Timestamps:
Summary:
In this episode, Amith and Mallory discuss the latest AI news and developments. They analyze Microsoft's upcoming AI model MAI-1 and what it means for the tech giant's competition with OpenAI. They also dive into a Google report on how nonprofits are utilizing generative AI, highlighting opportunities and barriers to adoption. Additionally, they explore the innovative use of AI to re-create country singer Randy Travis' voice after a stroke, touching on the creative possibilities and ethical implications of such technology.
Let us know what you think about the podcast! Drop your questions or comments in the Sidecar community.
This episode is brought to you by Sidecar's AI Learning Hub. The AI Learning Hub blends self-paced learning with live expert interaction. It's designed for the busy association or nonprofit professional.
Follow Sidecar on LinkedIn
Other Resources from Sidecar:
Tools mentioned:
Other Resources Mentioned:
More about Your Hosts:
Amith Nagarajan is the Chairman of Blue Cypress (BlueCypress.io), a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.
Follow Amith on LinkedIn.
Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Follow Mallory on LinkedIn.
Disclaimer: This transcript was generated by artificial intelligence using Descript. It may contain errors or inaccuracies.
Amith Nagarajan: Greetings and welcome back to the Sidecar Sync. We are excited to be back with you again for another fun and action packed episode of the Sidecar Sync. Uh, my name is Amith Nagarajan and I'm your host.
Mallory Mejias: And my name is Mallory Mejias and I run Sidecar.
Amith Nagarajan: And we are excited to get going. We are going to run through some really exciting topics at the intersection of artificial intelligence and associations, as usual.
Before we do that, let's take a moment to hear from our sponsor.
Mallory Mejias: Amith, how are you doing this week?
Amith Nagarajan: I'm doing really well. You know, it's another busy week, a lot going on in the world, and I was [00:01:00] out of the office for a good bit of last week, so I'm still looking for that AI that will eliminate my inbox entirely upon my return, but I haven't found that tool yet, but doing pretty well.
How about yourself?
Mallory Mejias: Yep, it's indeed been a busy one. Last week I was actually at the Blue Cypress family of companies Innovation Hub in Chicago. We had a fantastic event. Um, I got to lead a marketing AI panel, which was really exciting for me. And we are doing it all over again next week in DC, and you'll be there, right?
Amith Nagarajan: I will. Yeah, I was bummed to not be able to join in Chicago last week, but I'm looking forward to, uh, being there. It's, uh, a week from today. Today's, uh, Tuesday, the 7th of May that we're recording this episode, and on, uh, Tuesday, May 14th, we will be in DC, and looking forward to welcoming a, uh, really nice sized group of association folks, uh, into the, the DC event.
It's going to be a lot of fun.
Mallory Mejias: And I believe we still have a few spots left at that event. If you are in DC and you're [00:02:00] interested, we'll drop the link in the show notes, but you can find more information at bluecypress.io. And we have another exciting announcement: we have officially launched fan mail with the Sidecar Sync podcast.
If you are listening on mobile right now, you can go to the show notes of this episode and you will see a link that says something like send a text message. If you click that link, you can actually text the show directly from your phone. Let us know if you have any thoughts, questions, concerns, challenges.
We are really looking forward to getting some fan mail. So don't hesitate to reach out.
Amith Nagarajan: And Mallory and I will actually read any messages you send. We might run them through AI as well, but we will definitely read whatever you send us.
Mallory Mejias: I will read them all. Amith might run them through AI. We'll see how many we get.
That, that's a fair, that's a fair bet. Today, we have a few topics lined up. First, we're talking about Microsoft's new AI model called MAI-1. Then, we will be talking about Google's nonprofit generative AI [00:03:00] report. And finally, we're talking about Randy Travis, a country singer. I won't tell you exactly what just yet, but stay tuned if you want to know how that relates to AI.
First topic: Microsoft is preparing to launch a new in-house AI model named MAI-1, marking a significant step in its efforts to compete with leading AI developers like Google, Anthropic, and even OpenAI. This move comes after Microsoft's substantial investment in OpenAI, exceeding $10 billion, which granted Microsoft the rights to utilize OpenAI's models.
The development of MAI-1 is under the supervision of Mustafa Suleyman, a former Google AI leader who most recently served as CEO of the AI startup Inflection. As a reminder, Microsoft acquired a majority of Inflection's staff and paid $650 million for the rights to its intellectual property back in March.
It's important to note that MAI-1 is a Microsoft project and not a continuation of Inflection's work, although it may leverage training data and technology [00:04:00] from the startup. The new model is set to be a leap forward in terms of size and complexity compared to Microsoft's previous smaller open source models.
It will require significantly more computing power and training data, and it will thus be more costly. The model is expected to have approximately 500 billion parameters. For context, OpenAI's GPT-4 reportedly has over a trillion parameters, while smaller models from companies like Meta and Mistral have around 70 billion parameters.
Microsoft's development of MAI-1 underscores the company's ambition to remain at the forefront of AI technology, competing with the biggest names in the industry. So, Amith, I'm sure our listeners, like me, immediately have the same question, which is, why would Microsoft want to compete with OpenAI when they've already made a $10 billion investment?
Amith Nagarajan: Yeah, you know, so it's a great question.
There's so much to unpack on this topic. I think this would be the biggest chunk of the pod today because there's just so much to talk about. Um, that particular [00:05:00] question is going to be top of mind for a lot of people. So Microsoft is very well known to be highly aligned with OpenAI. Uh, they've invested a billion dollars in OpenAI and subsequently 10 billion, so 11 billion dollars in total.
Over the course of a couple of years, uh, and it's an important asset to them. OpenAI is, uh, still the leader in terms of frontier models. GPT-4 is still top of the charts in terms of its capabilities, although not by a large margin. Uh, but this is too big of a prize for Microsoft to rely on an outside company it doesn't have full control over. They want to throw a lot more money and a lot more energy at this and have a lot more degrees of freedom than OpenAI will allow them. OpenAI is independent of Microsoft, even though Microsoft can heavily influence OpenAI. Also, Microsoft wants to have more direct control, independent of the percentage of ownership.
That's not what it's about as much as it is Microsoft doing whatever they please. Um, the models themselves are a function of data and money. So if you have [00:06:00] processing power, which you need a lot of money for, and if you have the right data, and obviously the right engineering talent, you can put together a pretty powerful model with the current architectures that are out there.
Uh, and Mustafa is someone, of course, who's been at the center of AI for a very long time, uh, coming from the Google and DeepMind background, having formed Inflection, which was behind the Pi personal assistant, before Microsoft acquired it. They didn't buy the company for the tech. They bought it for Mustafa and his team of researchers to do exactly this.
So this was no surprise. Uh, it was a little bit surprising that Microsoft just swallowed Inflection when they did. Uh, but not really, honestly, that surprising, because they were kind of like the one company out there that it made sense to do that with. Um, so digging a little bit more into why they would do this.
Um, there's a lot of room for improvement in this world, and they don't want to have themselves hitched to just one potential, uh, model or model family, OpenAI's. And, you know, OpenAI is likely to be a [00:07:00] major player in this space for a long time, and Microsoft will probably keep putting a lot more money into OpenAI, but this is too big of a deal for Microsoft to allow their destiny to be controlled by one fairly small company.
Remember, Microsoft's a $3 trillion market cap company. A $10 billion investment is frankly a rounding error to them. It represents, you know, less than an eighth of their annual profit. Um, it is not that big of a number to Microsoft. What is a lot bigger number for Microsoft is the growth potential they can have if they truly are the dominant AI player globally.
They're very well positioned, arguably best positioned, to execute on an AI strategy. So, to me, those are the major reasons why Microsoft would do this. Um, I don't think this means there are any concerns about OpenAI. I think Microsoft's going to keep backing them, but, you know, expect them to be a major player directly here.
Um, and you know, even though these models will likely directly compete, they may have differences where some are better for certain things and others are better for other things. Hmm.
Mallory Mejias: I think this is the [00:08:00] prime business example of not putting all your eggs in one basket, it sounds like.
And I don't know if you saw this, Amith, but I saw this on LinkedIn earlier today, actually, a quote from Sam Altman saying something like, what you see in ChatGPT now is embarrassing. Did you see that quote?
Amith Nagarajan: Yeah, you know, Sam's been saying stuff like that for a while, though. He tends to, uh, kind of take that tone in all of his interviews, saying that actually ChatGPT and GPT-4 is pretty lousy, pretty embarrassing, mildly embarrassing, or slightly embarrassing, I think, is his most recent quote.
So, uh, that's reflective of both kind of foreshadowing what they're about to unveil, which no one knows exactly what that is or what it's going to be called, but, you know, we all expect something quite remarkable to be coming from them in the near future. Uh, or they're not going to be the leader for very long, right?
So that's part of it. The other part of it is, um, kind of the attitude of the leader in the space, wanting to say, yeah, what you see is great and we're still better than everyone, but it's still crap, basically. Whereas, you know, everyone else is trying to catch them and still hasn't caught up to GPT-4. So there's some attitude [00:09:00] packed into that.
Uh, there's some foreshadowing of what's to come. Um, and he's also right, because GPT-4 is not that great. It's amazing in so many ways, but it's also a toy in others.
Mallory Mejias: Absolutely. You know, I have fun critiquing GPT-4 all the time on this podcast. If Microsoft's new model, MAI-1, is the next best thing and awesome, do you think that Microsoft Copilot will start using its own models?
And if that's the case, what do you see happening to ChatGPT or GPT-4?
Amith Nagarajan: I think, you know, it's directly competitive. Um, when MAI-1 becomes available, assuming that it's even at parity with GPT-4, that will 100 percent be what Copilot will substitute in. It's simple economics. There's no, like, payment that has to go to a third party.
They own the model. Uh, they also would have a different environment in terms of licensing and data retention. And so some of the things that Microsoft has to kind of tap dance around with OpenAI, they have more degrees of freedom around, because OpenAI, again, has this particular charter in terms [00:10:00] of the way they're structured.
Um, and they impose that upon a licensee like Microsoft. Uh, so, uh, what that basically means is Microsoft has more control. You can consider that good if you're a Microsoft stockholder, but you can also consider that potentially concerning. Uh, which I think is true for everyone, right? In terms of the kind of data these companies are starting to gather at even more rapid paces.
So with Copilot, I would be shocked if, 12 months from now, it's still powered by GPT-4. I don't think it will matter to you, and I think it will get better, right? They're not going to take it backwards. In fact, if you think about Copilot, you know, I've used it a fair bit, but honestly, I still go to ChatGPT, and more and more the Claude 3 family of models, because they're better than Copilot.
Part of it is kind of like, um, it has GPT-4 under the hood, but it's kind of watered down a little bit, because Microsoft has to serve so many hundreds of millions of users in parallel. Um, and with ChatGPT, it's the same thing in a sense, but you know, that is the product. So they don't [00:11:00] water it down quite to the same extent.
I'm sure that they're using either quantized models or some other optimization technique to make it possible to scale to the level that they've scaled it to. Um, and so GPT-4 within Copilot is nowhere near as good as GPT-4 by itself. And I think they're looking to fix that.
Mallory Mejias: I'm just thinking in the greater landscape, though, having Meta AI, which we talked about recently, powered by Llama, having, you know, Google Workspace powered by Gemini, and Microsoft maybe eventually being powered by its own MAI-1 model, where does that leave these standalone models like GPT-4 or Claude?
I mean, I still use them both actively, so I guess for the time being nothing will happen, but what do you see as the future for those standalone models?
Amith Nagarajan: I think there will be uses for both standalone tools like ChatGPT and Claude and others. There will certainly be very vibrant competition for that type of market, but you'll also see stronger and stronger embedded use cases, like with Copilot.
You [00:12:00] know, those types of tools have a significant advantage for certain types of tasks. If you're working on a document, having the context of that document, but also having the context of lots of other documents you've written, pre-loaded into that model, is super valuable. So, you know, the theory behind Copilot is it has access to the Microsoft Graph, which is essentially like all of the knowledge you store in SharePoint and OneDrive, for you and for your organization.
And therefore, in theory, it can contextualize what it does for you. In reality, you have to specify the documents that you actually want it to consider. And it's very limited, the extent to which it can do that right now. So, it's really not all that different from uploading a document in ChatGPT.
There is a major difference in information security, because you're already in the Microsoft environment. You're under Microsoft's licensing and terms of service, so by not going outside, you're not transferring data, so that's, that's helpful. Um, but it's not really much better right now, so I think over time, what has to happen is the inbuilt tools within particular software, within Excel, within [00:13:00] PowerPoint, within Word, they're going to have a significant advantage living in that environment, because they have access to data that, that other people don't.
The flip side of it is, I think tools that are standalone probably will have certain advantages that have to do with user experience. They're probably going to keep innovating in ways that don't make sense when you're an embedded tool. A bit to be determined, but I think competition is a beautiful thing.
It's going to shake out a lot of great ideas. I think, you know, there will be standalone tools, that's my suspicion, at least for the next couple of years. But the tools that are in these places are going to become better and better.
Mallory Mejias: At the Innovation Hub event last week in Chicago, I actually asked the panelists and the audience what their favorite models were. And by and large, most people said ChatGPT is the tool that they go to most often. And then it was Claude, probably in second place, and no one else had really dabbled with any other models. But I'm sure the listeners can relate when I say that sometimes we have model fatigue.
What [00:14:00] do you think are the key pieces of information as individuals, as people who work at associations or nonprofits? When you hear about a new model coming out, what do you say, Amith, is the key info that we should know? Is it parameters or tokens? Is it whether it's open source versus closed source or should we just know what it does for us?
What do you think?
Amith Nagarajan: I think it depends on the circumstances and what you're trying to do. If you're an end user trying to, you know, achieve a ChatGPT type experience, then the size of the model probably matters a lot, uh, because the more sophisticated models tend to be larger. Um, that's starting to change a little bit because, you know, like Llama 3, the 70 billion parameter model, uh, which is, you know, 20 times smaller than GPT-4, is pretty close to GPT-4 in a lot of areas of language processing, conversational intelligence.
It's pretty amazing, actually, what they've done. And that continues to be the case. The smaller models are getting very powerful, and the larger models are getting more powerful still. However, the question is how much power do you need? So, you know, it's like horsepower in a car. After a certain amount of engine power, like, how much more do you need to go to the grocery store?
You know, if you're going to go race, which some people want to do, then maybe you need 500 horsepower. But if you're just going to get groceries or pick up the kids or go to work, 200 is plenty. And so model size and power are roughly correlated right now, but that's starting to change. Continuing the engine analogy actually a little bit further.
Um, you know, I don't know if you're a fan of classic cars or not, Mallory. Are you a fan at all?
Mallory Mejias: I can't say I'm a fan of plastic cars, but take it away.
Amith Nagarajan: And i kind of gauge myself by saying this but like plastic cars to me are like 50s 60s Maybe older but nowadays the cars I grew up with that were new at the time are classics to some people.
So You That pisses me off a little bit, but that's an aside. Um, but the point is is that back in the 60s and 70s in particular there was until the oil embargo There was a race to bigger and bigger and bigger [00:16:00] engines. So the way you got faster cars was bigger engines So you went from small block v8s to big block v8s and you went from Engines that had displacements of, you know, 300 cubic inches, which is roughly 5.0 liters all the way up to 400, 450, you know, these kinds of passive engines that were 7, 8 liters and above in passenger vehicles, right? And kind of the muscle car era. And that's really how you got more power is bigger and bigger displacement engine. That's kind of like the parameters in the model. You have these bigger, massive, and also very inefficient models right now.
Um, and what happens over time is the technology gets better. And in the 80s and the 90s and more recently, uh, you can get incredible amounts of power out of much, much smaller engines. My wife just got a car, uh, recently she replaced her old car and her new car has a 3 liter engine that produces 400 horsepower.
You know, which even back in the 90s, uh, would have taken a massive engine. So the point is that, in a similar sense, I should say, um, these models are becoming bigger, but they're [00:17:00] also getting condensed down, and the power is compacting; they're getting more efficient and smarter.
So I think you're going to see that same type of thing evolve in a more aggressive way. And like what you get out of, you know, the 8 billion parameter version of Llama 3 is way better than what you got out of the 70 billion parameter Llama 2. Um, so, you know, nine months, not even nine months, and almost a 10x reduction in size. And you have similar power between those two models. So, anyway, um, I digress somewhat, but the point would be that, yes, model size is relevant. But to the average end user, that's going to become about as important as how closely you pay attention to the size of the engine in your car.
It's going to be like, does it have the power? Does it get the mileage? What are the features of that model? Does it give you the outputs that you need for your job? And we're going to reach a point where most of the use cases people are using right now for AI are going to be achievable across [00:18:00] all the models.
So the differentiation will be, if you do more, like harder things, more reasoning, more planning, more complex types of work, then you're going to want the bigger and more powerful models. But there will also be scenarios where you want faster performance, and that's one of the areas where the small models are just brilliant.
Um, I recently, uh, we have this video I posted on LinkedIn. One of our companies produces this AI tool called Skip, and Skip is basically like an interactive agent that lives inside a database and can, like, create reports and answer questions about your data, all in a secure way. Some really cool product. And with Skip, um, speed is super important. And so the team had run Skip with Llama 3 on a tool called Groq, which we've talked about here, Groq with a Q, um, the inference engine based on LPUs rather than GPUs, and it was 10x faster, and Llama 3 to begin with is quite fast.
So it was like a nearly instant response time when you talk to it and ask questions about your data. It [00:19:00] was pretty cool. Uh, we'll share that LinkedIn post in the show notes, but, um, my point in sharing that is it's not all about power and size. Sometimes speed can be the advantage you really want.
Think about like real time voice translation applications where you want like instantaneous, no latency responses. You're going to need that for certain kinds of applications like real time translation or real time dubbing or whatever you want to do in that, in that category.
Mallory Mejias: Okay. So maybe keep in mind the size of the model, but also maybe in the future that might not correlate as much to power as what you're saying.
Amith Nagarajan: Yeah, I think you're going to see models in the, uh, you know, tens of billions of parameters with that kind of power. By the end of the year, I think you'll have a model in the 10 to 30 billion parameter range that's as powerful as, like, GPT-3.5, which is, I think, 300 billion. You'll have models that are GPT-4 caliber in that 70, 80 billion parameter size.
So, um, they're going to keep shrinking and keep getting more powerful. What's exciting at the same time, of course, is what [00:20:00] everyone wants, which is: what's the GPT-4.5 or GPT-5 class of models, right? I'm not talking about OpenAI specifically, but across the board, what do those categories of models have in terms of new emerging capabilities?
Do they have better reasoning, better planning? Uh, things like that. So those are the things we should look ahead for because they'll open up a new category of applications.
Mallory Mejias: That makes sense. I must say, I thought when you brought up the cars, you said plastic cars. And I was like, where is this conversation going?
But you said classic cars. I just had to, if you're listening closely, you probably heard me say plastic, I needed to clarify. Classic cars is what we're talking about. Um, moving on to topic two: Google's nonprofit generative AI report. Google recently surveyed more than 4,000 nonprofit organizations in its Google for Nonprofits program about how they're using generative AI.
Here's what they learned. The survey revealed that 75 percent of nonprofits believe generative AI has the potential to significantly transform their [00:21:00] marketing strategies. Despite the recognized potential, two thirds of the nonprofits surveyed indicated that a lack of familiarity with generative AI technology is a major barrier to adoption.
The report also points out that 40 percent of nonprofits do not have anyone in their organization educated in AI, which underscores the need for targeted training and resources to enable effective use of generative AI tools. Nonprofits that have started using generative AI tools have reported a 66 percent improvement in productivity.
This report emphasizes that generative AI can help nonprofits be more creative and effective in serving their communities. It also mentioned the launch of a $20 million Google.org Accelerator program to fund nonprofits that are developing generative AI solutions, providing them with technical training and mentorship.
Amith, I've seen productivity claims with AI range from probably 30 to 60 percent. I think 66 percent is on the higher end. Are you [00:22:00] surprised to hear about that productivity boost with nonprofits specifically?
Amith Nagarajan: Not really. I think that, um, you know, nonprofits tend to be very information driven.
And so when you're dealing with digital processes that are, you know, perhaps a little bit outdated in some cases, there's more headroom, or more opportunity. If you have an older process that's antiquated, AI can be kind of a faster jumpstart. So, you know, if you're taking 10 hours to do something and you bring it down to two, that's an 80 percent improvement.
Uh, maybe somebody else had already optimized a similar process from 10 hours to 4, and they went from 4 to 2, so they see a 50 percent improvement. So, part of it's the absolute hours saved or improvement in output. But it doesn't surprise me a whole bunch. I was just, in general, super excited to read this report, and, uh, I think there's still headroom above that.
Mallory Mejias: Given that AI is on an exponential curve, and we talk about that all the time, will we see a cap on productivity eventually, you think? Or are we just going to [00:23:00] double our work, triple our work, and so on and so forth?
Amith Nagarajan: It's a great broader question in terms of the history of economics and productivity, and output relative to inputs, right? So what we've seen demonstrated clearly through the global economy over hundreds and hundreds of years is that technology innovation drives an increase in output. And that's because of productivity, for sure, but it's also because, um, we are really good as a species at coming up with more ideas.
So if we automate something to a great extent, um, most of the time, you're not going to see us just sit around, right? We're going to figure out something next. Like, what's the next step for that? And the next step after that? That's the story of the evolution of our species, and beyond.
Right? So I wouldn't expect anything different to happen here. I think people are going to have a lot more free time, and they're going to figure out how they can do more value-added new things. Like, you know, AI is moving fast, so AI has been coupled with new ideas as well. But [00:24:00] um, I don't think there's any limit on productivity, uh, increases, because there will always be new things to go do.
So productivity for a given static process, sure, you can optimize that all the way down to zero hours. Uh, but in terms of what you do with the time beyond that, there will always be new domains for profit.
Mallory Mejias: Now, 40 percent of nonprofits did not have a single person in their organization educated in AI.
And two thirds of the organizations in the report say that lack of familiarity with generative AI is a major barrier to adoption. But on the other hand, three fourths of the nonprofits surveyed said that AI can transform their marketing strategies. What do you think explains the disparity? Why do we see this?
Amith Nagarajan: People are, uh, you know, busy and fearful, right? Those are the two big things that slow people down. The busy side, I get. Everyone's got a lot going on. The fearful side is just either not knowing where to start, or not having a good place to go for some solid [00:25:00] education to get started. Obviously, we're trying to help with that here.
But, uh, you know, that's a big part of it: people are overwhelmed and fearful. Or fearful, perhaps, of, like, making the wrong decision on where to start and going down the wrong path. And, you know, I always tell people there is no wrong path in terms of learning. As long as you start learning, it doesn't matter what you start learning.
Start learning whatever you're interested in. That would be a good path. But I think that the overwhelmed part and the busyness part is something people have more agency over than they give themselves credit for. You know, think about what you do in the day, or what you do over the course of a week, and say to yourself, well, how much time would it take to be effective at using AI?
And I always start with the very small. When I speak to audiences, I say, um, at the end of this talk, I want you to do one thing for me, which is: I want you to block off 15 minutes. Put 15 minutes in your calendar, once a week, a recurring invite to yourself, that is a non-negotiable piece of time where you're [00:26:00] going to learn something about AI, right?
Just do something new with AI that's going to expand your knowledge, and over the course of 50 weeks, you're going to have, you know, roughly 12-ish, 13-ish hours of time that you've dedicated. You're going to know a lot more than someone who spent zero hours. Uh, and of course, if you can do half an hour, an hour, that's even better.
A lot of people say, okay, I can do 15 minutes. Um, but in reality, if you think about the amount of wasted time that exists in the world, and the wasted time that exists in your organization, pointless meetings, projects that are frankly garbage, you know they are, but you just don't do anything about it and you keep investing time into them.
Part of what you have to think about is to stand up and say, what am I going to stop doing? Because that 15 minute example is cool, and it's better than nothing, but what if you could spend 10 hours a week on AI work, right? Or 20 hours a week? And most people actually can, because their organizations, you know, my favorite thing to pick on is large systems implementations, right?
We've talked about that here before. Um, I talk about like AMS implementations or LMS [00:27:00] implementations where people are like, Oh, I got to go implement my AMS and that's our big technology thing. All 2024, we're going to invest a ton of money in it, a lot of time, and blah, blah, blah, blah, blah, blah, blah. And the reason we're doing it is because the current system is horrible, I hate it, you know, it's really inefficient.
And I always ask them, okay, well, in that scenario, if you're unbelievably successful, if your AMS implementation is an absolute home run, how much will it improve your productivity? What percentage will you get out of it? And they're like, oh, I don't know, maybe 10 percent, maybe 20 percent. No one really thinks the AMS replacement project is going to result in a 100 percent improvement, or a 30 percent improvement even.
They think it's just a small incremental change. They might think they need to do it, but, like, I always ask, well, what if you just hit pause on that for six months? And use the time, forget about the money, just use the time, um, that you would have spent on a project like that to learn AI and do some AI experiments, small-scale experiments. Where would you be as an organization?
And I can almost guarantee you'd be further along as an [00:28:00] organization and as individuals. So my point, though, and of course those are, like, bold, you know, types of things to say and think, and that's easier said than done. But I do think there's opportunity to stop doing things that are not producing outsized returns for you and for your organization and invest that time in some of the AI learning.
So coming back to the Google report, I think if, you know, these nonprofits, and really I don't think it's just nonprofits, it's organizations in general, thought more critically about what they could stop doing, even small chunks, they could solve that problem.
Mallory Mejias: Something that's been sitting with me for a while, kind of heavily, is that prediction we talked about from Vinod Khosla about expertise being available to anyone and everyone with the help of AI at little to no cost in the very near future. And Amith, you mentioned, you called associations expertise brokers, and so we talked about what happens to an association if expertise is available to anyone and everyone. I talked a little bit about this at the Innovation Hub when speaking about digitalNow. AI is absolutely going to [00:29:00] disrupt business models, and I guess in my mind, going along with what you're saying, you know, will that AMS upgrade kind of save you from your business model being disrupted?
I don't know. I've never worked in an association, but I'm just putting that out there in terms of the hierarchy of thinking.
Amith Nagarajan: Most people would not argue that a new AMS would save their business model, you know, or help them create a new business model. In fact, if anything, it's going to further calcify the business model they have, because what does a large-scale system like that do? Well, it really, you know, it basically puts in place technology to codify a process that they've built. Uh, sometimes things like a pricing structure or the way they run certain business processes in events or in committee management or whatever.
And that's the last thing you should be doing right now, further calcifying your business processes with more technology and automation. It's probably not a great process and probably not going to be the thing you need to do as your business process to serve in the future, to your point, Mallory. So I [00:30:00] think it's the perfect time to hit pause, not just for the reasons I mentioned earlier, but to your point that you need to actually explore what your business model will be.
And most associations truthfully don't know the answer to that. I don't think any of us really do.
Mallory Mejias: All right. So in a way we're preaching to the choir, because if someone's listening to the Sidecar Sync podcast right now, they're probably not the one person in their organization that's not educated on AI. But what can listeners of the Sidecar Sync do to get buy-in from their colleagues, from their leaders, from their staff?
Amith Nagarajan: I think you can share resources a little more proactively. You know, I know that, uh, I've heard from listeners who have received the Sidecar Sync podcast from people who've heard it. I've heard that a ton about the content that we do, the webinars that we do. So I think there are people who do that kind of passively.
Um, but I do think that you can become a bigger sharer of content, where, you know, whatever you're reading, listening to, or watching, uh, wherever it is, whether it's our stuff or anyone else's, if you find something useful, get in the [00:31:00] habit of more actively sharing it. Maybe create a Slack channel for sharing resources in your organization if you don't have one, or a Teams channel, uh, or create a distribution group that you email to that people can sign up for, or just proactively email or text the people you think should be getting it.
Um, for one thing, that'll actually, over time, if you do that enough, put you in a position as the knowledge expert in your organization, which is awesome in a lot of ways. Um, but I think it's just, people have to take a little more initiative to share stuff if they're the ones receiving it but not that many people in the organization are hearing it. You know, you can take it upon yourself to be that change agent and share it. Like, every week, share one piece of content with one more person, right?
If you did that consistently for 50 weeks, you've shared 50 pieces of content with potentially 50 different people, or, you know, some of the people being the same. That can have an impact, and if more people did that, you really start to make an impact on that problem.
Mallory Mejias: We have a Blue Cypress, uh, Microsoft [00:32:00] Teams channel for AI things.
And you'd be surprised, it's not always the same people posting. Sometimes there are new people posting from new companies about new tools. It's a really nice collaborative space to share ideas. Now, topic three: Randy Travis, a renowned country music artist, made a comeback with a new song titled Where That Came From, thanks to AI.
Travis has been unable to sing or speak effectively since suffering a stroke in 2013, which left him with severe aphasia and limited his ability to communicate. The process of recreating Travis's voice involved a collaborative effort with his longtime producer Kyle Lehning and the use of a surrogate voice provided by country singer James Dupré.
Initially, Warner Music extracted vocal tracks from Travis's previous recordings and then used these tracks to train an AI model. This model was capable of generating a voice that closely resembled Travis's distinct vocal style. The project was definitely a technological advancement, but it had some emotional resonance as [00:33:00] well.
Mary Travis, Randy's wife, described the profound impact of hearing the song, noting that Randy experienced a wide range of emotions upon hearing his voice again, which was both surprising and deeply moving for him. This use of AI in music has sparked discussions about the ethical implications and the potential of AI to aid artists who face similar challenges.
It also raises questions about the authenticity and the artistic integrity of using AI to recreate human artistic expressions. In Travis's case, the project has been seen as a heartfelt tribute to his legacy and a way to overcome the limitations imposed by his health condition. I mean, to me this is the two-sided coin, right, of AI and everything, especially kind of AI and music.
It's good on one hand. And then, you know, we talked about this with the Suno episode. It's bad on the other hand, perhaps. Um, what are your initial thoughts on this one?
Amith Nagarajan: Well, I was really moved by it. So I'm not, like, a Randy Travis superfan or anything, but I've enjoyed his music over the years. Um, I actually didn't know that he [00:34:00] had a stroke. I hadn't ever thought about it, but I realized I hadn't heard a new song from him in a long time. So, um, it was really tragic, and at the same time, really exciting to know that, you know, music is something he could participate in creating again, even though he's not able to perform the song in a traditional way.
Um, so I think that's a really interesting creative outlet, that the expression of that creativity can still come from a person's talent, even though he is physically unable to perform his song. Um, so that's fantastic in terms of, uh, that modality being available. Obviously, a lot of resources went into this.
Uh, this is not like Suno or Udio. It's a professional-grade AI model trained on a single artist, so it's way, way, way better, um, than something you get off of a consumer product for nine bucks a month. Although those tools are quite amazing, actually, uh, but it's not at this level, obviously. Um, but like with all things with AI, [00:35:00] the technology gets faster and cheaper at a relentless and crazy fast pace. So that also means that this kind of targeted voice cloning of a particular artist could very easily be performed by any consumer very, very soon using open-source tools that have no gating on them. There's all sorts of havoc that that could create. So that flip side of the coin that you were describing, uh, it's also deeply concerning.
Uh, I think that, you know, music and visual arts, uh, and acting and so many other forms of artistic expression are uniquely human forms of expression. Uh, and I'd love to see ways for that to thrive while AI continues to advance. I don't personally have an answer, but I'm torn by it, honestly.
I think it's both incredibly impressive and, in this particular case, an incredibly great use case for it, but also, you know, it kind of deepens the concern at the same time in terms of how good these tools are. What do you think?
Mallory Mejias: Yeah, I think this is really beautiful. I mean, hearing about this, I think [00:36:00] this is just a fantastic example of the good AI can do in the creative space.
I mean, Randy Travis may have never sung again or had the opportunity to hear his voice with a new song, new lyrics. Surely he's been, you know, writing and wanting to produce music. So I think on that hand, it's fantastic. But then, you know, on the other hand, I think, how long is it until everyone has access to this technology? And then from there, what are you going to do with it? I wanted to ask you, Amith, in thinking this through. It seems obvious, but I don't know if I've ever really, like, thought this or given it a name in my head. But is this a pattern with all new technology? Like, it comes out and we see so many good opportunities, but then there's always bad actors. With the internet, I'm thinking, with social media, probably other technology I'm not thinking about.
It's just, is this a typical pattern that we're going to see play out?
Amith Nagarajan: I mean, it's a classic dual-use technology, and that means it's a general-purpose technology. It can be used for both incredibly good and incredibly bad things. You can use airplanes to [00:37:00] deliver, you know, organ transplants across the country.
You can also use them to deliver nuclear bombs, right? So, uh, technologies that are general in nature tend to be usable for many different purposes. So I think it is a predictable scenario. I mean, the more powerful the technology is, the more extreme those examples become.
Mallory Mejias: And with this specific technology, I know we talked about Suno, um, and how that could be used for business. But, um, obviously this is much more advanced, so do you see any kind of potential use cases, uh, for business with something like this?
Amith Nagarajan: Yeah, I mean, I think it's in the same general category as when we did the episode that talked about Suno. If you haven't caught that episode, we'll put a link in the show notes, but it was just a handful of episodes ago, and we, uh, you know, shared the whole song that we created for Sidecar. But I think, you know, music is a modality for communication and expression.
It's something that, I think, opens a creative outlet that, uh, is unnatural in business except for [00:38:00] consumer brands that are used to creating jingles and commercials and stuff like that. But the ability to rapidly and easily and inexpensively create music to accompany a variety of things, obviously in the marketing realm, there's so many ideas that can come from this route.
You think about the creative stuff that you want to put on a website or in social media, or, um, you know, personalized music you send to each prospect in a campaign, right? All these kinds of interesting ways of leveraging this technology. But I also think it's an interesting way to educate. You know, we as a species are wired to have incredible emotional recall of music and stories, because music in many ways is a form of storytelling.
Not always, but, you know, there's an element of storytelling in many forms of music. And so we are wired biologically to be, you know, storytelling creatures and story-receiving creatures, and many books have been written on how marketers should do better jobs at telling stories, how educators should use the storytelling [00:39:00] paradigm instead of just trying to lecture people.
Whether it's in a classroom or a professional setting. And so I think there's tons of opportunity here. Um, you know, my wife's a great example. She remembers, like, every single commercial jingle from, like, the 1980s to the 90s, when she was growing up. So she'll just, like, break out these jingles sometimes and, uh, sing all sorts of things, stuff that I remember from when I was a kid. So it's super fun. But, like, you know, I don't think she'd remember the words that were spoken in those commercials. With the music, you remember, right? So there's something to be thought about there.
Mallory Mejias: Yeah, that's, that's really interesting.
That's a random skill. She probably can't put that on her resume, but it is, it's a neat one. I was thinking, too, in terms of business, I was thinking about keynote speakers. I don't know, maybe, like, training up a model of a keynote speaker, maybe one that's no longer with us. And then maybe we'll have, you know, Steve Jobs one day at digitalNow.
That was at least one area that I was thinking about.
Amith Nagarajan: Yeah. And that becomes an opportunity. There's opportunities for, you know, those that have [00:40:00] departed, and also for their estates and the licensing and stuff. Obviously, that's assuming proper licensing and so forth of that intellectual property, effectively.
But there's also an opportunity to do that in the context of delivering content at a scale that you couldn't do, um, you know, with the actual physical distribution of your atoms, right? Like, if you want to physically go from one location to another to speak, um, there's a limitation there. And, you know, there are also other technologies around the corner in terms of virtual or mixed reality. Uh, there's the possibility of holographic projection at HD-type quality down the road, in the not-too-distant future, all sorts of other interesting things that could bring all sorts of experiences, that open up a new aperture for communications and really new modalities for, uh, expression and communication.
So I find it exciting. Again, I am an AI optimist through and through, but I'm also looking at this with an eye for how you can watch out for this stuff, [00:41:00] because the opportunity to get fooled is extremely high as well.
Mallory Mejias: I think you can be an optimist and be a realist at the same time, or at least balance those well. I think this is the first time ever, maybe, that we've talked about the intersection of, uh, country music and AI on the Sidecar Sync podcast. So I guess you never know what you're going to get in these episodes. And a reminder to you all: if you enjoyed the episode, click the link if you're listening on mobile to send us fan mail. Amith and I would love to read it.
Amith Nagarajan: And share this with at least one person this week.
Mallory Mejias: I love it. We'll see y'all next week.