
Timestamps:

00:00 Celebrating 25 Episodes of Innovation 
02:35 Meta's Llama 3 Announcement
07:33 Forecasting the Future Frontrunners in the AI Model Race 
15:13 The Potential of Personalized and Automated Content Creation 
20:25 The Debate: Open vs. Closed Source AI Model Advancements 
30:30 How Associations Can Help Their Members Adapt to AI Disruption
36:59 The Potential of Personalized Podcast Experiences 
39:48 Human-made Content vs. AI-generated Content 
42:41 AI's Potential Impact on the Middle Class 
46:10 The Acceleration of Economic Growth through Technological Disruptions 
49:17 Differing Perspectives on the Impact of Rapid AI Growth 

 

Summary:

Welcome to the 25th episode! In this episode, Amith and Mallory cover the upcoming release of smaller versions of Meta's Llama 3 language model, the AI startups featured in Y Combinator's Winter 2024 class, and the potential impact of AI on the middle class.

 

 

Let us know what you think about the podcast! Drop your questions or comments in the Sidecar community.

This episode is brought to you by Sidecar's AI Learning Hub. The AI Learning Hub blends self-paced learning with live expert interaction. It's designed for the busy association or nonprofit professional.

Follow Sidecar on LinkedIn

Other Resources from Sidecar: 

Tools mentioned: 

Other Resources Mentioned:


More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress (BlueCypress.io), a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.
Follow Amith on LinkedIn.

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Follow Mallory on LinkedIn.

Read the Transcript

Disclaimer: This transcript was generated by artificial intelligence using Descript. It may contain errors or inaccuracies.

Amith Nagarajan: [00:00:00] Alright, cool. You good? Yes. Alright.

Greetings and welcome back to the Sidecar Sync. Excited to have you here with us today. We have an exciting episode with a lot of interesting news at the intersection of AI and associations. And in fact, this is episode number 25 of the Sidecar Sync. Can you believe it, Mallory? We're already at 25.

Mallory Mejias: I really can't believe that.

I feel like this was just a seed of an idea a few months ago and now we're, you know, a quarter of the way to a hundred. So, we've got, we've got some big goals ahead of us.

Amith Nagarajan: All sorts of great conversations, interesting news, lots has changed since the first episode of the Sidecar Sync and, uh, had a lot of fun with it.

So, looking forward to this episode. Before we get going, let's take a moment to hear a quick word from our sponsor. [00:01:00]

Mallory Mejias: And in honor of the 25th episode, Amith, I guess we have a special edition storm episode today. Um, if you all have listened before, you know, Amith and I are located in New Orleans, and right before we hopped on this call to record the podcast, we got a severe thunderstorm alert with 80-mile-an-hour winds.

So stay tuned for this episode. Hopefully neither of us lose power, but you might hear some soothing thunder and rain in the background.

Amith Nagarajan: It'll be fun either way. We'll

Mallory Mejias: Of course. All right, so today's topics: we are talking about, first, Llama 3, a very exciting news announcement. We'll be talking about Y Combinator's Winter 2024 class and the AI startups that are within that class.

And then we'll have an interesting conversation around AI and the middle class to wrap up this episode. So first and foremost, Llama 3. Meta plans to release smaller versions of its upcoming AI language model, Llama 3, next [00:02:00] week. These compact models will make Llama 3's capabilities more accessible to developers and researchers without the need for expensive hardware.

While the full Llama 3 model is expected to be very large, potentially surpassing 140 billion parameters, the smaller variants will be optimized for running on consumer hardware like GPUs and cloud instances. This follows Meta's strategy with previous Llama releases of providing scaled-down versions alongside the flagship model.

By making the model more widely available, Meta hopes to accelerate research and development in areas like conversational AI, code generation, and multimodal applications. Meta's approach contrasts with the closed ecosystems of some AI companies. The company believes that open sourcing key infrastructure while retaining control over future model iterations will maintain its competitive advantages in the long run.

So Amith, my first question for you here, um, when talking about the size of models, we talk about [00:03:00] parameters. I don't know if we've ever really talked about that on this podcast, but what are parameters and how do these relate or how are they different from context windows? You're muted.

Amith Nagarajan: Now that's a great question. Parameters basically refer to the size of the model, whereas the context window is like its short-term memory. It's the ability for it to remember information that you prompt it with and remember longer conversations. So, when we talk about a model having 140 billion parameters or seven billion parameters or whatever the number is, GPT-4 is believed to be over a trillion parameters.

Um, what you're talking about essentially is a rough way of describing the size of the model, which has generally correlated with the power of the model. Um, in the neural network, when you're training using all the different training techniques we've only touched on a little bit in the past, what ends up happening in this training process is the algorithm doesn't change.

The algorithm for the neural network essentially [00:04:00] is the same, but the weights are being constantly recalculated through the training process. And that can also be adjusted through what's called a fine-tuning process. And ultimately, the weights are the model, and the number of weights is essentially what these parameters refer to.

And so it's effectively like, how big is the model? That's the way to think about it. So we talk about something like Mistral 7B, which is a seven-billion-parameter model; Mistral, on their website, calls that Mistral Tiny. Or the Mixtral model that we've talked about on this pod, which has eight times seven billion parameters because it's a mixture-of-experts model.

Um, you know, you have these models of various sizes which do different things. Um, what is really exciting about models to me is, certainly, bigger models in theory are interesting, but we found very clearly through research and also through practical applications that the quality of the training data is actually just as important, if not more important, than the rough size of the model.

Mistral really is [00:05:00] the company at the forefront of this, being super efficient with training models that actually have a lot smaller training sets and therefore have fewer parameters, um, but ultimately are as capable as some models considerably larger. So, the model size thing is, um, certainly worth paying attention to, um, but it's not necessarily the most effective way to determine the power of the model.
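To make the "parameters are the weights" idea concrete, here is a purely illustrative sketch in Python: a toy fully connected network whose parameter count is just the total of its weights and biases. The layer sizes are made up for illustration; the same bookkeeping, at vastly larger scale, is what produces figures like 7 billion or 140 billion for language models (which are transformers, not little networks like this one).

```python
# Toy example: a model's parameter count is the total number of trainable
# weights and biases. Layer sizes here are arbitrary, for illustration only.

def mlp_param_count(layer_sizes):
    """Count weights + biases for a simple fully connected network."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out  # weight matrix connecting the two layers
        total += n_out         # one bias per output unit
    return total

if __name__ == "__main__":
    # 512 inputs -> two hidden layers of 1024 -> 512 outputs
    print(mlp_param_count([512, 1024, 1024, 512]))  # ~2.1 million parameters
```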

Um, so it'll be very interesting to see, when they release Llama 3, even with the smaller versions that are supposed to be released next week, what their capabilities are. I'm sure they'll release benchmark scores on all the typical benchmarks. And then, of course, there's the larger model that is expected to be available later this year, maybe this summer sometime.

Perhaps it'll have GPT-4 or surpass-GPT-4 capabilities. That's certainly what I'm expecting; I'm speculating by saying that last comment. But Llama 3 is Meta's flagship AI effort, and they are not trying to be a second contender, you know, a second-[00:06:00] place contender. They want to win. They want to have capable AI as strong as OpenAI's frontier models.

And everybody's been trying to chase GPT-4. Until recently, uh, it was the only, you know, model in that category. But now we have Claude 3 Opus and also Gemini Ultra and Gemini 1.5 Pro that are in various ways as good, in some cases better. So it's going to be really interesting to see how this evolves, um, as we get more information next week with the official release.

Mallory Mejias: Given all the models you just mentioned, in your opinion, which do you see, which do you foresee being kind of the forerunner within the next few months or by the end of the year?

Amith Nagarajan: The short answer is it depends. It depends on what you're doing. So we know that, you know, GPT 4 continues to impress in many broad categories.

Um, there are models that are much smaller that are really, really good at specific things. Um, you know, there's a company called Stability AI that is best known for Stable Diffusion, uh, but they actually have models in a variety of [00:07:00] other contexts, and they have some language models too that are very small.

They have a completely different strategy, and, as an interesting side note in the open-source AI world, their CEO actually just stepped down, which is a story unto itself, to pursue a different approach to distributing AI. His viewpoint is that even with open source, we haven't gone far enough in distributing the possibilities of AI to the masses.

And so he's working on something that seems to be at the fusion of Web3, distributed computing, and open-source AI. Um, but I digress. The point is that with these models, you have to look at them like specialists in a way and say, what is it that I'm trying to solve for, and which model might fit best?

Um, and ultimately, in fact, that's actually what these mixture-of-experts models are doing, like Mixtral 8x7B, which is a wonderful technical name, probably not the best marketing name, but I do think Mixtral is kind of cool. Uh, what it's doing is it has eight different 7-billion-parameter [00:08:00] models, and it has this dynamic capability to pick two or more of those models for each request, um, and then to use those models based on what would be best in the sequence of token generation, essentially.

And so, um, what's powerful about that is Mixtral has performance, uh, in excess of GPT-3.5, even though it's considerably smaller. So I think there's a lot of interesting things to talk about there. Ultimately, there is no one model to rule them all, and we're increasingly going to see that.
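For the curious, here is a minimal sketch of the routing idea behind a mixture-of-experts layer, mirroring the "pick two of eight experts per token" description above. It is a toy with made-up sizes and random weights, not Mixtral's actual implementation.

```python
# Toy mixture-of-experts routing step: a gate scores all experts for a token,
# only the top two are run, and their outputs are blended. Sizes are made up.
import numpy as np

rng = np.random.default_rng(0)
N_EXPERTS, D = 8, 16  # eight experts, toy hidden size of 16

experts = [rng.normal(size=(D, D)) for _ in range(N_EXPERTS)]  # tiny stand-in "experts"
gate = rng.normal(size=(D, N_EXPERTS))                          # the learned router

def moe_layer(token, top_k=2):
    scores = token @ gate                                        # relevance of each expert to this token
    top = np.argsort(scores)[-top_k:]                            # keep only the best two of eight
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()    # softmax over the chosen experts
    return sum(w * (token @ experts[i]) for w, i in zip(weights, top))

print(moe_layer(rng.normal(size=D)).shape)  # (16,) -- only 2 of the 8 experts did any work
```

The efficiency win is exactly what the conversation describes: every token only pays for a couple of experts, even though the full model holds all eight.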

Now, we don't know what OpenAI is up to, because they're famously secretive about what's happening with GPT 5, whatever they want to call it. But Sam Altman has stated many times in recent interviews that they will release a very powerful model this year. It'll probably, you know, reset the high watermark for AI capability in language models, and it'll be interesting to see where it goes from there.

But it's gonna go faster and more aggressive from this point for sure.

Mallory Mejias: Yep. I like what you said there. I think the key is, it really does [00:09:00] depend. Thomas Altman, who I've mentioned on this podcast before, he's co-founder of Tassio, and we co-host the Intro to AI webinar every month through Sidecar. In our last version of that webinar in March, he actually said he thinks Claude is the best tool to use when writing marketing copy, which is interesting because I myself typically use ChatGPT. But I think that is kind of the future, figuring out what tools, or what applications, what models perform best in certain scenarios.

Also on that note, we do have another version of that intro to AI webinar next week, actually on April 16th. So we'll include that registration link in the show notes if you're interested. But

Amith Nagarajan: That would be a great webinar. And one thing I wanted to quickly comment on is, even if you've attended prior Intro to AI webinars, it's always worth checking out the new edition, because these webinars are constantly updated as you, you know, follow along in this journey. If you listen to Mallory and me speak about these topics, you know they're changing all the time. So, the Intro to AI webinar is a way to reinforce what you already know, even if you've already gone down that path, and it's certainly a [00:10:00] way to get the fresh new content.

And if you can't make it, but you're interested, register anyway, because then you'll get access to the recording.

Mallory Mejias: Absolutely. And Thomas shows Claude, Gemini, ChatGPT, we talk about Midjourney, we talk about Munch, which we've discussed on this podcast before. Lots of cool demos of tools and tons of association-specific use cases, so I would highly recommend that you check that out.

I mean, it seems like more and more within the podcast, maybe it's confirmation bias, but I do feel like in the last few episodes, we've spent a lot of time talking about smaller models, whereas we weren't talking so much about smaller models back in November and December. What do you see as the most exciting business use cases potentially for smaller models?

Amith Nagarajan: Well, if we talk about smaller models that are at least as capable as GPT-3.5, which is a very capable model; when ChatGPT burst onto the scene in fall of '22, that was powered by GPT-3.5. And in fact, I believe the free public version of ChatGPT is still powered by [00:11:00] GPT-3.5. So it's a very capable model, not as strong as GPT-4, but very good.

So as long as we're talking about models of that caliber and better, when you talk about small models, you're talking about efficient, super-low-cost, very fast models. The things you can do, um, really open up possibility, because when you're dealing with the GPT-4, Gemini Ultra, Opus type of model, that category, they're big, they're heavy, and they're also slow.

Um, there's a lot of work being done to speed things up in the world of AI inference, which is the runtime speed. Um, but right now the size of the model is directly correlated to the inference speed. So smaller models are faster, they're cheaper, they're more energy efficient. And they're also things that because of open source, you can use in lots of creative ways that you can't necessarily do with models that are closed source.

So I think that small models are a more exciting area to play with, because so many problems can be solved by GPT-3.5-class models that people haven't even [00:12:00] touched on. You know, we've talked about use cases for associations a bunch in this podcast. I spend time with a lot of executive leaders in this market talking about the same types of things, and anytime there's language involved, you know, a GPT-3.5-class model can help you.

As an example, we often see unstructured content, meaning like just text or images, coming into an organization. Someone might submit a response to a call for speakers or a call for papers. And in that process, it's either all or none. People think about automation as being binary: they can either automate it all or automate none of it.

And in reality, there's shades of gray there. You can automate portions of it. And the interesting thing in an unstructured content process, like when you get content from a person submitting, let's say, a talk for your annual conference, is that there's a lot that can be done with even a GPT-3.5-class model.

First of all, just to classify the talk. Let's say that you're in a particular branch of science like [00:13:00] physics, for example. Within your conferences, you probably have many different tracks, many different sub-areas, and the people that are going to evaluate each of these papers that are coming in the door have to be specialists in those fields. They can't just be random people. And so, in that context, you can use a very small model as a basic classifier to just route these applications to the right people, eliminating the need for a person to look over the proposal and actually route it. So that's a very simple example.

We're not automating the entire process of selecting or rejecting a, you know, submission for a call for speakers, but we're taking a little chunk of it off, and you can do that very inexpensively. In fact, it's pretty close to free already with a model of the caliber of GPT-3.5. So I'm really interested to see what the smaller Llama 3 models do.
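As a hedged sketch of that classification use case, the snippet below routes an incoming speaker abstract to a review track with one call to a small model. The track names, prompt wording, and model name are placeholders; this example happens to use the OpenAI Python SDK, but any GPT-3.5-class or small open-source model could fill the same role.

```python
# Sketch: route an incoming speaker proposal to a review track using a small,
# cheap model as a classifier. Track names and the model choice are examples.
from openai import OpenAI

TRACKS = ["Astrophysics", "Condensed Matter", "Quantum Computing", "Other"]

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def route_submission(abstract: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # any GPT-3.5-class model would do here
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Classify the conference abstract into exactly one track: "
                        + ", ".join(TRACKS) + ". Reply with the track name only."},
            {"role": "user", "content": abstract},
        ],
    )
    answer = response.choices[0].message.content.strip()
    return answer if answer in TRACKS else "Other"  # anything unexpected goes to human triage

print(route_submission("We present new spectroscopy of exoplanet atmospheres..."))
```

The same pattern, with different labels and a different prompt, covers routing member emails, tagging session proposals, or triaging support tickets.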

We've talked in this pod briefly in the past about a Microsoft model called Phi-2, which is spelled P-H-I. Certainly the Mistral models. Um, you can do so much with these things. We [00:14:00] often also say that the AI you have today is the worst AI you're ever going to get. And even if AI never advanced from where we are today, if you simply took the time to integrate into your business and into your profession the things you can now do with these models at low cost, very close to no cost, it would take you years to take full advantage of this stuff.

So to me, the small models just compound that. The other thing is that small models are getting smarter; they're not staying at GPT-3.5 levels. Very soon you will see an open-source tiny model, like a seven-billion-parameter model, that's as smart as the current GPT-4. It might take about another year or so.

But you're seeing the smaller models getting smarter and smarter. That's partly algorithmic improvement, but it's also heavily driven by the fact that the large models are actually training the small models, um, which makes the small models smarter. So it's kind of this crazy loop that's happening.

And I'm actually, honestly, more excited about what you can do with the small models because you can weave them in everywhere. Uh, and then for those organizations that are deeply concerned about privacy and [00:15:00] security, you can run them yourself. You can run inference in your own virtual private cloud or literally on your own hardware if you wanted to.

And you can't do that with the frontier models.
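For readers wondering what "run them yourself" looks like in practice, here is a minimal sketch using the Hugging Face transformers library. The model ID is only an example of a roughly 7-billion-parameter open model, and running it locally assumes a machine with enough memory or a GPU.

```python
# Sketch: run a small open-source model on your own hardware with Hugging Face
# transformers. The model ID is an example; swap in whatever your review approves.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # example ~7B open model, downloaded once
    device_map="auto",                            # uses a local GPU if one is available
)

prompt = "In one sentence, why might an association experiment with small AI models?"
result = generator(prompt, max_new_tokens=60, do_sample=False)
print(result[0]["generated_text"])
```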

Mallory Mejias: That makes sense. I'll speak as an average user, perhaps a power user, of ChatGPT, for example. I understand that you're saying the smaller models are faster, um, and are more specific in terms of what actions they can take, like classifying speaker submissions, for example.

However, from like a user perspective, I want to use the model that's most powerful, right? I want to use GPT-4, I want to use Claude 3 Opus. I'm still not fully understanding, from an individual point of view, when would I opt to try out a smaller model versus the most powerful one?

Amith Nagarajan: As an end user, you probably have no reason to care about that, because you're going to use an interface like Perplexity, or you're going to use ChatGPT or Claude's, you know, UI, and those are actually not single models. Behind the scenes, there's a lot going on in [00:16:00] terms of looking at the request and, you know, handling the request in different ways, because when a request comes in, you can look at it and say, what is this?

Like, what's Mallory asking for? Um, is she asking what the weather is today? Well, I don't need to employ the most state-of-the-art, biggest, most powerful model to handle her request. I can just give it to a GPT-3.5-class model. It can in turn, you know, talk to a weather service, get the information, and pass it back to you.

Similarly, a lot of classification and summarization tasks can be done very well with smaller models. If you want something that is more of, you know, kind of approaching reasoning, logic, things like that, that's when we get into really needing the heavier-duty models, because the GPT-3-class models are terrible at that.

GPT-4, frankly, is not the best at it either. And that's really what is most exciting about the frontier. Um, when we talk about frontier, we're saying the biggest, newest models being able to do things that the current models cannot do at all. So what you're gonna get in your consumer experience, and you very much see this in Copilot inside the Microsoft suite, [00:17:00] is they're bringing you down to, like, the model level you need, and then occasionally they'll burst up to the bigger ones.

In fact, Copilot, in my experience, for the last couple months, is not nearly as good as just using GPT-4 inside ChatGPT, because I think they have tamped it down quite a bit to make it scale. So, um, I think from an end-user perspective, a lot of this conversation is just going to go away, because if you think about GPT-4-class and certainly GPT-5-class models, not those specific models, you're going to get so much out of it that most users will get everything they need from it and won't necessarily be too concerned.

It's kind of like, you don't really know what processor you have in your laptop, and it just, it works really well. And if you get a new computer in three years, you're probably not going to notice that much of a difference. You're probably replacing your computer because, you know, you just kind of need a new computer.

The physical hardware has gotten to the point where it's broken down a little bit.
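Here is a hedged sketch of the behind-the-scenes triage described above, where a cheap call decides whether a request needs a frontier model at all. The model names and the SIMPLE/COMPLEX labels are placeholders, not a description of how any particular product actually routes requests.

```python
# Sketch: tiered model routing. A cheap model labels the request, and only
# "complex" requests are sent to the larger, slower, pricier model.
from openai import OpenAI

client = OpenAI()

def answer(user_request: str) -> str:
    triage = client.chat.completions.create(
        model="gpt-3.5-turbo",  # small, cheap dispatcher
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Label the request as SIMPLE (lookup, summary, classification) "
                        "or COMPLEX (multi-step reasoning). Reply with one word."},
            {"role": "user", "content": user_request},
        ],
    ).choices[0].message.content.strip().upper()

    model = "gpt-4-turbo" if triage == "COMPLEX" else "gpt-3.5-turbo"  # burst up only when needed
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": user_request}],
    )
    return reply.choices[0].message.content

print(answer("Summarize the plot of Hamlet in one sentence."))
```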

Mallory Mejias: Okay, that helps. So from a user perspective, we're just going to keep using the models that we use. On the back end, perhaps things will change. We'll have these mixture-of-experts architectures or small models doing certain tasks, [00:18:00] but the way we interact with it will stay roughly the same.

Amith Nagarajan: I think that's true for a period of time. I mean, there's always an opportunity for some disruptive user experience, and that's where one of the strategic advantages that you can build if you're building these tools is to think of a novel user experience, something that's even lower friction, more natural, more intuitive.

Chat is great. It's so different than what we've had in the past, and we've had chat for years. You know, there's been different kinds of chat apps. They've just all been universally terrible. Now, all of a sudden, they're good. So it's a great experience. But people are rapidly kind of building hybrid experiences where it's part chat and part structured.

There's things like chat being infused throughout apps. So I think there's a lot of innovation that's gonna happen in user experience. That's where I'm really excited for consumer apps. Don't get me wrong, improvements in underlying models are going to power new capabilities that are very exciting for the average consumer.

Um, I just think it's one of these things where, you know, most people, like, you definitely are a power user, but most people aren't [00:19:00] doing what you're doing with ChatGPT already, and they have a ramp-up in terms of just what they know to ask for and what they will want out of these systems.

Mallory Mejias: We've seen a wave, certainly, of open-source models from Google, for example, Elon Musk's xAI, like we discussed, I think that was last week, Mistral, like you mentioned, um, and then we've also seen some powerful closed-source models like Claude 3 Opus. Do you see, um, large language model advancements leaning one way or the other in terms of open versus closed?

Amith Nagarajan: I think it's gonna be fun to watch. That's really my only conviction here.

I believe in open source personally, because I think that the only way to fight bad things with AI is to develop really powerful good AI, and with closed source it's really hard to know if it's good or bad. I'm not accusing anyone at Anthropic, which makes Claude, or OpenAI, which makes ChatGPT, of, you know, malicious intentions whatsoever.

But you just don't know, because these are very small companies with an extreme amount of power. And open source, you know, the theory behind it is [00:20:00] it potentially democratizes some of this technology. And I think that's important. That being said, closed-source systems right now are still far more powerful because of the many billions of dollars it takes to stand them up.

So to the extent that model training continues to be that expensive, I think closed source is going to continue to have some advantages, but it's hard to say. You know, algorithmic improvements are happening very rapidly. There's a lot that's also going on in terms of training efficiency. You know, what used to take $10 million to train, a model of, like, GPT-3.5-class capability, you can now train for $5,000, the same model capability. So, you know, the same things are happening over and over again that we talk about here in terms of exponential growth. So the short answer is I really don't know, but I think it's good to have both. I think that it's a healthy competitive environment.

You can now train with 5000 the same model capability. So you know, the same things are happening over and over again that we talk about here in terms of exponential growth. So the short answer is I really don't know, but I think it's good to have both. I think that it's, it's a healthy competitive environment.

Mallory Mejias: I'm sure it's something we'll continue to discuss on this podcast well into the future. I'm really excited for topic two, because we will be chatting about some interesting AI use cases, some interesting AI startups that are in Y [00:21:00] Combinator's Winter 2024 class. If you don't know, Y Combinator is an American technology startup accelerator and venture capital firm that was launched in 2005.

Notable companies that have gone through Y Combinator include Airbnb, Dropbox, Instacart, and Coinbase, among others. Y Combinator's Winter 2024 class features a mega batch of 157 AI startups, a trend which highlights the growing adoption of AI across various industries. So I want to give you a little bit of an overview of how some of these startups are grouped.

So first, we have software development. 44 startups in the class are dedicated to innovating software development with AI, automating routine coding tasks and enhancing the development process. There's a company called Codient AI that focuses on automatically fixing bugs, and then a company called Momentic AI that specializes in AI-powered automated testing.

The next kind of grouping we have is customer service, with 12 startups aiming to optimize customer [00:22:00] interactions using AI-powered solutions like AI-driven receptionist services and AI-powered platforms for customer engagement. Retail AI is one of those companies; it builds infrastructure for voice artificial intelligence.

Another one of those companies is Kiosk, which is an AI-driven platform focused on WhatsApp marketing, which sounds pretty interesting to me. The next group we have is biotech and healthcare, with 10 startups in that group. They're focusing on areas where AI can reduce the time and cost of developing new treatments.

Radmate AI aims to serve as an AI copilot for radiologists. MetaFICO offers no-code data analysis for life sciences. This next category, I think, is probably most, well, I won't say most interesting for me, but Amith and I had an interesting talk about these companies, which is AI as a creative partner. So 14 startups are enabling creative individuals to leverage AI tools as collaborative partners in their creative process.

PocketPod [00:23:00] creates fully personalized news podcasts made just for you every day, potentially a big competitor to the Sidecar Sync. And then we've got Magic Hour and Infinity AI, which are both video creation platforms, which is really no surprise given how popular and how much conversation has existed around Sora, OpenAI's text-to-video tool.

One of the last groups we have is finance and investing. AI is being applied there, with 11 startups developing AI-powered solutions for various use cases, like Clarum AI, focusing on AI-accelerated due diligence, and Powder, which is an AI copilot for wealth managers. And finally, our last group: seven of the startups within this mega batch seek to revolutionize the physical world, with InspectMind providing faster inspection reports.

HeyPurple is a company that provides AI property managers. And then there's another company called DraftAid that converts 3D models to CAD drawings. [00:24:00] So I personally had a ton of fun clicking on all these links, exploring all these companies. It was kind of a rabbit hole. Amith, Y Combinator has a knack, you could say, for identifying potentially very successful companies.

Are you surprised that 44 of the startups in this group fall into the AI and software development category?

Amith Nagarajan: Not at all. I think that's a natural concentration because there's, first of all, you build what you know, and so there's so many people, YC is very much a group that, you know, has had a great track record of picking big ideas, you know, swinging for the fences, going for it, that's really what they're about.

Um, and so there is so much inefficiency and opportunity in software development. You know, the two you mentioned, focused on bug fixing and testing, are where probably 70 to 80 percent of the software development life cycle is spent, in that maintenance-fix-test kind of mode. Building completely new software is also an area AI can help a ton with.

But once you have a code base and [00:25:00] you want to maintain it, fix it, upgrade it, test it, those are naturally things that are going to be automated. For a particular startup, you know, I never really look at it and say, oh, well, this particular one or that one is going to be a great one. I think that these are categories that are so broad that, you know, there's gonna be a lot of acquisition activity.

If someone figures something out, that's good. Um, I think you're gonna see major platforms from providers like Atlassian, Microsoft, and others incorporate these capabilities as native features, either that they've built themselves or that they have acquired. You're gonna see a lot of the bug-fix-test cycle go away.

So to me, that's kind of the obvious place. And it's obviously a category I know well. Um, software development is kind of this intersection of creativity and science. There's this mix that goes on when you build software. So I think it's a great area for generative AI to have an impact. We certainly have seen it across all of our companies.

Um, you know, it's interesting, because just yesterday the team at Member Junction released version 1.0 of their software, which is the open-source common data [00:26:00] platform for associations and nonprofits. Uh, and that's hosted on GitHub, and you know, GitHub is a leader in AI with Copilot. Uh, they were actually the original copilot out there for software development, and they're gonna be introducing a whole bunch of other capabilities in that platform, I'm certain, uh, that will do exactly what you described.

Mallory Mejias: You are a software developer, Amith, and, at least from what I've heard, you like to develop software for fun, as well as for, like, business purposes and professional purposes. What do you think about AI impacting software development, or what have you seen thus far?

Amith Nagarajan: Well, I think software is fun to develop because it's a combination of, you know, engineering and building things and science with creativity.

That's what makes software fun. It's, it is a form of expression in a lot of respects. And so for me, for, you know, 40 years now, basically I've been, you know, doing this stuff and I love it. And I still do. My point of view is that I view these things as catalysts, as accelerators, um, not taking anything away from me, uh, but actually empowering me to go faster and do better because I [00:27:00] don't like doing the redundant, boring things.

I like to have the AI take care of everything I don't want to do and then focus on thinking about the architecture of software, working with other team members and saying, Hey, how do we want to build this and what do we want it to do? Um, and building some of the interesting code still. I think there's definitely a place for that.

I think that that might extend to other fields as well. I don't know. But, um, I think there's room for AI to take on a lot of the redundant things. I don't know any software developer that loves fixing bugs. They're probably out there; I just haven't met them. And, you know, certainly automated testing, and testing in general. There are people who specialize in that.

There are people who specialize in that. But, uh, and having high quality software is a great thing to be passionate about as an outcome. But the process you go through of finding bugs, fixing them, finding them, retesting, It's, it's just really laborious since it's perfect fit for AI. So I don't see that taking anything away from the creative side of software development personally.

Mallory Mejias: Looking through this list of AI startups that I have in front of me, I think pretty much every one of these could impact an association in some way, either [00:28:00] the way an association runs at its core, or even like the industry specific ones, there are likely associations for property managers and things like that.

Are there any of these categories that you find are most exciting for associations?

Amith Nagarajan: I think the things that you mentioned about if you look with the external lens of what the association does, you have to pay attention to this stuff. So if you're in anything around radiology or similar fields, understanding what's happening there with these types of copilots is key.

Of course, in that field, this is nothing new. AI has been, you know, leveraged as an assistive tool in these fields for many years, but it's accelerating. So there's opportunities there. Um, I think that if you're, you know, if you're in a field like real estate, um, these tools around property management or home inspection or real estate inspection are super interesting.

Um, so you have to look at it with the external lens: how are these disruptive forces going to affect your field, your profession, your industry? And think about the way you need to adapt to provide services, training, education, et cetera, [00:29:00] to what the future needs will be, based on that disruptive likelihood.

And then, of course, in the internal side, how can you do that work better? How can you be more efficient? How can you essentially employ an army of a I capabilities to, you know, supercharge what you're able to do and how you can deliver it, which might create opportunity to develop entirely new products and services as an association that you previously could not do.

Um, my favorite example of that is this idea of a knowledge assistant. People are deploying chatbots, certainly, to do customer service stuff. We've talked about Klarna, for example, in a prior podcast, which is one of these buy now, pay later companies; they've automated a large portion of their kind of rote customer service with actually great results.

Customers are as happy with the AI as they have been with human, uh, respondents, but also they get an answer much faster. That's a great use case. But what if you could take all of the knowledge of your association, packaged up into an AI with high accuracy, and then unleash that to serve your industry, serve your profession, to advance what everyone in that [00:30:00] space does?

That gets exciting. And that was not a capability you could even dream of pre AI. So, uh, to me, it's thinking about new services you can provide that raise the bar in terms of your impact in your sector.

Mallory Mejias: Okay, so for example, the tool I mentioned, the company I mentioned called DraftAid, which I'll go over one more time, converts 3D models to CAD drawings using AI.

Um, should the hypothetical architects' association, and there is one, I just don't know the name of it off the top of my head, should they be strategically thinking about and creating content around companies like this, interacting with companies like this? Should they sit back and wait and see what happens? What do you recommend there?

Amith Nagarajan: They need to be deep in this stuff. They need to get people who are experts in the field, because a lot of associations don't have deep expertise in the field within the association, or have only a little bit of it. And so someone like the American Institute of Architects should absolutely be all over this.

They should be really studying it. I'm sure that they have people that are, that are working on this, um, and [00:31:00] to, to go deep and to understand what's happening in the space, uh, because it's gonna affect architects. It's gonna affect people who do drafting. It's gonna affect all of it. Um, and I think that's true for every branch of engineering and science and, and basically everything, right?

So, um, my point of view is that the association has to develop a competency in AI in the context of their audience. You have to do that, because otherwise you're going to very quickly be out of touch with what people in your field are doing and what they need to do. And it's your job as an association, in my mind, to help lead the way, to help say, hey, this is where we need to go.

As a profession, we need to embrace this tool in this way in order to be more effective, more productive, create more opportunity, have better quality, etcetera.

Mallory Mejias: And what if someone listening is thinking, well, we don't have the subject matter experts. This stuff is so new. We don't have anyone who can speak on DraftAid because this is an AI startup. What would you say to that?

Amith Nagarajan: You go find them. That's your job, to go figure it out. And there's lots of resources out there. There's lots of people talking about this. So you go [00:32:00] to the YC list and look at what people are doing. You call up those startups and say, hey, you know, we're such-and-such association.

We represent this profession. We want to talk to you. We want to hear what you're up to. Be curious. Go and explore and investigate and learn. Maybe get some of your more closely aligned, highly engaged members who are, you know, have a knack for technology or whatever to get involved as well. I think there's a lot of ways to approach it, but sitting still is not the right approach.

You know, I think that that's the key: you just got to get out there and explore and learn. And in your field, you know, you're going to see disruptive forces. Um, some associations that I talked to are looking at it as a protectionist kind of thing. Like, how do we stop AI from negatively affecting our, um, our people, our industry, our profession?

That's certainly a concern for a lot of labor unions, which broadly fall into the category that we focus on helping, and I totally empathize with that. I also think that it's the job of those organizations to figure this out and say, how will AI disrupt this job type? And then to [00:33:00] learn that content and then help bring that to their people, so that their people don't become obsolete. You know, we repeat ourselves a lot in life and in this podcast, and we say that, you know, you're not likely to be replaced by AI by itself, at least at the moment, but you're very likely to be replaced by someone who knows how to use AI to do the job that you do.

So if you're a podcaster and you don't know how to use AI tools, I don't know how you can keep up with a podcaster that is good at AI. Same thing for writing, same thing for engineering, same thing for software development. Um, and I think it's the association's job to figure that out. Like, for, for whatever that profession is, how can AI help that profession?

And how do we train people to understand the technology? Uh, and, you know, whether or not you like it, it's there. So I think that, you know, if you represent a profession that employs 100, 000 people, or a million people, or 10 million people, Um, we've got to do what we can to try to help people understand this stuff.

Mallory Mejias: I, yep, I love that sentiment. I think everyone has fears around this. Everyone has questions and how powerful could it be if [00:34:00] your organization, your association is the place that people can go to learn about it. I think that's, that's really powerful. Amith, on a lighter or maybe darker note, depending on how we look at it, what do you think about pocket pod, the company that's creating on demand podcasts and podcasts?

That's for everyone, every day. What do you think about that?

Amith Nagarajan: I mean, it's, it's, it's just an example of personalization at scale, taking the idea of personalization of content and saying, Hey, we're going to create a personalized podcast just for you, Mallory, and it's going to know what you're interested in, whether you tell it that or somehow it figures it out through other signals.

And then it generates the script for that podcast based upon what it knows about you, and then synthesizes audio, maybe even video, just for you. Um, it's very much something that's within the realm of current AI. If you break down some of the things we've talked about in this podcast and said, hey, if you have an idea for a podcast episode, you can definitely get the AI to create a script for you. If you have the script, we know that there are tools like ElevenLabs, and there's tools from OpenAI and other people, that [00:35:00] can synthesize text to audio. Um, and so the idea of being able to do this is really a matter of connecting some building blocks together. Doing it well, of course, is another level: doing it in a way where you find it engaging and interesting, and, you know, there's some art along with the science. So I'm excited to see stuff like that. I actually think that, you know, when you have a podcast like the Sidecar Sync, or, you know, much bigger shows obviously that are out there, believe it or not, there are some shows bigger than the Sidecar Sync.
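For the technically curious, here is a hedged sketch of that "connect the building blocks" pipeline: draft a short script with a language model, then hand it to a text-to-speech endpoint. The listener interests, voice, output path, and model names are placeholders; this example happens to use the OpenAI Python SDK, and services like ElevenLabs expose similar text-to-speech APIs.

```python
# Sketch: a two-step personalized-audio pipeline. Script generation, then TTS.
# All names here (interests, voice, models, output path) are illustrative only.
from openai import OpenAI

client = OpenAI()

def personalized_episode(listener_interests, out_path="episode.mp3"):
    # Step 1: draft a short script tailored to this listener.
    script = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "user",
                   "content": "Write a 200-word podcast segment for a listener interested in: "
                              + ", ".join(listener_interests)}],
    ).choices[0].message.content

    # Step 2: synthesize the script into audio.
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=script)
    with open(out_path, "wb") as f:
        f.write(speech.content)
    return out_path

print(personalized_episode(["association management", "open-source AI models"]))
```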

There are some shows bigger than the sidecar sink Yeah, and you know it amazes me too Mallory But those are those other shows that have large fallings like Tim Ferriss or someone like that or Lex Friedman You know those those are shows where the personality is Of the producers of the content really are a big part of what draws people in and then certainly keeps them.

If people like the host, if people like the content, their way of communicating, I think there's a degree of protection for podcasters around that, at least to some degree. Um, you could say, hey, well, Sidecar Sync, like, will we embrace this idea? So if PocketPod [00:36:00] is a great tool, well, we have a Sidecar Sync personal edition, right?

Where you can basically train the PocketPod tool on all of our podcasts we've ever done. By the time that tool is available, maybe we'll be on episode 50 something. We feed every episode we've ever done into PocketPod, and then we make it available to our listeners to create a personal version. Could be interesting, right?

It could be an interesting thing to do, where it, you know, overweights the most recent podcasts but then also has availability of older ones. And then to take that idea even a step further, beyond what PocketPod's talking about, what if it was a two-way conversation? So you're listening to the podcast, I know some people probably do this, but you start talking back to the podcast and saying, well, what do you think about this?

You know, and so, um, if you had that kind of interactive experience, that could be very, very fascinating as well. So, um, I think when you kind of pull these things and draw them out to logical conclusions, it gets fascinating. Um, who knows if this particular company will be successful.

There's a lot more than just the tech that, you know, has to be there to make them [00:37:00] successful as a business. But I think the idea is great. And there's some lessons to be had for the association market where you're talking about new content modalities. You're talking about something at scale that wasn't possible pre-AI; it's an AI-scale problem.

You know, we like to say that a lot, and personalization is a great example; certain kinds of personalization are becoming really easy to do. Um, most associations I know are not doing any personalization, or any meaningful personalization. I talk to a lot of association groups, and one of the questions I ask them is, where are you on content personalization?

First of all, do you believe in the idea? Are you bought into it? Most people say yes to that. And then I say, well, have you done anything? The most common answer is basically no. Um, we maybe put in, Mallory, your first name. Maybe we segment a little bit and say we'll send you content in category A instead of category B.

But that's about it. It's not true one to one personalization. And they all pretty much say that, not all, but like many people say, well, we tried it, it was really expensive, it was really hard, it didn't work very well. And that [00:38:00] bias from that failure, that challenge, um, will oftentimes influence people's plans going forward.

They'll say, well, we tried personalization four years ago, and it was hard and it didn't work very well. And I would say to you, well, the reason you tried it is because the promise is actually incredible. Try it again, because the technology has doubled and doubled and doubled, and it's better and better and cheaper and cheaper.

So, um, coming back to PocketPod, it's just a form of personalization at the extreme level. And we're gonna see stuff like that pop up all over the place. I wouldn't be surprised if you log into Netflix a year or two from now, and it just starts generating content for you based on what you like; you talk to Netflix, and it knows everything you liked and didn't like, and you start to say, hey, I've got 25 minutes.

I want to watch something that's an action comedy. And it just spins up something. And, you know, there's all sorts of interesting issues around intellectual property and so forth, but that's where we're headed.

Mallory Mejias: Very interesting. I said it on last week's episode: I think for now, and this could change, my personal stance [00:39:00] is I hope to seek out creations that are made by humans and consume those, so, like, human-made podcasts, human-made movies, TV shows, whatever. But I will say, on the podcast front, I think there's an exciting use case where there just aren't podcasts around certain things. Like, I know there's certain parts of history that I don't remember learning when I was younger, and I'm like, man, it would be really interesting to get an interesting version of what happened in Latin American history in these years. I could see myself using something like PocketPod for that. So I guess it just kind of depends on the situation.

Amith Nagarajan: Yeah, I mean, the way I would describe that is you're separating kind of the raw utility of the thing from the art.

And so when you want to consume art, it's because of the enjoyment. It's because of the feeling you get from it. It's not just because you captured, you received that information, right? You got that information and that's it. Uh, it's like saying, okay, well, I can, let's say I could get the same, uh, historical content from a 15 minute personalized pod about the, uh, the movie Oppenheimer and what happened [00:40:00] there.

I just want to consume the history, get the facts. I want them given to me in a way that's relevant to my level of education, what I already know, which the personalized pod can do. It doesn't give me things that I already am aware of, but it really gives me the pieces and bits and pieces. But I don't experience any of the art of that film, right?

And so from the perspective of a consumer, if you're looking for raw utility and just achieving a thing, I think AI-generated stuff can be great. But I do agree with you that I very much look forward to consuming, um, you know, for enjoyment, uh, true artistic works from other humans. And, you know, we talked about AI music recently, and it's the same thing there. I think there's tremendous utility there, and there's gonna be an explosion of use cases where you take advantage of AI-generated music for business and for communicating with friends and so forth. But you're also going to have situations where, you know, is that really art?

Is that really something you would, would you listen to AI-generated music for fun? Eh, I don't know. Maybe some people eventually would. There probably will be a genre called AI art or, what, AI music. But, uh, maybe I'm old enough where I'm not going to particularly seek that out. I'm looking for, you know, music created by humans.

Mallory Mejias: Mm hmm. Yeah, I think there will be a market for those people, absolutely, and for people who want the Netflix on-demand film made just for you, and I get that, I can respect it. But I love what you said, separating utility from art. So I'm gonna hold on to that as part of my belief system.

Thank you, Amith. In topic three today, we're talking about AI and the middle class, not something we've spent a ton of time talking about on this podcast before. David Autor, an economist at MIT, has changed his stance on AI's potential impact on the workforce. Previously known for research showing how technology and trade have hurt middle-class incomes, Autor now argues that generative AI could reverse this trend and benefit the middle class. Autor believes modern AI is fundamentally different from past automation waves. He contends that AI can assist more people, including [00:42:00] those without college degrees, in performing valuable work currently done by expensive experts like doctors, lawyers, and professors.

By enabling a broader segment of the population to take on higher-skilled work, AI could increase their earnings and lift more workers into the middle class. This perspective is notable given his background studying the negative effects of technology on workers' wages. However, he explains that the facts have changed with the advent of powerful generative AI capabilities, prompting him to re-evaluate AI's potential economic impact.

Amith, what are your thoughts on this? You shared this article with me, and it was an interesting read, so I'd love to hear your thoughts on it.

Amith Nagarajan: Well, first of all, I love the idea of an academic who's as well known and as celebrated in the field as this guy flipping his stance on anything. Because I think the idea of going against what you've previously said is just so taboo in so many professions that people don't [00:43:00] consider it.

They don't consider their own views as being subject to change. And I think that's a giant mistake. Um, you know, I think it's really important to maintain as much mental agility as you can, to approach the world based upon what you've learned rather than what you've said. So, uh, and there's a big, big difference there, right?

So I think that's one thing that's notable. And AI is clearly a remarkable enough technology that someone in a field that's obviously adjacent to AI but not in AI would flip his stance on this particular topic. Now, as for the substance of the conversation, I think there's an interesting argument to be made that people in the middle could benefit from AI, subject to two conditions.

First being that what he is referring to holds true, that people who have a middle level of skill can elevate their game because of AI. So can a paralegal perform many of a lawyer's tasks? Can a medical assistant perform more of a doctor's tasks, [00:44:00] right? And if you can do more of the tasks, theoretically you can go up the value chain and you can achieve a greater level of utility and greater economic gain.

Now, at the same time, what we have to assume is that there is increased demand. Because, um, if there isn't, the question is, who's competing? Because even if you are able to do more, if demand is fixed, which I don't believe it is, but if it was, you have a problem with that equation, because ultimately you just drive down price: if demand is fixed and supply increases, that equilibrium pushes down price, and price ultimately will suffer.

That's great from a consumer perspective, but terrible if you're a provider in the field. And so the middle class would be hurt in that scenario. But the growth of the middle class, in America in particular and most developed nations that I'm aware of, has gone on through major technological disruptions now for centuries, and particularly in the last, uh, 70 years with the IT revolution. There's been downstream effects for people well outside of the IT sector, [00:45:00] and a lot of that has come about because of the increase in demand, essentially. So we talk a lot in our AI briefings about how the 10x increases are really interesting orders of magnitude to study, certainly in computing.

Uh, it's certainly in AI. But it's also interesting to study in the economic realm. And so, you know, if you study economic growth over a long period of time, I'm talking about pretty much all of human existence, we say, how long did it take us to get to $1 trillion in global GDP on a current basis, meaning inflation-adjusted, so equivalent to current 2024 U.S. dollars? How long did it take humanity to achieve $1 trillion in global output? Um, and that took until about the year 1700 of the modern era, right? So it's really hundreds of thousands of years, millions of years, but from, you know, recorded history, basically, we know that roughly 1700, in that era, is when economists estimate we achieved $1 trillion in global GDP, in current [00:46:00] U.S. dollars. It's essentially all of human history to get to a trillion.

Then the next question is, how long did it take to get to the next order of magnitude, $10 trillion? And it took about 250 years, through the Industrial Revolution, the steam engine, electricity, all these things that happened. We had a 10x increase over 200-plus years.

And then the question is, well, what's happened since then? Because right now we're at about $130 trillion in global GDP. And when we got to $100 trillion was around 2014, I believe. So from 1950 to 2014, 64, 65 years, compared to 250 years before that for the preceding 10x increase in global GDP, compared to all of human history for the one before that, thousands of years.

So something is definitely accelerating. And what it is is demand increases when you have the ability to produce more value at lower cost. Um, demand is not fixed. You can't argue that demand is fixed if you look at that period of time. Now you could say with AI, like, is it possible to have a quadrillion, uh, [00:47:00] you know, level of global GDP?

And I think the answer is yes. I think that the opportunity exists to create new products, new services. And, uh, the demand will likely grow to consume those things. So coming back to your question and my point of view on it, if demand continues to grow, even at the rate that it's been for the last 67 years, I think there'd be a ton of opportunity for the middle class, uh, because using AI assistive technologies will definitely level up skills.
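Working through the arithmetic behind that "10x in less and less time" point makes the acceleration easy to see; the short calculation below uses the rough year boundaries quoted in the conversation, not precise economic data.

```python
# Implied average annual growth rate for each 10x jump in global GDP,
# using the approximate spans mentioned in the episode.
spans = {
    "$1T -> $10T (~1700 to ~1950)": 250,
    "$10T -> $100T (~1950 to ~2014)": 64,
}

for label, years in spans.items():
    annual_growth = 10 ** (1 / years) - 1  # rate r such that (1 + r) ** years == 10
    print(f"{label}: ~{annual_growth:.1%} per year")

# The first 10x implies roughly 0.9% annual growth; the second roughly 3.7% --
# the acceleration being described.
```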

Um, last thing I'll say about this is that there was a study last year that was done in partnership between Wharton and the Boston Consulting Group that looked at the different levels of people. They took, I think, about a thousand employees at BCG. And BCG is a well known, you know, high end management consultancy.

But they, like anyone else, have a bell curve of talent. And so there are the people at the top of their performance, you know, curve, the people at the bottom, and people in the middle. And what was most fascinating about their study wasn't just the productivity increases felt across the board from using, in their case, [00:48:00] GPT-4.

There was an increase, but there was a larger increase in productivity in the middle of the bell curve. And so those people in the middle of the bell curve had a lift far greater than people at the high end of the bell curve. That leveling function is a lot of what David Autor is describing, I believe.

And so I think there's an interesting case to be made there. I think there's cause for optimism. At the same time, there's other economists out there, like Anton Korinek from the University of Virginia's Darden School of Business, who we've spoken about previously and we've written about, who look at this from a different lens, from what I understand, and he's looking at it more from the viewpoint of how quickly AI grows.

So, um, if AI grows so rapidly that AGI is upon us, well then all bets are off, because if AGI can literally do all the work, then it doesn't matter how high you elevate your skill. So in that scenario, you have a radical decrease in price, a.k.a. lots of unemployment. Of course, the value being created by the AI is so great.

How do we tap into that, uh, to provide for society's needs? That's a different question, right? Um, but ultimately, my point of view is that I think this piece is really interesting to read. I'd encourage everyone to read this piece in the New York Times, I believe. We'll link to it in the show notes. Um, there are many layers to this conversation.

And ultimately, in your profession as an association, I think you need to read this article and then think about how this piece, uh, applies in your sector, whether that's nursing or engineering or life sciences. You need to think about this critically as part of what we described in the earlier section.

How does this stuff apply to you and your profession?

Mallory Mejias: I think one of the key points in this article is that, uh, less skilled, quote unquote, people will be able to do the same things that doctors or lawyers, not everything, but some of the same tasks that doctors and lawyers can do. My question then is to you, Amith, and really we're just speculating here because we don't know the answers, but then what are lawyers and doctors doing?

Do you think there's just a whole new world of work that only they can do that [00:50:00] maybe we can't even imagine now just because we're not there yet?

Amith Nagarajan: Yeah, it's a great question. I mean, and even even independent of what we're talking about in terms of upskilling people in adjacent fields like medical assistant to doctor, paralegal to lawyer within the legal profession, just for lawyers or doctors, you have varying skill levels, right?

And so you have varying levels of experience. There's this whole process where people go into a profession like accounting or law or architecture. And there's like this period where you're learning, where you're picking up the knowledge, where you know very little coming out of school in most of these fields, and you learn a lot on the job.

Um, and if that work is automated, maybe the senior partner in the law firm still has work to do in terms of thinking or arguing a case in front of a judge, but the junior lawyer doesn't have the opportunity to learn by virtue of doing the lower level tasks. Uh, what do you do with those people? Right.

And so, short answer is, I have no idea. Um, but I think that the rate of change in AI is the thing to really watch for. The fact that there's displacing technology, we've been through [00:51:00] that a bunch of times over the course of history, but the speed at which AI is moving is the thing that's, I think, going to throw a lot of people off.

There are a lot of AI optimists who are saying, hey, look, um, every time we've had this happen before, we've figured it out. We'll figure it out here. And that's, that's the New Orleans tornado warning. So hopefully Mallory and I will still be here to complete this podcast. Might be the last episode of

Mallory Mejias: me.

Amith Nagarajan: But we, it might be, yeah, hopefully not.

Uh, this is pretty fun. Uh, but we'll get AI to step in for us, right?

Mallory Mejias: Absolutely.

Amith Nagarajan: Um, but, you know, in, over the course of time, people have figured this out. Society has figured this out. Industries have evolved. And that's true, but that doesn't mean that there isn't a massive disruptive effect along the way.

So you look at these curves and say, Hey, we've had all this growth, all this productivity, all this prosperity. Along the way, it's a jagged edge. It doesn't mean it's a smooth curve. And when you have a compression of timelines with less time to react, less time to adapt, which is what we need to do.

Technology doesn't need time to adapt. Technology is just there. But we need time to adapt. Society needs time to adapt. Your [00:52:00] association needs time to adapt. If you have a highly compressed time frame, It's much harder to predict what's gonna happen. And that disruptive effect potentially is more acute.

So to me, I think that's again coming back to what we talk about a lot. Spend time on this. Learn about it. You're listening to us on this podcast or watching us on YouTube. It means that you're taking time. Go deeper. Spend a little bit of time every week and learn this stuff because the disruptive effect is not a question mark.

It's really a question of when it's going to happen to your field.

Mallory Mejias: Do associations that impact specific professions have a responsibility to bridge the gap between their current member workforce capabilities and an AI enhanced workforce?

Amith Nagarajan: I don't see how the answer cannot be a resounding yes. I think that every association has the responsibility.

Uh, and the opportunity, but the responsibility, to serve their members, to serve their field, uh, and to serve society by helping their profession adapt to AI. I don't think there's any question of that. Whether you like it or you hate it, or you're somewhere in the middle, it's real. It's here. It's what's happening.

And so you have to figure out how your workforce in your field is going to adapt. How you're going to serve the people that are in your field and the adjacent fields and then the end consumers, whoever they are, that consume the services that your field produces. You have to go do that.

Mallory Mejias: Well, I think that's that's a great line to end today's episode on.

Thank you everyone for tuning in. Amith and I made it through the storm, and we will see you next week.

Amith Nagarajan: Thanks everyone.

We'll catch you in the next episode. Until then, keep learning, keep growing, and keep disrupting.

Post by Mallory Mejias
April 11, 2024
Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Mallory co-hosts and produces the Sidecar Sync podcast, where she delves into the latest trends in AI and technology, translating them into actionable insights.