Intro to AI Webinar

Timestamps:

0:00 Introduction & AI at Work
7:54 Advances in Protein Folding Prediction
11:44 Exciting Potential and Risks of AI
26:19 Advancements in GPT-4o
33:04 Product Roadmap and Future Innovations
41:10 The Future of AI in Work
46:36 The Role of AI in Organizations
54:58 The Value of AI Implementation

 

Summary:

In this episode, Amith and Mallory delve into the transformative impact of artificial intelligence (AI) on various sectors, with a keen focus on its role in the association world. They begin by discussing the advancements of AlphaFold 3, Google's latest AI model, which revolutionizes drug discovery and biological research. The conversation shifts to the newly released GPT-4o by OpenAI, highlighting its enhanced capabilities in text, audio, and image generation. They also explore the potential future of AI in personalized medicine and complex problem-solving. Finally, they review the 2024 Work Trend Index by Microsoft and LinkedIn, revealing significant insights into AI adoption in the workplace and the evolving job market.

 

 

Let us know what you think about the podcast! Drop your questions or comments in the Sidecar community.

This episode is brought to you by Sidecar's AI Learning Hub. The AI Learning Hub blends self-paced learning with live expert interaction. It's designed for the busy association or nonprofit professional.

Follow Sidecar on LinkedIn


More about Your Host:

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Follow Mallory on LinkedIn.

Read the Transcript

Disclaimer: This transcript was generated by artificial intelligence using Descript. It may contain errors or inaccuracies.

Amith Nagarajan:

Greetings everybody. And welcome back for another episode of the Sidecar Sync. My name is Amith Nagarajan and I'm one of your hosts.

Mallory Mejias: And my name is Mallory Mejias. I'm one of your co-hosts, and I run Sidecar.

Amith Nagarajan: It is great to be back. We have another action-packed and exciting episode at the intersection of artificial intelligence, AI, and you.

But before we get going, let's hear a quick word from our sponsor.

Mallory Mejias: Amith, last week, I think it was, we were at the Innovation Hub in D.C. Can you talk a little bit about your time there?

Amith Nagarajan: It was fantastic. Uh, yeah, it was, it was last week, wasn't it? In fact, I think it was almost exactly a week ago, or 8 days ago, and it feels like 3 months. Things [00:01:00] go so quickly. Uh, we had great turnout.

There were dozens of association executives from I don't remember how many different associations, and a lot of really great conversations about what they're working on, a lot of collaborative sharing. There were some great speakers from across the Blue Cypress family. Um, rasa.io announced an exciting new product, their personalization engine, outside of the newsletter. There's all sorts of cool stuff happening, and it was super fun to be in D.C. I used to live in D.C., I lived there for almost a decade, a long time ago, and it's always fun to go back and reconnect with old friends and get to meet new people. So I had a fantastic time. How about you?

Mallory Mejias: I had a great time myself. I got to lead a marketing AI panel for the second time. I did it the first time in Chicago, second time in DC.

We had some really great insights shared. And then, I think it was the day of the Innovation Hub, Amith, that OpenAI dropped GPT-4o, and then AlphaFold, which we're talking about today, was also dropped right around then. It was a crazy few days.

Amith Nagarajan: Yeah, Monday of last week, GPT-4o [00:02:00] dropped. So it was like the night that I was arriving in D.C. I was able to catch up because they actually live streamed it while I was flying, so I didn't catch it fully live. And then the next day, while we were together in D.C. with the association leaders coming to the Innovation Hub, uh, Google had their I/O event, which is their developer conference. And they announced a whole bunch of different, uh, really cool AI things there.

So, yeah, it was a, uh, it was a busy week, so I didn't blame too many people who were paying attention to Google and OpenAI instead of us. But, uh, you know, it was stiff competition for attention those two days.

Mallory Mejias: Yep, this week for sure we had a lot to pick from in terms of topics. It was kind of tough to narrow it down into three.

I also want to shout out our fan mail feature. We mentioned that, maybe it was two episodes ago, that we first mentioned it. But if you are listening on your mobile phone, your mobile device right now, you can go to the show notes and click "Send fan mail to the show," I think that's what it says. We had some great fan mail from one of our listeners, Liz.

So shout out to Liz from two weeks ago. So if you have any [00:03:00] thoughts, questions, concerns, anything that you want us to cover on the podcast, or just feedback in general, feel free to reach out to us through that fan mail link. Today we've got three topics lined up. The first of those will be AlphaFold 3.

Really excited for that one. Next we'll be talking about GPT-4o, because how could we not? And then finally, we're talking about Microsoft and LinkedIn's Work Trend Index report. So first and foremost, AlphaFold 3 is a new AI model developed by Google DeepMind and Isomorphic Labs. AlphaFold 3 is designed to predict the structure and interactions of a wide range of biological molecules, including proteins, DNA, RNA, and small molecules that are called ligands.

This model builds on the success of its predecessors, AlphaFold and AlphaFold 2, by expanding its capabilities to cover all of life's molecules and their interactions. So I'm going to try to contextualize this so we can kind of understand why it's important. The [00:04:00] model can predict the 3D structure of multiple biomolecules.

Like we mentioned, proteins, DNA, RNA, helping to understand the complex interactions within these biological systems. This capability provides detailed structural information that can be used to speed up the drug design process. AlphaFold 3 can predict how proteins interact with small molecules with high accuracy, which is crucial for designing new drugs.

Because AlphaFold 3 can predict the structures of various biomolecules, it allows researchers to model complex biological systems more accurately. And by accurately predicting how drugs interact with their targets, AlphaFold 3 can optimize drug efficacy and minimize side effects. Finally, AlphaFold 3 can predict the effects of genetic variations on protein structures, which can be used to develop personalized treatments tailored to individual patients' genetic profiles.

Amith, that's kind of a lot, a little bit on the technical side in terms of biology. [00:05:00] Can you help set the stage for why this is such a huge advancement?

Amith Nagarajan: I have so many thoughts on this, but first of all, did you know what a ligand was? I'd never heard of that before this.

Mallory Mejias: I, you know, I'm gonna say that I'd heard of a ligand, but I couldn't have probably answered in a multiple choice question what it was, but I'd heard of it.

I had.

Amith Nagarajan: I'm not a big life sciences guy. I'm intrigued by life sciences, but it was always my weakness in high school and in college. I was much more of a physical sciences, physics, uh, computer science, obviously, and also finance type guy. So I had a hard time with it, but I find it extremely fascinating, and I want to talk about some of the reasons I'm so excited about this area of AI, perhaps more than anything else that's happening in AI.

Uh, but ligands, yeah, they kind of occur to me. I'm like, is that some kind of special New Orleans cuisine or something like that? I don't

Mallory Mejias: want a plate of ligands.

Amith Nagarajan: You never know. Uh, anyway, so, so, you know, when I think about, first of all, AlphaFold is now in its third generation, and many, many things are happening with this over a period of time. You know, when AlphaFold first came out, it [00:06:00] was this novel way to predict protein structures in three dimensions.

And so just to quickly cover that: it's kind of like going from puzzle pieces to Lego blocks. So we knew the chemical structure of proteins for a long time. Um, not all proteins, but many of them. We had an understanding of proteins that naturally occurred, as well as engineered proteins. But we couldn't predict how they would fold, meaning as a protein goes from its, like, 2D chemical representation to the actual 3D visualization of how the bonds are formed, and therefore what are the angles, and what does the structure of that protein look like.

You can kind of see it in 3D, and that's the point of AlphaFold: to predict that protein folding, or the 3D structure, essentially. And so, if you imagine going from flat puzzle pieces from a jigsaw puzzle to 3D Lego blocks of all sorts of shapes and sizes, um, you can then think about how these things fit together.

And so many diseases, when you're thinking about how to target the disease and how to develop a [00:07:00] molecule that potentially can, um, be, you know, effective in curing the disease, um, you're talking about being able to try to find the right molecule to fit. Essentially, it's almost like a lock-and-key system.

Um, and so the concept of being able to predict at high scale, now hundreds of millions of proteins have been predicted by AlphaFold, going back to AlphaFold 2, actually. And those designs, essentially those predictions, which were very high accuracy, uh, were all open-sourced by Google. So, a real shout out to them for doing that, because I think in the, uh, podcast I listened to recently, uh, where they were talking about this innovation, they said that 1.4 million people had downloaded the AlphaFold 2 dataset to do experimentation work. So that's amazing that that many people are trying to use, um, this protein folding prediction data to advance life sciences. So, uh, coming back to AlphaFold 3 and some of the enhancements you're talking about, going beyond basic proteins and dealing with more complex [00:08:00] molecules.

Um, and how they interact and being able to model these complex systems, uh, it's going to usher in, first of all, a better fundamental understanding of biology than what we have today. And so it's going to help us advance fundamental research in biology and many other related disciplines. And then building on top of that, when we talk about, you know, solving problems, whether that's ecological problems, like, how do we get rid of plastics in the ocean?

How do we help, uh, with deforestation? These are all things where a better understanding of biology helps, right? How do you deal with soil erosion? How do you deal with coastal erosion in Louisiana, for example? Um, I don't have a specific example within those subdomains, but the point is that a stronger fundamental understanding of these systems is going to be critical for innovation.

Obviously, a lot of the things we're dealing with, carbon capture, dealing with, you know, climate in general and the challenges ahead in the coming decades. [00:09:00] This knowledge is going to be tremendous, in particular related to drug discovery. And, uh, you know, when we think about, like, how do you solve for either broad diseases, where millions of people or hundreds of thousands of people suffer from something, or even, you know, very narrow cases, um, this is a category where I think our ability to have a much better chance of success in the drug pipeline is going to increase.

So, if you think about how drug discovery has worked for a long time, you know, for every drug that is approved by the FDA in the United States and comparable agencies elsewhere in the world, um, there are high multiples of drugs in each prior phase of the process. So, meaning, as you go through, uh, even the earlier stages of drug discovery, when you have candidate compounds, then you have to go through animal testing.

And then from animal testing, there's three phases of human clinical trials. The first one tends to be focused on safety, the next one on efficacy, and then you go to, like, this larger-scale trial to [00:10:00] look for larger samples. And each of these phases of clinical trials becomes radically more expensive, which, by virtue of constraints, means it narrows the number of candidates you can push through that pipeline.

Um, if you get it wrong early and you spend hundreds of millions of dollars on a phase 2 or phase 3 clinical trial, as well as a lot of time, uh, you end up at a dead end. And I think it's something like only 11 percent of drugs that enter phase 1 clinical trials ever make it out of phase 3. Uh, so it's basically 1 in 10, right?

So if you can improve your odds, that is an incredible thing, and one of the ways you improve your odds is you have a better understanding of these systems, these complex biological systems, which this type of AI will help you model. So, very high level, I think this is perhaps the most exciting thing happening in the world today, because who could argue with more effective, more cost-effective, available drugs that are targeted and safer? I mean, it's just an amazing thing.

So, I get really pumped up about it, as limited as my own understanding is [00:11:00] of the underlying science and biology. I get really pumped about this application of AI.
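The funnel arithmetic behind that 1-in-10 figure can be sketched in a few lines. Note that the per-phase pass rates below are hypothetical round numbers chosen only so the cumulative rate lands near the roughly 11 percent Amith cites; real rates vary widely by therapeutic area:

```python
# Toy model of the clinical-trial funnel described above.
# Each rate is the fraction of candidates that survive one phase.

def cumulative_pass_rate(phase_rates):
    """Probability that a drug entering Phase 1 survives every phase."""
    p = 1.0
    for rate in phase_rates:
        p *= rate
    return p

# Hypothetical rates: Phase 1 (safety), Phase 2 (efficacy), Phase 3 (large-scale)
baseline = [0.60, 0.30, 0.58]
print(cumulative_pass_rate(baseline))  # roughly 0.10, i.e. about 1 in 10

# If better computational screening raised the hypothetical Phase 2 rate
# from 30% to 45%, the same pipeline would approve ~50% more of its entrants:
improved = [0.60, 0.45, 0.58]
print(cumulative_pass_rate(improved))
```

The point of the sketch is the multiplication: because the phases compound, even a modest improvement in one early phase moves the whole pipeline's yield.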

Mallory Mejias: Mm hmm. Yeah, this is incredibly exciting. I mean, just on this podcast, we've talked about AI advancements with material science, with weather prediction on one episode, and now with drug discovery.

Do you see a future, a near-term future, where we do away with traditional research methods and kind of rely on AI models for this kind of thing?

Amith Nagarajan: I think there will always be some work to be done on the bench, so to speak, meaning in the lab, uh, with molecules, being able to actually test things out, then leading to various types of trials.

Uh, we might be able to eliminate certain steps in the process. For example, if we get so good at predicting the way these compounds will work in humans, you might be able to bypass animal studies entirely. Like, you know, for example, we know an awful lot about how a wide array of compounds interact with rats and mice.

Uh, and rats and mice [00:12:00] obviously share some characteristics with us, but they're also extraordinarily different from humans. So, uh, you know, and then that's expensive. There's all sorts of ethics questions around animal testing as well. Uh, and it's slow and it's expensive, right? So I think that you might be able to eliminate some of those steps, because you have higher probabilities going in with better and better AI.

Um, the other thing, though, I think that's important is, you know, um, probably for a while, and how long that while is, is going to be a question that people have different answers to, you'll probably need, you know, some degree of validation in the lab before you go to any kinds of trials. Uh, it's interesting, because here in New Orleans, there's a company that I happen to be an investor in, that's an, uh, early-stage, very innovative life sciences, uh, drug research company.

And they work on essentially simulating, with actual live human tissue, uh, how different drug compounds and molecules essentially will interact with human nerve tissue. Their focus is on trying to, uh, find cures for [00:13:00] neurodegenerative diseases like Alzheimer's, Parkinson's, et cetera, ALS as well. And so,

Um, their innovation was this idea called nerve on a chip, which is this concept of being able to actually take live human nerve tissue and put it on a semiconductor and be able to do very controlled experiments at scale, uh, and get feedback loops through that that are much, much faster and much less expensive, uh, for a much wider array of potential candidate molecules to see how things interact.

Uh, both in terms of how the tissue reacts and also with electrical stimuli and all these other cool things they can do that you can't really replicate in humans at all. Uh, but it can give you essentially a prediction of how things are going to act in the actual much more complex biological system of a living breathing person.

Uh, and so I think that that kind of technology will still be really valuable, uh, at least probably for the next decade is my guess, but the AI will potentially be a precursor and, like, a co-pilot along with, uh, you know, new forms of in-lab testing like that. Uh, so there's [00:14:00] just so much happening, and that's exciting.

We talk about exponentials a lot on this podcast, and in our book, and in all of our other content. And there's exponentials happening actually within biology itself, with knowledge of things like gene editing and the technology I just mentioned, with, like, the, you know, uh, lab-on-a-chip or cells-on-a-chip kind of concept, right?

Which is a new idea, and that's an exponentially growing technology itself. And they feed off of each other, because AI is an exponential, and these other exponentials are also growing at an exponential pace, obviously, and they're feeding off of each other. So it's just an exciting time for scientific discovery in general.

Mallory Mejias: Absolutely. New drug discovery is certainly exciting. And then I also think that last part of what I mentioned, personalized medicine, I'm excited to see what advancements we see there, not only with new drugs, but new drugs perhaps tailored to our own genetic profiles. That just seems like next-level medicine.


Amith Nagarajan: Yeah, and then it's like the concept of, hey, Mallory, like, you know, you have all these various unique attributes [00:15:00] and things you're trying to improve or treat or whatever. And maybe there's like, you know, a 3D printer in your home that just pops out a pill for you that morning, not just based on your genetic profile, not just based on whatever your goals are.

or whatever your issues are, but also based on how your body is doing at that moment in time, right? How is your blood sugar? And how was your sleep last night? And it gives you that personalized, like, ultimate, ultimate capsule for you to take in, right, that morning. And you just feel great all day. So, I mean, we're not that far off from that type of sci fi.

Mallory Mejias: Wow, that's pretty crazy to think about. I feel like, Amith, you're really good at setting the stage for, like, what's possible next with these things, which is really helpful.

Amith Nagarajan: I'm good at making stuff up and then trying to make it happen. That's pretty much what I'm saying.

Mallory Mejias: A lot of what we've talked about, I feel like, is coming true. We'll see about the 3D-printed drugs, maybe.

Amith Nagarajan: you know mallory before before I know we want to move on to other things But I just want to say one thing about uh alpha fold. It's that it's it's And the [00:16:00] guys behind AlphaFold, uh, one person from Google's team, another person from a major investor, I forget which one, we're talking about this recently.

They were talking about, like, open-sourcing the database in AlphaFold 2, and the potential downside of open-sourcing not even the model, but the database itself, uh, and the possibility of open-sourcing technologies like AlphaFold 3. It kind of goes back to the same general conversation we've had about potential downside risk of open source in general, or maybe even just the potential downside risk of AI in general.

Right? Like, what are the potential malicious use cases? What could a bad actor do with these technologies? That's where I tend to focus as opposed to, um, the idea of, like, would the AI itself go bad, right? Like, will AlphaFold become, like, some kind of bad actor itself? That's like a really unlikely scenario.

Nothing's impossible, but it's unlikely. But what is likely is, let's say, a terrorist organization takes AlphaFold 3 and says, let's design a novel pathogen that can kill people at scale better than ever before, right? So, you know, [00:17:00] those kinds of negative use cases could exist. And so we have to think about that, but we also have to recognize that whatever's happening at the frontier, meaning AlphaFold 3, seems to be the best in this particular subdomain.

The people that are right behind that are probably not that far off. So what was the cutting edge in AlphaFold 2? Probably anyone and everyone around the world, even in very small labs, can do AlphaFold 2-level work, and that's not too far behind AlphaFold 3. So we have to keep raising the bar, because the good AI has got to stay ahead of potential bad use cases of AI.

So there is downside risk to all of this stuff. People could do bad things with any of these tools because they're powerful. Um, and my, my central point of view is simply that we have to keep advancing it because everyone has this stuff now. So it's just a theoretical argument to say, well, what if we don't have it?

Well, everyone has it. So we have to develop stuff that's capable of defending against the things that could go wrong. Hopefully the world doesn't come to that, but hey, that's, that's part of the way I frame it, because I think we [00:18:00] have to keep advancing the good AI to stay ahead of bad use cases.

Mallory Mejias: That's really helpful, because I wanted to ask you about potential downsides.

I know we talk about, you know, new discoveries all the time, or new tools, new companies, Suno AI with the text-to-music model, and then Sora, and there's kind of always a great side and a terrible side. With this one particularly, I felt like, huh, could there be a downside to something so powerful? But I guess you're right, if it gets into the wrong hands, which it will.

I mean, the fact is, you're right, we're going to keep pushing that line, the frontier line. So it will get into the hands of bad actors. I guess that's always a downside with this technology.

Amith Nagarajan: For sure. I mean, the more powerful a tool, any tool of sufficient power is fundamentally dual use, meaning exactly what we're talking about.

It can be used for good, it can be used for bad. And that's, like, the fundamental idea of gunpowder, same thing, right? You can use it for construction, you can use it for killing people. Um, there's a lot of different dual-use technologies out there. AI is [00:19:00] a perfect example of this. Um, and, you know, there's history preceding this conversation that I think we can look to, both for insight on how to do things well and also where to avoid drawbacks.

But also we're entering new territory, simply because of the speed at which this stuff is evolving. And if you think about it, like, GPT-4o, which we'll talk about soon, um, basically crushes the stuff that was state of the art six months ago. So how do you keep up with something that's evolving at that pace?

That's the open question. All of us are struggling, but

Mallory Mejias: Yep. On a funnier side note, I know a lot of people, when they talk about potential AI destruction in the future, reference the movie 2001: A Space Odyssey, which I had never seen. Uh, so I actually started watching it this past week, because I said, you know, this is probably essential if I'm going to be talking about AI this much.

And, whoo, they got a lot of stuff right. That movie is from 1968, so I'm just throwing that out there. It's, it's a good watch. All right. Topic two: GPT-4o, where [00:20:00] the "o" stands for "omni." It's the latest and most advanced multimodal large language model developed by OpenAI, released on May 13th of this year.

GPT-4o is an evolution from its predecessors, including GPT-4 and GPT-4 Turbo, and integrates and enhances capabilities across multiple modalities like text, audio, and image. So diving in a little bit more to that multimodal integration, always a tongue twister, multimodal integration, GPT-4o excels in text generation, of course, comprehension, and manipulation.

With audio, it can ingest and generate audio files, providing feedback on tone and speed, and even singing on demand. With images, it has advanced image generation and understanding capabilities, including one-shot, reference-based image generation and accurate text depictions. In terms of performance enhancements, GPT-4o generates text twice as fast and is 50 percent cheaper than GPT-4 Turbo.

It [00:21:00] supports a 128,000-token context window, allowing it to handle extensive and complex inputs. It also shows improved performance in non-English languages, making it more versatile globally. GPT-4o can process and generate outputs in real time, making interactions more natural and intuitive.

It can handle interruptions and respond with human-like voice modulation. It's available via OpenAI's API and supports text and vision models, with plans to include audio and video capabilities for trusted partners.
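For anyone curious what that API access looks like in practice, here is a minimal sketch using the OpenAI Python SDK. The `build_chat_request` helper and the prompts are purely illustrative, and a real call needs your own `OPENAI_API_KEY`:

```python
# Sketch of a GPT-4o chat completion request. The request-building helper is
# just for illustration; the commented-out call shows how it would be sent.

def build_chat_request(prompt: str) -> dict:
    """Assemble keyword arguments for a GPT-4o chat completion."""
    return {
        "model": "gpt-4o",  # the model identifier OpenAI announced for Omni
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

# Real call (requires `pip install openai` and OPENAI_API_KEY in the environment):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(**build_chat_request("Say hello."))
# print(response.choices[0].message.content)
```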

Something really interesting is you can test out GPT-4o for free. Actually, you don't even have to sign up with a paid account, but you do have limited access. Amith, have you tested out GPT-4 Omni? [00:22:00] What do you think?

Amith Nagarajan: Yeah, I've been working with it a bunch since the release date and I have some very positive impressions.

A couple of quick things I want to point out, um, in addition to your excellent summary. Number one, uh, if you use the free GPT-4o, remember that any product you use that is free, you are the product. It's not a free product; you're the product. So there's always a downside risk to that. In the case of GPT-4o, yes, you can get access to it for free.

But if you choose that, by default that means you're opting into allowing OpenAI to use your conversations for training future models, which you generally do not want. Now, you can turn that off, but a very small number of people turn it off. Um, it's better to just pay the $20, where the default is the inverse of that, and your data is protected and private.

But that's just a little bit of a side note. I think free is great. I love the idea of open access. A lot of us that are thinking about $20 a month are like, oh, that's trivial, we don't care. But that's not true around the world. So the fact that they are making it free for everyone on the planet [00:23:00] is awesome.

And I applaud that. Just be aware of it and be thoughtful about turning off that setting that allows training. So the other quick thing is, um, it's half the cost for API access compared to GPT-4 Turbo. So it's twice as fast and half the cost. And, oh, by the way, it's smarter and more capable, so it's a pretty big deal.

We talked a lot on this pod, Mallory, about how AI is on roughly a six-month price-performance doubling. GPT-4 Turbo was released in the fall of last year, which was the update to GPT-4, which was released in March. And GPT-4 Turbo was about half the cost and twice the power of GPT-4's original release, as is Omni, six months thereafter.
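That six-month price-performance doubling is just compounding by powers of two. A quick illustrative sketch (the starting cost and speed units below are made up, not real OpenAI pricing):

```python
# Toy projection of the "half the cost, twice the speed every ~6 months" pattern.

def project(start_cost: float, start_speed: float, periods: int):
    """Halve cost and double speed once per six-month period."""
    return start_cost / 2**periods, start_speed * 2**periods

# Two periods (one year) from a notional 100-per-unit, 1x-speed baseline:
cost, speed = project(start_cost=100.0, start_speed=1.0, periods=2)
print(cost, speed)  # 25.0 4.0
```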

So just with OpenAI as one kind of quick heuristic and test case against that idea of doubling every six months, it seems to be holding. That by itself is both stunning and crazy and exciting. Um, so, testing GPT-4o, a couple different things. So, first of all, many of you have heard us talk in the past about how one of the projects we have going on at [00:24:00] Blue Cypress is this AI agent that we call Skip.

Um, Skip is an AI agent that lives within the MemberJunction data platform, and what Skip is able to do is have conversations with you as a business consultant. Uh, and then put on a different hat and say, oh, okay, well, I can also be your data scientist. I can be your coding partner. I can also be your report writer and your analyst to look at the output of the report.

So Skip can do all these great things. And Skip is powered traditionally by, or actually, Skip is capable of using Claude 3 Opus as well as Gemini 1.5 Pro, but most people are using GPT-4. And so we updated to GPT-4o, which just required a little bit of work on the engineering team's part. And we found immediately that performance roughly doubled.

And the output was better. So that was exciting. And that's a very complex use case, because Skip is a very complex, multi-agent, multi-shot prompting style architecture that does a lot of things with very, very deep prompting strategies. GPT-4o [00:25:00] performed extremely well, so that was exciting. And then as a consumer, just using ChatGPT, I found GPT-4o to pretty much live up to how it's been advertised. It's faster, it seems to be somewhat smarter, there's more nuance in its responses too, which I enjoy. It's definitely better at writing than GPT-4 Turbo was. Um, so, so far so good. I'm really excited by it. I think that, um, the feature set that they demonstrated with a lot of audio and video interaction, where GPT-4o is capable of watching you through the webcam or through the, uh, phone camera if you choose to turn that on, adds more context. Uh, it's also capable of looking at your screen if you use the Mac or PC, uh, desktop app that OpenAI is making available. You can choose to share your screen. So, like, for example, if I'm working on something on my screen, um, GPT-4o can look at what I'm doing, and I can ask it questions in audio, and it can say, [00:26:00] oh, no, you clicked on the wrong button, you should click on this other button. Or, you know, if I'm writing code, it can give me feedback in real time on the code I'm writing, or it can look at the app that I'm designing, or it can look at the email that I'm writing.

So, it's got context awareness across everything I'm doing on my desktop. It can see me and my surroundings. And I encourage everyone who's listening to this to check out the, uh, YouTube videos from OpenAI from last week. Um, there's one in particular that I loved the most, which was, uh, Sal Khan, founder of Khan Academy.

He's also a New Orleans native, by the way, which is really cool. Just a side note, uh, he is the founder of, I think, the world's largest, uh, free online education resource, Khan Academy. They've been playing with GPT-4 as a launch partner since last spring. And they launched this thing called Khanmigo.

Khanmigo is a tutor that does amazing things, even in its original version, to help anyone learn any number of topics like math and science and language arts and so forth. And they demoed a new version of [00:27:00] Khanmigo that had this context awareness of what's happening on your desktop or tablet or video. Uh, and it's actually Sal and his son, uh, where his son is getting tutoring from Khanmigo.

It's quite a stunning two-, three-minute video. I'd really encourage people to watch that. In particular, in this market with associations, many of which are delivering education, think about the layers of multimodality involved in that demo, where the AI is watching the person, looking at the screen, hearing the voice, looking at what Sal's son is doing on the screen with a, with a, uh, stylus, and all this kind of stuff happening at the same time.

It is getting closer and closer to a real live human expert tutor sitting right next to the kid, you know? So it's amazing. And by the way, the other thing that happened this past, I think the last couple of days, is Microsoft announced they're partnering with Khan Academy to give Khanmigo away for free to the entire world.

Uh, previously it was a premium product, very expensive to run. Microsoft's just [00:28:00] underwriting it. Um, and Khanmigo is available for free for anyone on the planet. So that is super, super exciting. And that's all powered by GPT-4o.

Mallory Mejias: Wow. Yeah, I did just see that announcement this week. I think you shared it on LinkedIn, Amith.

Okay, so I tried it out myself. And the thing that I immediately noticed about GPT-4o: much faster. I mean, just the speed at which it's generating text, you can tell. So I would highly recommend all of you listeners to test that out. And then, what I didn't realize on the app last night, I normally don't, uh, interact with ChatGPT using audio, but I tested that out last night thinking it was GPT-4o. But after talking with you, Amith, before we started recording this episode, I think that was older functionality.

Is it correct to say that with GPT-4o, instead of it transcribing our audio into text to understand it and then doing the reverse, GPT-4o can actually just understand the audio itself?

Amith Nagarajan: That's right. So GPT-4o is a natively multimodal model. So [00:29:00] let's unpack what that means. It means that the model, from its pre-training onwards, has been given text, of course, but also audio and video.

And perhaps a number of other types of content to ingest as part of its self-supervised pre-training process, which means the content is just being thrown into the model as it's consuming it, shaping the model's functionality, essentially. So, because it's been natively trained with multimodal content, it natively understands multimodal inputs, and it natively generates multimodal outputs.

Um, and that's the future of all models. We won't be saying large language model, small language model, or large multimodal model. It'll just be large and small models. You're going to see these acronyms get abbreviated, because all models that consumers use, and most developers use, will essentially be multimodal models from [00:30:00] the start. So what you're referring to, Mallory, is the ChatGPT app on the iPhone, and I believe on Android as well, which for quite a few months has had an audio feature. And I love this thing. I use it all the time when I'm driving, when I'm walking around New Orleans. I'm talking to ChatGPT all the time. And it is a translation layer, the current version that we have available.

So what happens is, when I speak, uh, there's a different AI model, which is a speech-to-text model, that translates what I'm saying to text. That text is then fed to the underlying language model, which is either GPT-4, or you can actually switch to GPT-4o now. So you can use the language model GPT-4o, but it's getting a text input.

It's not getting my voice. And then it generates a text response. And then a separate text-to-speech, uh, model does, as you just described, the reverse, and basically speaks it to me. And that's the way the current consumer app works, while we're waiting for the native multimodal capability to be brought to the app. And I think that's weeks away, is my understanding. So that would mean that there will be lower latency, because you'll be able to talk directly to GPT-4o without that, you know, [00:31:00] speech-to-text, text-to-speech part in the middle. Um, and you're likely to get a much higher quality response, because, if you think about this, like, if you take this podcast and you run it through an AI, uh, transcription tool and you get the transcription, it's not the same as listening to us.

You lose all of the additional information that comes with audio, um, that you don't get with just plain text. And so that's the issue: um, these translation layers are lossy, meaning the information density goes down when you go from audio to text, and then also from text to audio. So with the model natively being able to generate and ingest, uh, these other modalities, you're going to get better quality, uh, automatically.

So that's exciting. And I think we're a handful of weeks away from getting native access to that.
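The cascaded voice pipeline described here (speech-to-text, then a text-only language model, then text-to-speech) can be sketched in a few lines. This is a minimal illustration, not real OpenAI SDK code; every function body below is a hypothetical placeholder, and the point is just to show where the lossy hops sit versus a single native multimodal call.

```python
# Hypothetical sketch of the cascaded "translation layer" voice pipeline
# versus a native multimodal call. All function bodies are placeholders,
# not real OpenAI API calls.

def speech_to_text(audio: bytes) -> str:
    """Stand-in for a separate ASR model; tone and emphasis are lost here."""
    return "transcribed user question"

def language_model(prompt: str) -> str:
    """Stand-in for the text-only LLM in the middle of the cascade."""
    return f"text answer to: {prompt}"

def text_to_speech(text: str) -> bytes:
    """Stand-in for a separate TTS model that voices the reply."""
    return text.encode()

def cascaded_voice_chat(audio: bytes) -> bytes:
    # Three hops, each adding latency; the audio is flattened to text,
    # so the language model never hears the speaker's actual voice.
    return text_to_speech(language_model(speech_to_text(audio)))

def native_voice_chat(audio: bytes) -> bytes:
    """Stand-in for a natively multimodal model: one model ingests the
    audio directly and emits audio, so nothing is lost to intermediate
    transcripts and there is only a single hop of latency."""
    return b"audio reply"
```

The practical difference is the one called out above: fewer hops means lower latency, and skipping the intermediate transcript means cues like tone and pacing can inform the response.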

Mallory Mejias: All right. I'm not saying you know the answer to this question, but I'd like to hear your take on it. Why do you think this is GPT-4 Omni instead of GPT-4.5, or 5? Do you just see this as a slight update?

It seems like there's some [00:32:00] big updates in there, but I just want to hear your take on it.

Amith Nagarajan: I think, you know, this is really a branding, positioning, marketing type question more than anything else. I'm not sure what their product roadmap looks like. I think this is a 4.5-ish type of release.

It's not intended to be the five release. Um, they're, they're saving that for something probably much bigger. Uh, I think they're also kind of testing the market a little bit to see how competitors react to GPT-4o. And they're releasing something... you know, my suspicion is that OpenAI has something considerably better than GPT-4o already.

Um, but its capabilities are, uh, far more, you know, frontier, meaning they're significantly better than what we have access to. Um, hopefully they're doing a lot of red teaming, meaning, like, safety testing, all that. And that will probably be available, like, later this year is my guess, you know, roughly 3, 6, 9 months from now, in that time scale.

Uh, OpenAI has a tendency to be ahead of the pack quite a bit, so they also tend to have a bit of a reaction where, if someone else gets attention for half a [00:33:00] second, they drop something new, like they did with Sora when Gemini 1.5 came out. They were like, oh, well, check this out, just to get everyone's attention back to OpenAI.

So I think there's, there's some degree of mastery of marketing there that they're doing. Um, but I also think that it's, it's, um, kind of a smart, smart product management move. Because even if they had GPT-5 available, ready to go right now, um, they might feel like they don't need to release it yet. That they can put something, uh, like this out, which is still, you know, GPT-4o, is, you know, notably better now than Claude 3 Opus. It's notably better than Gemini 1.5 Pro. And I'm talking about both in terms of performance and overall, like, generalized benchmarks that we're looking at. Is it dramatically different, and should you use GPT-4o instead of Claude or Gemini? Not necessarily, there's obviously a lot of subtleties to that, but they've been on the top of the leaderboards, and they are again now.

So, I think they're playing a little bit of a game there, and if there's truly a remarkable advance from someone else, they'll probably drop something bigger pretty quickly. I'm giving them a lot of credit there, though. This could [00:34:00] be, like, the best they've got, and they might be a year or two away from GPT-5 and, you know, just kind of posturing that way.

So, I really have no idea, other than I think they have something better than this, based on other rumblings we've heard. And it would actually kind of make sense, based on the time they've had since GPT-4, to have something, as remarkable as GPT-4o is, considerably better than that, and something with better reasoning capability.

We've talked a lot on this pod about multi-step complex planning and reasoning, going beyond, like, the next-token prediction that these autoregressive LLMs are focused on, having, like, true reasoning capability kind of baked into the model. It's essentially like a lot of the agentic types of behavior we've talked about on this pod, where you have an agent that's capable of taking multiple complex steps, breaking them down, executing those steps on your behalf, taking action, making decisions.

These models can't do that yet. And you can build systems around these models that, in fact, are capable of pretty advanced planning and reasoning, but the [00:35:00] models themselves do not do that. And so I think that's kind of where GPT-5 probably will end up, and I suspect they're well on their way towards that.

Mallory Mejias: Yep, especially given how we recently talked about Sam Altman saying, what, GPT-4 is mildly embarrassing, or something like that. It seems to be hinting that they've got some other stuff in the pipeline, but that's helpful.

Amith Nagarajan: Yeah, I think it's interesting because, you know, sometimes it's hard to retain broader perspective when you're super deep in something. You know, like, we talk about Skip all the time.

We're like, oh, yeah, the current version 1 of Skip, you know, does all these amazing things, but we think it's, yeah, it's not that great. It's just, it's just okay. And it's going to get really powerful soon. But in reality, when someone who's never seen an agent like Skip can have a conversation that generates, like, a very sophisticated report that gives them all these business insights, something that might have taken them six to eight weeks and thousands of dollars before, if they ever got it at all, like, they're blown away.

So, like, I think the average business person, not just the average person on the planet, but the average, like, sophisticated business person, is blown [00:36:00] away by GPT-3.5-level stuff. So, you know, there's some perspective, I think, that leaders in Silicon Valley, like Sam, probably take into account when they think about how that messaging works. Uh, but that even affects people like us, who are deep in this particular vertical. It's, it's really hard to maintain that perspective.

Mallory Mejias: I saw it at the Innovation Hub. I think, you know, obviously you and I meet every week talking about this stuff. And for me, personally, a tool like Perplexity, that I've spoken about on this podcast and that I use all the time, is like, oh, of course, Perplexity, you know, people use it, nothing special. And then it came up during the panel, and I realized a lot of people attending hadn't heard of it and didn't know what it could do.

So it's, it's definitely good to always remind yourself, uh, to kind of step back and look at the greater context.

Amith Nagarajan: Oh, you know, I had a quick tip related to that. I mean, before this pod, we were talking about how I use, um, GPT-4o and ChatGPT voice, uh, in the context of, like, drafting content. Like, some of you are aware that we're in the process of updating Ascend, which is our book on AI for associations that so many people have read [00:37:00] and provided great feedback on.

Well, we released that book in kind of the late spring, early summer of 2023, which is eons ago in AI timescales. And so we're doing a complete update of that book, lots of new content, refreshing all the existing content. We plan to have that out later this summer. And in doing that, uh, that book, we have a number of new topics we want to cover that weren't in the original book, and a good example of that is AI agents, which I was just touching on a little bit.

Um, so, last night, I decided to use GPT-4o and the audio mode on my, uh, ChatGPT app on my iPhone. And I was walking around New Orleans, just talking to myself as I do. Really, I was talking to GPT-4o. And I had this great conversation, but the way I approached it is, I talked to the AI and asked it for its input. I did give it some context on the people that I'm writing for, I gave it context on my point of view on certain topics, like AI agents, and I kind of went back and forth and said, what do you think?

I got feedback. I said, well, I like this, but I don't like that. And then I had it start drafting some content with me. Uh, and then I went and gave it feedback, and I kept going [00:38:00] back and forth. And over the course of just over an hour, I ended up with about, I think, seven or eight thousand words, and they weren't perfect.

But then I got back to my house. I, you know, put that content into a Word document, I edited a bunch, and, you know, in less than a couple of hours of work, I have a new chapter. Now, is it a final draft? Of course not, but it's a really good first draft, and it's very much a co-creation process, right? But if you go to the AI and just say, hey, give me a chapter on AI agents for an association audience, you're going to pretty much get garbage.

Um, so you have to work at it a little bit harder, but you also have to remember these things are not like traditional software where you have to go through a set of menus or click a certain set of buttons. You can kind of just talk out loud and think through it, and the AI will help you figure out where you want to go with it.

That's the power of these things that a lot of people haven't yet unlocked, because they're thinking in kind of a linear way of how to use them, based on, like, the prior biases we all have, where we're working with deterministic software that only operates in one way. Um, and here you just kind of, like, throw your creativity at it and see what happens.

Mallory Mejias: Further expanding on that, in terms of potential [00:39:00] use cases we'll see popping up, I want to talk a little bit about multi-party conversations, Amith. You also mentioned this before we started recording, but the idea that we could have a conversation, you and me, and perhaps a podcasting expert and a marketing expert and an AI expert, we could all have this conversation together.

Oh, and I should mention all those experts would be AIs besides Amith and myself. Can you talk a little bit about that?

Amith Nagarajan: Yeah, I mean, if you think about a multi-party conversation, whether it's in person or on a video call like this, uh, or perhaps, you know, online with Microsoft Teams or Slack or something like that, that's more asynchronous.

Um, you have these things happening all the time with people, where we're having conversations back and forth on a variety of topics, and different people chime in with different points of view, different opinions. Um, and there's no reason why AI can't fully participate in that. And there actually are use cases of that.

For example, another one of our projects, Betty, which many of you have heard of, uh, is capable of being directly [00:40:00] integrated with your Microsoft Teams, your Slack, online communities like Higher Logic and Circle and others. And when Betty is a party in a multi-party conversation, Betty will chime in whenever it's appropriate, just like a real person might do.

And so if you imagine a world where there's multiple different AIs that you invite to a conversation. So say I want to develop a new conference for my association. So I'm thinking, Hey, this new conference is going to be for a particular sub segment of my market. Maybe it's young professionals in a certain region or people with a certain interest.

Um, I bring in an AI expert that's an expert on demographics and the generation I'm targeting. I bring in another AI expert that is a domain expert in the subject matter that I'm focused on. Maybe I bring in an event-planning AI expert that's really good at the region I'm targeting. What does that mean?

It's basically the same type of AI models, but I've pre-prompted them to tell them what their role is. I've said, hey, AI agent one, you're the [00:41:00] expert in this, and I give a detailed, like, almost like a resume of, here's who you are. Um, and I tell the next AI a detailed resume of who they are and what their point of view should be. And then I have them talk to each other, along with talking to us, and we have an interesting conversation. And that sounds kind of sci-fi, but you can do that right now.

In fact, there's some products coming out that do this. There's a product we're actually about to start a trial of here at Blue Cypress called Glue, which I'm really excited to get going with. I have no idea how it works specifically or what its functionality is, but we're excited to test it out. I also think all the mainstream communication tools are going to embrace this concept.

You're going to see this inside Microsoft stuff. Copilots are going to pop in and you'll see it in Slack and you'll see it everywhere else. Uh, but multi party conversations are just part of how we collaborate as a species, right? It's what we've done since the beginning of time in tribes and with tools, and now we're just doing that with AI.

So, it's something we'll have to get used to, but, um, I think we're going to see more and more of that. I mean, it wouldn't surprise me at all if, within 12 months, on one of the episodes of the Sidecar Sync [00:42:00] pod, we have an extra participant that we interview, a live interaction with an AI, or multiple AIs, that are joining the podcast and having a chat with us, right?

Um, that's not far off. We could probably do it right now actually, if we did a little bit of work from a software engineering perspective, but very soon you'll be able to do that, you know, with consumer grade tools and you'll get some interesting results.
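The role-prompted, multi-party setup described above can be sketched as a loop over agents sharing one transcript. The class, the agent "resumes", and the stubbed respond method below are hypothetical placeholders; a real version would replace respond with a call to an LLM API, passing the resume as the system prompt.

```python
# Minimal sketch of a multi-party conversation with pre-prompted AI agents.
# respond() is a placeholder for an LLM call conditioned on the agent's
# "resume" (system prompt) plus the shared conversation transcript.

class RoleAgent:
    def __init__(self, name, resume):
        self.name = name
        self.resume = resume  # detailed description of who this agent is

    def respond(self, transcript):
        # Placeholder: a real implementation would send self.resume and
        # the transcript to a language model here.
        return f"{self.name} ({self.resume}): thoughts on '{transcript[-1]}'"

agents = [
    RoleAgent("Agent1", "demographics expert on the target generation"),
    RoleAgent("Agent2", "domain expert in the association's subject matter"),
    RoleAgent("Agent3", "event planner for the target region"),
]

def multi_party_turn(transcript):
    # Each pre-prompted agent chimes in, and its reply becomes context
    # for the next agent, like people taking turns in a meeting.
    for agent in agents:
        transcript.append(agent.respond(transcript))
    return transcript

conversation = multi_party_turn(
    ["Human: let's design a conference for young professionals"]
)
```

Products of the kind mentioned presumably add turn-taking logic on top, so an agent only chimes in when it's relevant, the way Betty is described as doing.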

Mallory Mejias: Yeah, stay tuned for that one. I guess you're right.

All the pieces are there more or less. We just need to see them come together. All right. That will definitely be an interesting future episode of the Sidecar Sync. Topic three today, Microsoft and LinkedIn's work trends report. Microsoft and LinkedIn released the 2024 Work Trend Index on the state of AI at work, which provides an overview of how AI is transforming the workplace and the broader labor market.

The report is based on a survey of 31,000 people across 31 countries, labor and hiring trends from LinkedIn, analysis of trillions of Microsoft 365 productivity signals, and research with Fortune 500 customers. Here are [00:43:00] some key points from that report. Three in four knowledge workers, or 75 percent, now use AI at work.

AI is credited with saving time, boosting creativity, and allowing employees to focus on their most important tasks. In terms of leadership perspectives, while 79 percent of leaders agree that AI adoption is critical to remain competitive, 59 percent are concerned about quantifying the productivity gains from AI.

And 60 percent worry that their company lacks a clear vision and plan for AI implementation. There's been a significant increase in LinkedIn members, adding AI skills to their profiles with a 142 X increase in skills like co pilot and chat GPT. AI mentions and LinkedIn job posts lead to a 17 percent increase in application growth.

Organizations that provide AI tools and training are more likely to attract top talent and professionals who enhance their AI skills will have a competitive edge. Ryan Roslansky, CEO [00:44:00] of LinkedIn, emphasizes the need for new strategies to adapt to AI's impact on work. He suggests that leaders who focus on agility and internal skill building will create more efficient, engaged, and equitable work.

Amith, I'm wondering, do you have any like gut reactions to this report? Any of this surprising to you or does this feel pretty spot on?

Amith Nagarajan: It kind of makes sense. I mean, you know, I think the 142x increase in people posting ChatGPT on their resume or on LinkedIn totally makes sense. And I do think employers are looking for that.

You know, you're looking at people that are AI natives, right? We talked about digital natives, social natives, uh, PC natives in the past, people who kind of grew up with the technology or are just accustomed to using it in their workflow. It's a new skill set. It's a different way of thinking. It's new, and it's harder and different in a lot of ways than learning how to use a new application on a computer, because it requires more creativity, because there is no manual for ChatGPT, like we've talked about before.

There is no manual for Copilot. You just have to figure out how to use it in a way. And there's cookbooks, [00:45:00] and there's prompt guides, and there's courses you can take, and those are all helpful starting points. But it's really a different way of thinking. It's a different way of thinking about your, your own processes.

So, I like the fact that people are, are going forward and saying, Hey, I've got these skills. And I'm hoping more and more employers are really looking for that as, uh, a key indicator of not just what someone's been doing, but like their willingness to adapt. And I think that's the key thing. How curious and how insightful are they?

How willing are they to learn new things?

Mallory Mejias: What's particularly interesting to me is on the, on the applicant side, seeing that boost of people putting AI skills on their profiles, but then also on the application growth side on the business side, um, job postings that mentioned AI are seeing growth as well.

Now, you've mentioned on the podcast before you wouldn't necessarily hire a chief AI officer or an AI marketer, for example, because that might insinuate that AI is only the responsibility of that person. Do you still feel that way?

Amith Nagarajan: I think it depends on the organization. I mean, the AI officer role, I think, [00:46:00] actually could be a really critical role for a Fortune 500 company.

Or maybe a very large association, where someone's responsibility is to think a hundred percent of the time about just this topic, how it stitches across the whole enterprise. Um, my point earlier, when I said that, is simply I don't want to delegate AI to one person, which essentially takes the pressure off of everyone else.

That's what I don't like. In a way, if you think about, like, even technology, a chief technology or chief information officer at an association often has been, like, the go-to for anything even moderately technical. And a lot of other executives have been like, no, hands off, we're not going to touch it or worry about it, that's the CTO's thing to worry about. And that's a mistake, and that's one of the reasons associations are, in many cases, so far behind on tech. The broader leadership team is not very advanced in terms of their tech understanding.

I'm not saying go get into the code. I'm just saying, like, understand what these systems do, how they work. You make a lot of poor decisions when you're not well informed. And so with AI, I worry about the same thing. If you have a chief AI officer, and the events [00:47:00] person, the meetings person, the membership person and so on are just like, okay, yeah, we're good, now we don't have to worry about this, um, the organization is not going to do well. And frankly, those people aren't going to do well in their careers, because, you know, if you're an events, uh, marketing manager or whatever your title is, and you don't know how to use this stuff, you're gonna have a problem in a handful of years.

Mallory Mejias: So, three in four knowledge workers are using AI, according to this report, which to me sounds like a lot, but I'm not super surprised. As a leader, as someone who has hired many, many people at this point, I'm sure, Amith, would you be concerned hiring someone who isn't familiar with AI?

Amith Nagarajan: Well, let me tell you what I look for when I hire people, and I don't think this changes. I look for people who are curious, people who like to learn stuff, and that's demonstrated by their behavior, right? Not just by saying, Oh, I like learning stuff, but like people who actually take an active interest in their lives and being curious and learning new things.

There's lots of ways of figuring that out if you just have a, kind of, unscripted conversation with [00:48:00] a candidate, to say, you know, what do you like learning? Tell me about it. What's your favorite recent book? Or, if you're an audiobook person, tell me about that. Why did you like it? What was interesting to you?

You can tell pretty quickly when you're talking to a learner versus someone who's kind of static, right? And it's very hard to change that. It's hard to, like, turn around someone who kind of turned off their learning capability at the end of college or whatever and really hasn't advanced much.

Mainly because they just don't want to. That's a different characteristic than someone who's, like, really interested in learning new stuff. So I look for that, because if someone has a strong learner mindset, you can teach them almost anything. Now, I'm presupposing that the person has reasonable intelligence. I'm not looking for geniuses in every role.

Of course, it's wonderful to have someone ridiculously smart, but, like, you know, someone who's got good smarts but is a learner. And of course, I'm looking for, like, a work ethic, someone who's really hungry, who's going to push themselves. And, you know, we work in startups, so that doesn't necessarily mean, like, a 100-hours-a-week person, but someone who's going to push themselves hard.

So the [00:49:00] reason I point to that, related to your question about AI, is I don't think that changes. I think we need people who are curious, who are learners, who are pushing themselves hard. And that's true in every context. And with AI completely changing the game in terms of what we as humans do versus what the computers do...

We've got to really be on that. We've got to be learning stuff constantly. We've got to be consuming podcasts and doing online courses and talking to people about the way they're using stuff, and experimenting. So, to me, those are the important qualities. Um, I think the three in four knowledge workers using AI is, uh, both exciting and also a bit of a smokescreen, in that the reality is that, of those three people out of the four, you know, the three million out of four million or whatever, the amount of depth that most of those people have gone to with these tools is really, like, inch-deep kind of stuff, which is, which is fine.

It's great that they started, but a lot of times people are like, oh yeah, I signed up for ChatGPT and I had a conversation with it, or I had it help me, you know, write an email. And that's great. Like, I'm not negative [00:50:00] on that at all. I think it's wonderful. But, like, how many people actually spend, let's say, 10 hours a week or more working with AI tools?

And the answer will probably drop very rapidly, right? So what we have to do is drive adoption in our organizations. Yeah, we've got to start learning. And then we have to drive experimentation to find the productivity gains, to find the increases in value for our audience. And then we have to really hit hard on those things.

When we find these like veins of gold, so to speak, in our mining activities, we've got to go after those and really fully exploit them to benefit our organization.

Mallory Mejias: So, on that note, 59 percent of leaders are concerned about quantifying the productivity gains from AI. How do you balance the need to experiment right now and adopt AI with also being able to quantify the impact of those experiments?

Amith Nagarajan: You know, with every technology disruption cycle, we don't have the ability to project what it means in economic terms. We can look back at history and say, how long does it take to have, let's say, a 10x increase [00:51:00] in total economic output with prior cycles? So if you think about the first industrial revolution, you know, we went from a roughly $1 trillion global economy in the 1700s to a $10 trillion global economy in the mid-1950s.

So that's a 250-ish year cycle time for a 10x increase.

Then it went from there, in the mid-1900s, through the, uh, 2011 timeframe to get to the next 10x increase. Uh, but back in the 1950s, if you said, hey, you know, it took 250 years for the last 10x increase in global GDP, how long do you think it's going to take this time? Many people, even super optimists on technology, might have said, well, it was 250 years last time, maybe half as long, maybe 125 years. I don't think a lot of people would have guessed 50, 60 years.
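For context on the arithmetic here, the compound annual growth rate implied by a 10x increase over each span can be computed directly. A small sketch, using the approximate year counts from the conversation:

```python
# Compound annual growth rate r such that (1 + r) ** years == multiple.
# The year spans are rough approximations from the conversation,
# not precise economic data.

def annual_growth_rate(multiple, years):
    return multiple ** (1.0 / years) - 1.0

slow = annual_growth_rate(10, 250)  # first industrial cycle: ~0.9% per year
fast = annual_growth_rate(10, 55)   # mid-1950s to ~2011: ~4.3% per year
```

The point of the comparison survives the rough inputs: the second 10x took roughly a fifth of the time, implying a growth rate several times higher.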

Maybe half as long, maybe 125 years. I don't think a lot of people would have guessed 50, 60 years. And the question is, is what's the time frame for the AI's impact? Because AI is as big of a deal by most people's [00:52:00] perspectives, you know, as technology, information technology was, or certainly kind of the earlier technologies I referred to.

So I think the question is then, okay, if we're on this curve and power is increasing so fast, the economic gains are going to be out there. The question is, who's going to get them? And so coming back to your question, I can't quantify the productivity gains for every organization until I go into that organization and look at what they're currently doing, look at where their market is heading and what needs to be built to serve the future needs of that audience.

And then start to, you know, build little experiments to test it out. But I know that there's opportunity out there. So, I think experiments are actually what educate you to then have a thesis, to say, okay, if we build this out fully from this little experiment, that's what's going to drive a 2x, 5x, 10x, 50x increase in output.

Um, so I think experiments are the way to educate yourself more empirically, compared to the theoretical education you get from an online course. Obviously, we have an online course in our Learning Hub. It's awesome. You should sign [00:53:00] up for it, but it's not going to actually teach you what's going to happen in your organization.

It's going to give you certain fundamentals. And then you go run these experiments in your organization, and you learn from that. You say, okay, we can extrapolate from this little bitty experiment we did that, if we do this fully, it will result in this outcome. The experiments ultimately shed that light.

Mallory Mejias: Hmm. Okay. I'm going to put myself on blast a little bit here. Um, but obviously you all know that at Sidecar, we use AI every day, all day, basically, in kind of every part of our business. But Amith, if you asked me right now to quantify, like, the exact productivity gain Sidecar has had from AI, I wouldn't be able to do that for you.

Maybe if I really, like, sat down and did some digging, I could pull those numbers together. But I'm wondering, for our listeners, should we be, like, tracking this in a spreadsheet somewhere, kind of keeping track of how long things used to take us versus how long they take us now?

Is there anything we can keep in mind as we're running experiments to contextualize [00:54:00] those gains?

Amith Nagarajan: Well, I think, on the one hand, the question is, should you even bother? And I think that there's arguments to be made for both yes and no. On the yes side, it is helpful to quantify things, because it'll help you predict future gains.

Um, it's also helpful when you're reporting to your board and saying, hey, this is why we put X dollars and time into this, and this is what we got out of it. The flip side of it is that it's such a squishy kind of thing to try to calculate that, you know, is there significant value in doing so? That would be the potential no side of it.

But what I would say is this: ask yourself, after you've achieved some kind of gains... you know in your gut that you get a lot of value from AI, right? And so in your job, day to day, you're using AI. So ask yourself this: what if you stopped using AI? If I said at the Blue Cypress level, hey, you know, we're banning AI.

You can't use AI in your job anymore. What would that do? How much more time would you need in your day to do the same amount of work you do now, right? It's probably two, three, four, five times. It's just an insurmountable obstacle. So that's one way to think about how much lift you're getting from the technology. Uh, or, you know, different modalities of [00:55:00] transportation might be an interesting comparison.

Say I tell you, hey, Mallory, go to D.C. for three days to participate in this. Oh, by the way, you can't fly. You have to drive there. Well, now you have two days of transportation to drive, whatever, 1,500 miles each way from New Orleans to D.C. I've radically impacted your productivity, right, by taking away an advanced technology that's an order of magnitude faster than what you'd have available without it.

So I think it's the same kind of concept. You can imagine the world without it and then kind of in rewind, see what the productivity gain is. But it's hard to know exactly what it is when you're looking forward, if that makes sense.

Mallory Mejias: That's actually very helpful. So I'll, I'll think of it that way. It sounds like a total dystopia, a world where Blue Cypress says no more AI usage.

I don't think we'll ever get there, but, um, yes, it would certainly impact Sidecar's business if we could not use AI. So I feel like that is a good way to think about it. Last question.

Amith Nagarajan: As long as I'm around, that will not be Blue Cypress policy. But, uh, even beyond that, I'll, I'll have an AI avatar of me.

Mallory Mejias: Oh my [00:56:00] goodness. We'll have a multi-party conversation about it. Um, Amith, last question here. 60 percent of the leaders surveyed feel that their company lacks a clear vision and plan for AI implementation. What would you say to association leaders who might be listening to this and feel the same way?

Amith Nagarajan: Look, I, I empathize with that deeply, because I lack a clear vision of exactly what I'm going to do with AI for Blue Cypress or how I'm going to help associations. The field is moving so fast, you cannot say that you have absolute clarity. Even someone at the forefront of this, like a Sam Altman or a Demis Hassabis or a Satya Nadella, these guys don't have a completely clear vision.

So, first of all, let's kind of align on that and feel a little better about ourselves as association leaders: you're not in the dark alone. We're all kind of in the dark. Some of us have a little bit bigger aperture to see what might be coming, but not a whole ton. So that's good, because you can catch up with this.

The key to this is, um, it's kind of a multi-step process. First, start learning. You're doing that by listening to this pod; read some of our [00:57:00] content, check out the Learning Hub, check out other resources. There's tons of great stuff out there. Get started with basic learning, and then experiment, because the experimentation will open up that aperture further.

It'll teach you what will happen on the ground with your organization. And after you've done a little bit of that, or maybe before, you can engage in this process we like to call an AI roadmap, which you can do on your own. There's people like us who can help you with that, and there's plenty of other people who can do this as well.

And the whole idea behind a roadmap is to build a near-term plan that says, hey, this is what we're going to go do, these are our priorities. Those priorities are informed by how we feel the external environment is going to change, right? So the first thing you want to think about is what's going to happen to your members because of AI.

So if I have an audience of attorneys as my association's membership, what's going to happen to the legal profession in the next two, three, four years? What kinds of education, products, and services will lawyers need over the next several years from my association, and [00:58:00] what do I need to build, essentially?

What do I need to have available to serve those future needs? Because I need to start building that now, or at least start planning to build it now, or I'm not going to be able to catch up to where those people will be in two, three years. That's one piece. The other part of it is: how can I make what I currently do better, right?

How do you go do what I just said? Well, you've got to automate a lot of what you currently do, because no one's saying stop doing what you're doing now. Don't stop your annual conference. Don't stop delivering traditional CLE over the web or in person. You have to do those things. But can you make those things twice as efficient?

And then repurpose staff time, um, to think about, hey, what can we go after? So the idea of an AI roadmap very much centers around both of those macro external factors and the internal process factors. Um, and the reason I usually suggest this as a second or maybe third step in the process is that if you're starting from ground zero and you try to attack a roadmap project, you're likely to find it overwhelming, and you're probably not going to get the best output, because you just know so little about what these tools can [00:59:00] do and where they're going without having played with them a little bit. So, um, I would usually recommend people start with learning, then a little bit of experimentation, and then do a roadmap.

Uh, we've got templates on this. We've got a lot of great content you can download and use as a guidepost. But the basic idea is super simple: study the external environment, try to predict where it's going, and figure out what you need to do to serve that market down the road in two, three years. Then look at your internal environment and look for process opportunities as well.

Mallory Mejias: Well, that sounds like a great plan. And yes, all you listeners, if you're interested in that AI roadmap piece, feel free to use the fan mail link in your show notes to send us a message. We'll read those and respond to them. Thank you all for tuning in today, and we will see you next week.

Amith Nagarajan: Thanks a lot.

Post by Mallory Mejias
May 23, 2024
Mallory co-hosts and produces the Sidecar Sync podcast, where she delves into the latest trends in AI and technology, translating them into actionable insights.