Sidecar Blog

6 Obstacles Preventing Your Organization From Embracing AI [Sidecar Sync Episode 43]

Written by Emilia DiFabrizio | Aug 15, 2024 4:30:40 PM

Timestamps:

00:00 - Introduction
02:35 - ASAE Highlights
07:44 - Obstacle 1: "We're Too Busy for AI"
13:25 - Obstacle 2: "We Want to Be Thoughtful, Not Rushed"
19:04 - Obstacle 3: "We Can't Start Without AI Policies"
26:05 - Obstacle 4: "Mixed Feelings on AI Adoption"
32:32 - Obstacle 5: "Free vs. Paid AI Tools"
38:56 - Obstacle 6: "We Tried AI, Now What?"

 

Summary:

In this episode of Sidecar Sync, Amith and Mallory delve into the hurdles associations face when trying to incorporate AI into their operations. From time constraints and skepticism to the need for strategic policies, they explore the common challenges and offer practical advice on how to overcome them. Whether you're just starting with AI or looking to deepen its integration into your organization, this discussion will provide valuable insights. Plus, they share examples from their recent experiences at industry events and offer tips on making AI work for you.


Let us know what you think about the podcast! Drop your questions or comments in the Sidecar community.

This episode is brought to you by Sidecar's AI Learning Hub. The AI Learning Hub blends self-paced learning with live expert interaction. It's designed for the busy association or nonprofit professional.

Follow Sidecar on LinkedIn

πŸ›  AI Tools and Resources Mentioned in This Episode:
ChatGPT ➑ https://openai.com/chatgpt
MeetGeek ➑ https://meetgeek.ai/
Suno AI ➑ https://suno.ai/

βš™οΈ Other Resources from Sidecar: 

 

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress πŸ”— https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey. Follow Amith on LinkedIn.

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Follow Mallory on LinkedIn.

Read the Transcript

Amith Nagarajan: Welcome back, everybody, to another episode of the Sidecar Sync. We're, as always, enthusiastic to be back with all of you with all sorts of interesting thoughts, ideas and challenges in the world of associations and artificial intelligence. My name is Amith Nagarajan.

Mallory Mejias: And my name is Mallory Mejias.

Amith Nagarajan: And we are your hosts before we jump into our exciting episode.

Let's take a moment to hear a few words from our sponsor.

Mallory Mejias: Amith, I just saw you in person in Cleveland at ASAE Annual, and for those of you that don't know, that's the American Society for Association Executives Annual Meeting. How was that for you?

Amith Nagarajan: I had a great time. I was only there for, I think maybe 36 hours or 40 hours on the ground. I had some other things I had to go do, but it was amazing. It's a, I don't go every year, but I try to get there every other year, at least. And it's just a fascinating time to get together with people because of all the things that are happening in the world.

With technology and with associations. So I had a bunch of fun. It was great to reconnect with people and meet new people, hear a lot of thoughts that are on people's minds right now. So I had a lot of fun. Plus the big thing for me, more than any of that was it was two days of a reprieve from the weather in the South.

It was so nice up there.

Mallory Mejias: It was so nice. I actually didn't even look at the weather when I was packing, and not that it was too cold, but it was kind of chilly, and I thought, oh, I just assumed it would be blistering hot, being that we both live in the South, but it was really beautiful weather up there.

Amith Nagarajan: Yeah, it was great being close to the lake. Cleveland, they really did a nice job with their downtown. It was pretty. There's parks, there were, you know, places to walk. It was just a great place to visit. If you haven't been to downtown Cleveland, I would definitely recommend putting that on your list.

It might not be at the top of everyone's list of places to go visit, but it's cool. And NFL season was starting up, so it was fun to see people walking around with Packers and Browns jerseys the first day we were there.

Mallory Mejias: There was a lot going on in downtown Cleveland. There was also an MGK concert. If we have any listeners or viewers on YouTube who like MGK, I don't know a ton of his music, but I know that he did thwart a lot of people from getting to our happy hour. So shout out to MGK for that one.

Amith Nagarajan: Oh, is that what the disruption in the street was? Yeah.

Mallory Mejias: I think so.

There was only one path to get to our happy hour, I believe. Well, I'm sure there were alternate paths, but the main path, I believe, was blocked off for the concert. So people who came had to walk a good bit, and we truly appreciate that.

Amith Nagarajan: For sure. It was great to see people at the Sidecar happy hour that evening. And now that I know that, I'm actually really impressed that we had a pretty good turnout there.

Mallory Mejias: We definitely did. And I've got to say, we probably have some new listeners to the podcast, just because we had so many people stop by the Sidecar booth at this event. We had such great feedback from people who were familiar with Sidecar, people who weren't and wanted to learn more, and some fans of the podcast, which was really interesting to see in person, because again, Amith and I don't meet a ton of you, but it's always exciting to hear who's listening, what you've learned, and what your favorite takeaways were.

So to the new people, welcome, and to the people who have been joining us, thank you so much for tuning in.

Amith Nagarajan: Yeah. And on my end, I spent a lot of the time at the show walking around, just talking to people. And also, I think we had a few hundred copies of our new book, the Ascend second edition, and I believe they were all kind of flying off the shelves, or out of the booth. And I handed out quite a few to some friends and folks that I met at the event.

So that was cool too, because it's a pretty heavy book to walk around with. It's almost 300 pages of content for association leaders to learn about AI, and people were, you know, walking out with these things. So it was great.

Mallory Mejias: It was fantastic. We had, I want to say, 200 copies, Amith, and we gave them all out. People were so excited to see us giving away hard-copy books. And dare I say, I didn't walk around the expo hall a ton 'cause I was mostly at the Sidecar booth, but I don't really remember seeing many booths with books. So I think people were excited by that, and they were even more excited to hear that it was free.

Amith Nagarajan: Yeah, and I think it's a nice giveaway. I mean, certainly, hopefully it's a topic that's high on people's lists to learn about. And most people are still at the very early phases of their learning journey. The day after, or actually the day of, the second day of the show, I flew to Chicago and had a handful of meetings there with clients that we work with, and it just kind of reaffirmed some of the thematic things. You know, depending on the organization, they're at different phases of the journey, but they're all fairly early on in their learning journey with AI.

So, which is exciting. But also it's an opportunity to kind of refresh our minds, because, you know, you and I are kind of in our little echo chamber of Sidecar and Blue Cypress land, where we talk about AI constantly. We are massive practitioners of using every ounce of AI capability we can. We're developing software around AI.

So to talk to people and hear their struggles and their challenges and their concerns about where they are day to day, it was really actually quite refreshing for me personally, because I just don't get that much of it, because of the reinforcing loop of the people who are close to us.

Mallory Mejias: Absolutely. Being in person like that and getting to collect so much information firsthand from the people that we're talking about all the time, from the people that Sidecar is serving, that was so empowering that it inspired the topic for today's episode, which is 6 Obstacles Stopping You From Fully Embracing the Power of AI in Your Organization.

So at the booth, as I mentioned, we had lots of people stop by, and I started to notice some patterns fairly quickly in what people were saying to us, different challenges across the board. We had big associations stop by, small associations, and everything in between. But generally we kept seeing the same challenges over and over, so we wanted to dedicate today's episode to addressing those challenges head on. Whether you were at the event or not, I think you will probably relate to most of these, and we're going to troubleshoot them live together.

The first obstacle is one that I heard probably the most often, and I think it might be the most difficult to address:

"We're thinking about AI, but we are just so busy right now. We really don't have the time." Amith, I know you and I have covered that on the podcast before, but for the sake of this episode, I want to dive in a little bit deeper. If an association CEO came to you and said that challenge, what would be your advice?

Amith Nagarajan: Well, I mean, this may not be a popular answer, but I'd tell them that, you know, if they're, if they're too busy right now, they're going to have a lot of free time if they're unemployed. And so my bottom line is you better learn this stuff because your job, no matter if you're the CEO or an entry level person, your job depends on it.

People are not going to be replaced by AI by itself. I don't believe so, at least not in many of the jobs in this sector, but absolutely, people and associations will be replaced by people and other entities that embrace AI. So AI is both the most amazing opportunity in the world, but it's also an existential threat if you ignore it.

I can't think of a more important thing to go do than to learn the basics of AI. You don't need to be an AI expert, but if you're completely unaware of the capabilities of these tools, you know, it's like walking around saying, Hey, we're in the home construction business and all of our employees use hand tools, but every other company building homes, they learned about electricity and they have power tools.

Who do you think is going to do better long term, right? In terms of sustainability, unless it's truly some craft where it's artistic, artisan-type building. Which is wonderful, but that is not the scale solution or the sustainable solution in a sector like this. And of course, most associations are not in that type of realm.

You know, I'm sorry if that kind of rubs people the wrong way, the way I'm positioning it, but I'm trying to get people's attention to say, look, this is that urgent, that you have to pay attention to this. There's plenty of things you can stop doing; build a stop-doing list. Think about what you're currently doing that isn't going to change that much in the next 12 months.

So if you look at your list of to-dos and say, hey, if I didn't do this, if I didn't take care of this topic, would that affect my association's viability or my personal job viability in the next 12 months, or even the next six months? So between now and the end of the year, five-ish months left, right?

Four and a half months left. So if I just stopped doing this thing for four and a half months, will I still be employed, and will my association still exist at the end of that four and a half months? For most things, the answer is yes. I would argue that if you don't pay attention to AI, maybe you won't be out of business or out of a job in four and a half months, but it's possible.

And certainly in the next year or two, it's very likely, because the world is moving so fast. Yeah, it's a little bit of tough love that I have to provide in terms of responding to that. You've got to prioritize learning this stuff. And people who are already moving along down this path, I commend you, because I know it's difficult.

I realize it's really hard, especially in a volunteer governance structure where your top level priorities are set by your board and your board may not buy into it. I totally get it. I empathize deeply with that. At the same time if you're a leader in the organization at any level, you have to push forward on the things that are going to drive change.

And this is going to be the most amazing change in a lot of ways for every organization.

Mallory Mejias: You mentioned before the idea of the stop list, and I love that. How often would you recommend looking at that list, writing something on that list in practice? Like once a month? Is that something you should be looking at weekly to see if there's anything that you could potentially stop?

Amith Nagarajan: I think so. At a minimum on a quarterly basis, you should be looking at what you're saying your stated priorities are for the next quarter and then looking at what you can stop doing. I do think that a more frequent refresh of that would be appropriate. I look at it also in terms of meetings that I'm attending, meetings take up a lot of time and I look at it and say am I really needed for that meeting?

Can I just skip it? And it's not because I don't want to meet with the people or whatever. It's because I'm trying to protect my time and I'm looking at it saying, do I need to be in that status meeting on this project? Do I need to be part of yet another conversation on a certain topic? Does the meeting even need to exist?

Of course, that's a great question: can we kill it? But as an individual, can you opt out of certain meetings? That can oftentimes reclaim a handful of hours a week pretty easily, because people go like drones into these meetings and just sit there, and most of the time they have nothing to say and they're just listening.

That's a really inefficient use of time. Mostly, that type of meeting can be replaced with asynchronous communication, where people are, like, writing up a status update that takes them 10 minutes to write up and 5 minutes to read, right? Like we do that at Sidecar and at Blue Cypress across the board, where we do have regular meetings, of course, but we expect our team members, depending on the role and various things, to send either a daily or a weekly 5-15 report, as we call it, which literally takes no more than 15 minutes to write and no more than 5 minutes to read for the other people on the team or the supervisor.

And that saves a crazy amount of time. So there's a lot of techniques like that. I think there's tons of great advice out there on optimizing your schedule and this and that. And a lot of people feel like they've already read all those books and they've, you know, they're tired of hearing the same thing.

But the reality is most people haven't optimized their schedule hardly at all, and they're attending a lot of meetings that aren't necessary, at least for them as individuals. Another thing I'll throw out there while I'm on the topic of meetings: Amazon and Jeff Bezos were famous for the two-pizza rule, where they wouldn't have a meeting where you couldn't feed everyone with two pizzas, right? So depending on the organization, if I was in the room, that might just be me, but depending on the other people, it's not that many folks who can be fed by two pizzas: four, six, maybe eight people.

So it's not these big giant meetings where you have 12, 14, 18 people, for the most part.

Mallory Mejias: So it sounds like the idea here is: we understand you're busy. I would say most people probably say that they're busy, but you've got to do something to optimize your schedule, because educating you and your team on AI is urgent. Would you agree with that?

Amith Nagarajan: Yeah, I mean, what else is happening in the world right now that could completely change the nature of your sector and your own association business and your job? I mean, this is a technology where, you know, what we're saying is that intelligence is fundamentally being commoditized, which is a crazy thought, but that's exactly what's happening, right?

We're not seeing entire white-collar jobs being fully automated yet at the job level, but you are seeing it at the task level. And what is a job but a bundle of tasks? So as a percentage of the job, the more the tasks get automated, or become automatable, that's going to displace a lot of those things.

That's the one side of it, saying: how do we do what we currently do faster, better, cheaper? That's the quest we've been on since the beginning of time. We're a species that's advanced because of our intelligence, but also because we're toolmakers, and we've been that way since the beginning of time. But the point is that this is the most powerful tool we've ever held.

And we don't know how to use it. So people who know how to use it are going to run circles around those who don't. The other side of the ball, though, of course, is: with these tools, what can we do that we could never do before? So other than saying, hey, how can we do what we currently do better, faster, cheaper, what are the things that we could not do before, but now we can? For example, taking this podcast and simultaneously translating it into as many languages as we want to, right, which we haven't done yet with Sidecar, but that would be a wonderful thing to do.

And the cost of that would have been unapproachable for us and most organizations up until now. Now it's free. And that's a good example of a task that's been automated. But there's a lot of things like that, things that you can now do, and of course our book Ascend is chock full of ideas like that.

But even if you think about it purely on the how-do-we-get-more-efficient side, the last thing I'll say about "you don't have time" is that's exactly why you need AI, because AI can probably save you 10 to 50 percent of your day. It can automate so much of what you're currently doing. That'll free you up to have, you know, higher-order opportunities in your life.

Mallory Mejias: What a great point. A secondary sub-obstacle to this one, I'll say, is to me a little bit more valid. Not that being busy is not valid, but I think this one is more justifiable at least: we want to be thoughtful about AI and not rush into anything without strategy. I heard this a few times at the booth.

We want to educate our team, but we want to really think it through slowly before we do. What do you say to that, Amith?

Amith Nagarajan: I think there's two sides to that. I do think it's prudent to have some thoughtfulness, some guidelines, some policies even, around AI. But how do people who have no knowledge or experience with a given technology write the policy and create the guidelines if they've never done it themselves?

It's like saying, Hey, I'm going to teach you, Mallory, how to drive a car. I've never driven one, but I've seen a car or I'm going to teach you how to fly a rocket ship. I kind of understand what rocket ships do, but I've never been in one. I've never been near one. I have no idea how to fly one, but I'm going to teach you how to fly one, right?

That's the idea of setting policies and guidelines for others or for your whole team without having any hands on experience. So on the one hand, I appreciate that there's a need to protect your confidential and sensitive data. That's a major hot button and it should be. And that's something appropriate to address.

But the flip side is, if you try to get too, I would say, rigorous about your structure and your policy and guidelines too quickly, you're going to stifle innovation. And quite frankly, the handful of people who are hell-bent on just doing whatever they're going to do, they're going to do it anyway, and they're going to work around policies.

So your policies will likely be largely misinformed if you do it too soon. So my suggestion would be: do some basic training, get some understanding of what these tools can do at a very high level, and then formulate a preliminary policy. And when you roll it out, unlike most things that come from high up in association land, where they're kind of etched into tablets and people think they literally are, like, impossible to change, right?

Like the bylaws of the association. Policies are perhaps not as hard to change as that. But the point is that most people in the association culture are used to policies being quite persistent; they last a long, long time. You have to set the expectation with people that the policy is likely going to be adapting iteratively on a high-frequency basis.

So that's a really important part of it. But I do 100 percent support the idea of policies and guidelines. But that's also why training is part of that. I would mandate training, and that's another topic we'll be discussing. But the whole point to me is, there are times to lead where you're saying, yeah, we're going to kind of let people ebb and flow at their own pace.

And there's a time to lead in kind of a mandatory type of mindset, where the leader stands up and says: this is a critical issue, all hands are on deck, this is what we're going to go do, this is your job. And I think this is the time for leaders to stand up and do that when it comes to AI training, but also in this area.

So to me, it's a two-step thing. Number one, get some basic familiarity with AI. If you know nothing about it, start by learning a little bit, and then form an initial policy, and then set the expectation that it's going to change. And then, of course, learn more and iterate, and make the policy adapt to your needs.

Mallory Mejias: This flows in really well to our next obstacle, which was exactly that, Amith: "we don't want to do AI training until we have policies and guidelines in place." Now, you touched on that a bit just now, but I'm wondering, can you share an actual, maybe a general framework? You talked about not stifling your team.

You've mentioned this idea of a sandbox before. Could you give us kind of a high level overview of what AI guidelines might look like?

Amith Nagarajan: First of all, the most common thing people are concerned with is the potential loss of control over their intellectual property, whether it be structured data from an AMS or something like that, or their content. And that's an appropriate concern, because if you have your people just throwing your sensitive data into, particularly, free tools from any random vendor that they happen to come across, you're just basically giving away, like, all your stuff.

Who knows who has access to that? So some organizations will pay extreme attention to the terms of service with OpenAI and ChatGPT for the paid version, but then turn around and have absolutely no issue attending a meeting about something super confidential when a tool like MeetGeek or some other rando note taker shows up in the meeting.

It's so-and-so's AI note taker from some unknown company, and it'll probably come on in and record everything I'm saying, listen to me talk, take my video, do whatever it wants with it. So it's really weird. It's like this separation that people have in their minds. So I think because OpenAI has been this lightning rod for concern, which is appropriate because they're the leader in the space, people are only focusing on that use case, whereas they should probably think about it a little bit more broadly. It's not really any different than working with any other type of software company: you have to make sure you're dealing with a vendor you can trust. That's the first thing. You have to make sure the terms of service are reasonable, and that they very clearly say the vendor cannot use your data for purposes other than to serve you. And then there's this idea that the AI models somehow are always going to use your data to train the next version of the model.

That's the concern people typically have. That has to do with contract terms and trust. There's nothing inherent about AI models that makes it so that future AI models will train on your data. That only happens if a company has the right to do so and tells you they're going to do that.

If you're a free user of a product, just remember: you're not a customer, you are the product. Just like with Facebook or something like that, if you're a free user of ChatGPT, the terms of service are different than if you pay 20 bucks a month. So pay attention to that. That is important. But the flip side of it is that I wouldn't lock down all the tools and say you can't have access to any other tools.

I'd probably pick one tool in each category. So one tool for conversational AI, like ChatGPT or Claude or Gemini; I'd pick a tool for image generation, either the built-in tools in those environments or Midjourney or something like that; and maybe a couple of other categories, maybe a note taker.

And I'd have a few kind of approved tools that we know are mature enough, where the organization can say, hey, listen, we know these are companies that we can trust. We have commercial agreements with them, or we've reviewed the terms of service and our stuff's not going to get ripped off. Use those, and then say, hey, there's a process.

If you're interested in using something else, just let us know, and we'll do a quick review, make sure it's, you know, not like a North Korean state actor that's sponsoring a free meeting note taker in order to steal your data, something like that. So you gotta think a little bit like that, but open up the door for people to come to you with new ideas is my point, so that, you know, as new tools arrive you can respond. For example, we've talked about Suno.ai on this podcast, which is a text-to-music AI model, which is really cool. Do you want to allow people to use that? That may not be a privacy issue in terms of your content; you're probably not going to feed Suno your sensitive data. But do you want music to be generated under your brand?

That's a different issue, right? So I think there's a lot to unpack there. There's no quick answer that I can really give other than to kind of have a flexible mindset around it.

Mallory Mejias: And I think that goes back to your point, too, about the fluidity of the whole policy in the first place. Suno is something that popped onto the market at this point maybe, I don't know, six-plus months ago, but if that weren't a part of your AI policy and it needed to be, right, you want to make sure that you have something flexible that you can add to, and that your staff is aware of that as well.

Amith Nagarajan: There's all sorts of fluidity even within the tools themselves; fluidity is a good way to put it. You know, yesterday, actually, somebody on our team asked me, hey, what do you think about this feature? And we're paying for the ChatGPT Teams thing. That's our primary tool that we pay for across Blue Cypress.

And, you know, a lot of people use Anthropic and other things, but that's the one we have the terms of service reviewed for. And we have a paid account where we've opted out of all the things that we don't want and so forth, and we control it at the company level, which is good. But they added a feature recently to ChatGPT where you can have it authenticate automatically with your cloud storage provider.

So if you're a Google Docs user, you can have it authenticate with Google Docs to automatically be able to look at your documents. Same thing with Microsoft. And on the one hand, that's just an easier way to share a file, rather than downloading it from the cloud provider and uploading it to ChatGPT.

If you want to share the file, it makes me a little nervous personally, because you're essentially authorizing this SaaS company to have access to your private file repository, right? Giving them pretty much full access to it is the way that integration works. And that makes me nervous personally.

So I suggested to this person that it's just better to download the specific file and upload that specific file. In the case of OpenAI, I think they're a very interesting company on a lot of levels. They've done some phenomenal AI work. But I also don't really trust them, partly because of the changes that are going on leadership-wise.

There's a mass exodus of the founders. There's the ousting and then return of Sam Altman. There's all this opacity with respect to their approach to alignment and safety, and on and on and on, right? But yet they're a leading model provider, and they have a very good product that's cost-effective and so forth.

Would I be more comfortable allowing Anthropic to have access to my Microsoft files? Maybe, but I don't really know that much about them either. What I think is going to happen is a lot of that stuff is eventually going to end up being boiled into your main platform. So you already have Microsoft Copilot, Google's assistant as well.

Those things frankly suck compared to the full-blown ChatGPT or Anthropic. They're always a little bit behind because they have to serve such a wider audience. So like in Microsoft Copilot, if you go into Word, on the one hand it has access to everything in your Word document and theoretically your other documents, but it really is super limited right now.

That's going to change over time, and you probably will just live in that environment is my guess, and then you don't have to worry about it as much.

Mallory Mejias: I think what could be a bit worrisome about the example you just provided is the fact that you, Amith, or our CEO Johanna, may not have known about that feature of being able to give OpenAI access to all of your files until an employee brought that to you. So I can imagine some listeners are probably concerned, if they're leading an organization, about how they can even predict what they need to protect themselves from without that kind of intel.

Amith Nagarajan: Yep. Yeah, totally.

Mallory Mejias: Our next obstacle, Amith, you will think this one is interesting, and this is a direct quote. Some of our employees are really excited about AI and leading the charge, and some, quote, believe the earth is flat. End quote. How would you recommend leading teams with mixed feelings across the board about this new technology?

Amith Nagarajan: I'm all about providing people as much information and insight as possible and hearing feedback. At the same time, I'm a big fan of this idea called making a decision, and then pushing the organization forward on it. So say you as the leader, or your leadership team, have decided this is an important, big thing.

Of course, you should listen to people's concerns, and you should educate them to the best of your ability. But after a certain point, you say, this is what we're doing and we're going forward with it, and that's it: get on the bus if this is where you want to work. And there needs to be more of that mindset.

In my opinion, in the broader association sector, there's way too much of a kind of consensus-oriented mindset, where everyone has to be on board, or, if not everyone's on board, those individuals can opt out of doing certain things. A lot of people are not making these things mandatory. But to me that just points to an issue with the culture more than anything else.

And again, I understand there's lots of nuance to this, and I'm approaching all problems with a sledgehammer when I talk about it, but that's intentional, because the world doesn't care about your nuances, the world doesn't care about your problems; it only cares about your abilities and the value you produce.

So if there are competing forces out there, other associations possibly, or perhaps commercial organizations that are in your space or can essentially displace the value of the services you provide (ChatGPT itself can do a lot of the things people come to associations for), you've got a big problem.

So I don't think it's time right now to get to consensus. I think now is the time to take bold action and to demand that your team goes with you. And maybe some people opt out. That's okay. That's not the right place for them to be. Maybe it's been the right place for them, but it's not anymore. So there's some of that thinking that I think needs to be more prevalent in this space.

Mallory Mejias: And then I'm sure education is also a piece of that as well. Ideally, you educate your team, you show them what's possible. Maybe some of the flat earthers hop on board, I don't know. Maybe they don't.

Amith Nagarajan: Some people are going to be pessimists, but like ultimately will come on board. That's probably okay. Cause there's people who are just slower adopters or late adopters. As long as they know they have to do it, but there's some people who are just flat out obstructionists and those people have to go.

So it's not just that they're slow to adopt it; they are actively trying to undermine it, because they disagree with it, or they think there's something fundamentally mismatched between their system of values and beliefs and what your organization's system of beliefs is on a go-forward basis.

And again, these might be wonderful people, like wonderful human beings that have served your organization well, but you know, where you as the leader are taking the organization may diverge from where these people want to go. And the bad news is that might mean some of those people are not part of your future.

But the good news is there's other people out there who probably would love to be part of your future, especially if you have a bold vision for how you're going to transform your sector or your profession, leveraging AI to advance your mission. That's exciting. And so the people who will get on board with that may not all exist within your organization's walls right now, and that's also okay.

Many associations are so focused on keeping everyone happy, which is basically both impossible and, largely, just something I wouldn't want to try to focus on in any organization. There's no way to do that even in a 10-person, 10-staff association. You certainly can't do that in a larger one.

Mallory Mejias: Even in a family, right? There's no way to keep a whole group of people happy. But I think the distinction you made, Amith, is really important. This is not necessarily people who are cautious, and who want to move more thoughtfully through this. This is people who are obstinate and who don't want to embrace it.

I'm curious, did you see this same thing with the internet? Were there people that were like, absolutely not, never, won't do it?

Amith Nagarajan: Yeah, for sure. I talked to tons of people in the nineties and the two thousands that were like, yeah, we don't need a website, or, websites and the internet in general aren't going to change our association that much. Same thing with mobile, same thing with social media, same thing with whatever the technology shift is.

The difference is, all of those have been passive technologies, right? If you think about it, it's about distribution. It's about lowering the cost of compute. It's about the fact that you can do computing anywhere with mobile devices, connecting with other people more easily. All of these things have been passive technologies.

They've required a user to basically activate them and tell them what to do. Whereas AI is an active technology. Right now, it's somewhere in between, where you still have to tell it what to do, but these systems are becoming not only more capable, but semi-autonomous, and in some cases possibly fully autonomous or mostly autonomous.

This is a different thing. It's a completely different animal. It's an intelligence form, right? Whatever you want to call it: augmented intelligence, artificial intelligence, Apple Intelligence. You can call it whatever you want. It is the same thing. It's a shift in capability, because it's an active form of technology, and we've never experienced that.

So that's, of course, understandable. All of us, including everyone here at Blue Cypress and Sidecar who are playing with this stuff all the time, we're all confused. None of us know exactly what we should do next, because none of us have encountered this type of technology before, but that's precisely why you have to go figure it out.

Mallory Mejias: And keep this in mind. All those obstructionists, as you called them, Amith, who were against the internet and against social and against mobile, I'm fairly sure at this moment are using the internet and have mobile phones and probably have social media accounts. So think about that, I would say, in the greater landscape for the future.

Amith Nagarajan: Yeah. And in those examples, they had many years to adapt. And so a lot of people who were early proponents of, let's say, the iPhone personally, who enjoyed using the iPhone in their lives, or social media, or who were just big users of the web personally, might've said, for my group, that isn't necessary.

It's not applicable. There's always a reason why it's not applicable to the association, because the mindset has been that it's a protected space: oh, we are the association for X, and therefore we are somehow insulated from the need to drive this kind of innovation. But that world is rapidly coming to an end.

Mallory Mejias: Our next point you also touched on a bit earlier, but I think it warrants a deeper discussion as well. Our fifth obstacle is "we've been experimenting with the free version of ChatGPT or various other large language model tools, but we aren't sure we want to go the paid route." You already said that you think people should go the paid route.

Why is that?

Amith Nagarajan: I think there's a couple of reasons why. So first, I mentioned earlier that if you're using something for free, how is the company making money? It's not making money from you. So maybe it's a freemium model, and the idea is that, as you go further into the product, you'll want to upgrade, and there's a small slice of the users that are paying.

And therefore that's how they make their money. For example, I was reading, or not reading, I was watching a video on the wall street journal site this morning. About duo lingo and this is the language learning app that's become, the most popular language learning app in the world. They heavily use AI. Very interesting short video 7, 8 minutes.

I'd encourage anyone to watch it. They do about 500 million dollars a year in revenue, but only 8 percent of their users pay, so 92 percent of the users are free users. And so that's called the freemium model, where you have a massive number of people on your platform, and then the paying users actually support it financially.

They do have some ad revenue as well, but you have to remember, as a Duolingo user, part of what you're providing, both paid and free, is your data. You're providing your interactions, your data, and you're opting into the terms of service that allow the business to use your data to do whatever it is they're going to do to optimize their business.

There's nothing wrong with that. It's just, you have to remember that with paid versus free, if there's a paid option, there are probably some benefits there that are worth thinking about. One of which, in the case of AI, is the terms of service: the terms of service for paid accounts often do say that they will not use your data for future model training, whereas the free versions of most models do not say that.

And that's a really important distinction. So if you're using the free ChatGPT, as of right now, if you were to take, let's say, confidential documents and drop them into the free ChatGPT, and you haven't opted out... you can opt out of it, by the way, even as a free user; OpenAI got a lot of heat about this.

So there's a setting in the free version where you can opt out of model training, but it's like everything else: the vast majority of people don't even know that and don't do that. So if I were to not do that, and just use free ChatGPT and drop in, let's say, all of your sensitive data from a system.

It now is something that OpenAI can use to train the next version of ChatGPT. And I think that's a pretty bad issue, so I would avoid that. I do think free tools are great things to experiment with, but just understand that it's almost like you're opening yourself up to working in the public eye when you do that.

Mallory Mejias: I would say most of us across Blue Cypress, maybe not most, but the people who were experimenting with AI, were in the past using the individual paid account of ChatGPT, which I think was 20 bucks a month, and I think there are some protections there. And then Blue Cypress as a whole opted to switch to the Teams account.

I don't know if you know the specifics on that, Amith, but why did we make the shift from those individual paid accounts to the paid Teams plan?

Amith Nagarajan: This is a classical SaaS play; it's called product-led growth. You get adoption with people within an organization, then you go to the company and say, hey, listen, you have 50, 100, 200 people using our tools. If you go for the Teams version, which is more expensive, of course, we will provide you centralized billing.

We will provide you the ability to automatically provision and deprovision users as they come and go from your organization. We'll provide you organizational data control, so that you can organizationally opt out of certain things, right? Or control your data. You can have retention of information, like the chat history. We're not subject to some of the regulatory compliance issues that a public company under Sarbanes-Oxley or other similar regulation may be subject to, but it's important for document retention to be able to potentially retain some of that information for a longer period of time, whether the employee stays or goes, right?

So there's a lot of issues that are not features an individual user would really care about, but the company would deeply care about. Or SSO, right, where I don't have a separate login; I can log in using my Blue Cypress email, stuff like that. That ability to scale up to a team or enterprise-type product is a major value-add at the org level and does provide some security.

You're obviously not going to do that for every tool that you have, but I think it's an important thing to consider for your workhorse AI tools. So that's where I think a little bit of diligence will pay off quite nicely.

Mallory Mejias: So, going back to the AI guidelines piece, if an association decided ChatGPT, which, to be totally transparent, is the tool I heard brought up the most at our booth this past weekend, was going to be their preferred tool to create copy, analyze files, and whatnot, would you recommend that they go ahead and do the Teams subscription?

Amith Nagarajan: I think so. I think that if ChatGPT is the right product for you, there are some advantages to it. Again, I think it's 50 percent more per user or something like that, but the marginal cost is not high enough to be a reason to avoid it, in my opinion, because you get that additional level of company-level controls. You don't have to rely on each user remembering to opt out of the data issues I was describing. Plus there's some nice Teams things, where you can privately share chats and collaborate in different ways. So I think there's enough value to justify the cost. I think there are pretty smart people working over there, and they know how to do SaaS pricing, so they figured that out.

It's a playbook a lot of people have used. Probably the initial, most famous example was Slack, where Slack had lots of adoption at the individual user level, and then they went into the companies and sold these types of enterprise accounts. And we had a great episode recently with Bill Macaitis, who was the former CMO and an early, early-stage employee at Slack, that I encourage our listeners to go back and refer to; it was a fun conversation.

And Bill probably has the nicest podcasting studio I've seen, at least remote.

Mallory Mejias: It was so nice.

Amith Nagarajan: But in any event, yeah, that strategy is, I think, a very clear-cut way to approach enterprise value creation, and it doesn't affect the end user, which is great, because then users are like, yeah, the tool's the same to me.

I love it. It's great.

Mallory Mejias: Our last obstacle, Amith, was one that was a bit more difficult even for me to respond to, so I'm interested to get your take, and this is "our CEO told everyone to go out and try some AI tools, but we don't know where to go from here. What are our next steps?" I'm wondering how you recommend making the leap from just dabbling, testing different tools to actually integrating AI into your workflow.

Amith Nagarajan: This is where learning plays such a key role. So people who are listening to this podcast, I think, are investing in learning in one form or another. You know, we obviously have a ton of free content. We have the Ascend book, which is a free download. If you prefer print, you can buy it from Amazon for a few bucks.

You can access our monthly free intro to AI webinar, which is a great place to get a broad overview of what's going on with AI and how it's applicable to associations specifically. We have our kind of introductory-level paid offering, the 24-dollar one-time-fee prompt engineering mini course, which is a great and quick way to learn and get going.

And then of course our Sidecar AI Learning Hub, and that's just in our Sidecar ecosystem. There's tons of other resources. I think the key to it is to be educated at some level, because, you know, if your CEO says, hey, go try some AI tools, and you're like, what does that mean? And your CEO doesn't really know what that means either.

It's the same thing, like who's going to write policy or like lead if no one knows a whole lot. I can't teach you how to drive a car if I don't know how to drive a car myself, right? Or if I've only seen a car, I'm like, it looks like you might drive it this way. So just start by learning.

And the learning that is most effective, pretty much across all types of learning, is learning by doing, learning by experimentation. So that's what I'd encourage people to do as the next step, through whatever tools and whatever content they find most helpful.

Mallory Mejias: You just mentioned product-led growth, so in my mind, I'm thinking of a way to phrase this, but that is the key point. I guess I was looking for more of a framework or a step-by-step: first you try this tool, and then you take these actions, and then you'll have AI integrated into your workflow. But I think it's... value-led usage is the phrase I just came up with in my mind. As you learn about these tools, and as you start using them more and more frequently and getting more out of them, I think that is actually the path to integrating them into your workflow, as opposed to having to be so conscious about it. I think the more value you get out of it, the more you will use it every day.

Amith Nagarajan: Completely agree. It's a self-reinforcing cycle. First of all, if you've never used any AI tool at all, go to ChatGPT and try it out, eight to 20 bucks a month, just as an individual, so your data is protected. Opt out of the thing I just mentioned, where in the settings you can turn off the option to share your data for future model training; obviously opt out of that, and go play with it.

The problem is, it's just a text box. There are some prompts in there, some things you can choose that are kind of, give me a recipe for a good cocktail or whatever, but none of that's really interesting. For the most part, you've got to have ideas and examples. And that's where some structured learning is somewhat helpful, because, you know, if you attend a one-hour webinar or take a little course, you can get some ideas, and context is everything.

So if I give you examples of what might help someone write a blog, that's cool for your marketing team or people who write blogs, but if I don't write blogs at all, it's hard for me to translate that into what I do. So if I'm an attorney, how does it help me with contracts or negotiation? If I'm a financial person, how does it help me with my work?

And so that's the idea of contextualizing and giving people clear examples, because I think if they get going, they'll have a much easier time being creative, because creativity is the ingredient you need right now with these tools. I was having dinner with an association CEO recently, a person who's, like, super advanced in AI use, and it was really a great conversation.

And one of the things that came up in the discussion is I said, you know, have you experimented with using the AI as a counterpoint, meaning like someone to debate against? So rather than the AI agreeing with you all the time, which is what these things do by default, completing your words for you and completing your sentences for you, essentially.

You tell the AI at the beginning of conversations, your job is to be my thought partner. And specifically your job is to take the counterpoint on everything I say. So I'm going to tell you about something I'm planning on doing. I'm planning my annual conference. I'm going to do these things. What do you think?

Normally AI is going to say that's awesome, Mallory. That's the best idea I've ever heard. It's amazing. And you know, go do that. And maybe it'll have a couple of minor points depending on the model you're using, but if you prompt the model to say, your job is to be my thought partner and to take the counterpoint on every one of these ideas.

It gives you all these other ideas, right? And it gives you all this feedback that you may not have considered. I use AI that way all the time. I find that most people haven't tried that; even the ones that have been pretty deep into it are kind of continuing their existing workflow as opposed to using it this way. And remember, these things have, like, read the entire internet, right?

I haven't done that. So they've read the entire internet. So there's lots of interesting ideas that potentially could be opposing to whatever it is that I'm thinking about.
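If you want to try the "thought partner" prompt Amith describes, here is a minimal sketch of what it could look like using the OpenAI Python SDK. The model name, prompt wording, and conference example are illustrative assumptions, not anything prescribed in the episode.

```python
# A minimal sketch of the "take the counterpoint" prompt, assuming the
# openai Python package (v1.x) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

counterpoint_prompt = (
    "You are my thought partner. For every idea I share, take the counterpoint: "
    "point out risks, weak assumptions, and alternatives I have not considered. "
    "Do not simply agree with me or flatter the idea."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name; use whatever your plan includes
    messages=[
        {"role": "system", "content": counterpoint_prompt},
        {
            "role": "user",
            "content": (
                "I'm planning our annual conference and thinking about cutting "
                "the number of breakout sessions in half to save budget. "
                "What do you think?"
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

The same pattern covers the board-persona idea Mallory mentions next: swap the system prompt for something like "You are a board member whose expertise is IT and who has a strong personality; critique this idea."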

Mallory Mejias: I have never used it that way. I think in my mind, I'm often trying to do things as quickly as possible, and I might think, oh, that'll take me a little extra time. But that makes me think of something I heard from someone in our AI Learning Hub for teams, or for their whole association, which was that they had different people on their board with different personas, different levels of expertise in certain areas, and they actually used the AI to be that persona, and said, okay, you have a strong personality, IT is your expertise, I'm going to present this idea to you, and you critique it.

And I thought that was exactly what you're saying, Amith, but so profound and such an interesting use of a large language model.

Amith Nagarajan: You can imagine a world, and this is not really requiring imagination these days, where there are AI avatars for each of us that represent kind of our collective thought process. And, you know, the information in your Microsoft or Google account pretty much tells you a lot about what I think and how I respond to things, right? There's a lot of training data there on a per-person basis. And you say, hey, we want to have a meeting between Mallory, Amith, Johanna, and these other people, and you can have these avatars talk about a topic. And, you know, there are tools that already do this.

And so people are going to start sending their avatars to meetings, and they'll actually have reasonably good conversations. And then, what if it's all AIs talking about an issue, right? And so one of the things we've talked about a lot on this pod, and it's in the book, is this idea of multi-agent systems. A multi-agent system is nothing but that: it's basically multiple AIs, configured with specific prompts or built on different AI models, that have conversations with different roles or different viewpoints.

That's what you're hearing about when you hear about Microsoft's AutoGen project, which we've talked about, or LangChain, LangGraph, CrewAI, or the stuff we do in MemberJunction; all of it is a multi-agentic type of approach. It's the same idea. It sounds super fancy and complicated, but it's basically a conversation between multiple AI systems, and sometimes with humans.

And it's exactly that where, you know, we know what the training data is and they are pretty good at predicting the next token, right? So they're pretty good at predicting what I'm going to say.
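The multi-agent idea Amith describes can be illustrated without any particular framework. The sketch below is a toy example under stated assumptions, two "agents" that are just different system prompts taking turns through the OpenAI Python SDK; it is not how AutoGen, LangGraph, CrewAI, or MemberJunction actually implement it, and the personas, topic, and model name are made up for illustration.

```python
# A toy sketch of the multi-agent idea: two personas (just different system
# prompts) take turns discussing a topic. Assumes the openai Python package
# (v1.x) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

AGENTS = {
    "Optimist": "You are an association's marketing director. Argue for the idea and build on it.",
    "Skeptic": "You are the association's CFO. Challenge the idea on cost, risk, and feasibility.",
}

TOPIC = "Launching an AI-generated weekly newsletter for our members."


def agent_reply(persona: str, transcript: list[str]) -> str:
    """Ask one persona for its next contribution, given the discussion so far."""
    history = "\n".join(transcript) if transcript else "(no discussion yet)"
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": AGENTS[persona]},
            {
                "role": "user",
                "content": f"Topic: {TOPIC}\nDiscussion so far:\n{history}\n\nGive your next contribution.",
            },
        ],
    )
    return response.choices[0].message.content


transcript: list[str] = []
for turn in range(4):  # two turns each
    persona = "Optimist" if turn % 2 == 0 else "Skeptic"
    reply = agent_reply(persona, transcript)
    transcript.append(f"{persona}: {reply}")
    print(f"\n--- {persona} ---\n{reply}")
```

A real framework layers orchestration, tool use, and memory on top of this loop, but the core pattern, multiple prompted roles exchanging messages, is the same.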

Mallory Mejias: Okay. Well, Amith, thank you so much for this great chat today. To everyone who has joined us before, thank you for joining us again. And to all of our new listeners, welcome. Whether you're joining us on YouTube or listening on your favorite podcasting app, we're so happy to have you here at the Sidecar Sync, and we will see you next week.