
In this episode, Amith and Mallory interview Greg Kihlström, exploring AI's impact on marketing. Delve into strategies to overcome technology-related fears, understand effective AI applications beyond mere content generation, and examine the ethical considerations in AI use. This episode offers a compelling discussion on responsibly embracing AI innovations in the association and nonprofit sectors.

Let us know what you think about the podcast. Drop your questions or comments in the Sidecar community: https://community.sidecarglobal.com/c/sidecar-sync/
Join the AI Learning Hub for Associations: https://sidecarglobal.com/bootcamp
Download Ascend: Unlocking the Power of AI for Associations: https://sidecarglobal.com/AI
Join the CEO AI Mastermind Group: https://sidecarglobal.com/association-ceo-mastermind-2024/

Thanks to this episode’s sponsors!

AI Learning Hub for Associations: https://sidecarglobal.com/bootcamp

Tools/Experiments mentioned:

Canva: https://www.canva.com
Llama: https://ai.meta.com/llama/
Writer: https://writer.com/
Jasper: https://www.jasper.ai/
Photoshop Generative AI: https://www.adobe.com/products/photoshop/ai
Get Munch: https://www.getmunch.com/
Swell AI: https://www.swellai.com/

Social:

Follow Sidecar on LinkedIn: https://www.linkedin.com/company/sidecar-global
Amith Nagarajan: https://www.linkedin.com/in/amithnagarajan/
Mallory Mejias: https://www.linkedin.com/in/mallorymejias/
Greg Kihlström: https://www.gregkihlstrom.com/
Greg Kihlström LinkedIn: https://www.linkedin.com/in/gregkihlstrom/

This transcript was generated by artificial intelligence. It may contain errors or inaccuracies.

Greg Kihlström: [00:00:00] I think stopping the experimentation altogether is not the right approach either. I mean, I saw some statistic where some large percentage of people are using like chat GPT and other things without telling their bosses.

And that's driven by a culture of fear as well.

Amith Nagarajan: Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights, and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future.

No fluff, just facts and informed discussions. I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host.

Mallory Mejias: Hello everyone, and welcome back to another episode of the Sidecar Sync. My name is Mallory Mejias and I'm the manager over here at Sidecar. And today we've got an insightful conversation lined up for you with Greg [00:01:00] Kihlstrom. But before we dive into that interview, I want to say thank you to today's sponsor.

Today's sponsor is the Sidecar AI Learning Hub. If you are looking to dive deeper on your AI education in 2024 and beyond, I encourage you to check out Sidecar's AI Learning Hub. With the bootcamp, you'll get access to flexible on demand lessons, and not only that, lessons that we regularly update so you can be sure that you are keeping up with the latest in artificial intelligence.

You'll also get access to weekly live office hours with our AI experts, and you get access to a community of fellow AI enthusiasts in the association and greater non profit space. You can get the bootcamp for $399 a year on an annual subscription. And you can also get access for your whole team for one flat rate.

If you want more information on Sidecar's AI Learning Hub, go to sidecarglobal.com/bootcamp.

As I mentioned earlier, we've got a great conversation and interview lined up for you today with Greg Kihlström. [00:02:00] Greg is a bestselling author, speaker, and entrepreneur, and serves as an advisor and consultant to top companies on marketing technology, marketing operations, customer experience, and digital transformation initiatives.

Today, Amith and I are interviewing Greg on all things marketing and artificial intelligence. In today's interview, we're going to cover some early wins that Greg has seen with his clients implementing AI. We're going to talk about handling that fear of change and transformative technologies in your organization.

We're also going to cover Greg's top two marketing AI use cases, and spoiler alert: they're not as simple as just having ChatGPT write your LinkedIn post for you. We're going to talk about some of Greg's and our top marketing AI tools. And finally, we're going to wrap up this conversation with a discussion around bias in AI tools and how you can mitigate that.

It's a great conversation today that we have lined up. Thank you all for tuning in and without further ado, here's that interview. Greg, [00:03:00] thank you so much for joining us today. I was hoping you could share with listeners a little bit about your background with AI and marketing in particular, and what brings you here.

Greg Kihlström: Yeah, absolutely. And first, thanks so much for having me on today. Um, so yeah, just a little background on myself. I come from kind of a mix of things. I really started my career as a web designer, back in the day when there were webmasters and things like that. So probably dating myself here. But, um, I really kind of fell in love with this intersection of the creative realms.

You know, more from the design aspect and UX aspect, married with marketing and married with technology. And, you know, it's sort of at that intersection that I've really spent my career. Um, after an initial job at a startup, uh, in the DC area where I'm based, I started a digital marketing agency and ran that, uh, for about 14 years, sold it about six years ago.

But in that time, we worked for a number of for-profit [00:04:00] companies as well, but we also got the chance to work with some nonprofits, some associations, um, large and small. So, you know, we really ran the gamut from for-profit to nonprofit. And so I have some pretty broad experience there. We also got the opportunity to do some things early on in personalization.

You know, big data was all the buzz about a decade or so ago, but, you know, we got to play with some of those things, which really became precursors to AI and some of the things that we're now talking about. And so, much more recently, since selling the agency, now I'm working primarily with, uh, Fortune 1000 companies, uh, looking at everything from marketing strategy to operations, and within the last year or so, AI has been in every conversation, even if it wasn't the sole focus.

So, um, you know, definitely excited to talk about this, this topic today.

Amith Nagarajan: Greg, that's awesome. Thanks for sharing that background with our listeners. And I was excited to [00:05:00] get acquainted with you and have this conversation so that we could really connect the dots between marketing and AI and the association sector, and knowing that you have some experience serving the market is awesome. It's going to be really helpful for our listeners to have that context as you describe different opportunities. So tell us a little bit about, you know, the last 12 months, which have been crazy for everyone in AI. What are your thoughts in terms of just generally where you've seen people have some early successes in 2023, when you advise clients, with respect to AI specifically in marketing, or it could be more general?

That's an area that many people are really interested in: like, where do I get started? Where is some low-hanging fruit, potentially?

Greg Kihlström: Yeah, definitely. And yeah, you would think that AI was invented last year or something, for all the buzz around it. And obviously, you know, it's been around for decades, but I think the generative component of it is really what, you know, caught everyone's interest, with ChatGPT certainly leading that and then translating into a lot of [00:06:00] things. You know, I think one of the most interesting things, and I think, um, one of the things that separates AI, and particularly generative AI, from some of the other things we've been hearing about hype-wise, just to pick on NFTs and the metaverse and some other things like that, is that AI is almost immediately usable, in some ways. You know, not to everybody in every case, but there are immediate practical applications of using it.

And it's also, I would say, very democratic, um, in the sense that a large organization can use it and a small organization can use it, in different ways, mind you. You know, I tend to work with the larger organizations recently, but I've seen it applied at a, you know, 3, 4, or 5 person organization, um, very differently than maybe a 10,000 person organization, but [00:07:00] AI can be applied at those different levels.

So it is able to level the playing field and, um, you know, come to almost immediate usage. So, you know, really what I've seen over the last year is a lot of experimentation around it, playing with Bard and ChatGPT and Claude and, you know, all these various platforms. I think this year, it's really what I've termed the great reconciliation: we've been playing around for 12 months.

Now, let's get real and standardize some processes and make sure we're taking into account all the ethical and legal and all that stuff.

Amith Nagarajan: Sure. Well, there's a lot to unpack there. And we've had similar experiences within the association market. I think that this is a sector, as you know from prior experience, that doesn't necessarily jump on the bandwagon with new technology the fastest compared to other industries, perhaps. Uh, although what I've been pleased with about AI is that there has been a strong amount of interest around generative AI in the last 12 months, particularly the last six months. It's really started to [00:08:00] become a big area of interest for the C-suite, not just for, you know, the technology officers, but for everyone, including the CEOs of these organizations.

Part of what we spend a lot of time talking about on this show and in our content is this idea that these opportunities not only allow you to make your business more efficient, but also open up new possibilities that, you know, weren't things that you could do even a year ago. Uh, have you seen any transformations of businesses that you've worked with, of either type, where there's dramatic efficiency gains, or perhaps there's new businesses or new products or services that became possible because of generative AI? Uh, we'd love to hear those experiences.

Greg Kihlström: Yeah, I mean, on the operational side of things, I mean, definitely, you know, no matter what size organization you're at, you're generally being asked to do more with less, right? So again, it doesn't matter if you're a two person nonprofit or a, you know, 200,000 person, you know, for-profit. [00:09:00] Everyone, you know, belts are tightening and, you know, budgets are getting cut.

People are getting laid off. So to me, it's always about being able to do more, and not just more, but, um, more relevant work for less effort put in. So, you know, I think that's what I've really seen in the last 12 months is all of a sudden we're getting more quickly through, you know, ideating a campaign to getting the content for that campaign out.

And also, by the way, we hit a button and we've got content for email and website and social and all these things. And so, you know, it's still humans driving it. It's still humans editing and making sure it's on brand and on message, and that there's no weird stuff, no hallucinations, thrown in there. But, um.

It's allowed so many teams to do so much more, so much more quickly. And, you know, it's always [00:10:00] about 75, 80 percent there, you know, straight out of AI maybe, but that 20 percent lift is nothing compared to having to do it from scratch and kind of facing that blank screen.

Amith Nagarajan: Greg, I think you make a really important distinction between a first draft, or even a second draft, and a final product. And I think where a lot of people get hung up is that AI is not perfect. AI does make mistakes. In fact, it can do these things a lot of people refer to as hallucinations, or making things up entirely, right?

And at the same time, um, part of what we explain to folks is that, well, if you hire someone right out of college and you ask them to write an article for your website or build a campaign, you generally would review their work first and probably expect to edit it. And so we tend to position AI as perhaps an earlier career contributor.

But I would actually argue that even something done by someone very experienced, you know, stands to benefit from additional review from collaborators. Have you had a similar perspective to that? Have you had any challenges with getting people to [00:11:00] think of it as that first draft type of mindset?

Greg Kihlström: Yeah. I mean, I think it often starts with, it kind of goes back to the model of change, right? The first reaction to change is always some kind of, like, fear or denial or something like that. And so I think, you know, once you get past that, okay, AI is not taking all our jobs and we're quite a few years away from the Terminator Skynet situation.

So, you know, once you kind of get past that hurdle, um, then yeah, I think looking at it as an augmentation, like it helps us past certain hurdles that just take humans a while to get through. And so if you think of it as we're going to use AI at three points in the process, but the humans are kind of the gatekeepers and the checkpoints, and the humans are also the ones driving the request.

You know, it's not a great idea to ask AI, what should I market [00:12:00] today? You know, but giving the right prompt to, uh, you know, AI can yield some really good results, and, you know, humans can make them excellent results. So I think it's just kind of wrapping your head around that. I mean, hey, you know, I've written over 20 books.

I need an editor on every single, like, chapter that I have ever written. So, you know, it's like, for us to think that AI would be any different than, than that, would not be smart.

Amith Nagarajan: I think that's absolutely the right way to look at it. The current generation of AI is pretty amazing. At the same time, it makes mistakes that all of us make at times, and perhaps other mistakes. And so I think it's a really good point. Um, I want to dig into use cases in marketing in just a moment, but I want to ask you a question about fear.

You mentioned that earlier, the fear of change as part of the theory of change or how to drive change. I think that's something that stops people in the association and non profit market quite frequently in their tracks when new technology comes along. [00:13:00] In your experience, particularly in the last 12 months, but it could be broader than that, what are some of the best ways to help people overcome those fears?

Greg Kihlström: Yeah, I think, you know, first is just education. And, you know, it's easy to be afraid of things that we don't understand and don't know, right? And that's just, that's well beyond technology. That's just life, probably. But, um, so, you know, I think first is to become educated, and then second, let's be practical, because depending on the organization, there may be real ethical issues that we need to solve for. You know, if you work in a healthcare related organization, for instance, there's a lot of potential for personal health information to be misused or, um, you know, fed into the wrong algorithm, and so on and so forth. So, you know, it's not that there aren't issues.

It's, let's be practical about, um, what those issues really are, and let's have a solve for that. You know, if the solve is nothing goes out without human [00:14:00] approval, that's pretty straightforward. And again, back to your earlier point, it's like we would do that anyway. If an intern wrote copy for a financial website, like, someone's got to review that for compliance, right?

So we would do the same with AI as well. So I think, you know, first educate, then, you know, understand what the real issues are. And then experiment in a way that is very low risk. You know, maybe do an internal doc, if there's an internal newsletter or email or, you know, something where the eyes that are going to see it are going to be, um, you know, a little more forgiving if something is a little weird, and, like, experiment in a low-risk way.

And then when you're ready, when you're past that hurdle, then feel free to roll it out. Um, but I think stopping the experimentation altogether is not the right approach either. I mean, I saw some statistic where some large percentage of people are using, like, ChatGPT and other things without telling their bosses.[00:15:00]

And that's driven by a culture of fear as well. And then, then the organization is at risk without even knowing it. So you, you don't want that scenario either.

Amith Nagarajan: I think that's a really good point that, you know, some people, there's different personality types and different mindsets in an organization, even a small, a smaller association that might have 20 or 30 employees would still find that certainly the larger ones that have hundreds of people and then there's also volunteers in the mix because associations are led not only by paid employees, but also their volunteer contributors and each of these people has their own perspective on AI and some of them are going to be very much gung ho about taking advantage of it.

Whether it's because it saves them time in their job or perhaps it's because they think it will raise the bar in terms of the quality of work, whatever their motivation is and motivation like that is very strong. So I found that organizations that try to blockade or bar progress because they haven't yet put their hands around it, um, and really thought through what it means will ultimately have what you're describing, Greg, which is people will use it anyway.

And then it's done in kind of an ungated way [00:16:00] without guardrails of any kind.

Greg Kihlström: Yeah. Yeah. Better to at least, you know, sanction it for some uses and just kind of see what happens. Cause I mean, you know, the people that are excited about it and using it, they are the best possible people to, again, experiment with it. Again, don't just put stuff straight out there to the public that you're not comfortable with, but you've got, undoubtedly, even in a small group, you've probably got a few people that are pretty excited about this stuff and can really be, they can be in the lab doing some experiments, while everybody else is kind of thinking, okay, well, then how might this actually work, and, like, what are the review processes we would need to do to make sure it's okay?

Does it differ from our current processes? Cause maybe it doesn't. It's just that people are hung up on this idea that AI can be, you know, scary and stuff like that.

Amith Nagarajan: That makes a lot of sense. You know, I'll paraphrase a little bit of what you said, and I'll kind of put it in some of my own words, because it's [00:17:00] somewhat similar to what we talk about a lot related to a book that we published last year called Ascend, which is an AI book specifically for this sector.

And in that we propose a mindset, and really a framework, called Learn, Experiment, Build, which is really similar to what you're describing, where you first, you know, set out to be aware, set out to gain knowledge, um, and then do incremental experimentation, increasing in scope over time, and then deploy things.

Uh, once you feel comfortable that they're not only safe and production worthy, but, uh, you know, they're the right value add, because you're going to have to experiment with a lot of things. Um, it sounds like you've had a pretty similar experience to what we've encountered, from what you just described.

Greg Kihlström: Yeah, definitely. And you know, I'm a big proponent of just agile principles in general. And I'm not, um, so dogmatic that I think everybody has to use Scrum or SAFe or, you know, one of those, um, kind of sanctioned methodologies per se, but what you're describing is a very agile, lean process of doing things, which, you know, serves any type of organization anywhere. Like, it would be hard for me to [00:18:00] think that there's another way to do it, in other words.

Amith Nagarajan: You know, it's interesting. I run a small group of CEOs where we meet at least monthly to talk about the strategy of AI. It's really what we're focused on. And the interesting thing about it is, um, first of all, it's great because the CEOs of these types of organizations historically haven't had much interest in technology.

And of course, there's some that have had it. But, you know, generally speaking, it's not been a technology centric space. So that's great. But what's interesting about it is we're looking at it from an economics perspective. We're looking at it from the viewpoint of What happens over time in a particular industry, be it a specialty medical area or accounting or a field of law where an association represents those people or that profession, essentially, what happens to their members first, what happens to that audience and what will they need in terms of products and services to help them in their journey, wherever it goes because of AI and the changes it's going to have.

And then, of course, the internal view of how do you deliver those services well. And I think what's been interesting is [00:19:00] that, um, people are pretty humble about not knowing what any of this means. I mean, I spend all of my time thinking about this stuff, and I have absolutely no idea what the world's gonna look like in five years.

And only perhaps a little bit of an idea of what it might look like next year. So I think it's, um, that's part of it too, is that we don't have to know the answer to everything. We can, we can start off with that mindset. And it's a little bit uncomfortable when groups are used to a five year strategic plan, which is a fairly common artifact in this industry.

And a five year plan. I don't know how you do that these days, you know, especially with what's going on

Greg Kihlström: Yeah, yeah, agreed. I mean, it's a guess. I mean, to your point, a five year plan is almost a guess these days, but, um, maybe with a little more, um, certainty. But yeah, agreed. I mean, I think, you know, the best you can do is, um, have regular plans to check your progress, and, you know, goals don't need to change. You know, the five year goals may stay the same, but how you get there is going to. I mean, again, anyone that was making plans at the beginning of, what, 2020, [00:20:00] um, surely had to make some changes, um, very quickly, midway through the year. Right? So, like, but that doesn't mean that their overall goals for their organization fundamentally changed.

It's just the way they got there had to all of a sudden be about remote and digital and, you know, all that stuff, for a while. So I think, to your point, yeah, that five years of certainty, if it was ever possible, it's certainly not now.

Amith Nagarajan: That makes a lot of sense. I want to pivot a little bit in our conversation and, uh, ask you to share with our listeners a little bit about some of your favorite use cases of AI, specifically in the context of marketing. If we can zoom in there, um, and maybe just, kind of, uh, in terms of the number, it doesn't necessarily matter; maybe we start with your top one or two that you think might have the most impact.

Greg Kihlström: Yeah. I mean, I think, um, and this is something I use, I have several tools that I use. And, um, so I do a lot of this not only for [00:21:00] my own work, you know, I have my own podcast, and, in addition to consulting, I create my own content as well, wrote a few books and everything, as I mentioned.

So, um, some of this is firsthand and some of this is from working as a consultant with clients, but I think just the brainstorming and concepting process, that's changed my life personally, as far as, um, you know, give me 10 ideas. You know, I have a local version of Llama running, for instance.

So, like, I just type in my little terminal, like, give me 10 ideas for X, Y, Z. Seven of them are terrible, like, almost undoubtedly seven of them are terrible, but there's three that are okay. You know, I can build on this. I can, you know, maybe with a little tweaking, and maybe there's something in three or four of those seven that, like, is worth exploring a little bit further.

So, you know, when you think about that again, regardless of the size of the team, but let's, let's talk about a small nonprofit or [00:22:00] association, you know, Maybe you've got one person who is tasked with, okay, we need a campaign for X, Y, Z, um, come up with some ideas. They're going to spend an entire day brainstorming a few ideas.

And you know what? I bet out of 10, three are probably good as well. And seven are terrible. So like. You can get those initial ideas in like 15 seconds and then build on those. So, you know, to me, that use case is give me a starting point. I can make it better based on the right prompt from me. You know, we can, we can get, we can like basically do a day's work in about an hour and then take it further.

So I think that's a huge one. Um, the second one that I would say would just be the idea of repurposing and reformatting content really easily, because, I mean, you know, there are so many social media platforms and, you know, websites, apps, all of those kinds of things. And so to be able to take something, like, you know, Canva is one that I use as [00:23:00] well.

It's like, to take one piece of one design and say, okay, I like how this looks, but I've got to have it in eight different sizes for eight different platforms. I actually know how to do it in Photoshop if I really wanted to, but why would I, you know, why don't I just hit a button? And again, it's like 90 percent there, or maybe 95 percent there even, um, with a few little tweaks.

So again, it saves someone like me, or, you know, some junior designer or marketing person that doesn't even know design, hours and hours of time.
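
For readers who want to try the "give me 10 ideas" brainstorming loop Greg describes with a locally hosted Llama model, here is a minimal sketch. It assumes the llama-cpp-python library and a locally downloaded model file; the file path and prompt wording are illustrative placeholders, not anything Greg specified.

```python
# Minimal brainstorming loop with a local Llama model via llama-cpp-python.
# Assumes you have downloaded a GGUF model file; the path below is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf")  # hypothetical local file

prompt = (
    "Give me 10 campaign ideas for promoting an association's annual conference "
    "to early-career members. Number each idea and keep each to one sentence."
)

result = llm(prompt, max_tokens=512, temperature=0.8)

# The human stays the gatekeeper: print all 10, keep the two or three worth
# building on, and discard the rest.
print(result["choices"][0]["text"])
```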

Amith Nagarajan: Yeah, I was gonna say, it sounds like you've got a pretty extensive design skill set. On my end, you know, I have a hard time with a stick figure. But I think I could use Canva to do that, which is really empowering for someone who doesn't have any of those graphic type skills. Um, you know, brainstorming as the first use case you mentioned, I think, is super interesting, because it also requires people to recognize that AI potentially can be a thought partner at that phase of ideation and brainstorming.

Normally, people are accustomed to using these tools [00:24:00] once they know what they want to do, whereas now you have this potential thought partner. I think it's super interesting. I know, Mallory, in the past we've done similar things in thinking about campaigns for Sidecar, where we said, hey, how do we want to approach that?

And I think it's pretty similar to Greg's experience.

Mallory Mejias: Yep, it's really great at coming up with titles too, especially, or kind of like those taglines, and I'll ask it straight up, you know, give me 15 options. And you're right, Greg, maybe only three of them are good, but hey, that's a lot quicker than if I had come up with those 15 taglines myself. I'm curious, in your experience, do you only use tools like Llama, which you mentioned, or ChatGPT, let's say, in that brainstorming and ideation process, or do you also use it in that next step of actually creating the content?

Greg Kihlström: I use it across, like, throughout the entire process. So yeah, I mean, you know, but there's always, like, the gate, and when it's just content for me, I'm the gatekeeper, so to speak. But in other settings, you know, working in a larger org or something, there would be different gatekeepers at [00:25:00] different stages. But just talking from my own perspective.

Yeah. You know, I'll get some initial concepts. I'll write something that gives me some direction, to then go back and say, okay, well, based on this idea, you know, write me an intro paragraph, or do something like that. Again, it's not a hundred percent there, but it's closer and closer, and it kind of iterates.

And then it's like, okay, I've got this blog post, give me social posts based on that, or write email copy to promote this, or, you know, things like that. And so, you know, all of that stuff, it just takes time. It doesn't take a lot of strategy to do it, but, you know, it takes human time.

And so, and I'm part of the process. I can, I can guide it if, if there's something weird in there, or if there's some random citation from a publication that was never actually written or something, then I can, I can guide the process before it gets too far along.

Amith Nagarajan: Tell us about your, uh, local Llama instance. Uh, so for those that aren't [00:26:00] familiar with that, Llama is an open source AI model that is produced and published by the Meta team. And, uh, I'm curious why you decided to download and run your own model for that type of work.

Greg Kihlström: Yeah, just to kind of play with it, and to start, um, training it. Uh, you know, which I haven't gotten as far on as I'd like, but, um, you know, I have an interest in training it on my own.

And, and stuff like that as well to just kind of see, can I get something that sounds more and more like me over time?

Now there are, um, you know, platforms out there that do that already, and there are more and more coming into prominence, and they're getting, like, domain specific and, you know, company specific and everything like that. But, you know, my curiosity was just, okay, for me. You know, for my clients, like, they'll invest in one of those other platforms that's more established and everything like that.

But for me, I was just kind of [00:27:00] curious, like, what could I do with, you know, just kind of my own instance, trained on stuff here and there. So, you know, so far, results have been good, you know, not great, but my expectation was good from the start.

Amith Nagarajan: Yeah, that makes a lot of sense. And I mean, you have enough experience with these tools now to expect a different response, perhaps, out of Llama running locally versus running, you know, GPT-4 on ChatGPT, probably. Um, so that's super interesting. I think there's a lot of opportunity for people to think about how an open source model, or really a contained model, whether it's open source or proprietary, uh, might be able to handle dealing with perhaps more sensitive content.

Some associations are concerned about, um, taking some of their proprietary content and either fine tuning an OpenAI model or somebody else's model, for sensitivity reasons. I think that's an interesting conversation, because, you know, to the extent that you're concerned about a vendor misappropriating your content outside of their terms of service, that would potentially be a problem if you just [00:28:00] put your content in Google or on the Amazon cloud.

Um, but there's a lot of misunderstanding of what the terms of service really say. And then, of course, do you trust the vendor? But, of course, in your own in your own world, if you're doing open source and doing fine tuning or even additional, you know, fundamental training, pre training on a model based on having the open source and being able to do it yourself, that's really interesting to think about.

Like, what does that mean? And can that give you the most secure environment as well? It sounds like your clients, though, have been primarily working with commercial tools, like ChatGPT, or maybe Writer and Jasper and things like that.

Greg Kihlström: Yeah, yeah, definitely. And, you know, the larger ones are, as you would imagine, very risk averse, and so not jumping too far into, you know, how do we roll this out to end customers as quickly as possible, but more kind of in the experimentation phase. But those enterprise tools, they're getting really good, really quickly.

So, you know, you mentioned Writer; that's certainly one that is, in my opinion, ready for prime time, so to speak. There are lots of [00:29:00] large-brand case studies already, you know; it's been in use. And Jasper is one I use as well. Um, I mean, I've got like three or four different things that I'll use, and I'm starting to get an idea of, like, oh, okay.

Well, this tool is going to be good for this purpose versus that, and stuff. So, you know, even with my own work in it, I'm getting a preference for that. But, um, but yeah, and then, you know, you've got the existing kind of legacy platforms, everything from Adobe, you know, Photoshop introducing generative features, and Illustrator and all that, to HubSpot and Salesforce.

And, you know, that's really kind of the other component of this reconciliation in my mind: you know, we had all these startups pop up last year, and then we had all the established platforms then tack on generative features. Some of them a little clumsy at first, but they're getting a lot better a lot more quickly.

I mean, the, the Photoshop example, for instance, a little clumsy at first, but fun to play with. But now it's like, [00:30:00] now it's ready to go and I use it all the time.

Amith Nagarajan: Yeah, I have a couple quick things on what you just said. One is that the pace of change is so rapid that people assume what they have today is what they'll have in six months. In reality, that's not the case. The AI curve is extreme. And so there's a lot of innovation happening rapidly. The other part is, um, even if there were no fundamental research advances in AI, which I don't think too many people believe is the case, people are generally quite optimistic about what 2024 and the future will hold.

Um, yeah. There is so much upside with no further fundamental research advances, even if you just essentially took the current frontier models of, you know, Claude 2, GPT-4, Gemini, et cetera, and said, just build stuff on top of that. And people like Writer and people like Adobe just said, we're going to take those models and fully incorporate them into our products.

And then the association world, and industry in general, weaves those technologies into their business, independent of any future advances. And that's a lot of where I think people need to spend their time and focus on. Well, try to anticipate a little bit of the future, where [00:31:00] that puck will be, and skate to that point if you can.

But also just build with what we've got in a lot of respects.

Greg Kihlström: Yeah. Yeah. And you know, um, I think it's a safe bet to say, you know, the Adobes and Salesforces and all those aren't going anywhere. And so, you know, one approach would be, okay, well, let's kind of use what they're developing. I know there's a lot of other platforms that are standalone platforms.

So, you know, maybe pick a few of those to augment what just doesn't work in some of those more legacy platforms. But yeah, I mean, I agree with you. I think there's going to be new things, and there's always a shiny object that's going to pop up. We're only in January right now, so we'll see what happens this year, but I totally agree with you.

I think there's enough out there, and again, it's the meaningful change and the meaningful improvements that it can make to workflow and output. And even, you know, read some of the case studies on, [00:32:00] you know, click-through rates, conversions, and all those kinds of things when generative was in charge of writing copy versus a human. You know, it's pretty compelling what can be done with existing tool sets today, to your point.

Amith Nagarajan: On that note, I want to zoom back into a comment you made a few moments ago about your, you know, you mentioned brainstorming and you mentioned content repurposing as two use cases that are powerful for you and perhaps are low hanging fruit for our listeners. We talked a little bit about brainstorming.

I'd like to come back to the content repurposing comment. And I think that's a super interesting one for associations in particular, because they are essentially content businesses in many respects. And often, you know, an association might have the best content in their particular domain. They might have the best content in a particular field of medicine, or whatever the profession may be.

And a lot of what we spend our time thinking about is, well, how can associations leverage that asset more and more? Uh, perhaps across modalities like taking text and turning it into videos or perhaps from one language to [00:33:00] another. Uh, and I think there's some interesting opportunities there, but even something as mundane as perhaps say, okay, well, we have this podcast and we want to spin off some blogs from it.

We want to spin off some social posts from it. We want to spin off some Instagram and TikTok reels that will be able to draw traffic in and build our audiences there. And, um, you know, Mallory, I know that that's an area that you're super close to. I'd love for you to maybe just share a little bit about what you're doing with that type of content repurposing.

And I'd love to hear Greg's input, um, based on what you, what you share.

Mallory Mejias: Absolutely. So basically what Amith just described is exactly what we're doing over here at Sidecar. We do this podcast every week, and we generate the transcript through a tool called Descript. From there, we throw that transcript into ChatGPT, or portions of the transcript, and we ask ChatGPT to generate outlines.

Blog outlines, I mean. We have found that this is the best strategy, because ChatGPT does not excel at generating a whole blog at a time. It's something it can do, but it's often [00:34:00] shorter than what you're looking for, and maybe a little too general for what we're looking for. So we will first paste in a portion of our transcript from this podcast, ask it to generate an outline for a blog, work with that a little bit, make some tweaks, make sure it's relevant for our audience. Then we'll have it assist us in writing the blog, while, of course, having, you know, human oversight and editing on that piece, and then we'll post it.

But on the other side of that, on the content repurposing, we are playing around currently with a tool called Munch. So we just started video recording our podcasts and posting them on YouTube, to the Sidecar Sync YouTube, which just launched basically this week. And Munch is really neat. You can drop in the whole video recording.

This could be from a podcast, or from maybe a course or a bootcamp that you're running, or a conference that you have. And Munch will auto-splice it into the most relevant pieces of that recording. It will add, um, words on top of it, or a script on top of the video. And it also, I don't know exactly how this works, but it's looking at topics and keywords that are trending right now on the internet. [00:35:00] That's how it's choosing to create these AI generated clips. So you can, basically in 30 minutes or so, and that's just processing; on my end, it just means uploading the video to Munch and letting it do its job. You could have 10 to 12 clips that are highly relevant for your audience, that are trending, that you can then post out to TikTok or Instagram Reels or LinkedIn.

And so that is something neat that we're doing. Greg, I'm wondering if you've played around with that at all with your podcast.

Greg Kihlström: Yeah, definitely. So I use a different tool. It's called Swell AI, but it's similar, in, um, you know, it spits out social posts and blog articles. Again, they all need editing; they'll need work, and they'll have some weird, like, intros. They all start the same and end the same, which is always, that's not a knock on that product.

But I think that's just Gen AI in general; it likes to say in conclusion at the end of posts. But, um, I actually tried to train Llama to not do that; it stopped for [00:36:00] like an hour and then it added it back in. It's pretty ingrained. But, all that aside, um, yeah, you know, it's helped me tremendously.

I mean, I have a team that edits my show and does the production of it, but I do a lot of the, you know, content promotion still myself. And so, um, being able to have blog posts and social posts that, you know, with minor edits, are ready to go every episode, you know, I do three shows a week, so it's like, yeah, it would be a lot of work to do that otherwise.

It's a, it's a, it would be a lot of work to do that otherwise. So, um, and the, the clip functionality. Yeah, that's a, that's a relatively new, um, new thing I'm trying out as well.

Mallory Mejias: On the topic of pet peeves, another thing that ChatGPT does, at least I've noticed, Greg, I wonder if you've noticed it as well: it will often frame sentences as, not only does AI do this, but it also does X, Y, Z. And to me, when I see that now on a blog, one, it drives me nuts, and two, even though some people do actually write like that, immediately in my mind I'm like, that is a ChatGPT sentence, Greg.

I have a question too. So it seems like [00:37:00] we've gotten maybe a bit spoiled in the last 12 months with generative AI and the fact that of course we're all in this space and for us it's become somewhat normal. It's become a part of our everyday workflows and something that people have come up to me and talked to me about personally is, you know, we're interested in AI education.

We like the podcast. We're interested in the bootcamp. We're kind of tired of content generation. We're tired of the marketing content piece. We want to dive deeper than that. So I'm wondering if you have any perspective or use cases, we talked about content repurposing, which is a little bit different, but in terms of marketing and AI, that's not necessarily that content generation piece.

Do you have any thoughts on great use cases there?

Greg Kihlström: Yeah. So, I mean, if we kind of put generative AI aside, I mean, there's whole other realms of AI, you know. Predictive analytics is something that I spend a lot of my time on. I actually spend more time in my consulting work in the predictive area than I do in generative. Generative I do a lot, you know, as you probably could tell, with my own stuff, but, [00:38:00] um, you know, predictive.

And again, this is another kind of thing where, um, you know, data science is getting kind of democratized by, um, by AI. And so, you know, at the large company level, I spend a lot of time looking at, like, propensity models of, you know, how likely is a customer to buy, or to churn if it's a subscription model. But, um, with the right tools, a nonprofit, you know, they're looking for donations. They're looking for new members. They're looking at membership churn, you know, as well, to mitigate against that. So these tools are getting better, I would say, in the predictive space. It's a little, um, generative has gotten a lot more affordable and accessible more quickly than some of the predictive stuff.

But that other stuff is not that far behind. So, you know, I would say definitely, you know, those CEOs of associations and nonprofits out there that are really looking at their financial performance and everything like that, that's [00:39:00] definitely an area to look at.
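
A basic version of the propensity modeling Greg mentions is approachable with standard tools. The sketch below trains a simple membership churn model with scikit-learn; the CSV file and its column names are invented for illustration and would come from your own AMS or CRM export.

```python
# A bare-bones membership churn propensity model, of the kind Greg describes,
# using scikit-learn. The CSV and its columns are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("members.csv")  # placeholder export from an AMS/CRM

features = ["years_member", "events_attended", "emails_opened_90d", "logins_90d"]
X = df[features]
y = df["churned"]  # 1 if the member did not renew, 0 otherwise

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Score current members by their probability of churning so staff can
# prioritize outreach to the highest-risk segment.
df["churn_probability"] = model.predict_proba(X)[:, 1]
print(df.sort_values("churn_probability", ascending=False).head(10))
```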

Amith Nagarajan: That's a good reminder for our audience that there's a whole world of diverse AI and ML opportunities out there beyond just the generative apps that we're talking about, and what you're describing about predicting who might churn, predicting who might or might not attend a conference.

Those are applications of machine learning that have been around, with quite great effect actually, in many businesses for a long time. But there has been a bit of a hurdle: uh, we had to have either a certain amount of data or a certain amount of dollars to get to those types of apps, and there is a democratization process of that happening right now, perhaps also because of the intersection of generative AI, uh, being able to execute code for you, and things like advanced data analysis inside ChatGPT, which at the moment is fairly limited. But I envision a world in the very near future, and this is an engineering opportunity, not really a scientific advancement needed, where you're able to talk to a generative AI tool and ask for things like propensity analysis around a particular data set. And [00:40:00] actually, you can do some of that right now. Um, and, uh, that to me is a great opportunity for people who might not have a data science team, or even the resources to hire a fractional data scientist, to execute on what you're describing.

Greg Kihlström: Yeah, absolutely. And I mean, you bring up a great point. I think the most exciting thing to me about AI is the combination of types of AI. So, you know, I look at it as: there's generative, which we've talked about plenty. There's the predictive, um, component. There's also the workflow automation component, which, you know, anybody using a project management system, um, or anyone that's ever searched on a search engine or used any if-this-then-that statement, they've used AI, you know, maybe at its most basic level.

But, you know, so we've been using AI for decades at this point, of course. But I think the predictive plus generative, to your point, is really exciting. Because, I mean, again, let's say you look through a list of 10,000 potential event attendees and, [00:41:00] you know, you've got 2,500 likely attendees.

If you're a small team, it's nice to know that there's 2,500 people out there, but what are you going to do about that? You know, you need some help and some assistance, unless you just send out, you know, uh, the same email to everybody or something like that. But to actually personalize it to 2,500 people, this is where generative plus predictive can really get exciting.

Mallory Mejias: Okay.

Amith Nagarajan: And there are things even outside of what's AI proper that, you know, are useful, right? Like doing math, which LLMs, we know, are pretty bad at, and doing scientific types of work. But there's a lot of symbolic or traditional programs that are out there that can be woven in quite easily into these AI solutions, that execute on those types of tasks.

I think I agree with you. The engineering of solutions where you combine some generative AI, some traditional compute, perhaps some predictive models, et cetera, to create solutions that execute on it. So you take Greg's idea of, like, oh, these 2,500 people are people who might be interested in our [00:42:00] conference.

What do we do? Well, let's invoke a workflow. Let's A/B test several messages. Now the generative AI may generate the possible campaign A/B branches, and on and on and on. Then you run it back through the predictive model again once you have the first round of feedback. And that iterative ability, I think, would have required massive resources only a couple years ago, and now it's something that's very quickly coming down in cost and being attainable by just about anybody.
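
A rough sketch of the predictive-plus-generative workflow being described here might look like the following: take the members a propensity model flags as likely attendees, then ask a generative model for two short invitation variants per person to A/B test, with a human reviewing before anything goes out. The function name, scoring threshold, member fields, and model name are all assumptions for illustration.

```python
# Sketch: combine a predictive score with generative drafting for an event campaign.
# likely_attendees is assumed to come from a propensity model like the one sketched
# earlier; the OpenAI model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()

def draft_email_variants(member: dict) -> list[str]:
    """Ask a generative model for two invitation variants to A/B test."""
    prompt = (
        f"Write two short, distinct email invitations (label them A and B) asking "
        f"{member['first_name']}, a {member['role']} who has attended "
        f"{member['events_attended']} past events, to register for our annual conference."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.split("\n\n")

# Hypothetical output of the predictive step: members with attend_probability >= 0.6.
likely_attendees = [
    {"first_name": "Dana", "role": "early-career engineer",
     "events_attended": 2, "attend_probability": 0.74},
]

for member in likely_attendees:
    variants = draft_email_variants(member)
    # A human reviews before anything is sent; open and click results then feed
    # back into the predictive model for the next iteration.
    print(member["first_name"], variants)
```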

Greg Kihlström: Yeah. Yeah. Agreed. Yeah. And which goes back to AI just being, you know, its immediate application to, you know, organizations small or large is just, you know, pretty phenomenal.

Amith Nagarajan: Greg, you know, any conversation around AI wouldn't be complete unless we talked a little bit about ethics and safety. So I wanted to ask you a little bit about, in the work that you're doing and consulting with clients, or in your own business, um, what are your general thoughts in terms of framing the conversation around how to do AI in a responsible, safe, and ethical way?

Greg Kihlström: I think, first of all, and, you know, this kind of goes back to the [00:43:00] education piece, is, you know, leaders at organizations need to understand enough to be able to create guidelines, like, employees need guardrails in what they should or should not do. On their own time, they can do whatever they want, but, you know, when it's done for the organization and with the organization's data and information, there have to be some guardrails.

It's not just enough to say, oh, well, you know, she knows data and AI or whatever, let her just, like, run wild or whatever. It's like, there's got to be some. But in order to do that, you have to know what you're talking about. And so I think it's incumbent on leaders to understand enough to be able to say, okay, yeah, we're going to use it here, but not there.

We're going to use it for this type of content, but not that. Um, we're going to feed this kind of stuff into a model, and not this other stuff. You know, so in other words, it's going to vary, obviously, by industry and by focus or whatever. But, um, I [00:44:00] think it really has to start with that. But the rule can't be, don't use it.

You know, we're just, we're past that already. Um, and if anyone out there is, is thinking that we're not, that's, it's, it's just patently false.

Amith Nagarajan: I think I read this morning that New York City passed a new AI hiring law that had to do with disclosure and transparency around algorithms that are used for, I believe, candidate selection. I'm not sure if it covers interviewing. Um, and I know a lot of associations are concerned about biases in AI, and that's part of their AI safety and ethics.

Uh, and I'll give you a specific example of where there's often concerns. Um, associations are collaborative environments. They are intended to represent a profession or an industry. And so, uh, one of the most important things they do is promote ideas and that sort of thing in their space, and that's done through various mediums, including their conferences, uh, their publications.

And so they have a they have a process, a structured process for selecting that content. [00:45:00] So if you want to speak at an annual conference for a particular association, usually, unless you're invited to be a keynote, you usually submit a proposal. And then there's a process that they might go through, perhaps with committee members who are experts in the field.

There's some staff involved in kind of, you know, mediating the process, essentially. And certainly this is an opportunity area for AI. There's many, many areas of AI we could use to automate a large portion of that. But there might be concerns about, like, well, if we're using AI to help suggest which of the proposals might be best for our conference, where does the bias come in?

Has any of your work brought you across similar situations where people had a concern about biases that they had to address and kind of put in place some safeguards?

Greg Kihlström: Yeah, you know, absolutely. You know, I think that's always going to be a consideration, particularly, you know, we use the term transparency on, you know, how do we know what's sort of in the black box of AI's decision-making process. So, you know, some tools are not [00:46:00] very good with that transparency, and others are a lot better than that.

I think that is an area where, um, some of the more, let's call it, enterprise-grade tools have transparency built in, because they're going after large financial organizations or healthcare or whatever, where they just wouldn't even get in the door, um, if they didn't have those. You know, smaller organizations that don't have quite those dollars to spend, they're going to go with some things that don't have that level of transparency.

So, all that to say, um, it's definitely an issue, and it's something to mitigate against where possible. I would say, you know, as a caveat here, though, humans are always biased as well. And, you know, I wrote an article about this, um, a few months back as well, not to say that we should just blindly trust AI, because there have been some very egregious examples of bias.

And, you know, you mentioned the hiring process; there's [00:47:00] some very notable case studies there of where that just ran amok and was terrible. Um, but we shouldn't think that just doing it with humans is going to protect ourselves against bias, because, you know, history shows us otherwise.

Right? So I think it's about how do you find a tool, the right tool for the right job. So, you know, if you're generating an innocuous blog post, maybe, you know, bias isn't the top concern. If you're evaluating candidates for hiring, absolutely, like, you've got to have a tool where you can know, you know, what's getting fed into that.

Why? And how can I see the process so that, um, so that I can mitigate against, against that.

Amith Nagarajan: I think that's a really good point, or set of points, that you've made, in particular that the human biases are often really not taken into account when we think about it. Like, it's not an either-or situation. It's probably an and situation. But the idea of, is AI bias worse than human bias? And in a way, it kind of reflects [00:48:00] back our own biases, based on how it's been trained.

But. Um, you know, in the case of the committee example I provided earlier, where, you know, you're evaluating calls for a proposal or speakers for a conference, um, you know, people are creatures of habit. They tend to say yes to proposals from people they know, uh, perhaps things that they're kind of tuned into.

And that's just how our brains work. It's not that they're bad or good; it's just part of the process. And so I think AI potentially can be a counterweight to that in some respects. Um, part of the, uh, research I'm most excited about this year is advances in AI interpretability, and having models be better explainable, or more explainable kind of inherently.

I think there's also an engineering part of the solution too, which is to architect your solution so that you actually ask the AI for the reasons why it said yes or no to something. So if I'm building a solution where I make an API call to GPT-4, and I say, here is the proposal for this particular conference, evaluate it in these different areas.

I'm not gonna just ask for a number back. I'm gonna ask for a narrative. And actually, if you ask the AI for a reason, a lot of times [00:49:00] it's pretty good, and sometimes its reasoning is really, really bad. So, um, there's ways, I think, of putting in place some interesting safeguards around that.
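
To make the "ask for a narrative, not just a number" idea concrete, here is a minimal sketch of a proposal-review call that requests both scores and a written rationale a human committee can audit. The prompt, model name, and JSON fields are illustrative assumptions, not a prescribed rubric or anyone's production system.

```python
# Sketch: ask the model to justify its evaluation of a conference session proposal,
# returning scores plus a narrative rationale that a human committee can audit.
import json
from openai import OpenAI

client = OpenAI()

proposal_text = "..."  # the submitted session proposal goes here

response = client.chat.completions.create(
    model="gpt-4",  # placeholder
    messages=[
        {"role": "system",
         "content": "You help screen conference session proposals. Respond in JSON "
                    "with keys: relevance_score (1-10), originality_score (1-10), "
                    "and rationale (3-5 sentences explaining both scores)."},
        {"role": "user", "content": proposal_text},
    ],
)

# A production version would validate the JSON more defensively; this is a sketch.
review = json.loads(response.choices[0].message.content)

# The rationale, not just the numbers, is what the committee reviews; weak or
# biased reasoning is a signal to discount the AI's recommendation.
print(review["relevance_score"], review["originality_score"])
print(review["rationale"])
```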

Greg Kihlström: I really like the way you're framing that. You know, if you think about it in terms of, if you had a committee, to go back to the speaker proposal thing, um, if you had a committee of five people, um, why not treat AI as the sixth member of that committee? And just like, you know, this person sitting across from you on the committee, they may have really bad reasons for recommending someone as well.

Like, they may just, they may know someone, or they may have seen their name online and not even remember how. Like, we're not even conscious of all the biases that we have, right? So, like, why not do a compare and contrast of, like, okay, let's send it to AI as if it were part of that team.

Let's see what the rest of us come up with. If AI is so different and so, you know, skewed one direction or another, then, you know, there's something wrong there, or, you know, something to think about.[00:50:00]

Amith Nagarajan: Very good. Yeah. I think there's, uh, there's so much to be learned by experimentation, as you had mentioned earlier, and there's, you know, there's many opportunities out there and to Greg's point, it is like that extra member of the team that you can bolt on and add into the mix. And it's not about handing over the reins to AI entirely or not at all.

It can be gradients of that. And I think there's ample room for experimentation with that particular use case and many others that we've discussed. Well, Greg, it's been an absolute pleasure having you on the show. Thank you so much for taking the time and sharing your experience and expertise with associations and nonprofits who listen to the Sidecar Sync.

Um, Greg, if people would like to get ahold of you because they're interested in, uh, your expertise or the services your company provides, what's the best way for them to, uh, to get you.

Greg Kihlström: Yeah. So two quick things. I mean, one, um, I'm very active on LinkedIn, so you can look me up, Greg Kihlström, on LinkedIn. And then, um, my website is just gregkihlstrom.com. So, uh, it's just, um, a little hard to spell sometimes, but, um, just look me up. [00:51:00] I think even if you search with the wrong spelling, it should show up.

Mallory Mejias: Greg, thank you so much for your time today.

Greg Kihlström: thank you.

Mallory Mejias: What a great conversation we had with Greg today. Shout out to you, Greg, for sharing your expertise and your insights with our listeners today. I want to wrap back to something that he mentioned at the beginning of our conversation, and that is this fear of change within your business, this fear of a new technology, this fear of AI in particular, and the first line of defense he mentioned there: education. I'm sure if you've listened to this podcast before, you are certain that Amith and I are also on board with this.

We believe that education is the first line of defense to the fear of the unknown. How can you prepare for AI or for a new technology if you don't fully understand what it's capable of? So I really appreciate that insight from Greg. And I of course want to remind you that we have the Sidecar AI Learning Hub available.

If you are looking to deepen your AI education this [00:52:00] year, again, you can get more information on that bootcamp at sidecarglobal.com/bootcamp. But aside from the Sidecar bootcamp, continue listening to podcasts like this one and other AI podcasts, keep reading AI blogs, and keep consuming all the free AI resources you can, because that is the only way to prepare for what's coming with AI in 2024 and beyond.

Thank you all for tuning in and we will see you next week.

Amith Nagarajan: Thanks for tuning into Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world.

We'll catch you in the next episode. Until then, keep learning, keep growing, and keep disrupting.

Post by Mallory Mejias
January 25, 2024
Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Mallory co-hosts and produces the Sidecar Sync podcast, where she delves into the latest trends in AI and technology, translating them into actionable insights.