Show Notes
In this episode, we unpack the wealth of knowledge shared at digitalNow 2023, where association leaders converged to explore the transformative power of artificial intelligence. We also unpack the latest announcements from OpenAI's Developer Event, from Custom GPTs to the new Copyright Shield initiative, and how these developments can be harnessed by associations to stay ahead in a rapidly evolving landscape.
CEO Mastermind Group: https://sidecarglobal.com/association-ceo-mastermind-2024/
Download Ascend: Unlocking the Power of AI for Associations: https://sidecarglobal.com/AI
Let us know what you think about the podcast. Drop your questions or comments in the Sidecar community: https://community.sidecarglobal.com/home
Follow us on LinkedIn: https://www.linkedin.com/company/sidecar-global
Thanks to this episode’s sponsors!
- Betty Bot: https://bettybot.ai/
- Cimatri: https://cimatri.com/ai
Relevant Articles/Websites:
- digitalNow Conference: https://www.digitalnowconference.com/register/
- Everything announced at OpenAI’s first developer event: https://techcrunch.com/2023/11/06/everything-announced-at-openais-first-developer-event/
- Custom GPTs: https://openai.com/blog/introducing-gpts
- Global user demographics of ChatGPT in 2023, by age and gender: https://www.statista.com/statistics/1384324/chat-gpt-demographic-usage/#:~:text=ChatGPT%20is%20used%20most%20widely,60%20percent%20of%20ChatGPT%20users
Greetings and welcome to the latest episode of the Sidecar Sync. We are really excited to be with you today. And we have a crazy episode with lots of interesting updates, fresh from digitalNow, as well as OpenAI's inaugural developer event. We have a lot to cover today. First of all, let me just give thanks to our sponsors.
We have two sponsors for this episode. Cimatri is an IT strategy firm specifically focused on the association sector. Cimatri provides a wide array of technology consulting services and focuses specifically on artificial intelligence planning for associations. The website is Cimatri.com.
Our second sponsor for this episode is Betty Bot. Betty is an artificial [00:02:00] intelligence agent for your association. She learns all of the content that your association has ever produced, and through that extensive knowledge is able to provide intelligent answers to any question that anyone asks of her. You can deploy Betty as a member service agent or as a knowledge base expert, and Betty is able to bring a new form of engagement to your audience and to the extended world that might be interested in your content. The website is bettybot.ai.
Mallory: Thank you to our sponsors and Amith, how are you recovering post digitalNow, post time change?
Amith: You know, conferences are always awesome, but really draining. And so it's fun to get together with people and have so many great experiences shared, but it's also nice to get back to your office and, you know, get through your backlog. So I'm, I'm doing pretty well. How about you?
Mallory: I am doing pretty well myself. I had the opportunity to MC digitalNow, which was really exciting. And kind of the first time that I [00:03:00] ever did something like that, we kicked off the event last Wednesday evening with an opening reception, and then there were two full days of sessions after that on Thursday and Friday.
And then Saturday we wrapped it all up with this really awesome workshop. And for me, it was really neat to see so many people excited to talk about artificial intelligence, and also just excited about community and networking. It was really interesting. Some people would approach the Sidecar team and say that, for them, content was first and foremost the reason why they were attending the conference.
And then other people were coming up to us and saying the network and the community was why they were attending the conference. So, I think it was just really nice to see folks in person excited about AI.
Amith: Absolutely. It was a fantastic event. We had a great turnout. Denver was a wonderful host city for us. And I think people learned a lot and they came away excited, you know, ready to go and do stuff. And that to me was the super powerful part of it: there were practical applications of AI that association leaders walked away with [00:04:00] that they're already in the process of implementing.
I'm hearing from people through LinkedIn and the Sidecar online community and other ways telling me about what they're up to. It's super exciting. Yeah.
Mallory: For sure. digitalNow is actually our first topic of the day. For those of you who aren't familiar with it, I want to give you a little bit of an overview of what the event is and how it works. It brought together industry leaders and innovators to share insights on the intersection of technology and associations.
Sessions covered a broad spectrum of topics, from leveraging AI and machine learning to predict future trends to harnessing data for greater impact and creating cultures of innovation akin to giants like Amazon. The event also delved into practical applications of AI, like Amith said, the importance of creating a data-driven culture, and the ethical considerations of AI integration.
With a focus on actionable strategies, digitalNow provided attendees with tools to navigate the complexities of digital transformation, ensuring associations remain relevant and resilient in the face of technological change. Amith, this is [00:05:00] probably a really tough question, maybe an easy one. What were your biggest takeaways from the event?
Amith: You know, I reflected on this before the event and actually after, in terms of the change we've seen in just over a year since digitalNow 2022, which occurred in mid-October 2022 in New Orleans, and the difference in the attendees and their level of interest in AI specifically.
I expected people this year would be more interested in AI, doing more with it, and ready to go do even more. But the magnitude of the difference was surprising even to me coming in with that expectation. So, people are experimenting with AI. They are overcoming their fears. They're learning a lot.
They're building policies to guide responsible AI adoption, and they're experimenting, experimenting, experimenting. We're hearing stories of that [00:06:00] not just at digitalNow, but in general. The digitalNow audience tends to be folks who are not only forward looking, but they're people who are super interested in disruptive technologies, new ways to drive a culture of innovation and experimentation.
So to me, that was number one is what people are actually doing. You go to a lot of conferences and people talk about things, but they're not actually doing a lot. And that was one big, big difference this year compared to last year. The other thing that I think was really interesting wasn't so much the content itself, although the content was awesome.
Like there were some great, great sessions and keynotes throughout the whole event. But to me, the interesting other takeaway is people's acceptance of AI's rapid arrival. Not that it's coming, but that it's here and that AI is changing at this insane pace. Part of what I spoke about was the six-month doubling rate.
And so, for those that aren't [00:07:00] familiar with this, artificial intelligence is sitting on top of other exponential trends, like Moore's law in computing, which was the doubling of computing capacity every 18 to 24 months that we've benefited from for almost 60 years. And AI has a similar exponential curve; the only thing is that, comparatively speaking, it's much crazier, because there's a doubling of capacity in AI every six months.
But the takeaway for me, going back to your question, Mallory is people seem to get that they realize that what's happening is not something that you can either ignore or defer on that. The time is now to take action and I think people are also realizing that because of that six month doubling, even something you can't do today is something you're probably going to be able to do in a few months.
And that's exciting.
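A quick aside for the show notes: the doubling math Amith refers to can be sketched in a few lines. The doubling periods below are the episode's rough figures (six months for AI capability, 18 to 24 months for Moore's law), not measured data:

```python
# Back-of-the-envelope comparison of the doubling rates discussed above.
# The doubling periods are the episode's rough figures, not measurements.

def capability_multiplier(months: float, doubling_period_months: float) -> float:
    """Capability growth after `months`, assuming a fixed doubling period."""
    return 2 ** (months / doubling_period_months)

# Moore's law: roughly one doubling every 18-24 months (using 24 here).
moore_24mo = capability_multiplier(24, 24)   # one doubling -> 2x

# The six-month AI doubling Amith describes: four doublings in 24 months.
ai_24mo = capability_multiplier(24, 6)       # four doublings -> 16x

print(f"Moore's-law growth over 24 months: {moore_24mo:.0f}x")
print(f"Six-month-doubling growth over 24 months: {ai_24mo:.0f}x")
```

This is why a 24-month AI roadmap broken into six-month chunks looks so different from a traditional IT plan: each chunk starts from roughly double the capability of the last.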
Mallory: Agreed. What's funny is, even though Amith and I have not chatted about this question prior, that was really one of my biggest takeaways too. I went to digitalNow last year as [00:08:00] an attendee, and it seemed like the topics were Web3 and artificial intelligence. At least from my perspective, and I was also very new to the AI world at that time, everyone was interested, intrigued, wanted to learn more, but maybe wasn't quite ready to dive in, compared to this year's conference, where people came in ready, were looking for actionable next steps after the conference, and I really heard tons of conversation about AI even outside of the sessions, which I think was really important.
When I got on stage to do my welcome remarks, I asked how many first-time attendees there were in the space, kind of thinking maybe 20 percent of people would raise their hands. And it was almost all new people, which I think is really exciting and just shows that people are willing to learn more about AI and excited about it.
Amith: Yeah, I found that really exciting, too. I think the stats were something like 52 to 55 percent first-time attendees, which is amazing. So there's a lot of growth in [00:09:00] kind of fresh attendees to the event. And I think people are drawn into it because they knew the content was aligned with that type of thinking. And to me, whether someone was a 10-year attendee and they're the CEO of an association, or a first-time attendee and, you know, very early in their career path, they seemed to come away with actionable insights, right? To me, that's the big thing: you need to understand what's happening, and that's part of what conferences teach you. You need to be inspired to do something, which is obviously the prerequisite of taking action, because if you're not inspired, you know, what difference does it make? Right? But then you actually need to be able to take action. And so the ingredients for that are practical insights, things that actually can help you to go do stuff.
And so, whether someone was a 10th-year attendee or more, because digitalNow has been going on for 22 years now, or a first-timer, or, you know, at any level of experience, they were all walking away saying, I'm gonna go do this and this and this. And that's what I kept asking people: you know, what's the favorite thing you learned and how are you going to apply it?
And there were actual [00:10:00] answers this time. Compared to a year ago, people were interested in AI, they were interested in the topics of digitalNow, but they were kind of, you know, unclear about what they were going to do next, or if they were going to do something next.
Mallory: For sure. Can you share from some of the attendees that you spoke with, what were some of those action items that they were doing next that were interesting to you?
Amith: The most common one was training. So, and this is, you know, perhaps every single speaker, I think, stated this. I certainly hammered it home a lot because it's the number one thing that I think is critical in any disruptive time is to invest in learning. And, you know, you need to walk the talk as the CEO of the organization or in any leadership role.
Whether senior or not, you need to be learning yourself. You need to share what you're learning. You need to encourage learning. And you need to create an environment that's not just safe for learning, but safe for experimentation, which is a form of learning. And so, I think that was the number one thing people were saying: I'm going to do the AI Learning Hub, or I'm going to read a [00:11:00] book, or I'm going to run a small experiment and then share the results. So people were thinking about things that had to do with coming up to speed, you know, because AI is changing so rapidly. You know, it is overwhelming. People have told me that consistently for a long time, how overwhelmed they are by the pace of change.
And I get that. And I too am overwhelmed. You know, I tell people that as well. I spend almost all my time dealing with this subject, and I have been deep in AI for almost a decade now and in the software industry for 30 years, and I too am overwhelmed. But the only cure for that is learning. The only cure that actually moves you forward with a useful purpose is to go learn stuff, because that puts you on a track to keep up with at least some of that change.
Whereas if you're just kind of throwing your hands in the air, saying it's moving so fast, it's such a big deal, and you're not attempting to move forward and learn, you're moving at zero miles per hour. You can't do a whole lot of anything. So anyway, I think that's the number one thing that I heard people say.
They were [00:12:00] definitively going to move forward with training programs of some sort. I had a number of people come up and say, yeah, I really want to join the CEO Mastermind, which is a group that I run with CEOs of associations, where we meet on a regular basis to really dig deep into the strategic impact of AI.
There were a lot of things like that, you know, where people really had a specific learning related objective and that really, really, really excited me because I think that's the entry point to really driving change.
Mallory: You had a keynote session on AI. I'm wondering if you can talk a little bit about that, and then also maybe mention the keynote panel you served on after that. Were there any really hard-hitting questions that people asked that really resonated with you?
Amith: Well, in the keynote, what I tried to do is really paint a picture of what is happening in the field and what's likely to happen in the next 12 to 24 months. And, you know, what is happening and has happened recently is stunning and at the same time is nothing compared to what we're about to experience.
And it's hard to [00:13:00] conceptualize that because part of the way we've evolved as a species is really in kind of a linear way. And one of the charts I threw on the screen was the pace of progress in terms of both knowledge and kind of computational capability over time. And, you know, the exponential curve basically looks like it's flat for a long period of time.
So, really, for like, you know, society and our species' existence, it's been pretty much the same for hundreds of thousands of years. And then all of a sudden, it's not. And even in the last 2,000 years, you know, a lot of things were really similar for 1,000-plus years, even 1,500 years, arguably.
The last 500 years are different. And you can keep slicing it down and say, well, yeah, things were really similar for 1,900 years, and then in the last hundred years, all this stuff happened. So that pace of change, I think, was kind of like painting the picture of the landscape of the world we're in. But part of what I was trying to do is also explain what's about to happen.
So there are certain near-term trends that are quite clear. In fact, interestingly, you know, we knew [00:14:00] that the OpenAI event that we'll be chatting about shortly was going to happen on this Monday, two days ago. And there were certain things we were pretty sure that OpenAI would unveil.
That's not like the only thing that's happening in AI. It's more of a really clear indicator of one of the foremost leading companies in the space and what they're doing. But what they ultimately unveiled on Monday was similar to what we were thinking they would. And it was, you know, an exponential change again from what they had announced back in the March timeframe.
So to me, with my keynote, I knew we had a lot of great speakers that were going to talk about specific things. So what I was trying to do was to kind of paint that landscape for people so they understood where they are relative to the landscape and on that journey. And again, it's hard to conceptualize exponential change, because our society is built around really incremental change.
Very, very small change. So it's a tough problem. But the most exciting thing I heard about with respect to my keynote was that people came away saying, yeah, we need to plan for [00:15:00] what's coming in six months, 12 months, 18 months. Because of that six-month doubling curve, each of those intervals of time represents a doubling in capability, roughly, and that requires a little bit of foresight.
And so that, that's really what I was talking about.
Mallory: Yep, absolutely. So you would say with regard to AI that people should be planning ahead six months out, no more.
Amith: No, I don't think that's the end of the story. You know, when you forecast anything, whether it's weather or the economy or, in this case, AI's progress, the shorter the time period ahead of where you are at the moment, the more likely you are to be accurate, right?
The order of magnitude of change in that time period is smaller, so it's easier to predict. And we have a pretty good idea of what's happening in the next three months, a decent idea of six to nine months, even 12 months. And beyond that, it starts getting a lot fuzzier, obviously, in any field of practice, certainly one that's changing as rapidly as AI is. So when we do [00:16:00] roadmaps, which I mentioned earlier, one of our sponsors, Cimatri, does AI roadmaps for clients, and I help them out with some of the clients. We focus on a 24-month period of time, and we break it up into six-month chunks, kind of roughly correlating to these doubling periods, and the first six months is quite granular.
There's a lot of detail, and the next six months is a little bit less, and so on. And the key thing, though, is to keep reevaluating it. A lot of people like to say, hey, we have a quote-unquote plan for the next two years. Set it and forget it and just execute on it. But the plan could be wrong, you know, and you won't know, possibly, until a little bit of time into that journey.
So that's a big part of it is recognizing that you're going to have to keep updating a roadmap or a plan.
Mallory: The roadmap makes me think a lot of Dr. Rebecca Homkes's session on the last day. It was a workshop called Survive, Reset, Thrive. And she gave a really good example with Alice in Wonderland. So Alice, lost in the forest, goes up to the Cheshire Cat. And she's lost and she's asking, you know, can you help [00:17:00] me?
The Cheshire Cat asks, where are you going? And she says, I'm not sure. And I'm paraphrasing here, but the Cheshire Cat says something like, well, if you don't know where you're going, any path will take you there. And I think that really resonated with me and hit hard. Dr. Homkes said, if we don't have an idea of what success looks like, or what a finish line looks like, or what our AI strategy looks like, we have no path to get there. And so, that's something that really stuck with me from that last session.
Amith: I totally agree with that. And you know, the other thing is we know certain paths that will not get you there. And so, you know, any, any progress is progress on the one hand. But then the flip side is, let me tell you some things I was discouraged by and my reaction to them. The number one thing was people saying we have existing priorities that are taking up essentially all of our bandwidth. Whether it's financial resources, mental bandwidth, energy, et cetera. We have existing priorities. It might be that we're [00:18:00] rebooting a major technology system, like an AMS selection upgrade, right? It might be that we are making some major organizational changes to our product pricing or something like that.
But a lot of it was technology, because people, understandably, group AI initiatives into the technology bucket broadly. And so I was told, oh, yeah, well, we're just starting this process of replacing our AMS; we're several years behind in doing that. And I said, oh, that's interesting. So that's going to precede your AI plans?
They said, yeah, we have to get it done. I said, well, tell me more about that. Why do you need to get it done? And they said, well, we're like three or four years overdue, and our AMS is old and creaking and really bad. And I said, okay, so let me ask you this. Let's say two years from now you woke up, you fast-forwarded, and you still had the same AMS. Would you be out of business? And the answer was generally no. There were a couple of people who said, well, you know, we have a data leakage problem where literally our AMS has corrupted data, and the data keeps getting worse, because it's a 30-year-old [00:19:00] system and the developers don't even exist anymore. And I heard that once or twice.
Most people said, no, our employees don't like our AMS. They think our AMS is not very effective or easy to use, but it works, even though it's clunky and slow and expensive. And my question was, well, what if you deferred the process of upgrading your AMS? And by the way, I am no fan of technology debt and keeping old systems around, but with limited resources, you may have to choose between an AI roadmap, executing something on your AI journey in 2024, versus an AMS upgrade. My point was simply this: you know that your AMS upgrade will not take you meaningfully further on the AI journey, and the AI journey is by far the bigger impact in terms of your business and your ability to serve your members and your audience.
So why do it? Why not just figure out a way to put it on pause and redirect that money and that time, which is as important, if not more important, [00:20:00] than the dollars, to something meaningful in AI? And a lot of people started thinking, oh, yeah, you know what? We know that a new AMS is probably going to be marginally better than our last one.
Not like a radical change. So, you know, why do that right now? So anyway, that's the general point I was making. I realized that's kind of a generic statement, and every organization situation is a little bit different. My main point was, reconsider your existing priorities and be willing to turn off projects, kill projects, or defer projects so that you reclaim some of that bandwidth.
Your organization has a lot of resourcing. Even if you have 10 people and a small budget, you have capability, but you're probably sucked into all sorts of activities, some of which could be eliminated, some of which could be deferred. And people don't think about that enough. They're too tied to what they've been doing.
That inertia around current process and current projects is really deadly when it comes to being able to rapidly change and adopt new capabilities, in particular, [00:21:00] AI.
Mallory: Do you see a path for AI to work with AMSs in the future? Is that already happening? Could AI disrupt the AMS structure as is?
Amith: You know, I don't know what the AMS vendors are doing. I have to think that the AMS vendors are taking this seriously and doing something with their AI strategy, and I would be very pleased to see AMS vendors generally adopt AI and improve the quality of their products through AI and all this stuff.
But the AMS vendor community generally moves fairly slowly, in my experience. And, you know, even if the AMS vendor community radically embraced AI and, let's say, universally, all AMS vendors, and there are dozens of them, all released AI features in the next six months, that doesn't mean that customers are going to start using them, or that the features within the AMS are really the important ones when it comes to AI adoption. So from my point of view, the most important AI capabilities are the ones that are at the intersection of your association and the outside world, not the internal ones. [00:22:00] And AMSs, which take up the lion's share of associations' financial and mental bandwidth when it comes to tech, are 90-percent-plus focused on internal process.
And so the problem with it is fundamentally, I don't think you can move the needle in the areas that matter the most, which is how you engage with your members, how you deliver content, how you interact with your committees, how you deal with volunteers. Those are all things where AI is going to rewrite the book very quickly.
And if you don't rewrite your book, you know, people are possibly going to go somewhere else. So, from my point of view, the AMS industry isn't the path through which people will be adopting AI rapidly. Also, any kind of change to a system that's as integral to their back office business processes as an AMS is going to be treated with extreme caution in terms of the speed at which people change their process or upgrade. And part of that's understandable.
Mallory: Yep. As all you listeners can see, we talked about some really interesting topics at digitalNow. We could keep this conversation going for a while, but thankfully, if you didn't [00:23:00] attend, we are posting recordings of our sessions really soon. So be on the lookout for that on the Sidecar LinkedIn and on the Sidecar Community.
I guess it was the week of events because not only did we have digitalNow, but we also had OpenAI's inaugural developer event, a landmark occasion that was held on Monday, November 6, 2023. It was packed with big announcements that will shape the future of AI development and application.
Here's what unfolded. ChatGPT has reached a staggering 100 million weekly users a year after its launch, making it one of the fastest consumer products to achieve this user base. Additionally, over 2 million developers are actively building on its API. The latest iteration of ChatGPT, GPT 4 Turbo, was introduced, offering a text-only model and a dual text-and-image understanding model.
It boasts a context window four times larger than GPT 4, which we'll dive into later, and a knowledge [00:24:00] cutoff as recent as April 2023. Custom GPTs, arguably the thing I am most excited about. OpenAI is empowering users to create their own GPTs for various use cases without any coding requirements.
This includes the ability for enterprise customers to develop internal-only GPTs based on their own knowledge base. With that comes the GPT Store. This is a new marketplace for user-created AI bots, which is set to launch soon, featuring creations from verified builders and offering a potential revenue stream for popular GPTs.
OpenAI also released a few APIs, one of those being the Assistants API, which allows developers to create agent-like experiences capable of retrieving external knowledge or executing specific actions, ranging from coding assistance to AI-powered vacation planning, which Amith and I have talked about in previous episodes.
There's also a DALL-E 3 API. The text-to-image model DALL-E 3 is now accessible, complete with moderation tools and a variety of [00:25:00] outputs. There's an audio API as well: a text-to-speech API with six preset voices, providing access to two generative AI models. And last but not least, a little bit of a different line of thought.
There's a program called the Copyright Shield. This is a new program to protect businesses from copyright claims. When using OpenAI's products, offering to cover legal fees in the case of IP lawsuits. Amith, I'm wondering, what are your thoughts about all of these developments? There's a lot, I want to dive into most if not all of these with more detail, but what are your initial thoughts?
Amith: Sure. Well, I think, you know, depending on who you're talking to, the most exciting aspect of the announcements was pretty much in one of two categories. One was the custom GPTs that she mentioned, which are essentially this ability for you, as a non-programmer, to create your own GPT, essentially your own version of these chatbots, but they can be tuned to your particular use case so they can be given instructions to say this is what you're [00:26:00] supposed to do, like your specific capabilities, and you can also inject into your custom GPT proprietary knowledge, as Mallory was describing, and I can't underscore enough how important that is, because, you know, say, for example, you're an author.
And so, we just released a book called Ascend: Unlocking the Power of AI for Associations. And actually, about five years ago, I wrote a different book called The Open Garden Organization. Imagine if I wanted to create a GPT called Amith GPT, and I decided I want this thing to be able to answer association questions kind of in my style of conversation, with that knowledge base. And perhaps those two books are relevant, because they're two bodies of work that represent some of my thinking that I've published. Maybe there are some presentations I've given over the years. So I can upload this content to a custom GPT called Amith GPT, and I can put that in the store, or I could make it available to anyone I want.
Now it does live within ChatGPT, so it's kind of captive to that ecosystem. But the functionality is really tremendous. [00:27:00] And I think it's exciting because it allows a non-programmer to create these things.
You can create these things in other ways. Like if your association wanted to create a truly extensive version of this, there's products obviously like Betty Bot and there's other ways to create true enterprise wide. You know, chat agents that use your knowledge base. Custom GPTs aren't quite at that level.
They're not enterprise grade at this point in terms of the scale of content they can ingest or kind of the other things that you want to do in terms of guardrails around how these bots respond. It's basically ChatGPT plus some extra knowledge and some instructions that make it have a particular personality.
But I don't want to underplay how important this is, because this is going to put the ability to create a custom GPT in the hands of literally everyone who can type. If you can type or speak English, and actually other languages too, you can interact and create a custom GPT. So, I'm pumped about that too.
And so everyone I've talked to that's not a developer has zoomed in on that. Developers have been focused, actually, on context length, speed, and cost. So OpenAI [00:28:00] dropped their cost by 2.75 times, so almost a 3x reduction in cost, while also increasing the context length of their most powerful model, actually, for most people, by about 16x.
So it is technically 4x, as you mentioned earlier: where there was a 32K context window, it's now 128K. But most people didn't have access to that. Most developers had access to an 8K context window, and so going up to 128K is a significant bump. Why are developers excited about this?
Well, context window is kind of like short term memory. So, let's say I'm talking to a brilliant scientist, who is the most experienced, knowledgeable climate change scientist on the planet. And I'm having a conversation, I'm saying, you know, a bunch of things because I'd like to learn more about, you know, climate change and, and climate science.
And this scientist is brilliant. The person has unbelievable knowledge, but there's just one problem with this scientist. They have very bad short-term memory. They only remember about the [00:29:00] last minute or so of my conversation. And after that, they forget who I am and what I said. So that is kind of a limitation.
Even though this brilliant, incredible person is available to me to talk and answer questions and possibly even do things for me. The person only has a one minute memory. That's the way I like to explain context windows with AI. The AI is unbelievably capable, but the context windows have been very, very small.
So getting a 16x improvement, as a practical use of context, means that you can go from, you know, a handful of paragraphs of information that are remembered by the AI model to literally a couple of business books. So you could take both of the books that I've written, Ascend and also The Open Garden Organization, and load both of them into the context window of the newest GPT 4.
So, again, why are developers excited? Because developers are building apps that utilize as much of the context window as they can. And then the lower cost obviously helps, and it's [00:30:00] also much faster, by the way. GPT-4 Turbo deserves that name, just like GPT-3.5 Turbo was the faster version of the older model.
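The context-window jump described above can be put in rough numbers. A minimal sketch, assuming the common heuristic of roughly 4 characters per token; real counts depend on the tokenizer and the text itself:

```python
# Back-of-envelope math for the GPT-4 Turbo context window jump.
# Assumes ~4 characters per token, a rough heuristic only.
CHARS_PER_TOKEN = 4

def approx_tokens(text_chars: int) -> int:
    """Very rough token estimate from a character count."""
    return text_chars // CHARS_PER_TOKEN

old_window = 8_000      # what most developers previously had access to
new_window = 128_000    # GPT-4 Turbo's announced context window

# A typical business book runs on the order of 90,000 tokens
# (~360,000 characters), so it fits comfortably in the new window.
book_chars = 360_000

print(new_window // old_window)               # 16x improvement
print(approx_tokens(book_chars))              # ~90,000 tokens per book
print(approx_tokens(book_chars) < new_window) # one whole book fits
```

The 16x figure matches the jump from the 8K window most developers had to the 128K window of GPT-4 Turbo.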
So, those are probably the two main themes that I'm seeing out there with developers and non developers.
Mallory: In terms of the context window and short term memory, I, like I'm sure many listeners here, have been using some of the same threads on ChatGPT since the beginning of the year. So when I think of that, I think, oh, well, it does have memory, right? Because everything I've ever thrown into the newsletter thread I have with ChatGPT, it remembers, quote unquote.
So can you kind of explain that a little bit further, what you mean by that short term memory?
Amith: Sure. It is a little bit confusing, because if you look at the length of your conversation with the chat agent, it looks like it's got unlimited length, and it does. It stores all of that in its database, essentially. But that's not actually in the AI model; that's just stored somewhere on an OpenAI server, which knows that's the conversation you've had.
Think of it this way. Let's say I had a minute's worth of short-term memory in [00:31:00] terms of just conversation. In that minute, I remember probably a little bit of what you first told me at the start of the conversation. So, in that first minute you introduced yourself, you might've told me a little bit about what you were looking to do, and I probably remember what you just told me, but what's in the middle is what gets compressed. That's how our brains work, and that's also how these AI models tend to work.
And that's a little bit of what's happening in the longer chats, where it looks like there's a lot retained. But really, what OpenAI's chat agent is doing is compressing a lot of that intermediate knowledge: remembering a little bit of the beginning, a lot of the most recent requests, and then kind of compressing the middle down.
So, let's say again, in that minute-long memory we've really had an hour-long conversation. Compressing means just summarizing. It keeps chunking it down to smaller bits, where it's like, here's a bullet point of 10 words that came out of a five-minute conversation. That's how these chatbots work: they don't actually have a high-resolution [00:32:00] memory.
And the longer context window helps solve for that. It helps create a much, much better short term memory.
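The compression pattern described here, keep the start, keep the recent turns, collapse the middle, can be sketched in a few lines. This is a hypothetical illustration, not OpenAI's actual implementation; the summary step here just truncates each message, where a real system would call a model to summarize:

```python
def compress_history(messages, keep_first=1, keep_recent=3):
    """Hypothetical sketch of conversation compression: retain the
    opening message(s) and the most recent turns verbatim, and collapse
    the middle into a short summary. A real system would use an LLM to
    summarize; here each middle message is just truncated."""
    if len(messages) <= keep_first + keep_recent:
        return list(messages)
    head = messages[:keep_first]
    tail = messages[-keep_recent:]
    middle = messages[keep_first:-keep_recent]
    # Stand-in for a real LLM-generated summary of the middle turns.
    summary = "Summary of earlier conversation: " + "; ".join(
        m[:20] for m in middle)
    return head + [summary] + tail

history = [f"message {i}: " + "details " * 10 for i in range(10)]
compressed = compress_history(history)
print(len(compressed))  # 5: first message, one summary, last three turns
```

The beginning and the most recent turns survive at full resolution; everything in between is lossy, which is why a note from weeks ago may need to be repeated.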
Mallory: That makes a ton of sense, actually. And I didn't know that. So like I mentioned, I use a newsletter thread on ChatGPT, where I've kind of trained the model on how I like to write weekly newsletters. And every week I go in there, I drop in links to relevant news items across the Blue Cypress family of companies across Sidecar, and it generates the newsletter blurb that way.
But oftentimes I'll have to remind it to reference a note that I gave maybe a couple weeks ago: remember, I didn't really like when you did this. And that makes a ton of sense. It's compressing all of that knowledge into, I guess, a short-form bullet point, you could think of it as. And so when I remind it, it says, oh yeah, I remember.
And then fixes it. But yeah, that's a really interesting point.
Amith: Right, because the reminder is just like dealing with a person. You've kind of refreshed their memory, and even more important than refreshing the memory, you've put that most important piece of information at the most recent part of the [00:33:00] memory, which is what it uses. It overweights on that. It knows much better how to do that.
Now, we were just talking about custom GPTs, and you know, your comment gives me an idea of something I think people will find very useful. My example of the Amith GPT that knows both of my books and so forth being available for other people to use, I think, could be interesting. But I think your example potentially has a much broader use case, in that you are producing a newsletter on a consistent basis.
And there are certain instructions that you want to be retained. You always want a certain style, a certain tone of voice, perhaps a consistent length. And those basic instructions, up until now, as a non-programmer, you did not really have a good way of codifying into the chat so that they're always there.
And with custom GPTs, you can provide instruction settings that are always there and are always kind of top of mind, so to speak, for the AI. So you could say, here are the 10 most important rules for Mallory's newsletters. And Mallory's newsletters could be a custom [00:34:00] GPT that understands those things and won't forget those things.
So, it's much, much less likely to forget those things. And then on top of that, you could say, here are five newsletter examples that I've written. That is my best work that I think I've gotten the best reactions to use this as an example for tone, for style, for length, et cetera, and it will remember that much better.
And then you can go and use that custom GPT and have as long of a conversation as you want. And the pieces that you've quote unquote programmed into those instructions, which by the way is a form of programming, I don't really need the air quotes, it's just a new form of programming, but you as a user have programmed that into your custom GPT, and it stays with it.
So I think that's actually a use case that will just blow up: custom GPTs for personal use, custom GPTs for your team's use, where, let's say, an editor might create a custom GPT that has their way of thinking about a particular type of assignment and gives all of their writers that custom GPT to chat with instead of the generic ChatGPT.
There are so many ways that you can use this [00:35:00] technology.
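The "rules plus examples" pattern described above maps directly onto how persistent instructions are assembled. A sketch under assumed names; the newsletter rules and example text are placeholders, not anything from an actual Sidecar GPT:

```python
def build_custom_gpt_instructions(rules, examples):
    """Assemble a persistent instruction block of the kind a custom GPT
    keeps 'top of mind': numbered rules plus few-shot style examples.
    The structure is illustrative; custom GPTs accept free-form text."""
    lines = ["You are a newsletter-writing assistant.",
             "Always follow these rules:"]
    lines += [f"{i}. {r}" for i, r in enumerate(rules, 1)]
    lines.append("Match the tone, style, and length of these examples:")
    lines += [f"--- Example ---\n{e}" for e in examples]
    return "\n".join(lines)

# Placeholder rules and examples for illustration.
rules = ["Keep each blurb under 100 words.",
         "Use a friendly, professional tone.",
         "End each item with a link to the source article."]
examples = ["Last week at digitalNow, association leaders explored..."]
print(build_custom_gpt_instructions(rules, examples))
```

Because these instructions ride along with every conversation, they are far less likely to fall out of the context window the way a note given weeks ago in a long thread would.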
Mallory: I am obviously very excited about all of the custom GPT options that are out there. Open AI actually published a blog where they gave some examples, which I think is really helpful to contextualize just how fun some of these GPTs can be. One of them was a creative writing coach, which can read your work and give you feedback to improve your writing skills.
There was another one called a Game Time GPT, which quickly explains board games or card games to players of any skill level. Gosh, I have wished many times that I had something like that. Also a Laundry Buddy GPT, where you can ask it anything about stains, settings, sorting, and everything laundry. Amith, I know you played around with this briefly yesterday, I think with what you mentioned as an Ascend GPT, perhaps.
Can you talk a little bit about what that was like? I'm not sure if it's even fully released yet. It wasn't available for me as of yesterday, but tell us about that.
Amith: Yeah, you and I and 100 million of our closest friends all tried to hit that at the same time. So OpenAI is, I believe, just slowly [00:36:00] rolling it out to people. I actually don't have access to the custom GPT capability directly, but I do have access to something called the OpenAI Playground, which is more of a developer tool.
And I was able to utilize the underlying technology for it through that mechanism, which is using something called the Assistants API, which is what custom GPTs are built on top of. And so I was able to create something along those lines, which is, I think, the link that I shared with you that had one of the books loaded into it.
And it was exactly what I expected it to be, really. I mean, it's just basically taking that knowledge and augmenting the generic ChatGPT tool with a particular body of knowledge that I, you know, grounded it with, and told it to only use that knowledge in order to answer questions, which, by the way, is an important thing.
A lot of people worry about the knowledge of a language model like ChatGPT. And the reality of it is that, up until now if you were a programmer, and now really anyone, you can provide direction on how much of that knowledge, if any, you want the AI to [00:37:00] use to answer questions. So, for example, if I were to say only use the knowledge from my books and do not use any of your preexisting knowledge, then generally speaking, the AI will respect that.
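The "only use this knowledge" direction is typically expressed as a grounding constraint in the system message, with the source material supplied alongside the question. A hedged sketch of that pattern, not the Assistants API itself; the function name and message structure are illustrative:

```python
def grounded_prompt(question, source_passages):
    """Build a chat-style prompt that instructs the model to answer only
    from the supplied passages, a common pattern for constraining a
    model to a specific body of knowledge."""
    context = "\n\n".join(source_passages)
    system = ("Answer using ONLY the knowledge in the provided passages. "
              "Do not use any of your pre-existing knowledge. If the "
              "passages do not contain the answer, say you don't know.")
    return [
        {"role": "system", "content": system},
        {"role": "user",
         "content": f"Passages:\n{context}\n\nQuestion: {question}"},
    ]

msgs = grounded_prompt(
    "What is the book about?",
    ["Ascend is a book about unlocking the power of AI for associations."])
print(msgs[0]["role"])  # system
```

The resulting message list is what you would pass to a chat completion call; the instruction to refuse out-of-scope questions is what pushes the model away from making things up.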
And that dramatically reduces so-called hallucinations, these made-up answers. It also reduces inconsistencies. Even independent of hallucinations, you probably want your custom GPT's answers to come out consistently, not as conflicting answers that come from a different mindset or a different body of work.
Maybe you don't, depends on the tool that you're building. But the short answer to your question is that I actually haven't played with the custom GPT capability yet, but I played with the underlying technology, and it's pretty impressive. You know, the other comment I wanted to make on it is that in the developer community, it was exciting on the one hand, I think, for a lot of people, but a lot of people also were scared, you know, because they said, oh, well, there's this whole ecosystem of two million developers.
And I'd be willing to bet you at least half of those people were building a lot of these apps that you [00:38:00] see, which a lot of people call wrapper apps, wrapper with a W. And these wrapper apps essentially have a purpose of just exposing the ChatGPT functionality in a different way.
Now you can build those directly into the core OpenAI product. And to me, it's incredibly obvious that was going to happen. Because if you think about it, what is OpenAI going to do? They're going to pursue horizontal growth opportunities, meaning they're not going to create something specific to an industry like manufacturing or financial services or the association market, at least not likely.
They're way more likely to create additional features that everyone wants. And so those are features that are kind of abundantly obvious that a lot of people would want. Another example of this that's not in the OpenAI realm, but is similar, is, you know, people were kind of surprised that Zoom and Microsoft Teams are adding summarization capabilities and action items and all these other AI-generated enhancements to the [00:39:00] meeting experience.
And to me, once again, it's incredibly obvious that was coming, because these platforms have better data. They have all the incentive in the world to do that, and to bake it in, you know, is just incredibly important. I'll give you an example: it used to be that if you had an email server tool or an email application, and you wanted to filter out spam, you'd have to buy a third-party product.
You'd go and you'd either download it if it was a consumer tool, or if you were an IT administrator, you would install a server spam filter. But that sounds ridiculous now, because it's just built into these cloud services we all use. It's just part of it, because that's a necessary part of your experience.
Similarly, with OpenAI's movement, they're going to be incorporating features into their platform, as will Microsoft, Amazon, and Google, that are naturally useful to a broad population of people. Now, I bring that up here because if you happen to be a developer listening to this podcast, think about the moat that you're [00:40:00] going to build for your product.
Don't build things that are so obvious and so vanilla that they're useful to everyone, because you're going to get run over by one of these platforms, or all of these platforms. Instead, find something where there's differentiated value, what I like to call a durable moat, one that is not easily penetrable by anyone who just has as much or more capital than you.
But the real audience for this podcast is association leaders. And if you flip that around, I would say this is important to think about as it relates to your planning, because if you're out there buying systems and software tools, I would urge you to consider what will likely be a feature of Microsoft or Google or Zoom or Slack in 6 to 12 months versus what you need a third-party tool for. So think clearly about where things are going. And rather than having an ecosystem of 50 different tools, think critically about what's likely to retain distinct value independent of the core platforms and what's not. Anyway, that's a little bit down [00:41:00] the rabbit hole in terms of where things are going, but it's really related to what's happened with OpenAI, and this bit of an uproar in the developer community about how, you know, hundreds of thousands of startups were basically put out of business on Monday.
Mallory: Wow. I don't want to go too far down this rabbit hole, but I do have a follow-up question here, because I think about this a lot. Given what you're saying about these features that are important to everyone being baked into platforms like Microsoft or Google, what happens to tools like Otter.ai or Fireflies, these meeting summarizers, note takers? I guess I'm asking, what happens to those when all of these features are already baked into platforms that we currently use?
Amith: They become a relic of history. You know, most of these products will die. And if you are dependent upon them, that could be a short-term problem, or maybe a bigger problem, depending on the nature of the product. For the company founders and for the capital that's invested, that's obviously a negative outcome as well.
But ultimately, you know, think about it from the perspective of economic [00:42:00] cycles and the maturity of a market or a category. Over time, the more mature something becomes, generally speaking, first of all, the barriers to entry in that market get higher, because the expectations of the consumer are also higher.
But early on, when a market is brand new, the expectations are lower. The capital required to get into a market is lower. And that's what's happening here: we're super early. And so there's this craziness, this frothiness, there are crazy amounts of money chasing the prize. And so there are just literally thousands of these things popping up, as you'd expect in a massive potential market that's early, early, early in its maturity, and over time that's going to evolve.
Now, AI is a great thing to study because, you know, traditional economic growth curves tend to flatten out after a while, and the maturity of those markets tends to cap out. I don't know what's going to happen with AI, with six-month doublings. That's just an enormous opportunity. So I think it's an interesting thing to be thinking about deeply.
You know, for these products that go away. Like, I don't know about Otter. I actually love Otter. I use Otter all the time. I used it this morning, and it's right now best in class from my perspective in terms of note taking, summarization, a variety of things. But if Otter was baked into Microsoft Office and it was, you know, 80% as good, I'd start using that.
I mean, that's just how it's going to be. It doesn't necessarily need to be best of breed. It just needs to be good enough. And so that's the reality of this world, I think.
Mallory: Going back to this topic, you mentioned that yesterday, you, me, and 100 million of our closest friends were trying out custom GPTs. I was interested in this number. Obviously, it's a huge amount of people that are using ChatGPT weekly. I wanted to learn more about the demographics of this group. What I found online was about 35 percent of those users are in the 25 to 34 age range, followed by about 28 percent of users in the 18 to 24 range.
I kind of have two questions about this Amith. First, what do you think is driving this massive adoption? [00:44:00] And then second, obviously, we both know associations are always trying to appeal and attract younger, new members from these age groups. How can they tap into a tool like ChatGPT or into this whole sphere of AI to attract younger members?
Amith: To me, it's simple. There’s so much value in this tool and, you know, I don't think this is always true, but certainly people that are earlier in their life experience tend to be more malleable in terms of the tools they're willing to experiment with and try. Not always. But, you know, that tends to be the case of these demographics.
I wasn't aware of them. They're super interesting, but they don't surprise me too much either. You know, I think that's over 60 percent of the total user base between 18 and 34. And I think it's interesting in terms of the value creation for these different age groups. So if you're in that 18 to 24 range, it's basically like college-aged.
And then if you're in the next group up, you're kind of like early career professionals. And so what are people actually doing with this [00:45:00] tool? One important thing to understand about the 100 million weekly active users is that that's not the number of people that have registered for ChatGPT.
That number is massively, massively bigger; a much, much larger number of people have created accounts. These are the number of people who log in at least once per week. And to log in at least once per week, you actually have to be getting value out of the tool. So to me, that's the number one thing: it's creating a new form of value that people previously didn't have access to.
Especially since, it's not quite been 12 months, but it's very close to 12 months since ChatGPT came out. And so if it was a flash in the pan, it wouldn't be having that kind of usage at this point. So that's the first comment: there's clearly a lot of value being created. You and I obviously can both attest to that.
Everyone I've ever really talked to who's gone beyond saying literally hello to ChatGPT has said, this is insane. And with tools like it, you know, they've had similar experiences. Here's the point I'd make for associations, though. Why is this attractive to people in that age range? So, number one, it's valuable.
Number two, it's instant on. [00:46:00] There's no waiting. You go to it, you get your answer. It's 24/7, 365, and it's good. People don't like friction. People don't like waiting around. People who've been around a while might be kind of tuned to having to wait. If you're used to calling up an airline and sitting on hold for 10 minutes, and you just think that's the way of the world, that's life, then you're going to go do that, because that's perhaps your only choice, or you're just used to it. But what if Southwest or American or United or one of these companies put up a really great chat interface that allowed you to have a great conversation, solve your problem quickly, and feel good about it, right?
That could be an amazing differentiator for people who are in that space. And then, of course, it'll become expected and standard. And it's just like, can you book your airline ticket on a website? I don't know of any airline that doesn't offer that now, but originally it was something that reduced friction, increased ease of use.
So, coming back to the association market: associations, I said this on stage at digitalNow, [00:47:00] are really good at encumbering their audience with their process inefficiencies. And what I mean by that is, your website's a pain in the ass. That's what I mean by it. Your website reflects the history of your policies, procedures, and internal systems inefficiencies.
And as a result of that, just logging in can be painful. Giving you money is often painful. And again, I know some associations have done a really good job and worked hard at, you know, modern, contemporary web experiences, but they're the tiny minority. So if you have a lot of friction in your engagement path and then you wonder why people aren't coming to you, it's that simple.
It's not that they don't like you or they don't think that you're necessarily even the best in your profession or in your industry. They might very well think, wow, those guys do have the best content, but you know what? I just want to do this easily today. I don't want to have so much pain having to get that information.
So to me, that's the trend line I'd really pay attention to: clearly there's value, and it's easy, and [00:48:00] it's just instant on. So if associations can emulate that, which they can, the technology is there today to create the ChatGPT experience for your members, for your audience. You can tap into that rather than being behind.
And the good news is there are actually lots of ways to do it, and it is achievable even by associations with very limited means and very limited, you know, internal technology skills.
Mallory: I think it would be really interesting to see an association create a custom GPT, perhaps with some of its knowledge and put that on the GPT store and see what happens. What do you think about that, Amith?
Amith: I think that's a fantastic idea. I'd love to see that happen. It's so easy to do. I would encourage people to try it out, or even for, like, their department. Again, this is one of those things where, if you're an association leader and you're not the CEO, it's a lot better to beg for forgiveness than to ask for permission when it comes to driving change.
I'm not suggesting that you put your most proprietary knowledge inside ChatGPT and just make it public. That is not what I'm saying. What I'm suggesting is you can create an experiment where, [00:49:00] say, for example, you're the director of meetings at your association, and your CEO is anti-AI and thinks AI is just horrible, or, probably more likely, just doesn't care about it.
But you have a lot of content that's related to your meetings. Well, potentially you could do an experiment where you take the pieces of content related to your upcoming meeting that are not particularly sensitive, right? Stuff that you don't really care about being public. They might already be public on your website.
Put it into a custom GPT and the custom GPT might be the 2024 ABC Association Annual Conference and you put it out there in the wild. You say this thing knows about all of our conference details. It knows about all of our sessions. It knows about all of our speakers. It knows about all of our past conferences because we've uploaded all the content.
You as the meetings manager, meetings director, or even a meetings coordinator who just started at your association can literally go do that. Be careful about proprietary and sensitive content, I can't underscore that enough, but you can go play with things that are already public without much reservation.
And so to me, to Mallory's point, this is a fantastic [00:50:00] opportunity, not just for like a generalized GPT, but perhaps even more specialized GPTs for different functions in the association. Now, what I would also say is there are lots of ways to achieve this outcome if you create a custom GPT. And by the way, Anthropic with their Claude tool, and a number of other companies that have similar chat-type products, are going to be doing the exact same thing.
The downside to picking an ecosystem that you live within is that your custom GPT will only live within ChatGPT. So someone has to be a user, possibly a premium user; it's a little bit vague whether custom GPTs will be available to free users. TBD. But you're kind of picking a platform, right? If you're going to do that, could you create your GPT on all the other platforms?
Yeah, probably something similar will exist, but it's something to think about. You're kind of captive to that ecosystem. You wouldn't be able to put this on your website as a standalone thing, for example. There are ways to do that, though. One of the speakers at the digitalNow conference was a product manager from Microsoft, and she [00:51:00] demonstrated how to use Azure to create a chatbot
in the Azure AI services, built on top of OpenAI, and to upload a document that became a knowledge base for that bot, which you can then embed on your website or make available internally. So Azure has a service for this. It's a little bit more on the technical side, for sure. And then, of course, there's Betty Bot, one of our sponsors for this podcast.
And Betty definitely does all of this as well. Betty's big advantage is that it's model-independent. So Betty can utilize OpenAI, Betty can use Cohere, Betty can use open-source LLMs, and Betty does a lot of other things that are what I'd call more enterprise-grade, where it's focused on quality and accuracy and does a lot of other sophisticated things with grounding the truth of content that you don't get with these other environments.
But there's a range of solutions. You don't have to go out and buy something. You can experiment with either zero cost or extremely low cost. So to Mallory's point, I would really encourage you as well to just get out and play with [00:52:00] this stuff.
Mallory: And if you do get out there and play with it, let us know either on LinkedIn, go to the Sidecar community and the Sidecar Sync space, and let us know, we would love to hear about it and chat about it on the podcast.
Amith, OpenAI's announcement of a Copyright Shield for businesses using their models is quite bold, in my opinion. The company said it will pay the legal fees of customers using the generally available OpenAI developer platform and ChatGPT Enterprise. Do you think this move is more about displaying confidence in their systems' ability to avoid copyright issues, or is OpenAI genuinely committing to cover potential infringement costs indefinitely for businesses?
Amith: You know, I think the devil's in the details. It's kind of like buying pet insurance. For those of you that aren't familiar with it, that's an industry that's famous for being difficult to collect claims on if you buy that type of policy; at least that's been my experience.
But I'm not suggesting that Copyright Shield isn't intended to be a broad form of protection; I just think it's worth [00:53:00] studying if you think of it as a reason to use OpenAI. I also think it's keeping up with everyone else, because that's exactly what Microsoft has done, and that is exactly what Google will be doing.
That's what everyone will do, and that's my speculation. By the way, I don't have any inside information on Google, but I am confident that all of these companies will provide that type of indemnity. And why are they doing that? Again, like most other things, it's not out of the goodness of their hearts; it's because it is a competitive differentiator that makes you feel more comfortable with their service.
The underlying thing is that, you know, these models to a large extent have been trained on copyrighted materials. And so the question is, was that legal? And by using these models, will that open you up to potential, you know, infringement claims from the copyright holders in some indirect way?
And so there is definitely a gray area around this, and there are emerging cases happening right now that'll probably tell us a lot over the next 12 months. But I think these companies are trying to provide some degree of assurance around it. Ultimately, [00:54:00] if you listen to different legal experts talk about this, one of the common themes I hear is that the usage of the tool can determine whether something is infringing or not, as opposed to the tool itself.
The question is, is the tool itself inherently infringing on the copyright of a protected work, or was the nature of the training that was used to build the tool a form of fair use, which is a doctrine that basically allows for certain types of use of copyrighted materials? And you know, that question is outstanding.
I think there will be a lot of answers to that open question in the coming 6, 12, 18 months. We'll see what happens. I think ultimately it's good that companies are providing this; for people who are concerned about it, definitely pay attention to that. The last point I'd make is they did very clearly say, and other companies have done the same, that you have to be within their terms of use.
So if you are using their tools in a way that is not intended, then you are not covered. If you intentionally use ChatGPT [00:55:00] to, for example, replicate the work of an artist or a writer, that doesn't mean that OpenAI is going to step in and indemnify you from infringement claims that directly relate to your use of the software against their terms. So keep that in mind. It's not a protect-all no matter what your behavior is; you're responsible for your behavior as well.
Mallory: Amith, earlier you mentioned this CEO mastermind group. I realized we haven't talked much about that on the podcast. Can you share some more details there?
Amith: Sure. I am co-facilitating a CEO mastermind group with Mary Byers, who's a very well-known author, speaker, strategic consultant, and facilitator in the space. She and I put together this CEO mastermind group because we both believe strongly that this is not a technology conversation as much as it is a strategy conversation, an execution conversation, and a culture conversation. And it's such a broad, [00:56:00] sweeping set of changes that associations need to adopt that the CEO must participate in this discussion. And so we said, look, the two of us both have enough experience and relationships in this market to pull off forming a group like this: getting together a group of CEOs to have what we like to describe as a peer learning journey.
And so we have right now about 30 CEOs in this group. We meet on a regular basis, and we discuss emerging topics in AI, of course, but really the crux of it is that we're using experience sharing and forum-style conversational techniques, which is basically a nod to folks like YPO and EO and Vistage, who have, you know, confidential forums that encourage vulnerability and sharing and those kinds of things.
So we have a really good group of people that get together on a regular basis, share what they're doing, share their challenges, share their wins. And then of course, you know, I'm injecting into that a lot of AI content. Mary's doing an amazing [00:57:00] job facilitating the group, and really our intent for the journey forward in 2024 is to grow the group a bit, to have some additional CEOs come in who are deeply committed to change. They don't have to know anything about AI, but they have to be deeply committed to driving fundamental change in their association to join the group. That's what the group is right now, and I'm pumped by it. I love this group. I think there are some amazing conversations, and we're just getting started.
We've only had four meetings so far in what we call a pilot series, which was this quarter. And we're starting again in January with a monthly 90-minute meeting, and then we're also going to be doing an office hour every month. So CEOs are welcome to apply, and the best way to get more information would be the Sidecar community; we'll have a post on there with a little bit more information on the CEO mastermind.
Mallory: Be on the lookout for info about that. This week was heavy on the events. We had digitalNow, we had the [00:58:00] OpenAI developer event. Amith, thank you for the chat today. Thanks for sharing all your insights. And a reminder to everyone here: if you have any questions that you want to ask us, leave them in the Sidecar community that Amith just mentioned. We will see you all next week.
Amith: Thanks for tuning into Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world.
We'll catch you in the next episode. Until then, keep learning, keep growing, and keep disrupting.
November 9, 2023