
Show Notes

Join Amith and Mallory as they delve into the unexpected return of Sam Altman to OpenAI and discuss what the latest drama means for associations. They chat about the imperfections of artificial intelligence and how to balance “good enough” tools with your organization’s responsibility to be a trusted information provider. Finally, they highlight how generative AI search could be the next big (essential) thing for associations. 

Let us know what you think about the podcast. Drop your questions or comments in the Sidecar community: https://community.sidecarglobal.com/c/sidecar-sync/ 

Join the AI Learning Hub for Associations: https://sidecarglobal.com/bootcamp 

Download Ascend: Unlocking the Power of AI for Associations: https://sidecarglobal.com/AI 

Join the CEO AI Mastermind Group: https://sidecarglobal.com/association-ceo-mastermind-2024/ 

Thanks to this episode’s sponsors! 

We have a tremendous amount to share with you this week. First, let's thank our sponsors. Rasa.io is an artificial intelligence newsletter platform specifically built for associations. And basically what rasa.io does is it takes your content and personalizes it for each email recipient.

So rather than sending out the same email to your whole list, you send out truly personalized, one-to-one emails. The result is that rasa.io increases your engagement, your open rates, your click rates, and your member satisfaction.

Our second sponsor for this week's episode is the Sidecar AI Learning Hub. The Sidecar AI Learning Hub is a learning program that allows you and your team to incrementally learn AI all year long with a combination of recorded content that you can access at any time at your convenience, office hours with AI experts every week, and a tremendous community of like-minded, forward-looking AI practitioners in the association space.

You will advance your knowledge and you will help advance everyone else in the community as well. We're super excited about this AI Learning Hub. One other quick note, the first 50 people to sign up for the Hub will get lifetime access for the same price as a one year subscription, which is $399.

Mallory Mejias: Thank you to our sponsors. What a special week this has been. We've got Thanksgiving tomorrow, and also the news is changing by the hour.

We like to plan out our script, our outline for each episode, and typically that's fine to do the day before, the evening before, but this morning we woke up and we had to make some quick edits because we saw more fresh news from OpenAI. And that is our first topic of today. I'm going to give a brief overview of what the last few days have looked like for OpenAI, and then we'll dive right into it.

So if you missed it, Sam Altman, CEO and co-founder of OpenAI, was abruptly fired by the board of directors on November 17th, 2023. The board cited a lack of transparency in Altman's leadership as the primary reason for their decision. This move led to the resignation of OpenAI's president, Greg Brockman, and several senior researchers in protest.

The firing revealed deep internal conflicts within OpenAI, particularly between profit-driven goals and the nonprofit's focus on AI safety and ethics. Following the ousting, Microsoft, a major investor in OpenAI, hired Altman and Brockman to lead a new advanced AI research team. This decision came amidst concerns over Microsoft's $13 billion investment in OpenAI and its impact on the company's stock.

In the wake of these events, Emmett Shear, former CEO of Twitch, was appointed as OpenAI's interim CEO. This comes after OpenAI CTO Mira Murati was initially named interim CEO. The situation remains dynamic with ongoing discussions about leadership and the company's direction. It underscores the complexities of managing rapid advancements in AI technology and balancing innovation with responsible governance.

That was initially the notes that I wrote about this. And then this morning I had to add in today, November 22nd, OpenAI announced that Sam Altman will return to the company as CEO with a new board. Altman is right back where he started, but I think it's safe to say OpenAI is not right back where it started.

So Amith, what are your initial thoughts on all of this news?

Amith Nagarajan: Well, the first thing I'm thinking is someone's going to create an AI-generated movie about this drama. It's just nuts. And on the one hand it's nuts; on the other hand, it's kind of what you'd expect with what's happening in the world of AI as well. There's so much at stake and there are so many conflicting views that it's somewhat natural to think that there's going to be drama like this shaping many of the major AI companies. You know, really, what you have are two forces playing against each other. One is the folks who are what we'd call safety advocates, people who are erring on the side of saying, hey, let's slow things down. Let's make sure we're not releasing things into the wild that potentially could be deeply harmful to the world. And then on the other side, you have people who are saying, well, the only way we're gonna really be able to deal with that issue is if we advance AI rapidly enough to be powerful in a good way.

And, you know, Sam Altman is clearly on that side of the fence. And, you know, Ilya Sutskever, the other co-founder and chief scientist of OpenAI, was the key figure on the board behind this set of changes initially, although he ultimately circled back around and supported Altman's return. Which is a super interesting plot twist, but his view has always been focused on the safety side.

So I think it's gonna be interesting to see what comes out, probably in the coming months, in terms of the board's actual reasoning. One of the things that Altman agreed to as part of being reappointed as CEO late last night was that an internal investigation would be allowed into his prior conduct, and we'll see exactly what that turns up.

I suspect it's probably really going to be largely differences in opinion in terms of how fast we should go. Things like getting GPT-5 started in terms of the training process. There's a lot of speculation about that. Altman had, I think, about a week and a half ago, confirmed that GPT-5 was in training.

Some people are speculating that OpenAI has internally achieved AGI, or artificial general intelligence, which would be, you know, a remarkable leap forward. As powerful as GPT-4 is, it's certainly not AGI. So there are lots of different questions being asked, and I think we'll learn a lot more as, you know, as more is revealed.

So I find it interesting. I think ultimately, with Altman back at the wheel at OpenAI, it's probably a really good thing for OpenAI itself, because OpenAI was likely to lose a large number of its team members: over 700 people out of the nearly 800 employees, so roughly, you know, seven-eighths of the company, had said that they would resign if Sam Altman was not reinstated.

So, you know, that would have obviously been a catastrophe for OpenAI; it would have impaired their ability to really continue as a business if that had occurred. So I think that's an important thing, because OpenAI at the moment is a very important company in the infrastructure world.

I think my main takeaway thought about all this stuff, which I've shared online in a couple of different ways, on LinkedIn and in Sidecar's community, et cetera, is that you have to build with the mindset of swappable models.

You cannot tie yourself to one company, not just because the world is changing so fast and so many advancements are coming, but because you have to be able to protect yourself from downside risk, like the implosion of one of the leaders in the space, which could very well have happened and perhaps could still happen, you know, in the coming weeks or months.

So those are my initial reactions to it. I think it makes for good drama, certainly, but I think there are some very practical implications from this series of episodes that have occurred.

Mallory Mejias: You mentioned there's speculation that OpenAI has achieved AGI, can you briefly define what that means?

Amith Nagarajan: Sure. So artificial general intelligence would be a model that is capable of a wide range of expert-level skills. And you could argue that GPT-4 and Anthropic's Claude 2.1, which was just released yesterday, already have elements of that, because they're good at a lot of different things. But AGI generally has a pretty high bar: number one, having this expertise at a superhuman level in pretty much all domains.

That's a lot of what people are defining as AGI: a model that's really, really good at just about everything. And a lot of people also stitch into the definition of AGI models that have agency, that can take action on your behalf. That's a key thing: a model that is capable of both that level of superhuman reasoning and intellect and knowledge and can also take action.

You know, people's definitions of AGI tend to shift over time. If somehow GPT-4, with what it can do, had been available 10 years ago, I am sure that many people, possibly myself included, might have called that AGI. Perhaps the lack of agency might have prevented it from being called that, because it can't take action on your behalf in its present form. But the essence of the idea is that it's something so powerful that it has, you know, world-shaping impact. And so the definition of it is one thing. The other thing is, you know, what does it actually mean in terms of implications for society and for business. So AGI is like this navigational beacon, in a sense, of where we're heading. And will we be there in 12 months? Will we be there in, you know, five years? Will we be there in 10 years? Will we never get there? Generally speaking, I think most researchers say that AGI is going to happen in the next five years, and a lot of people think it's going to happen in the next two.

Mallory Mejias: You mentioned the battle between the safety advocates on one side and the rapid movers on the other side. I found a tweet on X from someone named Siki Chen that I wanted to share because I thought it was interesting. So basically, the move-slower people ousted the move-faster people, who move fast to start a new company. All the move-faster people will join the move-faster company. And all that will be left are the move-slower people, moving slow to move slower together.

Amith, one, what do you think about that? And then two, is it possible to be a move faster person and also proceed with caution?

Amith Nagarajan: I think it's the natural, you know, kind of lubricant of how minds work that people will tend to go with people they agree with more often than not. And so, yes, that's exactly what would likely have occurred if Altman had joined Microsoft as the head of a new research lab, which was the news as of, you know, 24 hours ago. Very likely, hundreds of people from OpenAI believed in Altman's vision, which was generally a flavor of, I think, move faster.

But I would say that Altman's been one of the more responsible leaders in terms of a kind of balanced mindset of moving fast, but with safeguards. He would have taken a lot of those people with him. And yeah, without the move-slower folks harping in their ear all the time about moving too fast, they probably would have moved much faster than they have been at OpenAI.

And then what would be left at OpenAI would be largely what I consider an unfundable company, meaning that it's basically a bunch of people who say they want to go slower. And as a practical matter, with all the competition in the world, OpenAI would not have remained competitive for very long.

And so, you know, OpenAI is actually in the midst right now of a new round of funding, which is very, very important because, you know, I've heard it said that OpenAI is a cash incinerator, I think, is the term that I heard from a former OpenAI person, reported on some podcast I was listening to.

I forget which one, and that's exactly what it is, because a billion dollars a year in annual revenue run rate might sound like a lot of money, but it's a tiny sliver of what they need just to run inference, which is the process of using ChatGPT and related tools, not to mention training these models, which, you know, costs hundreds of millions of dollars per model going forward and sometimes more.

So I think what would likely have occurred is OpenAI would've died, because, you know, it's just not a fundable commercial proposition; this company, with only the move-slower people remaining, wouldn't have been attractive to investors, is the bottom line. So that's why I was saying earlier, I think OpenAI, you know, basically scarcely avoided a near-death experience here, because without all those people who were driving it forward, it would've been a company that largely wasn't fundable.

Mallory Mejias: Do you think it's fair, though, to equate AI safety with moving slow?

Amith Nagarajan: I mean, that's what some people assume to be true. I actually think that's the most irresponsible thing we can do, because by virtue of moving slower, we're assuming that we are in control of something. And the reality is, we're not. You know, so let's just say you and I were in charge of OpenAI, which obviously we're not, we have no affiliation with OpenAI, but let's just say we had 100 percent control over OpenAI's direction.

Slowing OpenAI down doesn't mean you're slowing down anything, because the rest of the world is moving forward at the fastest possible pace. We know what's happening at Amazon; they're training their own very large model. We know that's happening at Microsoft with other models they're training.

Meta is moving forward. We know that, you know, X.ai is moving forward with a new large language model. And those are only the ones that we're aware of. There are tons of people out there who are actively training more and more capable models. So it's my personal opinion that you have to have AI in your court to make AI safe.

And so we have to develop AI techniques for controlling AI going forward. I'm not suggesting that we throw caution to the wind and be reckless. I think there's a lot of work each company should be doing around safety, and doing grounding and alignment and all these other things, which we can talk more about.

But ultimately, the idea that OpenAI itself alone is somehow in charge of the destiny of what's happening with AI, I think is a false assumption. And so I think OpenAI actually slowing down when there are good actors at OpenAI is actually irresponsible because that means someone else is going to get ahead of them and who knows what that's going to mean.

Mallory Mejias: You touched on this a bit earlier of not being tied to one tool to one company. How should associations approach their AI investments to safeguard against similar uncertainties in the AI industry?

Amith Nagarajan: Well, I've been talking about this in a general sense for a while, in that whenever you're in a fast-moving area of technology, you should move fast, but you should also assume that what you're doing will be replaced. Meaning, if you're building some kind of capability for your association using, say, GPT-4, you should assume that whatever you're building will be essentially obsolete in roughly 12 months, possibly sooner.

Therefore, it's important for you to think about how to replace what you're doing now with what's going to come, even though you don't know what's going to come. So that's a really tough thing to kind of even just wrap your head around. But the idea essentially is this: when you build something using a tool like GPT-4, put in a layer between yourself and GPT-4.

Almost like a layer of insulation. Think of it that way. And that way you can swap out GPT-4, possibly with GPT-5, possibly with Claude, possibly with something else, and you're not tightly coupled to GPT-4 specifically. I'm speculating when I say this, but I'd be willing to wager a significant sum that that's exactly what Microsoft has been doing in building Copilot.

Even though they have $13 billion invested in OpenAI, and even though they, right now, from a business perspective, seem to be tightly coupled to OpenAI, their software architects, I'd be willing to bet, have built in a layer of indirection, or what we call an abstraction layer in software development speak, between the model itself and all the other software they've built.
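To make that "layer of insulation" idea concrete, here is a minimal sketch in Python of what such an abstraction layer might look like. The class names, model identifiers, and the second provider are illustrative assumptions, not a description of how Microsoft or any particular vendor actually builds this.

```python
# A minimal sketch of a model-abstraction layer: application code depends on
# this interface, not on any one vendor's SDK, so the underlying model can be
# swapped (GPT-4 today, something else tomorrow). Names are illustrative.
from typing import Protocol


class TextModel(Protocol):
    def complete(self, prompt: str) -> str:
        ...


class OpenAIModel:
    """Wraps an OpenAI chat model behind the generic interface."""

    def __init__(self, model: str = "gpt-4"):
        from openai import OpenAI  # imported here so other providers don't require it
        self.client = OpenAI()
        self.model = model

    def complete(self, prompt: str) -> str:
        response = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content


class ClaudeModel:
    """Hypothetical second provider: same interface, different backend."""

    def __init__(self, model: str = "claude-2.1"):
        import anthropic
        self.client = anthropic.Anthropic()
        self.model = model

    def complete(self, prompt: str) -> str:
        response = self.client.messages.create(
            model=self.model,
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.content[0].text


def summarize_article(model: TextModel, article: str) -> str:
    # Association code only ever talks to the TextModel interface.
    return model.complete(f"Summarize this article for our members:\n\n{article}")
```

With something like this in place, swapping GPT-4 for a future model, or for a different vendor entirely, means changing which wrapper you construct, not rewriting every place your systems call a model.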

And I'm encouraging associations to think that way in terms of how they're designing their business. And there are lots of approaches for how you can do that. I think the key to it, though, is this: most associations are accustomed to technology decisions made every decade, where they say, I'll replace my AMS or my FMS or my LMS every 10 years. You can't think that way with this type of stuff and just, you know, fully hitch yourself to a particular company.

You have to have a safeguard in place, and not treat switching models as some worst-case scenario, but actually assume that you're going to switch models and build up your infrastructure with that in mind. That's really what I'm trying to describe when I say that organizations should be planning for a lot of change.

Mallory Mejias: If an AI giant like OpenAI is struggling to balance innovation with responsible AI and governance, it's safe to say that it's something we all need to be thinking about. We talk a lot on this podcast about innovation. It's in the title. Go try things. Go experiment. And we don't spend a ton of time advocating for people to slow down, I would say.

What steps can association leaders take right now to ensure that their staff are using AI responsibly and ethically?

Amith Nagarajan: To me, the number one thing is education because most associations and most staff at most associations are not yet up to speed really in general on AI. You know, people are taking steps. It's exciting to see people doing a lot of different things to move themselves forward, but I think education is the most critical thing to start and then continue doing because the world is moving so fast. If associations aren't well educated on AI just in terms of the capabilities, they don't stand a chance of being responsible in their use of AI because responsible use mandates that you first have an understanding of what AI can do.

So that's the first step is awareness and education. That's really kind of a combined step is make yourself aware of what's out there and then learn how to use it. And once you learn how to use it, then you can have an intelligent discussion about what you should and should not do. Are there responsibility or ethics guidelines that you want to put in place for your association?

In my opinion, the answer should absolutely be yes, where you give your staff very clear guidelines on what to do and what not to do, and you give your members clear guidance on what the association is recommending and not recommending. So I'll give you an example. When it comes to making key decisions about, let's say, which speakers should speak at your annual conference, that's a very important decision right now.

It's made essentially manually. I don't know of any association that's fully automated that with AI. It's technically possible to fully automate that entire process, from abstract submission or speaker proposal submission all the way through the selection process and the scheduling process. We talked about that at digitalNow using a framework called AutoGen.

That was actually the example that we illustrated on stage, and it's doable today. But is that the responsible thing to do? My opinion is that in that scenario, it's really key to have human review of the AI's work. A key component of this is that it's not autopilot, and that's one of the reasons Microsoft calls theirs Copilot.

Google calls their AI assistant Duet, which is the tool they're baking into Google Workspace. And I think the mindset has to be a copilot type of mindset. So that means that you still have a job to do. It's easy to say, hey, ChatGPT is so great at creating blog posts, I'm just gonna, like, take what it produces and post it on my website.

Most associations don't have that mindset, but if you don't tell your team what's acceptable and what's not acceptable, they don't know where to go. They don't know if they're allowed to use it at all, or, on the flip side, they might use it in a totally uncontrolled way. So I think that's an important thing to do, to have some guidelines, but you have to educate first.

Because if you take an approach to guideline setting when you really don't know what you're doing, that's like me, who's never flown a plane, trying to teach an airline pilot how to fly a plane, or what's good and what's bad. I have to at least have some general understanding of what's involved before I try to provide guidelines to someone who's doing the work.

Mallory Mejias: So I'm hearing that as step one being awareness and education around AI and step two being that human oversight and guideline piece. Is that right?

Amith Nagarajan: That's right. And the other thing I think is really important, and this is where I'm an advocate for stepping on the gas in terms of investing in AI models, is to actually have AI help check AI. So the future is a multi-model world. And what that means is that GPT from OpenAI is great, Bard from Google seems to be getting great, and so on.

These are really powerful tools, but it's not a one-size-fits-all environment, and it's not about relying on one model, just like you wouldn't rely on one person to do something important. And so, for example, it's becoming increasingly possible for associations to have their own trained models that represent their body of knowledge, and that specialized model can be used as a fact checker and an accuracy checker for other content. So maybe you use GPT-4 to generate articles for your blog because it's good at that, but then you have a specialized model fact-check that piece of content against your other content, right? Or provide analysis of whether it's scientifically correct, if it's a scientific domain, and so forth. And that's where I think AI is really a key part of the solution for responsible AI, because you can use AI to help you check other AI, making sure the outputs are accurate and responsible and so forth.
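As a rough illustration of that generate-then-check pattern, here is a minimal sketch assuming an OpenAI-style chat API for both calls; the "assoc-expert-model" name is a hypothetical placeholder for an association's own specialized model, not a real model ID.

```python
# Minimal sketch of one model drafting content and a second, specialized model
# reviewing it. "assoc-expert-model" is a hypothetical placeholder for an
# association's own fine-tuned fact-checking model.
from openai import OpenAI

client = OpenAI()


def draft_article(topic: str) -> str:
    # A broad, general-purpose model writes the first draft.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Write a short blog post about {topic}."}],
    )
    return response.choices[0].message.content


def fact_check(draft: str) -> str:
    # A narrow, domain-trained model (hypothetical name) reviews the draft
    # against the association's body of knowledge and flags problems.
    response = client.chat.completions.create(
        model="assoc-expert-model",
        messages=[
            {"role": "system", "content": "You are a domain expert. Flag any factual errors."},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content


draft = draft_article("new continuing-education requirements")
review = fact_check(draft)
print(review)  # a human still reads this before anything is published
```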

Mallory Mejias: I imagine some listeners thinking, we don't have a dedicated AI person on our staff, we're a small team, or maybe we're a big team, but we still don't have that designated AI person. Who do you think needs to worry right now about building these models or purchasing these models from a vendor?

Who should be taking the lead at this point on those projects?

Amith Nagarajan: You know, the default thinking in most organizations would go to the CIO or IT Director because it's a technology topic. And certainly that role is super critical in this area. I would really encourage all IT Directors or CIOs to really, really focus on learning this stuff. So they understand the tool set.

But I would think of the IT Director or the CIO as kind of like a general contractor you hire to build a home or to build an office building for you. They're not the architect of what you're doing. They don't necessarily control the vision of what you're doing for the business, but they can execute on building what you want built.

So from an architecture perspective or a business vision perspective, I think that's where the CEO needs to personally invest their time in learning what AI can do, how it affects the business landscape and how it should affect their strategy. Because really, you're gonna rewrite the strategy book over the next few years, possibly over the next six months if you're on it.

And you cannot rewrite the strategy book for your organization if you don't understand the capabilities of AI models. So getting up to speed at a high level, working with other CEOs who have a similar vision that the world is going to change and you have to learn this stuff, and then ensuring that your IT Director, if you have one, or your CIO is really pushing aggressively in terms of their own development and their team's development so they can help you execute the plan, is key. So the CEO isn't gonna execute all this work, but they have to understand it so that they can, you know, have a vision and then direct that vision with the rest of their team. The other thing I'd say, Mallory, is this is not like an AMS implementation where you can really lean heavily on IT and then it's their job to get it done.

You really have to get every person, every single employee in your organization, to be part of it. There is no central authority around AI. In my opinion, it's every employee in the company, from the early-career person who's right out of college all the way to your most senior staff.

You have to put a mandate out there and get them all on board with learning AI. Because all of those brains learning this stuff are going to have a different perspective on it. And they're going to contribute different ideas, and they're going to be innovating at their own pace and in their own ways.

So you really have to make it a team wide effort. Which also is exciting, because it's a new thing to learn. And if you build that into your culture the right way, it'll really revive a different mindset in the culture.

Mallory Mejias: This flows really well into our next topic, which is, is good enough really enough? In our most recent episode, we discussed the exciting possibilities of custom GPTs and their applications. Following this, we received an insightful comment from a listener, Jack, hello Jack, if you're listening, highlighting a critical aspect of AI's limitations.

Jack's experience with AI-powered transcriptions for educational content brings to light a prevalent issue: the imperfection of artificial intelligence. He pointed out that while AI offers efficiency and cost savings, it sometimes compromises accuracy and reliability. Jack says, “silly mistakes aside, every now and then, scanning AI generated transcripts you'll catch a nuanced mistake that is not obvious on an issue of importance, where the wrong information could be a real problem.”

I know personally, I use the tool Descript to edit all of our podcast episodes. Descript provides AI-powered transcriptions as well, and those transcriptions are not perfect. They're pretty good, but they're not perfect. So I definitely understand Jack's point. And this concern extends beyond just transcription services to various AI applications, like content generation, for example. Associations are committed to delivering accurate and trustworthy content, and Jack's dilemma raises a crucial question: can we rely on AI that is good enough, but not perfect, especially when an association's reputation for providing trusted information is at stake? Amith, what are your thoughts on settling for good enough when using AI tools, particularly as an association?

Amith Nagarajan: You know, I wouldn't settle for good enough with an employee, and I wouldn't settle for good enough with an AI application producing output. So I think it's a complex question, in that, you know, if you look at AI as this, like, solve-all-problems-in-one-shot-and-you're-done thing, you're definitely not going to be satisfied with the results.

I think you can get some really good things out of AI, and the models are getting better. So the types of issues you're running into with Descript, and Jack has run into with transcription of educational content, will decrease significantly. You know, AI's curve is accelerating so radically that it's very likely in six months, 12 months, 18 months, with each of these doublings in AI power, you're gonna see far fewer of those kinds of mistakes.

So the mistakes will be less and less frequent. However, there will still be mistakes. And in fact, the fact that they're less frequent is something of a concern in a way, because when they do occur, they're still significant, but they won't be frequent enough to keep people vigilant. So people will get a lot more comfortable with the idea that maybe they don't need to review it, right?

Because our goal, generally, as humans, is to work as little as possible on tasks that we don't enjoy. And I don't know a whole lot of people who love proofing content like that, especially if it's pretty darn good. It's really easy to gloss over it and to then eventually just say, yeah this is like 99.99 percent of the time it's great, it's better than most humans would ever write. It's good enough.

So this actually ties back into our earlier discussion about AI safety and reliability quite nicely. So, A, you shouldn't rely on models by themselves; I would probably think that to be true even in a few years as these models get radically better. B, I think this is where multi-model environments are really key when you have what I call mission critical content. Mission critical content being, again, the content you share publicly, the content that guides practitioners in your field on their behavior and their decisions. In the world of medicine, certainly nothing could be more mission critical.

But that's true also for the legal field, for accounting, for engineering. If you're building a bridge, I want to make sure you have the right information for that bridge, right? So there are some big-deal, you know, type issues out there.

So I think that what is going to happen in the near future, meaning the next 6 to 12 months, is that bigger associations that have the wherewithal and the vision to realize the potential of this, but also wanting to ensure the accuracy is higher, are going to build custom, fine tuned models, which specifically have expertise in their domain.

So, something like GPT-4 or Anthropic's Claude, and other models like them: these are large language models. They have really good general-purpose knowledge, kind of like a really well-educated person from a university who has a degree in general studies, right? They've studied literally a little bit of everything.

That's what a large language model basically is. Now, you can kind of coerce a large language model to be better in certain areas by prompting it certain ways, but its knowledge is very general. In comparison, the association has content that goes super deep in a particular area, and the association knows their content is essentially correct in that domain.

And so what's possible to do, in an increasingly accessible, affordable, you know, executable way, is to train your own model: to either fine-tune an existing model or literally train a model from scratch, which is still a substantial endeavor. But fine-tuning a model is very accessible now. So you fine-tune one of the smaller open-source models like Llama or Mistral, and there are others out there, with your content, and you train that model essentially to turn into your fact checker.

So you'd say, for example, Jack might have a fact checker model that is an expert, a deep, deep expert in all of his content, but doesn't know anything else. This is a model that's like, you know, 1000 miles deep, but an inch wide. As opposed to a large language model, which is the opposite. It's incredibly broad, but not very deep in anything.

And you use them together. And that's where you'd say, okay, we're gonna develop a piece of content, a transcription of educational content with the broader model, because the broader model has great general language skills, but then we're gonna check it with the smaller model that's been fine tuned on specialized content.

And then they work in unison, just like two people working together who are skilled in different ways can improve the quality of the output. You're going to see that kind of approach result in far greater accuracy and better quality work, just like a team approach with human workers would result in a better output than one person working alone.
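For readers curious what fine-tuning a smaller open-source model on an association's content can look like in practice, here is a heavily simplified sketch using the Hugging Face transformers, datasets, and peft libraries. The base checkpoint, corpus file, and hyperparameters are illustrative assumptions; a real project would involve much more data preparation and evaluation than this.

```python
# A minimal sketch of fine-tuning a small open-source model on an association's
# own content so it can act as a domain fact-checker. The base checkpoint,
# file name, and hyperparameters are illustrative assumptions, not a recipe.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"  # assumed open-source base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA adapters keep the fine-tune affordable: only small added weight matrices train.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# Hypothetical JSONL export of the association's articles: one {"text": ...} per line.
corpus = load_dataset("json", data_files="association_corpus.jsonl")["train"]
corpus = corpus.map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=1024),
    remove_columns=corpus.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="association-fact-checker",
                           per_device_train_batch_size=1,
                           num_train_epochs=1,
                           learning_rate=2e-4),
    train_dataset=corpus,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```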

So that's what we're going to see happen in the next 12 months. I'm advising a number of associations on this exact type of strategy right now. It is still somewhat expensive to, you know, custom fine-tune models. It's still something that I'd say only larger organizations have a shot at doing, say, in the next 6 to 12 months, but over time it will be accessible to almost all organizations to do this kind of thing.

Mallory Mejias: Is it safe to put an association's proprietary information into a tool like Llama right now?

Amith Nagarajan: So yes, because something like Llama is an open-source model that you download and run in your own dedicated computing environment. So inherent to that, you're controlling the environment. You can even use publicly hosted models like OpenAI's and Anthropic's models in a safe way.

As long as you sign up for the paid versions, and those paid versions have terms of service that are acceptable to you. So there are ways to use both the public APIs and also, certainly, the ones that you download and run yourself, the open-source models. Those are inherently more secure because you're running them in your own environment.

Mallory Mejias: I want to go back to your earlier point, the first thing you said on this topic, which was I wouldn't settle for good enough from an employee. And I want to challenge that a little bit because isn't that exactly what we expect from employees? Good enough? We don't typically expect perfection out of our employees, so wouldn't you say that even though these AI models are good enough, that's similar to the kind of work we're getting out of humans?

Amith Nagarajan: Yeah, that's a really good point, Mallory. And I think the question is, what is good enough, right? It's kind of like defining AGI; it's a relative bar. My way of thinking about the question was initially, you just take the initial output from the employee and post it online, being like, hey, it's good enough. Versus, if the employee has reviewed their own work once, you know, and if somebody else has checked the work, then that is good enough; that's the best reasonable effort, right?

Like, if we have applied all commercially reasonable efforts to that work, then yes, it is good enough. And I think the same thing is true for AI. There's this weird divergence in mindsets that people have. When they work with an employee and get output, they would generally say, well, let's proofread this or let's have someone check it.

But with AI, it's like this magical box that produces this document, and you're like, okay, I'm just going to, like, assume this is good enough. But it's not, especially with where AI is today. It's generally not good enough by itself. It's what we call a one-shot versus a multi-shot approach to AI, where a single shot is just, oh, I prompted it.

I take the output. And generally speaking, you know, is it good enough? 80 percent of the time, sure. But what about the 20 percent of the time? That's a problem, right? It's a significant percentage of the time where it's not good enough. So I'm simply saying that, like, if you apply reasonable efforts to validate that the output is good, then I think you're fine. But just saying, hey, let's get a transcript of an educational program from a video, and just posting it? You wouldn't do that if a human transcription service provided it, and I wouldn't suggest doing that with AI either.

Mallory Mejias: So it sounds like the key here is really kind of like in our earlier topic, oversight and teamwork are essential, but that could be oversight and teamwork of AI models or humans.

Amith Nagarajan: Right. Yeah. The way these models work right now, there's a lot of inefficiency in how we access them, because people are literally talking to these models through interfaces, and you can get a lot done. But the world of multi-model that I keep referring to is around the corner for most people. There are frameworks like AutoGen, Semantic Kernel, LangChain, and other tools out there that allow you to very easily construct multi-agent scenarios like the ones I'm describing in this discussion, where one model checks another model in this agent framework, and they can also take action. That scenario, I think, is very powerful, because it allows you to guarantee that there's a high level of checking being done in the work, perhaps far more checking than actually occurs in reality today. You know, most people have these standard operating procedures that they spend a lot of time building out.

And do people actually follow them exactly every time? I mean, some do, but some don't. And, you know, do you know when there's been a variance from your standard operating procedure to what actually occurred? Usually the answer to that is no. So I think we actually will get to a much better answer, but human oversight, particularly for the most mission critical work, I think is going to be important for quite some time.
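As one concrete, hypothetical example of the agent frameworks mentioned above, here is a minimal sketch of a two-agent review loop in AutoGen with a human kept in the loop. The agent names, prompts, and configuration details are illustrative assumptions based on AutoGen's published usage patterns, not a recommended production setup.

```python
# A minimal sketch of a multi-agent review loop using the AutoGen framework:
# one agent drafts, a second critiques, and a human stays in the loop. Agent
# names, prompts, and config details are illustrative assumptions.
import os
import autogen

config_list = [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}]

writer = autogen.AssistantAgent(
    name="writer",
    system_message="Draft content for the association's members.",
    llm_config={"config_list": config_list},
)

reviewer = autogen.AssistantAgent(
    name="reviewer",
    system_message="Check the writer's draft for factual or policy problems and list them.",
    llm_config={"config_list": config_list},
)

# human_input_mode="ALWAYS" keeps a person approving each step: copilot, not autopilot.
editor = autogen.UserProxyAgent(
    name="editor", human_input_mode="ALWAYS", code_execution_config=False)

chat = autogen.GroupChat(agents=[editor, writer, reviewer], messages=[], max_round=6)
manager = autogen.GroupChatManager(groupchat=chat, llm_config={"config_list": config_list})

editor.initiate_chat(
    manager, message="Draft a 200-word summary of our new certification requirements.")
```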

Mallory Mejias: Are there certain use cases of artificial intelligence that you would warn against when accuracy is paramount? I know you mentioned large language models and using ChatGPT, for example, which has access to a lot of general information, but anything else?

Amith Nagarajan: Yeah. I mean, I think we're at the stage where, you know, just like I wouldn't want to get in a car today in late 2023 and just turn on full self-driving in a Tesla, or the equivalent in other vehicles if it was out there, and just go to sleep and hope that I get where I'm going, right? I'm not gonna hand over complete control of my life and my family's lives to an AI yet in that scenario. Will I be willing to do that in the future? Probably, because at some point the AI is gonna be so good and so battle tested that it'll actually be safer to do that than not, right? And in a similar way, I think the same is true in a business context, where I'm not going to, like, literally just take, you know, mission critical advice from an AI yet.

So if you're a health care association and your members are practitioners in a particular medical specialty area, do you want an AI, in an unsupervised way, to just give medical advice to your members on how they should practice in their field? Probably not. I mean, that sounds really scary to me based on the current state of AI, because the models are still very early.

If you think about it in the grand scheme of things, these models are still very early. They masquerade as experts in everything because they're very confident in what they say. But the reality is, there are lots and lots of holes in them still. So for mission critical scenarios, I would stay away from them for now.

I wouldn't not use AI. I would simply ensure that the AI is being checked by a human for key pieces of information, and I wouldn't put the AI in those mission critical roles. There are plenty of other things AI can do today that are completely transformative for our organizations and for society that don't need to touch those mission critical areas yet.

And over time, as the technology progresses, let's use it in mostly non-mission-critical areas initially, have human oversight for now in most areas, and then over time we can see where it goes, right? The multi-model scenario that I'm describing will be a big part of that progress in the field.

Let that progress determine when we want to take, you know, some of those controls out of the mix. But for now, I'd be very, very skeptical of people who want to, like, fully automate truly mission critical decision making, again particularly around things like building bridges or delivering health care.

Mallory Mejias: What about data analysis? I know we've been playing around with this feature in ChatGPT at Sidecar. Do you know if there are hallucinations with the data analysis feature, or is that pretty accurate?

Amith Nagarajan: The data analysis feature. So there's this feature in ChatGPT, in the paid $20-a-month version, called Advanced Data Analysis. You can upload a file and then ChatGPT will analyze the file for you. And in the scenario that Mallory is describing, there are no hallucinations in the analysis, because the way that works is ChatGPT generates a Python program, and that Python code actually then manipulates your data and gives you an output. So it's using your data. It's using a deterministic program written in Python by the AI, and it gives you an output. Now, could it have flaws? Yes, it could have written the code incorrectly and calculated what it's doing in an incorrect way.

But will it hallucinate and inject false data? It won't do any of that, because it's using your data files. But should you still check the work that comes as an output from the data analysis tool within ChatGPT? 100 percent. You should look at what it's doing, and you should actually, you know, again, double-check it just like you would something that came from a team member.
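For a sense of what that looks like, the code the tool writes and runs against the uploaded file is ordinary, deterministic Python along these lines; the file and column names here are hypothetical.

```python
# Roughly the kind of Python the Advanced Data Analysis feature generates and
# runs against an uploaded file. Column names are hypothetical; the key point
# is that the numbers come from your data, not from the model's memory.
import pandas as pd

df = pd.read_csv("member_renewals.csv")                     # the uploaded file
summary = df.groupby("membership_tier")["renewed"].mean()   # renewal rate per tier
print(summary.sort_values(ascending=False))
```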

Mallory Mejias: Exactly. Our next topic is Generative AI Search. U.S. News & World Report, a leading provider of service news and information, has taken a leap in the digital information space by launching generative AI search functions across its website, usnews.com. This feature integrates advanced AI technology to enhance the user experience, offering more dynamic, personalized, and interactive search capabilities.

The AI-driven search tool is designed to understand and respond to user queries with more contextually relevant and comprehensive information, transforming the way information is consumed and engaged with online. This move by U.S. News is a clear indication of the growing influence of AI in the realm of digital journalism and information dissemination.

It raises questions about the future of content consumption, the role of AI in enhancing user experience, and implications for information accuracy and reliability. Considering U.S. News' integration of generative AI search, Amith, how could similar technology benefit associations with extensive content libraries?

Amith Nagarajan: Yeah, you know, I think this is a really fascinating and exciting thing that U.S. News & World Report put out. It's still in beta, but I think it's 100 percent directly relevant to the association world, because, you know, part of what happens in the world today is that it's not necessarily the best content that wins, but the content that you can access with the least friction.

That's good enough, right? It doesn't have to be great. And so friction really becomes the enemy of associations. It becomes all of our enemy, right, in terms of providing a great user experience. And so what happens on a lot of association websites is you have a complex navigation system. Maybe it's antiquated.

Maybe it's brand new, but it's probably complicated because your association's complicated. And so what you're essentially doing is imposing your complexity on me as the user. So if I want to find something on your website by browsing or by searching, I get lots and lots of stuff. It's, you know, more common that I get more results than fewer results.

And that's overwhelming. A lot of associations have said, well, we'll solve this by, you know, implementing a federated search tool. And a federated search tool essentially is a tool that can search across multiple repositories of information. So it can search against databases and CMSs and, you know, multiple other document repositories.

But federated search tools, in their current form, typically actually increase the overwhelming nature of the results, because yes, you've now exposed more content to the search, but as a result of that, you actually present even more overwhelming results to the user. You don't actually help them get to the information they need faster in most cases. And so what U.S. News & World Report has done that's interesting is they've taken generative AI and essentially made their search a chat process. So I tried it last night, and it was very interesting. I have a high school sophomore who's starting to think ahead about college, at least as much as any 16-year-old can really think ahead.

And, you know, he's been talking about potentially becoming a doctor, and I am really excited about him wanting to do something that helps people. It's a great profession. It's interesting, as a side note, to think about what the world of medicine will look like when a current high school sophomore graduates from medical school and goes through residency.

And that's 10-plus years from now. In any event, I decided to ask U.S. News & World Report some questions. I know that they're strong in rankings data, particularly around colleges and lots of other areas; they're big in that. And so I asked the U.S. News & World Report search tool, you know, for some universities that would be good for my son to attend that would prepare him for a top medical school, and it provided an interesting answer. It generated a response for me. It did show a number of links, but it also gave me about two or three paragraphs of explanation. First of all, it was kind of cool: it said that's a great profession, a noble profession, for your son to be considering.

And then it talked about, like, different universities that might be good choices. And then I went on to say, well, you know, he is an avid skier and an outdoors person; which of these universities potentially have great recreational options? And it refined its results and started talking more about, you know, universities in Colorado and Utah and the Northeast that are near mountains.

So it worked kind of the way I'd expect it to, where it's essentially a ChatGPT-type experience, but specifically trained on this vast content repository that this publication, U.S. News & World Report, has been building for decades. And this is exactly the use case that's available to associations.

An association in its market has a brand much like U.S. News & World Report: you're known for being the best at a particular thing. So you have a great brand, and you have a content repository that's probably unparalleled in your field. And so doing the exact same thing is possible, where you can have a chat-based tool, which is, you know, really blending search and chat into one thing, essentially.

And, you know, that's exactly what associations can do. There are lots of ways to do it. Obviously, Betty Bot, which we've talked about, a prior sponsor of this podcast, is one of the tools that Sidecar uses and a number of associations use already. And there are a number of other ways to go about doing exactly what U.S. News did. But to me, it's, you know, one of those things that's both exciting and kind of like the obvious next step for pretty much every publisher.
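A common way to build this kind of generative search over your own content is retrieval-augmented generation: embed the content, retrieve the closest matches to a question, and have a model answer from that retrieved text. Here is a minimal sketch; the libraries, model names, and sample articles are illustrative assumptions, not how U.S. News or Betty Bot actually implement it.

```python
# Minimal sketch of generative search over an association's own content:
# embed the articles, retrieve the closest matches to a question, and have a
# language model answer using only that retrieved text.
import numpy as np
from sentence_transformers import SentenceTransformer
from openai import OpenAI

articles = [
    "How to prepare an abstract for the annual conference...",
    "Continuing education requirements for 2024...",
    # ...the rest of the association's content library
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
article_vectors = embedder.encode(articles, normalize_embeddings=True)


def answer(question: str) -> str:
    # Find the most relevant articles by cosine similarity.
    q = embedder.encode([question], normalize_embeddings=True)[0]
    top = np.argsort(article_vectors @ q)[-3:]
    context = "\n\n".join(articles[i] for i in top)

    # Ask the model to answer from the retrieved context only.
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer using only the provided articles."},
            {"role": "user", "content": f"Articles:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```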

Mallory Mejias: What a neat example with your son. Is a custom GPT an example of generative AI search, or could it be?

Amith Nagarajan: You know, a custom GPT could do that, because you could upload into the custom GPT content that's specific to you. For example, I could upload a document to the custom GPT that has links to a whole bunch of articles that I've written and instruct that GPT to use those links and to provide those links in its responses.

So you could kind of emulate it. Custom GPTs at the moment are a little bit limited. They can't really ingest massive amounts of your content, but they can give you an opportunity to try out the idea for sure. And then, if you want to scale it in a production-oriented way, there are lots of ways to do that.

But yeah, a custom GPT actually is a fantastic idea, Mallory, for playing with the idea and testing out this type of thing.

Mallory Mejias: I have my own personal example from this past weekend. I was looking at hotels for a potential trip next year in Tennessee. I think the hotel was called Bolt Treehouses or something like that, with these really cool cabins up in the trees with glass all around. And I had some questions about what was included in the rate, whether they included food and whatnot.

So I navigated to the FAQs and I couldn't believe my eyes when I saw that this hotel had a Bolt GPT, named after the hotel, in its FAQs. I was really excited and immediately started interacting with it, because I'm not one to typically engage with the regular, old style of chatbots. I feel like normally, when I'm asking a chatbot a question, it's not simple enough for it to just respond in a pre-programmed way.

So I engaged with Bolt GPT and it gave me all the information I wanted. I even pushed it a little bit and said, you know, are there any restaurants near the hotel, since it's out in the middle of the woods, and it gave me some options. I was so impressed to see it. I'm assuming this was a custom GPT that they put on their website, but that feature just came out, what, less than two weeks ago.

So it was really neat to see a feature like that on a hotel's website, and it was incredibly useful, and it made me more inclined to book with them for sure.

Amith Nagarajan: Definitely. I mean, really what that gets to is, how can we better serve our audience? How can we reduce friction, taking someone from an idea or a thought directly to the right answer for them as quickly as we can, as accurately as we can, and as personalized as we can? And, you know, not too long ago, this whole concept would have been totally in the realm of sci-fi, and now it's here.

And so, you know, being able to have conversations with brands about their content, about their offerings, is 100 percent the expectation people are going to have. Today, you're excited seeing that on a website, and since we're spending so much time in this field, you know, it's cool to see people innovating in that area. But 12 months from now, and certainly 24 months from now, if a hotel's website didn't have that, you'd look at it and say, oh, I really don't want to deal with them, because this other website makes it easy to do, right?

Or if you're on an aggregator site like a Kayak or an Expedia, the ones that offer a really good experience like that, that's where you're gonna go because it's lower friction and higher value for time.

Mallory Mejias: I think I already feel like that today, to be honest.

Amith Nagarajan: Totally.

Mallory Mejias: What role do you see AI playing in the future of content management and curation within associations? I know you and I have talked about AI-generated tagging, for example. Can you give a little bit more information on that?

Amith Nagarajan: Yeah, sure. Well, I mean, AI-generated tagging is, to me, kind of a basic place to start. Tags are still really useful in the realm of AI because they provide essentially quick summary tidbits that a human can look at and say, hey, what is this article about? And they can be used for classification in some respects.

And, you know, things like similarity searches and product cross referencing, things like that. I think that ultimately what you're going to see happen with taxonomical type searches and tagging is that they'll become, I don't think they're going to be less important necessarily, but they'll become more invisible to most people.

So you can use AI right now to do all the tagging for you. We have a number of articles on the Sidecar website, and the Ascend book, which is available at sidecarglobal.com/AI as a free download, talks about taxonomies and tagging through AI as well. It's a pretty straightforward thing to do.

You know, the question in my mind, Mallory, is really about, like, what's the problem we're trying to solve to begin with with taxonomies? Like, why do we do them? Because I don't think there's anyone out there who really loves doing taxonomies, and if you do love them, my apologies. But the point is that most people don't choose that as the thing they're gonna spend their time on unless there's a really good problem they need to solve. And so let's do that root analysis of what the problem was, and how we can solve that problem best going forward. And I think there's a whole world of new opportunity today to solve for that type of issue compared to what we've done historically.
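For the AI-generated tagging piece specifically, a minimal sketch might look like this, assuming an OpenAI-style chat API. The prompt wording and model name are illustrative, and in practice you would validate the returned tags before storing them against your content.

```python
# A minimal sketch of AI-generated tagging: ask a model for a handful of topic
# tags per article and store them alongside the content. Prompt wording and
# the model name are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()


def tag_article(text: str, max_tags: int = 5) -> list[str]:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (f"Return a JSON array of at most {max_tags} short topic tags "
                        f"for this article:\n\n{text}"),
        }],
    )
    # The model is asked for a JSON array; a real pipeline would validate this.
    return json.loads(response.choices[0].message.content)


tags = tag_article("New guidance on continuing-education credits for 2024...")
print(tags)  # e.g. ["continuing education", "credits", "2024 guidance"]
```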

Mallory Mejias: We've heard it a lot on this episode: awareness and education around AI are essential. I highly encourage all of you to check out our new AI Learning Hub for Associations. With that $399, you will get a full year of access to lessons that are regularly updated, keeping you in step with the latest AI advancements.

You get access to weekly office hours with direct engagement with AI experts, so you can clarify, question, and apply AI concepts in real time. And finally, you get access to a community of like-minded peers, where you can collaborate, grow together, and share your experiences and insights, which will ultimately enrich your AI journey and the journey of those around you.

So I highly encourage you to check out that Learning Hub. You can access it at sidecarglobal.com/bootcamp. Thank you so much for your time today, and I hope you have a happy Thanksgiving. Happy Thanksgiving to all our listeners, too.

Amith Nagarajan: Thanks, Mallory. Happy Thanksgiving, everyone.

Thanks for tuning into Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world.

We'll catch you in the next episode. Until then, keep learning, keep growing, and keep disrupting.

Post by Mallory Mejias
November 27, 2023
Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Mallory co-hosts and produces the Sidecar Sync podcast, where she delves into the latest trends in AI and technology, translating them into actionable insights.