Sidecar Blog

[Sidecar Sync Podcast Episode 10]: Ten AI Predictions in 2024 – Sidecar

Written by Mallory Mejias | Dec 29, 2023 9:58:28 AM

Show Notes

In this special edition episode, Amith and Mallory delve into the future of AI with a discussion on 10 AI predictions for 2024:

  1. Natively Multi-modal Models Go Mainstream 
  2. Open Source AI Models Nearing GPT-4 Capabilities 
  3. Microsoft Copilot Gets Deployed at Scale 
  4. Rise of Common Data Platforms for Enhanced AI Utilization 
  5. Consumer Expectation for AI-Enabled Brands 
  6. AI Misuse in Information Integrity and Cybersecurity 
  7. Dramatic Reduction in AI Costs Fueling Broader Adoption 
  8. Increase in AI Interpretability Leading to Safer Use and Broader Adoption 
  9. Many Scientific Advances Fueled by Specialized AI Models 
  10.  Generalized AI Models Advancing in Reasoning and Math Skills 

Thanks to this episode’s sponsors!  


Amith Nagarajan: Greetings, everybody.

We're here back again to [00:02:00] have another episode of the Sidecar Sync. We've got a really exciting episode for this week to wrap up the year. But first, let's thank our sponsor, Sidecar's very own AI Learning Hub. As we've discussed in the past, training is where it all begins. People ask me all the time, I want to get started with AI, but where do I start?

And I always say, start with learning, and now you have an easy path to do that. Sidecar’s AI Learning Hub is a self-paced learning hub, meaning flexible and easy to schedule, with over 30 lessons of compelling, association-specific AI content applicable to your entire team. Sidecar’s AI Learning Hub also includes live office hours every single week with AI experts from across the team.

This allows you to mix the flexibility of self-paced lessons with the power of one-on-one and group interaction with experts in AI to ask literally any question you can come up with. The Learning Hub is an affordable $399 per person [00:03:00] with group purchasing options available to get your entire team signed up affordably.

Learn more at Sidecarglobal.com/bootcamp.

Mallory Mejias: It is the end of the year. I hope all our listeners are having a happy holiday season. And as Amith mentioned, today is a special edition episode of the Sidecar Sync. Typically, we talk about two to three topics in each episode and dive deep into each of those. Today is different because we will be talking about 10 predictions that we think we're going to see in 2024 pertaining to AI.

Ten things to look out for in 2024, and also what they mean for associations specifically. Amith, I know we limited ourselves to 10 predictions for this episode. If we had no cap on time, how many predictions do you think we could have come up with?

Amith Nagarajan: Well, probably in the dozens at least, there's so much going on. And I'm thankful that everyone's tuning in to the Sidecar Sync during the holidays. You know, it's the end of the year, and it's always a fun time to reflect and think about what [00:04:00] the future holds and maybe make some New Year’s resolutions. And I think the AI models themselves are busy making some New Year's resolutions about how they're going to get better in 2024.

Mallory Mejias: Absolutely, we will be doing a mini dive on each of these 10 predictions. First and foremost, we are predicting natively multi-modal models go mainstream in 2024. One of the key developments in AI is the emergence of natively multi-modal models. That is kind of a tongue twister. We're talking about AI that doesn't just understand text, but can seamlessly interact with images, audio, and even video. Multi-modal models can handle various types of data inputs and outputs. This means they can understand a picture, respond to a voice command, and write text all within the same framework. It's a leap from models like GPT-4, which rely on separate components to handle different data types.

Upcoming models like Gemini Ultra are designed to be natively multi-modal, meaning they're built from the ground up to effortlessly switch between text, images, and sounds, paving the way for more [00:05:00] intuitive and versatile AI applications. Amith, how might an association utilize a multi-modal model in 2024?

Amith Nagarajan: Well, you're definitely right. Just saying multimodal model, say that 10 times fast. It'll take a little while, especially when…

Mallory Mejias: It's a skill.

Amith Nagarajan: For sure. So let's actually start with a really brief discussion on information loss. We've talked about this in prior episodes, but the idea is that text is actually the lowest-resolution type of information that we can train a model on.

We have a lot of text, but the amount of text we have is a tiny fraction of the knowledge we have as humanity. So think about it this way: it's the old adage that a picture is worth a thousand words, and in fact, it's probably worth a lot more than that. So when we train models natively on images, on video, on other types of content, it fundamentally gives those models a better understanding of the world.

The way a toddler learns, and even the way other species learn, [00:06:00] has nothing to do with language. It has to do with interaction with the physical world: experimentation, manipulating objects. These are all things that build an understanding of the physical world, but also drive our understanding of what's possible.

It also drives our understanding of these physical limitations. So models that are purely text-based are inherently limited in their ability to think that way. So that's one quick comment. If you think about text being the lowest resolution, then image is the next higher resolution, then obviously add audio to that, and then you have video.

So it becomes richer and richer and richer, and that allows us to scale these models. I think for associations, this is a super exciting part of the world, because text, of course, is abundant in the association world, but you also have a tremendous amount of video. Just think about the footage you have from all of your annual meetings: recordings of keynote sessions from however many past conferences you've done, [00:07:00] just sitting there in your LMS, maybe, or somewhere else. Or maybe you don't even have them online, because no one's going to go watch a five-year-old video.

Probably some people would, but the value in training models with this proprietary content could be truly enormous. So that's exciting. I think associations can also use these multimodal models as consumers in a lot of powerful ways, just like everyone else. So it's an exciting time.

Mallory Mejias: If we feel comfortable or listeners feel comfortable using tools like ChatGPT, do we need to recommit to learning how to use multimodal models, or do you think it'll be pretty intuitive and user friendly?

Amith Nagarajan: From a user perspective, it's going to be totally seamless. We're already seeing elements of this with ChatGPT, where you can upload images, and you can have a conversation with voice in the mobile app. If any of our listeners haven't tried that yet, I would encourage you to get the mobile app on your phone, put it into voice mode, and just have a conversation.

You'll see what I'm talking about. The thing that's different [00:08:00] here is that ChatGPT is not natively multimodal. They've stitched together their audio models, their voice recognition and text-to-speech, to have the response played back to you. They've stitched something called GPT-4V, which is their vision model, together with their text model, and so on.

So it's a great engineering solution, but the underlying models have no idea about the other modalities; GPT-4 itself is purely text-based. Coming back to the consumer question you asked, though, it's going to be totally seamless. For example, it's no different than if I'm able to email you, Mallory, but then I'm also able to get on the phone and talk to you, versus having this call, which we're doing on Zoom with video, where we can see each other, or in person, which carries even more information, right?

So it's not going to be any different. It's going to be more natural for people to interact in a multimodal way, because that's how we're used to interacting with other people.

Mallory Mejias: I guess I'm wondering, though, to fully leverage a natively multimodal tool, will we need to do more training on our end to make sure we're not just stuck in that [00:09:00] ChatGPT frame of mind?

Amith Nagarajan: I think you're right that there definitely needs to be some awareness training and examples. People are incredibly good at seeing examples and going, wow, that might apply to me. So, for example, let's say I'm an event planner and I've got a problem. I've got an upcoming event, and we've done really well in marketing it. We originally thought we'd have 500 people, but we're going to have 700 people, and we've got to figure out how to fit those people into the room without violating fire code, make it an engaging experience, and rethink the floor plan or the layout. So I might take a photo of the floor plan and talk to the model and say, hey, give me several layouts that might work. And these multimodal models will be super fluent: they'll have an architecture background, an interior design background, an event planning background, and they'll give you some creative ideas.

Of course, not all the ideas they give you in that example might be workable. They might suggest, for example, a mezzanine floor where you stack up some plywood on top of a desk and put people on top of each other.

[00:10:00] Maybe that won't be a good idea in practice, but yeah, it might suggest it. Things like that would require a little bit of education to say, hey, this is possible. But again, if you use the analogy of an expert human you can show a picture or a video of something, people will start to very quickly adapt once they realize those capabilities are there.

Mallory Mejias: Absolutely. Our second prediction for 2024 is that open-source AI models will near GPT-4 capabilities. As the AI landscape continues to evolve, a significant trend we're observing is the rapid advancement of open-source AI models. As a reminder, open-source software is freely available for anyone to use, modify, and distribute.

Open-source AI models are not just catching up, but are on the verge of rivaling established giants like GPT-4. Early 2024 is expected to unveil some groundbreaking developments, notably the larger model from [00:11:00] Mistral AI, along with Llama 3 from Meta and Phi-3 from Microsoft. These models are not just paralleling GPT-4's abilities, they are potentially excelling in certain aspects due to advanced training techniques.

This shift towards high-caliber open-source models could reshape the dynamics of AI accessibility, fostering a more inclusive and collaborative environment in the AI field. Amith, how do you see the advancement of open-source models, like Mistral's larger model and Llama 3, affecting the current AI ecosystem?

Amith Nagarajan: Well, it's just going to create more choice. So ultimately, more choice begets better results. It's competition, which reduces cost and also increases quality in all markets, and we know that very well from basic economic theory. We've seen that play out in every market as it's matured. This market is by no means going to mature, or even get close to maturing, in 2024, but it's accelerating.

 And so, there will be hundreds of these open-source models, and I would say they won't just match GPT-4, which again came out in March of 2023. They'll definitely exceed GPT-4, and it's likely that open-source models will maintain parity, or close to parity, with whatever advances come from Google and OpenAI and others.

So that's really interesting. It also tells us that open-source means you can do what you want with it. And that's good and bad, like most things in life. The good part is you can run versions of those models in your own environment. So you can have them in your AWS or Azure ecosystem, or even on your own hardware if you wanted.

And that means you can totally securely train and fine-tune these models with your own content, with enterprise data that you might not want to send over an API to any of these vendors. So it opens up new potential applications, particularly for people dealing with highly sensitive data, for example in healthcare or financial services domains.

So that's going to be interesting. And I think what we're going to see [00:13:00] ultimately is that specialization really is where the action is. So these mainstream models, like GPT-4, GPT-4.5, which we expect to see early in the year, and GPT-5, probably later in 2024, they're great. They're big, powerful models.

But the specialization of models like Phi-3, as well as Llama and Mistral and many others, is going to open up the opportunity to create specialized agents that are super fast, very inexpensive to run, and can be woven into this idea of a multi-agent or multi-model ecosystem.

So if you want to add more Ms to the earlier tongue twister, we said multi-modal models; we can go to multi-model modal ecosystems. And that's what's likely to happen in 2024. The point being, it's this ecosystem that mixes up incredible capabilities and becomes a tool belt.

And that tool belt can be used to build all sorts of cool stuff, in any industry and particularly in the association market where there's so much content to play with.[00:14:00]
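The "tool belt" idea above can be sketched in a few lines of code. This is a toy illustration only: the model names and the routing rule are invented for the example, not part of any real product. The point is simply that a dispatcher sends each task to a cheap specialized model when one fits, and falls back to a large general model otherwise.

```python
# Toy sketch of a multi-model "tool belt": route tasks to small,
# inexpensive specialists when possible, a big generalist otherwise.
# All model names below are hypothetical placeholders.

SPECIALISTS = {
    "code": "small-code-model",        # fast and cheap, narrow skill
    "summarize": "small-summarizer",   # another narrow specialist
}
GENERALIST = "large-general-model"     # slower, costlier, broad skill

def route(task_kind: str) -> str:
    """Pick a model for a task: specialists first, generalist as fallback."""
    return SPECIALISTS.get(task_kind, GENERALIST)

print(route("code"))       # small-code-model
print(route("legal-qa"))   # large-general-model (no specialist matches)
```

In a real ecosystem the routing decision itself might be made by a model, but the economics are the same: most requests land on the cheap, fast specialists.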

Mallory Mejias: I think I'm going to have to be absent for the episode where we talk about multimodal model models. I tried to test out Llama 2 for myself when doing research for this prediction, and I went to the website and realized I needed to download it to my computer. Will we only be able to access these open-source models by downloading them?

Will we ever have access to a platform like ChatGPT, or does that defeat the point of it being open-source?

Amith Nagarajan: No, I think, I mean, there are definitely platform players out there taking open-source models and making them available on scalable, easy-to-use ecosystems, where in some cases they're free to test, but in others you pay a little bit of money, essentially for the underlying resources. So, for example, both Microsoft's Azure environment and Amazon's AWS have a platform approach where open-source models are available. So you can go and use Llama 2 on AWS, and you can use it on Azure. You can indeed download it if you have sufficient hardware. You can run it locally, essentially [00:15:00] on any computer where you have sufficient computing power, but most likely people will deploy these things in managed-services environments. With all the progress we've made moving toward cloud-based architectures, we're not going to go backwards on that; there's so much flexibility and so much improved security that comes from it. So I think people will experiment on local hardware, but they'll really deploy in enterprise-grade cloud environments like AWS, Google Cloud, or Microsoft's environment.

Hugging Face, which is a developer-oriented community that essentially inventories all the different models you can download, has a new chat UI that you can play with for free. We'll link to it in the show notes, but it allows you to try all these different models. So just last week, actually, I was playing around with Mixtral, which we talked about in a prior episode, and I was able to give it a pretty complex coding task, and it did a fantastic job with it, and it was nearly instant.

GPT-4 does the same caliber of work on that particular problem, but it's both expensive and fairly slow, honestly. So that was exciting. But yeah, there are some tools you can [00:16:00] use to test right now, and deployment is going to become easier and easier. You'll essentially be able to point and click and get the model you want, whether it's something like OpenAI's or an open-source model.

Mallory Mejias: I will have to check out Hugging Face for myself. That sounds interesting. Our next prediction for 2024 is one of the ones I'm most excited about, and that is the large-scale deployment of Microsoft 365 Copilot. Copilot, an advanced AI assistant, is designed to seamlessly integrate into Microsoft applications, enhancing productivity and workflow efficiency.

It offers capabilities like summarizing meetings in Teams, efficiently managing email in Outlook, aiding document editing in Word, analyzing data in Excel, and improving presentations in PowerPoint. Sounds like a dream. The anticipated widespread adoption of Copilot represents a big shift in how office tasks are approached, potentially transforming everyday professional practices into more efficient AI-assisted workflows.

We should also add that Google is doing the same thing with Duet AI, and I'm sure they'll [00:17:00] be right in step with Microsoft to get this deployed in 2024. Amith, what are your thoughts on Microsoft Copilot, and how do you foresee that transforming productivity and our workflows?

Amith Nagarajan: Sure. Well, one little sub-prediction before I begin with my general answer about Copilot: I think Google in 2024 will be playing catch-up with OpenAI and Microsoft, but I think they will either catch them or possibly even surpass them. We've got to remember that a lot of this stuff came out of Google.

They have unbelievably smart people there and effectively limitless financial resources. Their main constraint is GPUs; obviously, time hasn't been on their side. So I think we're going to see some improvements there. At the moment, Duet is extremely limited compared to what we've seen of Copilot, because it's based upon an earlier version of Google's own model, which we know is not nearly as powerful as GPT-4, whereas Copilot is powered by GPT-4. Microsoft has just had a hard time scaling fast enough. That's the reason you see them investing literally tens of billions of dollars a [00:18:00] year in infrastructure. A lot of that is going to power this.

So to get to the question of what this means in 2024: I believe that associations will be able to use Copilot very seamlessly, because, like the earlier conversation about moving to multi-modal models, we will be able to very seamlessly ask the Copilot inside Microsoft Word or Excel questions, just like you would ask ChatGPT. The main difference is that it's the same engine that powers ChatGPT, but it's sitting inside your private office documents.

So as an example, let's say that Mallory and I were talking to an association that was interested in attending the AI Learning Hub, but they wanted to do the organization wide license to get all of their staff members into the boot camp at the same time, which by the way, great idea. But let's just say we needed to send them a proposal and let's say it’s a proposal for ABC association.

And Mallory goes into Word and says, [00:19:00] I need to create a proposal for the AI Learning Hub and it'll start generating it. But not generically, it's based on prior proposals Mallory has created that are in her folder, as well as organizational standards that have been established. Now, let's say Mallory sends that document to this prospect and they say, hey, this looks really good, but we have a few questions.

Can you present to our leadership team the value of the AI Learning Hub and why we should train everyone in AI? And Mallory says, sure, I'd love to do that. But let's just say she's never created a PowerPoint for this particular subject before. So she goes into PowerPoint and says, hey, I have this proposal I just created, the prospect wants to have a conversation, please create a 15-minute presentation for me. And then PowerPoint will again have the context of knowing about that proposal and that prospect, and create a beautiful presentation for Mallory for that specific purpose. Now, all of that would take hours of time, even for a very fast computer user, and will now be done in minutes.

That's a good example of what you can do with Copilot being embedded into your workflow. [00:20:00] So really what we're saying here is, it's going to happen this year. All you have to do is, number one, be willing to pay the price. It's 30 bucks a month per user. That's not cheap. If you have 100 people, you know, it's a decent chunk of money per year.

But I would say, think about what that maps to in terms of the cost of FTE time. So for a full-time employee, an FTE: if you have 100 employees and you pay $30 a month per user, you're paying $3,000 a month, or $36,000 a year. So think about that: how does that map to your average cost per employee?

You're maybe paying the equivalent of an additional half FTE, or three quarters of an FTE, or less, depending on your average FTE cost. And the multiplier you get is radically higher, right? So that's the tradeoff. Anyway, my prediction is very simply that most associations, by the end of 2024, will have adopted one of these two tools and will be seeing 30 to 50 percent productivity gains on typical office tasks.
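The back-of-envelope math above is easy to check. Here is a quick sketch in Python; the $30-per-seat monthly price comes from the discussion, while the $72,000 average fully loaded FTE cost is a hypothetical figure chosen only to make the fraction come out round.

```python
# Copilot spend vs. FTE cost, per the numbers discussed above.
# The $72,000 average FTE cost below is a hypothetical illustration.

COPILOT_PER_SEAT_MONTHLY = 30  # USD per user per month (from the episode)

def annual_copilot_cost(num_users: int) -> int:
    """Total yearly Copilot spend for an organization."""
    return COPILOT_PER_SEAT_MONTHLY * num_users * 12

def fte_equivalent(num_users: int, avg_fte_cost: float) -> float:
    """Copilot spend expressed as a fraction of one average FTE's cost."""
    return annual_copilot_cost(num_users) / avg_fte_cost

users = 100
print(annual_copilot_cost(users))       # 36000 dollars per year
print(fte_equivalent(users, 72_000))    # 0.5, i.e. half an FTE
```

So at a hypothetical $72,000 per employee, licensing 100 seats costs about half of one FTE per year, which is the tradeoff Amith describes.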

Mallory Mejias: I can't wait for this. I feel like I've been waiting for this since they dropped that video, when was it, maybe in March of [00:21:00] 2023, and it seemed like it was coming soon. But I'm hoping we get it really early in 2024. If, let's say, we all got access January 2nd, a great holiday present for everyone, Microsoft Copilot, what would you say association leaders should do in that moment?

How can they be proactive to make sure their staff are involved and engaged and fully utilizing all these new features?

Amith Nagarajan: Well, I think the first thing is, you know, to sign up. If you don't want to sign up all of your users, sign up a few of them and just start testing it out. Play with it, share the story, socialize it, turn it into tribal knowledge initially. Make sure it works for you. It will, but make sure it works for you.

It's a sales process. If you want to get your team going on something, you have to give them a reason to get excited about it. I was listening to a podcast yesterday where the speaker was talking about a recent study in which about 9,000 office workers in the United States were surveyed at the very beginning of this month, December of ‘23, about their perspective on AI, [00:22:00] and there's basically a simple correlation, which is that the people who had more knowledge of AI were more afraid of losing their jobs.

And that's interesting, because if you don't know much about AI, you don't know its capabilities, so you're not that afraid. But the more you know, the more afraid you are. Super interesting data point. Now, it could be that these people realize how much the AI can do for them, and therefore realize that so much of what they currently do can be automated, which is a true statement, but that's assuming a zero-sum game.

And so if you don't get ahead of training your people on Copilot, or AI more generally, people are just going to get nervous, because they know what these things can do. And they're going to be assuming in the background that if no additional things need to be done, there are just going to be fewer of them, or none of them.

And so part of that training is this obligation you have to grow your people, but part of it is also to make sure they're comfortable that, hey, yeah, we're gonna get this 30 to 50 percent boost. And what we want you to do with that time, guys, is we want you to do all these additional things we haven't been able to do.

Ways to level up [00:23:00] our customer service, ways to create more value for our members, ways to engage new generations of people coming into our profession, things we've thought about, talked about, but never had time to do. There's a long list of that in every organization. Let's go attack that stuff. Anyway, my point is that if you don't go out there pretty aggressively with this, Copilot being the best example right now of an immediately useful application for office productivity, your people are going to have their own ideas.

And I think it's really important to go out there and do exactly that as soon as you get access to it. Now, by the way, Mallory and I don't have any inside information that suggests January 2nd is the drop date for general availability of Copilot. We, too, at Blue Cypress are waiting for it. If you have under a thousand employees, you generally are not going to get access to Copilot yet.

But hopefully that will start to happen in Q1.

Mallory Mejias: And I’m sure we will be talking about it on a future episode. Our next prediction for 2024 is the rise of Common Data Platforms or CDPs for enhanced AI utilization. The effectiveness of [00:24:00] AI depends on the quality and accessibility of data, which is where the Common Data Platform, or CDP, becomes essential.

CDPs are designed to store and manage data from various sources in a unified manner. For associations, this is particularly crucial as they often have data dispersed across multiple systems like your LMS, AMS and others. Siloed data across different platforms and vendors hinders AI's full potential.

A CDP consolidates this data, providing AI systems with the comprehensive, high-quality data sets essential for maximizing AI's potential in areas like member engagement, predictive analytics, and operational efficiency. So for associations to be able to fully leverage AI, we are predicting the rise of the CDP, which also sounds like the name of a sci-fi movie.

Amith, in your words, why is a CDP absolutely necessary to get the most out of AI?

Amith Nagarajan: It's really simple. Associations in 2024 are going to realize they need to own [00:25:00] their data. Your data is one of your most important assets. You probably say that in your association all the time, yet you don't do a lot with it. And one of the reasons you don't do a lot with your data is because you don't have access to it.

Your data is tucked away in all these little nooks and crannies, these different proprietary systems, whether it's an LMS or an AMS or whatever it is, but you don't have true access to it. The reason I like to say that you don't own your data is that even though you technically own it legally by virtue of whatever contracts you have in place with those system vendors, you don't truly own it as a practical matter because on a day to day basis, you don't have the ability to access all of that data at once or even large chunks of it a lot of times.

So the idea of a CDP is simple. You bring all of your data from all these different sources into a master system, essentially a large database. But this database doesn't perform any functions; it's not at all an AMS or CRM or LMS replacement. It's designed to hold the data that comes from those systems, essentially in a read-only fashion, unify that [00:26:00] data into a common format, and then train AI on top of that.

So there's a couple reasons this is so important. First is it brings your data into an environment you truly own. So you have access to it at all times and literally the entirety of your enterprise data set in that single place. And you also have ownership of the underlying platform, meaning you're not building on a proprietary third-party system.

That's a key point. If you have a CDP that comes from a commercial vendor and it's a licensed SaaS product, you're better off than with no CDP, but honestly not much better off, because ultimately you don't have portability; you don't have true ownership. So I believe that in 2024, the nonprofit community broadly, and the association market specifically, is going to start catching up with the corporate world, which has long realized that open-source software plays a key role (not the only role; proprietary software can do good things) in supplementing their technology stack.

Now, you know, we over here at [00:27:00] Blue Cypress believe so much in this idea that for the last 18 months we've been investing gobs of resources, financially and in team members, into building a totally free, open-source common data platform, or CDP, specifically built for this industry, called MemberJunction.

You can check it out at memberjunction.com. It's a totally free product, a gift from us to the community we care so deeply about and that has been good to us and all of our companies for so long. It's a software solution that will solve this problem. It's one of many ways to do this, so it's not the only way you can do it.

But the idea is that people are going to realize that the CDP is the key solution they need in order to get their data freed up and have a foundation they can build on, because you don't want to build on top of, let's say, an AMS that you plan to replace in the next five years; then your AI strategy is on shaky ground.

You want to build on a rock solid foundation and owning your data and having it in a platform you completely control is the foundation that most people are lacking today.[00:28:00]
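The unification step Amith describes can be illustrated in miniature. This is a minimal sketch only, assuming two invented source systems and made-up table and field names; a real CDP such as MemberJunction handles schema mapping, ongoing sync, security, and far more.

```python
# Minimal sketch of the CDP idea: records from two hypothetical source
# systems (an AMS and an LMS) are pulled into one unified store that AI
# and analytics can query in a single place. All names are illustrative.
import sqlite3

ams_members = [
    {"id": "A1", "email": "pat@example.org", "name": "Pat"},
]
lms_enrollments = [
    {"member_email": "pat@example.org", "course": "AI 101"},
]

cdp = sqlite3.connect(":memory:")
cdp.execute("CREATE TABLE person (email TEXT PRIMARY KEY, name TEXT, source TEXT)")
cdp.execute("CREATE TABLE activity (email TEXT, kind TEXT, detail TEXT)")

# Unify: each source system maps into the common schema, read-only.
for m in ams_members:
    cdp.execute("INSERT OR IGNORE INTO person VALUES (?, ?, ?)",
                (m["email"], m["name"], "AMS"))
for e in lms_enrollments:
    cdp.execute("INSERT INTO activity VALUES (?, ?, ?)",
                (e["member_email"], "course_enrollment", e["course"]))

# Now one query sees across systems that were previously siloed.
row = cdp.execute(
    "SELECT p.name, a.detail FROM person p JOIN activity a ON p.email = a.email"
).fetchone()
print(row)  # ('Pat', 'AI 101')
```

The payoff is in that last query: a question that previously required logging into two vendor systems is answered against one store you own.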

Mallory Mejias: If someone listening to this episode wants to implement a CDP in 2024, what steps should they be taking first in order to do that?

Amith Nagarajan: So assuming they want to go the open-source route, they would get access to a product or platform such as MemberJunction, or you could just build on top of a raw database and a data platform like Microsoft's environment, or many other data platforms. If you're going to do that, you're going to need a lot more technical skill or money.

If you use something like MemberJunction, you obviously have a running start. What you need to do, if you have in-house technical capability, is first free them up from the other things they're doing. So if you have an IT team of five, and let's say they're great folks but they're busy just running the day-to-day operations, find a way to reduce what they're currently doing.

So either outsource or eliminate a bunch of the day-to-day operational tasks your IT team is doing. Or if you think your IT team doesn't have the skill set to do this, which also happens, then you need to find a partner that can help you with it. But the idea is that you [00:29:00] need to skill up, or add resources to the mix, in order to get started, because this is not point and click. It's not like using ChatGPT, where you just turn it on and it works. You have to put some effort into this, but it's effort far, far smaller than a system replacement like an AMS or CRM. It's much, much smaller than that type of effort, but it's a much bigger effort than simply rolling out ChatGPT.

So it's somewhere in the middle. You have to get ready for that by ensuring you have the right resourcing.

Mallory Mejias: Our next prediction for 2024 is consumer expectation for AI-enabled brands. Consumers are rapidly moving from a phase of admiration for AI capabilities to expecting these features as a standard. Brands lacking AI-enabled services, particularly in customer support, like 24/7 chat assistance, may soon be perceived as outdated.

This shift indicates that by the end of 2024, AI integration in customer service and engagement could become a norm, setting new standards in consumer [00:30:00] expectations and brand interactions. Amith, why is it crucial for associations to adopt AI enabled services and how do you think this could impact their engagement with members and stakeholders?

Amith Nagarajan: I mean, the bottom line is people expect you to have these capabilities, and if you don't have them, they're going elsewhere. You might have the best product, but there might be a product next door that's good enough and it's easier to get, and that's going to win every time. People aren't going to do backflips to get to your content when they can just walk in the door next door and get something that's comparable.

And so, the key to understand there is that the expectations consumers have on your phone, on your iPad, and on the go are going to apply to you, and AI is only going to accelerate that, because it's such a dramatic shift from what we've had before. An interface like ChatGPT killing traditional search in most cases is an example.

And so, you have to be on board with that if you want to provide a low friction experience. So let's talk about friction [00:31:00] for just a second. By that we simply mean anything that stands in the way of the user getting to their end goal. So say I'm coming to your website trying to figure out how to learn about a particular topic or have a particular question answered.

I have to do a complex search, then I have to look at several documents. Then from there, I have to synthesize my own thoughts on those documents. And then from there, maybe I have to formulate a hypothesis on what I want to go do. All these steps are happening quickly in my brain, but it's taking me minutes to hours to do that on your website, versus if I can go to ChatGPT.

And yes, the content is not as good as yours, but it's pretty good. So if I can get the answer I need in your domain, about accounting or law or medicine or whatever the case may be for you, I'm going to go do that, and maybe I'll fact check it a little bit. But because it's so easy, I might not do that as much, which, of course, is a concern.

But if you provide an AI experience like ChatGPT, but trained on your content, you are not only keeping pace, you have a better product with lower friction than you've had before. [00:32:00] So it's expectations. Ultimately, think of it this way. In your association, at your board meetings and internally among the staff, you're all talking about how many things you have to go get done.

And I empathize, Mallory empathizes. We hear you. It's tough to get your job done. But the thing is, no one else cares. Your customers do not empathize with you. They simply want what they want. And if you don't provide it, they're going elsewhere. And that hasn't always historically been the case for associations.

They've had a pretty strong moat, by brand, by tradition, by whatever factor you can consider. That's going away, certainly for earlier-career people. Everyone's focused on how to engage younger generations, etcetera. There is zero loyalty to the idea of joining or being part of the association just because that's what people have always done. So much of what associations have been tolerated for doing is because older generations, or people who've been members longer, let's say, are just more tolerant of that, and that's going away. So I think that's actually really good, but it's going to be a rude awakening for some associations.[00:33:00]

Mallory Mejias: I spoke in a previous episode about a hotel website that I visited that had a custom GPT built for its FAQ section, and I can't agree more, Amith. I am obviously still talking about this hotel website weeks after I visited it. But I was so impressed. I have not forgotten the name of that hotel. And while I didn't book a stay there, it's definitely still top of mind.

And I just imagine a world where your association, your organization could be that for your members and what that would mean if they have the experience of going to your website and realizing that you are a leader in this space and that you're innovative and you're trying new things. It would be memorable, I'm sure.

Amith Nagarajan: Well, I’ll tell you a little story about friction and putting the onus on the customer versus on yourselves. So, one of the most common things associations do is survey people. You know, it’s almost like they’re professional researchers in a way, and they’re like, “oh, we want to find out what to do next, so let’s survey everyone.” And first of all, people don't generally like surveys. So that's one thing just to [00:34:00] remember: people don't typically like filling out surveys. I know there's some people who do, no offense intended, but most folks do not like filling out surveys. And for the people who do fill out the surveys, especially if they're not incentivized to do so, you also have to question whether or not they're putting in the time to give you thoughtful answers.

The other thing about surveys is they tend to reflect people's past rather than their future. People are very poor predictors of their own future needs and desires. And so surveys tend to be really terrible instruments in terms of gathering valuable insights into where you should be going. Yet we stick with these things time after time after time, and we ask people to fill them out.

A great example actually is something as simple as an interest profile, right? A survey that says, hey, Mallory, thanks for being a member, can you fill out this 50-question survey and tell me which of these topics you're interested in? And Mallory might be saying, well, don't you already know? I look at your website, I read your email. Shouldn't you already know that? And why are you bothering me with this? That's how people feel today.

Yet associations stick with these kinds of things, which are kind of grating. It used to be that these surveys were filled out by hand, [00:35:00] and I think some associations maybe are still doing that.

Hopefully not. But even if it's on a website, it's still painful. So look to get rid of stuff like that. Truly ask yourself, is that survey really adding value? Does it provide real insight? Many of the surveys you're using as instruments, by virtue of the fact that you've always done them, do not provide value to you.

And they hurt your experience with your audience. So, you know, think critically about that. Sometimes removing stuff will actually free up your time and give you an opportunity to rethink the future. But again, like we've found in all of our AI work on prediction and personalization, at brands like rasa.io, for example, surveys actually don't tell you what people want to read.

They just tell you what they have been reading. And a lot of times people don't know what they want. That's where AI at scale really is helpful. But the first thing is to get rid of the stuff that hurts people, like surveys. Or not all surveys, by the way, just to be clear, just some of them.

Mallory Mejias: Some of them and the surveys on paper. Yep, I think we can agree on that. Well, most of our [00:36:00] predictions thus far have been fairly positive, but we try to keep it real on the podcast and we can't talk about AI predictions in 2024 without mentioning some of the negative ones. And the one that we're talking about now is AI misuse, particularly in information integrity and cyber security.

Obviously, as AI technology advances, so does the potential for its misuse. The concern centers on sophisticated AI driven misinformation, such as deepfakes, which we can all agree will be particularly dangerous in 2024 as it is an election year.

These deepfakes could be used to create highly convincing yet entirely false narratives, potentially swaying public opinion or causing widespread misinformation. We've talked about this on our team, and you and I have discussed the fact that digitalNow 2024 is October 27th through 30th, 2024, and I believe election day, the presidential election, is November 5th, 2024.

So I'm sure that we will have a [00:37:00] lot to talk about in this area at digitalNow 2024, particularly with the fact that it is in DC as well. So I am interested to see what that conversation will look like at that point in the year. But in addition to information integrity, AI powered phishing attacks are becoming more sophisticated.

So businesses and individuals have to be alert to avoid these types of threats. Telltale signs for most people to spot phishing attacks are misspellings in the email or weird email addresses, but realize that using AI, people won't make those same errors in their phishing attacks.

It's interesting to think about what that might look like in 2024 and how difficult it may be to discern those types of attacks. So Amith, what are your thoughts on this, and how can associations prepare for this kind of AI misuse?

Amith Nagarajan: I mean, the first thing is, you know, go watch that kids' movie, The Croods. In that movie, the dad is always telling everyone to be afraid of everything that's new. And [00:38:00] in a way you kind of have to adopt that mindset, because for anything that you haven't seen before, there's a very high probability it's not only fake but malicious, and that's the world we live in. Just to give you a quick anecdote, the CEO of our family of companies at Blue Cypress, she received a message from one of her team members recently, and that team member had been pretty far along in this text-based attack where someone had texted her pretending to be the CEO and saying, hey, can you go to the Apple store and buy gift certificates for me? And can you just put them on your personal card, because I don't have time to deal with calling in, whatever, and then send me the codes? Right? So it's a phishing attack. But it was pretty good. You know, it used a lot of public information. It had the person's name spelled correctly. It knew that this other individual was an employee at the company further down the ranks, all this kind of stuff.

And those attacks are only going to get better and scarier. Fortunately, in our case, that person, after a few iterations, and they actually even went to the Apple store, by the way, and this is a very intelligent person with a [00:39:00] graduate degree, they went to the Apple store and then realized, wait a second, this seems kind of weird, and texted the whole thread to the CEO and said, what's up with this?

And I'm like, no, no, no, that's not us. So this is real. It's happening now. There are literally billions of dollars at stake. And more importantly than that, you know, even worse things can happen than losing money. So in 2024, our prediction, as scary as it is, is that there will be an act of significant misinformation in the election cycle, such that one of the mainstream media networks will unknowingly air a fake video clip of a major candidate, perhaps in the presidential election, certainly somewhere in the election process for some position.

Something that is close enough to being believable, but not accurate, right? Something that was generated. So, I cannot see a world where that doesn't happen, right? It's so easy to fool people with tools like HeyGen, which we illustrated at digitalNow, and that's a much lower [00:40:00] caliber type of tool.

There are tools out there you can use that are incredibly sophisticated even today. And remember, we're on a doubling cycle of six months, so we have almost two doublings in AI power between now and the election. So, you know, the other thing that I would suggest is very likely to happen is people are going to realize that the once very hot technology called blockchain, which is part of the whole conversation on cryptocurrency but is a distinctly separate topic, is more important than ever, because the way you can prove authenticity, the way you can use cryptography to verify that content is actually real, is through blockchain. That's not being used at scale yet, but I believe it's going to become increasingly important in 2024. In fact, at digitalNow 2024, again October 27th through 30th in DC, mark your calendars. You do not want to miss this event. We're going to be talking a ton about AI, but we're going to [00:41:00] weave that back into this idea of AI and trust at scale. And some of the speakers we're working on lining up right now are exactly at that intersection, both in the public sector and in the private sector. It's going to be an exciting and interesting discussion, particularly with the backdrop of the election a few days after the event.
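The authenticity idea Amith describes can be sketched in a few lines: publish a cryptographic fingerprint of a piece of content at creation time, then recompute the fingerprint later to check the content hasn't been altered. This is a minimal illustrative sketch, not a production provenance system (real efforts such as C2PA use signed manifests, and a real deployment would anchor the records on a blockchain); the `ledger` dictionary here is a hypothetical stand-in for that immutable store.

```python
import hashlib

# Toy stand-in for an immutable ledger; a real system would anchor
# these records on a blockchain or in a signed manifest.
ledger = {}

def register(content_id: str, content: bytes) -> str:
    """Record the SHA-256 fingerprint of content at publication time."""
    digest = hashlib.sha256(content).hexdigest()
    ledger[content_id] = digest
    return digest

def verify(content_id: str, content: bytes) -> bool:
    """Check that content still matches its registered fingerprint."""
    return ledger.get(content_id) == hashlib.sha256(content).hexdigest()

original = b"Candidate X's official statement, October 2024"
register("clip-001", original)

print(verify("clip-001", original))                              # unmodified: True
print(verify("clip-001", b"Candidate X's doctored statement"))   # altered: False
```

Note the limitation: a fake can be registered just as easily as the real thing, so a scheme like this only proves the content hasn't changed since a known party registered it; trust in the registration step is the hard part.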

Mallory Mejias: Speaking of digitalNow 2024, registration for that conference in DC, October 27th through the 30th of next year, is now open. You can get the lowest price that you'll ever see for registration, which is currently 20 percent off using the code early 20. We'll also drop that in the show notes. If you want more information on the conference or want to register, you can go to digitalnowconference.com.

Our next prediction for [00:45:00] 2024 is the dramatic reduction in AI costs, which will ultimately fuel broader adoption. The cost of AI is decreasing due to advancements in technology, increased competition, and more efficient AI algorithms. As computing power becomes more affordable and AI development more streamlined, organizations can deploy AI solutions without the hefty price tag once associated with them.

The decreased financial barrier allows for a wider implementation of AI, and it democratizes the use of it. And with that, we'll see more opportunities for innovation and competition in 2024. Amith, how do you think associations will first feel or experience the continuing reduction in AI costs?

Amith Nagarajan: Well, you know, first of all, I think this prediction is probably the same thing as saying children are going to like ice cream in 2024. It's one of our easy picks, right? So we want to have at least one right in 2024. And by the way, we will come back to this next year, towards the end of the year, and say [00:46:00] where we were right and where we were wrong.

That'll be fun, but I think this one's abundantly obvious. The cost reduction that we're seeing, the six-month halving in cost, or doubling in power relative to cost, that we keep talking about ad nauseam, it is going to be unbelievable what that does. I just read an article in The Information, which is an excellent source for breaking news and all things Silicon Valley.

And they were talking about the adoption of AI in banking, specifically with Microsoft technology, and how notoriously cost-restrictive many banks are with their investments in technology, as well as how risk-averse they are with respect to new tech. And so they're talking about adoption even there.

And I couldn't help but think, well, you ain't seen nothing yet, because as these costs go down, everyone's gonna jump on board. You know, the example we gave earlier in the episode, where we said, hey, if you have 100 employees, Copilot is only $36,000 a year for those 100 employees. Well, a lot of people would say, well, that's pretty significant.

We just don't have it in the budget. But what if it cost $3,000 a year, right? And that's where we're heading with cost. Now, Microsoft's not [00:47:00] going to self-elect to lower their prices by that amount, but they will eventually, when competition is strong enough to essentially force them to. So ultimately, you're going to see more and more adoption.
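The numbers Amith cites pencil out from per-seat pricing. A quick sanity check, assuming the $30-per-user-per-month figure that was Microsoft 365 Copilot's widely reported list price at the time; the $2.50 rate in the second calculation is just his hypothetical tenfold reduction, not a real price:

```python
seats = 100
copilot_price = 30  # USD per user per month; widely reported M365 Copilot list price

annual_cost = seats * copilot_price * 12
print(annual_cost)  # 36000, the figure quoted in the episode

# Amith's hypothetical: the same 100 seats at a tenth of the price.
hypothetical_annual = seats * 2.5 * 12
print(hypothetical_annual)  # 3000.0
```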

I think associations feel pinched, where they're saying, hey, listen, we just recovered from the pandemic. We had to dip into our reserves back then. We're now starting to recover from that financially, but we may even still have a small operating cash hole we're dealing with. How can we possibly allocate significant dollars for AI?

And so I'd say good news is on the way. The cost of many of these tools is going to come down in 2024. But what you have to think about is how long do you want to wait versus how much are you giving up? Like, what opportunities are you giving up by attempting to wait for lower cost? So I think the cost reduction curve is a double-edged sword in that sense, because people don't want to waste money by investing too soon.

But it's not like buying a stock that's decreasing in value. You're gaining value from your use of it along the way. So I think many of these tools will [00:48:00] become free. Some of them will become very inexpensive, but you're learning. And as an organization, if you're starting to use these tools, even if you spend a little bit more now than you would 12 months from now, your culture and your knowledge and your team is so much further along.

And you have to account for that in your thinking on this.

Mallory Mejias: Well, at least we’ll have one prediction for sure that's correct.

Our next prediction is the increase in AI interpretability leading to safer use and more reliable AI models. As AI systems become more intricate, there's growing emphasis on interpretability, understanding how AI models make decisions. Increased interpretability leads to safer AI use as it provides insights into the decision making processes of AI models, ensuring they align with ethical guidelines and avoid unintended biases.[00:49:00]

In our last episode, we dove really deep into this, so I would recommend that you check that episode out if you want more of an explanation. But we talked about how, in the past, in terms of neural networks, people focused on individual neurons to understand how and why they fired in certain situations.

And we're moving into an area of the field where we're looking at features instead, which are clusters of these neurons that are activated for specific topics or specific tasks. Ultimately, understanding these features is just one step toward AI interpretability, understanding how these models work and why they make the decisions that they do.

And understanding those things will allow us to make them safer and more reliable. Amith, what leaps do you hope to see in the field of AI interpretability in 2024?

Amith Nagarajan: Well, the more transparent these models become, the more the user understands why they're doing things and making the decisions they make, the better we'll be able to deploy them at scale, particularly for mission-critical problems. So, you know, I like to use a lot of analogies. Imagine if [00:50:00] airplanes were just invented, and you knew they were able to fly, but you weren't really quite sure why it was possible. You're still kind of in the camp that says, well, is that even possible to do? Is it possible to put an object heavier than air in the air and sustain flight? You might not know how it works exactly, right? Or you might not understand all aspects of how it works.

And so it's a little bit scary in those early, early days, because you're not quite sure what's gonna take the thing down, and you don't want to be on that airplane when it's coming down. So that's kind of how AI feels to some people now: you're building the airplane as you fly it, and it's an airplane where you're not quite sure how it works.

And as we get better, and by the way, this applies not just to those of us speaking here and listening to this podcast; the deepest AI researchers in the field can't tell you any more than I can about how ChatGPT actually works today, but that's changing. So interpretability is this branch of AI where we're doing research to figure out how these models are actually working, to get more transparency.[00:51:00]

And that's just going to make it easier for everyone to say, yeah, I get it. I understand how this thing works now. It's not a black box anymore. We can make it work and wire it into many more business processes. So I think it's a super exciting area. And as you mentioned, from a safety perspective, these doom scenarios, where we think the AIs are gonna become self-aware and all these bad things are gonna happen, will become a lesser concern for people when you can actually see what's happening inside these AI brains. So I think that's a really exciting advance for 2024 to pay attention to; all the major labs are investing very heavily in this.
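The "features instead of neurons" idea from earlier in this discussion can be illustrated with a toy example: interpretability work looks for directions in activation space that fire for a concept, spread across several neurons rather than living in any single one. Everything below is a made-up miniature; the vectors and feature names are hypothetical, not taken from any real model.

```python
import math

# Hypothetical activations of a tiny 4-neuron layer for one input.
activations = [0.9, 0.1, -0.4, 0.7]

# Hypothetical "features": each concept is a direction in activation
# space spread across several neurons, not a single neuron.
features = {
    "legal-terminology": [0.7, 0.0, -0.5, 0.5],
    "cooking-recipes":   [0.0, 0.9, 0.4, 0.1],
}

def feature_activation(acts, direction):
    """Project the activation vector onto a feature direction
    (dot product divided by the direction's length)."""
    dot = sum(a * d for a, d in zip(acts, direction))
    norm = math.sqrt(sum(d * d for d in direction))
    return dot / norm

for name, direction in features.items():
    print(f"{name}: {feature_activation(activations, direction):+.2f}")
```

Here the input projects strongly onto the legal-terminology direction and barely at all onto the cooking one, even though no single neuron "is" the legal feature; that is the shift from neuron-level to feature-level analysis in one picture.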

Mallory Mejias: Our next prediction for 2024 is that many scientific advances will be fueled by specialized AI models. Building on recent achievements like Google DeepMind's discoveries using GNoME, we expect 2024 to witness more significant scientific advancements fueled by specialized AI models.

These AI tools will continue to [00:52:00] revolutionize fields like material science, renewable energy, and advanced computation. The success of GNoME in identifying millions of new crystal structures hints at the enormous potential AI has in accelerating research and uncovering knowledge that would take centuries through traditional methods.

We do have a previous episode on GNoME as well, so check that out if you want more info. Amith, why do you think this prediction is relevant for those outside of material science, for example?

Amith Nagarajan: I mean, it has to do with discovery in general. So I would categorize discovery as the most important operating word in that entire introduction, with GNoME being the thing we most recently spoke about in this realm a couple episodes ago. And you're gonna see this kind of discovery applicable to all sorts of fields.

It's gonna be very exciting. So certainly in all branches of science and biology and synthetic biology, we're gonna see tremendous discovery opportunities happen as new specialized models come online. We're already seeing that, you know, in many fields. So why does that matter? Well, think about accelerating [00:53:00] existing processes.

So in the case of drug discovery, there's a number of sequences of activity that have to occur, at the moment in pretty much a linear sequence, ranging from identifying potential pathways, to identifying molecules, and then figuring out, okay, which of these molecules are actually things we can synthesize, and then which of those molecules are actually effective.

And then, of course, are they toxic? All these steps that you go through in various phases of drug discovery, you could accelerate. And you can come up with novel concepts for pathways to target, as well as ideas for molecules, at a scale that was previously unimaginable. So the opportunity to have curative outcomes for diseases that are considered incurable, or for which we perhaps only have minor symptomatic relief, is an unbelievably exciting area of research.

But again, it goes back to discovery. You could apply that to other branches of science and other fields as well. So to me, this is a massive opportunity for associations, [00:54:00] because if you're involved in that area of discovery in your field, that puts you at the center of the conversation on what's next.

Now, many associations tell me, oh, but we're more of the mature branch of the field, meaning we just kind of help disseminate the information on what's already considered normal operating practice and standards. And I get that. But I also think associations, even those on that side of the maturity curve for their particular field, should engage in understanding this, both because it's going to affect what you do further downstream.

So if you're in accounting, or if you're in chemistry, or if you're in biology, and you are dealing with broadly accepted subject matter as opposed to what's on the frontier of your field, the frontier is changing into broadly accepted very quickly, so you have to understand these things. But you should also play in that frontier land to some degree, because there's this compression happening, going from decades of time needed to take something from the lab to scale in the world, to now years, and soon [00:55:00] probably quarters and then months, which is exciting.

But it also means your association's gonna have to figure out how to broaden the pipe of information coming in and how to process all of it.

Mallory Mejias: Yep. And I think this goes back to our previous conversation about industry specific AIs and kind of the discoveries that will come out of that and how they'll impact associations in various fields, not just science. Alright, our last prediction is that generalized AI models will advance in reasoning and math skills.

In 2024, we anticipate big advancements in generalized AI models like GPT 4.5 and possibly GPT 5. These models may exhibit enhanced reasoning and math skills going beyond their current capabilities. And what does this mean? It means that models could handle more complex analytical tasks, understand and solve higher level mathematical problems, and provide more nuanced insights.

For associations, this could translate into AI tools capable of deeper data analysis, more accurate forecasting, and solving [00:56:00] complex, multifaceted problems that currently require extensive human intervention. The evolution in these AI models’ capabilities might also enable them to contribute creatively to problem solving, offering innovative solutions that haven't been considered before. Amith, if this is the case, how do you see associations best leveraging a tool like this in 2024?

Amith Nagarajan: Well, first of all, I think the prediction that reasoning skills and math skills will advance significantly is perhaps a little bit controversial, because some people think that we're just gonna continue to see better versions of next-word-predictor type capabilities, which is essentially what even the most cutting edge current models are really doing.

There's not actual reasoning going on, where the model is considering a series of steps it needs to take in order to execute a solution to a problem. It's not really coming up with anything novel, and that's for the broader foundation models we're talking about, in comparison to something like GNoME, which actually is doing that type of novel discovery in a very narrow [00:57:00] field.

But the point would be these broader models having true reasoning skills, being able to solve math problems, being able to do other things that require step-by-step reasoning, that require what we talked about in a prior episode, a tree of thought, where you test out different ideas and go deeper on the ones that actually seem to make sense until you find a solution through an iterative process.

That's the way we work through problems. We test ideas out in our minds, and then we go to the next step, and the next step, and the next step. These models at the moment can't do that, but we will see it in 2024 in various flavors. It might be a multi-agent framework type approach that provides the most mileage for us in 2024, or it might be native to some of these more advanced models, new model architectures that are inherently better at these things.
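The tree-of-thought loop Amith describes, generate candidate next steps, score them, and only go deeper on the promising ones, can be sketched generically. Everything here is illustrative: `propose` and `score` are hypothetical stand-ins for what would be language-model calls in a real system, not any real API.

```python
# Minimal tree-of-thought-style search sketch. In a real system,
# propose() and score() would each be calls to a language model;
# here they are hypothetical stand-ins so the loop structure is visible.

def propose(partial_solution: str) -> list[str]:
    """Generate candidate next steps (stand-in for an LLM call)."""
    return [partial_solution + step for step in (" A", " B", " C")]

def score(candidate: str) -> float:
    """Rate how promising a partial solution looks (stand-in for an LLM call)."""
    return candidate.count("A") - candidate.count("C")

def tree_of_thought(start: str, depth: int = 3, beam: int = 2) -> str:
    """Expand every kept partial solution, then keep only the best few,
    repeating for a fixed depth (a beam search over 'thoughts')."""
    frontier = [start]
    for _ in range(depth):
        candidates = [c for partial in frontier for c in propose(partial)]
        # Prune: go deeper only on the top-scoring candidates.
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return frontier[0]

print(tree_of_thought("plan:"))  # the highest-scoring path after 3 expansions
```

The contrast with plain next-word prediction is the prune-and-expand loop: weak branches are abandoned early instead of being committed to token by token.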

So, this is probably a little bit on the edge of what most associations will really touch themselves in 2024, in my opinion, because most associations I'm talking to are just getting started. They're [00:58:00] starting their learning journey, they're starting to catalog ideas for what they should be doing in 2024, and there's so much that's already happened in AI that associations generally haven't taken advantage of yet, that knowing this is around the corner may not actually change what they plan to do in 2024 itself. I would say the number one thing you should do with this particular prediction is watch it closely, pay attention, learn as much as you can, and focus heavily on keeping yourself up to date on what's happening, because whatever you assume these models cannot do today is very likely to be an invalid assumption, even in six months.

Mallory Mejias: Yep, and we've kind of just briefly touched on this today, but if you want to dive more into that, the episode had the word Q* in the title, because that was the model we were discussing. It was a couple episodes ago. If you want to learn more about that topic, and actually I just checked, it was one of our [00:59:00] most popular episodes thus far of the podcast, so it's definitely worth checking out.

Amith, do you have any final thoughts on AI in 2023 and, and where you hope to see it in 2024?

Amith Nagarajan: Well, I think in 2023 we had a lot of people start to wake up to the possibilities of AI, which is exciting. People are starting to make truly significant efforts to understand what this technology is about. Many people that I've had conversations with at the CEO level or the CXO level are saying they really believe this is a transformative technology in terms of strategy, in terms of what they actually do as a business.

And that's important because technology shifts prior to that might have affected one element of how you distribute content or how you go about creating content in some cases. But this is affecting everything all at once. And so people are realizing the transformative impact of AI, which is exciting. I think that's where people have come to in 2023.

I would say that the people we talk to are a self-selected group, people who are listening to this podcast, [01:00:00] you know, as well. And most associations in that mainstream part, what we kind of look at as the one standard deviation from the mean in the bell curve, that middle chunk, or the late majority essentially, they haven't done anything yet. And so when we poll associations and ask them how far along they are with something as simple as basic training on AI, most people say they've done very little or nothing. So I think the general answer is 2024 is when the broader market starts doing stuff, because they won't be able to not do it.

And I think the earlier adopters that tend to listen to our podcast and read our content, Mallory, probably will take that next step of really deploying their first significant application of AI in 2024. And I'm so pumped about that, because in my mind, there's this flywheel going that's not about technology, it's about people. Ultimately, it's about you and me and everyone else thinking about this stuff and learning it. And if we start now and learn bit by bit, we can make [01:01:00] progress. We can make momentum happen. You know, imagine having this conversation 12 months ago, Mallory. In your own journey learning AI, it wouldn't have been possible, nor for me either, in terms of where I was 12 months ago, much less thinking about how to apply these technologies.

And we are where we are in our own deployment of AI in a pretty advanced way across our companies at Blue Cypress and at Sidecar because of that momentum. And so, you know, I think 2024 is the year where everyone figures that out. That's my hope and my prediction. I'm gonna be an optimist about it and say that the majority of associations in 2024 will implement a learning program around AI.

And I'm hopeful that a substantial minority will implement some kind of major application.

Mallory Mejias: Yep, I think you said it best: 12 months ago, I don't think I could have been co-hosting this podcast. I think my first foray with ChatGPT was around October of the year before, which I had learned about at that digitalNow conference. And it's crazy to think how much has changed in 2023, [01:02:00] and I know it will only continue to change at that rate, or maybe quicker, in 2024.

As we wrap up this episode, I'm reflecting on the predictions that we made. I'm wondering if all of them will come true, or maybe just some of them, and we'll definitely reassess that later in 2024. But I'm challenging you all, and challenging myself, to make sure that you have a way to keep up with these AI advancements, whether that's listening to this podcast, listening to the other great AI podcasts that are out there, or attending AI events and webinars. Make sure that you are doing what you can to stay up to date, because this stuff is changing so quickly. If you take a break, if you take a few weeks off or even a month, we've seen it on this podcast, right? You will miss a lot.

I want to wrap up this episode by thanking you all for listening. I'm excited to continue the Sidecar Sync in 2024 and see where it goes. And a reminder that if you want a more hands-on way to continue your AI education, check out our AI Learning Hub, with flexible, on-demand lessons that we are [01:03:00] regularly updating.

So the idea is that as these predictions come true, perhaps we'll be adding lessons to the AI Learning Hub to reflect that. You'll get access to a community of fellow AI enthusiasts, and you'll also get access to those weekly live office hours with AI experts to ask all your questions. Amith, I will see you in 2024.

Amith Nagarajan: See you in 2024. Thanks, Mallory.

Mallory Mejias: Bye everybody.