
Timestamps:

00:00 - Introduction
02:09 - ‘Ascend’ 2nd Edition Release
08:48 - AI's Role in Accelerating Book Projects
13:54 - Lumi: AI for Storytelling
17:24 - The Power of Storytelling in Associations
26:12 - Google's DeepMind Achievements
32:06 - Understanding AI Reasoning
37:44 - The Future of AI and Generalization
41:37 - Elon Musk and Deepfake Controversy
46:18 - Ethical Concerns with Deepfakes
51:49 - Business Use Cases for AI Avatars
54:15 - Closing Remarks and Future Resources

 

Summary: 

In this episode of Sidecar Sync, Amith and Mallory explore a range of fascinating AI news and developments. They discuss the launch of Lumi, an AI-driven storytelling platform by Colin Kaepernick, Google's DeepMind achieving high-level mathematical problem-solving, and the ethical challenges posed by deepfakes in politics, exemplified by Elon Musk's recent controversy. Amith and Mallory delve into the nuances of AI reasoning, the future of AI in science, and the importance of critical thinking in an age of digital misinformation.


Let us know what you think about the podcast! Drop your questions or comments in the Sidecar community.

This episode is brought to you by Sidecar's AI Learning Hub. The AI Learning Hub blends self-paced learning with live expert interaction. It's designed for the busy association or nonprofit professional.

Follow Sidecar on LinkedIn

📕 Download ‘Ascend’ 2nd Edition for FREE

🛠 AI Tools and Resources Mentioned in This Episode:
ChatGPT ➡ https://openai.com/chatgpt 
Claude 3.5 ➡ https://www.anthropic.com 
Otter.ai ➡ https://otter.ai 
Lumi ➡ https://www.lumi.ai 
AlphaProof and AlphaGeometry ➡ https://deepmind.com 
Eleven Labs ➡ https://elevenlabs.io 

 

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey. Follow Amith on LinkedIn.

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Follow Mallory on LinkedIn.

Read the Transcript

Amith Nagarajan: [00:00:00] Greetings everybody and welcome back to another episode of the Sidecar Sync. It is exciting to be back with you as always and we have a whole bunch of interesting topics at the intersection of artificial intelligence and associations for you today. My name is Amith.

Mallory Mejias: And my name is Mallory Mejias.

Amith Nagarajan: And we are your hosts. And before we jump into our three interesting topics for today, we're going to take a moment to hear a quick word from our sponsor.

Mallory Mejias: Amith, how are you today?

Amith Nagarajan: I'm doing really well. I'm enjoying my time up here in Utah. It's coming to a close fairly soon, and I'll be heading back to New Orleans where it's a thousand degrees and a hundred percent humidity. But, uh, I'm enjoying the last few days up here while they last. How about you?

Mallory Mejias: I'm doing pretty good myself. Really excited that Ascend second edition is finally out on Amazon and available for free download on the Sidecar website. Amith, I think it was just a couple of months [00:01:00] ago on this podcast you said we'd have that book done by the end of July, and I remember thinking, ooh, I don't know.

And I'm happy to report, uh, it is July 31st, the day we're recording this podcast, and the book is done.

Amith Nagarajan: It is awesome and congrats to you and the team and everyone who has contributed to the book across the Blue Cypress family as well as our two awesome, uh, association leaders who are guest contributors, uh, Liz from South Carolina CPAs and Alice from the Society of Actuaries. Both, uh, developed really great case studies that are in the book for the second edition of Ascend.

I'm really pumped about it, Mallory. It's got a whole bunch of new content. The book went from like 185 pages to 290, so it's definitely, uh, you know, been taking some HGH, I guess. So, um, the book is ready to go and I think it's going to be a really good guide for people to execute on AI projects and implementations.

I think the first edition was just primarily about introducing a lot of these concepts to the market. And [00:02:00] over the last 12 months, and all of our work and all of the content we've produced on AI and associations, we've learned a ton and we've realized there are certain use cases and certain topics, uh, deep within marketing, deep within education, deep within fundamental technology areas like vectors, for example, which we've talked about on this podcast before. Uh, we've developed new chapters for each of those topics and others that have gone into the new edition of the book.

So couldn't be more excited about it. And I'm really pumped about what the Sidecar team is going to do next in refreshing our AI Learning Hub to take advantage of all this new content in learning format, too.

Mallory Mejias: Absolutely. I think, as you mentioned, Amith, those two case study chapters are maybe two of the chapters I'm most excited about, from Liz and Alice, but I think we've also kind of revamped the marketing chapter. As you mentioned, we added a vector chapter. There's a new chapter on agents. Uh, we kept a lot of the content, but we also obviously updated a ton of the content, now that the book is nearly 300 pages.

So I'm thrilled about that. And yes, we are in the [00:03:00] process of revamping our AI Learning Hub content to reflect a lot of the topics and themes that you'll see in Ascend second edition. So I'm really excited to roll that out.

Amith Nagarajan: Well, you know, you mentioned the tight timeline for delivery, and I think that's the key to what we do across all of the organizations in our family: you know, try to set very clear and very specific goals that are narrow, make a lot of difficult decisions, um, at the planning phase each quarter to eliminate projects that are very much worthy things, but to narrow our focus and to set firm deadlines.

And of course, um, we're pretty good at using AI. So we've used a hefty dose of AI to help us in creating Ascend. Ascend is all original ideas, from the team at Sidecar and other companies within Blue Cypress, as well as our guest contributors. Uh, but we've heavily used AI to curate ideas, to edit, uh, to help us brainstorm.

And in some cases to write pieces of copy, uh, or to trans, uh, transfer knowledge from one [00:04:00] modality to another. For example, uh, most of the work that I've done in the book has actually been in audio form, either talking to ChatGPT and developing outlines interactively through the voice agent, or using Otter.ai, which is another tool that I love using, where I just, you know, walk around New Orleans and wherever I'm at and just talk to my phone and record things and then take the transcript from that and summarize it and on and on. So it's a great synthesis of human creativity and ideas with AI brainstorming.

Uh, AI counterpositions in some cases, because we've asked the AI to say, what's wrong with this content? How can we improve it? And more and more with the frontier models, we've talked about how, for example, um, Claude 3.5 Sonnet, uh, seems to be more intelligent, right? And so I know, Mallory, you were working with that particular tool with the book and getting some interesting feedback to help us more critically edit and improve our content.

Mallory Mejias: Absolutely. And as you mentioned, using it truly as a brainstorming assistant, as an editor for my own work. [00:05:00] I'm curious, Amith, I know you wrote the Open Garden Organization book. Was it 2018 that you released that?

Amith Nagarajan: Yeah, it was published in 2018. Yes.

Mallory Mejias: How long did that take you compared to the first edition of Ascend? And then we probably beat some records with the second edition as well.

Amith Nagarajan: Yeah, things keep going faster. So the first book that I wrote, uh, for this market, The Open Garden Organization, I published, I think it was early 2018. Uh, I spent a solid two years on that. I mean, I didn't work full time on it, but it took me two years to get it done. And I hired a third-party company and paid them a good bit of money to actually do a bit of the copywriting for me. So I would, like, speak to them and they would transcribe my ideas, send me drafts of chapters, I would edit them, and that's how that first book got done. And we talk about Open Garden in Ascend, by the way, because it's super relevant. We talk in Open Garden about this idea of opening up and having a new perspective as an association, to be inclusive of not just your core [00:06:00] audience, but other audiences that are adjacent or perhaps even disconnected from your traditional core, because that opens up your total addressable market.

And there's a lot of other ideas in that book that are super relevant in an AI-driven world. Uh, but to your point, that took me two years, a lot of dollars, an external company, a lot of humans. And that was, you know, really pre-AI, in the context of tools that were practical to use in that context. And comparatively, you know, we started the Ascend project, edition one, in January of 2023, I believe, and we published that in June.

So it was about a six-month project, again not a full-time effort for any individual. I put quite a bit of effort into that. Um, I don't know, maybe two to three hundred hours of my time personally over the course of six months. It was a pretty big, pretty big effort. Um, and then, you know, with Ascend second edition, you know, in a way we've been working on it ever since the first edition shipped, with all the content we create at Sidecar, all the talks we give, all the webinars that we deliver, uh, the custom, you know, delivery of education we do with a lot of associations as partners, when we deliver [00:07:00] education for their members and so forth.

Um, but ultimately I think the work itself, uh, we did probably in about three months start to finish, in that range, or two months as you said earlier, and, you know, probably fewer hours. I mean, certainly on my end, there were fewer hours. I probably put 100 hours into this edition is my guess.

Mallory Mejias: And I guess you're right. In a way, we have kind of been working on this one since the release of the first one, so it is hard to contextualize that amount, but we definitely churned it out, I would say, in a few months.

Amith Nagarajan: That's right. And we went through and redid basically the entire book. I mean, it's the second edition, but it's completely refreshed. Um, one of the things we changed is, in the first edition, we were playing around with an experimental idea of using a business fable format, where we said, hey, let's actually chart the course of a fictional character who's going through a transformative change in her association as the newly appointed CEO

of the Society of Really Good Accountants, which was a fun name. And, um, we loved the concept, and the idea was, I'd say, like, you know, three-quarters baked in the first edition of [00:08:00] Ascend. Uh, it received some pretty positive feedback, but also some criticism about being somewhat disjointed. Um, and we decided actually to dispense with that concept for, uh, this particular edition. Maybe, you know, we'll bring that back in the future.

But the idea was one that we didn't feel was fully formed, uh, for the second edition. So, uh, even though we removed quite a bit of content from there, uh, we reworked the order of the topics. Mallory, that was a really heavy lift by you, where you looked at it and said, hey, this doesn't really make sense in the order that was originally presented.

So you reworked pretty much the entire flow of the book, which was awesome. And, uh, sometimes when you're in something, looking at it over and over again, you're like, yeah, of course this makes sense, but then, you know, you were looking at it with a fresher set of eyes than mine and you came back and said, yeah, actually this, um, makes much more sense in this other order.

And I think you maybe had a little bit of help with that revised outline from, was that ChatGPT, or was that also Sonnet 3.5?

Mallory Mejias: It was Sonnet 3.5. I had an idea, I think we both had the idea, of wanting to break up the book into sections, kind of like a choose-your-own-adventure. I think you were the one that [00:09:00] coined that phrase in relation to this book to me, but I did use Claude 3.5 Sonnet. I gave it the book, the original, and said, what, what do you think about this?

If we're trying to divide it into sections, what do you suggest? Um, I didn't take that feedback flat out and use it. I worked with it a little bit and kind of put my own spin on it. But, uh, it was indeed a heavy lift, though honestly not for the reasons you might think. Just working in a Word document that was, I want to say, at its peak, like, 400-something pages, it was honestly just more complicated to copy and paste things everywhere. But once we locked that in, it seemed like the rest of the book flowed seamlessly.

Amith Nagarajan: For sure. And, you know, it's interesting, because I think this behind-the-scenes look at how Ascend second edition came together will hopefully be instructive and interesting for a lot of our listeners and viewers on YouTube. Which, by the way, if you listen to us on audio only, we do have a YouTube channel, which is growing, uh, rapidly in popularity.

So, um, please check that out. But I think that the key to it is we are ourselves very much not only users of these tools, but [00:10:00] trying to find new ways to get, really, you know, more out of each squeeze. And the key to what we're saying here isn't so much that, you know, we're using it for copywriting.

There's a little bit of that, but more than anything, you know, we have an abundance of ideas across our ecosystem at Sidecar, across, you know, all these other things that we do, and we certainly are open-minded to ideas that come from AI as well as anything else. But a lot of the way we're using AI is editing, and brainstorming, and collaboration.

And so there's really this human-AI partnership that's happening that has dramatically accelerated a significant work like the Ascend book. Um, but it's different than what a lot of people think. Some people might say, oh, you used AI for the book, so what'd you do? You just, like, typed into a ChatGPT prompt:

I want a book on artificial intelligence for associations. And you know what, you can get something out of that type of a prompt. It may not be that great. Um, it might be okay. Increasingly, we feel the value we create is where we can connect our listeners and our readers to our [00:11:00] experiences and our thoughts, uh, in this market, and how to apply these technologies.

Um, but then AI certainly is a great assist in accelerating that, and also, like we were just talking about, enhancing the quality of the work. So in any event, I'm super excited about this release. As Mallory said, the book is available for free download at sidecarglobal.com/ai. Um, that is a totally free download.

It's a PDF. We encourage you to download it and share it with anyone and everyone that you would like to. There's no, like, IP rights or issues. Our goal is to share this with as many people as we can to help them. Uh, and it is available in print and, uh, Kindle format on the Amazon store.

Uh, we do plan, by the way, to create an audiobook. We're investigating the steps to do that right now. Uh, we may actually hire a human, uh, you know, person to do that, but we're very likely to use AI. Eleven Labs actually has a service that will take a transcript like Ascend and convert it into an audiobook.

Uh, so we're looking into that as well. So more, more to come on that front. Cause a lot of listeners like to [00:12:00] listen to their books instead of just reading them. So that'll, that'll

Mallory Mejias: That checks out.

Amith Nagarajan: Yeah.

Mallory Mejias: Or watch on YouTube, whatever your preference. We'll figure out a way to get us in that medium. Today, we're excited to cover several exciting topics, as Amith said earlier. First, we're talking about Lumi, a brand-new AI storytelling startup platform. Then we'll be talking about Google DeepMind's mathematical AI models.

And finally, this will be interesting, we're talking about deepfakes in politics. Very timely conversation. So first and foremost, Lumi. Lumi aims to democratize storytelling by providing tools that help creators. Oop, let me back up on that. I didn't start from the top. Colin Kaepernick, the former NFL quarterback and civil rights activist, has launched a new AI driven startup named Lumi.

This platform is designed to empower creators, particularly those interested in producing comics and graphic novels, by leveraging AI to streamline the storytelling and publishing process. Lumi aims to democratize [00:13:00] storytelling by providing tools that help creators develop, illustrate, publish, and monetize their ideas.

The platform is subscription-based and offers various AI-powered tools to assist in the creation of hybrid written-and-illustrated stories. These tools allow users to do things like create characters, generate illustrations, and edit and publish. And maybe most importantly, creators can publish their stories directly on Lumi and monetize them.

They can order physical copies and create and sell merchandise based on their intellectual property. Another note here: creators retain full rights to their work on Lumi, which is a significant departure from traditional publishing models. Lumi also handles logistics like manufacturing, sales, and shipping, allowing creators to focus on their craft.

Kaepernick's vision for Lumi is rooted in his own experiences with media and publishing. He faced challenges like long production timelines, high costs, and issues with creators not having ownership over their work. Lumi seeks to address [00:14:00] these problems by providing a more accessible and equitable platform for storytellers. Amith, when I heard about this, I thought it was really exciting. And of course, I'm always kind of thinking on the two sides of the coin. So on one side, it's incredible that we're going to see this democratization of access to storytelling and story creation. But the other side of that coin is that this likely means we will eventually be seeing an incredibly oversaturated market, full of stories everywhere.

And that's the case for a lot of things, but I'm thinking specifically with storytelling. Do you see this as something that will kind of balance itself out in the market like it has in the past?

Amith Nagarajan: Well, you know, I think that obviously, you know, people's demand for different kinds of mediums will shift over time as the availability of content changes. I mean, certainly if you think about, like, how scarce high-quality content has historically been versus what we're seeing now. Uh, part of that is based on the market [00:15:00] economics of the fact that, you know, there are so many more dollars to be chased, uh, as a producer of content at the scale of someone like a Netflix or an Amazon or an Apple or the traditional, you know, media producers.

Um, but for smaller producers who traditionally wouldn't have had a platform at all, uh, I think a tool like this could come in and make it possible to tell stories just like, you know, I mean, the web originally made it possible for people to have blogs, right? And to, you know, for social media to make it possible for more people to connect.

So I do think there's very much a democratization story there that will result in more diversity of content, which will satisfy niches that are so small that mainstream content producers couldn't ever, you know, seek to fulfill them. Uh, so I think that's an exciting thing. And, you know, it's also, um, when we think about storytelling as a medium, um, it's a way of communicating, and it's not just about entertainment, which is where most people's minds go, I think, when we're talking about storytelling. Entertainment is obviously awesome and amazing, and it's a great part of our human [00:16:00] experience, but at the same time, um, it is an incredibly powerful medium for business as well.

So translating nonfiction ideas into stories, I think, is another way that this type of a platform could be really powerful. So coming back to your question, um, I don't know if there will be saturation. I think there, you know, there are ebbs and flows in a lot of these things. The more AI, the more content we'll get.

I think there'll be more great content in there and there'll be a lot of really bad content. So I think people filtering and then tools like AI tools, filtering out bad content and helping you find what, what matters to you is going to be more important than ever for sure.

Mallory Mejias: Absolutely. I had a note in here for myself that we have a whole section in Ascend second edition, very relevant, on storytelling as a way of marketing, like you just mentioned. I was thinking Lumi would have been a great addition to that chapter if the book wasn't already done. Why has this been a focus of yours, Amith, in the last few years?

And maybe it's been longer than that, but I do feel like it's been something more recent, at least in terms of how we've talked about storytelling.

Amith Nagarajan: [00:17:00] I've aspired to get better at this in my, you know, business communication skills for decades. I've heard for a long time how powerful storytelling is in marketing and from a sales perspective. If you're able to weave a story, weave a narrative through a presentation, or a pitch, or a demo of a product, you're more likely to connect with whoever it is that you're, you know, sharing your content with.

So if you're gonna go present to your board, are you just presenting the latest results from your association's financial performance in very dry, you know, black-and-white terms, or do you tell it in a narrative story? Um, and that doesn't necessarily mean to create fictional characters and have a story arc and all of that, but to use elements of storytelling, uh, to make, uh, ideas come to life in a different way.

So I've been interested in the idea. I don't consider myself to be particularly good at this really at all, but it's something that I've aimed to get better at and I've worked at over a period of time. And certainly when you're writing, I think there's an opportunity to do that, which is why we experimented with that idea in the first edition, uh, and [00:18:00] we'll probably do other experiments like that in the future.

Um, to me, ultimately, I think the question is like, you know, when you think about an association and what it's trying to do in connecting with people and, you know, in many cases, convey critical information to those people about things that are happening in their profession or their sector, um, storytelling, I think perhaps could play a role.

Um, and I'm actually certain it could play a role. And I think storytelling also could make content more accessible for people who may not have the patience or the attention for more traditional, drier content. Um, so I think it potentially opens up some doors. So that's what comes to mind, and why I've been focused on it overall for, for a period of time.

Mallory Mejias: Mm hmm. I wanted to dig into that just a little bit more. Uh, in working through this topic, I realized pretty much everyone knows what storytelling is. We all have this knee-jerk reaction for, an intuition of, what telling stories means. But I've struggled myself with bringing this down to the ground [00:19:00] in terms of marketing, in terms of business.

Um, I was thinking maybe are we talking storytelling about the history of an association or storytelling from the perspective of a member of an association? And maybe it looks like all of those things, but I'm curious, Amith, what do you, if you could provide an example or two of kind of how you see that in practice, storytelling in the world of associations.

Amith Nagarajan: I mean, I think a lot of it comes down to, like, what do people actually do in their life? Like if, you know, you go to an association's conference and you listen to a keynote or you listen to sessions presented. And, you know, sometimes they're super interesting. Sometimes there are story arcs in what people are presenting.

A lot of times when people say, Oh, my favorite speaker was so and so. A lot of times it's because that person actually used a story arc in the way they presented their content or they told personal stories that really illustrated key points. But what do people do once those sessions are over and they're at the lunch or they're at the cocktail hour?

A lot of times they're just, you know, they're connecting with other people and they're telling stories. They're telling stories from their lives, they're telling stories from their business. And that's what people tend to remember. Um, [00:20:00] because there's an emotional connection there more so than just like the dry information being passed.

Um, and so, like, in my own career, a lot of times when I've tried to share ideas with, you know, team members at various companies I've been involved with, um, if I had an experience that was particularly relevant, I wouldn't just say something like, oh, well, I think that, you know, entrepreneurs tend to, for example, have very outsized egos, because entrepreneurs have to have kind of big egos in order to even start a company.

Because, you know, it's kind of outlandish to say, I'm going to create a new company from scratch. You also have to be kind of like a little bit of a psycho in terms of your degree of optimism in order to be an entrepreneur, because it's extremely likely you will fail. And that's like the highest probability outcome.

So you have to be both kind of pretty decent sized ego and pretty optimistic. And that actually blinds you a lot of times to things that are right in front of your face. So I can tell you those theories, but I can also tell you a story about how, when I was a younger entrepreneur, um, I was back in California at the time where I grew up and I had this experience where, uh, once upon a [00:21:00] time we got literally a knock on our physical door, uh, by a couple of young guys.

And, you know, my, myself and my co founder at the time in my old business, we were really young guys. And these guys came by and said, Hey, Um, you know, you guys have a lot of bandwidth. Can you spare us some bandwidth? And we're like, no, you guys are a bunch of jokers. Like we don't want anything to do with you.

And they're like, no, no, no, no. We really need the bandwidth. We're like, well, just pay us for it. And we did. And they said, well, we'll give you stock. And we're like, no, we don't want your stock. It's gonna be worthless. Well, sure enough, we took their cash, not their stock. That company turned out to be eBay.

It would have been better to take their stock than their cash. And it was our giant egos that resulted in us not seeing clearly that, in fact, they were actually pretty far along in their journey. We just didn't pay attention. We're like, auction site? That's a joke. We're an enterprise software company. That's a toy, you know?

So we had such big egos that we couldn't see past ourselves. And so I tell that story. Everyone always remembers that story. They're like, oh, you jackass, why did you not, you know, get stock in those guys? I still feel that way. And I remind myself when I tell that story, much more so than the theoretical idea of, like, hey, [00:22:00] keep your ego in check.

Um, and think about things from a little bit more balanced perspective. And those of you that are listening to us now probably will make fun of me the next time you see me in person. And please do, 'cause it reminds me of my own learning experience. But the point is that stories like that are fun.

They're entertaining a little bit. Um, and they're educational in a different way. So can we do that at scale is the question. Can AI help us do that in new ways where we're taking either actual nonfiction things like that and turn them into fun pieces of content that scale well to billions of people, uh, or take fictional ideas, right?

And craft storylines around ideas. So I think there's so much power in this. I love it. By the way, as an aside, uh, you know, I grew up in California, like I just mentioned. I'm a lifelong 49ers fan, so some of you will love that, some will not. Um, and I'm also a fan of Colin Kaepernick. I think the idea of him in particular doing this based on his own experience, having a very difficult time telling his story after what happened to him in the NFL, uh, and, you know, the cost, the timelines, just the challenges, [00:23:00] all of that, uh, is a really illuminating thing, right?

And he's someone who has been able to tell his story. He's someone who has over time, I'm sure there's been a tremendous amount of perseverance on his part and other supporters of his to tell his story and to tell other stories like his. Um, but I think that's a really inspiring origin story for why this brand has come to the table.

I have no idea if this particular company will be the one that's successful or there might be many. Um, but I love the idea of a purpose driven company like this, that's trying to bring a technology to solve a particular pain point. Um, whether it's a great economic engine behind this, I have no idea, but I think it's just really interesting.

It's the, it's an interesting use case for AI for sure.

Mallory Mejias: I don't think this product is out just yet, but I did think it would be interesting to perhaps work with it on a second edition and see if we could come up with maybe a comic book for it or something a little bit different from what we've tried in the past.

Amith Nagarajan: Totally. I'd love to see that.

Mallory Mejias: And I've got to say, I always love the eBay story Amith. You told me that a while back. I'll never forget it. I [00:24:00] think I told it to our CEO Johanna earlier last week. It's a great one. So I'm glad you shared it with our listeners.

Topic two, Google DeepMind's mathematical AI models. Google DeepMind developed models capable of solving complex mathematical problems at a level comparable to top human contestants in the International Mathematical Olympiad, or the IMO.

The AI systems, named AlphaProof and AlphaGeometry2, have demonstrated remarkable capabilities in mathematical reasoning and problem solving. Together, these two models solved four out of six problems at the IMO, earning a score of 28 out of 42 points, which may not sound so good at a glance, but this score is equivalent to a silver medal, just one point shy of the gold medal threshold.

The AI system solved problems in algebra, number theory, and geometry, but did not solve the combinatorics problems. I had to look that up. That's the mathematics of counting and arranging. Think [00:25:00] permutations and combinations in math. To give you an overview, AlphaProof, the first model, focuses on formal mathematical reasoning.

It combines reinforcement learning with the Gemini language model and AlphaZero, a different model. It solved two algebra problems and one number theory problem, including the most difficult problem of the competition. And then AlphaGeometry is designed, of course, to tackle geometric problems.

It integrates LLMs with symbolic AI using a neuro-symbolic approach, and it successfully solved the geometry problem. The success of these two models at the IMO demonstrates that AI can achieve high-level performance in complex mathematical reasoning, a domain traditionally dominated by human intelligence.

So this one for me was pretty interesting, Amith, because we all know AI has not achieved reasoning just yet. That'll be a really big day on this podcast when that does finally occur, but it seems like AI is reasoning, right? When handling these really [00:26:00] complex mathematical problems. So can you kind of explain that a little bit, how we're seeing the illusion of reasoning without having it?

Amith Nagarajan: Well, in the context of language models and also language vision models, all the things that consumers interact with, ChatGPT, Claude, Gemini, etc., that is where we're seeing a facsimile or an illusion of reasoning. Because these models, again, are not actually reasoning. They're just predicting what they should say next, essentially.

And we've covered that in our Fundamentals of AI, um, parts 1 and 2, pods and videos on YouTube. And the idea basically is simple: these models are complex statistical programs, essentially, that just guess the next word, right? But with high levels of accuracy, and that's why they're so good. So it makes you feel like they're actually reasoning, but they're not.
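The guess-the-next-word idea Amith describes can be sketched in a few lines. A toy illustration only: the vocabulary and probabilities here are invented, whereas a real language model learns them from enormous text corpora.

```python
# Toy sketch of next-token prediction. A real model learns these
# probabilities from training data; here they are hard-coded.
NEXT_WORD_PROBS = {
    ("the",): {"cat": 0.6, "dog": 0.4},
    ("the", "cat"): {"sat": 0.7, "ran": 0.3},
    ("the", "cat", "sat"): {"down": 0.9, "up": 0.1},
}

def generate(prompt, max_words=3):
    """Greedily append the most probable next word at each step."""
    words = list(prompt)
    for _ in range(max_words):
        dist = NEXT_WORD_PROBS.get(tuple(words))
        if dist is None:  # no learned continuation for this context
            break
        words.append(max(dist, key=dist.get))
    return " ".join(words)

print(generate(["the"]))  # the cat sat down
```

Nothing in the loop "reasons" about cats or sitting; it only follows the statistics, which is exactly the point being made.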

They're not actually, uh, using math. They're not actually using any particular, you know, approach that is grounded in any structured thinking or process. Now, future models are actually doing that. They're overlaying, uh, [00:27:00] these other concepts on top of language models. What you're seeing here in these particular narrow domains of AlphaProof and AlphaGeometry is actually kind of like what we've covered in the past as mixture of experts models in the context of language models.

It's a little bit different here. This is basically hybrids where you're taking the strengths of language models, which is not reasoning, but then you're using, for example, symbolic AI or using deterministic decision based systems, which are other branches of computer science, in some cases AI, in some cases not AI.

that are really good at specific things. And what you're able to do is use the strengths of these different types of technologies to solve problems that they individually would not solve. That's really what's happening here. Um, there are fundamental innovations at the individual model level that Google, or DeepMind, which is a branch of Google, has come up with.
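That generate-and-verify pattern can be sketched in miniature. This is a stand-in, not DeepMind's actual method: the problem, the "generator," and the checker are all invented for illustration.

```python
import random

# Toy hybrid system: a fallible "generator" (standing in for a learned
# model) proposes candidates, and a deterministic symbolic checker
# verifies them with exact arithmetic. Invented problem: find x such
# that x*x + x == 42.

def symbolic_check(x: int) -> bool:
    """Deterministic verifier: exact integer arithmetic, no guessing."""
    return x * x + x == 42

def candidate_stream(rng: random.Random):
    """Stand-in for a learned proposer: emits plausible candidates."""
    while True:
        yield rng.randint(-10, 10)

rng = random.Random(0)  # seeded so the run is reproducible
solution = next(x for x in candidate_stream(rng) if symbolic_check(x))
assert symbolic_check(solution)  # solution is 6 or -7
```

The division of labor is the point: the proposer can be wrong as often as it likes, because nothing is accepted until the symbolic side proves it correct.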

But ultimately, what you're seeing here that I think is most exciting is essentially combining models together to solve problems that the models individually would have no chance of solving at this level. Um, [00:28:00] so coming back to the question of reasoning, these models actually are performing reasoning, because they're applying a step-by-step way of breaking down a complex problem and solving it.

Um, but it's a very, very narrow use case of reasoning. So they are reasoning, but they're not reasoning on anything you throw at them. These are not models that consumers can interact with and say, hey, I'm going to ask it anything I want. They're specifically designed for this set of problems.

Um, so in that sense, they are reasoning. Because if you think of reasoning in its most simplistic definition, breaking down a complex problem into a step-by-step solution and executing that solution, that is what these tools are doing. Um, but again, in a very narrow domain. That doesn't take anything away from the achievement, because the achievement is truly extraordinary.

I mean, you know, a silver medalist in the Math Olympiad, four out of six. Like, I'd get zero out of six personally, and I'm somewhat decent at math, but I'd probably completely fail at it. So the point is, it's really, really smart in this domain, and that means [00:29:00] that if you just kind of extrapolate what's happening in the world, you're going to see broader and broader examples of this that blend true reasoning with the broader capabilities of these language models.

Um, so I think that's what's exciting. Um, the other thing I would say here is, where math goes, so does science. So remember that math is essentially the foundation for just about everything in terms of scientific pursuit. Um, if math can advance at a nonlinear rate, essentially, if math can be done by AI, where you have original, novel math being done by AI, then there will be original, novel scientific breakthroughs.

Also, you know, originated by and executed by artificial intelligence. So that becomes super exciting. So you could say, hey, let's do a deeper version of this in other domains we've talked about in the past on this pod. Um, we've talked about models at some length, things like AlphaFold. Uh, we've talked about weather models.

We've talked about, uh, models in other domains. And so, like, in material science, for [00:30:00] example, you're going to see this type of capability push those other kinds of scientific models, um, you know, further in terms of their ability to solve novel problems. So that I find super, super exciting.

Mallory Mejias: This makes me think of an example that I saw in our prompt engineering mini course led by Thomas Altman, or perhaps it was a session he led last year sometime, but giving ChatGPT a really simple math problem, something where you give apples, you take away apples. And if you don't tell it to think step by step, it gets the question wrong.

And if you simply prompt it to think through the problem step by step, it will actually get the problem right. And I've seen that play out in action. I'm, of course, not comparing ChatGPT to either of these two models, but I'm wondering, from your perspective: do you think this is better training with these two models?
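The pattern Mallory describes is often called chain-of-thought prompting, and the change is just to the prompt text. A sketch, with illustrative wording; the step list at the end mimics the intermediate reasoning the prompt nudges a model to spell out:

```python
# Sketch of the "think step by step" prompting pattern. The question
# and prompt wording are illustrative; real model behavior will vary.
question = "You have 5 apples, give 2 away, then receive 4 more. How many are left?"

# Variant 1: ask directly; a model may jump to a (sometimes wrong) answer.
direct_prompt = question

# Variant 2: one added instruction elicits intermediate reasoning steps.
cot_prompt = f"{question}\nThink through this problem step by step."

# The intermediate steps the model is nudged to produce look like this:
steps = [("start with", 5), ("give away", -2), ("receive", 4)]
answer = sum(delta for _, delta in steps)
print(answer)  # 7
```

Writing out the deltas one at a time is trivially easy; the observation is that models, like people, make fewer slips when the arithmetic is decomposed that way.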

Do you think this is more fine tuning, or do you think this is something else altogether?

Amith Nagarajan: Uh, no, it's different types of models being brought together. So it's not [00:31:00] so much different training or different fine tuning. It's different algorithmic approaches that are combined together. And then there's obviously some supervisor algorithm of sorts that brings it all together to actually solve the problem in the context that's being described.

So, like, we talk a lot about the transformer-based architecture and large language models, as well as others that are sitting on top of that basic innovation that happened way back in 2017. And a lot of the advancements that have occurred since then are based on that architecture. Um, that's the architecture that essentially is what a lot of people think of as quote-unquote AI, where, you know, you have this predict-the-next-token, predict-the-next-word concept.

And as amazing and as powerful as it is, it's just one step in the evolution of this stuff. There are many, many other things going on in parallel with that that are completely unrelated to the transformer architecture. They may, you know, have shared scientific roots in some ways, but, um, there are lots and lots of branches of this stuff happening at the same time.

It's just kind of like, you know, prior to the ChatGPT moment in late 2022, most people had never heard of a transformer architecture or the [00:32:00] idea of a GPT or any of this stuff, even though it had been around for several years before that. Um, similarly, you're seeing some things now start to bubble up that are really interesting from a research perspective but aren't yet commercially impacting the world.

They might not impact an association day to day, but they are things you should pay attention to, because ask yourself this question: if you knew, even in 2021, that by the end of 2022 the world would be completely different with AI and ChatGPT, what could you have done differently to be better prepared?

Right? So similarly here, we know that some of these innovations, by next year or the year after, are going to have, once again, radical impacts on our world. So if you are an association that is in any type of scientific discipline, you're in engineering, you're in architecture, you're in a branch of science, or you're in education, um, you have to look into this stuff because it's going to affect your field.

Will it affect the administration of your association? Maybe to some extent, because just generally the AI will become smarter and more capable of reasoning, because these kinds of mathematical [00:33:00] models will actually make their way into consumer models over time as, essentially, accessory models, if you will, to the main models. Something like GPT-4o may ship in its next release with a whole bunch of coprocessors. In fact, that's how computer architectures have worked for a long time, where you have your central processing unit, your CPU, which is the main brain that runs your computer, but then, you know, quite a few years ago, we started getting coprocessors, the most famous of which is the GPU, which is now powering AI, but it's really good at graphics.

Um, and so you're going to have that happen with AI as well. It's not going to be a one-size-fits-all thing. And again, going back to the association context, um, it means the models are gonna have more power for your business as well. But in your domain, maybe you're not the world's greatest expert in the content of your domain within the staff of your association, but you have to be aware of this stuff.

Uh, so at a fundamental level, that's the most important thing to be knowledgeable about, uh, and to follow it. I think it's just fundamentally interesting in terms of where it will take [00:34:00] these models within 12, 24, certainly 36 months.

Mallory Mejias: So you think what's most impressive about this topic for today is the fact that we have multiple AI models working together to give us a really narrow sense of reasoning, kind of in this one domain of math.

Amith Nagarajan: Think, think about it maybe a little bit more abstractly because the math part is cool, the science part is cool, but think about it a little more abstractly. If all I can do is use everything I've ever read, listened to, watched, or heard to predict what should come next, kind of definitionally, I can't really create something new.

All I'm doing is predicting what should come next based on what I've been trained on, right? So as a human being, everything I've read, everything I've seen, everything I've smelled, et cetera, all of that collectively gets somehow stored in my brain, whatever I've retained. And that's going to inform my thinking in terms of what's next.

If that's all I could do, right? If all I could do is predict the next token, and sometimes that's exactly what I do. But, um, the point is that we can create new things [00:35:00] that aren't necessarily based on our training data, if you will. They're based on ideas, or based on logic, or based on knowing certain, uh, you know, fundamental ideas of how you can reason through solving a novel problem.

You know, how do you take flight, right? How do you put a man on the moon? How do you create a vaccine? Right? These are all novel breakthroughs. Or, you know, how do you solve problems in economics, or whatever it is that you're doing? How do you come up with the idea for a story that you want to write?

Um, these are not necessarily based on your training. Of course, your training data influences it, but it's not so much that you're predicting what's next based on what's come before. So there are these other models out there that are capable of applying true reasoning, where they have the definition of a problem and are able to break it down into components and come up with a novel solution.

That is what's truly remarkable about this. And in the case of AlphaProof and AlphaGeometry, they're doing that within a very narrow domain. If we can broaden that [00:36:00] somewhat, and we can create novel creations, whatever they are, then the AI goes way, way beyond what we have now. Because what we have right now is amazing.

Like we didn't have it even two years ago in a consumer sense. Um, it's powerful, but being able to create new things from scratch that aren't necessarily, you know, the natural extensions of what happened before is what gets me excited.

Mallory Mejias: That makes a ton of sense. Do you think the path to AI reasoning is seeing more kind of narrow use cases of it pop up across the sphere until they all kind of merge into one? Or do you think we'll just suddenly have general reasoning?

Amith Nagarajan: Generalization is the hard part that I think no one has figured out at the moment, that I know of. There is no leading theory that looks like it's going to be the ticket to generalization of knowledge. But that's what we do, right? Like, we have an experience and we generalize it to something else. Uh, early this morning, I was out here on the lake near my house in Utah, and I was doing this thing called e-foiling, which I think we've talked about before. It's one of the fun things I love to do when the weather's a little bit [00:37:00] warmer, and a friend of mine was out there with me, and he brought his 11-year-old, which was super fun. So his 11-year-old went out for his first e-foiling session this morning, and this kid, of course, he'd never been on an e-foil, he's never surfed either, um, but he has skateboarded. Um, and so I'm like, oh, okay, well, tell me about that.

And I actually don't have any experience with skateboarding other than like falling on my face a few times when I was a kid and then deciding I didn't want to do that anymore. So, um, but I tried to relate to him like, okay, well just pretend you're on a skateboard and figure it out is what I told him before he went out there.

And of course he's 11 and has no fear and is very athletic kids. So he figured it out and within five minutes he was like, you know, cruising around. It was pretty cool. Um, but that kind of generalization, an 11 year old and a lot of 11 year olds can do that, right? That doesn't make him a world class athlete.

It just makes them. You know, probably a typical 11 year old kid. That's a little bit on the athletic side. And we can do that all the time with all sorts of things, right? Um, you know, so, um, when we are able to generalize like that, plus the other, that example is, you know, very much like in the world, right?

That's the world model. And that's part of what we've talked about before: these [00:38:00] language models have not to date had a world model, where they understand 3D, they understand the world around them, they understand physics, um, beyond just having read a physics textbook. So, like, this 11-year-old, if he were the next Isaac Newton and he'd read all the physics textbooks in the world, but had never ridden a skateboard, and he was the most brilliant genius but had no experience in the real world...

Would he have been able to get on the e-foil and ride? Probably not, right? Because he lacks that real-world experience. It's the same thing with models. So, uh, a world model is one piece of it, and then generalization is another piece, along with reasoning. Reasoning is one piece that we've gotta add, but beyond that, there's more to it than that.

And I think, you know, that's where the next breakthroughs need to occur, and that's probably, like, I'll say, five to ten years out. But, you know, maybe it'll happen in the next twelve months, who knows. Better AI begets better AI, right? So, like, we have tools now that are far more powerful than anything we've ever had in our history.

And that means that we're going to keep innovating faster, [00:39:00] which is exciting and scary at the same time. Right?

Mallory Mejias: Absolutely. That's really helpful, though, that idea that reasoning is just one piece of it, whereas I think I was thinking it was the piece. But you're right, how could it e-foil? It couldn't.

Amith Nagarajan: Yeah. And it's a critical piece, um, of this, like we talk about the path to AGI, right? And you say, okay, well, these models cannot reason at all right now. So of course, having reasoning is like a very natural next step. Generalization would be another next step. Maybe it happens after reasoning. Maybe it happens concurrently.

And the idea of a world model, people are already working on. That's one of the reasons these multi-modal models are so important: they're trained not just on text, but on images, on video, and on other forms of data. Um, and that's actually one of the reasons a lot of people look at companies like Tesla and say, hey, they have a really interesting advantage with respect to AI, because they have trillions of hours of training video that no one else in the world has, from real-world physics happening on the road.

Mallory Mejias: Well, speaking of [00:40:00] Tesla, Amith, um, in our next topic, we're actually talking about Elon Musk in regards to deepfakes and politics. You all may have seen this in the news recently, but Elon Musk faced some criticism for sharing a deepfake video of Vice President Kamala Harris on his social media platform X, formerly Twitter. The video, which was originally posted by a podcaster and labeled as a parody, was manipulated to make it appear as though Harris was making derogatory remarks about President Joe Biden and herself.

When Musk shared the video, he did not include any disclaimer indicating that it was a parody or manipulated content. Instead, he captioned it with quote, this is amazing and a laughing emoji, which led to widespread criticism for potentially misleading his vast audience of nearly 192 million followers.

The video quickly garnered millions of views, of course, raising concerns about the spread of political disinformation, especially leading up to this presidential election. Critics pointed out that Musk's actions appeared to violate X's own policies, which prohibit sharing [00:41:00] synthetic, manipulated, or out of context media that could deceive or confuse people and lead to harm.

Musk defended his actions by asserting that parody is legal in America. The incident highlights the growing concerns about the misuse of AI in creating deepfakes and the challenges social media platforms face in regulating this type of content. Now, Amith, we predicted something along these lines in our 2024 predictions episode of this podcast that we released in late 2023.

I believe the prediction was that a major news platform would share a deepfake video without initially realizing that it was a deepfake. So this isn't exactly that, but it's certainly in that same vein. What were your thoughts when you heard about this?

Amith Nagarajan: Well, you know, first of all, Musk will be Musk, and, like, love him or hate him, he is who he is. So I think he's going to keep doing this kind of stuff, and, um, people should pay attention to his content. Or, I should say it this way: if you're paying attention to his content, you should be aware that a [00:42:00] lot of it is going to be like this. Um, that doesn't necessarily excuse him for not having disclosed that it is a deepfake. Uh, I wouldn't want to do that without disclosing that this is not only a deepfake, but something you should be really aware of.

Um, so I think he, and people like him who have a large follower base, have the opportunity to do tremendous good by helping educate people that these things are super easy to create. Um, my thought on this particular piece is, yeah, it's exactly the kind of stuff that people will watch, and a lot of people will not realize it is fake.

I don't know what that percentage would be, but, you know, I'd be surprised if it's a very small percentage. I think that a good percentage of people would view that and say, oh yeah, this is a real thing. Um, so that's scary, right? That's the potential to influence millions of people, um, on lots of topics.

So, you know, we called that out in our predictions pod, as you mentioned, because, um, the year's not over yet. I still think there's a good chance that some media [00:43:00] source will unknowingly, uh, air a fake video. And I think, you know, something as obvious as this might be different, but it might also be something subtler. Think about the more subtle shifts that you could make: you take a political actor like that and you say, okay, well, we're going to change their position ever so slightly on particular topics to create confusion and to show that they're, you know, disingenuous about certain things, right?

They say that they support, you know, topic A, and then they start hedging that, when in reality they never hedged it, right? And so then how do you know what's real and what's not? Um, so I think the most dangerous types of deepfakes are the ones that are actually just slightly off, because then they start taking people down a path where they believe it. You could see these accounts getting a lot of followers and then people believing that they're real.

Um, so to me, that's the real concern. And I think, you know, we have to be talking about this stuff, because again, with AI, there is no solution at this moment to detect deepfakes. There's no such thing. Um, there are attempts at creating watermarks for authenticity so that you can [00:44:00] positively assert, uh, essentially the provenance of a video or an image, um, or any kind of an asset, you know, to say, hey, this is a real production of person X.

I think it's going to become more and more important to actually not trust anything by default and to verify everything by default, particularly from public figures like this. To me, that's probably the most realistic solution. Maybe that's a blockchain-based solution. Maybe it's something else.

Um, but AI is not able to just automatically detect deepfakes for you. So don't assume that what comes to you over the air, through cable, through streaming, or on your device is real. That's a sad state of affairs, perhaps, in some ways, but I think the only way you can really approach it is with that very high degree of skepticism.

Mallory Mejias: I suppose it's part of our due diligence as humans on this earth now to be really critical of the information and the news we're consuming. It definitely seems overwhelming to have to think that potentially everything we see may not be legitimate, but I think you're right. Until we have a solution, which [00:45:00] we don't right now, that's probably how we have to move through life.

Amith Nagarajan: Yeah, I mean, I think there will be solutions, but it's the cat-and-mouse game, the move, counter-move thing, where there will always be tools that are somewhat better than what people are able to detect. Depending on your level of commitment and resources, you can stay just slightly ahead and, you know, try to scam people. That's been going on for a long, long time, across a lot of domains, well preceding the internet and digital-based approaches like this, and this is just the latest variety of it.

It's just, you know, an AI-powered version of it. Scamming like this is super, super scary. I think there are also hyper-personalized deepfakes that you have to be aware of. It's not just political motives to influence elections, which are obviously problematic enough. It's getting a fake phone call from a loved one telling you they're in trouble and they need you to send money to help them out.

Um, you know, how do you address that? We have to be prepared for it. You know, you probably need to talk to your family, certainly talk to your colleagues at your business, [00:46:00] maybe set up some old-school rotating codes that you and only you would know, that are not even anywhere on a computer, that are handwritten by people.

Um, and you use those as, like, monthly rotating codes to verify your authenticity. Right? Of course, that can be hacked, too. That can be guessed, predicted, because we're all very predictable machines as people. But it's better than nothing, right? So there's just a lot to unpack here in terms of what to do about it.
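A digital cousin of the rotating-code idea is the standard TOTP scheme (RFC 6238), where two parties derive a short-lived code from a shared secret and the clock. This is a different mechanism from the offline handwritten codes Amith suggests, shown here only to illustrate the principle:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password using HMAC-SHA1."""
    counter = int(at // step)  # which time window we're in
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both parties holding the same secret derive the same code for the
# same time window, so a caller can be challenged to read it back.
secret = b"12345678901234567890"  # RFC 6238 test secret
print(totp(secret, at=59, digits=8))  # 94287082 (RFC 6238 test vector)
```

As Amith notes, any shared-secret scheme can still be compromised if the secret leaks, but it raises the bar well above an unauthenticated phone call.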

Uh, so, you know, in a way, Musk's action, and I don't think this was his intention, but creating a big controversy out of this ultimately, hopefully, creates a lot of noise that people can't ignore: hey, there's this thing out here that you have to pay attention to. Like, I don't think he thought eight steps ahead like that.

I think he just shared it because he felt like it. Um, and that's what he did, right? But when someone with that kind of a followership does that, and then people figure it out, hopefully it's increasing awareness as a byproduct of that action.

Mallory Mejias: We can all [00:47:00] agree on this podcast that deepfakes in politics are pretty much bad across the board. Deepfakes with scams, of course, bad. Deepfakes of loved ones, bad. I think something that is interesting to consider, though, is that the word itself, deepfake, has a negative connotation. You hear that word and you don't often think, ooh, all the positive use cases of deepfakes.

But it was on one of our monthly Intro to AI webinars that we had someone ask us about HeyGen, which is a tool where you can create an AI avatar of yourself. And they asked, are you deepfaking yourself? And I had to think about it for a second, because again, my mind went to the thought of, no, deepfakes are bad, we're not doing that.

But Thomas confirmed, yes, we are deepfaking ourselves. And it got me thinking, and got me to look at this from a different angle. I know you and I have talked about the use case of creating an AI avatar of yourself and being able to send one-to-one personalized videos from you to a member, for example.

Do you think it's always best practice to disclose that when you're doing such a thing?

Amith Nagarajan: I think you're getting into a little [00:48:00] bit of subjectivity there in terms of what's best practice. I personally believe it's a good idea to disclose, uh, something that's not, you know, human-created versus AI-assisted or AI-created. You know, it's like in the context of Ascend, right? We talk all the time about it, and of course it's the perfect place to showcase how to use AI. Um, I think it's really important to disclose that.

That's my personal opinion. I don't know that, you know, that's the right answer for everyone in all contexts. But, um, deepfake, just to break down that term: the fake part is probably obvious. I mean, all this stuff is fake because it's not authentic, human-created content.

And then the deep part just comes from deep neural networks, which is the technology all this stuff has been based on for about 12 or 13 years now. So there's really nothing meaningful in that term other than that it's been the term of art in the popular consciousness.

But yes, we're deepfaking ourselves when we use an AI avatar, when we use video editing. All of that is in that category. And so you can use it to improve your content. You can use it to hyper-personalize content, sending [00:49:00] that one-to-one video that you just described. Um, one other thing, actually, just going back to the realm of politics real quick: you know, people think, oh, the deepfake is going to be used by an opponent or a state actor that wants to influence the election to essentially undermine a candidate. And that might be the most initial, like, you know, blunt-force, simplistic way of trying to kill off the candidate or whatever, right?

It's like the most obvious thing. And some people will be influenced by that, probably a lot of people. It might work. It might work for a while. But what about, um, what we said earlier, the more subtle attacks, where you're shifting someone's, um, perspective ever so slightly over time? And a different use of it is, let's say you have a candidate that is not doing well, because let's say they're aging and they're having a hard time presenting themselves well, but they don't want to be perceived that way. And so they use deepfake technology to take a highly coherent, wonderfully orated, beautiful message to the public and [00:50:00] say, this is me.

This is me debating someone. This is me speaking, right? It could be used to amplify, and to hide or cover up issues, right? So it can be used in a lot of different ways, on offense, on defense, and all sorts of other ways. So I think we have to be really thoughtful about it. Coming back to the business use case, I think as long as you're responsible with your use of it... Like, hey, in the book, we talk about, um, this idea of really just translating content.

Um, there's this whole section of the book, or a chapter of the book, where we talk about translation. When people hear translation, their brains immediately go to, oh, English to Spanish, Spanish to French, French to Chinese, whatever. And of course that's a great use case, but what about translating in this context, right?

Taking a message that was maybe tailored to, uh, you know, an experienced professional group, and now we want to tailor that content to an emerging young leaders group, people who are right out of college, who use maybe not a different language, but who perhaps communicate differently in some ways, right?

I could certainly [00:51:00] relate to that with teenagers. So, um, I guess the point would be there are lots of ways of leveraging this technology for good and creating content that serves people. People know when they're doing it for good, I think, and people know when they're doing it for something other than that.

So the question is, like, with all tools: you know, what are people gonna do with this stuff? There is the side of it, can we detect it? But then there's also the side of it of, hey, we're disclosing it. So, uh, it's a super interesting topic. Um, I wish I had better answers than this, but, uh, I think we're gonna have to just wade through this muck as a society.

Mallory Mejias: Yeah, it goes back to the saying of, kind of, the more we learn about this stuff, the less we know, and the more questions we end up with, which is, I think, a good thing overall, and that we're having this discussion live on the podcast. But certainly, something I always come back to is being able to leverage AI to create more stories, to create more personalized interactions, but then realizing, too, that it's [00:52:00] also kind of missing that human connection piece, if that makes sense.

Like sending a video of me to someone and disclosing that it's AI, does it still carry the same weight as if Mallory took the time out of her day to record a video for you? I don't know. I don't have the answer to that.

Amith Nagarajan: Yep. That's where I think the, uh, the philosophers need to come in and help solve those, those problems. So.

Mallory Mejias: Maybe that'll be, uh, a guest soon on the Sidecar Sync Podcast. Well, Amith, this was a really great convo today. Thank you for sharing your insights. And everyone, please check out Ascend, second edition. You can access it at sidecarglobal.com/ai for free right now, and we will see you all in next week's episode.

Post by Emilia DiFabrizio
August 1, 2024