
Timestamps: 

00:00 - Introduction
02:00 - Preview of digitalNow 2024 Conference
06:45 - Google Notebook LM Overview and Features
12:30 - Experiment with Notebook LM on Sidecar Content
21:50 - Insights into XRX Framework and Applications
30:15 - AI Multimodal Capabilities in Associations
38:00 - How LPUs Improve Real-Time AI Interactions
45:10 - Closing Thoughts and digitalNow Contest

 

Summary:

In this episode of Sidecar Sync, Amith and Mallory dive into the latest AI trends impacting associations. They preview the upcoming digitalNow conference and showcase the impressive keynote lineup, including experts from Google, the US Department of State, and more. The episode also features an exciting exploration of Google Notebook LM, a new tool designed to help users organize and interact with documents using AI, along with an overview of XRX, an open-source framework enabling multimodal AI solutions. Listen in to learn how associations can harness these tools to boost productivity and innovation.


🎉 digitalNow 2024 Contest:

Post about this episode on LinkedIn! Share something you learned or a cool tool you're going to try, tag Sidecar (https://www.linkedin.com/company/sidecar-global), and tag #digitalNow. Each post is an entry, and two winners will receive free passes to digitalNow 2024.

Note: Every post counts as one entry. The contest ends on October 4th.

 

Let us know what you think about the podcast! Drop your questions or comments in the Sidecar community.

This episode is brought to you by digitalNow 2024, the most forward-thinking conference for top association leaders, bringing Silicon Valley and executive-level content to the association space.

Follow Sidecar on LinkedIn

🛠️ AI Tools and Resources Mentioned in This Episode:

⚙️ Other Resources from Sidecar:

 

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He's had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey. Follow Amith on LinkedIn.

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Follow Mallory on LinkedIn.

 

Transcript:

Amith Nagarajan: Welcome back to the Sidecar Sync. We are excited as always to have all of you with us to explore the intersection of artificial intelligence and associations. Uh, my name is Amith Nagarajan and we are your hosts. Before we get going on our interesting set of three topics at that intersection of AI and associations, let's take a moment to hear a quick word from our sponsor.

Mallory Mejias: Amith, how's it going?

Amith Nagarajan: It is going great. You know, it's a busy day. Um, but, uh, you know, down here in New Orleans, I think, uh, there's like been a role reversal because I think you guys are thinking about hurricanes today.

Mallory Mejias: I was going to say, of our listeners, if you've listened to the Sidecar Sync before and you had to guess which one of us was about to experience a hurricane, you would probably guess Amith, because he lives in New Orleans. But we have a hurricane coming through Atlanta this weekend, I think maybe hitting like Friday, Saturday. I will say I'm not all that worried about it, but on that same note, after having grown up in Louisiana my whole life, there's a part of me that's nagging, like, do, do I go stock up on water bottles?

You know, all the, all the hurricane prep.

Amith Nagarajan: Yeah, I think the Louisiana thing probably desensitizes you just a little bit to the impending doom of a hurricane, maybe, but I don't know. You guys are a little bit inland, so it shouldn't be too, too bad. And hopefully it won't cause a lot of damage along the way for folks that are in the path on the coastal community.

So we'll obviously be keeping all those folks in our thoughts and hopefully no one who's listening is directly affected. But, uh, you know, hurricanes are definitely no joke. But here in Louisiana, and specifically in New Orleans Um, there's a reason there's a drink called the hurricane, and uh, it is, tends to be where people head when hurricanes are headed here, as opposed to doing anything in terms of active preparation for the storm, so.

Mallory Mejias: Exactly. Well, at least for the time being, I don't think we have a podcast planned for during the hurricane. Though maybe just for fun times. Maybe we should schedule one.

Amith Nagarajan: Yeah, seriously.

Mallory Mejias: Amith, I'm going to quiz you really quickly. Do you know what number episode this is for us? I'm putting you on the

Amith Nagarajan: Well, I could cheat and look it up by looking at the show notes that you've prepared, as you always do, uh, so helpfully, but, uh, it is actually kind of clipped off of my screen, so I don't know, I think, are we close to 50?

Mallory Mejias: We're really close to 50 today, or well, tomorrow when this episode is released, we will be releasing episode 49 of the Sidecar Sync podcast.

Amith Nagarajan: Well, 49 is a great number. Um, it's my favorite team in the NFL. Unfortunately, they're not doing very well at the moment, but, uh, that's exciting.

Mallory Mejias: 49 episodes. It's crazy to think about. We've done one every week, as we've talked about in the past few weeks, and I feel like we've come a long way. I mean, I think the goal and the vision with the podcast is certainly still the same, but we've evolved and grown. And Amith, I don't know if I've ever asked you this, but had you done, had you ever had your own podcast before?

Is this a first for you?

Amith Nagarajan: Nope. I have never done a podcast before. I've interviewed on tons of them over time, but I've never been part of a podcast production as a host or co host or anything like that. So this is definitely a first for me.

Mallory Mejias: Yep. So in honor of the 50th episode coming up, we wanted to do a little contest for listeners and, or viewers of the Sidecar Sync podcast. So for our listeners and viewers, for you and one colleague, potentially you could win two passes to digitalNow 2024, which is October 27th through 30th in Washington, DC.

And we're going to talk a little bit more about that conference actually as topic one, but here's what you have to do. We're also going to write down the instructions in the show notes. You have to post on LinkedIn about the Sidecar Sync. It could be something you like from this episode, something you learned, a cool tool that you're going to try out.

Post about this episode on LinkedIn. Tag Sidecar and hashtag #digitalNow. Every post that you make is one entry. So theoretically you can have unlimited entries if you want to post all day, every day, about the Sidecar Sync podcast on your LinkedIn. And we're wrapping up this contest on October 4th. So you've got a little over a week.

We're really excited to launch this and I can't wait to see all your posts.

Amith Nagarajan: Can posts be AI generated?

Mallory Mejias: I will leave that to the discretion of our viewers and listeners, but I would say they should probably be AI generated. If you're a really good listener, they should probably be AI generated.

Amith Nagarajan: For the record, I'm all in favor of that.

Mallory Mejias: Well today, as I mentioned, first and foremost, we're gonna talk about digitalNow, that event that is coming up for us next month, believe it or not. And then we're gonna talk about Google Notebook LM and share kind of a fun experiment we ran with it at Sidecar. And then finally, we are talking about XRX.

So first and foremost, digitalNow. We've mentioned it a few times, many times on this podcast before, but digitalNow is our flagship annual in-person event for Sidecar. We bring Silicon Valley and executive-level content to the association space, and it is going to be, for the first time ever, in Washington, D.C., because as we all know there are many associations there, but we are thrilled to be bringing it to the capital of our nation. The theme this year is Exponential Associations: Building a Foundation for the Future. So we're going to be pulling on a lot of the same topics that we talk about on this podcast, but the idea of kind of reevaluating what it means to be an association right now, what that might look like in the future, and the steps you have to take to become that exponential association so you can not only survive in the future but thrive as an organization.

So to give you kind of an event overview, the event kicks off on October 27th which is a Sunday. We've got a registration period and then a welcome party. And then we have two full days of sessions on Monday, October 28th and Tuesday, October 29th. We've got some fantastic keynote speakers that I'm going to go through really quickly here.

Amith, you'll be kicking off the event as our first keynote speaker. We've got Thomas Altman, who's the co-founder of Tassio Labs and one of the creators of several products that we've mentioned on this podcast, like Betty Bot and Skip. We've got Sharon Gai joining us on the keynote stage, who is an AI and e-commerce expert who formerly worked at the Alibaba Group.

Denise Turley, who's the VP of corporate systems at the U.S. Chamber of Commerce. On day two, we've got Gio Altamirano Rayo, chief data scientist at the U.S. Department of State; Robert Plotkin, who's an AI patent attorney; and Neil Hoyne, who's been on the podcast, along with Robert, actually, and Sharon. Neil Hoyne is the chief strategist at Google.

We're so excited to have him joining us at digitalNow this year. Dr. Param Dedhia, who's an integrative medicine physician and sleep specialist. We'll talk a little bit about that in just a second. And then we also have John Spence joining us, who's a leadership expert and executive coach. We have a keynote panel on both days with an excellent facilitator and consultant in the association space named Mary Byers.

And then in the afternoons on both days, we've got some exciting breakout sessions lined up. I've jotted down the names of a few just to give you a little teaser. One of those is Bridging the Gap: Integrating IT and Data Strategy for Transformative Business Outcomes. Another is Need for Speed: You Might Not Be Thinking Big Enough.

From Buzz to Impact: Practical Strategies to Elevate Your Value. Innovate the Lean Way. Getting Non-Dues Revenue Right. And then we will also have an exciting interview lined up with Dale Cyr and Juan Sanchez from Inteleos, along with Mary Byers, who we've also had on the podcast in a previous episode.

On Tuesday, October 29th, we have our download party at a nearby venue called the Roofers Union. That'll be a great couple hours that we spend together socializing, networking, enjoying some good food and drinks. And then Wednesday morning, we have an AI innovation showcase, where we're highlighting associations that are doing really innovative work with AI, and a key takeaway session to consolidate everything you've learned from those fantastic keynote speakers and breakout sessions and take it back to your organization.

So Amith, that was kind of fun. So, um, before we get into a big overview, what was your inspiration for bringing digitalNow into the Sidecar world?

Amith Nagarajan: So Mallory, um, I have been part of digitalNow as a speaker or sponsor prior to when we acquired the company and brought it into Sidecar, and I always had the highest level of regard for what the founders of digitalNow, uh, had done, uh, two gentlemen by the name of Don Dea and Hugh Lee, who had been, uh, consultants and advisors to the association community for many, many years, whom I had tons of respect for. And they built this program to, uh, really try to bring things into the association community that were at the forefront of technological disruption and leadership change.

Uh, and I think they did a fantastic job for 20 years. And as those guys were kind of heading towards, uh, the latter part of their career on a full time basis, and also the pandemic hit at the same time, Sidecar was starting to really grow. It just seemed like an awesome fit. So I think we were a great place for that conference company and really the event to land.

Um, and there's a cultural alignment between Sidecar and digitalNow; both have always been focused on how do you bring, uh, new ideas and, uh, innovative concepts into the association community. So it was a very natural fit. Uh, so we did that acquisition back at the beginning of the pandemic, really.

So digitalNow has gone on every year; this will be the 24th year. The one exception was 2020, during the pandemic, which was virtual only. We were not involved that year, but we took over in 2021. We did the event in Nashville the first year we ran it.

The pandemic was kind of winding down, but it was still a thing, and so that was a bit on the risky side. Our goal simply was just to have the event and do it safely and hopefully have, you know, some people come together and really resuscitate it. And then in 2022, we did it here in New Orleans, which was super fun.

And then last year, in 2023, we held it in Denver. Um, this year we thought, you know what, uh, digitalNow, the founders of the event originally, they usually held it in Orlando, and they took it to a couple of other spots, but they had a special relationship with the folks at Disney. So they were doing it in Orlando a lot.

And we thought, you know, no one's ever run this thing in DC. So, you know, there's, as you said, there's a couple of associations in DC. So, hey, let's see what happens when we bring it to DC. So, uh, but getting back to the inspiration, it really was, how do we add more emphasis to innovation in this space? And of course we've been deeply embroiled in the world of AI for over a decade now.

And so AI was not news to us, and it was always part of the agenda for digitalNow when we acquired the company, but certainly since the ChatGPT moment in late '22, we've been hyper focused on that at Sidecar. And of course, digitalNow, you know, has been focused really, really heavily on AI, uh, in the years since we've been running it.

Mallory Mejias: I'm sure some of our audience, listening to those keynote speakers, we mentioned someone from Google, someone from the U.S. Department of State, someone from the U.S. Chamber of Commerce. They might be thinking, wow, it's exciting to bring in speakers from outside of the association space, but perhaps these individuals don't have a great grasp, maybe, I'm just spitballing, of how associations work. So I'd like to hear a little bit of your thought process, um, around bringing in outside speakers to digitalNow.

Amith Nagarajan: Yeah, it's a great question, and I totally agree with you. The context of associations, or whatever your industry is, is so important to connect with people. I mean, that's why associations exist, right? There's associations in all these hyper-specialized, narrow categories and subspecialties, and that's because the more specific you are with your approach to engagement, with content, with learning, the more effective you are at conveying the content.

The issue with people who are only in the association market is you get a bit of an echo chamber after a while. And so we like to bring in new ideas, things that are from different industries, because there's already a lot of great content from fellow association travelers, so to speak. So, you know, you think about all the great work the folks at ASAE do, the work that Chicago Forum does, and all the other regional SAEs, and a number of other organizations that produce great association content.

You know, the reason we started Sidecar was we wanted to plug a gap that we felt existed in the ecosystem: thinking about things from radically different sources that could be applied in this sector, but really weren't being talked about enough. So it's really part of the founding of Sidecar, going back, you know, six years plus now, uh, and digitalNow has always been like that as well.

Um, so for me, it is about getting new ideas, um, sparking, uh, new conversations, but also, of course, blending association expertise in. So part of it has to do with how we prepare our keynotes, how we vet them to begin with, that they're willing to make an investment in spending the full conference with us.

So our keynotes hang out with us the whole time. Um, they get to know our attendees. They really add a ton of value by having conversations. And we do a lot of prep work with these keynotes to make sure they have a really good understanding. So, you know, a lot of times keynote speakers are parachuted in, they give a canned talk and then they literally exit, you know, out the door as quickly as they possibly can to catch their next flight.

We don't do that. You know, we bring in some very significant speaking talent, obviously, as you just went over; really excited about the folks coming in and appreciative of them. But they're committed to helping this community, and we want people that are aligned with purpose. I think it's a great combination. Also, as you mentioned earlier, our good friend Mary Byers, uh, is facilitating conversations with some of the keynotes each day, and that helps thread in deep association expertise with people who might have, uh, less experience in the space.

So that's how I've always thought about it. And I think it's just, it's an opportunity each year for people to learn about what's happening outside of the association bubble.

Mallory Mejias: hmm. Yep, I agree with you. Mary does such a fantastic job of pulling out insights and challenge questions and things out of speakers, so I'm really excited to see how that plays out. I did mention Dr. Param Dedia, who is the integrative medicine physician and sleep specialist. Amith, I'm sure some of our listeners and viewers are thinking, huh, how does that fit in with AI and associations?

So can you speak to Dr. Dedia a little bit?

Amith Nagarajan: Well, Param is an amazing guy. He's an engaging speaker. I think people are going to find him fascinating on a number of levels. And my perspective is this, is that in an age of AI, we have to focus on our humanity more than ever before as individuals, as teams, as families. And so health, um, longevity, sleep, restorative benefits that come from all those things are more important than ever, because we're getting busier.

Not less busy, even though AI is helping us; we somehow are getting busier. I'm hopeful that AI will perhaps change that balance over time, but in the meantime, we have a big task ahead of us as leaders in and around the association market. We have to figure out how to embrace this incredibly exciting, but also incredibly disruptive, technology. Uh, and it's not just AI, by the way, that I'm talking about.

But there's plenty of other things, which is, by the way, a little preview into my keynote, which is about exponential everything. It's not just AI. It's a whole bunch of other stuff that I'll be talking about on stage. And the point is this: we have to take care of ourselves. And it's a little bit, you know, non-traditional for digitalNow, and probably for most conferences, to try to balance that out a little bit, but we thought we'd pull in one keynote speaker who could really think more broadly and more holistically about success.

So if you start with success from an individual perspective, from your own platform as an individual, it has to be rooted in good practices around sleep and other things. He's a sleep expert, so I tend to think about sleep. Not that his talks put me to sleep, because he's a very engaging speaker, but he is world renowned in sleep specifically. Um, but he'll be sharing some insights that I think people will find very practical.

Um, he's not dogmatic. He's not one of these people that comes in and tells you to do eight things. And he's not someone who says you have to do perfect this and perfect that. Um, he's a much more practical guy. Um, I've gotten to know him personally over the years. And he's just an amazing individual, deeply cares about helping people.

And he's excited about digitalNow as a platform to help innovative thinkers in our space, um, hopefully learn some techniques that will help them and their families and their colleagues in their life journey, which will, of course, make them happier and healthier, but also make them more successful, or more likely to be successful, in adopting big changes like implementing AI.

So that's how it ties together in my mind. I'm an entrepreneur. I like taking risks. I like doing things that are different. And so bringing in someone who's kind of off program, so to speak, you know, it's like bringing in a folk singer to a rock and roll concert or something. It's not something people are going to expect, uh, but it might be really cool.

So hopefully it'll be the latter.

Mallory Mejias: Absolutely. Uh, Amith, you invited Param to our leadership summit, which we actually talked about on the last episode, but I think it was two years ago at this point, and Param's session, across the board, was one of the best rated in the survey feedback. And I still think back to tips and tricks he shared with us from two years ago.

So I'm thrilled to bring him to digitalNow this year. And I'll let you all know his session is called High-Yield Health: The Foundation for Exponential Times. And I love that. What a cool tie in.

Amith Nagarajan: Yeah, so cool. And you know, I knew, I knew what would happen. We have our leadership summit annually, which is our private event for Blue Cypress family companies; there's a bunch of companies in the family that you guys have probably heard us talk about over time. Um, and we invite the senior leadership from all of the different companies and Blue Cypress HQ, uh, to come together once a year up in the mountains of Utah and learn together and hang out and build relationships.

We actually just finished doing this year's installment last week. It was a ton of fun. Um, and, um, we invited Param and I knew that I was pretty sure that he'd get largely really, really positive reviews, but I also knew that I was, I was almost certain anyway, Mallory, that we would get some criticism that some of the people coming together would say.

What the hell are you guys doing telling us about health? This is a business conference. It's none of your damn business what I do with my, with my health and with my body. And I'm like, okay, well, no one's telling you what to do. You know, we're trying to provide things that hopefully will be helpful. Not every, not every session is helpful to everyone.

And I kind of view it with digitalNow in a similar fashion, that some people attending will be surprised, maybe in some way unfortunately offended. But I kind of look at it and say the vast majority of people, I think, are going to get maybe at least one useful thing from Param, probably many. Um, I know every time I hear him speak, even though I've heard him speak bunches of times, I learn at least one new thing.

Uh, so, you know, that's the nature of risk, right? You can't please everyone all the time. And if you try to do that, you end up becoming, you know, something generic, like you turn into a toaster that's totally undifferentiated. So.

Mallory Mejias: You, you make me remember, yes, it was the most polarizing session, because across the board it was the favorite by a long shot, but we actually got some of the most negative feedback about it as well. So that's really interesting. Anyway, come to digitalNow if you're curious to see how that plays out.

Amith Nagarajan: And that's how Sidecar has always been and that's how digitalNow has always been, which is another reason we brought the two companies together a few years ago. So, very excited about digitalNow this year.

Mallory Mejias: For sure. Can you give us a quick sneak peek of your session? Sorry,

Amith Nagarajan: Well, you know, exponential associations is our general theme, and I'm going to be talking about exponentials, exponentials in the context of AI, but also other fields that are experiencing similar phenomena. So we think about the world of energy. We think about the world of material science, synthetic biology.

My goal is to share what's happening from a macro lens across a number of exponentially growing categories, and then talk about how they're converging and how, how they're feeding off of each other. So for example, AI is begetting more advanced material science, and more advanced material science is going to beget, over time, better AI. Uh, and, and there's multiple, you know, chains of effects that come from that. I'm going to talk about it from that general perspective, and then I'm hoping to zoom in and talk about how each of these advancements could help specific professions or sectors, how the associations that serve those professions or sectors need to stand up to help those, uh, those areas, those sectors adjust.

And then of course, how associations internally would need to adjust as these changes unfold. So it's the broad thematic approach that we're taking with the whole conference around exponentials. Um, and AI is a big, big, big part of that. Uh, but there are other things on the horizon. I also plan to talk just a touch about quantum computing and how that's kind of like another layer of crazy on top of all the crazy.

So hopefully it'll be a lot of fun, but I'm hoping to kick off the event with just a bunch of ideas on what's happening in the world and get people excited about it. Um,

Mallory Mejias: Can't wait for it. Okay. All right, next up we are talking about Google's latest release, Notebook LM. Notebook LM allows users to upload documents, create notebooks, and interact with their content sources using AI, of course. It can summarize information, answer questions, and generate new ideas based on uploaded sources.

Users can upload up to 50 sources, including Google Docs, PDFs, text files, Google Slides, and web URLs. It doesn't look like at this point you can upload video, but I imagine that will be something you can do very soon. Notebook LM is powered by Google's Gemini 1.5 Pro. And as I mentioned, it creates summaries of your uploaded documents, highlights key topics, suggests questions, and it can also generate an audio overview, which we're going to share a little bit of, based on the sources that you upload.

You can ask specific questions about your documents. You can have Google Notebook LM help you create an FAQ section or a brief. I'm going to share my screen in just a minute so you can see what I'm talking about here.

And it operates as a closed system, so it doesn't perform web searches beyond the uploaded content, and, as a note, user data remains private and is not used to train its algorithms. So right now I'm going to share my screen, and for listeners only, I'll try to walk you through exactly what I'm seeing. So right now I'm in Google Notebook LM.

And of course, as you probably all expected, we decided to use the second edition of our Ascend book for this experiment. And I also linked the Sidecar website. So those were the two sources that I added. I could have added a ton more, probably, but I was trying to run through this pretty quickly, and what I got out of it, for such a quick experiment, I will say, was very impressive to me. So you can upload sources. You can do, as I mentioned, Google Docs, Slides, a link to your website, copied text, up to 50 sources, which is quite a bit. I will show you the chat interface, so kind of like a ChatGPT; it's still loading here.

If you click this notebook guide here, you will see that it summarized all of the content that I provided to it, and I can ask NotebookLM to help me create FAQs, a study guide, table of contents, timeline, or briefing doc, and I can also essentially ask it to help me create whatever I want.

Mallory Mejias: So one example that I ran yesterday was asking it to create an outline for five podcast episodes from Ascend Second Edition and it did a really good job of this. I imagine it scrubbed our website and maybe pulled some context from there about the podcast, but other than that I didn't give it previous episode outlines or anything like that.

If I had added a few of those in as additional sources, I imagine what I would have gotten out of it would have been much better. You can see there are suggested questions here. And then also you see an option to generate an audio overview. So this is essentially a nine minute quick podcast that was created about Ascend 2nd edition.

It was a man and a woman speaking. And I'm going to play a couple clips from that audio for you right now.

Amith, I shared with you that generated audio podcast of sorts yesterday, and you gave it a listen and you were pretty impressed. And obviously we both deal with this stuff all the time. Can you talk a little bit about why you were so impressed?

Amith Nagarajan: Sure. You know, I was, I was really impressed. I had a holy crap moment, because I looked at it and said, you know, it's, it's synthesizing kind of a higher order product, which is a podcast, from all this source material, in a number of iterations that you went through with the tool. Um, and, uh, I thought the quality of the content, just the transcript that fed into the podcast, was really good.

It was conversational. It was two different people speaking to each other, as listeners have heard from the little clips that you just played. And, uh, it sounds really natural, too; the quality of the voice synthesis, um, was really, really good. And it had emotion, it had tone. Uh, it was really good. I mean, I could definitely see us using this tool as a part of the Sidecar Sync, not to necessarily replace what we do.

Cause of course that's super awesome. But like, you know, we, you know, we, we definitely want to look at creative ways to augment it. Um, one of my favorite podcasts on the really technical side is this thing called Latent Space, which I think I've told you about, Mallory. It's really about AI engineering. So it's more like at the code level, how people are putting together systems.

And they talk about model training, all this, all this really cool stuff. Well, anyway, those guys have, first of all, I think, I'm pretty sure it's Suno they used for, like, a theme song, which is kind of fun. I don't know if we'll do that. But then they also have an AI co-host. So I think it's two founders of this podcast that have conversations. Sometimes they bring in guests, but they have an AI co-host.

Sometimes they bring in guests, but they have an AI co host That, uh, essentially reads scripts that kind of are, uh, introductions and conclusions, but also really interesting interludes between segments of the show. So I could see Notebook LM operating in all of those areas. And I thought the multimodalities, you know, we've talked a lot on this podcast and in the book about how AI is becoming multimodal, um, and how, you know, we're on this progression through the doubling curve of roughly six month doubling in capability and AI.

And we're seeing that, because the voice capability here is dramatically different. On that note, as a little side note, um, just today I was able to access the new ChatGPT advanced voice mode. I don't know if you've played with that yet.

Mallory Mejias: No, I haven't.

Amith Nagarajan: It was what was demonstrated when GPT-4o was first announced and was only available to a small number of people.

No one I know had direct access to it. But today they rolled it out. You may have to actually delete the ChatGPT app from your phone and then reinstall it to get it right away. But they say they're rolling it out over the next few days to everyone. And I found it to be far more natural than the last version of voice.

It's not quite as good as the demo so far in terms of its latency. It does have a little bit of a lag to it, but it's, it's fine. And it's way more useful than previously, because you can interrupt, you can really have a conversation with the thing. Um, so, but it's a very natural feeling voice.

So I think that the Google Notebook LM is another good example of that. You know, computers for a long, long time have been hard for people to use, because you've had to comply with the computer's way of thinking, which is, you know, screens and mouses and clicks and all this other stuff, where we've tried to approximate a good user interface or user experience.

And now we're moving to natural interfaces where we can have language interaction in our language. We can speak to the computer. The computer can see us and see not only like what we're saying, but what, how we're feeling, if we want to share the video. Um, and soon we'll be able to see the AI avatar, you know, in real time and be able to understand more from what the AI is saying.

So I find all of this exciting. I think Google's work is often underrated in the area of AI, which is still shocking considering their longtime leadership in, in the discipline. Um, but they, they haven't done a great job commercializing it. So, you know, the onus is on them to shift gears. I think this tool is a good example of where Google shines, where they've taken their underlying model, which is the Gemini 1.5 Pro, as you mentioned, which is a good model.

It's nowhere near as good as GPT-4o or Anthropic's Claude 3.5 Sonnet. Um, probably about on par with the Llama 405B, uh, product, or, or model. And, uh, but it's, it's a good model, but they've built some really cool software around it and they've stitched it together in a way that has some novel use. So, you know, creating that audio file took you a handful of clicks.

Um, and you could have done that with these other tools, but it would have taken you more and more steps to produce all the different pieces, and then going to ElevenLabs and connecting it all together might've taken you a few hours. So there's a lot of opportunity for engineering good solutions and for creating a nice user experience that's easy. So, uh, you know, hats off to Google. I think they did a great job.

Mallory Mejias: If you've listened to previous episodes, you've probably heard my stance on this, at least personally, which is, for now, I still see people choosing to consume human-generated content for fun, and/or, like, human plus AI, but mostly human. I will say this is the first time I've listened to totally AI generated audio and thought, wow, this is entertaining.

It was casual. It was friendly. They kind of have, like, this back and forth, um, rapport with one another. And I thought, okay, now I can kind of see this more than I did previously. So I was incredibly impressed with this. I think, in theory, it seems a lot like a custom GPT. Obviously with custom GPTs we don't have the ability to generate that nine minute audio.

Also as a side note, it generated that in maybe five minutes or so, five minutes or less of processing time. But this is impressive and I think it's a great point you made that maybe the underlying model isn't necessarily better than anything we're seeing, but the way that they've packaged this up is exciting.

Amith Nagarajan: That's where I think a lot of upside is. I often tell audiences that I speak to on AI that even if the models that we have today do not get better at all for a decade or longer, it's going to take us about that long to figure out all the creative ways we can use the technology as it is today.

There's so many things we can do by, you know, combining and recombining and layering and all these other cool things. We've talked about agent systems, you know, which is basically actually what this is. It's a multi agentic type of solution. I don't know what's under the hood, but it basically is that conceptually.

Um, it's amazing what you can do just by, um, building on top of the foundational capability of these models that we've, you know, had access to really only for a handful of years now. So I'm pumped about it. I think this is a great example. Associations can create their own notebooks. They can put their content in there.

They can share them with different people. This is a very early product, so I wouldn't necessarily build on it, but it's a prototyping environment where you're thinking, hey, what would it look like if I had some of my content, maybe, uh, tailored for particular use cases, in this kind of environment. Uh, what I do like about Google's commitment to not ever training on your data is that the other companies aren't being that direct about it.

You can protect your data by way of opting out of training, uh, with some other AI platforms, but with Google, they're saying upfront, we will not use anything you share for model training, which I think is good. Yeah.

Mallory Mejias: was thinking there's some potential use cases here that some of our listeners or viewers could kind of go off and try today. I think a big one would be onboarding. So kind of uploading any important source documents as they relate to whatever department that you work in membership, marketing, finance, even, and then create kind of this, um, This notebook essentially where this new hire can go in and ask questions and you know that it's creating answers based on those source materials only Um taking all your member requests and emails and even transcripts from phone calls dropping those in here That could you could do that in a few hours and then you have kind of this Really neat resource that you can keep going back to and asking questions to but something a little bit I guess it's maybe not so far out of the association sphere that I think is so And I think what's exciting is the idea that students can drop in maybe the chapter from the textbook that they're reading and maybe their notes from the lesson, the teacher's lecture, and then create not only study guides from that, but even an audio, like a nine minute audio of whatever you're learning in school.

Amith, I'm curious, do you think your kids would be interested in something like that if you showed it to them?

Amith Nagarajan: I think they might be. You know, my kids are warm and cold, not hot and cold, but warm and cold about AI, depending on the day. And, and I think they're starting to see more about the usefulness of AI. Um, I don't know if it's broadly generational or age based. My kids are, are kind of in the mid teens right now, but they, um, they generally look at AI as something that's, you know, kind of putting people out of jobs; that's their mindset, particularly my daughter. Um, my son, not quite as much, but he just is looking at it going, I don't know if I need this thing or not.

Um, so it's kind of interesting, because I, they're just not super excited about it. I think this tool might change their mind a little bit, because, you know, it has a lot of utility for students, as you mentioned, and probably in a way that, uh, educators might look at as a positive development, because it's going to help them create new ways of learning as opposed to replacing learning. You know, it's interesting. It happens to be that this morning I was over at my son's school.

It happens to be that this morning. I was over at my son's school I was asked by the administration over there to give a talk to their faculty On ai and I was happy to do that Uh, and I told him I said look i'm a fish out of water here because I don't know anything about education I mean, I work with associations all the time.

They provide professional education, but that's very different than K through 12. But I did my best, and I shared, you know, some of the general broad themes that we talk about a lot on this podcast and with the association community, and, uh, some super interesting conversations came out of that. You know, there were, of course, the, uh, predictable concerns with respect to students cheating.

Uh, but a lot of what we talked about actually was the use cases for teachers, um, where teachers are going to see an incredible opportunity to improve their efficiency and connect with their students more. You know, you think about, like, where does AI leave the rest of us in terms of what we do with our time? And the opportunity for improved human connection, I think, is, is unbelievable.

So with teachers, you know, the real magic of it is when they're able to spend more time, particularly one on one or in small group settings, with the students. Um, that's where they're able to deliver life changing value, in some cases. And I think they also get a lot of purpose from it. Um, but there's less and less of that, because there's so much administrative work, there's so much lesson planning, there's so much grading. Uh, and think about the medical profession. You know, you have doctors that spend a smaller and smaller fraction of their time with patients, and with each patient it's a tiny amount of time that is super rushed when you actually visit your doctor. And what if we could change it where the doctors have way more time, to spend 30 minutes, an hour and a half, with you?

And they're not rushed. And similarly, if the teacher had all the time to spend, you know, with each individual student to get to know them and to connect with them and have all these super powerful tools behind them to be able to do personalized learning and create really engaging experiences for each child in that process, or each person in that process for adult ed, um, that changes their, their outcomes dramatically.

So that's exciting. So we ended up talking about that a lot. I think that, um, we didn't talk about this particular tool, but, um, every industry, every sector is going to go through so much change and this particular tool, I'd encourage people to go check it out. I actually haven't played with this one myself yet.

I'm really, I was really excited when you, when you, uh, showed me that example, because I haven't had a chance to look at this one, but it's, it's definitely worth spending an hour or two on. And, you know, my parting comment for pretty much all the talks I give is a call to action, where I say, listen, the one thing that you can do, all of you, no matter how busy you are, no matter how advanced or early you are in your AI journey, is you can allocate a small sliver of time every week. Put it on your calendar and make it non-negotiable as a learning block. So it can be 15 minutes. It can be an hour. If you can do it more frequently, a couple of times a week, or even daily, that's awesome.

But even if you just said, look, let's just start off with 15 minutes once a week, and what we're going to do with that 15 minutes, it's a non-negotiable block of time on the calendar, we're going to learn something about AI, right? And so this is a perfect type of thing to go play with for 15 minutes. You'll learn a lot from it. Uh, and of course, if you can go with higher frequency, a little bit longer durations, that's great. But it's kind of like, you know, telling someone who's never gone to the gym that they should start working out six days a week.

Um, it's not going to happen. But if you say, hey, once a week, take a 10 minute walk, then go up from there. It's the same kind of approach I'm trying to take. So in any event, um, I think this is a great example: build your own personal backlog of AI learning, AI experiments that you want to do, keep a list somewhere.

And this, this should go right onto that list. Okay.

Mallory Mejias: And if you have a ton of time on your hands, you can start a podcast, because for me, that's what keeps me super accountable, is knowing, hey, I want to talk about this tool. I always, always do my best, if it's something I know how to use and think I can use efficiently, to test it out so I can at least give you all a firsthand insight.

All right, for topic three, we're talking about XRX, which is an open source development framework created through a partnership between 8090 and Groq, and that is Groq with a Q. It enables developers to build multimodal AI solutions with seamless integration of voice, text, and image outputs. So honestly, the term XRX is silly, something that you may not remember, but if I break it down into what it means, it might help you.

So the first X refers to any-modality input. The R, which is a capital R, refers to reasoning. And then the last X is any-modality output. So X, R, X: any-modality input, add in some reasoning, any-modality output. What are the key features? It allows for the creation of AI applications that can handle input and output modalities like voice, text, and images.

At its core, it incorporates a robust reasoning system, which enables complex AI-powered interactions. Powered by Groq's LPU AI inference technology, XRX delivers instant inference and superior performance, making it suitable for real-time applications. And as we mentioned, the framework is open source, which allows developers to freely use and contribute to its development.
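To make the X-R-X idea a bit more concrete for developers, here is a minimal conceptual sketch in Python. This is not the XRX framework's actual API; every function below is a hypothetical stub standing in for real speech-to-text, reasoning, and text-to-speech components.

# Conceptual sketch of the X-R-X flow: any-modality input -> reasoning -> any-modality output.
# NOTE: these are illustrative stubs, not the real XRX API.
def speech_to_text(audio_bytes: bytes) -> str:
    # Input "X": convert the incoming modality (here, audio) into text.
    return "I'd like to order a large pepperoni pizza."  # stand-in for a real transcription
def reasoning_step(user_text: str) -> str:
    # "R": a reasoning model decides what to say or do next (e.g., an LLM call).
    return "Got it, one large pepperoni pizza. Anything to drink with that?"
def text_to_speech(reply_text: str) -> bytes:
    # Output "X": convert the reply back into the desired modality (here, audio).
    return reply_text.encode("utf-8")  # stand-in for real synthesized audio
def handle_turn(audio_bytes: bytes) -> bytes:
    return text_to_speech(reasoning_step(speech_to_text(audio_bytes)))
print(handle_turn(b"fake-audio-bytes"))

In a real XRX-style application, each stub would be backed by a fast model, for example Whisper for transcription and a Llama model served on Groq for the reasoning step, which is where the low-latency inference discussed below comes in.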

So all of that is a little on the technical side. I wanted to talk a bit about what this looks like in practice, and instead of reinventing the wheel, I went to LinkedIn, to a post that Amith actually shared with me. This post is by someone named Benjamin Klieger, who is an AI applications engineer at Groq.

And he says, quote: Need an AI agent to ask and collect patient information through voice calls? Done. Can the agent be interrupted and wait for the person to respond? Yes. Can it interpret information out of order? Yes. Can it correct information if the patient interrupts it to revise a previous answer?

Yes. What about an AI using casual language to take pizza orders? Done. Can it use casual language like a pizza shop worker? Can it navigate customers through the menu at the same time? Yes to both. Can it confirm the total purchase price with the customer and then submit a real pizza order through Shopify?

Yes. So when you start to hear some of those use cases and examples, and there also is a short video that we'll link to in the show notes as well, on LinkedIn, that shows just how real-time the inference is, you'll see how impressive this could be. So Amith, we talked about, we've talked about Groq several times on the pod, but I want to say the episode where we talked about the LPUs was a while back.

So can you talk a little bit about the difference between Groq's LPUs versus traditional GPUs?

Amith Nagarajan: Um, well, you know, we, so we had, I think, a whole episode on this, maybe, uh, 20 episodes ago or something. So we'll definitely link to it in the show notes, because it's still relevant, and the information we shared is in much more detail in that pod. But the basic idea is that GPUs have been used for both training AI models and also running them. In the AI world, we typically call running the model, like when you interact with ChatGPT, inference. And so training versus inference are very different computationally, and their requirements in terms of memory, um, access to data, and actual, you know, computations, the fundamental ideas behind each of those processes, are very different.

So the idea basically is that rather than using GPUs for both training and inference, what if we had specialty hardware that was incredibly good at inference? So Groq's innovation is the language processing unit, or LPU, as they call it, and that is essentially, uh, purpose built hardware that is designed just for inference.

It is not used for training at all. So GPUs, primarily from NVIDIA at the moment, are used for training most AI models. You can also use NVIDIA GPUs for inference, but LPUs are much more performant at inference. Um, so, you know, it's kind of like, I don't know if you ever, actually, those of you in DC have probably seen this: in some cities that have riverfronts, you'll see these like bus, like tourist bus things, that, like, tool around the city streets and show people what's going on, and then they turn into boats.

So like in the Potomac in DC, it's pretty cool. Like, you'll sometimes see these things just, like, going into the river, and, you know, they're not particularly great as buses. They're not particularly great as boats. But they can do both, kind of, sort of, right? Um, so similarly, a GPU is a general purpose thing.

That's not what G stands for, it's graphics originally, but it's basically for AI workloads, uh, doing, you know, really high scale parallel computations, um, which is great for all sorts of different things. Um, but it's kind of like the general purpose workhorse, in a sense, in the AI realm, whereas LPUs are like a speedboat on the water.

That thing's not going to go anywhere on land, but all it can do is go really, really fast in that one category, right? So it does one thing, and that one thing that it does, it does really, really well. And if you were going to design a speedboat, versus if you're going to design this kind of amphibious bus thing, um, you have optimization opportunities.

You can make the hull a lot more hydrodynamic. You can put a lot more powerful engines in it. You can approach a lot of different design decisions differently because it's single purpose rather than multipurpose. So that's the basic idea is that LPUs are specifically built for just inference. And, uh, they've done a lot of interesting things in the hardware design in terms of memory architecture that eliminates a lot of the choke points that GPUs suffer from on the inference side.

Um, but that's more of a technical discussion that, first of all, is not even my area of expertise. I only know it at a surface level. And secondly, it's probably not super interesting to the audience, so we won't go into a lot of detail there. It's best to just categorize it as specialty hardware used for running AI models.

And it does it at ridiculously fast speeds. Um, there will be other technologies like Groq's stuff, but right now Groq is the clear leader in terms of inference. You know, they, they are inferencing the Llama 3.1 70-billion-parameter model, which is the medium-sized model, at something over a thousand tokens per second.

That is many, many times faster than human perception can possibly consume, even for those of us that read and hear and listen at incredibly fast rates. You know, probably two, three, 400 tokens per second is what we can consume at most, and this thing's already, you know, many times faster. So coming back to the commentary about, you know, like, real-time applications: anything that you're thinking about with AI that you want to have very low latency on, um, Groq is an amazing platform to build on.

So we're super fans of what they've built, um, really because of the benefits. You know, I think the company's cool, the people who work there seem awesome, but the technology just is in its own league at the moment.

Mallory Mejias: And to set the stage, are we seeing Groq chips power other frontier models? You mentioned Llama, but what, what does the landscape look like across the board?

Amith Nagarajan: So at the moment, um, Groq availability for people, other than those who are building massive data centers, is: if you want to inference with Groq LPUs, you do it through GroqCloud, which you can sign up for as a developer, get an API key, and start building stuff. And it's super cool.

They've got great customer support teams; highly recommend that. Uh, and then you have your choice of models. You know, they, they're not a model company, so they have a couple of Mistral models, they have models from the Llama family, uh, and several other things, like the Whisper model from OpenAI, which is the, uh, open source voice-to-text model.

So, uh, that's kind of the infrastructure that's powering XRX. And I think that's interesting, because that enables this kind of real time opportunity that other AI platforms really can't match at the moment. Um, and then the reasoning, the R in the middle that you were describing earlier, uh, Mallory, is really important, because, you know, we talked a lot on this podcast a few weeks ago, right before OpenAI released its o1 model, which was previously Strawberry.

And how that was kind of the first, like, truly reasoning enabled model. Well, you know, the R is going to be whichever model you want to plug in, because it's an open source, model-agnostic framework. So building multi-agentic, kind of voice-to-voice, uh, applications, I should say, is now a lot easier.

It's still not like, oh, click a button and it's built for you. But when you overlay what we're talking about here with XRX as an open source framework, uh, and then you think about plugging it into infrastructure like MemberJunction, which we've talked about, or really anything else, um, it makes it so much easier to start thinking about how you're going to construct applications to meet both current needs, but also future ideas.

Because, you know, again, with the pace of advancement here, the R in XRX becomes more and more powerful over time. Um, and you can plug in even more complex decision making.
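As a rough sketch of the GroqCloud flow Amith describes (sign up as a developer, get an API key, start building), here is a minimal example using Groq's Python SDK and its OpenAI-style chat-completions interface. The model name is illustrative, so check GroqCloud's current model list, and set the GROQ_API_KEY environment variable before running.

# Minimal GroqCloud inference sketch. Assumes: pip install groq, and GROQ_API_KEY set in the environment.
import os
from groq import Groq
client = Groq(api_key=os.environ["GROQ_API_KEY"])
# "llama-3.1-70b-versatile" is an illustrative model id; pick one from GroqCloud's model list.
response = client.chat.completions.create(
    model="llama-3.1-70b-versatile",
    messages=[
        {"role": "system", "content": "You are a helpful assistant for an association's members."},
        {"role": "user", "content": "In two sentences, explain what an LPU is."},
    ],
)
print(response.choices[0].message.content)

With streaming enabled, this same call pattern is the building block for the low-latency, voice-to-voice experiences discussed above.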

Mallory Mejias: So Skip is an AI data analyst. Skip is an AI data analyst for associations. Uh, this is my first time actually thinking through this, but are you all considering, and can you share, um, making Skip multimodal, in the sense that you could just talk to Skip, Skip responds, low latency, easy conversation?

Amith Nagarajan: Totally. Um, so Skip and Betty, which are both multi-agentic solutions for associations. Skip, as you mentioned, is a data analyst, and Betty is a knowledge assistant. These are both AI chatbots built by Tassio Labs, and I actually mentioned Thomas earlier, one of our keynotes at digitalNow this year; he's one of the founders and he's been involved in building those solutions. Um, they are text to text right now.

So they're multimodal in that they can interpret images and video, but they're interacting with the user text to text. It's on the near term roadmap to enable voice, um, and to also have potential other modalities. And what I mean by that is, uh, imagine a world where, um, in the context of, uh, analytics, you know, even a member might be able to ask questions that result in Skip fetching data, doing whatever transformations are necessary.

And responding. Obviously there's a bunch of different security layers that you have, but this is an enabling technology that would make it easier for a product like Skip to support that kind of capability. This is not necessarily required, but the idea is that, you know, XRX is a toolset that makes it possible for associations, and anyone else for that matter, to build solutions like that.

That's why I find it exciting.

Mallory Mejias: And you said the key earlier is, is the low latency. That's the piece that's impressive here. Can you think of any other use cases off the top of your head where low latency might be important? I guess member calls to a call center is what I'm thinking. Anything else? Yes.

Amith Nagarajan: Sure. Anything where people are interacting in a synchronous form of communication. So phone calls, video conferencing, uh, if you want to have an AI avatar join us on this Zoom call and have it interact with us in a meaningful way, and, you know, see that in, like, a full 3D type of avatar that is photorealistic and talking to us.

That would be something that you need real time inference for. Um, there's a lot of applications where real time makes a lot of sense. There's certainly applications where it's less important. You know, you might have actually skip as a great example. A lot of times people go to skip and they say, Hey, I want you to predict which of my members are not going to renew and skip goes off and does all sorts of data crunching and runs machine learning models for you and comes back with a spreadsheet as an answer to that type of question.

And you might not care if it takes five or 10 minutes for something like that, or if it took, you know, 30 minutes; it really doesn't matter that much. But, um, that's an example where an asynchronous interaction is totally fine. Um, but for synchronous communication, which can even include things like text messaging or interacting through apps like WhatsApp, you know, those are all scenarios where I think this becomes really, really important. The other thing to remember is, you know, models are getting more compact, which makes them faster to run. So model capabilities that would have required the biggest frontier models last year are now packed into these small to medium sized models, which inference much faster on any hardware.

Um, and definitely on Groq. So I think that, um, you know, what would have been possible a year ago, even if you had the same hardware you have today, you would have had a lesser model running on that hardware, and now you have something dramatically better. So that's part of what makes this exciting.

Cause if, if the real time interaction is with a model that's kind of dumb, then you can have the best multimodal real time interaction, but, like, the thing in the middle you're talking to is, is not very useful. And that's no longer the case. And it's the convergence of all of these capabilities that means, for the first time, you know, ordinary organizations that are run by people who aren't, you know, Netflix, Amazon, OpenAI can build stuff like this.

And that's exciting because that's, you know, the dead center for the association market.

Mallory Mejias: Well, when we can communicate voice to voice with Betty and Skip, we'll have to bring one or both of them on the pod, and we can interview them ourselves.

Amith Nagarajan: That'd be fantastic. Okay.

Mallory Mejias: Well everyone, thank you all for tuning in, and a reminder about the contest we're running. If you want a free registration to digitalNow for you and your colleague, post on LinkedIn about this episode, something that you learned.

Maybe it's that you're going to try Notebook LM, maybe it's about XRX. Tag Sidecar, hashtag #digitalNow, and every post is an entry. We'll see you all next week.

Post by Emilia DiFabrizio
September 26, 2024