Sidecar Blog

Celebrating 50 Episodes, Exploring AI Learning Hub, and Advances in Spatial Intelligence [Sidecar Sync Episode 50]

Written by Emilia DiFabrizio | Oct 3, 2024 10:01:30 PM

Timestamps:

00:00 - Introduction
02:10 - Celebrating 50 Episodes
06:20 - Introducing the AI Learning Hub 2.0
12:36 - The Value of Continuous AI Education
19:18 - Fei-Fei Li and the Future of Spatial Intelligence
21:35 - How Spatial Intelligence is Changing AI
28:58 - The Future of Training Models
37:47 - Contest Announcement and Final Thoughts

 

Summary:

In this milestone 50th episode of Sidecar Sync, hosts Amith Nagarajan and Mallory Mejias celebrate the show's journey while diving into exciting new developments. They explore the enhanced AI Learning Hub 2.0, which now includes a professional certification, and discuss the importance of continuous AI education for associations. Plus, they introduce spatial intelligence, a game-changing area in AI, and share insights from Fei-Fei Li’s groundbreaking work. Tune in for a fascinating look at the future of AI in the association world, and join the celebration of 50 episodes!


πŸŽ‰ digitalNow 2024 Contest:

Post about this episode on LinkedIn! Share something you learned or a cool tool you're going to try, tag Sidecar (https://www.linkedin.com/company/sidecar-global), and tag #digitalNow. Each post is an entry, and two winners will receive free passes to digitalNow 2024.

Note: Every post counts as one entry. The contest ends on October 4th.

 

Let us know what you think about the podcast! Drop your questions or comments in the Sidecar community.

This episode is brought to you by digitalNow 2024, the most forward-thinking conference for top association leaders, bringing Silicon Valley and executive-level content to the association space.

Follow Sidecar on LinkedIn

πŸ›  AI Tools and Resources Mentioned in This Episode: 
NotebookLM ➑ https://notebooklm.google.com 
AI Learning Hub ➑ https://learn.sidecarglobal.com 
The a16z Podcast ➑ https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711

βš™οΈ Other Resources from Sidecar: 

 

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress πŸ”— https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey. Follow Amith on LinkedIn.

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Follow Mallory on LinkedIn.

 

Read the Transcript

Amith Nagarajan: Welcome to the Sidecar Sync. We're excited to have you join us. And we have a whole bunch of interesting content to cover at the intersection of associations and artificial intelligence. My name is Amith Nagarajan.

Mallory Mejias: And my name is Mallory Mejias.

Amith Nagarajan: And we are your hosts. And before we get into all of the excitement of today's episode about AI and associations, let's take a moment to hear from our sponsor.

Mallory Mejias: Amith, we've got big news today.

Amith Nagarajan: Yeah. What's that?

Mallory Mejias: We are celebrating episode 50 of the Sidecar Sync Podcast. Can you believe that?

Amith Nagarajan: It is pretty amazing, actually.

Mallory Mejias: mean, 50 episodes and we have so much to celebrate this week. We're going to talk about our brand new and improved AI learning hub, but also for our listeners, Monday, September 30th was international podcast day.

And so we got to celebrate the 50th episode of the Sidecar Sync podcast the same week as International Podcast Day, which is really exciting to me.

Amith Nagarajan: That is super cool. I have to say, you know, just getting this episode started, in the back of my head, I've been playing around with NotebookLM a bunch. We covered that in our last episode, or maybe it was two episodes ago? I can't remember. We covered it recently, and at the time we recorded that episode, I had not had hands-on time with it yet and Mallory had, and she played an example clip of the podcast that it generates.

And since then I've generated a few myself. It's pretty impressive. So definitely, if you haven't checked out Google's NotebookLM, go check it out. But I also have those two voices in my head every time I podcast, because I've listened now to like 10 or 15 examples of that. So when we're starting to talk, I'm like, do we kind of sound like those AIs from Google?

Mallory Mejias: No, the AIs from Google sound like us, Amith.

Amith Nagarajan: Yeah, that's probably it.

Mallory Mejias: Well, we're super excited to be celebrating episode 50. Off the top of your head, this might be too hard of a question. I'll have to look as well. Amith, do you have a favorite episode that you want to talk about? Maybe a favorite topic or a favorite guest that we had?

Amith Nagarajan: You know, I would say when we recorded the Foundation episodes. There were two of them, Foundation Episode One and Two. They still are some of our most popular episodes even now, because I think they give people a really good intro to AI, and it's a format I think is pretty easily consumed.

So probably those two, although recently we did another evergreen-style episode on unstructured data that seemed to get a lot of really positive feedback. I enjoyed that topic a lot, and I think it's an area with so much opportunity. So probably either of those two.

Mallory Mejias: I would say on my end, I like the Evergreen episodes as well. I like the Foundation of AI episodes. For me, the Vector episode was also really fun because it was intimidating, to be totally honest, and kind of scary, and I did my research, and after that episode, I genuinely felt like, okay, I've got a good grasp on this, and hopefully our listeners felt the same way.

And I always think back to Neil Hoyne's episode. I think that was a really great interview that we did.

Amith Nagarajan: I agree with that.

Mallory Mejias: Well, everyone, I want to remind you all about a contest that we are currently running in honor of our 50th episode that entails complimentary attendance for you and a colleague to digitalNow 2024, which is this month, October 27th through the 30th in Washington, DC.

All you have to do is post on LinkedIn about the Sidecar Sync podcast, tag Sidecar, and hashtag digitalNow. Each post is one entry, and the contest ends tomorrow, Friday, October 4th. So get your posts out there.

That's going to be exciting.

Amith Nagarajan: Well, I can't believe digitalNow is right around the corner. It's kind of crazy. I mean, every year we say the same thing, but it comes back around really quick. It has been slightly less than a year since digitalNow. This year it will be October 27th through 30th; last year I think it was November 8th or 10th or something like that. But still, that's not the reason; time is just moving quickly.

Mallory Mejias: Yeah. And I associate the beginning of this podcast with digitalNow from last year, because they were very close. I don't remember if it was the same week, the week before, or the week after, but really close to one another. And so it's just insane to think that we've done this for pretty much a full year, but I'm excited to be here on episode 50.

Well, today's episode, we're covering two topics. One of those is the AI Learning Hub, or our new and improved AI Learning Hub 2.0, as we like to call it. And then we'll also be talking about spatial intelligence, and that'll be a really interesting conversation as well. So, if you have listened to the podcast before or watched us on YouTube, you have heard us mention the AI Learning Hub, I think, every single episode.

It is our library of asynchronous AI lessons and courses with association specific use cases and applications. Within that Learning Hub, we have office hours with AI experts, and then you also get access to a community of AI enthusiasts, as I like to call it. And we are thrilled to announce a brand new AI Learning Hub that we just released last week, essentially, at the end of September.

The AI Learning Hub was always meant to be living and breathing, and we planned to update content frequently from the start, but we really zoomed out and took a look at the whole thing and realized this stuff is changing so quickly that we decided to reinvent kind of the whole Learning Hub. We wanted to have a better flow, we wanted to have better recordings, we wanted to have better checkpoints, more engagement with our learners, and better courses overall.

So you might be wondering what's new exactly about this AI Learning Hub 2.0. Pretty much everything. Every course in there is new, minus the AI Prompting course, which we redid just several months ago, so that one's new as well.

We have a Foundations of AI course, AI in Marketing and Member Communications, AI in Events and Education, Data and AI, Strategy in the Age of AI, and AI Agents, and we're working on an eighth and ninth course currently, which will be Responsible AI and a chatbots course. We also moved to a new LMS, or Learning Management System.

Formerly we hosted the AI Learning Hub on the Circle community, which we love, but we were limited in terms of learning features, especially as it pertained to things like assessments. Now we aren't. We've added in knowledge checks and activities throughout, and we're also thrilled to announce the launch of our new Association AI Professional Certification.

For those individuals who take all the courses in the Learning Hub, complete them, and then take and pass the seven assessments that go along with those courses, they then share a reflection on ways that they're incorporating AI into their regular work. Once they do all of that and pass everything, of course, they earn the AAP, or Association AI Professional, certification that recognizes that individual for outstanding theoretical and practical AI knowledge as it pertains to associations.

I hope you all know, having listened to this podcast before, that we at Sidecar are committed to bringing our audience the best, most up-to-date content out there and making it highly relevant for you, and that is why we're thrilled to be announcing the launch of this new AI Learning Hub. If any of this sounds interesting to you, we do have our AI Prompting course at the really low price of $24.

It's a great entry point if you kind of want to see what the AI Learning Hub is about, and it has some fantastic tips and tricks in terms of prompting ChatGPT, for example, or Claude or Google Gemini. We've also got a middle tier that we're offering, which is our standard one. You get access to all seven courses in the AI Learning Hub.

And then we have our pro tier, which gives you access to all the courses, access to the office hours, and access to that AAP certification that I just mentioned, and that's $399 a year. I'll also mention, if you want to get your whole team involved in AI learning, which is really where we recommend starting if you want AI to have that fully transformative effect on your association, you can get your whole team in there as well for one flat rate based on your organization's revenue.

So Amith, what are you most excited about with this new AI Learning Hub?

Amith Nagarajan: It's kind of hard to pick one thing, you know, there's a lot going on in there. And, you know, AI is changing quickly. So our commitment, since the beginning of all of our learning opportunities with Sidecar around AI, has been to continuously update, iterate, and improve. And we do that all the time.

And we have tons of content that's available for everyone for free. We have this podcast, we have blog posts, we have a monthly AI webinar. We do a whole bunch to try to provide as wide a swath as possible of free content. And then our premium offerings through the Learning Hub that you mentioned have really taken a step up in terms of both quality and depth. So I'm excited about that. Probably, if I had to pick the one thing that I'm most excited about, it's the certification. And the reason is that I think certifications are very clearly understood as being valuable for career advancement, for designating your expertise.

And there's a lot of people out there already, which is exciting, who have taken the time to learn a lot about AI. A lot of them are in this community. And we want a way of acknowledging and recognizing those strengths in the community. For employers who are seeking to hire AI-capable association professionals, this is going to be a great way to be able to showcase that on a resume or on LinkedIn. And so we're excited about that. I think that's, to me, the biggest thing about it: having a professional designation that shows that you're an expert at that intersection of associations and artificial intelligence.

So that's my number one thing, probably. How about you?

Mallory Mejias: That's a good question, Amith. I don't want to say the certification as well. I think in a way that's the most exciting because it's the newest thing that we're doing. The certification is exciting for that reason.

But being that you already said that one, I think the thing that's most exciting for me is really the overall flow. For the first round of the AI Learning Hub, it was a bit more disjointed, I'll say. We kind of focused in on topics where we had that expertise. We shared use cases and examples, but I don't think there was a great flow, a great story from beginning to end.

And since we decided to recreate everything all at once, which was a bit crazy, but I'm happy that we did, we had the ability to take a step back and look at the picture from afar and say, where are we starting? Where are we ending? What do we want people to learn from this AI Learning Hub? And so for me, that was the most exciting feeling, that it's a cohesive product.

Amith Nagarajan: Yeah, it makes a lot of sense, you know, just reflecting on the journey of everything we've been doing around AI, which goes back many, many years. I mean, Sidecar has been talking about AI for years and years now, and we really have ramped up our AI conversation in the last two and a half years.

But, you know, the thing that we've been consistently beating the drum on is learn, learn, learn. And I think that's probably true for any topic: if you have something disruptive or something emergent that you've got to go figure out, you need to learn. You can't just make a bunch of uneducated choices and decisions.

It's all guesswork at that point. And still today, I see a lot of people out there saying, hey, what's our AI strategy going to be? Let's hire consultants or let's try to figure it out ourselves. And they don't really have a deeply rooted foundation in the basics of what AI can do, where AI is heading, how to think about it. And so fundamentally, I think it's a really worthwhile investment to slow down for a second and do an education program of some sort.

Obviously, we have ours and it's tailored for the association community, but there's tons of great educational resources on AI. Ultimately, what we care about is people going out there and doing something to advance their AI learning. You know, what I tell people when I speak on AI is I say, listen, I want you to promise me one thing walking out of this room: I want you to write down on your notepad, or in your brain if you don't have a notepad with you, that you're going to devote 15 minutes a day to learning AI. I don't need an hour from you. I don't need three hours from you. I just want 15 minutes a day, five days a week, to learn AI, because if you do that consistently, you're going to become way more knowledgeable than probably everyone else you know, or the vast majority of people.

And it's going to put you in a position where you're super effective using the tools. You're going to be able to provide valuable counsel to others in your association and to people outside of the association. It's going to help you in your personal life. It's going to help you also be more competitive in a changing labor market. So I think it's a great and very easy investment to make if you think about it. We can all afford to spend 15 minutes doing something that's important every day.

And if you think you can't do it once a day, do it once a week, you know, 15 minutes every Monday morning, 15 minutes every Friday afternoon. Pick a time that works for you, block it off, and do it. Obviously, our Learning Hub has a bunch of small lessons that are broken up into chunks typically under 15 minutes.

So it fits in really nicely with that idea. But again, you know, the goal here isn't to pitch the Learning Hub by itself. It's the idea of learning on a continuous-loop basis. And don't stop. If you think, okay, I have a pretty good understanding of AI, I know how to do the prompting, I understand a little bit about vision models.

I understand about agents. That's awesome. Don't stop there. Keep going. There's no such thing as graduating from AI college. You know, all of us are basically just slightly less or more incompetent with AI than the next person. That's honestly where it is. Even for people who say, hey, we're AI experts.

There's really no such thing because we have no idea what these models can actually really do. And that's true even for the people who create the models.

Mallory Mejias: A hundred percent. And we're going to kind of touch on that with the second topic, which is spatial intelligence, which really blew my mind. I mean, I was just thinking of something while you were talking. I feel like the fact that we get to do this podcast every week is an incredible learning experience for me, and that we're always kind of sending links back and forth to each other and newsletters and articles.

I think it would be helpful, perhaps, if you would one day share maybe some of the top AI influencers that you follow, because you'll send me things sometimes and I'll be like, I don't even know who that is. And then I'll click follow. But I do feel like you have a really good repository of resources that you keep up with.

Amith Nagarajan: Thanks, yeah. I follow a lot of different people on LinkedIn. I subscribe to a whole bunch of different newsletters. Some of the stuff I subscribe to is kind of esoteric or technical, and some of it comes from people who are not big names. You know, I do follow all the big names, like Fei-Fei Li, who you're going to talk about shortly with spatial intelligence, Andrej Karpathy, all these other people that are very big names in AI.

And they often have some very interesting things to say. However, there's also a ton of other people out there who aren't that well known, who are researchers or engineers in AI. So I'm pretty liberal with how I sprinkle the follow button across platforms, particularly LinkedIn, which is the main platform where I hang out.

And a lot of things end up in my feed that I think are interesting. And the newsletter side of it is really valuable too. I don't even know how many newsletters I subscribe to, but I look at probably, I don't know, easily 20 or 30 newsletters a day. I don't read them, you know, uh, from front to back, but I scan them and I look for interesting things and that's a lot of what I spend my time doing.

Plus I listen to, I don't know, probably 10 or 15 different podcasts on a regular basis. So I'm just consuming information constantly.

Mallory Mejias: Wow. That's intense. We should make a list. We should post it on...

Amith Nagarajan: Yeah, I'd be happy to. That'd be fun.

Mallory Mejias: Last question on the AI Learning Hub. What was your favorite course that you recorded? And just so you all know, Amith led the way on teaching a lot of the content in the Learning Hub. I have a course in there as well. And then we have some other teachers too.

But I'm curious, what was your favorite?

Amith Nagarajan: Well, I mean, I think probably for me personally, my favorite one is Strategy in the Age of AI. And the reason I like that course so much is that it gives you an intellectual framework for how to think about strategy generally, based off of Hamilton Helmer's Seven Powers framework. And we've adapted it for the association market to some degree, but more importantly applied it to the world of AI.

You know, whenever there is a period of rapid change, there's opportunities for new businesses. There's opportunities to displace existing businesses, but you have to rethink the way your business is going to work, whatever that business is. And so having a really rigorous intellectual framework for what could create a strategic advantage ultimately, and what's a durable one, is something I think is really interesting. So that one was my personal favorite, probably because that's the stuff I think about constantly in starting businesses within the Blue Cypress family or working with you on product strategies for Sidecar or whatever.

I think probably the other favorite of mine, that's a little bit more practical, is the data one, the course all about data, because we talk about some of the things you mentioned earlier, like vectors. We talk about unstructured data that I mentioned earlier. We have a lot of depth of content in there.

And I think, you know, a lot of association folks I know, whether they're in membership or marketing or technology roles, struggle mightily with data, and AI can help a ton with this. And so the data course, I think, could be very, very practical for a lot of people.

Mallory Mejias: Awesome. Moving to topic two, which is spatial intelligence. So Fei-Fei Li, who Amith just mentioned, is a renowned AI pioneer and Stanford professor, and is also known as the godmother of AI, which I didn't know, but several news sources call her that. She has been speaking about spatial intelligence for a while, essentially saying that it's the next frontier in artificial intelligence.

Spatial intelligence aims to give AI systems a deeper understanding of the physical world and enable them to interact more effectively with their environment. Spatial intelligence encompasses several important capabilities that I want to cover for you all briefly. One of those is visual processing, the ability to process and interpret visual data from the surrounding environment.

But really the key here is this 3D understanding. So understanding the three dimensional nature of objects and spaces, including their geometry and spatial relationships. Spatial intelligence also creates the capability of making predictions about how objects will interact or move in the physical space.

And it's all about linking perception with action, allowing AI not just to see and understand, but to also interact with the world. Fei-Fei Li has founded a startup called World Labs, which aims to build large world models, or LWMs, that can generate interactive 3D worlds. World Labs has raised significant funding, about $230 million thus far, to pursue this vision.

What's interesting here is that Fei-Fei Li compares the development of spatial intelligence in AI to the evolutionary leap that occurred when organisms first developed sight, which led to an explosion of life, learning, and progress. She's essentially saying that a similar transformative moment is about to happen for computers and robots.

So Amith, this was a lot for me to tackle. You sent me a podcast episode, which we'll link in the show notes. It was an a16z episode where they interview Fei-Fei Li, and it was pretty mind-blowing. It was a short episode; I don't know how they got so much into just a short period of time, but it was great.

You all should listen. And you and I on this podcast have talked about computer vision, which allows for object detection and recognition in images and videos. We talked about SAM 2, or the Segment Anything Model, recently, but what we were actually talking about is 2D analysis. And so spatial intelligence, as I understand it, is different in that it refers to an awareness and understanding of the three-dimensional world.

Can you talk about that distinction a little bit?

Amith Nagarajan: Sure. And before I forget to do this, Fei-Fei Li wrote a book called The Worlds I See, which is an excellent book on AI generally, and it does talk about her personal journey and how she went through this process of really getting into this particular subspecialty within the world of AI, which I think is just a really interesting thing.

So I think of it this way. Actually, I think her analogy is best: it is like an organism first evolving to develop sight. And as the sight gets better and better, the capabilities are radically different than those of organisms that didn't have sight. So that, I think, is one way to look at it.

Another way to think about it is, through language models, I am just essentially describing to you: this is the house I live in, these are the dimensions of the living room and the dining room and the bedroom, and this is how tall my roof is. I can give you a lot of textual descriptions of that.

But if I take you to my house and I show you my house and I walk you around it, you see it, and your brain is constantly processing these multidimensional images of what's going on. And you've got movement in there, which is a fourth dimension. You have a much better understanding.

Plus, you probably have an understanding of materials as well, and physics, and you have an intuitive understanding of those things, where you're like, oh, well, I know that that beam is made out of wood, this other thing is made out of steel, and this is made out of glass. And what would happen if you threw a baseball at the glass? It probably will break; it might break depending on the kind of glass. Whereas if you throw it at the wood, you know intuitively it's probably not going to do anything.

So you have an intuitive understanding of physics and materials and a whole bunch of other things in your brain. And a lot of that has come not because someone has told you, hey, Mallory, baseballs break glass but don't break wood; it's because you've had experiences in your life, largely visually, where you've ingested all that training data, essentially, over the course of a lifetime.

And essentially, the concept she's talking about right now is to couple the ability of models to take in video and image data with actually having understanding, as opposed to just basically looking for patterns. And the understanding means that there's a physics engine in there, where the movement of objects is not just based upon patterns that were inferred from millions of hours of video previously, but also based on an understanding of what the rules of physics are, or an understanding of the materials. Now, you may or may not have any way of determining what the materials are. You know, if you look at a car, is the panel on the side of the car made out of aluminum or steel or composite or something else? You can make some educated guesses based on what you think most car panels are made out of, but if you actually knew that, then you'd be able to better predict what would happen if that car crashed at different speeds, and all this kind of stuff. So a lot of what she's talking about is what's coming. In my mind, there's world models, there's vision models, there's language models, there's all sorts of different kinds of models.
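To make that coupling a little more concrete, here is a minimal, purely hypothetical Python sketch of the idea Amith describes: a perception model's guess about an object's material is combined with an explicit physics rule (kinetic energy versus a breaking threshold) instead of relying on pattern matching alone. The materials, thresholds, and the stand-in "perception" lookup are illustrative assumptions, not values or code from any real world model.

```python
# Hypothetical sketch: combine a perceptual material guess with an explicit physics rule.
# The materials, thresholds, and the "perception" step are illustrative assumptions only.

MATERIAL_BREAK_THRESHOLD_J = {  # rough, made-up breaking energies in joules
    "glass": 5.0,
    "wood": 400.0,
    "steel": 5000.0,
}

def perceived_material(object_label: str) -> str:
    """Stand-in for a vision model's material guess (here, a trivial lookup)."""
    guesses = {"window": "glass", "beam": "wood", "car_panel": "steel"}
    return guesses.get(object_label, "steel")

def kinetic_energy_j(mass_kg: float, speed_m_s: float) -> float:
    """Basic physics: KE = 1/2 * m * v^2."""
    return 0.5 * mass_kg * speed_m_s ** 2

def will_break(object_label: str, projectile_mass_kg: float, speed_m_s: float) -> bool:
    """Physics rule layered on top of the perceptual guess."""
    material = perceived_material(object_label)
    return kinetic_energy_j(projectile_mass_kg, speed_m_s) > MATERIAL_BREAK_THRESHOLD_J[material]

# A baseball (~0.145 kg) thrown at ~30 m/s:
print(will_break("window", 0.145, 30.0))  # True  -> the glass probably breaks
print(will_break("beam", 0.145, 30.0))    # False -> the wooden beam probably doesn't
```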

Ultimately, these things are all going to converge into models, and whether they're large or small, I think, will be a distinction that matters more to the engineers than to most people. But this is a capability none of these models have today. They don't have an understanding of what's going on.

They're just basically predicting the next frame of the video, essentially, as opposed to having a deeper understanding. So that's a big part of what she's describing. The other thing that I want to point back to is that some time ago there was a big amount of noise about OpenAI releasing videos from something called Sora, which we covered on this podcast.

Maybe it was back in the spring, right around there. And the speculation at the time was that the reason OpenAI had invested in this text-to-video generative model is that they were playing with a world model, a physics-based world model, which is similar to what's being described here.

So I think a lot of labs are working heavily on this. I think she's amazing. Fei-Fei Li is amazing. I think she'll probably do something really interesting there. The question will be, will this company have a distinguishably different capability compared to what OpenAI, Anthropic, and a lot of other big labs are doing, who are all thinking about this problem as well?

Mallory Mejias: Oh, man. This is so interesting. To me, there was a thread, particularly when she was talking about language models, this underlying current, at least in my words, of: you ain't seen nothing yet. Which I think is interesting, because we talk about generative AI so much on this podcast, particularly models like GPT-4o and Claude, and my mind has been blown seeing these generative language models. But the fact that that really is just a fraction of the way we perceive the world... I thought it was quite a provocative statement she made about how language doesn't really tell us all that much about the environment around us. But you nailed it on the head: it's that 3D interpretation.

Amith Nagarajan: Well, I mean, Yann LeCun, who's the head of Meta's FAIR lab, their AI research lab, is a brilliant computer scientist and AI researcher; he's been the main head of that lab. And he has talked in the past about how, in the period from birth to, I think he says roughly four years old, a human being has taken in so much information through our various senses, right?

So sound, vision, touch, smell. These are senses that generate information for our brains and our training process, and in those four years, that child has taken in more information than the entire world's collective knowledge base, right? From a digital perspective. Actually, I think I have that stat wrong.

It might be multiple orders of magnitude greater. So he talks about that and then, you know, points out some similar things. He has mentioned publicly in the past, and the recent past, that they too are working on models kind of in this genre. So I think it's an exciting category.

Yeah. Fei-Fei Li, by the way, one of the interesting things about her is she was the main thrust behind ImageNet, which was the thing that put deep learning on the map. So, back two decades ago, she was doing research on image recognition, and they built a really interesting labeled dataset.

So, in classical machine learning, what you would do is you'd have data, and you'd have to have labeling for it. And the idea would be that you're trying to get the machine learning algorithm to predict, in this case for an image, what the labels are for it, based upon its training set.

So you would have millions and millions of images that have been labeled very painstakingly by hand, by people. And she was able to do that using the power of scaling through Google and getting data, and then also using a distributed, kind of gig-economy-type workforce to build it.

She talks about this in her book, which is really interesting. But what ended up happening was, she created the contest called ImageNet, which was an annual computer science competition where it was like, hey, you can submit essentially any algorithm you want and show us, through your research, how accurate your algorithm was for a set of benchmarks. And the team from the University of Toronto that came in with the AlexNet paper basically showed that this deep neural network, which they trained on two GPUs in a lab at the university, was able to blow away everyone else. And from there it exploded into the whole deep learning thing.
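For readers who want to see what that classical supervised setup looks like in code, here is a minimal, hypothetical PyTorch sketch of training a small classifier on labeled images, in the spirit of the ImageNet-style pipeline Amith describes. The tiny network, random stand-in data, and hyperparameters are illustrative assumptions, not anything taken from ImageNet or AlexNet itself.

```python
# Minimal, hypothetical sketch of classical supervised image classification:
# labeled images in, predicted labels out. Data here is random stand-in data.
import torch
import torch.nn as nn

NUM_CLASSES = 10
images = torch.randn(256, 3, 32, 32)            # stand-in for hand-labeled images
labels = torch.randint(0, NUM_CLASSES, (256,))  # stand-in for human-provided labels

model = nn.Sequential(                          # a tiny convolutional classifier
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, NUM_CLASSES),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(5):                          # learn to predict labels from images
    optimizer.zero_grad()
    logits = model(images)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

The same recipe, scaled to millions of labeled images and a much deeper network, is essentially what the AlexNet result demonstrated could outperform every hand-engineered approach in the competition.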

So she's been deep in all of that for a very long time; brilliant lady and a super interesting person. So I think she'll probably do well with whatever she does, is my prediction, but I think a lot of other people are putting energy into this as well.

Mallory Mejias: I'd like to talk a little bit about the training of a world model, because for me, it seems a bit like a paradox in that we use text to train generative text models, images to train image models, video to train video models, and sometimes a diverse set of those to train models. But how can we train a spatially intelligent model when we're using these other modalities?

Does that make sense?

Amith Nagarajan: Yeah, your question makes a ton of sense, and I wish I had a good answer for you. That's an area where I think a lot of the researchers working deep in this haven't yet published a ton, and I don't know how much will be published. I suspect, and this is purely speculation on my part, that there will be some kind of MoE-style approach to this, a mixture-of-experts-style approach, where you imagine a model that's trained in a more traditional manner and has a rule set that's physics related.

Then you have other models that are more generative in nature, and maybe collectively they're able to make hyper-accurate predictions of the world around us. So I think that the dataset will probably be one of the most unique challenges of training these kinds of models. There may be a need for novel model architectures as well that are particularly efficient at dealing with this kind of data.

You know, a lot of the data we've been working with, whether it's image or text or video, as well as things like DNA, is sequential in nature. And you're able to say, hey, let's put it into a sequence, which then bodes well for the transformer architecture in terms of predicting the next token in that sequence, right?

Whether it's a pixel or a word or whatever the case may be. Or even using it for diagnostics, where you can put a patient's symptoms into a sequence and then use that to predict what their condition might be. But I don't know whether or not this type of problem fits into that kind of sequential model.
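As a concrete illustration of the sequence framing Amith mentions, here is a minimal, hypothetical PyTorch sketch of next-token prediction with a small causal transformer over an arbitrary token vocabulary (words, pixels, or any other discretized modality). The vocabulary size, dimensions, and random data are illustrative assumptions, not the setup of any production model.

```python
# Hypothetical sketch: next-token prediction over a sequence, the framing the
# transformer architecture is built around. Sizes and data are illustrative only.
import torch
import torch.nn as nn

VOCAB, D_MODEL, SEQ_LEN = 1000, 64, 32          # tokens could be words, pixels, etc.
tokens = torch.randint(0, VOCAB, (8, SEQ_LEN))  # a random stand-in batch of sequences

embed = nn.Embedding(VOCAB, D_MODEL)
encoder_layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
to_logits = nn.Linear(D_MODEL, VOCAB)

# Causal mask so each position can only attend to earlier positions.
causal_mask = torch.triu(
    torch.full((SEQ_LEN - 1, SEQ_LEN - 1), float("-inf")), diagonal=1
)

inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict token t+1 from tokens <= t
hidden = encoder(embed(inputs), mask=causal_mask)
logits = to_logits(hidden)                       # (batch, seq_len - 1, VOCAB)

loss = nn.CrossEntropyLoss()(logits.reshape(-1, VOCAB), targets.reshape(-1))
print(f"next-token prediction loss: {loss.item():.3f}")
```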

So that would be interesting. I think there's a decent chance there are ways to encode it that way. The question is whether or not that will be efficient and whether it'll scale. One of the interesting conversations around AI, particularly around world models, is: who are the companies that are likely to have the advantage here?

And this is where people who have a lot of real-world data like video could potentially be interesting, especially if they have a lot of mobile video, where you have both video capture and telemetry-type data that tells you things about location, speed, stuff like that. And guess who has a lot of that?

It's Tesla, and other auto companies as well that have camera systems in their cars more recently, but Tesla has by far the world's largest repository of video. And not only do they have the video, they also know the location where the video was taken. And perhaps more importantly, they know things like acceleration and velocity, and all these other things that could be really interesting in building a world model. And also remember that Tesla is working on their humanoid robots, which would require a world model to be really effective in kind of non-industrial settings. So I think it's going to be an interesting race to watch for sure.

Getting back to your question, the short version is I don't actually know what people are going to do here, but I think there's going to be an explosion of research published in the next couple of years that will really give us more insight. The other thing we have to keep in mind is that we have a kind of tailwind behind us: all the AI stuff that's happened, the continuous compounding of Moore's law that keeps happening, and new computing architectures. There was just an announcement from IBM, something called NorthPole, I think it is. I haven't read the paper yet, but they just announced some research on a new processing architecture that's dramatically more efficient than anything out there, even more so than the Groq platform that we've talked about.

So you have that kind of stuff happening, which is going to open up the capacity to process just a ridiculously larger amount of data ultimately than what we've been doing.

Mallory Mejias: Yeah, and this is more of a note that might be interesting for listeners. You mentioned robots and humanoid robots, and they don't have to be humanoid robots, but they interact with the 3D world essentially as they move through space, while their compute, or the brain of the robot, is by definition the digital world, or the 2D version of things.

And so I think Fei-Fei Li was talking about spatial intelligence kind of being that connector. And if we could build natively spatially intelligent robots, we'd be seeing vastly different things, I think.

Amith Nagarajan: I think you're right. Yeah. I think there's definitely going to be some interesting opportunities around that. You know, I think that ultimately the way computers represent information and make decisions, whether it's AI or more classical deterministic software, is based on an extremely narrow, limited amount of data

compared to what we use as people. And so that can be good and bad. I think part of what we have evolved to do, and it's really a good thing, because we wouldn't be able to survive without it, is the ability to very rapidly filter out a lot of noise from the signal. And that's where I think our perspective on how to design these algorithms is a little bit different, because computing resources have been very limited until recently, and they're still quite limited, especially if you look ahead and say, where will computing be in five years? So the opportunities to do things at scale are going to be quite different. I mean, another little, often lesser-known fact is that neural networks are not a new innovation. They scaled into deep learning in the early 2010s, which is when we started to see that really explode.

But the idea of neural networks goes back decades and decades. I mean, the first deep research into this occurred in the seventies and eighties. The problem was that compute was so tiny back then, so limited, that neural networks were not useful and they were considered a toy. And back in the late nineties, there was another wave of people saying, hey, neural networks can be great, but compute still was very limited, and the amount of data we had, because this was the early days of the internet, was very limited.

And, you know, once again, it comes in these waves. So in any event, I think that there is a lot of opportunity here. The thing that I want to tie together for our listeners is how this relates back to the world of associations. And there are two comments I want to make there. First of all, even though associations are traditionally dealing with information and providing services like membership and education and so forth through traditional means, you ultimately operate in the world, just like anyone else.

So you don't really know exactly how something like this would affect the way you operate your business, or the services and products that your association may be asked to produce, but it's likely going to change, because when you think about intelligence in the spatial world coming online, the needs and the expectations of your members are going to change.

And that's really the second point: when you think about a new innovation like spatial intelligence that's coming online, let's say in the next five years there's going to be major progress there, perhaps similar to the last five years of what's happened with language, right? So hypothesize that that's going to occur, let's say, for the rest of this decade.

What does that mean for your field? If you're in a branch of medicine, what does it mean for your doctors and nurses and medical assistants? If you're architects or engineers, what does it mean for your world? If you're a materials science organization, what does it mean for your members? And the list goes on and on.

In every field, there's some impact from all of these technologies, and what associations, in my opinion, have to do is anticipate those and start thinking about the products and services that members in that future state will need. And not necessarily go build them right now, but start thinking about what those products might actually be.

Mallory Mejias: Listening to Fei-Fei Li speak just emphasized to me that we are on the cusp of an absolute explosion. It kind of felt like we had already lived the explosion, but hearing her talk about spatial intelligence, I don't think we've seen anything yet.

Amith Nagarajan: I agree.

Mallory Mejias: Well, Amith, thank you for 50 episodes of the Sidecar Sync podcast. How exciting. We've got to thank everyone that's tuned into the podcast. Reminder: if you're interested in attending digitalNow, compete in our contest that ends tomorrow, and check out the AI Learning Hub as well.