Sidecar Blog

Meta's New Llama 3.2 Model & Decoding Digital Twins [Sidecar Sync Episode 52]

Written by Henry McDavid | Oct 17, 2024 3:57:50 PM

Timestamps:


00:00 - Introduction
02:31 - Digital Now Conference Update
05:44 - Meta’s Llama 3.2
09:50 - AI at the Edge: On-Device Benefits and Privacy
14:48 - Vision Capabilities in Llama 3.2 Models
20:21 - Which AI Model is Best for Complex Use Cases?
23:48 - What Are Digital Twins?
31:59 - Applying Digital Twins in Associations
36:42 - Optimizing Member Experiences Using AI

 

Summary:

In this episode of Sidecar Sync, hosts Amith Nagarajan and Mallory Mejias delve into two significant AI developments: Llama 3.2 and digital twins. They discuss Llama 3.2's advanced features, including its range of model sizes and improved language support, emphasizing its potential for on-device AI applications. The hosts explore the concept of digital twins, virtual representations of real-world entities or systems, and their applications in various industries, including associations. The conversation covers the benefits of digital twins in decision-making, predictive analytics, and personalization. Amith and Mallory also touch on the importance of data management and common data platforms in implementing these technologies. Throughout the episode, they provide insights on how associations can leverage these AI advancements to enhance member experiences and optimize operations.
Let us know what you think about the podcast! Drop your questions or comments in the Sidecar community.

This episode is brought to you by digitalNow 2024, the most forward-thinking conference for top association leaders, bringing Silicon Valley and executive-level content to the association space.

Follow Sidecar on LinkedIn

🛠 AI Tools and Resources Mentioned in This Episode:

MemberJunction Library ➡ https://docs.memberjunction.org/

Llama 3.2 ➡ https://huggingface.co/meta

ChatGPT-4 ➡ https://openai.com

Claude 3.5 ➡ https://www.anthropic.com

Gemini 2.0 ➡ https://www.google.com

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey. Follow Amith on LinkedIn.

Mallory Mejias is the Director of Content and Learning at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Follow Mallory on LinkedIn.

 

Read the Transcript

Amith Nagarajan: Sidecar Sync listeners and viewers, welcome back to another episode. We are so pleased to have you with us. Thank you for spending a short bit of your day with us. We have, as always, a bunch of exciting topics. My name is Amith Nagarajan.

Mallory Mejias: And my name is Mallory Mejias.

Amith Nagarajan: And we are your hosts. And before we get into our two topics for today at the intersection of all things, artificial intelligence and associations, let's take a moment to hear from our sponsor.

Mallory Mejias: Amith, how are you doing on this lovely Wednesday morning?

Amith Nagarajan: I'm doing really well. It's a cool morning in New Orleans. I got a run in this morning and, uh, it's just nice outside. And that puts me in a good mood. How about you?

Mallory Mejias: It's chilly over here in Atlanta. It was 41 degrees when I woke up, so I was not fully prepared to take the dog out in that weather, but it's been nice. I really enjoy fall. It has been a crazy few weeks for sure as we gear up for Digital Now [00:01:00] 2024. Are you excited, Amith?

Amith Nagarajan: I am super excited about digitalNow 24. We've got two weeks to go, as you said, October 27th. I guess it's less than two weeks; it's 11 days from today. We're recording this on the 16th. So, uh, we've got, um, I think record attendance already, or record registration already, and, uh, and growing. Uh, we've got 11 days to go.

So if you haven't yet registered and you intend to come, you should definitely get on that right away. A lot of people post-COVID have been registering for conferences, like, at the last minute, or even showing up on site to register. So, um, those of you association folks that run meetings know this very well.

Um, we're, we're challenged with that, obviously, in planning, but, uh, we're so excited. We've got some amazing content lined up. Um, can't wait to see this large group get together in DC, and we've got an exciting surprise, uh, to announce when we get there.

Mallory Mejias: Absolutely. Yes. What is this trend about, Amith, where I swear, I don't know, maybe over [00:02:00] 40 percent now, maybe over 30 percent of our attendees have registered within the last probably month and a half, which is really amazing, but also really hard to plan for. But I mean, as I said, the fall's a busy season, not just for us, but for everyone.

So I think a lot of people were waiting to see kind of how their schedules shook out.

Amith Nagarajan: Yeah. A couple of evenings ago, I was chatting with the CEO of a pretty large association that we work with a lot. And, uh, he was telling me that he just got back from his conference, and I don't remember how many thousands of people were there, but it's a big event. And he said that 25 percent of their registrants had registered in the last 10 days.

Mallory Mejias: Wow.

Amith Nagarajan: I don't know if that's a general trend line or if that's more than normal. But yeah, that's definitely a shift, um, compared to pre-COVID. You know, it seems to be a durable shift in behavior, um, where people used to take advantage of early bird type pricing more, or just plan ahead more. But, uh, it's interesting.

And I think maybe AI will help us better predict [00:03:00] registration patterns in the future. You know, if you were to use a machine learning model trained on pre-COVID data to try to predict what the registrations will be in a post-COVID world, that's a good example where, um, a lot of times AI's training data, you know, can get invalidated, um, based on changing events in the world.

So, but we're excited. It's going to be awesome.

Mallory Mejias: We are indeed excited. We had actually a listener of the Sidecar Sync podcast reach out to us in our inbox, and they are based in the UK, but they're avid listeners of the Sidecar Sync, and they asked if we had a virtual option for DigitalNow, which we don't technically, but we will be recording those keynote sessions and adding them to our AI Learning Hub, which you've heard us mention on the podcast before.

We also have all of our 2023 keynote sessions from digitalNow in the AI Learning Hub as well. So if that is of interest to you, if you're listening to the pod but you can't join us, keep, uh, keep a lookout for those keynote sessions on the AI Learning Hub. Today, we have two exciting [00:04:00] topics to talk about.

First and foremost, we're speaking about Llama 3.2, and then we'll be talking about the concept of digital twins, which is a new one for us on the Sidecar Sync podcast. Llama 3.2 is Meta's latest advancement in large language models, and here is an overview of some of its key features and advancements.

Llama 3.2 offers a range of model sizes to cater to different use cases. We've got lightweight models at 1 billion and 3 billion parameters, designed for edge and mobile devices, and also medium-sized models at 11 billion and 90 billion parameters with vision capabilities. The lightweight models are optimized for on-device applications, providing instant responses and enhanced privacy by processing data locally, and the larger models introduce multimodal capabilities, allowing for more sophisticated reasoning tasks, including image processing. As a note, the 11 billion and 90 billion models are the first in the Llama series to support vision [00:05:00] tasks. All Llama 3.2 models support a context window of up to 128 thousand tokens, and they offer improved support for eight languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

They're also optimized for deployment on various hardware platforms, including Qualcomm and MediaTek chips, as well as ARM processors. Now, these models are suitable for a wide range of applications, like personal information management and multilingual knowledge retrieval, on-device AI for edge and mobile applications, image reasoning tasks, like we mentioned, including object identification and visual grounding, as well as document-level understanding, including processing of complex graphs and charts.

It is available now through various platforms. You can download it from Hugging Face and Meta's website. It's deployable across major cloud providers like AWS, Google Cloud, and Microsoft Azure. And it's accessible through Meta's Llama Stack, [00:06:00] which provides APIs and tools for easier customization and deployment.
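[Editor's note: For readers who want to try one of the lightweight models locally, here is a minimal sketch using the Hugging Face transformers library. It assumes you have installed transformers and PyTorch and accepted Meta's license for the gated meta-llama/Llama-3.2-1B-Instruct checkpoint on Hugging Face; adjust the model ID and prompt to taste.]

```python
# Minimal sketch: run Llama 3.2 1B Instruct locally with Hugging Face transformers.
# Assumes `pip install transformers torch` and that you have accepted Meta's
# license for the gated meta-llama repo on Hugging Face.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",  # the 1B, on-device-class model
)

messages = [{"role": "user", "content": "In two sentences, why run AI on-device?"}]
result = chat(messages, max_new_tokens=120)

# The pipeline returns the conversation with the assistant's reply appended last.
print(result[0]["generated_text"][-1]["content"])
```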

So Amith, we're big Llama fans over here, I would say. What do you think is exciting about Llama 3.2?

Amith Nagarajan: Well, first of all, I think it's the season for new models, it seems like. But then again, it's kind of like saying it's hot in New Orleans. It's always the season.

Mallory Mejias: It's always, right, right.

Amith Nagarajan: Um, you know, it's crazy. And actually, just as an aside, I read, and I don't know how truthful or accurate it is, but there's, um, I forget the name now, but there's a particular Twitter user that has consistently given, uh, scoops on leaks from the major labs. And supposedly, um, there's a GPT-4.5 coming; there is a Claude 3.5 Opus, which is the larger version of Claude's model, on the horizon, which is likely why GPT-4.5 would probably drop right after that, because OpenAI has kind of a good habit of dropping things as soon as, uh, Claude does. Uh, and a plan for Google to release a [00:07:00] Gemini 2.0 at some point this fall. Now, that's all speculation, but the point would be, at least one, probably two, of those three statements are likely to be true in the next several weeks. Uh, and coming back to Llama 3.2: this is a very powerful, uh, model from Meta. It's open source, open weights, totally free, and it inferences everywhere around the world, in different languages, and on any platform you like.

So it just gives us more flexibility. Um, and all of that is good. This is all increasing choice, increasing flexibility, um, and pushing things forward. Now, coming back to the question about, you know, on-device, or, you know, on edge, and I think a lot of us are just generally on edge with AI, but, you know, it's one of these things where, why would you want to do that is one of the questions people ask me a lot.

And the real reason is, um, first of all, performance. Um, think about it this way: your phone is an unbelievable machine. It has an enormous amount of power in it, and these phones keep getting better and better, as do [00:08:00] laptops and, uh, tablet devices and so forth. Even your watch probably has a pretty powerful processor in it.

Um, and so, why not take advantage of all of that computing that's just literally sitting there? You know, over the history of computing, we've had, uh, kind of this ebb and flow back and forth, from central computing to on-edge or on-device computing and back again. And, you know, it started off with mainframes and minicomputers, where you had these dumb terminals, as they were called, uh, which were these green screens, and it was all character-based applications, and all the processing happened on the mainframe or the minicomputer.

Then in the nineties, we moved to client-server, where there were applications that kind of had a mix: there were some things on the server and then some things on the Windows PC, typically. And then with the web, it's moved largely back to the data center, although more and more can be done in a web application.

And of course, with mobile apps running on your phone, you can tell very quickly what's a native app versus an app that's just a thin frame around a web experience. The fidelity and quality of that [00:09:00] application is usually higher. The reason I share all that quick history of computing, you know, over the last 60, 70 years, is that, as these things have kind of gone back and forth, we keep realizing, well, computing doesn't slow down, and there's all this power on edge or on-device, in your phone, on your tablet, on your laptop.

We should be taking advantage of it. Um, so that's one reason, uh, it's interesting. Uh, and many, many problems don't require the most powerful model, um, in the biggest computing, uh, data center. The other reason is privacy. So when you think about Apple's entire strategy around AI, it really focuses around this on-device, um, model concept.

Apple has their own very small, I think it's a 3 billion parameter, model that runs on the latest iPhones that were released. Um, and Meta's 1 billion parameter model clearly is targeting the Android world and, just in general, anybody who wants to run small models. Of course, Gemini Flash, which is their tiny model, uh, is super fast and super small.

So coming back to the whole idea is that [00:10:00] if we can distribute our workloads, where some of the AI happens locally, and some of the AI that's really complex maybe happens in a data center, that's an interesting ecosystem. We can take advantage of this incredible infrastructure that's out there. Um, for associations, when we think about this, we might say, well, what aspects of our member experience may we want to perhaps not ever get the data for? Like, we don't necessarily want association members to ever share, like, let's say, patient information. Um, we really wouldn't want that, right? Um, if we're a medical association. So maybe there's some kind of an application that we release that's trained on our corpus of content.

So it's an expert assistant in a particular medical domain, but we want those conversations to be entirely local to the environment that the user is using it in. Maybe they have it deployed in their environment. So that's just one example where you have personally identifiable information, um, or just generally sensitive information.

So that's, that's one example. Or, just within the association itself, if you have the opportunity to deploy these models in your own [00:11:00] data center or in a cloud environment, which is what most people do these days, and you can control it, then you know that your data isn't kind of leaking away and going to major cloud providers, or, sorry, major AI providers, who you may or may not trust.

I'm not suggesting you shouldn't, by the way. It's just an opportunity to create a more secure infrastructure for your most sensitive data.

Mallory Mejias: Mm hmm. In that case, if security is of the utmost importance for an association, would you recommend that all of the AI models they run, uh, be run locally, or is there kind of a use case where you could use these other models as well?

Amith Nagarajan: I think it's a mix-and-match scenario, and it depends on the application. It also depends on the association. I wouldn't say that security is the highest priority for data in all associations all the time. Even the government, you know, even the CDC or the CIA, has different tiers of security around different kinds of content.

They have a public-facing website with some information, and then they obviously have various levels of classification. Uh, and I think that's true for, that should be true for, most other organizations, including associations. And perhaps your most sensitive workloads that you do want to AI-enable, you run, you know, on-prem or in a virtual private cloud environment, where you have a higher degree of, of control.

Um, with Llama 3.2 specifically, I think there are some really interesting things about this particular series of models that are worth noting, too. One is the size of the smallest model being 1 billion, versus most of them having been 2, 3, 4 billion parameters.

That's a notable size reduction, which means less memory, less compute. Um, and there are benchmarks showing performance that's roughly on par with, uh, prior model generations, AKA six months ago, that were 7, 10, 12 billion parameters. So, you know, we've talked about this trend line over time here at the Sidecar Sync, where we've talked about the enthusiasm we have for small models.

Um, as much as the latest frontier models, the biggest models, because they're democratizing access to really high-powered AI. Uh, and the 1 billion parameter Llama model is roughly as good [00:13:00] as GPT-3.5 was, which is kind of crazy. That's back in late '22, when ChatGPT launched and people first got that taste. You know, that was based on a model that's approximately equivalent in power to the 1 billion parameter Llama.

So that's exciting. And then the vision capabilities you mentioned earlier, um, Mallory, I wanted to quickly highlight. You know, when you're able to have a multimodal experience, the model can understand more about what the user's trying to do. It can look at pictures of the world, and over time, that will be video as well.

And then it can respond to you both in the form of text and, and pictures. So, um, that I think is a really powerful concept. Right now, the vision model doesn't extend to image generation, but that, uh, likely will soon be kind of a standard thing all models do. Um, but this model is able to look at images. And so, uh, there are many applications that can be enabled by, uh, that kind of multimodal, multimedia type of capability.

And even the, you know, what was it, the 11 billion parameter model, [00:14:00] which is very small, you can also inference that locally. 90 billion, once you get into that range, those models are big enough where you probably have to run them in a data center. But, uh, the 11 billion parameter model having vision is pretty crazy.

Mallory Mejias: I mean, I would say you're probably the biggest AI power user that I know, and I would bet that many listeners and viewers of the Sidecar Sync would probably agree. I'm curious, are you considering, or do you run, an AI model locally on your mobile device for personal information management? Is that something you're considering doing?

Amith Nagarajan: I definitely am considering it. I have actually done it. I've, I've downloaded Llama 3.1, the small model, and I ran it. I'm trying to remember what the app was. There's a whole bunch of apps you can use. Be careful, because there's a lot of malware out there too, but there are apps you can download from the, um, Apple, uh, iPhone store, and also from the Android world.

And then you can download different models, and then you basically have a chat-type experience with them. The problem with these things is they're not [00:15:00] integrated well into the experience, so they're not kind of natural and engaging in a way where, you know, you can talk to an on-device assistant. So if you think about where Apple's trying to go with Siri, where Siri has local memory and local inference, that's really interesting, because a model like that, that has, you know, essentially guaranteed privacy, where the data is encrypted on device, um, is not available to anybody.

Um, I would definitely think people would be able to get comfortable using it for other things. An example might be, like, personal data. If you, you know, use, like, a fitness tracking tool that's linked to your iPhone, do you want that health data going to, um, any of these cloud providers or AI providers? Maybe. I mean, maybe you're comfortable with that and maybe you're not, but more people would be comfortable, I think, with local.

Um, so I think there's a number of use cases where that makes sense. I also think what's going to happen is every app is going to have a built-in AI model, where it's going to just, like, say, hey, this app you downloaded from the app store, it has Llama, like Llama 3.2 1B, just [00:16:00] baked into it.

It's just part of the app, part of the download. And maybe it's used, maybe it's shared or something, but the apps become dependent upon local inference capabilities, just like the apps are dependent upon multi-touch capabilities, or the apps are dependent upon having a camera. You know, there are whole generations of apps that are going to become available locally, because there's zero incremental cost for having those basic AI capabilities.

I call them basic, but they're actually pretty advanced. So if you just assume that mobile app developers in 2025 are going to say, yeah, of course there's an AI model that runs locally, there's essentially zero latency with the network, it's super fast, and there's no incremental cost.

It changes the equation, because even though the cost of running these models in the cloud has been dropping really rapidly, it's still, it's still something. So if a mobile app developer says, hey, I want to do a free app that helps you with nutritional coaching, and you can inference that locally, it's both better privacy-wise, plus the mobile app developer has zero incremental cost when people download it. So you're going to see an explosion of AI-enabled apps, is the [00:17:00] short version of my answer. And coming back to the question about me: yeah, I'll totally use this stuff. I'm, I'm a little bit of a strange case in lots of ways, probably, but particularly in terms of AI use.

I do use AI really heavily. Um, but I'm also something of a creature of habit, so I'm not necessarily going out there and trying every single new tool as fast as I can. Um, I do try to go deeper in tools and try to, like, stretch how much I can get out of them. You know, over the last 30 years of entrepreneurship, one of the things I've, um, spent a lot of time thinking about is how to grow other leaders.

Um, and part of that is helping people learn how to think about their own prioritization and how to delegate. Part of that is being really good at protecting your time and focusing your time on high-value activities, finding low-value activities, and then pushing them off to other people, traditionally.

Um, now you can push it off to AI. So I do come back to it and say, hey, what else can I get out of the tool set I have? So in that way, [00:18:00] I'm probably a power user. Um, I'm very deep in using these things, like from a programmatic perspective, because I work with our software development teams across the family of companies all the time. But, uh, anyway, that's, uh, that's kind of what I do with them. But, um, I have not yet played with Llama 3.2 specifically.

Mallory Mejias: Well, that's kind of a good segue into my last question here, which is, uh, we've talked about on the podcast before this plug-and-play approach. So if you are developing an AI product at your association, you don't want to be locked into one model. You want to have kind of this layer of protection in between you and the technology so that you can plug and play new models as they come out.

So I'm going to put you on the spot here, Amith, as someone who is developing and helping to develop AI products: out of all the recent models we've seen thus far, which one do you find yourself going back to?

Amith Nagarajan: Well, I would tell you that for the complex products we're building, at the moment, OpenAI's models are still the best at something that we refer to as deep instruction following. [00:19:00] So if I provide the model with a really complicated prompt, you know, a prompt that has multiple layers of direction and is asking for a highly specialized type of output, um, that is something OpenAI has done a better job at.

They've essentially fine-tuned their models to have very, very good structured outputs, and they actually have APIs for it. So OpenAI is more reliable, by enough of an incremental margin for really complex use cases, that our teams tend to default to OpenAI still. Um, and we have no problem with that. I mean, OpenAI seems to be a reasonable company.

You know, I don't know that I trust them any more or any less than anyone else, but, you know, they have a really good product at the moment. Um, I don't think that advantage is going to be durable. I think that, you know, the other products are right on their heels, and I've seen really good results from the latest Llama models, like the 405B model specifically.

So for really complex software development, I think OpenAI has certain advantages. Um, but that's not true universally. There are [00:20:00] aspects, uh, for example, with one of our AI agents that we have, called Skip, which is our AI data analyst product, that use GPT-4o in certain areas but also use 4o mini in some areas. And in the case where 4o mini is used, you could also just as easily plug in, um, Llama 3.2 90B, um, as a very easy substitute for that. So we like to say we're agnostic. The point you made about plug and play, though, Mallory, I think is the most important thing I can try to hammer home for our audience.

You should not really closely couple your development to a particular company's APIs or a particular company's proprietary tools. Of course, all of the companies are trying to create additional features. A great example of that is OpenAI has something called the Assistants API; think of that as kind of like being a custom GPT under the hood. So you can provide files, you can have it run Code Interpreter, you can have [00:21:00] it do a whole bunch of things that the regular API can't do. But once you build on top of the Assistants API (first of all, it's quite limited right now, but that will change), if you build on top of it, then you're way more closely coupled to OpenAI. You have less portability than if you build in a more general way.

Um, this has been going on since the beginning of time with every platform you build on. Uh, for example, you build a web application, um, and back in the days before browsers were really standardized, someone like Microsoft might come out and say, hey, we've got this browser called Internet Explorer, and it supports this extra feature. And then all of a sudden, websites are incompatible with other browsers, right? So that's exactly what's happening with these AI models. I would just encourage people that, to the greatest extent you can, come up with a generic way of interacting with them. Uh, actually, the MemberJunction team has a free library for doing this that you can get on GitHub. We'll include it in the show notes. It's just called the MemberJunction AI library. Independent of using anything else in the MemberJunction world, this tool allows you to automatically switch [00:22:00] which models you're running your code against, uh, so you can very seamlessly switch from one to another.

And there are other libraries that do that as well, but this is all free, open-source software that's easy to take advantage of.
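[Editor's note: To make the plug-and-play idea concrete, here is a hypothetical sketch of the adapter pattern Amith describes. This is not the MemberJunction AI library's actual API; the class and function names are invented for illustration, and it assumes the openai and ollama Python packages with their standard chat interfaces.]

```python
# Hypothetical sketch of a provider-agnostic layer; names are invented here,
# and this is not the MemberJunction AI library's actual API.
from abc import ABC, abstractmethod

class ChatModel(ABC):
    """Vendor-neutral interface: application code only ever depends on this."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...

class OpenAIChat(ChatModel):
    """Adapter that isolates the OpenAI SDK behind the generic interface."""

    def __init__(self, model: str = "gpt-4o-mini"):
        from openai import OpenAI  # assumes `pip install openai` and OPENAI_API_KEY
        self._client = OpenAI()
        self._model = model

    def complete(self, prompt: str) -> str:
        resp = self._client.chat.completions.create(
            model=self._model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

class OllamaChat(ChatModel):
    """Adapter for a local Llama model served by Ollama."""

    def __init__(self, model: str = "llama3.2"):
        import ollama  # assumes `pip install ollama` and a running Ollama server
        self._client = ollama
        self._model = model

    def complete(self, prompt: str) -> str:
        resp = self._client.chat(
            model=self._model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp["message"]["content"]

def make_model(provider: str) -> ChatModel:
    """Pick the provider from configuration, so swapping models is a one-line change."""
    registry = {"openai": OpenAIChat, "ollama": OllamaChat}
    return registry[provider]()

if __name__ == "__main__":
    model = make_model("ollama")  # or "openai"; the calling code stays the same
    print(model.complete("In one sentence, what is a digital twin?"))
```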

Mallory Mejias: Awesome. We'll include that in the show notes for sure. Topic two today is digital twins. Digital twins are virtual representations of real-world entities, processes, or systems within a business, synchronized with their physical counterparts in real time. This concept has gained significant traction across various industries, offering a powerful tool for simulation, monitoring, and optimization of business operations.

So a business digital twin typically consists of the physical business entity, and that could be products, processes, or systems; the digital representation of the entity; and then the data connection between the physical and virtual representations. These digital replicas use real-time and historical data to represent the past and present states of their [00:23:00] physical counterparts and simulate predicted futures.

Digital twins can be applied to various aspects of a business, like operations and processes. They can model and optimize supply chain management, production lines, customer service processes, and resource allocation. They can be used for product design and development, performance monitoring, and predictive maintenance, and digital twins can simulate user interactions, customer or member behavior patterns, and service delivery.

There are quite a few benefits of having a digital twin in business, so I'll share a few of those with you all. They can provide real time data and predictive analytics, which enables more informed and timely business decisions. They can expose previously undetectable issues and guide managers to make data driven improvements.

Insights from digital twins can be used to improve products in future iterations or uncover opportunities for new product lines, and they can be used to deliver novel experiences and features to [00:24:00] customers. A primary example of digital twins that you'd definitely recognize would be Uber. Uber's sophisticated digital twin system demonstrates the potential of this technology in business, and it allows Uber to do things like manage dynamic pricing, optimize their routes, and ultimately improve their customer experience.

So Amith, yeah, I had not heard of the term digital twins before. I think it makes a ton of sense as it relates to business. And at a glance, our listeners might think, well, this sounds great for Uber, but what exactly might a digital twin look like in the world of associations?

Amith Nagarajan: Well, I think the concept of a digital twin, another way to describe it, is it's a simulation. It's a way of simulating a complex environment or a complex system. And so, digital twin is just the term of art that's been around for the last few years, I think. And, um, companies have embraced the concept. What's, what's different about it now is the amount of data we have, and obviously the AI that we have, to be able to, um, simulate increasingly [00:25:00] complex environments that have more and more dynamic variables to them, more and more externalities that are being considered.

So, on your question of how associations may apply it: imagine if you had a digital twin that represented, uh, something like your annual conference. In the annual conference digital twin, basically, you had all of the content, all of the speakers, and all of the attendees kind of loaded up into this digital twin of this environment.

And don't picture it as, like, a video of people attending the conference, but more, like, the idea of what happens, how will different people behave. You know, say you have 10,000 people coming to an event and you have 300 sessions and two dozen keynotes, and you have all these different variables. Well, what would happen if you moved session A from one room to another?

Or what would happen if you changed the musician that you have featured for an evening entertainment venue, um, from one group to another? Or what would happen if you changed some of the topics, right? So if we have all of the individuals modeled [00:26:00] as people, right, essentially, in this digital twin, in this complex ecosystem, we could say, well, this is likely what's going to happen. This is how that system will react. Your annual conference might have attrition. You might have fewer people come. You might have more people come. Um, more people might go to this session than that session. So it might help you with planning. Um, and you could also model the association's membership as a digital twin and say, how will the membership react to the association taking this public policy position, for example? And that might be based on all of the data you have on all of the people, based on all of your behavioral interactions, like newsletter clicks and website visits and educational courses they've taken, social media listening. You bring in all that data and say, I'd like to be able to better forecast, or really simulate, what happens in this complex dynamic system.

Um, so I think there's a lot of applications for associations. Now, what we're describing here is very sophisticated. So a company with the technology and capital resources of an Uber, or someone like a large-scale [00:27:00] manufacturer, historically, they've really been the only organizations that have had access to technology along these lines.

Remember that the cost of these types of systems is going to keep coming down, because we're on the backs of not only Moore's Law but the AI acceleration that we're all experiencing together. And so it's going to become less and less expensive to do the kinds of things that we're talking about. Um, and I think a really key thing to be thinking about, more generally than the term digital twin, is how do you better simulate what would happen as you make decisions?

And then, as you stack up decisions and say, okay, we've made the decision to host the conference in Illinois: which city in Illinois should we pick? Okay, well, what time of the year should we do it? And all these other things. And then you can have kind of downstream decision-making from there. So it is a really interesting technology if you generalize it.

Um, I think that the other thing to think about with digital twins is the applicability within the professions, uh, that you serve as [00:28:00] associations. So the example I just provided is really about how the association may use digital twins as a, as a platform, essentially, to make decisions and model the future.

Well, what about if you're an association, let's say, in the healthcare world? Your doctors, your practitioners, your clinicians, your researchers, how are they using this technology, or how will they likely use this technology? Well, imagine if there was a digital twin of a patient, and that digital twin was essentially, um, a representation of everything we know about this individual.

All of their, you know, key data, right? Their full electronic health record. Everything we know about them from a genetics perspective. Behavioral insights, like every data point we have from, let's say, the ongoing data stream we get from wearables these days, right? Um, anything else that you can think of, right? Like, all that information is loaded up into this digital twin. And we might say, well, let's see what's going to happen.

What would happen to this guy, Amith, if we did this procedure on him? What would happen if he took this [00:29:00] particular medication? What's the combination of his biology, specifically, with this particular, you know, chemical compound that we're thinking about giving him?

Rather than, like, basically trial and error, where you say, oh, well, this person's sick in this way, let's see what this medicine does, which is kind of what happens, right? Like, we have this broad-based research that says, uh, first of all, is this going to hurt someone? Obviously, that's what we're filtering out. But, um, we really don't know what's going to happen with different individuals when we have different interventions, whether it's medication or something else.

So I think that's the most obvious one to me, where digital twins literally mean a twin of an individual, a human being. But I think you can model any complex system. And then you say, okay, well, what happens when you model a digital twin of each individual member? So you say, I have a digital twin, perhaps a lot less sophisticated than the biological representation, and that's probably a lot less interesting to an association.

But I have a digital twin of every single member, which represents everything we know about them, and we can kind of simulate what's going to happen. So we [00:30:00] have Mallory as a digital twin in the system, and we can determine, okay, what's, what's Mallory likely to do? Where is she going to go? What is she going to do?

Uh, and then we aggregate that up by a hundred thousand people, and we have our whole membership as a digital twin, as a system. So those are the types of things I think get kind of interesting. Um, and obviously, there's an AI engine behind all of this, because the scale of data is way beyond what any of us can individually comprehend.
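[Editor's note: At its core, this kind of digital twin is a simulation loop over modeled members. The toy sketch below, with entirely invented numbers, shows the shape of the idea: give each simulated member a simple attribute, then run a what-if decision many times and compare the expected outcomes.]

```python
import random

random.seed(42)

# Toy "digital twin" of a membership: each member is modeled with a single
# attribute, their interest in AI content. All numbers here are invented.
members = [{"interest_ai": random.random()} for _ in range(10_000)]

def simulate_attendance(keynote_slot: bool) -> int:
    """Estimate how many members attend an AI session under one schedule decision."""
    base_draw = 0.35 if keynote_slot else 0.20  # assumed pull of each time slot
    return sum(
        1 for m in members
        if random.random() < base_draw * (0.5 + m["interest_ai"])
    )

# Run each scenario many times and compare expected outcomes before deciding.
RUNS = 200
for keynote_slot in (True, False):
    avg = sum(simulate_attendance(keynote_slot) for _ in range(RUNS)) / RUNS
    print(f"keynote slot = {keynote_slot}: ~{avg:,.0f} expected attendees")
```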

Mallory Mejias: Mm hmm. That's super interesting. I want to talk about the annual conference example that you gave first, which of course you were just providing as an example, and it doesn't seem super feasible. But I was jotting down potential data sources we would need to make that digital twin possible, including data on individual preferences, uh, information about the venue, like layout and location, historical data for all of our events, um, info on all the conference sessions, even weather information, to know if it's raining or whatever. Thinking of all the information that would be necessary to create that digital twin, in this moment it seems [00:31:00] infeasible.

But I'm wondering if there are ways to do mini simulations that you could talk about where maybe we don't have everything, but we could kind of plug in some of this information and have a good guess.

Amith Nagarajan: I think, you know, one of our core values is that we want to seek progress over perfection. That's a Blue Cypress core value. It really means a lot to me. Put another way, it's saying, you know, uh, great can be the enemy of good, um, where, you know, good enough can get you 80, 90 percent of the way, versus 0 percent of the way towards perfect.

Right. So I think with digital twins, or any other kind of modeling exercise, what we're talking about is gathering as much information as we actually have access to, and then working with that as a starting point. Um, more and more data sources are going to become available to you over time, partly because you're going to get better at capturing them and thinking to capture certain kinds of data.

Um, and then, what you're also going to be able to do is extract insights from unstructured data, more and more. And we've talked about that on this pod. We had a really good episode on [00:32:00] unstructured data, I think two weeks ago or so. It's one of our most-listened-to episodes. And, um, I think the ideas in there, um, are really interesting here, because you have mountains of data in the form of emails and other forms of correspondence with your members and your audiences.

Um, but you often don't really think of that as your data. So the unstructured content itself can be fed into these kinds of ecosystems, but you can also run that unstructured content through AI to gather structured insights out of the unstructured content, and then feed those more structured attributes into these kinds of simulations and models.

And so, um, I think that if someone wanted to go experiment with this, I would start with something really small, something really simple. The annual conference might be too big of an exercise to go after right away. Um, you might start off by modeling, let's say, an individual member, and trying to see, like, what's going to happen with that one person, and then group together multiple such models to say what's going to happen with this cohort, like a [00:33:00] cohort of people that came in at the same time, or based on, like, the same graduating class or something like that, and grow from there.

But the sophistication of these models obviously drives how useful they are. So if you only have a tiny, sparse amount of data, you know, the probability of them being useful to you is probably pretty low. Um, the only thing I'd add to that, which is a little bit of a contradiction to my last statement about less data meaning less useful, is to remember that the foundation models that we have available have a very good general understanding of people and the world. And so, even though you may not have perfect data on every individual, or even all of your individuals collectively, um, the AI models that are pre-trained on all of the world's data could probably still be pretty helpful to you in other ways, as part of this ecosystem.

I think the key thing to be thinking about when you hear about digital twins, or something that may seem, you know, maybe more esoteric to some people, or just maybe seems out of reach in some ways, or just, how do I use it, is: think about business problems that you'd like [00:34:00] to focus on. Like, where are your pain points as an organization? And how could you improve the quality of your organization, your business, either by increasing efficiency or improving value to members? Um, and, you know, venue selection is one that I always hear people talking about in meetings. Some people plan these things out, like, six years in advance, because their events are so large; it dictates that.

Um, but within those venue selections, there are hundreds and hundreds of other decisions, as you know well, Mallory, from planning digitalNow, even an event on a smaller scale like ours. Um, so potentially these kinds of simulations could be really helpful for optimizing the quality of the event, the business outcomes, like the number of attendees, uh, and just really creating the best possible experience.

So I come back to that because that's the physical world, but there are similar concepts available, I think, like in an LMS, you know, or in any other environment. Say, hey, we have this LMS full of courses: what would happen if we changed these courses, or changed this structure, or changed this learning path? [00:35:00]

Well, you know, the digital twin concept applied there can help us simulate what's going to happen if we made those changes, before we actually make them. It's kind of like doing QA on the future, in a way.

Mallory Mejias: We've talked about using vectors to personalize offerings for members in a previous podcast episode as well. We'll link that one and the unstructured data one in the show notes. So if we're vectorizing data points to provide personalization to members, are we kind of creating digital twins in that process, by saying, we think you would be interested in this? Is that kind of a digital twin?

Amith Nagarajan: A vector of an individual would essentially create a semantic representation. Put another way, there's, like, a mathematical meaning to, like, Amith. So if I take, let's say, a document that outlined, here's everything I know about Mallory, and then I ran that document through what's called an embeddings model, it would generate that embedding, or the vector, that you're describing. [00:36:00]

It's a sequence of thousands of numbers that represent the semantic meaning of that document, which is obviously a proxy for saying, hey, this is the meaning of who Mallory is, right? Um, and so, in a way, what we're doing is compressing the information in that document into a format that's super efficient, from a math perspective, to then compare against lots of other vectors like that.

So if I create 100,000 embeddings for 100,000 members, and then if I have, let's say, every course that I offer, or components of courses that I offer, and I vectorize the content from those courses, now, using vector math, I can compare people to courses. And then I can say which courses are most likely to be relevant to Mallory, or to me, or to anyone else.

And so vectors allow us to scale, um, in terms of both the amount of data and the speed at which we can do these comparisons. But that doesn't mean they capture every nuance. They capture a lot, but the actual data, the [00:37:00] actual, like, bits and bytes of every single piece of information we have, is far more robust than what the vector is going to capture.

So vectors are absolutely a part of a solution that is really important, broadly in AI and in the context of digital twins, for sure. Um, because you're going to need to be able to do things very, very quickly at scale, you're going to need, you know, these kinds of shortcuts and representations. But I do think there's a broad, broad array of data that will go beyond that.

Um, because the digital twin needs to have an understanding of all of the different, you know, knobs and levers, essentially, that affect what it is that it's modeling. So the vector will be part of that solution, but not all of it. It'll probably actually be a small component of the overall idea of what a digital twin represents.
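[Editor's note: The vector math Amith refers to is typically cosine similarity. Here is a minimal sketch with random stand-in numbers; in practice, the member and course vectors would come from an embeddings model rather than a random generator.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in embeddings: in practice these come from an embeddings model.
DIM = 1536                            # a common embedding dimensionality
member_vec = rng.random(DIM)          # "everything we know about Mallory," compressed
course_vecs = rng.random((50, DIM))   # 50 course descriptions, vectorized

# Cosine similarity between the member and every course, in one vectorized step.
scores = course_vecs @ member_vec / (
    np.linalg.norm(course_vecs, axis=1) * np.linalg.norm(member_vec)
)

# The highest-scoring courses are the most semantically relevant recommendations.
top5 = np.argsort(scores)[::-1][:5]
print("Most relevant course indices:", top5)
```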

Mallory Mejias: I think that makes sense. So comparing vector to vector is essentially comparing numbers using math. But having a simulation of me, for example, I would be able to ask it any potential question, like, what if we created this new thing, or what if we switched the venue location? And that, it doesn't seem like you could do with [00:38:00] vectors.

Amith Nagarajan: Yeah, exactly. Like, if I was trying to predict which piece of content on your website Mallory might find most relevant, if I take a vector representation of you, which is, again, an output from looking at a document, essentially, that describes you, um, it's going to be effectively a shortcut to saying, hey, these are the attributes of Mallory that we think are most semantically interesting, from the embeddings model's point of view. And then I'm going to give you content that's also, in a similar way, kind of summarized, if you will, in a mathematical way, uh, that makes it possible to do that at scale.

Um, but is it capturing every possible nuance? No. Does it tell us what happens to, let's say, for example, 10 different values that I want to track about you? Um, your purchasing level, your customer satisfaction score, uh, the number of events that you have attended, and, you know, a number of other things like that.

Each of those values is part of that profile, and the vector essentially compresses all of that into one, [00:39:00] you know, kind of summary mathematical meaning. But I still want to know what those individual values are, particularly in a digital twin environment. Part of what we're doing is looking ahead. Kind of think of it as, like, a video, where, frame by frame, we're looking ahead five frames, ten frames, or fifty thousand frames. We want to know what those individual values might end up being down the road. So we might be able to say, okay, Mallory's order volume has gone up, but her customer satisfaction level has gone down. Digital twins go to that level of granularity, essentially simulating the future state of the entire complex system, if that makes more sense.

Mallory Mejias: No, it does. That actually makes a lot more sense. In my mind, I was thinking they were more similar than they are. It sounds like, then, well, I'll make the statement and you can tell me if it sounds wrong or right: in moving toward digital twins in the future, one day, or even a mini experiment, it would be pretty essential to have some sort of common data platform in place.

Would you agree with that?

Amith Nagarajan: Yeah. Now more than ever, uh, you've got to get your data house in order, and, uh, [00:40:00] a data platform, a common data platform, is critical, to be able to ingest data from all of your structured systems, but also to help make sense of your unstructured data. There's, there's as much, if not more, value in the unstructured data you have, you know, in Microsoft Office, in Google, on your website, in emails, in Box.com, and all these other places where you have stuff. That can all be made sense of. Now you can bring that all into one unified repository where you understand what this stuff is. And that can lead to the opportunity to create digital twins. It can lead to the opportunity to do way more advanced analytics. You can do more predictions.

Um, so, put another way: um, if you don't have your data house in order, through a common data platform style of approach, you cannot even begin to contemplate doing a digital twin exercise, because you just won't have the data. You know, you can't launch a rocket, no matter how cool your rocket is, if you don't have any fuel, and the data is essentially that.

Mallory Mejias: Seems like a good [00:41:00] line to wrap up today's episode on. Everyone, thank you for tuning in to episode 52. We will see all of you next week.