Sidecar Blog

The Democratization of Expertise, Meta AI Chatbot, and Making Deepfakes with Microsoft’s VASA-1 [Sidecar Sync Episode 27]

Written by Mallory Mejias | Apr 25, 2024 4:46:53 PM

Timestamps:

0:00 Introduction
6:17 Future Predictions and Associations
13:59 Future of Marketing With AI Agents
23:36 Advancements in Personalization and Meta AI
29:50 Impact of Open Sourcing AI Models
39:54 Realism and Ethics of VASA-1 Technology
46:39 Deepfake Technology
50:19 Blockchain and AI for Authenticity

 

Summary:

This episode dives into the latest AI developments shaking up the association world. Amith and Mallory analyze predictions from VC Vinod Khosla on AI democratizing expertise, Meta's new AI chatbot launching across its social apps to billions of users, and Microsoft's controversial VASA-1 technology for animating realistic deepfakes from photos. They discuss the immense opportunities, but also the risks that associations must prepare their members for as AI rapidly transforms areas like digital media, personalization, and disinformation. The episode emphasizes rethinking associations' value proposition as expertise becomes more accessible through AI.

 

 

Let us know what you think about the podcast! Drop your questions or comments in the Sidecar community.

This episode is brought to you by Sidecar's AI Learning Hub. The AI Learning Hub blends self-paced learning with live expert interaction. It's designed for the busy association or nonprofit professional.

Follow Sidecar on LinkedIn


More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress (BlueCypress.io), a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.
Follow Amith on LinkedIn.

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Follow Mallory on LinkedIn.

Read the Transcript

Disclaimer: This transcript was generated by artificial intelligence using Descript. It may contain errors or inaccuracies.

Amith Nagarajan: Welcome back to the Sidecar Sync. We are back here today with another exciting episode of the pod. Tons of... hang on one second. Okay.

He's, he's officially, our third co host is officially off for this podcast. Although he'll be scratching, he'll be scratching incessantly at my door in a minute. Aw,

Mallory Mejias: poor Ninja, poor Ninja.

Amith Nagarajan: Yeah, he likes, he likes, he's my office buddy, he hangs out all day. Um, alright, let's restart that.

Mallory Mejias: Perfect.

Amith Nagarajan: Alright.

Welcome back to the Sidecar Sync. We have another action packed episode [00:01:00] for you at the intersection of all things AI and associations. My name is Amith Nagarajan. I'm your host. I am actually, that doesn't sound right. If I introduce myself, then we can, then we need to say, and then you can jump in and say, Hey, and I'm Mallory Mejias, the co host or whatever.

Cause if I, yeah, probably do that and then I'll just keep going.

Mallory Mejias: Okay. Yeah. Yeah. Yeah.

Amith Nagarajan: Let's do that. All right. Take three. Welcome back to the Sidecar Sync. We're excited to have you tune in and we have another action packed episode for you today with all the latest at the intersection of artificial intelligence and associations.

My name is Amith Nagarajan. I'm one of your hosts.

Mallory Mejias: And my name is Mallory Mejias. I'm one of your hosts as well who runs Sidecar.

Amith Nagarajan: And we have so much to cover today. In such a short amount of time, we're gonna pack it in, and we're gonna do hopefully a really good job summarizing for you why these innovations in AI really matter a lot for your association.[00:02:00]

Some listeners have reached out to us and said, Hey, we love the pod. How can we help? And I thought I'd mention today that one thing you can do to help, if you're enjoying the pod, is leave us a review wherever you listen to this podcast, on Apple, on Spotify, et cetera. Uh, if you're listening on YouTube, give us a thumbs up there and hit that subscribe button so you don't miss out on any of our podcasts.

Now, before we get going with the pod itself, let's take a moment for a quick word from our sponsor.

Mallory Mejias: Amith, how are you doing this week?

Amith Nagarajan: I am doing great. How are you Mallory?

Mallory Mejias: I'm pretty good myself. It seems like this week in particular we had a lot of AI news. We kind of had to pick and choose exactly what we wanted to cover. So it was a busy week in the AI world.

Amith Nagarajan: Yeah, unless we wanted a five hour podcast, it would definitely be important to filter that down.

Mallory Mejias: You know, everyone listening, let us know if you want a five hour edition of the Sidecar Sync. I suppose we could make that happen with all this AI news. Amith, something that we didn't have as a topic today is [00:03:00] Microsoft's introduction of its Phi-3 family, but I think this was announced yesterday. You posted about it on LinkedIn, so I wanted to mention it. Microsoft introduced its Phi-3 family of small models, which includes Phi-3 Mini, Phi-3 Small, and Phi-3 Medium.

I want to say the article I read from Microsoft said "tiny, but mighty." So what are your thoughts on this Phi-3 family of models?

Amith Nagarajan: I hadn't heard that tagline, but I love it. I think that's exactly what we need to be thinking about with model architectures in our businesses. So when you hear about this explosion of different choices with AI, uh, first of all, it's an exciting time.

There's lots of different potential models you can use in your business. Um, one of the things I talk about when I deliver executive briefings on AI to associations is that you're going to be living in a multi model world. So you'll be using more than one model. Different models are good at different things.

There are certainly some broad generalists like GPT-4 and Claude 3 Opus that are fantastic at most [00:04:00] things, but there are models that are much smaller in size, which means they're more performant. They use less energy and they're inexpensive. Uh, so these models can do things like, for example, in the association world, we've got lots of content.

Let's say we wanted to classify that content with a taxonomy. Something like Microsoft's Phi-3, which is spelled P-H-I, by the way, if you're gonna look it up, and we'll include this in the show notes. Phi-3 has fantastic language capabilities, and even the Mini model would likely be able to very effectively classify much of your corpus of content with tags or taxonomical attributes.

So that's just one example of a use case. There's many others. These small models are effective at that. So the key is that when you hear about models like this, first of all, just be aware of them, because there's so much happening. There's many choices. This week, or in the last couple weeks, Databricks released a small language model.

Snowflake today released a small language model. Theirs is a little bit optimized for data, but it's, you know, same kind of stuff. [00:05:00] Um, you don't want to just grab whatever is out there and start using it in production, but it's good to experiment with these things.

Phi-3 is notable because of the model size. These are very small models; I think the smallest one is 3.8 billion parameters, which in the historical arc of AI is a decent sized model. But compared to even GPT-3.5, which is 175 billion parameters, the 3.8 billion parameter Phi-3 model is basically as performant as GPT-3.5 in many areas. So really interesting because, you know, you're packing a lot of punch in these small models.

You need to pay attention to this stuff because it's not necessarily the biggest model that will win, but the one that adapts to your needs the best.
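
For readers who want to try the content-classification use case Amith describes, here is a minimal sketch under our own assumptions: it uses the Hugging Face transformers library and a small instruction-tuned checkpoint, and the taxonomy terms, prompt, and helper function are illustrative only, not a specific recommendation.

# Minimal sketch: tagging association content with a small language model.
# Assumes the `transformers` package is installed; the checkpoint name below
# is an assumption -- swap in whichever small model you are evaluating.
from transformers import pipeline

TAXONOMY = ["membership", "advocacy", "certification", "events", "workforce"]

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",  # assumed checkpoint name
    trust_remote_code=True,
)

def tag_article(text: str) -> str:
    # Ask the model to pick the best-fitting taxonomy terms for one article.
    prompt = (
        "Choose up to three tags from this list that best describe the article: "
        + ", ".join(TAXONOMY)
        + ".\n\nArticle:\n" + text + "\n\nTags:"
    )
    result = generator(prompt, max_new_tokens=30, do_sample=False)
    # The pipeline returns the prompt plus the completion; keep only the new text.
    return result[0]["generated_text"][len(prompt):].strip()

print(tag_article("Registration for our annual conference opens next month..."))

In practice you would check the tags against a human-labeled sample before trusting them across a whole content library.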

Mallory Mejias: Absolutely. I feel like every week we're talking about small models on this podcast and it seems like that's not going to stop anytime soon. In today's episode, we've got three hard hitting topics.

First, we're going to talk about some predictions about the future by venture capitalist Vinod Khosla. Then we're going to [00:06:00] talk about the Meta AI chatbot powered by Llama 3, which you may have seen in some of your social media apps this past week. And then finally, we're going to talk about Microsoft's VASA-1, which is quite interesting.

I think there's a lot of pros and cons that we'll be able to discuss with that new research. So first and foremost, Vinod Khosla. He is a prominent venture capitalist and founder of Khosla Ventures, and he's made a series of bold predictions about the future, particularly focusing on the transformative potential of artificial intelligence and other technologies.

His predictions span across various sectors, including healthcare, education, and environmental sustainability. And we've decided to focus on three of those in this topic. And then I'll, Amith, I'll ask you a question about each. So first one, Khosla anticipates that advancements in AI will make expertise in various fields accessible to everyone at little to no cost.

He envisions AI powered doctors and tutors becoming widely available, which could democratize access to [00:07:00] medical advice and personalized education. The second one. Most consumer access to the internet will be mediated by AI agents. These agents will serve as intermediaries, helping users by filtering information, performing tasks, and protecting them from unwanted interactions with marketers and bots.

This approach is expected to help manage the vast amounts of data and information on the internet, making it easier for users to find what they need without getting overwhelmed. And the last prediction we're going to talk about is that COSLA anticipated AI will become a major influence in the fields of music, entertainment, and design, with content being tailored and dynamically produced.

to suit each person's tastes and emotional state. This expectation is consistent with the ongoing progress in AI crafted content and the concept of creating unique experiences for every individual viewer or listener. So a lot to unpack here, and this is only three of the predictions out of, I think there were 12.

So first [00:08:00] question I have is let's talk about this democratization of expertise. Associations are seen as a source of expertise for various industries and professions. If everyone eventually has access to expertise at little to no cost, should associations be concerned?

Amith Nagarajan: I think they should be concerned, and I think they should also be excited.

And before I go into more detail, one thing I thought I'd do is just share a little bit more about Vinod, um, just so that listeners and viewers who aren't as familiar with him, um, maybe just get a little glimpse into, uh, this guy and why his predictions really matter a lot. He is a prominent VC, as Mallory mentioned, uh, has his own firm, Khosla Ventures. Before that he was a major, uh, investor at another very well respected Silicon Valley venture firm, and before that he was at Sun Microsystems and was, um, an early pioneer in many aspects of computing.

And so he's a brilliant guy, who has proven to be as effective a capital allocator as he [00:09:00] was in a founding type of role in one of the greatest success stories in the early times in Silicon Valley. So the guy knows what he's talking about, and he's been extremely good at predicting the future. Because if you think about what a venture capitalist needs to do, they're placing bets on companies and people.

Certainly, you know, uh, the execution of ideas drives whether companies are successful, but they have to pick the right categories to be in. They have to pick the right ideas to be in. So even the best team with the best execution, if they have the wrong idea, You know, you don't end up in the right place from a return on capital perspective.

So that's one of the reasons listening to VCs can be interesting. And someone who has a track record as pristine as Khosla's, is always worth paying attention to. I also think he's just someone who's willing to say what he thinks. You know, a lot of people are somewhat guarded and try to make sure that they're not pissing anyone off basically.

But, um, he doesn't seem to, uh, suffer from that affliction. So I both respect that and look up to it because I try to do the same thing in my own life. Uh, in any event, [00:10:00] um, coming back to the democratization of expertise. I love that phrase. I think the fact that expertise is at the moment still very much a scarce resource.

And it's moving towards being an abundant resource, uh, means that, you know, we're going to see that happen. It's the democratization of everything over time. Um, you know, first we had to digitize. Because when expertise was analog, it was face to face. It was phone calls. It was, you know, interactions with people.

You can't democratize that because the constraint is all of us, and the knowledge that, you know, you need to get at in some cases is very narrow in terms of the people who hold that expertise. Um, but then when you talk about digitizing it and then turning it into a format like AI, where you can interact with it in a very powerful way.

It's exciting. And so I think for the world, the idea that expertise is available for everyone at basically no cost, anytime, anywhere on planet Earth through any device is just a stunning idea. And we've been heading in that direction for years [00:11:00] with the internet and with web browsers and with search and so forth.

But this is completely next level in terms of its applicability to distinct problems people have, um, on the associations being concerned point. Right. Yes, associations are in the business of expertise. In fact, you could call them an expertise broker, uh, so as an intermediary between people amongst themselves in a given profession who are the experts in the field, whether it be law or accounting, a field of medicine or any other discipline, as well as the audience or industry or consumers that seek that expertise.

In various ways, associations stand at the intersection of that, the flow of that expertise. And so clearly, uh, if associations were to stand still, they have a lot to be worried about. Um, so I think there is reason for concern. Um, and even if they're not standing still, and they're listening to this podcast and they're doing other things to get ready for AI and starting to implement experiments, there's still reason to be concerned, because we know that [00:12:00] people are busy and they have minimal attention.

And so we know that people go to probably the easiest source rather than necessarily the best source. So even if associations are super aggressive and contemporary with their AI efforts, it's possible that they'll lose some traction to generic tools like ChatGPT or Meta's new AI assistant, which will not have the expertise that they have, but be probably good enough for something.

So there's definitely reasons to be very thoughtful about what was once a moat around the association's strategic value, uh, being lessened, certainly, and possibly eliminated.

Mallory Mejias: Ooh, wow, that's really interesting. I love what you said, associations being expertise brokers. I've never thought about it quite like that.

Do you think this means the whole association industry space will have to pivot if everyone has access to expertise at their fingertips?

Amith Nagarajan: The reason I'm standing on rooftops screaming at the top of my lungs, figuratively, sometimes, perhaps literally, whenever I visit DC anyway, um, the reason I'm doing that about AI so much [00:13:00] is because I believe that it is this moment in time where associations must pivot to embrace this technology along with all of its, you know, problems and benefits.

Eyes wide open in terms of potential issues. There's plenty of potential issues, but you can't ignore it. You have to embrace it and find ways to leverage it because the world is changing rapidly before our eyes. And so associations have a moment in time right now to either select from the, you know, build your own adventure book, right?

You pick, pick the next thing that happens. There's a fork in the road. You can either try to fight this off and try to go down the traditional route, or you can embrace it and figure out what that means for you. Um, and if you don't go down the path of adapting, I think that the associations who choose that other path are gonna have, you know, their days numbered, honestly. And that might mean a number of decades, honestly, for some of these spaces where they're very well protected and have deep pockets and, you know, a lot of resilience financially, but that doesn't mean the business model has resiliency.

So I think there's a lot of adaptation needed. The good [00:14:00] news is you don't have to do this all tomorrow. You just need to start tomorrow or preferably today.

Mallory Mejias: Start this afternoon. Start after you listen to this podcast. Um, okay, the next prediction. Most consumer access to the internet will be mediated by AI agents.

Immediately, I thought, great! I don't have to click out, click close on ten pop up ads or have like three videos play while I'm just trying to read an article. But then, on the flip side, I'm thinking about, from a marketing angle, What happens to marketing as we know it if we can't depend on things like, you know, ads or cold emails or even social media marketing getting to the audience we're trying to reach?

Amith Nagarajan: Yeah, I mean, I think from an advertising perspective, um, that's the reason I think Meta and Google are so flipped out about AI. Sure, they want to be leaders in the space fundamentally in any field of computing; those types of large companies will want that. But for their business models, those two companies make all of their money through essentially taking traffic [00:15:00] that they have.

They each have monopolies essentially on certain types of internet traffic and putting ads up, right? So they have the billboards on the side of the road that you must travel on to do what you do in life these days digitally. And so if you disintermediate that through AI agents doing the work and then giving you back a summary, the question is, where is the business model?

So why does Google want to have, you know, Gemini, and Meta have the Meta AI assistant, and make that in all their products? Because they know that's gonna happen. And so they want to be able to inject ads into those assistants to make them really useful and valuable, but they have to maintain a platform to generate revenue, obviously.

And there's growth potential there for sure for those companies, and risk. You know, I think that the idea of having mediation by an agent is a really important thing to think about. So we talk a lot about good AI and bad AI, or potentially bad AI. And I really don't believe fundamentally AI by itself will be good or bad.

But it's about who holds the AI in their hand, right? So if we have good intentions versus bad [00:16:00] intentions. So think about misinformation. Think about a bad actor who intentionally produces articles, for whatever their motivation, that have the incorrect procedure for a particular medical procedure, right?

Like an incorrect process for whatever reason they're putting intentionally incorrect information out there and people are consuming it. That's scary, right? Let's say doctors are reading this and actually following those steps. And, you know, of course, I know nothing about medicine, so I'm sure that there's more detailed ways that people validate their content than that.

But if you just follow along that thought experiment for a minute and say, okay, well, how does a doc or a nurse or someone in that profession understand that the content they're consuming is good? And so if AI is exploding out all the content and, you know, by the way, the motivation for something like that would generally be to get clicks, and, you know, a lot of times when you have negative information or different information, you get more clicks. So it might not be that the people who publish content

as vile as what I'm describing, that's intentionally wrong about something as important as a medical [00:17:00] procedure, are trying to kill people. Maybe they don't care, and their goal is to just get as much traffic as they can, right? And that, that is actually something people do on the internet intentionally.

The misinformation isn't just political in nature, it's monetary, uh, primarily actually, in, in my opinion. So, in any event, um, coming back to AI intermediation or mediation of this, you need to understand that, like, no human can keep up with that. And so, how do you get the best information for you? Well, that's why generative AI is making its way into all the major search tools.

Because it's hard to make sense of what's going on even with human generated content at scale, with billions of us around the world publishing stuff. Um, as AI goes to scale, it's gonna be essentially impossible to do that. So, our own personal AIs will serve that role. So we will all have a personal AI, which is like a personal assistant.

Um, that personal assistant will intermediate, you know, this content and be able to help you filter things out. It'll work to essentially do things like a good [00:18:00] researcher might do. So if you had a research assistant at a university and you're a professor, You might say, Hey, I want you to do research in this category.

And of course, that person has a protocol they'll follow. They'll get source content, they'll check it to make sure it's factual. They'll look for citations to make sure that it's a reputable source. Uh, and then they'll summarize it and they'll bring you something saying, Hey, here's what I, what I'm thinking.

And this is a, you know, a hypothesis we might want to go test. Similar things would be done with an AI assistant, where it's gonna do more of the fact checking, more of the citation analysis, more of, like, the quality of content referencing, and then on top of that, an overlay to personalize it for your preferences and tastes and your level of education in the field and all that kind of stuff.

So I think it's absolutely where we're headed. We're already seeing the early glimpses of that with Bing and Google, and now within Meta's products. That's exactly what Meta AI is doing.

Mallory Mejias: I think with your example, it's more clear cut, uh, between content that's correct or incorrect or, you know, is missing factual information.

I think what's tough maybe to [00:19:00] envision is content that's more neutral or like an association doing cold outreach to me and saying, Hey Mallory, you should join our association of marketers or whatever that may be. Will I ever receive that email if I have an agent saying, Eh, it's probably not relevant for you.

I think that's a scary thought.

Amith Nagarajan: Yeah, for sure. You know, so the email channel or SMS or any other channel we have, right? Those channels being, um, mediated by an AI on behalf of the consumer stands to be both a very interesting marketing challenge for the marketer, and also potentially, like, a really interesting benefit for the consumer to simplify our lives, because we all get too many messages.

Uh, but at the same time a concern, because sometimes, like, do you really want to trust the AI to truly make the decision on what you look at and what you don't? Uh, it's very appealing sounding, you know, if you get hundreds of emails a day and all these other messages, you know, to have an AI look at it and say, hey, you know, Mallory, these are the 10 things you should pay attention to this morning.

Mallory Mejias: Hmm. Yep. When I first read that prediction, immediately my mind went to, is marketing as [00:20:00] we know it dead? I don't like to be alarmist, but it did, it did make me think, huh, if these agents, if this is the truth and this is the future, uh, I do think a lot about marketing will need to be changed.

Amith Nagarajan: I think that the cold outreach, the spammy stuff, that's gonna die. I think AI is gonna be way smarter than that. Of course, it's, it's, you know, it's a cat and mouse game, or move, counter move kind of scenario, where, you know, there's always gonna be ways to try to game the AI as these things go to scale. Just like, you know, people approach how do you inject a virus into a Mac or a PC, and for a long time with the Macs it was,

Hey, we have no viruses. It's because historically, in the past, the Mac had tiny market share. Um, now the Macs have a lot more market share and they have a lot more problems, you know? And so it's one of these things where, you know, the more AIs that are used widely. So, like, if you had a product from Perplexity or somebody else and used it a lot, would they be targeted through manipulative acts from other AIs creating content that's designed to affect the way they process it?

Of course you will. So there's gonna be all that stuff going on. I'm really optimistic about this though, because I think the idea [00:21:00] that an agent on your behalf that learns you really well could do a good job. Think about it this way. Imagine if you had a personal assistant that was with you for life, not just for a year, two years, five years, whatever.

And all this assistant ever did was think about you. And all this assistant ever did was like, think about like, what do you like? How do I help you be happier? How do I help you be smarter? How do I help you be better informed? That assistant is going to get to know you pretty quickly and be really good at getting you the right content with the right information at the right time.

So, I think AI is going to be fantastic at the personal level when an AI is essentially trained just for you. We're all going to have our personal AIs. Um, you know, who wins that battle is going to be the owner of, you know, uh, many trillions of dollars of revenue potentially. But I think there will be, hopefully there will be many, many providers of that type of tech.

I wouldn't want it to be one company, but I think that's going to be a fantastic experience compared to, like, the way we wade through emails and we're all, [00:22:00] you know, kind of basically hooked on like, got to check my email. I mean, I do it all the time, right? Like you're, you're, you know exactly what you should be working on today.

You set your top priority for the day and you said, this is my focus. And then you get distracted because you look at your inbox and you're like, damn, I've got 50 more emails since I started recording this podcast with Mallory. And then I get distracted from what I really want to work on.
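
As a toy illustration of the "personal AI triages your inbox" idea in this exchange, here is a short sketch under our own assumptions: it uses the OpenAI Python client with an API key in the environment, and the model name and sample emails are placeholders rather than anything discussed on the show.

# Toy sketch: ask a model to rank this morning's emails by importance.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

emails = [
    "Subject: Board meeting moved to Thursday -- please confirm attendance.",
    "Subject: FINAL HOURS: 80% off webinar recordings bundle!!!",
    "Subject: Member can't log in to the certification portal.",
]

prompt = (
    "You are my assistant. Rank these emails from most to least important for an "
    "association executive this morning, with a one-line reason for each:\n\n"
    + "\n".join(f"{i + 1}. {e}" for i, e in enumerate(emails))
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: use whatever model you actually have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)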

Mallory Mejias: Right. Right. All right, this last prediction is that AI will become a major influence in the fields of music, entertainment, and design, with content being tailored and dynamically produced to suit each person's taste and emotional state.

I think the idea of an audience of one is becoming more and more talked about and more and more prevalent, but I will say this does seem like a daunting task in terms of speaking to every single person in your audience, every one of your members, on a one-to-one basis. Can you recommend, Amith, any small personalization wins that associations can start with, with this idea of an audience of one?

Amith Nagarajan: Well, there's, there's a lot of [00:23:00] ways to approach that. I think personalization is one of the areas that associations have to really get their heads wrapped around and understand what the opportunities are. Um, you know, many associations have tried over the years to do personalization, and usually what they've started with is their website.

Okay. Which I actually think is one of the hardest things to get right. Um, and one of the reasons people start there is because there's so much content on the typical association website. There's thousands of articles. There's all sorts of resources. So the idea of having a truly personalized experience for the consumer of that site is great.

But it's a complex area to get started with. So I would start in other areas. Initially, I would probably start with something like email because you control the channel. You control the frequency and what you send. So there's a lot of different technologies you can use these days to personalize email.

You can do that, obviously, with general marketing emails. You can do that with newsletters. You can do that with a lot of different, uh, types of email modalities, but I think that's a good thing to consider. I think you can also [00:24:00] consider, uh, personalization with SMS messaging. More and more people are finding that SMS is a really valuable channel for business to business communication.

Uh, so, and that's an area where personalization makes a big difference, because your medium is typically like 100 characters or less in order to get someone's attention on a text message. Uh, so those are a couple areas I'd probably focus on. One thing I'd point to is, um, people who've had a rough time doing personalization in the past.

I want you to take a deep breath and say to yourself, times have changed, because your attempt at personalization, as challenging as it may have been, is almost certainly something you did with older technology than what's available today. You know, Mallory and I spend time telling you guys that AI is on a six month doubling curve.

So every six months AI training data doubles, and it's actually a little bit faster than that. And training data, by the way, is just a rough approximation of the model's power. It's not exactly that, but it's a good way of thinking about it. So with six [00:25:00] month doublings, every year you have two doublings, or a 4X increase.

Um, you know, and then you have a 16X increase every two years. It's crazy. So just to give you a quick thought on this. Um, most attempts at personalization technology, over even the last several years, have been using tagging architectures, meaning you would try to get tags that relate to the person.

So you'd say, hey, Mallory, these are the 10 or 20 tags we think are related to Mallory. Same thing for Amith. And then you try to tag all your content and use tags to basically select the content you present to each user. It's a pretty basic idea. Problem is that tagging has been really hard to do, to do it well.

And to do it at scale, it's been tough. Plus, it's been hard to figure out, like, how do you tag the user? You can tag your content, but how do you tag the user? Um, and so there's solutions to these things. But in more recent times, there's these new types of AI models called embedding models, which create what are called vectors or embeddings, which we don't have time for a full discussion on in this podcast.

But the basic idea is [00:26:00] that they're far richer in information than tags. And do a much, much better job of personalization. And it's possible at very, very low cost to generate embeddings for basically all of your content, including structured content in a database where you can take a user or a member's record and generate an embedding for that record.

And then compare it to all of your content and thereby drive personalization. So there's some great solutions out there now that two years ago, or certainly five years ago, would not have been possible without spending a ton of money if, if they were possible at all. So just take a fresh look at this stuff because personalization to Mallory's point is one of the most important things you can be considering.
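
For anyone who wants to see the embedding approach in code, here is a minimal sketch under our own assumptions: it uses the sentence-transformers package with a small publicly available embedding model, and the member profile and article titles are made up for illustration. Any embedding model plus a vector comparison follows the same pattern.

# Minimal sketch: rank content for one member using embeddings instead of tags.
# Assumes the `sentence-transformers` package is installed.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model name

# A member record flattened into text, plus a tiny content library.
member_profile = "Early-career association marketer interested in AI, events, and member engagement."
articles = [
    "How small language models can tag your content library automatically",
    "Checklist for planning your annual conference",
    "New continuing-education requirements for CPAs in 2024",
]

# Embed the member and the content, then rank by cosine similarity.
member_vec = model.encode([member_profile], normalize_embeddings=True)[0]
article_vecs = model.encode(articles, normalize_embeddings=True)
scores = article_vecs @ member_vec  # cosine similarity, since vectors are normalized

for score, title in sorted(zip(scores, articles), reverse=True):
    print(f"{score:.2f}  {title}")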

Mallory Mejias: Moving on to topic two for today. If you are a regular social media user, you might have noticed the Meta AI chatbot pop up in your Facebook or Instagram apps. Over the past week, Meta introduced its AI assistant called Meta AI, which is built with the advanced technology of its latest model, Llama 3. This AI assistant is [00:27:00] designed to be a versatile tool for users, available for free on popular platforms like Facebook, Instagram, WhatsApp, and Messenger.

Meta AI can handle a variety of tasks like planning events, crafting emails, and answering general queries. It's designed to be a general purpose assistant that can also provide real time web results through a partnership with Microsoft's Bing. The chatbot includes an image generation tool called Imagine, which allows users to create images from text in real time.

This feature is particularly enhanced in its ability to adjust images as the user types, providing a dynamic and interactive experience. You can also ask the chatbot to animate images. Now, I saw this pop up on my own Instagram and Facebook apps probably just a few days ago, and I gave it a test run because I had to see what was up, obviously, for this podcast.

My take is that it was really fast. I don't have any quantitative data on that in terms of like how much faster it works than [00:28:00] another model, but it seemed like it was generating its responses and the images quite quickly. Uh, when I used it on Facebook, it gave me a list of prompts that I could test out.

And one of those was make friends in a new city. And I don't know if I've mentioned this to all of you, but I'm moving to a new city. I'm moving to Atlanta next month, actually. So I said, make new friends, or make friends in Atlanta. And it actually came up with some really general information, like you should, you know, join some social groups and things like that. But what was really interesting is that it recommended certain Facebook groups that I could join for social activities. It also recommended specific coffee shops and restaurants that I could try. Um, so I found that really exciting, to kind of have AI bleed over into my personal life and how I'm interacting on social media.

I also asked it to generate an image of a bird, and it did that, and then I asked it to animate that image, and it did it quite quickly. So that's my very general overview of, uh, Meta AI, but I want to get your take on this, Amith. I know you're not a big social [00:29:00] media user, so I don't know if you've tested it out, but did Meta just totally shake up the AI landscape?

Okay.

Amith Nagarajan: The short answer is yes, they did. Um, and I, so I use LinkedIn all the time, I, uh, I spend time there, so that's probably the only social media app that I'm a regular user of. Um, I get on Twitter, now X, from time to time, because there's sometimes some interesting things happening there, but I generally avoid it. Uh, and I'm not a fan of Meta. Like I, I personally think the company has some really, you know, messed up, you know, ideas in terms of how they prioritize their short term profits over the safety of their users, particularly kids.

I've got two teens at home, so I think about this type of stuff a lot, and I don't think social media is a great place for a lot of people due to conflicting priorities, where, you know, Meta has been, uh, not a great steward of safety in terms of responsibility, in my opinion. Now, that being said, uh, I am really excited about what they're doing in the world of AI. I think that they, through open sourcing the [00:30:00] Llama models over the last year or so, have really created, um, a different option. If Llama hadn't been open sourced by Meta when they did, there'd be a lot less happening in the world of open source, because it takes a lot of resources to build these models.

Um, the shakeup that you're talking about, I think, is probably the right way to think about it, because Llama 3 is sufficiently advanced to really give GPT-4 a run for its money, uh, in many use cases. So we briefly talked about Phi-3 from Microsoft earlier, which is a smaller model than Llama 3.

At least the, the two smaller tiers are smaller. But, you know, Llama 3 in the 70 billion parameter model, which I believe is what's powering the Meta.ai site and the built-in AI features in Instagram and so forth that you're referring to, I'm pretty sure it's that model, is very, very powerful for a 70 billion parameter model.

It's It's almost as good as GPT four in many areas, and some things that it's they've really made some advances, particularly in [00:31:00] performance. They've changed some elements of the architecture and advanced, you know, some pieces so that it's much, much faster. You know, this real time image generation. If you were an early user of mid journey, and I know you were Mallory or Dolly, you would sometimes wait 30 seconds to get your image or even longer.

And now these images are being generated nearly instantly, um, on Meta's platform. DALL-E is super fast on ChatGPT compared to what it was. So, um, what you're seeing there is, yes, a lot of hardware is being thrown at it, but it's, it's, these algorithms are getting way smarter. Um, and Meta is definitely doing some really good work in advancing the state of the art with, um, with the AI models themselves.

So, my belief is that, um, they have, they have done a good job shaking things up. I think they've done a good service to the community by open sourcing the models. I will say that I don't think it's out of the goodness of their hearts. I think they have a very smart strategy around it, because the more momentum there is around open source projects at Meta, [00:32:00] uh, the more developers standardize and use their tools, the more lift it gives them, the better those models get, and the better that makes the AI in Meta's products, which is how they make their money.

So, um, I'm not saying that's bad; there's nothing wrong with that. That's the open source strategy for most open source companies. Um, but I just think it's important for people to retain a little bit of that perspective.

Mallory Mejias: Yeah, very interesting. On the image generation piece, I also asked it to create an image of me in Atlanta.

Uh, thankfully it did not. I, I didn't know if it was going to be able to, like, use my images on Facebook to create an image of me. I'm kind of glad that it didn't, but I would assume that's going to be something that it can do very soon. Um, in my research for this, I wanted to pull some stats. So, uh, Meta says its monthly active users across its family of apps, Facebook, Instagram, WhatsApp, and Messenger,

was 3.98 billion as of the end of 2023. Uh, while the Meta chatbot is only available in English right now in several countries, this means eventually nearly 4 [00:33:00] billion people or more will have access to an AI chatbot in apps that they already frequent. And so I guess when I'm thinking of the shakeup, I'm thinking, I want to say ChatGPT's monthly users are around 180 million right now.

Thinking that AI chatbots, for the people who don't use them across the world, will be at the fingertips of 4 billion people. That, to me, is the thing that I think is going to shake up everything, in my opinion. Yeah.

Amith Nagarajan: No, that's a great point, Mallory. The distribution advantage they have is, is staggering, right?

I mean, that's half the world, and you know, it's, it's a good percentage of the adult population. So a very large number of people use their products, and people use their products for hours a day in many cases. And so having AI there is going to lift AI in general. And of course, that's a big part of the play, because if they have that kind of capability in their products, then the theory would be people spend even more time in those products.

One of the things to remember is that you have to look at the consumer services you're using for AI. Okay. And remember that every company [00:34:00] has different terms of service, and every company has a different history of honoring their terms of service. So the point I'm trying to make is, if you were to say, well, I'm not so comfortable putting certain content in ChatGPT because I don't really know OpenAI, they're a startup, I don't know them that well, versus maybe with Google.

I don't know them that well versus maybe with Google. or Apple's rumored to be releasing later this year and on device A. I. That's at the GPT three five or above level. And so being able to do on device with Apple, a lot of people more comfortable with that because the data never leaves your computer.

You know, there's different levels of privacy and security, both architecturally, you know, like on-device obviously is gonna be the most secure. But when you're a consumer looking at these services, are you using ChatGPT, are you using Anthropic's Claude, are you using Gemini? Um, you need to think about that a little bit, because, and most people won't, obviously, the four billion people you're referring to are just going to start using it because that's where they are.

But, you know, if you think about sharing more personal details with Meta, essentially think of it as a straight [00:35:00] line to Zuck's brain, right? So, you have, you know, the people at Meta who, I don't even know if their terms of service say they will not use this data for anything, I'm guessing they don't say that explicitly.

But assume they will, because Meta has a long history of leveraging user data for all sorts of things, obviously for advertising. But you know, there's been a number of issues over the years. And I would say to you that based on their size, they're clearly under a lot of scrutiny. But at the same time, because of their size, they can move so fast.

They have so many resources that they can kind of do what they want. So be very thoughtful about where you place your data on public AI services. And I'm not suggesting to you that I think OpenAI is more trustworthy or Anthropic is. I'm simply saying that history tends to be a good guide, and companies that have done things like that before, you know, perhaps are companies that would engage in similar behavior in the future.

Mallory Mejias: Mm hmm. And social media is certainly a place for many individuals where they put a lot out there, a lot of information. You're right. And it's really interesting to think about, you know, that this chatbot is potentially trained on all of our [00:36:00] profiles and maybe, like, the messages that we send privately to each other. That's kind of a crazy thought.

Amith Nagarajan: Meta has said that Llama 3 has been trained on 15 trillion tokens, which, for those that aren't familiar, a token is roughly equivalent to a word for purposes of understanding.

So 15 trillion words, approximately. And they say that no user data was used in the training process. You know, hopefully that's true, but I would suspect that over time, as people use their AI products, that's not necessarily their plan. So, you know, there's a race to get the best data, and Meta certainly has advantages, as does Google.

By the way, I'm not suggesting that Meta is worse or better than Google or Microsoft or others. But basically the reason I'm saying all this stuff about privacy and safety is you just have to know what you're doing. Like, you have to think a little bit about where you're putting your stuff.

Um, so ultimately, um, I think that the technology is, is fantastic. Llama 3 is absolutely a massive win. Um, but I also think that people have to contextualize what they're doing now. And by the way, what I'm saying [00:37:00] about Llama 3 is in the context of Meta.ai and the Facebook, Instagram, WhatsApp products.

If you use Llama 3 on other services, nothing that I just said applies, because Llama 3 is an open source model. If you deploy it on AWS or on Azure or anywhere else, um, nothing that I said applies, because then obviously Meta is not in the loop in terms of running the model.

Mallory Mejias: No, I mean, right. When I was talking about my example using Meta AI, I mentioned that the chatbot recommends

specific restaurants and coffee shops in Atlanta that I could go to, and you kind of touched on this earlier, but are you thinking there will be opportunities to advertise in these chatbots? Is that

Amith Nagarajan: yes,

Mallory Mejias: okay

Amith Nagarajan: Yeah, 100 percent, there have to be, because, you know, from the perspective of the platforms, which is all of them, including Microsoft with Bing, and obviously, you know, Amazon as well. A lot of people don't realize that Amazon makes tons of money off of advertising, not just on product sales and AWS.

But they have a very large advertising business. And [00:38:00] obviously, Google and Meta are big advertising shops. There will be ways to do ads through chatbots. The question will be, how clear is it that you're getting influenced by an advertiser? So will, you know, will there be a clear call out for sponsored content?

If an answer that you're getting from a chat bot was influenced by a sponsored article, and if that sponsored article was given higher priority than content that ranked highly for your question, uh, outside of the sponsorship realm, right? So those are all questions that I would certainly hope companies would be very clear at identifying sponsored content.

You know, again, sponsored content is not bad. I'm not against it. We use it all the time in our advertising strategies for Sidecar and for other companies within our family. We, you know, entertain those, those ideas too on our platform. So it's not that it's bad or good. It's just that I worry that a lot of people will be influenced by sponsored messages without realizing it.

Mallory Mejias: Yeah, for me personally, individually, I don't know about you, Amith. I don't really like when I see sponsored content on Amazon, or you know what I mean, or on [00:39:00] Yelp. I'm like, no, I want to scroll past the sponsored content. But, it's definitely an interesting thought. Will we be aware of it? Hopefully.

Amith Nagarajan: Yeah, to me it depends.

Like, I actually look at sponsored content sometimes because I know they're spending money on it. So my point of view is, well, if they're spending money to try to get my attention, Um, I won't necessarily click on it, but sometimes I'll have a quick look at it and say, well, you know, maybe it is something that's not as well known and it's interesting.

It's an emergent technology or product or whatever, or a restaurant, you know, like restaurants that advertise on Yelp or on, on Google Maps. Um, so I'll take a look at it occasionally, but I generally feel the same way you do. I prefer to have organic results fed to me.

Mallory Mejias: Hmm. Well, I'm sure we will see a lot more of that.

I'm curious how Meta chose those companies, or those restaurants, those coffee shops, because as far as I know this is not something you can do now, advertise in their chatbot. But, um, yeah, I guess they got lucky.

Amith Nagarajan: Yeah.

Mallory Mejias: All right, moving on to topic three for today: VASA-1. VASA-1 is a [00:40:00] neural network developed by Microsoft designed to animate photos with a high degree of realism.

This technology allows static images to be transformed into dynamic animated sequences that can mimic real human expressions and movements. The primary application of VASA-1 is to bring photographs to life by generating realistic animations from static images. The technology works by using deep learning algorithms to understand and interpret the facial features and expressions in a photograph.

It then applies this understanding to create a moving image that maintains the likeness and characteristics of the original photo. This can include mimicking speech movements, changing facial expressions, and other subtle animations that make the image appear more real. Pretty much lifelike in my opinion.

The realism of the animations generated by VASA-1 has raised concerns about potential misuse, like creating deepfake videos or other forms of misleading digital content. As a result, Microsoft has been cautious about the [00:41:00] release of this technology, ensuring that it includes safeguards to prevent abuse.

I quote this from Microsoft's own article: "We have no plans to release an online demo, API product, additional implementation details, or any related offerings until we are certain that the technology will be used responsibly and in accordance with proper regulations." So Amith, you're the one who shared this with me.

We will link the article in the show notes. You've got to watch the examples that they included. They had a picture of the, a video of the Mona Lisa speaking and it, that one didn't look so realistic, but they also have examples of, you know, real images that they turned into speaking videos of people and they are crazy realistic.

Amith, what, what is the good about something like this and then the bad and the ugly? Because I feel like it's, we've got the full spectrum here.

Amith Nagarajan: Yeah, there was a lot of backlash. I mean, thankfully they didn't release it out there, although I'm sure other similar things will soon pop up that will be available, [00:42:00] possibly open source and so forth. But the idea of a neural net that's this good at realistic video generation from

a simple static photo and an audio clip of very short length is both a stunning achievement technically, and it's basically like a platform for deepfakes, right? So the concerns, a lot of it had to do with, some people posted things like, I can't imagine a single use case that isn't for nefarious purposes, right?

I can, I'll come back to what I think it can be used for on the good side of the ledger. But, um, certainly there will be a lot of bad stuff that happens with this type of technology. Um, and the same thing is true for a lot of dual use technologies. When we talk about the ethics of AI, you talk about, you know, the power level, and you talk about good cases and bad cases.

This is definitely one that evokes in most people's minds an oh crap moment, because we're thinking what, how easy is it to take an audio clip of anyone and an image of that person and create a video that looks like them speaking. Of course, there are many other technologies already that do that with video sampling.

So we've talked about [00:43:00] HeyGen in this podcast before. You can create an AI avatar on HeyGen that's very realistic and getting better all the time. So there are other ways to do this. What's really stunning about the Microsoft model VASA-1 is that they are using just a single image. And so what that tells you is that the intelligence of the model, its understanding of physics, its understanding of people in general, and, you know, the realism, indicates the model's gotten really smart.

Because it doesn't need a lot of training data from the user to generate these images. Um, but let's, let's flip around to the other side of the ledger and talk about potentially positive use cases. So if you stitch together this kind of video generative technology with other forms of generative AI, where we've talked obviously about being able to generate text, generating images, generating videos previously in other contexts.

Um, also music, which we've talked about with Suno and others. Um, there are ways of [00:44:00] stitching together lots of different experiences where you could potentially, you know, think about creative expression for your business, uh, taking on these tools and being able to create all sorts of new types of assets.

So if you think about learning, for example, um, you know, imagine if we're trying to build a tutor for a student in a particular language, with a particular character that that student finds really compelling, um, and a particular voice and an approach to it, right? So you could create, um, basically real time videos, um, to help assist people in their learning journey, right?

So that could be extremely powerful. Um, in the educational world, there's this thing that's commonly referred to as the two sigma problem, which goes back several decades to some famous research that was conducted, where essentially what was found is that there's two sigmas, or two, uh, standard deviations essentially, of difference in terms of people's, uh, learning outcomes between private [00:45:00] tutoring, essentially one on one tutoring, and, um, you know, broader, uh, educational formats like, you know, group lessons, et cetera.

And so the idea was like, how can we solve that two sigma problem? Meaning, how can we get closer and closer to the idea of private tutors for everyone? And I think this could be a really powerful way of connecting with people, where if you're trying to understand the student, whether it's an adult learner or a child, trying to understand them better through emotional insight, which AI is getting better and better at, looking at the student through a video camera and saying, hey, like, how's this person feeling? How can I better adjust the way I'm trying to teach this? How can I adjust the way I'm having a conversation? And then using real time video generation to then speak to that person with a video, real time video, could be quite compelling.

It could be way more effective at connecting with someone than, like, some generic stock videos, for sure. So that's one use case I think that's interesting. I think it has all sorts of applicability in marketing, with highly personalized [00:46:00] marketing. Imagine someone coming to your website and a short video is generated on the fly

for that person. Um, and it's picked from a number of different individuals who might be people that they connect with better, right? And of course, you'd have to have opt in permissioning for this or you should. But the idea would be like, okay, we have, you know, 10, 000 possible people that we'd use based on who they actually know in the professional network.

And so, uh, you know, we can generate videos on the fly to compel this person to join us or whatever. Again, tons and tons of ethics concerns that there should be for that, for that scenario. Uh, but just an interesting possible modality, right? We don't have that capability right now and we're about to get it.

So what can we do with it? Um, as a side note, it's kind of interesting. I don't, I didn't dig deep into Microsoft's release. I don't know if they disclosed where the name came from. Did, did they, did you see anything about the naming Mallory?

Mallory Mejias: I'm sure something clever, but I didn't. So

Amith Nagarajan: years ago, this is probably, this is over 20 years ago.

I happened to be visiting Stockholm, Sweden. [00:47:00] And there is a really interesting museum there with a, uh, a warship, a 400 plus year old warship, um, called the Vasa, V A S A, spelled the exact same way. And it was, it's a stunning thing to see because, um, this ship basically, um, Sank immediately after it left harbor on its maiden voyage 400 years ago.

And I believe it was covered very quickly somehow by mud or something along those lines. And it was extremely well preserved. So, for that era of shipbuilding and that period of history, you know, they were able to somehow carefully extract it in the last, I think in the last 50 years, uh, and preserved it.

So it's like a super realistic, because it's exactly the ship as it was basically when it launched. It's an amazing thing to go see. So I have no idea if this is because of that. But, um, the idea of like having a, a realistic view of something, I wonder if that's tied to it, but that's just more of a side note.

So if you haven't been to Stockholm and you're going at some point, definitely make time for an afternoon to go check that out.

Mallory Mejias: That is a very [00:48:00] random piece of knowledge that you have on me. I did just look it up, and it looks like the VAS is visual affective skills, unfortunately. But you know what? Hey, you don't know.

Maybe there's like some double meaning to the name of this. I'm sorry to burst your bubble. Um, well, I will say, when I watched the demos that Microsoft released, um, of this technology, it was really impressive to see how spot on the, the human facial expressions were. So you see people talking. I feel like sometimes with HeyGen, when I've used it at least twice, it's kind of static, like you're staring at the camera.

There's a little bit of movement, but pretty much it's very straight-on. The videos I saw from Microsoft were more like people looking around, people looking up when they're thinking, little things like that that are very human when you think about it. So I was impressed with that piece. I also saw, I don't know if Microsoft released this, but I saw one kind of positive headline about this technology in my research.

And it was something along the [00:49:00] lines of, this technology could allow us to have video calls with our camera off. And I just thought that was such a random positive use case, I guess. But anyway, just be on the lookout for that. But what I want to circle back to is the kind of nefarious uses of this technology, and the question around, should this be released?

Can this be released? I know, Amith, you have talked on this podcast about the idea of an authentication system of some sort, where we're able to authenticate which images or videos are real and which ones were created using deepfake technology. Have you heard about any advancements on that front, and do you see that as an essential first step before we release something like this?

Amith Nagarajan: Yeah, well, I mean, first of all, I think stuff like this is going to get released in the very near future, because, you know, it's kind of like the four-minute mile. Once someone realizes that it can be done, other people will very quickly follow. There are plenty of AI researchers out there in different kinds of companies that are building [00:50:00] these things and are going to open source them or make them available commercially very, very soon.

So someone like HeyGen, it's pretty cool, is probably not far behind, and there are tons of other people working on similar things. So it's not an if, it's a when, and I would bet it will be sometime this year, possibly very soon, that this type of technology becomes available to anyone and everyone, with all the concerns that comes with and all the potential opportunity.

Um, what I would say about authenticity is this. You know, over a period of years, there have been hype cycles around a technology called blockchain, and more notably, a big hype cycle around cryptocurrency, which is built on blockchain technology. Blockchain is notable for being an immutable ledger, meaning it cannot be changed.

And it's used for a lot of different, interesting use cases in a lot of enterprises now. Um, I believe there's a really strong case to be made that blockchain could be leveraged in a different way, in conjunction with AI, to help publishers provide authenticated images and [00:51:00] authenticated videos, which, you know, can also be watermarked.

But, you know, like I said, it's move, countermove: every time you have something that's built to detect something, or to add a watermark, people could learn how to strip it out. But with blockchain-based approaches, you could say, look, I'm a politician and I'm releasing a video, and my official videos are all on chain, meaning they're all tied to a blockchain, and therefore you can verify the authenticity, essentially, of those videos.

Now, that doesn't stop other people from creating videos, but those wouldn't be authenticated, and if AI tools or AI agents are helping us in our lives, they'll look for the authenticated videos and the authenticated sources. Now, that doesn't solve it for everyone everywhere, but maybe eventually, at scale, everyone's phone is automatically putting the images and videos they're sharing on chain, and we authenticate them somehow back to the person, compared to images that are generated.

But, um, I don't know what the state of the art is right [00:52:00] now in this category. I'm probably six to twelve months behind on really following that in any detail, but I know there were people working on it about that time frame ago. So I would imagine there's going to be a lot of progress there.

There are big opportunities for that. And there are certainly some high-stakes use cases, for politicians, public figures, etcetera, where that kind of authenticated video platform would be really powerful and important to ensure that people at least know if it's trustworthy or not. And it doesn't mean that if it's not authenticated, it's not real.

It's just possible that it's not real if it's not authenticated. Now, the flip side of all that is, what if you get your account hacked and someone uses your authenticated account to go on chain and publish a deepfake of you, right? So there are all these scenarios that can play out. It's really hard to have a single answer.

I just suspect that the long-term solution is something like blockchain, if not blockchain itself, providing the backbone for authenticity and, essentially, proof and verification.
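To make that on-chain authenticity idea a bit more concrete, here is a minimal, purely hypothetical sketch of the verification side: a publisher registers hashes of its official videos in an immutable record, and anyone can check a file against that record. The registry name, the publisher key, and the example hash are all invented for illustration; a real system would write these entries to an actual blockchain and sign them with the publisher's key.

```python
import hashlib

def fingerprint(video_path: str) -> str:
    """Compute a SHA-256 hash of a media file's bytes."""
    digest = hashlib.sha256()
    with open(video_path, "rb") as f:
        # Read in 1 MB chunks so large video files don't have to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical stand-in for an on-chain registry: publisher -> hashes of
# officially released videos. In a real system these entries would live on
# an immutable ledger, written under the publisher's signing key.
AUTHENTIC_REGISTRY = {
    "campaign-press-office": {
        "9c1185a5c5e9fc54612808977ee8f548b2258d31aaac8a7f1d0",  # placeholder hash
    },
}

def is_authenticated(video_path: str, publisher: str) -> bool:
    """Return True if the file's hash matches one the publisher registered.

    A miss does not prove the video is fake; it only means it was never
    registered, which is exactly the distinction discussed above.
    """
    return fingerprint(video_path) in AUTHENTIC_REGISTRY.get(publisher, set())
```

One appealing property of this kind of design is that the ledger only needs to store hashes, not the media itself, so verification stays cheap even for large video files.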

Mallory Mejias: I feel like we talked about this [00:53:00] in our 2024 predictions episode at the end of last year. I feel like we are about to enter a wave of deepfakes with the election year.

It's really scary, and I guess you're right, there's not much we can do to stop it. Hopefully we have some good actors working on technology that will, you know, move, countermove, like you said.

Amith Nagarajan: Yeah, I think in that episode we talked about a prediction that this year, sometime during the election cycle, a major news outlet would air a video clip that was a deepfake, and I still think that's very likely, if it hasn't already happened. I know there have been widely reported examples of phone calls happening right now

where supposedly Joe Biden or Trump is personally calling you. That's happening all over the place, and that by itself is very scary at the scale that it can happen. So, yeah, this stuff is out there and it's happening. And I think, you know, if you zoom back in to our intersection of AI and associations, the lesson I would take away from this is really the same lesson we keep coming [00:54:00] back to, which is that the field is moving fast.

There are a lot of new, emergent capabilities. Many of these things are already accessible and inexpensive, becoming increasingly available and, you know, costing less and less by the minute. So you have to think about this stuff, both on the defensive side of the ball, in terms of what could go wrong, and be aware of it.

But you also have to think about ways you can better serve your audiences. So the thing I would leave you with on this general line of thinking is, think first about what happens to your members in this world. Will your members be affected by these types of technologies? The short answer is everyone will be, right?

So it's a question of how they will be affected. Think about what that means for the profession or industry you're in, and then think about what that means in terms of the needs of those professionals for the types of services associations are well positioned to provide. What are the services and products that the audience will need in a year, in two years, in [00:55:00] three years?

And then build those things, because you probably don't have them. Probably nobody has them. Probably nobody has the types of training, the types of conferences, and the types of content that those professionals will need to, you know, be effective, be safe, and be successful in this new era. But there's no reason the association can't build that stuff using these very types of tools, engage with people in different ways, and create the path that helps their members navigate all of this.

I think that's the association's most critical responsibility. And in doing that, you're going to figure out a lot about the internal operations of your association. Many associations are thinking about how they can optimize and make their current processes more efficient. And while I'm not against that at all, I also think it's kind of like saying, well, I've got a really good horse-and-buggy factory.

I want to make that thing ripping fast. I want to make it the most efficient, badass horse-and-buggy factory that's ever been built, ever been seen on the face of the earth. But no one's buying those things anymore. People haven't even gone to cars, they've gone directly to airplanes and [00:56:00] spaceships.

So you have the best factory on earth, with the best processes and the best technology, for making, you know, horse buggies or saddles or whatever. And that technology is not needed anymore, right? Or it's a niche product at this point. So I think it's one of these things where you have to look external first, then come back to the internal and build what you think is likely going to intersect with where things are going.

Mallory Mejias: That is a great thought to end this week's episode on. Amith, thank you so much for your thoughts today. And to our listeners, if you've enjoyed this episode, if you enjoy the Sidecar Sync, be sure to leave us a review wherever you listen to your podcasts, or give us a like and subscribe if you're joining us on YouTube.

We'll see you all next week.

Amith Nagarajan: Thanks, everybody.