Sidecar Blog

Scaling Laws, AI Scientist Framework, and AI-Generated TV with Showrunner [Sidecar Sync Episode 44]

Written by Emilia DiFabrizio | Aug 22, 2024 4:00:46 PM

Timestamps:

00:00 - Introduction
02:33 - Discussion on Scaling Laws in AI
09:15 - Real-World Challenges of Scaling AI
15:01 - Legal and Data Constraints on AI Development
17:07 - Introducing AI Scientist: Revolutionizing Research
24:57 - Showrunner AI: The Future of Storytelling with AI
32:31 - How AI is Changing Business Strategies
44:15 - Final Thoughts and Future AI Innovations

Summary: 

In this episode of Sidecar Sync, Amith and Mallory take you on a journey through some of the most cutting-edge developments in AI today. The conversation kicks off with a detailed look at scaling laws and how they are driving exponential progress in AI models, from data demands to computing power. They then explore the AI Scientist framework, a revolutionary tool that promises to automate scientific discovery and reshape research as we know it. To top it off, they dive into Showrunner AI, an innovative platform democratizing content creation by allowing users to generate their own AI-powered animated series. Whether you're curious about the future of AI in research or content creation, this episode is packed with insights on the next frontier of artificial intelligence.

Let us know what you think about the podcast! Drop your questions or comments in the Sidecar community.

This episode is brought to you by digitalNow 2024, the most forward-thinking conference for top association leaders, bringing Silicon Valley and executive-level content to the association space. 

Follow Sidecar on LinkedIn

🛠 AI Tools and Resources Mentioned in This Episode:

AI Learning Hub ➡ https://sidecarglobal.com/hub
Scaling Laws ➡ https://shorturl.at/Zlqmt
AI Scientist ➡ https://arxiv.org/abs/2408.06292
Showrunner ➡ https://shorturl.at/CMo2f
ChatGPT ➡ https://openai.com/gpt-4

⚙️ Other Resources from Sidecar:

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He's had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey. Follow Amith on LinkedIn.

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Follow Mallory on LinkedIn.

Read the Transcript

Amith Nagarajan: Welcome to the Sidecar Sync podcast. Today's going to be interesting. If this podcast actually makes it to you on Thursday, August 22nd, I will consider that a minor miracle, because Mallory and I have been dealing with intermittent Wi-Fi, power outages, and some other issues between New Orleans and Atlanta.

My name is Amith Nagarajan,

Mallory Mejias: And my name is Mallory Mejias.

Amith Nagarajan: And we are your hosts. Before we get into some interesting topics on AI and associations, let's first hear a quick word from our sponsor.

Mallory Mejias: Amith, how are you doing on this lovely Wednesday evening?

Amith Nagarajan: I'm doing great, you know, it's been a long day. I woke up to a power outage here in New Orleans, and I don't need an AI model to predict how frequently the power will be out in New Orleans, particularly in summer. It's unfortunate, but it's pretty significant. We actually have a generator on our home, but the generator is also not working at the moment, because my house is under construction.

And so there are just a lot of interesting things happening in my world at the moment, down here in the bayou.

Mallory Mejias: Basically right before we started recording, Amith was taking me to Wi-Fi school and helping me put my phone in the window, so hopefully we'd have a strong enough signal to get this episode done. So, as he said, it will be really fortunate if this episode makes it to you, but hey, we're committed to the Sidecar Sync, and we're here to make it happen.

Today, we've got a few exciting topics lined up for you all. The first of those is scaling laws, which we've talked about a little bit on the podcast before, but not in depth. For the next topic, we are talking about an AI scientist. And finally, for topic three, we are talking about Showrunner AI, which is a storytelling AI platform.

Starting with scaling laws. A lot of AI's progress is due to something called scaling laws. Basically, researchers found that if you give AI models more data and more computing power, they get noticeably better at all sorts of tasks. This discovery kicked off a race among big tech companies: they're all trying to build bigger and more powerful AI systems, hoping to create the next breakthrough. Recent research by Epoch AI suggests this approach could potentially keep working until at least 2030. Which is exciting, but it also brings some challenges. The first of these is power supply. These massive AI systems need an enormous amount of electricity. Earlier this year, we covered a $100 billion AI supercomputer by Microsoft and OpenAI that could need up to 5 gigawatts of power, which is how much electricity it takes on average to power New York City.

It raises questions about energy use and environmental impact, of course. Now another big challenge is chip manufacturing. AI needs special chips, and making these isn't easy or cheap. Building a new chip factory can cost over 10 billion dollars and take years to complete. This creates a bottleneck in producing the hardware needed for AI advancement.

Data is another concern. AIs learn from huge amounts of information, but we might be running out of suitable training data. Some estimates suggest we could exhaust the supply of public text data in about five years. And of course, there are growing legal concerns. Using books, articles, and websites to train AI is raising copyright issues.

It's leading to legal battles that could affect the availability of high-quality training data. Despite these challenges, researchers are working on clever solutions. For the power problem, they're looking at ways to spread AI training across multiple locations. To address the data shortage, they're exploring more diverse types of data, including images, audio, and video.

Some are even experimenting with having AIs create training data for other AIs. So, Amith, that kind of gave a quick overview of scaling laws, and as I mentioned, we've talked about it on the podcast before, but I feel like it would be worth it for our listeners to kind of have you set the stage for what scaling laws are.

Amith Nagarajan: I'm happy to. Well, it's super interesting, because in the past, if you took a computer program and said, hey, I've got this program like Microsoft Word, and you threw more horsepower at it, more compute, more memory, more storage, it was still Microsoft Word. It still did the same stuff. The functionality doesn't change just because it has a bigger computer.

Similarly, if you have a database like SQL Server or Oracle, and you throw more CPU, more memory, more storage, and better networking at it, you don't get emergent capabilities. You don't have new functionality coming into that piece of software; it's just the same software, only faster. So the idea is that with AI models, we're not in the deterministic world of traditional computer programming: symbolic systems based on logic that are pre-programmed.

Essentially, these are neural-network-based systems using deep learning techniques, which we've covered on this pod before. The essence of the idea is that the models themselves are neural nets, and they learn from training data. Then when you run them, which in the AI world is called inference, you sometimes get different results from the same exact request.

And because they're non-deterministic, and because they are neural networks, they can produce different results with the same inputs. Now, when you significantly increase the amount of training data, which of course also requires a significant increase in what you described earlier, Mallory: compute, storage, energy, all those ingredients, right?

What ends up happening is that a more powerful model emerges. Something like GPT-4 was more powerful than GPT-3, which was more powerful than GPT-2, and each of these represented order-of-magnitude increases in training data, compute, and power consumption. The interesting thing, though, is that the capabilities that came out with these extra resources were significantly better, and somewhat predictably so. That's why it's called a scaling law: you can predict what level of new emergent properties will pop out, not necessarily which properties or capabilities will emerge, but that a certain number of emergent capabilities will keep popping up at given levels of compute and training data.

Whereas the actual programs, the software architecture, the way these neural networks work and the way they're trained, have seen innovations, but it's largely the same basic transformer architecture originally introduced in 2017. So there's been only incremental progress from an algorithmic perspective.

So what we're essentially saying is this: you can take the same program, throw a lot more money at it, and get a more powerful AI. That's what the scaling laws are basically saying. The reason they're coined "laws," quote unquote, is that they're predictions. It's kind of like Moore's Law. Moore's Law wasn't saying, hey, it has to be this way.

It was more like saying, "Over a period of time, I believe this is going to happen." That's essentially what the scaling laws are intended to codify: the idea that if you put more ingredients in, you're going to get more and more powerful AI out. That's the basic concept.

And it's kind of weird, because thus far in the history of computing we've had principally deterministic types of software: software that doesn't change and evolve, it just does what it does. You throw more resources at it, and it can support more users or work faster, but it doesn't all of a sudden do new things.
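To put the "more ingredients in, more capability out" idea in concrete terms, here is a minimal sketch of the kind of power-law relationship the scaling-law papers describe. The functional form mirrors published compute-optimal loss curves; the constants below are illustrative approximations, not values fit to any particular model:

```python
# Toy scaling-law sketch: predicted loss as a function of parameter
# count N and training tokens D, in the Chinchilla-style form
# L(N, D) = E + A / N**alpha + B / D**beta.
# Constants are illustrative approximations, not a real fit.
def predicted_loss(n_params: float, n_tokens: float,
                   E: float = 1.7, A: float = 400.0, B: float = 410.0,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

# Each step is a 10x increase in parameters and data: loss keeps
# falling predictably, even though the architecture is unchanged.
for n in (1e9, 1e10, 1e11):
    print(f"N={n:.0e}, D={20 * n:.0e} -> predicted loss {predicted_loss(n, 20 * n):.3f}")
```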

Mallory Mejias: Well, that was my follow-up question: have we seen scaling laws hold with any other emerging technologies? But it sounds like maybe we haven't, at least not in the same way.

Amith Nagarajan: Not in the same way. I mean, this is a new animal, because what we're talking about here isn't so much that performance, or the cost-to-performance ratio, is going to keep improving. Moore's Law, and the law of accelerating returns, which is the broader concept behind it, was essentially saying that computing would double in power at the same price point, or, put another way, that computing would become half as expensive, over every roughly two-year span.

So that was the Moore's Law concept, and if you double something and double something, over and over again, you get an exponential curve. That had to do with price relative to performance: the number of computations per second you could do for a certain amount of money.

But it wasn't that the computations could do new things. They weren't growing new capabilities; it was just compute at a lower cost. And most exponential curves are like that. If you think about photovoltaics, or solar cells, it's the same idea: how much does it cost per watt to produce and purchase a solar cell?

You see this in other exponentials as well. So the key thing that's different here is that the scaling law concept is talking about not a performance increase in the traditional sense of speed, but rather a capabilities increase, where the models can actually do more for you with the same basic algorithm.

The algorithm hasn't really changed much, but the capabilities of the output, the models that get built, are significantly different.
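As a quick back-of-the-envelope illustration of what that doubling cadence compounds to (taking the roughly two-year halving of cost at face value):

```python
# Moore's-Law-style arithmetic: compute cost halves every ~2 years,
# so 20 years of doublings is roughly a 1000x drop in cost.
relative_cost = 1.0
for year in range(0, 21, 2):
    print(f"year {year:2d}: relative cost per computation {relative_cost:.6f}")
    relative_cost /= 2
```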

Mallory Mejias: So would this mean that, at this point, we are not as reliant on AI innovations, or discoveries we might say, as we are on the resources and infrastructure that go into AI advancement?

Amith Nagarajan: Well, kind of, but that's a bit like asking: if we'd never had any other innovations in transportation, would building a really, really gigantic steam engine be the best way to get stuff around? Sure, it'd be a really giant steam engine, but it probably wouldn't be the best for everything, right?

So in the case of this particular model architecture, we have a number of known, significant problems. The most notable is something called the quadratic problem, which has to do with the way the transformer architecture works. The longer the context is, so the more stuff you send to the AI, there's essentially this quadratic cost. That means if you compare a hundred tokens versus a thousand tokens,

it's not a 10x increase in work, it's a 100x increase, because the cost scales with the square of the token count. Basically, what's going on with the math is that you're comparing every token against every other token. That's fundamental to this thing called the attention mechanism, which is the key breakthrough that made transformers as powerful as they are.

But it's a significant performance limitation at scale. There have been all sorts of interesting incremental improvements to the current model architecture to address this, but ultimately what we need is an effectively infinite context window with linear scaling, meaning that whether I give you a thousand tokens or a hundred thousand tokens, I can still get a response back in roughly the same amount of time. Maybe a little bit longer, but not dramatically more. So that's the concept: we have to solve for the quadratic problem.
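Here is a tiny numerical illustration of that quadratic blow-up, a toy self-attention score computation rather than any production implementation:

```python
import numpy as np

def attention_score_entries(n_tokens: int, d_model: int = 64) -> int:
    # Self-attention compares every token against every other token:
    # scores = Q @ K.T is an (n_tokens, n_tokens) matrix, so the work
    # and memory grow with n_tokens squared.
    Q = np.random.randn(n_tokens, d_model)
    K = np.random.randn(n_tokens, d_model)
    scores = Q @ K.T
    return scores.size

for n in (100, 1_000, 2_000):
    # 10x more tokens -> 100x more pairwise score entries
    print(f"{n:5d} tokens -> {attention_score_entries(n):>9,} score entries")
```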

The other problem with neural networks as they exist today is that they're basically fixed in time.

If you've ever noticed with ChatGPT or Claude, they're kind of fixed in time in terms of their training data sets. The most recent release of GPT-4o, as of the time of this podcast, had a training cutoff of, I believe, October 2023, or something along those lines; maybe it was December.

So the knowledge of the AI is limited to that point in time. For future model releases, they take that model, do more training on it with more recent data, and then release some future subversion of GPT-4o, a new flavor.

That becomes the new version of the model, but the model itself is very much static. Once you've completed training it and shifted it into the mode where it's serving users, inference mode, it doesn't continue to learn. You can talk to ChatGPT all day long; it's never going to learn anything from talking to you, because you're in inference mode, not training mode.

So the architecture of the current neural networks we're working with is largely fixed, and there's research going on to make the actual training process much more dynamic and continuous. There are a number of exciting threads of research happening in that arena as well.

So there are a lot of really interesting things happening in AI research that will push this forward even more aggressively. I'm hopeful we'll see solutions that lower the energy requirement, lower the compute requirement, and in some respects let smarter AI help create better AI. So I don't think it's just about scaling laws. Scaling laws are a great thing to lean on, because we haven't had that major step change in model architectures yet, but I think we're pretty close to getting something interesting.

Mallory Mejias: That makes sense. Out of the challenges we talked about, so power supply, chip manufacturing, data, and then legal concerns, do you feel like any of these is more difficult to address or more pressing than the others?

Amith Nagarajan: I think the supply chain problem with chip manufacturing is a major issue that's going to be tough to solve, for a lot of different reasons. You mentioned the long lead times for new fabs; there are obviously the capital costs; and there's also the geopolitical tension around the fact that most advanced chips are made by one company in Taiwan, right?

TSMC. They're a great company that does amazing work, but it's a super risky thing to have the global supply chain for AI chips come from a single company in a potential conflict zone. So there are those issues. On the energy side, part of what we talk about on this podcast is how one exponential might help another exponential.

If we look at what's happening in the world of materials science, we have AI helping with materials discovery, and new materials could help us accelerate AI. For example, if we have more efficient materials to work with: superconductivity is this holy grail in materials that we've talked about in the past.

Power transmission using superconductors would result in essentially lossless transmission, which makes the grid more flexible; we lose a ton of the power we generate just through transmission. The same thing happens at micro scale on a chip. You're losing a ton of the power on the chip, and the byproduct is a lot of heat, because of that same lossiness of power transmission even across very, very small distances.

So if we can solve some of these materials issues, that will help a lot in terms of chip design, chip manufacturing, powering these AI plants, the AI data centers essentially, and a number of other things. And of course, better AI chips will help us design better materials.

So that's where I think there's a potential virtuous loop, where these exponentials can build on each other in an interesting way.

Mallory Mejias: Okay, moving on to topic two, the AI Scientist. "The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery" is a pioneering framework aimed at achieving fully automated scientific discovery using artificial intelligence. The framework leverages advanced large language models to independently perform research tasks traditionally carried out by human scientists.

It's designed to generate novel research ideas, write code, execute experiments, visualize results, and produce full scientific papers. The AI scientist can also conduct a simulated peer review process to evaluate its own work, mimicking the iterative development process of the human scientific community.

This framework has been applied to several subfields of machine learning, including diffusion modeling, transformer-based language modeling, and learning dynamics. Remarkably, it can produce papers at a cost of less than $15 per paper, and these papers have been shown to exceed the acceptance thresholds of top machine learning conferences when evaluated by an automated reviewer with near-human accuracy.

The concept of an AI scientist introduces a new era in scientific discovery where AI agents can potentially handle the entire research process, including idea generation, experiment execution, and peer review, thus enabling endless creativity and innovation on complex global challenges. While the system shows promise, there are still challenges and limitations like occasional flaws in the generated papers and the question of whether AI can propose genuinely paradigm shifting ideas.

So, Amith, you sent me this, and I thought it was really neat. I looked into it briefly, and I realized that it's all open source, so any one of our listeners can go and get access to this AI scientist. I want to know, from your perspective, why is this so exciting?

Amith Nagarajan: Well, the idea of novel science being invented by an AI, or at least partially invented by AI, is a really interesting thing. I think what you're seeing with this current generation, the AI Scientist, as this particular approach is titled, is really kind of a pretend version of that, in the sense that the core component doing the invention is the language model that comes up with the hypotheses that then drive the rest of the process.

The rest of the scientific process is only as good as the hypotheses you put into the front end. It's kind of like marketing: if your top of funnel isn't that great, your bottom of funnel can't be that great either. And that's definitely a weak part, because current language models, even the state of the art, GPT-4o or Claude 3.5 or any of these things, are still largely incapable of coming up with novel ideas. Nor do these models actually have long-term planning or reasoning, or the ability to do anything beyond a facsimile of reasoning in their current incarnation. That being said, they have the knowledge of all humans in them.

So it isn't so much that they're necessarily creating novel ideas, but their output might appear novel in the sense that they're recombining ideas at scale in a way that no human would likely ever do, partly because they're doing things that are not obvious to us. So even though they're not necessarily coming up with truly novel ideas, just by mixing the pot a little, they might come up with novel-enough ideas that represent breakthroughs, which could be significant or could be minor.

But they're things worth taking through the rest of the scientific process. To me, that's what's interesting about the current technology. What's more interesting is if you imagine a world where you fast-forward and take the AI Scientist further. Which, by the way, at its core, is a multi-agentic system.

We've talked about multi-agent systems a bunch of times. It's basically AIs working through multiple steps, with tools available to them: multiple AI agents, which are essentially just different prompt structures talking to each other to solve a problem in a coordinated way, with a particular goal in mind.

So I'm not saying this isn't a great advancement, but the point is that it's a multi-agentic system focused on science. What I'm excited about is this: the system is already pretty good. As you pointed out, it's $15 per paper, and these aren't garbage papers; they're papers passing pretty high bars.
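For a rough mental model of what "different prompt structures talking to each other" means in practice, here is a heavily simplified, hypothetical sketch of an AI-Scientist-style loop. The `call_llm` helper, the prompts, and the stage ordering are illustrative stand-ins, not the actual open-source code:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion API call."""
    raise NotImplementedError("plug in your preferred LLM client here")

def ai_scientist_round(topic: str) -> str:
    # Each stage is just a differently prompted call to the same model.
    idea = call_llm(f"Propose a novel, testable hypothesis about {topic}.")
    experiment = call_llm(f"Write code for an experiment that tests:\n{idea}")
    results = call_llm(f"Summarize the results of running:\n{experiment}")
    paper = call_llm(f"Write up the hypothesis, method, and results:\n{idea}\n{results}")
    review = call_llm(f"Act as a strict peer reviewer and critique:\n{paper}")
    # The simulated review feeds back in, mimicking iterative revision.
    return call_llm(f"Revise the paper to address this review:\n{paper}\n{review}")
```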

But what if you plug in GPT-5? What happens when you plug in the next frontier model, a 10x improvement over GPT-4o? Maybe those models really are capable of more; that might be part of the emergent properties of the scaling laws we were just talking about. And we don't know this yet, right?

This is all speculation. Even the people who wrote the scaling laws papers don't know what the emergent properties are going to be. But if you do an order-of-magnitude increase in power, even with the current model architectures, what will you get? Will you get new capabilities? Will you get truly novel ideas coming out?

That will be interesting to see. The other thing is, again, as we talked about in the last topic, the model architectures themselves are becoming smarter. They're moving toward System 2 thinking, right? We've talked about Thinking, Fast and Slow in the past: System 1 thinking is the reaction, the survival instinct, and System 2 is longer-range planning, and we switch back and forth between different parts of our brain, the limbic system and the frontal cortex. AIs basically only have a limbic system equivalent: they instantly respond to everything as quickly as possible. That's what inference does. There's no ability to think about it, reason on it, stew on it, sleep on it, and come back to it. There's no equivalent to that right now. The next generation of models is going to approximate that, and that's probably where you're going to see the first versions of creativity. Some people might still put that creativity in air quotes, but I think you will see interesting stuff coming out of those next models. And then, when you plug those ideas in as hypotheses for something like the AI Scientist, watch out: you're going to start seeing some interesting stuff happen.

I've also seen versions of this type of approach that are tied into labs, where you can actually execute lab experiments, because there's a lot of automation happening in labs, in biology and other disciplines. So if you can tie the actual execution of a physical experiment, in the world of atoms, not just the world of bits, into this type of model, that's also interesting, right?

Should there be human supervision somewhere in there? Maybe so, but the point is that you're able to consider a much broader range of experiments. Scientific discovery is limited by resources: human ingenuity and labor, dollars, time. If we can solve for some of those constraints, we can increase the amount of scientific discovery going on.

So even if the AI Scientist doesn't necessarily come up with all the novel ideas, just doing the rest of this stuff is a superpower for human scientists. To me, that's what got me excited about it. At the end of the last topic, I was talking about how exponentials are converging and feeding each other.

My example was materials science leading to more efficient chips and better power distribution, and so on. My point here would be that it's exactly the same idea: the AI Scientist could be an AI scientist of materials science, or some other related field that might power the next generation of AI. And of course, as in the example you provided, they're actually having the AI Scientist do work in the machine learning discipline itself. So I think it's quite interesting.

Mallory Mejias: Conceptually, here's what I'm struggling with. You mentioned that with the emergence of GPT-5, one day we might have an AI scientist that could come up with truly novel ideas. But generative AI, and especially large language models, are next-word predictors, predicting the next words based on what they've been trained on. I'm struggling to understand how, conceptually, the next models could be capable of novel ideas when they're predicting based on what they've been trained on.

Amith Nagarajan: Right. I think part of it is even more power, even more data. But the other part is the model having the ability to pause and think slower, to contemplate and recalibrate its thinking based on its initial results, to loop on itself the way we might, or to break complex tasks down into smaller components and then reason out solutions for each part.

There's more sophistication, more horsepower ultimately, in that kind of model, and that's where all the research labs are heading in terms of reasoning. That's all the hype you hear about Strawberry, or Q*, which we covered last fall and which became Strawberry. That's where the OpenAI team is heading.

It's pretty clear that everyone else is doing something similar, because that's going to be a significant change, a step change if you will, in the functionality of these models. My point, though, would be this: even if the current models are the limit of what we have, I think it's somewhat of an esoteric argument.

The purists will say, no, GPT-4o is not capable of novel ideas, because it is a statistical machine based upon all the data you fed it; therefore, by definition, it is simply replicating that which it was trained on. True statement. However, what's necessarily different between that and how we come up with new ideas?

We don't necessarily know, right? Our training data is our life experience. How do we come up with our next idea? What we call intuition, that feeling in the middle of the night, or that eureka moment when you're driving down the road. What exactly is that? I do think it has something to do with System 2 thinking, that longer-range, slower-process thinking.

But what is the input in the human brain that drives that ingenuity? There's no really good answer to that, but ultimately, we don't necessarily know that it's not something similar, just a much, much more powerful version of it. I'm going way beyond what I'm an expert in here, in terms of neuroscience and in terms of how these AI models work at a really deep level.

But the point is that there are a lot of unknowns. And I think the reason I said it's kind of an esoteric debate is that even if all the thing is doing is recombining words from the past, it's doing it at such an extraordinary level that it comes up with new ideas through the combinations of what it's learned, right?

Of what it's learned, right? That's what a lot of, I mean, that's what a lot of art is. That's what a lot of science is, too. You know, you build on ideas. You're influenced by others in your field. That's, you, you talk about that. You cite other papers in scientific research, right? So, I don't, I, I think it's an interesting conversation.

To me, the most important thing is the results. A few months ago, we talked about an AI model that was coming up with ideas for a whole bunch of new materials. I think it was Project GNoME, maybe, by Google's DeepMind, and I think it came up with 120,000 novel crystal structures that were never before known. Not all of them were replicable or things that could actually be synthesized, but the idea was fascinating to me, because that's what it was doing.

It was essentially coming up with these new hypotheses for potential materials, some of which were good, some of which were total garbage. But I think in that episode we talked about how hundreds of those have actually been synthesized by humans in a lab after the AI suggested they would have unique properties and applications.

So I think we're already seeing this happen. To me, it's exciting because it's applicable to a lot of other fields. We talk about science as a domain that I think is just generally interesting because it's about human progress. But when we talk about associations: how many associations, within the back office and the staff of the association, are doing scientific research?

Basically, none that I know of. A lot of them are working in fields where their members are doing that, but they themselves are not; they're operating marketing departments and membership departments and so forth. So perhaps the question we should spend just a minute on is: why do we keep talking about this stuff, and how does it apply?

My answer to that is: if AI can come up with novel, or even pseudo-novel, science, what can it do for you? How can it help you solve a sticky membership problem? How can it help you solve a governance problem? How can it help you solve issues related to the way you structure your event for next year?

Or the debates in your committee about the topics you should be programming, and on and on, right? There's so much that an AI can do for you when you think in these higher-order terms, which people are not thinking about yet, because they're still stuck on the initial use case: is the blog post that ChatGPT wrote for me good enough to post without human review?

So I think there's so much more like this. I get drawn into the scientific stuff because if we can solve for that, it's a much higher-order function than most of the back-office things people are doing, and it's applicable at the same time. It's complex, multi-step planning. It's long-term execution, it's actions, right, taking advantage of tools.

It's all that stuff we talk about with multi-agent systems.

Mallory Mejias: Mm hmm. It must drive you kind of crazy, Amith, I'm thinking. We've been talking about multi-agentic systems for a while on the podcast, and to think that this AI Scientist exists, and that we could take that idea and put it into business, we could have a business scientist in our organization who could just run experiments right now. We have that technology right now. It's kind of mind-blowing to think it's all here at the tips of our fingers, and yet so many people aren't using it, or maybe don't have the knowledge to.

Amith Nagarajan: I think it's partly a lack of awareness, partly a lack of the creativity to think about how to use these tools. And at the moment, there still are some barriers from a technical perspective: to actually pull this code down, execute it, and work with it is going to require some degree of technical skill.

But that's also where AI is going to make it much more accessible to everyone, in the very near future, to be able to do these things without any technical skills. If you were to run a thought experiment: well, Mallory, you're marketing digitalNow right now, you're spending a lot of your time thinking about digitalNow.

Wouldn't it be great to run dozens of concurrent experiments, marketing different kinds of messages to folks and seeing what works? You're doing some of that, but you're doing it at human scale, right? You're pushing the boundaries of what you and your team can do. But imagine a marketing agent that's essentially like the AI Scientist, and you give it the general idea of: try different ideas every single day and see what happens. And it goes and does stuff. Of course, you've got to get comfortable with the idea of this thing communicating with people, but that's where the human-in-the-loop idea and multi-agent systems come back into the fold.

Mallory Mejias: So many of our conversations, Amith, yours and mine, about Sidecar and about marketing are very often: yeah, let's try it, let's experiment, let's run this, let's see if it works, and let's iterate and try something new later. Just to think about having that power within Sidecar: it's crazy to think that that technology is here.

Amith Nagarajan: Yeah, and in the Sidecar community, I mean, we're in a really enviable position in a lot of ways compared to the challenges a lot of our listeners have. Sidecar has an audience of about 12,000 people who have all opted into the group: to receive newsletters three times a week, to be notified about webinars and education, to read the books and ebooks and all this other stuff that we produce. And the people who have gravitated toward Sidecar are those with more of an innovation mindset.

People who want to push forward and see where associations and nonprofits should go. And we tell people, look, we kind of experiment on ourselves; that's part of what we do. So there's a bit of an expectation that Sidecar is a little bit out there. We're kind of nutty, and we do stuff that sometimes doesn't work, right?

And that's the culture we set from the very beginning, years ago. Whereas most associations have a challenge in that their association was set on standards of predictability, excellence, and tradition, for decades or in some cases hundreds of years. When you have that kind of cultural backdrop, and perhaps a governance structure that's extremely risk-averse, it's a lot harder; the kind of experimentation we're talking about would probably be seen as totally reckless.

And maybe it is to some extent, right? But in our ecosystem it works. For a lot of associations, therefore, that's why we always talk about how you have to do things in a sandbox and start off with really, really small experiments, so that you're not betting the farm or betting your job, but you are doing something small enough that it could show promise, which is really exciting.

But if it doesn't work, you learn from it and move on to the next thing. So I think part of what we're trying to do is inspire people with all sorts of crazy ideas and test out a bunch of them on our own business, because Sidecar is essentially an association; it's very much that type of model.

Mallory Mejias: Moving on to topic three. Showrunner AI, referred to as the Netflix of AI, is an innovative platform developed by The Simulation, formerly Fable Studio, that allows users to create and watch AI-generated animated series. The platform is designed to democratize content creation by enabling non-professional users to generate their own TV shows using AI.

Users can create episodes by providing short prompts, which the AI then uses to script, produce, and cast shows. The episodes can range from 2 to 16 minutes and are currently limited to specific styles like anime, 3D animation, and cutout animation. The platform is in its alpha stage and has generated significant interest, with a waitlist of over 50,000 people.

Showrunner aims to make TV production accessible to a broader audience, allowing users to experiment with storytelling in real time. It features AI-generated dialogue, voices, and editing, and allows users to edit scripts, shots, and voices to personalize their episodes. Showrunner's launch includes several shows, like Exit Valley, a satire of Silicon Valley, and Pixels, a family comedy with Pixar-style animation.

The platform's episodic nature is currently more suited to sitcoms and self-contained stories than to long, epic narratives. It does face some challenges, like the quality of AI-generated content, which some critics find clunky and hard to watch. Additionally, the platform's reliance on AI raises questions about the originality and creativity of the content produced, as we've discussed on today's episode.

So, Amith, this one was fun for me, because anytime I see news like this in the AI space, given my other work as an actor, this stuff is always very interesting to me. But I want to take a little bit of a different angle on this. You and I have spent a lot of time talking about this book called 7 Powers by Hamilton Helmer, and you can explain the book better than I can, but it's basically a book about the foundations of strategy and about having different powers within your business that give you durable, persistent returns.

Is that right, Amith? More or less.

Amith Nagarajan: Persistent and differential.

Mallory Mejias: Got it.

Amith Nagarajan: Differential meaning better than everyone else.

Mallory Mejias: Got it. Persistent and differential returns. And so one of the powers in that book is counter positioning, which is interesting because that's how Netflix, at least initially, disrupted the media space was with counter positioning. And so now we see platforms like Showrunner using AI to take a potential counter position to To Netflix.

So there are a lot of thoughts in this question, but I'm curious if you can talk a little bit about the 7 Powers book and how AI is redefining what these powers mean and what's available now to businesses.

Amith Nagarajan: 7 Powers, written by Hamilton Helmer, as you mentioned. I'd recommend it as reading to anyone listening or watching: a fantastic book on strategy. I've read dozens of books on strategy over the years, and I find this one particularly enticing because it really has a mathematical foundation underneath it. So, first of all, what is a power?

A power is something, to the earlier point you made, that produces durable, differential returns. Durable, of course, means it lasts over time: a durable good is a refrigerator, a consumable good is a t-shirt, right? So there's durability, something that lasts a long time, and there's differential, meaning more profit or more margin than anyone else.

And this is, by the way, very applicable to not-for-profits, because not-for-profits should be, and in many cases are, extremely profitable. You just reinvest in your mission rather than distributing that cash to shareholders, since you don't have shareholders. But the point I would make is that this is super applicable to all organizations, especially the nonprofit community, and we're going to be talking a lot more about the seven powers over time. Just to give you an example of some of the powers: Mallory mentioned counter-positioning, which is a great example of something that's what it sounds like. You're essentially going up against an incumbent and saying, hey, there's a different way of providing the value, either more value or a different way of providing it.

So Blockbuster versus Netflix is the example I think you're providing there, where you're delivering media initially through DVD by mail and then through streaming. In fact, interestingly, Netflix counter-positioned against themselves in their second phase. Their first phase was DVD by mail, which was a more convenient way to get a physical medium into the DVD player at your home.

Then they counter-positioned against themselves when it came to the delivery of those bits over the air. And then, of course, they shifted into a completely different business, powered by a different power, because counter-positioning is inherently fleeting: once you become the dominant player, someone else is going to counter-position against you with some other way of doing it.

As you just pointed out with this story. So, just going further down the powers, some of the other powers are things like scale economies, network economies, cornered resource, switching costs, branding, process power, etc. We're not going to talk about all of them, but the essence of what Netflix is all about is scale economies.

The way the author, Hamilton Helmer, describes scale economies is that you basically have the lowest per-unit cost relative to the value you're creating, which of course yields the highest margin, right? That's the differential return. And the way you do that is by having more customers. In the case of Netflix, they spend more than anyone on content.

Yet their profit margins are better, because they have a much, much larger paying base. That becomes a virtuous cycle, because if you have more customers paying, you can invest more in content. Your content is a fixed cost: the more customers you have, the more incremental margin you generate, the overall margin relative to that fixed cost goes up, and it's very, very hard to compete with that, right?
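To see why that flywheel is so hard to compete with, here is a toy version of the arithmetic; every number below is invented for illustration:

```python
# Toy scale-economies arithmetic: the same fixed content budget
# amortized over different subscriber bases. All numbers invented.
def annual_margin(subscribers: float, monthly_price: float = 15.0,
                  content_budget: float = 15e9) -> float:
    revenue = subscribers * monthly_price * 12
    return (revenue - content_budget) / revenue  # ignores all other costs

print(f"incumbent, 250M subscribers: {annual_margin(250e6):+.1%}")  # healthy margin
print(f"rival, 50M subscribers, same budget: {annual_margin(50e6):+.1%}")  # deep loss
```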

Because if you have a much smaller budget for content, your content would, in theory, not be as good. Enter AI. In this conversation, AI can potentially produce content that's better, but that's all subjective, relative to the viewer. If I create my own content using this tool, then consume it and share it with my friends, it has an interesting effect, because it's perhaps something I enjoy and gain more utility from than the static library of content that someone like a Netflix has.

So perhaps in that scenario, this startup is not only counter-positioned but could potentially gain from some degree of network economies, because if there's a sharing ecosystem, where you create something and then promote it to your friends, maybe there's an element of that.

But there aren't really two sides to that in terms of both supply and consumption. So I think it's an interesting conversation. That's a super brief synopsis of 7 Powers, and we use it all the time at Blue Cypress to evaluate businesses we're thinking about entering, asking: can we create one or more of these powers in this business?

When we think about it from a startup perspective, it's super easy to say, of course, we're going to counter-position against whoever the incumbent is, if there is one; and if there isn't an incumbent, you can still be counter-positioning in terms of the value-creation process. But then, ultimately, what would be the durable power that we would go after over time?

So I think this is a super interesting conversation, and we'll probably be talking about it a bunch more on the pod and in other formats. But coming back to this AI opportunity: I just think it's super interesting. This is not a tool I've had any exposure to personally at all, and I probably won't even look at it, even though I find it conceptually interesting. But the idea of being able to create short-form episodes that are highly tailored to whatever I'm interested in? I could see that being appealing to a lot of people.

Mallory Mejias: I imagine using AI to do that, the cost per unit would be essentially zero, right? So then I'm not sure if scale economies, do you think they'll hold in the, in the age of AI? Hmm. Mm

Amith Nagarajan: Yeah, I think the question is: does that resource, the massive fixed cost that's required to produce content in the current world, become a lesser issue over time? Is there one type of content where it will continue to be necessary to produce these massive, big-budget TV series and movies and so forth?

Or does this potentially supplant all of that? Maybe it becomes a category, right? It becomes part of what people consume, but it's hard to say. The other thing, too, is that AI is in this crazy fast doubling. So maybe we're at two to three minutes now, but very quickly these things can become feature-length films, TV series, self-generating episodes or new seasons of TV shows.

There's all sorts of crazy stuff that can come out of this. About two years ago, right around the time ChatGPT came out, somebody invented essentially a multi-agent system that used an LLM to generate ideas for Seinfeld episodes.

It was trained on all the Seinfeld scripts of the past, and the idea was to generate new Seinfeld scripts: first the ideas, and then it would generate the actual script. Then it would take the script and feed it to a very rudimentary text-to-video thing, basically a really lousy animation kind of thing.

But it had audio and a very rudimentary form of video, and the idea was a continuous Seinfeld episode: the AI just kept generating more and more frames, staying a little bit ahead of what people were watching. That was of course just a novelty, but the idea was interesting, right?

It's similar conceptually to what you're describing here.

Mallory Mejias: hmm. I'm definitely

Amith Nagarajan: And to your point, that resource is no longer necessarily the constraint, if people do want to consume this stuff.

Mallory Mejias: I want to know how our listeners and viewers feel. If you're listening audio-only, feel free to send us a text; you can actually do that via the link in the show notes. And if you're viewing on YouTube, please drop us a comment. Are you interested in consuming this kind of content?

AI-generated content that's personalized just for you, with the kind of humor you like? Or are you leaning more toward human-created content? Amith, I don't know what your take on that is.

Amith Nagarajan: Well, you know, I could actually see maybe a hybrid being kind of interesting. I don't watch a ton of TV; I watch live sports, and pretty much that's the only TV consumption I'll get into. So it's starting to get to be that time of the year where I'm a little bit less productive, because the NFL is about to kick off.

But other than that, I'm not a big TV person. Every once in a while, someone will say, oh, this series is really great, you should check out whatever, and I'll get into the series and really enjoy it. I may not binge-watch it per se, but I'll watch it pretty consistently for a period of time until I finish it. And then that'll be that, and I won't watch TV again for six months or something, until someone else tells me, oh, there's this other great thing. I don't really rely on the recommendation engine or anything like that, and maybe occasionally I find something interesting. But if I find something I like, wouldn't it be cool if it were almost like a choose-your-own-adventure, where you can get more episodes of a series that you do like?

Someone like Netflix, who owns the IP for so much content, could be in a great position to offer subscribers, maybe at a premium tier, or maybe as part of the core service, the ability for Mallory to say: hey, I want more episodes of this type, continue this storyline, or what if these characters did these things, or whatever.

Now, of course, there are all sorts of risks and issues with doing that. There are IP issues, there's how you pay the actors, there are all the other intellectual property pieces that go into that. But conceptually, it's kind of interesting. Think about what Disney could do with their library of IP. It'd be unbelievable; think about kids going nuts with Disney, where they can create their own episodes of Marvel shows or whatever.

Mallory Mejias: I could definitely see kids going nuts with that. I don't think I've ever thought about it from the kid angle. At my core, it just feels like the antithesis of everything I believe in, so for the time being, I'm not super interested in the choose-your-own-adventure thing. But I definitely think there's a market for it.

And I'm not saying there's anything wrong with it, either. I just think, for the reasons I consume entertainment, I can't imagine that would be satisfying to me.

Amith Nagarajan: Yeah, you know, I don't know. I wouldn't know until I've consumed something like this. But, for example, probably my favorite TV series of all time is 24. I don't know if you've ever watched 24.

Mallory Mejias: I've never... I don't know if I've even heard of 24. What is that about?

Amith Nagarajan: I'm dating myself, then. So, you should go check it out.

The first couple episodes of the first season might take a little bit of time to get into, but it's amazing. Kiefer Sutherland plays Jack Bauer; it's basically counter-terrorism. Every season is 24 episodes, an hour each, and the episodes unfold roughly in real time; that's the idea behind the show. It's really awesome; it was my favorite TV show for a long time. And I would love it if I could say, hey, give me a new season of 24. I would totally watch that if it was good, but I have no idea if it would be good, right? So I think it's just an interesting exercise to think through.

I would be shocked out of my mind if in the next few years we didn't have these kinds of options on at least some of the streaming services, perhaps in an experimental mode. The technology is there if you think about it: we have text-to-video that's getting pretty good, we obviously have the audio-to-audio stuff, and we've got all sorts of amazing things we can do in terms of generating ideas. So I think we're going to see stuff like this. It raises all of these interesting questions of what it means, but I think there could be demand for it.

Mallory Mejias: Oh, absolutely. And I will check it out by reason of this podcast; I can assure all of you I will consume it, and I'll let you all know how I feel. This is how I'm predicting I'll feel, but as we all know, humans are not so good at predicting how they'll feel in the future.

Amith Nagarajan: That's true.

Mallory Mejias: Well, everyone, we made it to the end. We didn't have any major power outages or blackouts, I don't think; we'll find out. Thank you all for joining us, and we will see you next week.