
Timestamps:

03:29 – What is AI?

09:05 – Neural Networks Explained

14:07 – Core Components of AI

25:30 – Different Types of AI

34:43 – We are in a Period of Exponential Growth

38:55 – Test Scores of the AI Relative to Human Performance

Summary:

In Episode 19, Amith and Mallory dive into the first part of a two-part series on the fundamentals of AI, shifting from their usual format of discussing contemporary topics. This episode is designed to lay the groundwork for understanding AI, covering its core components, types, and the significance of its exponential growth. It aims to equip listeners with basic knowledge about AI, setting the stage for exploring specific AI use cases relevant to associations in the next episode.

Let us know what you think about the podcast! Drop your questions or comments in the Sidecar community.

This episode is brought to you by Sidecar's AI Learning Hub. The AI Learning Hub blends self-paced learning with live expert interaction. It's designed for the busy association or nonprofit professional.

Follow Sidecar on LinkedIn


Tools mentioned: (covered in Part 2)

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress (BlueCypress.io), a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.
Follow Amith on LinkedIn.

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Follow Mallory on LinkedIn.

Read the Transcript

Disclaimer: This transcript was generated by artificial intelligence using Descript. It may contain errors or inaccuracies.

[00:00:00]

Amith Nagarajan: Perhaps you have some anxiety over this. Perhaps you're excited. Channel that energy, whatever it is, towards taking a step towards learning. That's the most important message in all the talks that I give: start learning now.

Amith: Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights, and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future.

No fluff, just facts and informed discussions. I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host.

Amith Nagarajan: Greetings everybody, and welcome back to the Sidecar Sync. We are really excited because we have a special two-part series, this podcast and the next episode, where we'll be talking about the fundamentals of AI. So we're going [00:01:00] to be taking a brief detour from our usual format, where we talk about three contemporary topics at the intersection of AI, innovation, and of course associations. We're going to be talking specifically about the fundamentals of AI. Many of our listeners have told us that they would love to hear more of the basics. How do you get started with AI? But even before that, what are the fundamentals of AI? So we're going to dive into that here in part one, and next week in part two, we're going to go deeper into use cases of AI that are specifically relevant to associations. So I can't wait to get that started. But before we do, let's pause for a moment to hear from our sponsor.

Mallory: Today's sponsor is Sidecar's AI Learning Hub. The AI Learning Hub is your go-to place to sharpen your AI skills and ensure you're keeping up with the latest in the AI space. When you purchase access to the AI Learning Hub, you get a library of on-demand AI lessons that are regularly updated to reflect what's new and the latest in the AI space.

You [00:02:00] also get access to live weekly office hours with AI experts. And finally, you get to join a community of fellow AI enthusiasts who are just as excited about learning about this emerging technology as you are. You can purchase 12-month access to the AI Learning Hub for $399. And if you want to get more information on that, you can go to sidecarglobal.com/hub.

Mallory Mejias: Welcome everyone to part one of the fundamentals of AI episode. We are really excited to dive deep on exactly that, the fundamentals of AI, and then, like Amith mentioned, in the next episode, getting into those practical use cases. I would say this is going to be a great episode that you can refer your friends to if they might be just dipping their toes into AI in the future.

But I also think this will be a solid episode to refer back to if you are maybe more in that intermediate phase. I know for sure that I am going to learn some stuff in today's episode and in next week's episode. So keep in mind, I'm sure we'll be updating this episode in a few months because [00:03:00] honestly things change so fast, but feel free to refer back to this fundamentals episode at any point if you want a refresher on what AI is and how you can use it.

So today in this episode, we will be talking about AI: what it is, core components of AI, different types of AI, and then why it matters so much right now, including this exponential growth curve that we're on. So, first and foremost, an introduction to artificial intelligence. What is AI?

We gotta go right down there, deep. So, Google says that AI is the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision making, and translation between languages. I'm just now thinking, actually, maybe I should have asked ChatGPT what the definition of AI was.

That could have been pretty funny. But I will say, we like to use a slightly different, maybe a more simple [00:04:00] definition of AI. And that is: AI is the science of making machines smart. Now, this was said by Demis Hassabis, founder and CEO of DeepMind, which was acquired by Google in 2014. Amith, my first question for you, kind of simple, kind of difficult.

What is AI and why does it matter?

Amith Nagarajan: Well, on the definition of what it is, the reason I like the quote that you just shared from Hassabis from DeepMind, who, by the way, is still at Google, leading their combined AI research efforts there and doing amazing work, is because it really signals to you that it's a moving target. An earlier definition that was very good and accurate and detailed will be out of date soon because of the examples it gives, you know, visual perception, speech recognition, translation between languages. Up until a few years ago, those were science fiction, and now they're science fact. Of course, the idea behind that definition is to say what would normally, quote unquote, require human intelligence, and that's really a moving target [00:05:00] because AI is moving so quickly.

So the idea of making machines smart is simply about getting them to do what you don't expect them to be able to do. And that's why I like that definition. Now, what it really is under the hood is a multitude of different technologies. AI is a field that actually goes back in time about 70 years. So from when digital computers first started coming online, the ideas of artificial intelligence were almost immediately there, you know, in terms of how to go about doing this. And of course, if you go back further in time, the ideas of thinking machines and robotics in popular fiction had been around for generations before that. But as a scientific discipline, AI has existed in most people's definition for about 70 years. Now, the challenge was that many of the early theorists actually had some pretty good ideas, but they didn't have the equipment to do the work. It's kind of like people might have had the ideas for advanced microbiology concepts, but didn't have a microscope. In a similar vein, you know, [00:06:00] computer scientists back in the fifties and sixties, and even up until the early part of this century, did not have the raw compute, did not have the data, did not have the raw capabilities to make the algorithms really work. So with the current explosion of AI capabilities that you're aware of, I'm sure, as a listener or viewer, a lot of them come from a technology called artificial neural networks, and we'll talk more about neural networks.

But up until actually about 10 years ago, a lot of people were detractors from neural networks, saying the technology would never work. Actually, I should say more like 15 years ago; about 10 years ago it was already blowing up. But 15 years ago, many people were still saying neural networks will never work, that they're a great theoretical concept, and people were pursuing other ideas within the AI umbrella. So there's many other concepts, algorithms, and ideas. But neural networks are the fundamental building block of technology that powers AI today. So from my perspective, [00:07:00] it is somewhat synonymous to say AI and neural networks are equivalent.

But just understand that it's a broad field and there's many other techniques within AI that are not neural network based that can complement neural networks.

Mallory Mejias: So that was going to be one of my follow up questions. Is AI, quote unquote, new, and how has it evolved over time? So it sounds like, based on what you're saying, we had the ideas for a while. We just didn't have the equipment.

Amith Nagarajan: The original ideas for neural networks go all the way back to the middle of last century, and then even back in the eighties, there were significant advances made with techniques like backpropagation, which is a technique within neural networks that allows them to make sense of much larger, more complex data sets through multiple layers, and a lot of other things we won't talk about here. So there's been progress happening from a scientific perspective for years and years and years. But we've kind of had this convergence where, about a decade ago, a little bit longer than that, there was the confluence of [00:08:00] data and compute. And the data came from network access, but the network access itself was a very valuable resource for AI, and that led to a number of innovations.

We're gonna talk about those in this episode and the next. But yeah, the concepts have been around a long, long time. The idea, and why it's so powerful, to really get back to why it matters for associations and nonprofits, Mallory, is why it matters for everyone: if you can make intellect, and fundamentally the idea of reasoning, something computers can do, then that capability opens up the doors to prosperity for people who really haven't had a chance to experience the developed world.

That's exciting, to, you know, kind of level the playing field. It's exciting to think about how nonprofits and associations can harness AI to advance their missions. And, of course, in the commercial world, there's tons of opportunities for growing businesses. I think AI will basically affect every corner of life, not just business.

Mallory Mejias: In what ways does [00:09:00] AI mimic human intelligence, and in what ways does it differ from human cognition?

Amith Nagarajan: The basic concept of a neural network is similar to the way biological neural networks work. So, an artificial neural network is something that is constructed in a similar fashion. It's very much taking cues from biology. And the idea is essentially you have neurons, and these neurons are connected to each other. And neurons are, at a very, very granular level, essentially capable of signaling, you know, based upon different inputs. So you pass inputs through a neural network and different things happen. Now, if you think about systems in any biological environment, you have these layers of neural networks that build up, and you have, you know, things like vision, for example, which is a very complex phenomenon when you think about it from the top down. It works in a similar fashion. So, artificial neural networks have been heavily inspired by biological neural networks. [00:10:00] Of course, the scale of data artificial neural networks can be trained on is quite different. Now, it used to be that we would say in the field that, you know, artificial neural networks will never get anywhere close to, not even a human, but even other animals. And then all of a sudden there was this explosion, where now artificial neural networks, pretty much all of them, are trained on far broader sets of data. It doesn't mean they're smarter yet, but it means that they have a bigger training set. So it's different. I'll say the thing that's interesting, that researchers are really trying to figure out, is how you can train a neural network with very few examples and get it to have incredibly high quality.

So an example of that would be, you think about image recognition and the millions and millions of images that were needed to train early image classifiers (this goes back about 15 years), and compare that to a baby or a toddler. Perhaps you give a toddler two pictures of a cat and a dog. And if they've seen a cat or a dog even once or twice, they can probably say cat or say dog [00:11:00] and identify it. So how does that work? You know, that's not necessarily a mystery of science as much as, I mean, there's definitely mysteries there, but it's an area of engineering and an area of computer science where we're still trying to figure out how to mimic that type of really rapid learning from a small data set, because it's kind of crazy how much data you need to train a computer to be really good.

But with scaling, we've actually made these image recognition models, as just one example, far better than any person. So that's one area where there's a significant difference: something in biological neural networks allows for far more flexible learning. And you also have generalization.

You know, models today are very narrow in their scope, and we'll talk about that in a bit. But the idea is that, you know, they really are only good at a narrow range of things, whereas humans in particular, but really all biological creatures, right? We have a range of things we can do once we're trained.

So those are a couple of main distinctions.

Mallory Mejias: So in the way that a neuron in your [00:12:00] brain fires a message to the next neuron, you can think of a neural network, which you said could be synonymous with AI at this point, as sending a message, quote unquote, from one neuron to the next. Are they called neurons?

Amith Nagarajan: Yeah, they're still called neurons. The idea basically is there are signals propagating up and down in the network. So, you know, for example, the lowest layers of a neural network might be doing very basic things. In the context of image recognition, their role might be something along the lines of simply detecting edges or rough boundaries of shapes in a picture, and then the next layer up might take into account something like color. The next layer above that might take into account other things. And so these networks basically propagate and back propagate these signals in order to basically identify things.

And that's, I'm using a very simple example. Ultimately, all neural networks share this in common. So yes, it's a similar concept.
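To make the layered-signals idea concrete, here is a minimal Python sketch, an illustration of ours rather than anything shown in the episode: each neuron computes a weighted sum of its inputs and "fires" through an activation function, and each layer feeds its outputs to the next.

```python
import math
import random

def neuron(inputs, weights, bias):
    # Weighted sum of incoming signals, squashed to a 0-1 "firing" strength.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

def layer(inputs, weight_matrix, biases):
    # One layer: each neuron sees every output of the previous layer.
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

random.seed(0)
x = [0.5, 0.2, 0.9]                                  # input signal, e.g. pixel values
w1 = [[random.uniform(-1, 1) for _ in x] for _ in range(4)]
hidden = layer(x, w1, [0.0] * 4)                     # lower layer: crude features
w2 = [[random.uniform(-1, 1) for _ in hidden] for _ in range(2)]
output = layer(hidden, w2, [0.0] * 2)                # higher layer: combined signal
print(output)
```

Stacking more layers like these, with the weights set by training rather than by hand, is essentially what the "deep" in deep learning refers to.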

Mallory Mejias: And the AI we have right now is not thinking or reasoning. [00:13:00] Is that right?

Amith Nagarajan: That's right. And you know, on the concept of thinking, I think 100 years from now philosophers will get to argue about whether or not machines are truly thinking. As far as their reasoning skills are concerned, our current state-of-the-art AI, as of early 2024, is not able to reason. It is able to simulate reasoning, and it's quite good at certain types of reasoning problems, but it's not truly reasoning. And that's an interesting conversation: if it's simulating reasoning and it's pretty good at a lot of problems, does it even matter that it's not truly reasoning? But the short version of the story is, right now, anything truly complex that requires step-by-step analytical skills, breaking the problem down into smaller chunks, possibly branching off in different directions depending on the nature of the problem and the sub-problems, AI does not do that. That is going to change very rapidly, but at the moment AI does not reason. As to whether it will eventually think, that's a question for someone far more skilled in philosophy than I am.

Mallory Mejias: [00:14:00] Right. Maybe in a couple editions of this episode in a few years, maybe we'll cover that.

Amith Nagarajan: a great thing for a special guest.

Mallory Mejias: Exactly. Diving into more of the core components of AI: at the heart of AI, we have algorithms, data, and computing power. Algorithms are the instructions or rules that guide AI to make decisions or predictions.

Data, which AI algorithms learn from, can be anything from images and text to more complex information. This data trains the AI to recognize patterns and make informed decisions. Computing power is essential for processing large datasets and running complex algorithms efficiently. Together, these components enable AI systems to perform tasks that usually require human intelligence, like understanding natural language, recognizing objects in images, and making predictions based on data. Now, going one step further, at AI's core, machine learning and natural language processing stand out as crucial components as [00:15:00] well. Machine learning, and often you'll see this listed as ML, enables AI to learn from data, improving its performance over time without being explicitly programmed for each and every single task.

It's the backbone of AI's ability to make predictions and decisions. Natural language processing, or NLP, on the other hand, allows AI to understand and generate human language, enabling interactions between computers and humans that feel natural, or at least feel natural to us as humans. Amith, can you explain the role of machine learning in AI and how that differs from traditional programming?

Amith Nagarajan: Sure. I think for purposes of our audience here, machine learning and deep learning and AI and neural networks all kind of overlap a little bit, and that's okay. You don't need to really be able to distinguish them. And truthfully, even within the field, there is a lot of overlap between all of those terms. The main difference between traditional programming and all of these AI [00:16:00] categories is the idea that the machine isn't being told specifically what to do. In traditional computer programming, you have a goal, whatever that is. I might write a program that says, hey, I'm going to take a document, and I'm going to open it up, and I'm going to read the contents of that document. And then I'm going to, let's say, break it up into chunks for whatever reason, right? So there's some process. I decided I wanted to write a program to do that. Or a program that processes an order, processes a credit card. Every one of these programs, whether they're as complex as an operating system or as simple as, you know, a very, very basic program, they all have a set of rules. And these rules are codified by software developers or programmers. So a programmer will say, okay, well, this is what my goal is. And they'll take the problem and start breaking it up into smaller and smaller chunks. And then eventually they'll say, okay, well, for this chunk, basically I have to write the code that has these particular rules written into it.

So you're [00:17:00] coding the specific rules and objective of your software. And so that's how traditional programming has worked for a long, long time. In comparison, all of the AI areas work differently. What you're doing essentially is feeding a lot of data into a model. A model is, think of it like a program. It is a program, but a model has two phases. There's a training phase, and there's something called inference, which is basically when you run the model. So when you work with ChatGPT, it's in the inference phase. The model is trained, in some cases for months in the case of these very large models. And the model builds up through that training process. That's how it's building its neural network. So no programmer goes out and says, here's all these neurons, here's how they're connected, these are the different weights between the neurons. No one does that. That's the training process. Because literally, even in our smallest models today, we have billions and billions of parameters, which basically are a rough [00:18:00] analog to neurons and complexity. So there's no individual that's going in there and saying, hey, I know how all this stuff works. In fact, that's a problem of AI.

We can come back and talk about that: no one knows exactly how these things work, which is an issue that is being worked on, and I think will be rapidly solved, by this field called AI interpretability. But coming back to this idea of the structure and how it works: these neurons basically form through the process of training. So what happens is you have an algorithm, a training algorithm, that looks at data, and what it's doing is feeding that data into the neural network, and the neural network essentially is forming its weights. These weights are essentially the strength of the connections between the neurons. And so this organization process, how the weights and the parameters are structured, essentially is the training process. Now, that training process results in a model, and the model ultimately is a computer program, but it wasn't trained by rules. It wasn't trained by me or someone else going in there saying, oh, if this, then do this thing; if this [00:19:00] other condition exists, do this other thing. It is based upon the training data.

And so we've trained the neural network based on the training data, and then we inference with it, meaning we pass it some kind of input, like text or an image or whatever. And then it gives us an output. So it's very different than traditional programming. It's actually kind of hard to get your head around it because you're like, well, how does that work?

Like, how does it actually do what it's doing? There has to be some kind of logic in there. And the short answer is, it's just a simulation of how biological neural networks work. So it's very different than rules based programming.
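To ground that distinction, here is a toy-sized Python illustration of ours, not code from the episode: the first function hard-codes its rule the traditional way, while the second learns a weight and bias from labeled examples during a training phase and then reuses them for inference.

```python
import math

# 1. Traditional programming: a human writes the rule explicitly.
def is_spam_by_rule(subject: str) -> bool:
    return "free money" in subject.lower()

# 2. Machine learning: the "rule" emerges as learned weights.
#    Training data: (count of spammy words in a message, spam label).
examples = [(5, 1), (4, 1), (1, 0), (0, 0)]

w, b = 0.0, 0.0                     # weights start out meaningless
for _ in range(1000):               # training phase: fit weights to the data
    for x, label in examples:
        pred = 1 / (1 + math.exp(-(w * x + b)))   # sigmoid "neuron"
        w += 0.1 * (label - pred) * x             # nudge weights toward the data
        b += 0.1 * (label - pred)

def is_spam_by_model(spammy_word_count: int) -> bool:
    # Inference phase: reuse the learned weights on new input.
    return 1 / (1 + math.exp(-(w * spammy_word_count + b))) > 0.5

print(is_spam_by_rule("FREE MONEY inside"))  # True, because we said so
print(is_spam_by_model(3))                   # True, because the data said so
```

Nobody typed the boundary between spam and not-spam into the second version; it formed in the weights. That is the same story, at a vastly larger scale, as the billions of parameters described above.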

Mallory Mejias: Well, this makes the black box of AI make a lot more sense to me because, Amith, you and I have talked about how a lot of people who are creating these models out there don't fully understand how they work. And this makes sense to me. At the top of this episode, I said I would be learning things. I actually didn't know that the neural network just comes about.

I assumed that that was something we were putting there. So how is that even possible?

Amith Nagarajan: Well, think of it almost like a parent kind of shaping the way their child [00:20:00] is learning. And in the case of neural networks, there's a lot to it in terms of, obviously, the algorithm you're using to train it and the type of neural network. There's many different kinds of neural networks that all have different uses and different pros and cons. And, you know, you do have some architectural decisions in terms of network depth and things like that. But ultimately, what's happening in the training process is you're just kind of taking a baby and feeding it information. Yann LeCun, who's the head of AI research at Meta and a brilliant researcher in this field, was recently quoted at Davos as saying something along the lines of:

Our biggest AI models have consumed roughly as much information as a child does in the first three years of life. So it's not necessarily a great comparison in terms of how intelligent these models are, obviously, but it's an interesting data point in terms of how much information we consume through all of our senses, right? From the moment we're born. Coming back to your question, Mallory: if you think about how the training process works, there's all [00:21:00] the algorithmic research to build the algorithms that do the training and to construct the neural network in a particular way. That's the algorithmic work.

But then part of the training process is selecting the data, saying, oh, well, what type of data are we going to put in here? And how much data will we put in there? You know, we talk about open source on this podcast a fair bit. And one thing that's important to understand about open source in AI is that there's many pieces that you can open source, but not all open source models necessarily make everything open. For a model to be open source, the basic algorithm to run it and the weights of the neural network have to be released; otherwise you can't run it or do other things with the neural network. But, you know, there's other things that you might want to consider thinking about. Like, for example, is the data that was used to train the model open source as well?

Or do we even have a description of what it is? Right? And more and more model developers, including proprietary model developers, are telling you that. Continuing kind of the growth-of-a-person [00:22:00] analogy: if you were to say, okay, well, I'm going to hire an employee and they just graduated from college.

They're 22. They just graduated from a university. You probably want to know what they majored in. You probably want to know which university they went to. Maybe you care about their GPA. Maybe you don't. But you probably have some interest in their background. If they're a more experienced individual, you look at their resume, you look at what they've been doing.

Well, that's their training data set in a sense. And so, you know, what you train a model on directly influences what the model can do.

Mallory Mejias: Is it safe to say that the more layers in a neural network, the more advanced the model?

Amith Nagarajan: Yes.

Mallory Mejias: Okay, so then how have we gone from less layers to more layers? Is that just more compute power?

Amith Nagarajan: Yep,

you got it. Because layers are computationally intensive, and also, to train a model that justifies (and has) a lot of layers, you have to have a lot of data. And the scale of data, you know, if you think about the convergence of really high compute, a lot of network bandwidth, and data, [00:23:00] a lot of it's come together because of the Internet, the Internet scaling, right?

The Internet is still very early in its life, a fairly new technology. People don't realize that, but it's not that old. But the scale of the Internet has reached staggering proportions because it's growing at an exponential pace, too. And in addition to that, you know, obviously, compute has been on an exponential curve. So we couldn't do the kind of neural networks we have now.

So, even really 10 years ago, for sure, and probably even a few years ago, we couldn't do a lot of the things we have now. You know, a lot of people kind of woke up to AI with ChatGPT in the fall of '22. It's been about, I guess, 14, 15 months since the popular imagination has been captivated by AI. But GPT-3.5 is what powered the original ChatGPT. And as the numbers indicate, you know, it's the third-and-a-half version of that model. GPT-1 came out years before that; it was very limited in its capabilities. It was more of like a research project. GPT-2 came out from OpenAI, and that was starting to get interesting, you know, as people started building products on it.

And then GPT-3 came out [00:24:00] a good, I think, 18 months before ChatGPT did. And a lot of people, including companies in our family of businesses, were using GPT-3 to build AI solutions before ChatGPT existed. So the point I'm making is these models have progressively become bigger and more complex and more layered.

Mallory Mejias: Last question here, Amith. When talking about natural language processing and computers understanding language as we know it, was that kind of the last piece that made AI explode within the last 14 to 15 months, or have we had that capability for a while?

Amith Nagarajan: Well, true natural language understanding and comprehension is a fairly recent thing, you know? So I think we're gonna talk a little bit later in this podcast about exponential curves and maybe share a chart or two that depict the progression of natural language processing through reading comprehension.

And you know, when we think about what that means, it's kind of a stunning advance, actually. And that's why people are freaking out so much: [00:25:00] computers can now interact with you in the language that you speak, rather than requiring you to go and learn a computer language or to learn how to use a particular type of program. You can just interact with computers through language.

So, yes, that's definitely the piece. It's kind of like the killer app for AI, so to speak, because what it does is it makes AI accessible for everyone. Anyone can interact with an AI because the AI is able to interact with you.

Mallory Mejias: That makes sense. So now that we've talked about core components of AI, we want to discuss the different types of AI that are out there. And we can divide AI into two overall groups. Narrow AI, also known as weak AI, specializes in one task, performing it as well as, or maybe even better than, a human.

Examples include chatbots, recommendation systems, and image recognition software. This type of AI operates under a set of constraints and doesn't possess consciousness or self awareness. General AI, or Strong AI, also known as [00:26:00] Artificial General Intelligence, AGI, lots of I's there, remains a theoretical concept.

It would have the ability to understand, learn, and apply its intelligence across a wide range of tasks mimicking human cognitive abilities. Unlike Narrow AI, General AI would possess self awareness, problem solving, and reasoning capabilities across diverse domains without being explicitly programmed for each task.

Now, those are kind of the two overarching groups, but I also want to talk about generative AI, which is, I feel like, a subset of AI that's gotten a lot of press within the last 14 or 15 months, as Amith mentioned. It focuses on content creation, and it can be classified under both weak AI, for its specialized applications, and strong AI, for the potential of its advanced learning and generative capabilities.

Generative AI is designed to generate new data resembling the training set, including text, images, and video, as we talked about in our episode last week. It's a really versatile tool in AI research and application, and while [00:27:00] it's primarily used for specific tasks aligning with weak AI's definition, its capabilities hint at moving toward more generalized AI systems.

Amith, would you say that most of the tools we have access to right now fall under narrow AI?

Amith Nagarajan: answer is yes. longer answer is, much like the definition we shared early on about what is AI making machines smart what is. A narrow AI or weak AI versus what is strong AI or general AI is also a bit of a moving target. There are some milestones along the way to AGI that I think are quite clear. Strong reasoning skills and the ability to generalize are the key components. But yeah, right now it's definitely, we're not at AGI yet. And there's a lot of speculation, obviously, about, like, how long it'll take to get there, and will we get there, and all those kinds of interesting questions. I think we can come back to that at some point, but ultimately, to answer your question, yes, what we have right now is [00:28:00] weak AI. And, put another way, it is the worst AI that you will ever use, because the AI keeps getting better and better, literally, on a weekly basis. So, it is, it is definitely A stepping stone. This is not the end product that you're gonna end up with. And, by the way because of that statement, I've often been asked this question. Well, if that's the case, why even bother starting now if we know that it's gonna get way better really soon? It's a reasonable point. However, the biggest problem with that thinking is that it's an issue about you. It's not an issue about the AI. And what I mean by that is You need to start learning the AI, even if you're starting out with basic, simple, weak, narrow, whatever term you want to use.

It's just today's current 2024 AI is pretty powerful, but it's not the ultimate AI. But if you don't start learning it now. You're gonna have a problem learning the more complex, more powerful AI s. That are coming. Now, will those AI s. Be smart enough to just [00:29:00] figure out that you don't know how to use them that will help you? they will. But you're putting yourself at a disadvantage if you just kind of sit on the sidelines. So I think that was a reasonable thing to say. Maybe in 2015, maybe even 2020. But in the last few years, I think you need to get off the sidelines and start to play ball, even if you're playing nerf ball for now, because that's what you have is the current AI. We have or is not something that you play at the professional level with. it is something that I think people need to get familiar with and need to get comfortable with now. But that's my very long winded answer to say yes. The current AI is very much narrow or specific use cases. Yes.

Mallory Mejias: And you know, I think if you extrapolate that example you just gave to kind of any other technology, let's use cell phones, for example, if someone, you know, was like, I'm not going to use this flip phone or this BlackBerry because I know it's going to be better one day. They probably would have been at a disadvantage, you know, when iPhones came out and they had never used a cell phone in their life, but I'm sure you can kind of apply that across the board.

I do think it's overwhelming, especially when we're about to talk about this exponential curve that [00:30:00] we're on, to think of how quickly things are moving and how it almost feels impossible to keep up with it. But at the same time, I don't know if that's a reason not to engage.

Amith Nagarajan: That's the reason you need to engage, because you've got to start learning a little bit at a time. We are amazing, you know, creatures, so to speak, as a species, right? We can do so much, and we're highly adaptable, and we're able to generalize and reason, and then we have a capacity for art and so many wonderful things. But we work at a linear pace. Our species is linear in how we think and how we adapt, how we evolve. That's not the way AI works. So you need to jump on this stuff. Even those of us that are spending a lot of our time on AI can't keep up with everything that's going on. We are overwhelmed. So don't feel bad about that. Perhaps you have some anxiety over this. Perhaps you're excited. Channel that energy, whatever it is, towards taking a step towards learning. That's the most important message in all the talks that I give: start learning now.

Mallory Mejias: You mentioned that narrow AI is, in a way, a step to [00:31:00] strong AI or artificial general intelligence, AGI, so does that mean once we have AGI that narrow AI will just cease to exist?

Amith Nagarajan: No, I don't think so. There are already examples where, while it's not what I'd call AGI, we have these broader models like OpenAI's ChatGPT, Google's Gemini, Anthropic's Claude, and these broader models are capable of doing a wide array of different things. In fact, we're not even sure of everything that they can do, right?

You learn by example and see what they can and can't do, but they're able to do a wide range of things. Are they perfect at any of those things? Really, not. They're good at a lot, but it's kind of like generalists versus specialists in life, right? Where, you know, you might have a Honda Civic and you might need to have the oil changed.

You just go to Jiffy Lube. But if you have a specialty car of some sort, you might not want to go to Jiffy Lube. You might want to go to someone who knows that particular vehicle really well. And so specialists exist in our world, and they're going to exist in AI. So to [00:32:00] give you a more useful example, you might have specialist AIs that do things like recommendation engines.

And that's actually not generative AI. Recommendation engines fall into the category of predictive AI, and they've been around a long, long time. And you can use both, right? You might use a recommendation engine along with a content generator. So you might recommend content for your audience on your website, and then you might use a state-of-the-art generative model to summarize that content. You might also have other models that are specifically tuned on particular things, like customer service, or within your professional domain as an association.

I think that there will be different levels of AI for different kinds of things, and you'll mix them together. Another way to think of it is: imagine AGI is kind of like an aircraft carrier. It's this massive, hulking thing. It can project power. It can do things, right? But that doesn't mean you need the aircraft carrier to go to the grocery store. You might just want a small boat for that, one that can get there quickly and nimbly. [00:33:00] And AI models are somewhat similar, in that, you know, there's a cost factor, but there's also a speed-of-execution factor. Narrow and small models are incredibly fast; they're essentially instant, whereas these big models take a little bit of time to run, and they're super compute- and energy-intensive, which makes them costly.

Mallory Mejias: That makes sense. Well, if you've gotten to this part of the episode, I'm hoping you've learned at least one thing, probably many things about AI, but you might be sitting here listening or watching if you're on YouTube, thinking, why does this matter to me? Why does this matter to me professionally as someone who works for an association or nonprofit?

And we think that the potential of AI for associations and nonprofits is huge, and that it can and will revolutionize your operations, your decision making, and your member engagement. AI's ability to analyze data, predict trends, and automate tasks can significantly enhance the efficiency and effectiveness of your organization.

So that's one piece, the potential. [00:34:00] Parallel to that discussion on AI's potential, we have to consider the pace of advancement. So I want to go a little bit further back in history right now and talk about Moore's Law, which is something, Amith, that you talk about in most of your presentations. Moore's Law predicts the number of transistors on a microchip will double approximately every two years, while the cost of computing is halved.

Moore's Law implies that as technology advances, the capabilities of digital devices increase dramatically while the cost decreases, enabling more widespread access to powerful technology. So you might think that that sounds impressive, but consider that AI computational power is on a six-month doubling curve. Actually, we might even be talking about less at this point, Amith. We are in a period of exponential growth, which is why we've got to engage with these technologies now, so we can harness them and leverage them for your organization's own growth and impact. Amith, I know you like to talk a lot about this exponential curve that we're on, exponential growth.[00:35:00]

Why is this so essential to consider when talking about AI for associations and nonprofits?

Amith Nagarajan: Well, what's happening is, because computing power (and we'll come back to AI in a second), just raw computing power, has stayed on this Moore's Law curve, which, you mentioned the technical definition of the doubling of transistor density, and which essentially implies the equivalent, cutting costs in half for the same computational power, it basically means that the marginal cost of compute has effectively approached zero. So that opens up an era of abundance in terms of things you can do with compute. One of those things is AI. AI is obviously an application of compute, but it requires other elements to be successful, including data. And the thing about data is that data has also been on an exponential curve due to what's happened on the Internet.

 

And so, A. I. Is growing at a faster pace, partly because of the convergence of those two exponential curves, but also because there's just a tremendous amount of capital flooding in. And so there's an insane amount of research happening, and there's tremendous progress being made on the algorithmic front.

When we say algorithmic front, we're saying essentially, look, you're not just pumping more and more data into the same program. Essentially, you're coming up with new programs, new ways of doing neural networks, new architectures that are more powerful. And that's one of the reasons you see this happening so fast.

Now, there could be an argument out there like, will we stay on a six month doubling curve? Will that slow down a little bit? I would,

Mallory Mejias: Okay.

Amith Nagarajan: suggest to you that it doesn't really matter that even if we're on a doubling curve, that's no better than Moore's law in terms of speed, we're still in for a really crazy ride.

So all of that equates to capabilities that affect business and society. And therefore, you know, the rate of change being so rapid means there's opportunities and there's also risks. And that's why people have to pay attention to this is because you can't just insulate yourself and say, we're not going to change.
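To put rough numbers on that difference, here is a back-of-the-envelope Python illustration of ours (the ten-year horizon is arbitrary, chosen only to show the compounding): compare what a decade of growth looks like under a two-year doubling period versus a six-month one.

```python
# Back-of-the-envelope doubling math: capability multiple after 10 years.
years = 10

moores_law_pace = 2 ** (years / 2.0)   # doubling every 2 years  -> 2^5
six_month_pace = 2 ** (years / 0.5)    # doubling every 6 months -> 2^20

print(f"Two-year doubling over {years} years:  {moores_law_pace:,.0f}x")
print(f"Six-month doubling over {years} years: {six_month_pace:,.0f}x")
# Two-year doubling over 10 years:  32x
# Six-month doubling over 10 years: 1,048,576x
```

Even if the six-month figure slows, as noted above, any doubling period this short compounds into changes that linear intuition will badly underestimate.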

Mallory Mejias: we touched on this a little bit earlier, but I want to get into it a little bit more. How can you not feel like you're going to [00:37:00] be left behind when things are changing so quickly? Like I would even argue maybe as much education as you do, like you can't possibly keep up with what's happening. What do you say to that?

Amith Nagarajan: Well, like a lot of things in life, I don't have a great answer for this, because I too feel overwhelmed, and I spend a lot of my time doing this and it's my background. So I, you know, I'm pretty well positioned for the AI changes that are happening, but I also feel overwhelmed a lot of times. So I think part of it is you just have to get comfortable feeling overwhelmed at times. But the key to that is, you know, not to put your head in the sand and say, oh, I hope this doesn't, you know, affect me. You realize it's going to, and you look at it and say, well, I can't do everything. I can't stare down this mountain of knowledge I need to consume. Let me pick a piece of it. Let me learn something.

So you start off by just learning broadly, like, what is this stuff and how can it affect me as a person? How can it affect my organization? And then you pick and choose. You say, oh, that looks interesting over there. Or, oh, this particular capability could really help improve member service.

Or it [00:38:00] could, you know, change the game in terms of the way our field does its work. And you have to narrow your focus. So that's, I think, the key. It goes back to concepts of just how do you run a business? How do you run your life?

Mallory Mejias: Okay.

Amith Nagarajan: You have to pick narrow priorities and then stay focused on them. I think that exponential growth is a hard concept to really get your head around. You know, I spend time speaking with audiences all over about exponential growth and exponential associations in this era we're entering into. There's actually one chart that I'd like to share with our viewers. For those of you that aren't familiar, we run this podcast both as a podcast, through all the channels you're listening through, and also on YouTube, where you get to see us and, in a moment, see some additional helpful visual aids.

So I'm going to share a slide here that shows a particular set of growth curves that I think are quite interesting. For those of you that are watching on YouTube, you can see on your screen a pretty interesting chart, and I'll do my best to describe it for those of you that are listening. What we're seeing on the screen are capabilities within the domain of AI and how they've progressed over a period of time relative to human performance. So on this chart, we have a horizontal line about three quarters of the way up the chart, where human performance as the benchmark is set to zero.

And so essentially what we're doing here is comparing different capabilities of AI relative to the average person. That's basically what this chart's trying to do.

And then what the chart shows are five different lines. The first one on the left, starting in the late 1990s, is handwriting recognition, when people first started trying to tackle computers' ability to recognize handwriting. And what the chart shows is a pretty slow progression. So, to get from the point where computers were able to do anything at all with handwriting recognition, just being at, you know, effectively a small fractional capability relative to our skills, took a number of years of lots of research, lots of investment.

 

And in the late 90s that happened, and the original application, a little trivia tidbit, was actually to read zip codes. The postal service obviously processes a crazy amount of mail. So how do you read those zip codes? They're all written in various different kinds of handwriting. There's cursive, there's block letters. There's my handwriting, which I don't think even on my best day I could read. But all of these different variations were out there. So it took a while. And for those of you that are on the audio only, it took a good, you know, about seven years just to get to the point where it was respectable.

So it still wasn't human level in, like, 2005, but it kind of hovered there for a while. It was really useful for specific use cases like zip code recognition and reading checks. So banks were able to deploy the technology to validate that, you know, checks were valid, but also later on this enabled things like mobile check deposit, which most of you are probably familiar with.

And really those capabilities only got good enough to deploy at scale, you know, after 2010, 2011. So the deep learning revolution really happened in the early part of the 2010s. And that enabled, as the name implies, deeper neural networks, neural networks with far more layers, and the ability to communicate between those layers more effectively. And that resulted, all of a sudden, in an overnight success that was decades in the making. Handwriting recognition achieved human parity in around the 2014-2015 timeframe. It has gotten a little bit better from there, but the research essentially kind of stopped, because once we got to human parity, we were pretty much good.

The next curve is speech recognition. So we see a green line on the screen now, also starting where people were trying to play with this back in the nineties and really didn't make much progress. And then it was really slow progress all the way to the early 2010s, [00:42:00] when we're at, like, 20 or 30 percent capability. If any of you have ever used a product from the old days called Dragon NaturallySpeaking, it was a really cool dictation tool that you could plug into Microsoft Word.

And I tried to use the thing for a period of, like, several weeks. And it was painful because it just even typing with the other hand, you know, I was able to do faster than, than,

that tool worked. So, and it, it was not a programming injury, by the way.

um, But, but anyway I I found early on that it was basically hard to use.

Then all of a sudden, right, we had this capability come along: Siri, Google Assistant, Amazon Alexa, and you were able to sort of talk to these assistants. But they were able to understand only, you know, a little bit of what you said. The same thing started to happen in the mid-2010s: we go almost vertical, because deep learning catches up. You're able to pass in [00:43:00] massive amounts of training data, and the neural networks are able to understand speech. I'll go a little bit faster through the rest of these. We have image recognition, reading comprehension, and language understanding as the other three curves in this chart. People didn't really even try image recognition until kind of the late 00s.

I mean, there were research projects years before that, but ImageNet was a big project that started around that time, and it was originally

Mallory Mejias: Okay.

Amith Nagarajan: slow. It was very poor quality and the same thing happened is there was a project out of the University of Toronto called Alex net, which was the first real application of deep learning that blew people's minds in the image net competition where they got to Radically better performance very, very quickly, and that just got better and better.

And those of you that are looking at the screen can see that image recognition is the domain of computers now. They're far better than we are at recognizing objects, but also at really picking out a lot of subtle details that we would miss. And then if you look further on the chart, at reading comprehension and language understanding, I really like to point these out to people, because these lines are nearly vertical.

So even though, in the field of AI, reading comprehension and language understanding were thought to be essentially not only gold standards but grand challenges of sorts, right? They're things where you don't actually know at all how you're going to get there, but you know that they're incredibly powerful.

I remember when we started rasa.io about seven years ago, which is our AI newsletter platform. Rasa initially leaned only on predictive AI. So Rasa used predictive AI to predict what people would be interested in and then provide a personalized newsletter. And for years, that was, you know, just predictive AI, which was great, but our grand vision for Rasa was to actually summarize all that content. To say, oh, Mallory, if I send you five articles that are different from the five articles I get, that's cool.

That's like a good start. That's what predictive AI did for Rasa from the early days. But what if we could summarize all those articles for you, so that you have 200 words that tell you everything you need to know? [00:45:00] And generative AI can do that. But that requires reading comprehension and language understanding.

Compressing everything like that, language understanding, is a radically more complex discipline and endeavor than handwriting recognition. Yet in comparison to the 20-plus years of pursuit it took to get to human parity there, we did it with language understanding in 18 months. That should blow your minds, because that also means further capabilities that are far beyond any of this are coming to you fast, whether you like it or not.
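The predict-then-summarize pattern described here can be sketched roughly as follows. This is purely an illustration: the function names, the naive scoring, and the stub summarizer are hypothetical stand-ins, not rasa.io's actual models or API.

```python
# Hypothetical sketch: predictive AI ranks content, generative AI condenses it.

def score_interest(reader: dict, article: dict) -> float:
    # Predictive stand-in: naive topic overlap. A real system would use
    # a trained recommendation model, not a set intersection.
    shared = set(reader["topics"]) & set(article["topics"])
    return len(shared) / max(len(article["topics"]), 1)

def summarize(text: str, max_words: int = 40) -> str:
    # Generative stand-in: a real system would call a language model;
    # truncation keeps this sketch self-contained and runnable.
    return " ".join(text.split()[:max_words])

def build_newsletter(reader: dict, articles: list, top_n: int = 5) -> list:
    ranked = sorted(articles, key=lambda a: score_interest(reader, a), reverse=True)
    return [(a["title"], summarize(a["body"])) for a in ranked[:top_n]]

reader = {"topics": {"ai", "associations"}}
articles = [
    {"title": "AI for Member Engagement", "topics": {"ai", "associations"},
     "body": "How associations are applying AI to member programs..."},
    {"title": "Annual Gala Recap", "topics": {"events"},
     "body": "Highlights from this year's gala..."},
]
print(build_newsletter(reader, articles))
```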

Mallory Mejias: I would highly encourage all of you to check this out on YouTube, because just seeing it on a graph, right, with years of time spanned across it, and then a nearly vertical line going up, it's pretty wild. And honestly, I mean, I've seen this before, but what I haven't really paid attention to is the surpassing-human-capabilities part, and that is really neat, and I'm assuming it's just going to continue surpassing it.

Amith Nagarajan: Yeah, and I think, what's the limit in terms of performance in an area like language understanding and reading [00:46:00] comprehension? We don't know, is the short answer, because we're humans, and so even the best of us have a certain capability set. And actually the question is, will there be emergent things that we learn about language that we don't know, that are really interesting from a research perspective in those domains? That could be a fascinating thing to study. But beyond that, the question is also: well, even if it doesn't get better, if it's already 20 percent better than the average human, what happens when the cost becomes basically free, right? And so that's part of it, is that language understanding at the highest levels right now is still somewhat expensive.

It's fairly available, but if you think about GPT-4, Google's Gemini Ultra, Anthropic's Claude, that's where you see the examples of the best level of language understanding today. And those models are all fairly slow, and they're fairly expensive to use at scale. But that's all going to change. So imagine if you had instantaneous access to GPT-4 and it was basically free. That's the thing you have to plan for, because we'll be there in 18 months or [00:47:00] less.

Mallory Mejias: Well, there you have it, folks. That's the compounding of exponential curves that we're on right now that makes this stuff so important. I hope you all have enjoyed today's episode, part one of the fundamentals of AI. A reminder: next week we're going into practical application. So hopefully you learned a lot about the building blocks of AI in this episode, but if you want to learn how to use it and what next steps you can take,

Check out our episode next week. Amith, I'll see you then.

Amith Nagarajan: See you then.

Amith: Thanks for tuning into Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world.

We'll catch you in the next episode. Until then, keep learning, keep growing, and keep disrupting.

Post by Mallory Mejias
February 29, 2024
Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Mallory co-hosts and produces the Sidecar Sync podcast, where she delves into the latest trends in AI and technology, translating them into actionable insights.