What is AI? And what does it mean for me and the world?

Listen to my interview with the Insatiably Curious Podcast host, David Gee. A transcript of the podcast is below.


Welcome to the Insatiably Curious Podcast, where we invite lifelong learners to join us on a personal and professional journey. Now, here to inform, entertain, and enlighten, while always keeping it interesting from our nation’s capital: it’s your host, David Gee.

David Gee  00:26

Joining us today, so glad to have her: Kavita Ganesan… She is the author of “The Business Case for AI: A Leader’s Guide to AI Strategies, Best Practices & Real-World Applications.” Welcome to the show, Kavita.

Kavita Ganesan  00:40

Yes, thank you for having me, David.

David Gee  00:41

So one of the things when we have broad, complicated topics, and I certainly think this qualifies… I like to kind of set the landscape or the ground rules a little bit, to make sure we’re talking about the same thing, or that we even know, in some cases, what we’re talking about.

So we’re talking about Artificial Intelligence, AI, how it informs the kinds of decisions we can make at work. But why don’t you give us the layman’s definition of what it is we’re talking about, and maybe, second to that, the difference between how you might use it in your computer science lab in an academic or research setting and how I might use it at my office, or IBM, or something like that.

Kavita Ganesan  01:33

Yes, so AI is all about trying to mimic human decision-making within a computer, using software programs. And the way AI systems today work is by learning from data. So let’s take credit cards. Let’s say you’re trying to detect fraudulent credit card transactions. The way that AI systems today learn is by looking at thousands of different examples of what makes a transaction fraudulent or non-fraudulent. And then it mines the patterns [phonetic 02:09] from that. And the next time it sees a new transaction, it decides, hey, this seems suspicious, so it might be fraudulent. So that’s how systems today learn. But historically, they have been very rules-based. And in the future, they may not even be data-dependent anymore.
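The learning-from-examples idea Kavita describes can be sketched in a few lines of Python. This is purely illustrative: the features, the labeled transactions, and the tiny “nearest-pattern” classifier are invented for this sketch; real fraud-detection systems use far richer features and models.

```python
# Toy sketch of learning-from-examples fraud detection (illustrative only).
# Each labeled example: (amount, hour_of_day, is_foreign) -> fraudulent?
LABELED = [
    ((12.50, 14, 0), False), ((9.99, 10, 0), False),
    ((1500.0, 3, 1), True),  ((2200.0, 2, 1), True),
    ((45.0, 18, 0), False),  ((1800.0, 4, 1), True),
]

def train(examples):
    """'Mine the patterns': compute the average feature vector per class."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Label a new transaction by whichever learned 'pattern' it is closest to."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], features))

model = train(LABELED)
print(predict(model, (1950.0, 3, 1)))  # a 3 a.m. foreign $1950 charge
print(predict(model, (20.0, 12, 0)))   # a small daytime purchase
```

The point of the sketch is the contrast she draws: nothing here is hand-coded knowledge about fraud; the “rule” emerges entirely from the labeled examples.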

So right now it’s highly data-dependent. And from a business perspective, AI systems are very good at improving the efficiency of workflows. So let’s say, right now, your customer service agent is manually routing tickets to the appropriate teams to get things resolved. If you put an AI system there, it can do that very efficiently, 24 hours a day, and maybe even more accurately than your human agent.

But from a research perspective, in a research lab, that’s not how we look at things. We are not looking at what the benefits to a business can be. We are thinking about how we can get better: how can technique-2 become better than technique-1? So we are always looking at incremental improvements, either through better data or better techniques, always trying to beat the state of the art. So it’s kind of different, what happens in the research world versus what’s happening in the business world. And in business, we have to be completely focused on the applications of it, how it’s going to help us.

David Gee  03:40

There’s probably not a day that goes by that I don’t come across AI in the Washington Post, the New York Times, the Wall Street Journal, whatever your source of news is. It’s certainly on the minds of business leaders today. In the state we use it in today, is it kind of the purview of large corporations, the IBMs and Googles and Teslas, or are you seeing the use of AI filter down to more medium-sized businesses, or even, in some cases, small businesses?

Kavita Ganesan  04:24

So there are two groups that I see that have really embraced AI: the large corporations, and that’s specifically in the tech sector, not in other industries, and startups… These AI startups are very good at using AI, and they’re very efficient because they’re small, they’re nimble, they know how to adopt new technologies and just run with it. And the large corporations on the tech side are already very AI-driven; their infrastructure is set up for AI. So these two groups are running fast with AI. But I think it’s the group in between, the midsize businesses, that are thinking about AI but just don’t know how to get started, because there’s so much confusion in the media, like what’s happening in research gets overstated as the current capabilities.

David Gee  05:16

In your book, you talk about some of the myths and misconceptions around AI that we should all be aware of… Why don’t you specifically outline a couple of the predominant ones?

Kavita Ganesan  05:30

Yes, sure. So the first myth people believe is that they need to use the latest and greatest technique that the media is talking about. But really, the latest and greatest techniques are still very much in the R&D phase. And they may not fit within your infrastructure; you may have an old infrastructure that can maybe take in basic computer algorithms, not something sophisticated that needs GPUs and TPUs. And you have to keep in mind that AI has been around since the 1940s. So the techniques are already very old; it’s only become popular now because of the computation power that we have. So any one of those techniques that can benefit your business is a good technique. Don’t worry about what’s being discussed in the media.

David Gee  06:20

I don’t necessarily want to make this a history lesson, but you just told me something I absolutely was not aware of, and that is 7 decades, give or take, of AI. So without supercomputers and the computational power that we have today, how was AI used in the 1940s and 1950s? I’m fascinated by that.

Kavita Ganesan  06:44

Yes, so back in the day, it was heavily rules-based: you had to encode human knowledge in the form of specific rules. And they had also started neural networks research long ago, but that research stopped because of insufficient computation power. And then it picked up again, I think, around 2011, when big data became a thing. Then we had lots of faster computation power, and it just accelerated from there. And neural networks has now become deep learning, and that’s a big field of study now.
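The rules-based approach she contrasts with learning can be sketched just as simply. Here human knowledge is written down by hand as explicit conditions; the thresholds and features below are invented for this illustration, not drawn from any real system:

```python
# A rules-based sketch, the way early AI encoded expert knowledge by hand.
# No learning from data: each rule is a human judgment written out explicitly.
def is_fraudulent(amount, hour, is_foreign):
    # Rule 1: large foreign charges are suspicious.
    if amount > 1000 and is_foreign:
        return True
    # Rule 2: sizable charges in the middle of the night are suspicious.
    if 0 <= hour < 5 and amount > 500:
        return True
    # Otherwise, assume the transaction is fine.
    return False
```

The weakness is exactly what pushed the field toward learning from data: every new fraud pattern requires a human to notice it and write another rule.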

David Gee  07:19

You talk again, in your book, about the 5 basic pillars of AI preparation and the AI landscape. Why don’t you go into those a little bit, if you would, please.

Kavita Ganesan  07:32

Yes, so given how AI systems today learn, they’re heavily dependent on data. So if you’re not already collecting data, or if your processes are still paper-based, then data infrastructure is a big pillar in your AI preparation. And you don’t have to start off fancy; you just have to make sure that you’re doing the basics to start with. Like, if you’re using paper processes, why not shift to Excel? If you’re not collecting data, think about which areas AI could benefit, and start collecting data in those areas. So that’s one huge pillar, the data infrastructure pillar. Then there is the cultural pillar. That’s about addressing some of the fears around AI, because a good percentage of Americans are fearful of AI, and research has actually shown that [unintelligible 08:36] that in my book. So you want to put these fears to rest so that companies can actually start looking into the benefits of AI, as opposed to fearing AI. Then there’s also understanding what development around AI looks like; that’s also a cultural element. It’s very experimental, it’s very iterative. So you need to establish those cultural elements. So those are the two big pillars:

  1. Data
  2. Culture.

David Gee  09:09

To follow up on that fear angle: oftentimes, just the way that humans are wired, we fear the unknown, whatever that is. Is that what’s at play here? Is it simply that most people don’t know enough about AI to feel comfortable with it? Or is it something kind of spookier, where they envision giant supercomputers and robots taking over the world with information, brains mimicking my brain, and so on? I know you’re not a psychologist, but in your take, what is it that people fear about the subject of AI?

Kavita Ganesan  10:00

So, there are some celebrities out there spreading this type of information, that it will take over our jobs and, later, humanity, that it can be used for bad things. And some of it is true: AI can be used unethically. But taking over humans, that’s untrue. Because AI systems today don’t have the common-sense reasoning that humans do: they can’t read body language, they don’t natively understand emotions, they can’t read between the lines and just connect the dots like humans can. But on very specific tasks, AI systems can be very effective, and can even surpass human accuracy.

So the misunderstanding is that these AI systems can become conscious like humans, and then start making decisions that can overpower humans. I’ve seen that it’s a real fear, really. And that actually came up when I was trying to hire one of the narrators for my book. She said, hey, I was afraid to audition for this book, because there’s so much fear about AI in our community. So it’s widespread.

David Gee  11:14

Interesting. Yeah, I can think of a couple of Hollywood movies where the machines are plotting… Ex Machina is actually one of the [unintelligible 11:23], where a machine plots to take over the mind and world of the founder, or the inventor, of the machine. So yeah, Hollywood has definitely taken up that theme, and I’m sure some media outlets as well. Back to the book: you talk about what you call vanity AI. Anything that’s kind of in the zeitgeist, or that becomes a kind of a thing, whether it’s social media or anything else, there are companies that will just kind of glom on to it, not in a real, authentic sense, but just to say, hey, guess what, yes, we do AI too… Can you talk about vanity AI?

Kavita Ganesan  12:11

Yes, vanity AI happens a lot, and a lot within large corporations especially, because there’s a rush to adopt AI. And as that trickles down the ladder, people think they just need to use AI. And the way companies start using AI today is just by looking at data, seeing what data is available, and then coming up with problems based on the data. But the problem with that approach is that it tells you nothing about the inefficiencies within the company. Like, what problem is it really solving? What pain point is it addressing? Because of that, the AI projects don’t necessarily solve real business challenges. And then executives don’t see the value from them, and they start distrusting AI as a whole, that it’s useless, it’s just hype. But really, there’s not enough planning around what problems we’re applying AI to.

David Gee  13:12

So, I’m in the content creation, marketing, messaging business, and I use data all the time to inform my content, whether it be page views, how long people engage with content, downloads, all those kinds of things. We have lots of ways to measure the way that people interact with our marketing, with our messaging. Do you draw a distinct line between what I would call data-driven decision making and AI? Am I using AI a little bit when I make those content decisions? Or is it something completely different?

Kavita Ganesan  13:57

Yes, that’s a great question. So AI can serve two grand purposes.

  1. One is to improve efficiency within businesses.
  2. And one is to help make better decisions.

And if you look at our data, we have a lot of structured data, things that fit neatly in an Excel spreadsheet. And then we have lots of unstructured data, like all the Twitter comments, all the documents within your company, your customer support conversations; all of that is completely unstructured. So when you’re trying to extract data-driven insights, you can’t just use your structured data, which I refer to as simple data analytics. To make deeper decisions, like what are your customers complaining about, what’s on their top wish list, you need that unstructured data. And you can’t really aggregate unstructured data like you do structured data; you need some form of AI on that, and specifically a branch of AI called natural language processing, which tries to make sense of all that text data and then extract key elements that you can later make sense of and use to inform decisions, like what are my customers’ top pain points, wish-list items, and so on. So there’s a lot of opportunity there.
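The idea of turning unstructured text into something aggregable can be sketched crudely. A keyword tally like this is a stand-in, not real natural language processing (the tickets and topic map below are invented for the illustration), but it shows the shape of the task: free text in, countable “pain points” out.

```python
# Crude sketch of extracting "top pain points" from unstructured support text.
# Real NLP uses language models; this keyword tally only illustrates turning
# free text into a structure you can aggregate like spreadsheet data.
from collections import Counter

TICKETS = [
    "The app keeps crashing on login",
    "Shipping was late again, very slow shipping",
    "Login fails after the update",
    "Crashing constantly since the update",
]

# Hand-made mapping from keyword fragments to pain-point topics.
TOPICS = {"crash": "stability", "login": "login", "shipping": "shipping",
          "slow": "shipping", "update": "updates"}

def pain_points(tickets):
    counts = Counter()
    for ticket in tickets:
        seen = set()
        for word in ticket.lower().split():
            for key, topic in TOPICS.items():
                if key in word:
                    seen.add(topic)
        counts.update(seen)  # count each topic at most once per ticket
    return counts.most_common()

print(pain_points(TICKETS))
```

Once the text is reduced to topic counts, it can feed the same kind of data-driven decision making David describes using for structured metrics like page views.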

David Gee  15:16

Well, certainly… I mean, regardless of how you achieve it, certainly from a customer service perspective, a marketing perspective, the more we know about our audience, our prospects, our consumers, the better off we’re going to be and the bigger competitive advantage we have, right? And it’s not just,

Kavita Ganesan  15:38

Yes, that’s right.

David Gee  15:38

Yes. So when you first came to the University of Southern California to get a master’s in computer science, were you hearing about AI from your first day in school? Was AI a big thing in the halls of computer science classes a… a while ago?

Kavita Ganesan  16:00

So USC is a special breed; they were very much into AI even when I joined. That’s how I got exposed to this whole field. I took one natural language processing class, and I was hooked. I liked it. So then I went on to do my PhD in the field, and then became a data scientist. But within USC, AI was a big thing. Within academic institutions, AI has been a big thing since, I guess, long ago.

David Gee  16:32

And you’ve been at the consulting game for 15 years, give or take, working with Fortune 500 companies as well as some smaller organizations. When you first, you know… when you got your freshly minted degrees and went out into the business world, were people conversing about AI? Did you have to do a lot of educating? Tell me about the landscape when you first started.

Kavita Ganesan  17:00

So when I first started, AI was not known in the industry. So I had to become a software engineer, because there were no prospects for AI right after I graduated with my master’s. So I did my PhD thinking that I’d go into a research lab to maybe use some AI. But around 2013 was when data science started to really become a thing. So then I jumped into the whole data science field, where I could apply AI. So it’s only around 2011 to 2013 that AI became a thing in the industry.

David Gee  17:46

We’ve spent a little bit of time today talking about both the past and the future. As you look into the future and into your crystal ball, what do you think is, and conversely is not, appropriate to expect from AI?

Kavita Ganesan  18:04

I think what we can expect is, as the understanding around AI improves, all the midsize businesses are going to start using AI the right way. And the non-tech companies are going to start embracing AI and start seeing value from it. But in the next few years, don’t expect a conscious bot to be walking around. That’s not going to happen in the next few years, and maybe not even this whole decade. So these are the two things that are going to happen: businesses are going to pick up on AI a lot more, and research is going to improve around trying to mimic human reasoning within an AI system, but not to the point of it becoming conscious, really.

David Gee  18:51

So you do see it filtering down into the mainstream and into smaller companies, though, is that correct?

Kavita Ganesan  18:57

Definitely. Yes.

David Gee  18:59

Anything that you’re concerned about as a data scientist, as a software engineer, as someone who’s spent so much time around AI? Is there a potential… I mean, we don’t know, you know, the capacity of human beings sometimes. But is there something that even you, as knowledgeable as you are, are a little fearful of?

Kavita Ganesan  19:22

So my biggest fear is, one, bias in data. That can propagate through AI systems, because AI systems learn from data, and bias has been shown to be very prevalent, like in facial recognition systems. In fact, some of the big tech companies stopped providing facial recognition systems for law enforcement, I think?

David Gee  19:47


Kavita Ganesan  19:48

Yes. So that’s one big problem, the bias in data. And the second problem is how people are using AI, like we saw with deepfakes, the recent news about how people can make you appear to say things that you didn’t really say, and then pass that off as reality. So the applications of AI really need to be regulated. Otherwise, it’s going to be used in unethical ways.

David Gee  20:15

Wow. Yes. I mean, talking about big tech and regulation: I live in Washington, DC, and that’s obviously, you know, in the pages of The Washington Post all the time, how we regulate these companies and social media. That could be a whole other conversation. So thank you so much for joining us. I’ve really enjoyed it.

Kavita Ganesan  20:36

Yeah, thank you for having me, David.

David Gee  20:38

Kavita Ganesan, author of “The Business Case for AI: A Leader’s Guide to AI Strategies, Best Practices & Real-World Applications.” Thanks again to her, and thanks to you for joining us as well. I’m David Gee. We’ll see you next time. So long, everyone.
