title AI Expert: Why AI Doomsday Is Overhyped

description Burned out at work? Get clarity on your next step with the Get Clear Career Assessment.

 

In this episode, Ken sits down with Zack Kass, the former head of Go-to-Market at OpenAI. Learn why chasing AI trends can derail your career, how AI is reshaping education and health care, and why human connection will become your greatest advantage.

 

Next Steps:


🪑 Join the Front Row Seat live audience! 

📗 Order Zack Kass’ new book today, The Next Renaissance.





Connect With Our Sponsors:


Head to Avocado Green Mattress today for $50 off adult mattresses with code FRONTROWSEAT.

Get 20% off when you join DeleteMe.

Try Quo for free, plus get 20% off your first six months. Quo: no missed calls, no missed customers.


 

Explore More From Ramsey Network:

🎙️ The Ramsey Show 

📈 EntreLeadership

💸 The Ramsey Show Highlights

🧠 The Dr. John Delony Show

🍸 Smart Money Happy Hour

💰 George Kamel

 

Ramsey Solutions Privacy Policy
Learn more about your ad choices. Visit megaphone.fm/adchoices

pubDate Tue, 21 Apr 2026 10:00:00 GMT

author Ramsey Network

duration 3821000

transcript

Speaker 1:
[00:05] Okay, Zack, so much hype out there about AI, and we're going to dive in, but right out of the gate. Will AI make most white collar workers unnecessary, or is it going to make us more valuable?

Speaker 2:
[00:18] Can you define unnecessary?

Speaker 1:
[00:21] Meaning you don't need a job. They don't need you. Let's put it that way.

Speaker 2:
One, I struggle with this definition of white collar versus blue collar, knowledge work versus non-knowledge work. But if the question, bluntly, is, will most of the work we do today be automated at some point by machines, or can it be? The answer is yes, it probably could be at the current rate. Then there's a bigger question, which is, will it be? We can go into this, but this is the theory of technological thresholds, what can a machine do, versus societal thresholds, what do we want a machine to do?

Speaker 1:
[00:54] That's a key distinction. So let's go there. Can it be versus will it be?

Speaker 2:
[00:57] But then I still want to answer something that you asked sort of implicitly, which is, you used the word unnecessary.

Speaker 1:
[01:05] Yes, on purpose.

Speaker 2:
Yeah, and I think that's part of the charge right now. And I want to start by saying, I'm trying to introduce ideas that let people arrive at their own conclusions. I have some strong opinions. Most of them I actually keep fairly close, because I want my job to be helping people make sense of this on their own, rather than giving people the answer.

Speaker 1:
[01:28] It's good.

Speaker 2:
And some of this means better understanding what it is we're facing right now, which is unprecedented in some ways, and which we've actually seen before in many others. Necessary, in this case, is a word that we should revisit, because it asks: will human labor be important to the future? My answer is absolutely yes. What will that work look like? It is not always clear right now. And that, I think, is different from asking, are humans necessary. Yeah, I believe in the soul and spirit of the individual. And I think there's a reason we're here, and we can go into that. But asking if we will have to do the jobs that we do today in order to fulfill the work that society needs fulfilled for us to live the lives we lead, probably not. And those things should be separated basically on lines of emotional displacement versus economic displacement. And the idea of economic-

Speaker 1:
[02:28] That's a good distinction.

Speaker 2:
The economic displacement argument says we're gonna be automated and we will be poor as a result, because we won't have jobs and so we can't make money, and we can't make money so we can't eat, and we can't eat so we will starve or we will eat scraps. The problem with that argument is that we are all descendants of people whose jobs were automated to our economic benefit for thousands of years, and we don't think twice about them. In fact, we wander the earth all day asking ourselves, when is this good or service going to be better, faster, cheaper, without realizing what we're asking is, when is a human going to be extricated from the manufacturing of that good or service? We don't do this because we're jerks. We do this because we are conditioned to believe the world should be better, faster, cheaper all the time because of automation and technology. And so I use this platform now to sort of talk to people about the fact that it is scary but not necessarily bad. That we live in a world that is so much more abundant than our great-grandparents could have imagined because we've automated so much. Many of the jobs that they did, we no longer do, very gratefully. But we don't consider what it actually took to arrive at this moment. And the fact that if you play it out far enough, the world could be economically abundant. But we might not know what to do. And I will quote John Maynard Keynes, and then we can challenge it and talk about societal and economic thresholds. That paper is now getting a bunch of attention, which is perfect, because I opened the book with it. I don't know if you've read the intro. It's the most important paper I read in college, and the paper that I would recommend everyone read, because it's so prescient in ways that I don't think have really been recognized even by modern economists. John Maynard Keynes, in 1930, writes a paper, Economic Possibilities for Our Grandchildren, in which he writes against a backdrop of despair.
People are dying in the streets of starvation, and he essentially says: I must now disembarrass myself to imagine a future that I will certainly not live to see, one in which humans will have solved the economic problem and be faced with something more profound. I get goose bumps, because the father of modern macroeconomics was proposing that at some point we would feed, clothe, and care for everyone, solving the economic problem, and then we'd really be in for it. And then humans would have to be like, okay, well, if I'm not toiling, what am I doing? And he was arguing that we were facing a spiritual event, which I sort of tried to lead people to in the book.

Speaker 1:
Today on the show, we're talking with AI expert Zack Kass. As the former head of Go-to-Market at OpenAI, he helped bring tools like ChatGPT to the world. Now, he's helping leaders navigate what he calls the greatest leap in technology and showing how AI can lead to a better future. By the way, I want to mention the book is, of course, The Next Renaissance, and I mean, that's a strong statement. For those of us who appreciate history, let's just go right there. You're leading to something that I wanted to talk to you about. I think that the AI fear is forcing some people to confront what purpose looks like outside of, to use your word, toil, and pay for provision, which is a pretty simple formula. Most people view work through that lens, which has always irritated me, as opposed to seeing work, as I do, as some form of creation, an art, and a part of our soul. I really believe that. So do you think that that's what's so scary about it, on some level?

Speaker 2:
There's a lot to say here, but part of the reason you believe this, and I think this is important to call out, is you clearly have a genetic predisposition to this. And the fact that your son is studying film says that you've passed him a very special gift that he may only one day come to appreciate, which is an innate belief that work should have purpose beyond the money you make. And people who see their work in that context are actually having a much easier time imagining the future than those who see their work as simply a means to make money. Because what we are facing, I think, is this very sort of hard question, which is, okay, well, if I'm not here to do a job, what am I here to do? And many of your listeners have the last name Miller or Smith. And the reason I'm sure of that is because those are two of the most common last names in the United States. And the reason they're so common is that for hundreds of years, we named ourselves after our professions. You could be very confident that in most cases, your son would do the job that you did. And so we just started co-opting the name of the profession. And then you would pass that profession as an apprenticeship to your child, often your son. And that carried forever, and that is no longer true. Out of that comes this idea in The Next Renaissance, which I use somewhat tongue-in-cheek, as is evident in the title, that we are approaching this profound awakening on economic terms, on political terms, on spiritual terms, in art and culture. And much like the first Renaissance, where we attribute the printing press as the major catalyst for that moment, the technology in our case is probably going to be some combination of electricity and AI, or electricity, internet, and AI, but maybe not even internet, maybe just the local computer that does the most work.
And I use this as a reference, as an analogy point, in part because the period that predated the Renaissance, the late Middle Ages, was a particularly awful time to be alive, where you really had no case for hope. And I borrow it to somewhat try to disarm the people who find themselves so distraught right now. By the way, the purpose of the book is to offer constructive responsibility as a reasonable replacement for ambient dread. And I see us as just sort of filled with ambient dread today, some of which is reasonably placed, but most of which is actually totally unproductive and counterproductive to what we want to create, which is a better world for our children. You cannot do that if everyone's just assuming things are going to be terrible. And I chose this period in time to call back to because of the amount of flourishing that happened at the individual level, when you unlock what the individual can do, and the amount of costs that came with it. The Renaissance was full of new deadly conflicts and a bunch of other things that we had to control for. It's not all good. That's not how the world works. But it is directionally good, and there's a lot of reason, in this case, to believe in something that is much better than what we have today.

Speaker 1:
[09:13] And is it fair to say that if we look back on history, every time there was a massive technological shift, there was also cultural panic to some degree.

Speaker 2:
[09:24] Oh yeah.

Speaker 1:
[09:24] Yeah. And so why should we not pay attention to history and believe that this is the final advance that somehow makes us all homeless and hopeless?

Speaker 2:
Humans are amazing in so many ways, and one of the best parts of my job in traveling the world is just seeing how amazing, and good, I believe, humans are. We have an incredible amount of neuroplasticity, so we're actually quite capable of updating to new information and being quite adaptable. But we lack an enormous amount of historical humility. We don't look backwards with any degree of humility: when I describe to people what it might have been like to explain space travel to someone in 1900, that thought experiment doesn't work. It breaks, because we hadn't flown a plane yet. The Wright brothers flew their plane in 1903. Think about what would have been required to even arrive at a place where you could theoretically convince someone that we would go to the moon, before we had even put a man 50 feet off the ground. And yet roughly 70 years later, we have rocket propulsion and, like, oxygen in space. I mean, kind of wild ideas. This plagues us in both directions. Humans struggle to have backwards humility, but we really struggle with forwards humility too. We have a negativity bias, and we have a rosy retrospection, which tells us the past was way better than it actually was. But we don't really project quite well on a societal basis. We're very good at saying, this is what I want to accomplish individually within these constructs. This is the family I want to build. This is the amount of money I want to have. These are the friends that I want to keep. But when we start to imagine what society could look like, we go, well, how will health care ever be less expensive? How will we ever fix the university problem? How will housing ever get affordable again? Well, quite easily, if we're really committed to it. We've fixed much harder things.
This is the challenge in this moment, which is actually pointing to all the problems that we face and saying we've never had more imminent solutions, but we have to talk about them.

Speaker 1:
All right, so we have an audience of professionals, they want to get better, they want to move up, and they want to lead well, right? That's the progression. And so to this end, adaptability, I love that you mentioned adaptability. I think it's the most valuable skill in today's workplace; incidentally, LinkedIn said it was, according to managers, two out of the last three years. The question is, how do we, Zack, adapt right now? Things are moving quickly. How does a professional who wants to stay relevant, keep growing and win, and lead well, how do they adapt to the AI tools and advancement? What can they begin doing now?

Speaker 2:
Okay, first, I will answer this question, but here's my very hot take. And I fell into this, I wrote a paper called The Adaptability Trap, after I was also guilty of telling everyone about adaptability. Then we started doing some research for the book, and I realized a couple of things. One, we ran an experiment and asked a bunch of people to describe their least favorite friend. And the results came back basically describing a chameleon. The least favorite friend is an important caveat, right? It's not your enemy, it's someone that you like, but the person-

Speaker 1:
[13:03] Yeah, it's a very good distinction.

Speaker 2:
And people cannot stand the friend of theirs who has all the qualities that they would want in a friend, but always says the right thing. Or always says the thing they assume will appease everyone else. And the word cloud sort of formulates around this idea of infinite adaptability.

Speaker 1:
[13:24] But hold on a second, this is great. I love that you're pushing back a little bit. But what you just described to me is not adaptability, that's manipulative. Yes? No?

Speaker 2:
[13:33] Well, it's manipulative.

Speaker 1:
[13:34] The friend who's always trying to do that is a deeply insecure person, it feels like to me. Adaptability is an attitude.

Speaker 2:
[13:43] Maybe, but let me push back on that. So if you say, hey, do you want to go hiking today? I go, sure, yeah, I love hiking. Let's go. Do you want to?

Speaker 1:
[13:50] I wouldn't say that.

Speaker 2:
[13:51] Okay.

Speaker 1:
[13:51] Because I don't like hiking, to my point.

Speaker 2:
[13:54] Yeah, yeah, fair. And I actually don't either.

Speaker 1:
[13:57] No, keep going, keep going.

Speaker 2:
[13:58] It's one of the great sins of living in Santa Barbara and not liking hiking. That's true. I do it anyways.

Speaker 1:
[14:04] Well, let's play it out. So you say, do you want to go hiking? And I say, sure.

Speaker 2:
[14:08] No, no, I say, do you want to go get Indian food tonight? You don't like Indian.

Speaker 1:
[14:11] Sure.

Speaker 2:
But you say, yeah, sure, let's go anyways. What we arrive at is not manipulation. You're not doing this because you're actually trying to manipulate me. You're doing this because you have a genetic predisposition, or you have this lived experience, that tells you you need to appease people. Again, this is not someone that people observe as being psychopathic. This is someone who people observe as genuinely trying to appease everyone by saying the right things. It is actually, I argue in the paper, infinite adaptability: I'm willing to do whatever you need me to do in order to accomplish this goal. And my point in this is not to actually define adaptability, but to say, there is a risk we run right now, individually and collectively, especially as things change. And you see this, I know you see this, of people performing in a way that they assume is optimal economically or socially, but that does not actually serve them or live out their values. And the problem here is that we've been trumpeting adaptability for so long that we've created this world where increasingly people seem pretty willing to do, quote unquote, whatever it takes. My argument in this adaptability trap paper is that what we need now more than ever are individuals and companies who are anchored to their mission, vision, and values, who are very unwavering in what they want to accomplish, what they stand for, and what their vision is for the future of themselves, individually and collectively. Because we need to be very adaptable to the ways and means. I caveat all this to say, yes, adaptability is great, but you should be anchored here, mission, vision, values, and you should be adaptable here, how you get there.

Speaker 1:
[16:06] Yeah, I agree with that.

Speaker 2:
What it is you want to accomplish shouldn't change just because OpenAI or Anthropic just launched a new model. That's happening all the time right now. All the time around the world, every time the research updates, someone goes, I know what I want to do now. And I go, really sit and think, because next week there's gonna be a new update. Paint a picture in your mind of the life you want to live, what it is you stand for, what you think your purpose is, what you want to accomplish, and then be super adaptable to how you arrive there. But this to me is a major distinction, when we talk about professionals and their pursuits, between what it is you want to accomplish and how it is you're accomplishing it. Now I totally agree. In the workplace, if you're clear on what it is you want to accomplish as a company, you should be super adaptable to how you do that. And if you're really clear on what you want to accomplish as an individual, you should be super adaptable to what it is you do. But I just draw this distinction because I think so many people right now are chasing the latest, greatest, and missing the fact that the point on the horizon doesn't actually have to change that frequently.

Speaker 1:
No, I appreciate that. You know, something that absolutely blows my mind: there are companies out there gathering your personal information and selling it online. Your name, your address, old addresses, phone numbers, even family connections, and you never approved any of it. That's insane, and that's why I use DeleteMe. Listen, I work in the public arena, so my name is my work. It's everywhere online. My reputation is my livelihood, so I take this seriously. But you don't have to host a national show to care about this. If you're building a career, leading a team, growing your income, your name matters. Don't let strangers profit off of your information. DeleteMe finds your personal data on data broker sites, gets it removed, and keeps monitoring those sites so it doesn't pop back up. DeleteMe is proactive, it's professional, and it lets you stay focused on what matters most. So if you're serious about winning at work and protecting your future, be serious about protecting your personal data. Right now, you can get 20% off an annual plan at joindeleteme.com/frontrowseat. That's joindeleteme.com/frontrowseat. What are the skills that you think people are gonna need to add to their tool belt, or make sure they're sharper if they already have them, so that they blend, I don't even wanna use the wrong word because a guy like you will take a word and, and I love that, but what I'm trying to get at is-

Speaker 2:
[18:42] Pedantically annoying.

Speaker 1:
[18:43] No, but I mean words matter.

Speaker 2:
[18:45] Yeah, they do.

Speaker 1:
[18:45] What I'm getting at are what are some skills that would behoove all professionals to be thinking about as AI advances in the workplace. How's that?

Speaker 2:
Can we, before we go any further: words do matter. I'm so glad you say that. And I cannot stand the point we have arrived at. Part of the reason that I post so little anymore on social platforms is I go there and I'm like, one of the issues with slop, and slop is slop, whether it's AI or human drivel, is that people now say the thing that sounds good to everyone else but actually has no real meaning. Things like, humans should always be in the loop. People say this. And then it gets a thousand likes and everyone goes, yes, this is how it should be. And I go, no, I don't think a human should always be behind the wheel of a car. I'm just not sure you're actually thinking about what you're saying; we don't need humans in all, quote unquote, mechanical loops. There are plenty of places where you do not need a human to intervene, because human intervention will lead to catastrophic failure, or simply because it's a job that we wanted to fully automate a long time ago, because it's super dangerous or it doesn't pay well, whatever it is. And this is part of, anyway, that's a separate aside. Skills. I think the number one is the one we talk about in the book and the one that I remind all young people of. Young people ask me all the time what they should study in college. Now we can come back to this. What they're asking is how do they make a lot of money, because they are terrified about the impending affordability crisis. And I feel like I now have to just remind everyone that the reason we're in an affordability crisis is not that we can't do things inexpensively. It's that we choose not to. It's not a technological failure that we have prohibitively expensive housing, prohibitively expensive healthcare, prohibitively expensive education. It's a policy failure. And we need to put incredible pressure on policymakers to fix that. But when I tell kids that, I then say I also have bad news, which is what you study doesn't matter.
There is a very low likelihood that your major will define your economic outcome, unless you study math at Harvard or Stanford, in which case you can go work at a hedge fund. And if you're going to go to a random college, pick a place on the map, then you can study anything. Study something you love. Study something that you are willing to be excited about, not because that thing will get you a good job, but because that thing is intrinsically exciting to you, so that you can taste mastery, which is not something we ever challenge high schoolers to experience, or not often, I should say. Only if you're an elite athlete or only if you're an elite student do you really taste what it is like to work very, very hard in pursuit of getting better. So do that. Go to college and study something that you can be excited about for the pursuit of that study, because when you graduate, your employer is probably not going to care about your skills, but they will care about your ability to acquire more skills. And this is also true for adults. No matter where you are in your professional journey, your ability to acquire more skills has never been more important, because the thing that got you here is not going to get you there.

Speaker 1:
[21:45] Give me some of those skills.

Speaker 2:
I mean, truly the ability, well, in this case, the ability to learn new ones. But one of the most important skills now is this new rise in agent coordination. People are building agents all across websites. Exceptional writing, it turns out. I've never seen more companies hire copywriters. You can look on LinkedIn, you can look on Indeed. There's this crazy rush to hire really good copywriters. Content creators: we're in a room full of people who have exceptional taste, who know very well how to line up a shot. There's this new interest in creating deeply human experiences. There's this skill now associated with taste that we didn't know how to define previously and are starting to define better, especially in media. There's this incredible new emphasis placed on what we used to call soft skills, now called human skills or whatever; I call them all sorts of things. I tell most people comedic timing has never been more valuable in a lot of spaces. Understanding how to toe the line between what's inappropriate and what's funny. There is a bunch of life skill activity that has become really valuable to employers, in large part because anyone can produce lots of content. How do we now produce much better content?

Speaker 1:
[23:06] Would curiosity make your list?

Speaker 2:
Yes, but I will argue that curiosity is actually not a skill, it's a quality. So this is where, again, words matter: I separate in the book skills from qualities. Some qualities can be learned, it turns out. Most qualities are imbued in us by our parents and our creator, but some qualities can actually be practiced, whereas skills are almost exclusively practiced, although some can be innate. Tiger Woods was born to swing a golf club; most of us have to try really hard, and even then. I hate golf. Serena Williams was born to swing a tennis racket; most of us have to try really hard. And so skills are things that you must learn, and qualities are generally things that are sort of imbued in your personality.

Speaker 1:
Well, let's go to another title, which is better: The Next Renaissance. I do like what you're doing there, because to me, it feels very positive and very hopeful that we are entering into, you know, maybe an explosion of creativity and maybe more humanness. It's my hope. Now, again, tell me if I'm Pollyanna positive, it won't hurt my feelings: that AI and its advances and adoption will only heighten the human touch.

Speaker 2:
[24:23] Why do you think that, first of all?

Speaker 1:
Because I think that technology is going to become even more pervasive than it is, for lack of a better word. It's just going to be so, so instinctive. You know, I think of how it's going to affect every area of our life, that we will rely so much on technology, it will become so interwoven into what we do, that at some point, I hope that we just want to really go someplace and sit next to a stranger at a concert hall, you know, and laugh or listen to music, or I want to go sit with somebody on a park bench and just have a great conversation. You know, I think the idea is that I have so much coming at me and so much insight and so much help and so much assistance from all of this AI, that what I really crave is connection and wisdom.

Speaker 2:
[25:10] So that argument is a decent one, which is that we're going to be, no, no, no, it is a decent one, but that one has sort of been refuted, which is that we're going to have so much technology that we're going to crave humanness.

Speaker 1:
[25:22] How has it been refuted?

Speaker 2:
Well, it's not that it's been refuted. By the way, let me not bury the lede: I agree. But I think we should be clear why. Some of it is true. We are inundated with technology, but I do think technological inundation has been a bit of a plague to our minds.

Speaker 1:
[25:40] Oh yeah.

Speaker 2:
[25:41] But my point in this is, I'm not sure that filling our life with more technology is actually the antidote that we crave and that in fact, the way we describe technology is in and of itself half the problem. When we talk about technology today, you and I, if you use the term technology, what we're talking about almost always is a computer.

Speaker 1:
[25:59] Right.

Speaker 2:
Whereas the history of technology is actually antibiotics, anesthesia, electricity, the Internet, the light bulb. Right? All of the things that have come before us that have allowed us all the luxuries and commodities that we experience today. And when we talk about technology now, we use it sometimes as a pejorative term, almost strictly to describe the device, the screen in particular. And the screen is a demon. I feel very comfortable saying it at this point. The screen is a demon that has stolen the lives of many people. Souls and spirits have been lost straight into the device, especially our children's, which we should fight to the death to reclaim. And we should acknowledge that we shouldn't have given kids smartphones, and we should talk about the fact that we definitely shouldn't have given them social media. And we should fix the scourge that these things have created, particularly for the child whose childhood should have been precious and the sanctity of which has been lost. And we can do that in two ways. We can acknowledge that the screen is bad and pass policy to restrict it and create safe spaces for kids, and also work really hard to create more ambient technology, which is so much of what we love: the stuff that works without our active participation, the stuff that works behind us. And this is where the opportunity for AI becomes so incredible, which is that it gets to start doing things in sort of two capacities. One is a lot of people's lives are gonna improve in ways that they can't experience, because we're just gonna get way better at novel sciences. AI is going to improve our understanding of the known universe, which is gonna benefit everyone, even the person who never downloads an app, right?

Speaker 1:
[27:45] Can you give us an example?

Speaker 2:
We discovered our first new antibiotic in 60 years recently because of AI. We split HIV out of DNA recently because of AI. Baby KJ, in May last year, the first infant in the United States, the first infant in the world, received a custom gene therapy invented with AI and CRISPR, and was cured of a previously untreatable deadly disease that would have killed him by the age of four. Baby KJ will go on to live a long and healthy life. You can actually start to argue on the basis of moral clarity, like, at what point are we playing God? And I'm willing to have that discussion. It's an interesting one. But for now, we should acknowledge that we are going to improve our understanding of novel life sciences and biosciences and start to end an enormous amount of needless suffering. And that's amazing. And particle sciences, material sciences, molecular sciences. The Japanese just discovered a plastic that's biodegradable, stronger, and cheaper. We're getting pretty close on fusion thanks to AI. And I tell people, unmetered intelligence is cool, the idea that intelligence is a resource. Unmetered energy is wild. Producing an electron without producing a carbon atom is something we've never considered, because it's never been possible. And it suddenly, increasingly, is. This future of true unmetered energy. And maybe we should have been asking all along not how do we spend less energy, but how do we spend much better energy, a lot more of it. So all this is to say, I think that the way AI ends up improving our lives is sort of in active and passive ways. But the passive way is ideally by moving so much of what we have to do in front of ourselves on the device.

Speaker 1:
[29:34] Which then in fact creates more connection.

Speaker 2:
Into the background. There are other things, I think. And moreover, your point is a good one. I believe in your argument for more academic reasons, and also more anecdotal reasons. And I will share a story. While studying for this book, I had long been looking at the roles where I thought the human qualities were going to have more value, in part because we were already observing the commoditization of the cognitive functions of the job. Wealth management, big one, and real estate. In 1995, most people picked their wealth manager and their real estate agent because they were sure they were going to get them more money, or get them a better price on the house. This was a very common belief. By 2005, most people agreed that wealth managers and real estate agents in general couldn't get them a better price, but they thought theirs could. They were like, maybe my guy or gal is good. By 2025, almost everyone is like, nope, my wealth manager doesn't make me more money, and my real estate agent doesn't get me a better price on the house. They choose these people, in the case of the wealth manager, because someone picks up the phone and they trust them. The number one reason a wealth manager gets fired: they are not accessible. They choose their real estate agent because they like spending Saturday and Sunday with them. In a world where you've got to have someone help you buy your house, you'd rather it be someone that you just want to be with. You're going to have to drive a lot of places with them, so you might as well explore the town with someone you enjoy being with. And this started to reshape how I saw services roles and these other roles, where it's like, okay, there's definitely something else going on here. And then one of the most important moments in my life happened, which I will share with you. My father is an oncologist.
He has been practicing cancer medicine for 38 years, during which time his specialty, breast cancer, the survival rate has gone from about 35% to 90%.

Speaker 1:
[31:22] Wow.

Speaker 2:
[31:23] Which is a reminder that technology can make our lives a lot better and we can do really hard things in service of each other. And he received a lifetime achievement award from the Breast Cancer Association of America last year. Now, growing up, in part because of these results and because I didn't understand the job, I saw my dad as a scientist. And part of my identity about my dad was that he was brilliant, and that's how he was saving lives. He was saving lives because he was smart. And he is, by any account, a smart man. He was a lawyer before he was a doctor. He speaks seven languages. And I was sure that that was why he was doing this. And I went to this award ceremony very excited to hear people talk about my understanding of him. And a patient of his got on stage just before he received his award, and she spoke about him. For the first 30 seconds, she talked about her patient outcome. She had survived cancer and was now giving back to the community alongside him. And for the next four and a half minutes, she talked about how he made her feel. And she described getting a cancer diagnosis and going to see three oncologists and getting the same three prognoses and the same three treatment recommendations. And she said two things I will never forget. She said she realized after the third identical prognosis that the doctor was no longer the smartest thing in the room. The technology had progressed far enough that the machine was now determining the next best action. And then she said the bedside manner is no longer a feature, it's the product.

Speaker 1:
[32:55] Oh, that's good. That's really good.

Speaker 2:
[32:59] And that changed my life. In that moment, I realized two things. One, if medical oncology, which we have long exalted as one of the most complex medicines to practice, could cognitively commoditize, then what role, at the limit, wouldn't? And also, what did it mean that my dad could now actually predicate his entire purpose on the qualities and skills that I loved in him as a father, courage and compassion, and that his patients loved him for the same reason I did? And he stood up on that stage, I will never forget, this is so powerful, and he's 77 years old, and he very comfortably admitted to the group that he was probably past his intellectual sell-by date. And yet he had never been a better doctor. And he said, I can care for every soul and spirit that walks through the door. And it is actually the final thing that compelled me to publish the book. I was pretty anxious up until this point. I was like, do I have enough conviction? And I was like, honestly, yes. Like, what an amazing world to imagine, where so much of the experimentation that is required can become automated such that the actual job becomes the interpersonal. And what more do we want but a world where we can care for each other very deeply, very intimately? And then I concluded in the book, as you have, that at some point, it will not be very impressive to be the smartest person in the room. And you will instead have to be the most courageous, the most compassionate, the funniest, the most moral, the most curious. That there is a point on the horizon, and I don't know when it is, when AI will actually require that we be more human. Because what else do we get to be?

Speaker 1:
[35:09] Hey, here's a truth I've learned the hard way. Cheap stuff usually costs you more. You buy the bargain mattress, and three years later you're replacing it. That's one of the reasons I love Avocado Green Mattress. They're assembled in the USA with certified organic materials, no chemical junk, no glues or adhesives, no petroleum-based foam, just natural latex, wool, and cotton. And they're built to last, holding their shape and supporting your body better. You're not throwing this thing out in a few years. You're choosing something made responsibly, a longer-lasting mattress made with materials you can trust for a healthier home environment. And less waste means it's better for the planet. That's how you buy well. And if you're a parent, they even offer kids and crib mattresses made with the same safe materials. So make a wise choice. Go to avocadogreenmattress.com/ken to find your next mattress today, and get an extra $50 off adult mattresses with the code FRONTROWSEAT. That's avocadogreenmattress.com/ken, code FRONTROWSEAT. Okay, let's keep going. You touched on this earlier and I made a mental note. I want to come back here because you're so visionary and you're on top of this stuff. How does education change? I'm really thinking higher ed at first, but this has implications. I'll let you roll into it, my goodness, as a father who still has one in high school.

Speaker 2:
[36:35] How old are your kids?

Speaker 1:
[36:36] 17, 18, and 20. Wow. So I've got one graduating, he's going off to college in the fall. But my daughter's got two more years left. What reason is there for her to go to school to learn what they're currently teaching, given that AI is going to just be so different in what it can tell them?

Speaker 2:
[36:59] Higher ed is going to have an awakening when it realizes that we've made some really strange decisions. One is we emphasize research in higher ed, when in fact private companies are leading the way now. Most great novel research is not coming from universities anymore. Not that they can't produce great research; they can. It's mostly economic research at this point, because if it's really valuable novel science, it's happening in a lab. And the universities can't really compete anymore. Now, I'm not proposing that we should privatize all research. I'm just proposing that if you want to do really great novel science, you're getting as fast as you can to the companies that are well-funded and well-capitalized to do this. This is as much true in computer science as it is in the life sciences. It's also true that we have endowed these universities with so much status that it's terrifying, institutionally, to not go to university right now, even if you know it might economically not matter. And we've convinced a bunch of kids that if they don't get into a good school, they're gonna be in trouble. That is a whole other plague on the student's mind. Because, by the way, we stopped building universities a long time ago. And so the acceptance rate just continues to decline in a nation everyone wants to live in. So you have these very few elite schools and a bunch of students that want to get in. And we have a 5% acceptance rate at some of these schools, or three, or two, or one.

Speaker 1:
[38:29] And tuition skyrocketing along with student loans.

Speaker 2:
[38:32] So we've created this mess where what we really want are a bunch of places kids can go to transition to being adults. And the great American universities, most of the SEC schools, we just wish we could replicate them. But there's something special that only exists in these places. And at some point we reach this point where it's like, okay, well then we have a bunch of people who are left out of this cool social experiment. I also think that we have overblown the importance of the academic pursuit at these schools. Not because I don't think the skills you're acquiring matter, but because I think most employers recognize that what we really want to see is achievement versus actual accreditation.

Speaker 1:
[39:14] Right.

Speaker 2:
[39:15] With the exception of trade schools. And this brings me to where I think higher ed goes. My expectation for the future of higher ed is that it will bifurcate on a social and academic basis. And that we are gonna start to recognize, with almost all education, that there are basically two kinds of education we have long amalgamated: learning how to be a human and learning how to do a job. And the really amazing thing about a trade school is it doesn't pretend that you should go and do anything other than learn that trade. Now, in the process, you actually end up learning a ton of other things, which is why trade schools are incredible and we should build as many as we can. But also, we have found ourselves in a place, as we get further down the education chain, especially in early childhood education, where we have totally lost the plot. And I think as much as higher education matters, and I think it does, if we were to fix one thing tomorrow, it would be childhood education, in particular early childhood education, to start to recognize that we have really mailed it in on the importance of protecting the child.

Speaker 1:
[40:22] Do you know how AI will shape that? Do you have a sense?

Speaker 2:
[40:24] Well, it's happening already. I mean, some of it's good, some of it's bad. The good news is this idea of tailored education, tailored learning, and that is amazing. This idea that you can, again, bifurcate scholastic and social learning. The scholastic learning can happen with an AI-powered tutor at the student's own pace, knowing exactly what the student is learning and feeding that information to the necessary stakeholders: parents, teachers, the state, et cetera. There's a school called Alpha School, run by Mackenzie Price, that's doing exceptionally well. And actually she and I are together tomorrow, and she features in this book. She represents what AI could do at the scholastic level. What we need to do is recognize that at the social level, AI presents a threat more than almost anything else. Because what we actually want to do is protect the child as much as possible from the technology that divides them and create safe spaces for them to do nothing except be children. They don't need devices. They just do not need cell phones on the playground. For a while in the child's life, we should acknowledge that many hours of the day should be spent outside in physical activity, learning game theory, sports, life skills in community. They do not need a phone in order to do that well. This is where we should be very careful not to corrupt that experience using technology, which so far we have done. But if we do that, then we are actually going to potentially reinvent what education means. By the way, the market is ripe for it, because homeschooling and microschooling are seeing an 8% year-over-year rise. So parents are already voting with their feet. They're like, hey, the system is messed up. I don't even know what to do, but I'm going to try my best. Now we're giving them the tools to actually fix scholastic learning. Soon, we're going to expand that and scale that system.
The future of education to me is sovereign. I think people are going to get to both build the communities that they want and help their child learn the values that are important to them, while also not compromising on really important academic development.

Speaker 1:
[42:35] Public education, K-12. How can AI help there? Because here's what we do know: there's a crisis in public education with the burnout rate of teachers. The NEA published a study last year that said they expected 52 or 53 percent, I think it was, of their teachers to quit. Burnout is a big thing. I go back to the story of your father. Teachers got in the business because they love kids and they love instruction. Will AI change public education as we know it?

Speaker 2:
[43:02] Yes. How could it not?

Speaker 1:
[43:05] I would think it could make the individual instruction a little easier. But I'm curious to know what you think.

Speaker 2:
[43:11] Again, I think this is a matter of bifurcating scholastic and social learning and realizing that for a long time, the teacher had to be the smartest person in the room because there was no one else coming. Internet didn't exist. If your teacher didn't know something, you weren't going to know it.

Speaker 1:
[43:25] That's a great point.

Speaker 2:
[43:26] What was really interesting when we launched ChatGPT, I remember at OpenAI watching parents go crazy when ChatGPT would make a mistake, or they perceived it as a mistake. The tolerance for ChatGPT telling a child something wrong was so low. Then I would find myself talking to a lot of teachers and educators, and I was like, have you ever audited your child's classroom? Have you ever watched what the teacher says? What you discover when you go and actually audit most education is that students get lied to every day, all the time, and not even because teachers are malicious; most are not. Most are just teaching things that are well beyond the scope of what they should be teaching, because they are asked to. The English teacher just quit, could you step in? So we're in this crisis of actual expertise, and to me, enter AI at this perfect moment, where we can basically say, you know what, take a load off. You don't need to be the smartest person in the room. You do not need to know the most about this subject. What you need is to inspire the student to actually want to learn and to help them on their journey. You need to be a guide. You need to, in the process, help them understand why cheating doesn't serve them. You need to help them understand why discourse and disagreement are really healthy. You need to help them understand why so-and-so is talking really loudly or why so-and-so is afraid to ask a question. You need to create spaces where everyone is comfortable learning together. That, it turns out, has always been the ideal role of the teacher. We just never let them do this. We told them, you need to specialize in English because that's going to be how we teach everyone English. But in fact, what we really want is a bunch of people who are quite adept at helping the student create space to learn, in some cases at their own pace, without the need to actually be the smartest person in the room.

Speaker 1:
[45:20] If you're serious about winning at work this year, let me ask you a question. Are your systems helping you grow or holding you back? That's why today's episode is brought to you by Quo, spelled Q-U-O, the smarter way to run your business communications. Here's the deal. Missed calls don't just cost you business, they cost you momentum. That's like showing up to a job interview in your pajamas. You might be brilliant, but you just blew it. Effective leaders don't miss opportunities because of sloppy communication, and Quo gives you and your team one shared business number, so every call and every text is handled professionally. It works right from your phone or computer, so your team stays aligned as your company grows. Make this the year where no opportunity gets away. Try Quo for free, plus get 20% off your first six months when you go to quo.com/ken. That's quo.com/ken. Quo: no missed calls, no missed customers. You were saying that going forward, we're going to see higher ed as we know it bifurcate. Take us there a little bit further.

Speaker 2:
[46:25] I think it's a high likelihood that the universities are going to realize, once they see that they're not leading in the novel sciences anymore, that their actual core competency is creating spaces people want to be in, and they're going to probably start doubling down on these social elements. There are two things that most universities provide a student that are quite valuable. One is the credential. I went to Berkeley, and I can say I went to Berkeley. Now, whether I learned a lot at that school, no one really bothers to ask, because they go, that's a good school, I've heard of that place, that's really great. My point in all this is we credential ourselves. We say, I did this thing. The other thing it provides is people, a network. This is why you go to business school. You go to business school not to learn a bunch. You can learn most of that stuff on YouTube.

Speaker 1:
[47:21] You're getting in the pipeline.

Speaker 2:
[47:23] Harvard Business School, Stanford Business School, most of them open source their curriculums. You're there to meet a bunch of people that have made it through this incredibly narrow aperture to sit next to you, to be smart and you can then go start a company together, you can meet their dad and get a job through their dad, whatever.

Speaker 1:
[47:39] That's right.

Speaker 2:
[47:40] Those two things are actually exceptionally valuable in any world and very hard for a system like AI to replicate or replace. The credential matters, that seal on a diploma matters, and your community matters. But everything else is pretty easy to replicate. If you're a university, you probably want to start doubling down on the community that you're creating and the credential that you are providing, and, as they are starting to recognize, you probably want to open source the rest of the stuff. That allows for far more prolific academic learning.

Speaker 1:
[48:15] Do you think that we'll see a scaling down of all the different majors, where law school is, in effect, a trade school and medical school is a trade school? You don't do undergrad, you just go right into law. Do you see that future?

Speaker 2:
[48:34] For almost everything except med school, yes.

Speaker 1:
[48:37] That's fair. You do need that extra undergrad.

Speaker 2:
[48:39] Well, I'm not sure you do. I will actually, I'm not sure you do.

Speaker 1:
[48:43] Then why not be able to go right into med school?

Speaker 2:
[48:45] Now we're going to get into a policy issue, which is, why don't we have a bunch more doctors? Well, med schools don't have any interest in getting bigger. Like, why aren't we printing doctors?

Speaker 1:
[48:53] We are running out of- That's what I'm wondering, does AI, is it going to level the playing field?

Speaker 2:
[48:57] Well, yes, but on a different basis. The question you're asking is, why don't we build a faster pipeline for people to get accredited as doctors? That starts asking some really dangerous questions about why we have a negative replacement rate of primary care physicians, why we treat them so badly when it's the job that would matter most in terms of creating a healthier society. And that leads us to a policy problem.

Speaker 1:
[49:22] So forget the policy for a moment, but the question is, because this is all AI context here, does AI at least give us the capability to train lawyers and doctors better and faster? It does not.

Speaker 2:
[49:34] Barely. What it does is it gives us a chance to say, well, first of all, we have a million lawyers in the United States. I'll say the hot take. I'm not sure we need more lawyers.

Speaker 1:
[49:43] I would agree with that.

Speaker 2:
[49:44] I think it's mostly because we have such incredible tort laws that we are creating work. We are creating demand where there is a lot of supply.

Speaker 1:
[49:51] That is a policy and I would go crazy on talking about that one.

Speaker 2:
[49:54] But let's go back to: we cannot print doctors. We are not going to print doctors. Correct. AI is probably going to marginally improve the experience by which a doctor learns how to be a doctor, but on the whole, that process is a trial by fire. Literally, it is designed to test the doctor mostly emotionally and mentally, but also quite physically. The rounds that you do, it's just an incredibly hard process that they put you through to say, okay, you can now go save someone's life. That excruciating test, that trial by fire, is not bad. What's bad is that we only let a few people do it. It would be really nice if we'd let a lot more people do this, but we don't, because we don't build more med schools, so we can't make more doctors. What we can do is make the doctor more effective and also start to separate, to bifurcate, administration from care. This is again one of the principal ideas in the book, which is that we have basically forgotten the purpose of a hospital. I know this because we've seen a 5,000% increase in administrators and a 20% increase in caregivers during-

Speaker 1:
[51:03] Mind-blowing.

Speaker 2:
[51:04] It's mind-blowing. It's so bad. And this is coming from someone with three grandparents who were doctors, and my parents; caregiving is in my blood. And it is so hard to watch the destruction of the American healthcare system at the hands of the bureaucrat and the administrator, who, by the way, don't actually want bad things either. There are just too many of them at this point.

Speaker 1:
[51:25] So you were saying that we've forgotten the purpose of a hospital.

Speaker 2:
[51:28] The way we build a hospital now, most of its purpose is administration. It's to collect dues and revenue, not to actually provide care. You can see this when you walk into a hospital. But what if you didn't need to go to a hospital to get care? This is the question that Function Health and other health apps like it ask: go to a lab, go to a Labcorp, get a phlebotomist to draw blood. We'll analyze that for 170 markers, and then we'll tell you how you perform based on who you are and your peer group. That is upstream understanding of our health that we have never had, and it is unlocking a better life for people overnight. Prenuvo: you don't need to go to a hospital. Go get a full-body MRI scan every two years. We'll tell you if you have any pockets of any tumor. All of a sudden, what you start to see is that the purpose of the hospital, which is a place to get care, has actually just been totally messed up, because so much of the care that we really seek is so much further upstream. By the time we're triaging major problems, it's not too late, but it's super expensive. What we want is technology that allows us to treat ourselves, the sovereign individual pursuit, much further out front, to know what our markers are and how to serve them. This is the work that Mark Hyman and a lot of these people do to help people make smart decisions on their own behalf. That is work that AI is going to do in spades and with incredible effect, allowing individuals to have way more control over what they actually know about themselves and their health outcomes, such that by the time you arrive at a box, which is what they call the hospital, you are in desperate need: something has probably acutely happened, or you've let something accumulate.

Speaker 1:
[53:25] This is fascinating because we live in a day and age now where I think, and this is anecdotal to my life and people around me, where it feels like there's more distrust than ever for doctors. Because of so much information, we've always had the second opinion, which to me is just basic wisdom. I bring this up to say, do you think that this will make trusting AI an issue on the analysis or will it make us distrust doctors more than I think we already do?

Speaker 2:
[53:54] The data is that humans don't distrust their doctors more. They distrust health care much more. So the system itself. Oh, sure. That sort of has like compounding consequences.

Speaker 1:
[54:05] Almost like the doctor is a victim to the health care.

Speaker 2:
[54:07] That's exactly it. It has compounding consequences. Most people go see their doctor and they're like, I like you, but I don't trust any of this. Or they'll see a doctor and say, I like you, I trust you, but I need a second opinion and I'm just gonna get one.

Speaker 1:
[54:19] So will that change you think because of what you just described?

Speaker 2:
[54:22] Again, this idea of the sovereign individual, the individual being able to do a lot more on their own, yes, is going to create a world, in some ways good and some ways bad, where people can constantly seek a second opinion. And this will have some strange consequences, but in the end, I think the net effect, which is gonna be pretty interesting, is that people are gonna know a whole lot more about themselves. Whereas today you go to the hospital for an annual checkup and you're like, tell me about me, we're gonna know a whole lot more, a whole lot further upstream. And we're gonna be able to make a lot more decisions for ourselves and on our own behalf that we didn't used to be able to make. And you're already seeing the stories come out of people being like, I went to see this doctor and they said I didn't qualify, or they said that I need to do this thing. And then I went and did my own research, the do-your-own-research thing, with whatever AI tool they were using or whatever research they did on their own, and I actually discovered that I could do this instead. That is going to become pretty normal. And I think what's going to happen is the role of the doctor is going to become much more about being a caregiver, as I described earlier. And the role of the healthcare system itself is actually going to shift, where people start to realize they can do so much more on their own, much, much, much further upstream.

Speaker 1:
[55:39] I want to pivot really quick because I want to honor my time with you, and almost come full circle on AI. You know, there are fun stories. We pulled some outrageous stories, just a quick glance. The Chicago Sun-Times put out a summer reading list full of books that never even existed. This is my favorite: an AI-controlled vending machine started a cartel after being told to maximize profits. The stories go on and on and on. I just put that out there to ask, how can we be discerning as humans and not be scared? What can we trust as AI proliferates? How do we not become skeptics?

Speaker 2:
[56:23] Yeah. Well, the first topic is an interesting one, which is this idea of cognitive surrender, what I call idiocracy. Someone thought they could outsource their thinking, and it backfired. I've seen a lot of examples of this. This goes to this idea of, you know, AI slop.

Speaker 1:
[56:46] Right.

Speaker 2:
I think that this kind of plays out in a pretty good way, for what it's worth. My very hot take here is, I welcome the proliferation of garbage content, because I actually think it's gonna force us to realize that maybe this is too much, that most content has been garbage all along.

Speaker 1:
[57:02] Right.

Speaker 2:
[57:03] And we sort of drowned ourselves in, you know, Facebook news posts a long time ago that were actually made on a click farm in Bangladesh. And now we can finally acknowledge that it was never really good for our brain.

Speaker 1:
[57:17] Right.

Speaker 2:
[57:19] And that that's okay. Like maybe we've been over-consuming content anyway. The second is a whole other one, which speaks to this idea of the paperclip theory. In the paperclip theory, there are a couple of catastrophic downside cases to AI. It's weird to go into it, but the paperclip theory proposes that if you told a machine to maximize paperclip production, and it was misaligned, meaning it didn't appreciate the consequences of its actions, it might convert everything into paperclips, that it might turn us all into paperclips. These are, again, mostly scientific theory. They're not actually defensible by any real practical outcome, but they do present in some strange ways. And on that topic, I just think alignment has to become a principal policy matter. And we have to put pressure, as we are, on policymakers to pass policy standards for alignment, explainability, and low-resource bad acting, people using AI to do bad things. Now, if the policymakers won't, and it seems like they're going to move pretty slowly and not well at all here, which has its benefits, by the way, and its costs, then the market has to step in. And the market's actually done a really good job in this case. It's seesawing, and it's kind of wild watching the public perception of companies, but Anthropic has been rewarded exceptionally well recently in the public eye for over-emphasizing safety. If the market responds in these appropriate ways, or perceived-appropriate ways, then we're going to see this more impressive force play out, where people who know enough will say, actually, this is the thing that I want, this is the behavior I want to incentivize. We also should expect, and we do, that the companies will self-regulate, and they actually have, pretty well. But we shouldn't expect that forever.
And this is where, at some point, there are going to have to be policy measures. And those policy measures, if we're not careful, will be so burdensome that we will do to AI what we did to nuclear power. And this is why I say the actual risk in all this is not the Chernobyl and Three Mile Island events themselves, which almost certainly will happen in AI. I borrowed those two events because they are actually very small in terms of total life lost, but they are enormous in terms of public perception.

Speaker 1:
[59:52] Yeah, absolutely.

Speaker 2:
[59:53] And you talk to people about Chernobyl, they're like, how many people died? How many people died at Three Mile Island? No one actually, it turns out. But you remember, they're seared into our brain as these events where nothing can ever happen again. And I do think the risk in AI is that it has these events and the public perception snaps so fast that we see this overcorrection in policy. And then we will have these strange seesaw moments. And that's why I tell most people, this is not obvious. None of this is predetermined. And a lot of this is actually going to take shape over many, many years as we start to feather what it is we do and don't want.

Speaker 1:
[60:26] Really good, so thoughtful. All right, folks, what did you learn? What did you take away from this conversation? I came back to the story Zack shared about his father. What a powerful story, and I think that's the takeaway for me. You know, AI, the news, the headlines, there's fear stuff all over the place. How much do we really know? Zack knows a lot, but even he says, hey, we don't know 100% where this is going. And I think the humanness of the story of Zack's dad is so powerful. As technology advances, in whatever form, the ability for us to meet a human and just be a good version of ourselves takes us all the way back to the Stoics; the very philosophy of the pursuit of happiness that Jefferson pens in the Declaration of Independence was all about being a virtuous person. And I love this story. That's going to stick with me. The bedside manner. If we want to win professionally, what's our bedside manner? How are we going to interact with people? That is our chance to be unique and to give somebody a special experience. And that's what Zack's dad did. And that's what I'm going to work on. And I challenge you to do the same. Thanks for being involved in this, and stay tuned to everything that's going on. Pay attention to the show notes for what we've got going on. We appreciate you being with us. And Zack, as I wrap, thank you. We're better off for you hanging out with us today. That was really, really fun.

Speaker 2:
[61:47] Thanks for having me.

Speaker 1:
[61:48] Thanks, buddy.