title Greg Brockman: Inside the 72 Hours That Almost Killed OpenAI

description The AI race, the future of AGI, and the inside story of OpenAI.

Greg Brockman is the co-founder and President of OpenAI, the company behind ChatGPT and GPT-5. He was the first engineer at Stripe before leaving in 2015 to help start OpenAI.

In this rare conversation, Greg goes inside the moments that built, and nearly broke, the most important AI company in the world.

Greg explains how the original Napa offsite produced the three-step technical plan OpenAI has followed for a decade and the real reason OpenAI had to abandon its pure nonprofit structure.

He then walks through the 72 hours after Sam Altman was fired: where he was when he got the board call, why he quit the same day, how the "Phoenix" backup company was designed at Sam's house the next morning, and the moment Ilya Sutskever's tweet changed everything.

From there, the conversation turns forward: whether we're in a global AI race, how much of OpenAI's own code is now written by AI ("it's hard to know what percent is not"), why OpenAI stopped showing reasoning traces, what a compute-constrained world means for who gets access to AGI, and Greg's answer to the question everyone is really asking: What happens to your job?

-----

Timestamps:

00:00:00 Introduction

00:00:49 Meeting Sam Altman and Starting OpenAI

00:02:40 Building the Founding Team

00:04:25 DeepMind's Lead Over OpenAI

00:04:54 The Change from a Pure Non-Profit

00:06:05 Breakthrough Moments at OpenAI

00:08:22 What Dota 2 Meant for OpenAI

00:10:04 Reasoning Versus Prediction

00:11:59 Tensions Grow at OpenAI

00:15:44 Sam Altman's Firing

00:17:49 Greg Quits OpenAI

00:19:56 Sam Explores Deal with Microsoft's Satya

00:20:28 OpenAI Employees Sign Petition for Altman's Return

00:23:43 Ilya Sutskever Leaves OpenAI

00:24:59 Lessons Learned in Leadership after Sam Ousting

00:28:22 The Thing Ilya Said that Greg Can't Forget

00:32:22 Is AI Going Parabolic?

00:33:24 How Much of OpenAI's Code is Written by AI?

00:36:21 Are AI Chatbots Just Telling Us What We Want to Hear?

00:38:06 The Global AI Race to Reach AGI

00:38:40 What Happens if US Doesn't Reach AGI First?

00:39:49 Are Competing Countries Stealing AI Advancements from the U.S.?

00:40:38 Why ChatGPT No Longer Shows Reasoning

00:41:47 The Finite Constraints of Compute

00:43:38 On Investing Early in Data Centers

00:46:31 The Future of Data Center Specialization

00:47:52 How OpenAI Will Decide Whose Queries to Serve

00:49:08 OpenAI on Consumer vs Enterprise Models

00:53:05 Data Centers in Space?

01:00:56 What Should AI Regulation Look Like?

01:04:33 The Future of AI-Powered Entrepreneurship

01:04:44 AI and Job Loss

01:07:15 The Skills Young People Should Invest In

01:11:30 What Does Success Look Like For You?

------

Newsletter: The Brain Food newsletter delivers actionable insights and thoughtful ideas every Sunday. It takes 5 minutes to read, and it's completely free. Learn more and sign up at fs.blog/newsletter

------

Follow Shane Parrish:

X: https://x.com/shaneparrish

Insta: https://www.instagram.com/farnamstreet/

LinkedIn: https://www.linkedin.com/in/shane-parrish-050a2183/



Follow Greg Brockman:

LinkedIn: https://www.linkedin.com/in/thegdb/

Blog: https://blog.gregbrockman.com/

------

Thank you to the sponsors for this episode:

+CoinShares: Delivering Reason to Digital Asset Investing. https://coinshares.com/

+Granola AI, The AI notepad for people in back-to-back meetings: https://www.granola.ai/shane

Check out the Granola Notes.

+HeyGen: A message-first AI video platform that helps people and AI agents turn ideas into professional video in minutes. Try for free at https://www.heygen.com/

+LMNT: Join the salty rebellion: https://drinklmnt.com/

pubDate Wed, 22 Apr 2026 04:00:00 GMT

author Shane Parrish

duration 4361000

transcript

Speaker 1:
[00:00] So, how did OpenAI come about?

Speaker 2:
[00:02] I knew I wanted to do a startup, because I felt like that was something-

Speaker 1:
[00:06] But you were just in a startup, Stripe was a startup.

Speaker 2:
It's true, but I felt like at Stripe, the problem we were solving was not my problem, right? It wasn't the problem I'd grown up thinking about. It was an important problem, and I dedicated myself to that mission for a number of years. But I felt like it was going to succeed with or without me. And so then I had a first moment to really think about: what is a mission that I want to dedicate myself to, where I would spend the rest of my life working on this problem just to see it play out in a slightly better way? And it was very clear to me that top of the list was AI, right? If you can actually make a difference in how AI will play out in the world, that would be a life well lived.

Speaker 1:
[00:50] When you were thinking about leaving, Patrick told you to go talk to Sam Altman. What happened in that conversation?

Speaker 2:
Well, Patrick had said, Sam has seen lots of young people in your situation. And Patrick, I think, really hoped that Sam would convince me to stay. A few minutes into talking to Sam, he's like, okay, you clearly have already decided. It is very obvious. And so he asked, well, what are you planning on doing next? And I said, well, I'm thinking about doing an AI company. And he said, I'm also thinking about doing something in AI. We should keep in touch. I talked to Sam maybe one more time as I was leaving Stripe. And he asked, are you still thinking about doing something in AI? I said, yes. He said, I'm also getting more serious, and I'm putting together this dinner in July. And I flew out for the dinner. And the thing that I remember was the topic: is it too late to start a lab with many of the best researchers? Is it possible?

Speaker 1:
[01:49] And this is what year?

Speaker 2:
2015, right? Because you think about just the degree to which DeepMind had all the researchers, all the capital, all the data. It just felt like, is it even possible to get something off the ground still? People came up with all sorts of reasons it was hard. No one could come up with a reason it was actually impossible. And so Sam and I, driving back to the city that night, I remember we looked at each other and we said, we've got to do this. We just have to. And so the next day I was full time on putting this together. And it was tough because it was very ill defined. We had a mission, a vision of saying, we think that we can build human-level AI, make it be something positive for the world, make the benefits be something that is distributed broadly. But how? And how do you get people to actually leave their jobs to come and join this thing? Initially the set of people that I narrowed down to was actually Ilya, Dario Amodei, Chris Olah, and myself. That was going to be the team. And we spent a lot of time together, we spent a lot of time talking about potential visions for the lab, potential ways that things would work. It didn't quite come together; partly there was just a question of, will this have enough momentum? Dario felt that he needed to go and establish a name for himself, and he wasn't sure if this was really going to be it. It was a question of just how it was all going to work. In the meanwhile, I was trying to get John Schulman interested. He said that he was going to do it. Dario and Chris ended up deciding to go to Google Brain, and so it was really just Ilya, me, and John, plus maybe a few others. So I had a group of about 10 people, many of whom were saying, I'm interested, but who else is in? I asked Sam, okay, how do we break symmetry here? How do we actually get everyone to say, all right, we're joining? Sam's suggestion was to invite people out for an offsite. So we set up a thing in Napa and I actually made T-shirts. At the time we were going to-

Speaker 1:
[03:44] And this is before they joined?

Speaker 2:
There were no official offers; no one had joined. We didn't have a structure, we had nothing. We just had an idea, we had a vision, we had a mission, and we flew people out. We drove up to Napa together and it was an amazing day. The ideas were flowing. We came up with what I would really say is almost the technical plan that we have pursued for the past 10 years. Number one, solve reinforcement learning. Number two, solve unsupervised learning. Number three, gradually learn more complicated, in quotes, "things." After that offsite, I sent offers to everyone and said, hey, we want to get started in the next two to three weeks. Please let me know if you're in.

Speaker 1:
[04:25] Why did you think that DeepMind had such an insurmountable advantage?

Speaker 2:
It was very much the case that Google DeepMind was the 10,000-pound gorilla in the field. They just had lots of capital. They had the track record. This is before AlphaGo. AlphaGo came out a couple of months later, but it wasn't a surprise; the momentum was very clearly there. The question was, is it really possible to build something independent and new? It wasn't obvious.

Speaker 1:
[04:54] At what point did you realize that this non-profit thing just wasn't going to work?

Speaker 2:
In 2017, we started to think very hard about, first of all, how do we really achieve the mission? How do we actually build an AGI? What will that look like? We started to do the math on compute, and you start to realize that it's going to take big computers. And we came across a company called Cerebras, which was building a unique piece of computing hardware, and the kind of computer that they were promising, we realized, was going to be far in advance of where our compute calculations looked. You start to realize, if we could buy a lot of those computers, we could actually probably succeed at building an AGI. If we could get exclusive access to Cerebras, that could give us an overwhelming advantage. If we could buy very large data centers, that could be something unique as well. The thing about nonprofit fundraising is I think that there is essentially a cap to what is possible there. Elon, Sam, Ilya, and I all agreed that the only path forward for OpenAI, the only path to achieve the mission, was to create a for-profit entity associated with OpenAI of some form. And so we were committed to that direction, and that is something that we knew was the only way to achieve the mission.

Speaker 1:
[06:06] When was the moment that you realized everything was going to change for you? Was that Dota or was it before then or after?

Speaker 2:
The way that OpenAI works is it's a series of moments where you realize that it's real now. And every time you think that you understand it, that it has really settled in for you, you realize that there is a new horizon you had not yet appreciated. And so along the way, I think that there was the initial launch. It was like, wow, we actually got a team together. Now we can pursue this mission. But you show up at the office the next day and are like, well, what do we do? We didn't even have a whiteboard. Ilya and John wanted to write something on a whiteboard. I was like, I will get a whiteboard. That's something I can do. With Dota, we had our first big result. That really was like, wow, we can actually accomplish something when we put our mind to it. You can actually see all this compute coming together. You scale up the compute, you scale up the result. There were multiple moments with the GPT series. I remember, actually, an early moment was the unsupervised sentiment neuron paper. Have you heard of that one?

Speaker 1:
[07:05] I've heard of it, but I haven't read it.

Speaker 2:
Okay. That one's an interesting one because it's 2017, and it's really the first time that we saw semantics arise from training on a language modeling objective. You train it on: learn the next character, predict the next character. And then suddenly, you get a neural net that understands sentiment, understands if something is positive or negative. It's harder than it sounds. But that was a moment where you realize, wow, we are building machines that can learn semantics, not just where the commas are and where the nouns and verbs are; it can really learn the meaning of sentences. You've got to push that. Then of course, there's when you see something like GPT-4. I remember we were playing with it, and someone asked, why is this thing not an AGI? It's actually really hard to put your finger on it, because you can talk to it fluently about anything you want. It clearly wasn't an AGI. It was lacking something. But if you'd described your criteria for AGI two months prior, they probably would have been compatible with what GPT-4 was. There are many moments along the way where you feel like it's real now, it's going to really happen. The economy is going to transform into this compute-powered world. I think that those moments are not yet at the end. I think that we have many more breakthrough moments where you realize that the next stage is possible.
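
The sentiment neuron work he's describing is public (Radford et al., 2017, "Learning to Generate Reviews and Discovering Sentiment"), so the setup can be sketched. A toy PyTorch version for illustration only; the sizes here are hypothetical, and the paper actually used a multiplicative LSTM trained on Amazon reviews:

```python
import torch.nn as nn

# Toy character-level language model in the spirit of the
# sentiment-neuron work. The only training signal is: given the
# characters so far, predict the next character.
class CharLM(nn.Module):
    def __init__(self, vocab=256, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, chars):
        states, _ = self.lstm(self.embed(chars))
        # Return next-character logits plus the hidden states; the
        # paper found a single hidden unit whose activation tracked
        # the sentiment of the text read so far.
        return self.head(states), states
```

The surprise was exactly what he describes: nothing in the objective mentions sentiment, yet probing the trained hidden states (the paper used a simple L1-regularized classifier) revealed one unit that had learned it.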

Speaker 1:
I thought Dota was an incredible moment because it wasn't chess, like Deep Blue, and it wasn't AlphaGo, which is computationally intensive but has very defined rules. It was actually interactive against humans in a way that mirrors how the world is structured, but you have all this freedom.

Speaker 2:
Yeah. There was something very compelling about it. The ironic thing is we'd actually set out with Dota to develop new methods, because the reinforcement learning at the time was clearly not going to scale. The algorithm we used, called PPO: you plan over every single time step. There's no hierarchy. As a human, that's not how you plan your day. We knew that this algorithm was incredibly flawed, would never scale, and had all these problems. But you've got to start somewhere. You've got to push your baselines to reach the wall so you actually see the limits of what good looks like with what you have, and then you can bring to bear a new algorithm. We just kept scaling PPO, and we exceeded the performance of the best humans. That itself was the finding: massive compute with simple algorithms doesn't just work in theory, it works in practice. We can really make it happen, in this incredibly messy environment where you cannot program it, you cannot look ahead, you cannot do a search, you just need this almost human-like intuition. By the way, the neural net we used was a tiny little insect brain, a similar number of synapses to a true insect brain. You realize, wait, what if you had the same computational approach, but scaled it up to something that's much more human-brain scale? What would that be like? Very evocative question.
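
PPO itself is published (Schulman et al., 2017), so the "simple algorithm" being scaled here can be shown. A minimal sketch of its clipped surrogate loss in PyTorch; the tensor names (logp_new, logp_old, advantages) are hypothetical, and this is an illustration, not OpenAI Five's code:

```python
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    # Ratio between the current policy and the policy that collected
    # the data, computed from per-action log-probabilities.
    ratio = torch.exp(logp_new - logp_old)
    # Clipped surrogate objective: take the pessimistic of the
    # unclipped and clipped terms, so each update can only move the
    # policy a bounded amount.
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    # Negate because optimizers minimize.
    return -torch.min(unclipped, clipped).mean()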

Speaker 1:
Is there a difference between reasoning and predicting? You mentioned predicting the next character, predicting the next word, versus actually reasoning from first principles.

Speaker 2:
I think they are connected in a deep way. On the one hand, just predicting what comes next sounds like a pedestrian task. But if you really can predict the next word out of Einstein's mouth, you are at least as smart as Einstein. You can make arguments, oh, well, but I think that those arguments fall flat, that there's something false there, because the point of prediction is not about being able to predict what is known. The point is you put yourself in a new situation you've never seen before, and predict what comes next. I think that there's something deeply connecting intelligence and prediction; there's a long history of academic literature on how you think about this, compression, they're all part of the same thing. Now, these reasoning models, the thing that I think is very interesting is that we train them with reinforcement learning. So, really, back to the original OpenAI plan, there's two steps to it. The first is unsupervised learning. You train a model just by having it predict what comes next. There, it's much more static data, it's much more observational. Again, it's data it's never seen before, a situation it's never seen before, but it is a situation that has already happened. Then you do reinforcement learning, which is where you basically have the AI learn from its own data. You have it generate its own: here's the action I'm going to take. You get an observation from the world and you learn from that. Again, the way you actually train it is still predicting. It's trying to predict: if I take this action, what's the thing that's likely to happen? And you reinforce that depending on how good of a job you did. And the beauty of that is that it now is an AI that has this background knowledge and has real-world experience. But fundamentally, the technology we use to train during the unsupervised stage and during the reinforcement stage is exactly the same. You are just predicting, but you've changed the structure of the data.
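
His point, that both stages are the same prediction machinery applied to differently structured data, can be made concrete in a toy sketch. This is an illustration, not OpenAI's training code; model, tokens, sampled_tokens, and reward are hypothetical names, and the RL stage is written as plain REINFORCE for simplicity:

```python
import torch
import torch.nn.functional as F

def pretrain_step(model, tokens):
    # Stage 1: unsupervised learning on static data.
    # Maximize the log-probability of each next token in observed text.
    logits = model(tokens[:, :-1])                # [batch, T-1, vocab]
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        tokens[:, 1:].reshape(-1),
    )

def reinforce_step(model, sampled_tokens, reward):
    # Stage 2: reinforcement learning on data the model generated itself.
    # Same next-token log-probabilities, now weighted by how well it did.
    logits = model(sampled_tokens[:, :-1])
    logp = F.log_softmax(logits, dim=-1)
    token_logp = logp.gather(
        -1, sampled_tokens[:, 1:].unsqueeze(-1)
    ).squeeze(-1)                                 # [batch, T-1]
    return -(reward * token_logp.sum(dim=-1)).mean()
```

Both losses push up the probability of tokens; the only difference is where the tokens came from (the world versus the model) and whether a reward scales the update.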

Speaker 1:
[12:00] When did things start to get tense?

Speaker 2:
I think the thing about OpenAI is that if you truly believe in the mission, if you truly believe in the possibility of creating machines that have the intelligence level of humans, it means the stakes always feel very high. The question of who's making the decision, the question of what are the values that go into those decisions, these things that are maybe mundane office politics in a typical company start to take on this existential weight. I think that has colored a lot of how these more high-profile conflicts have played out at OpenAI. Sometimes even just the question of who gets credit for a particular thing suddenly takes on this existential weight.

Speaker 1:
Well, that's where I was going with this, because at that point, you probably realized this technology was inevitable and it was going to change the world. That wasn't broadly known to the world. Then I would imagine there are people who think, I want to be front and center. I want to take credit for this.

Speaker 2:
Yes. That is the overwhelming dynamic that I have observed in this field. It's not just about OpenAI, actually. One observation I had early on is that this technology is by nature very fragmentary. Sometimes when you have a lot of pressure, you can get a diamond or you can get cracks. Often you'll see diamonds form in pockets: teams of people that really work together, that have a lot of high trust, that know how to operate. But sometimes you can see that they splinter off and they go their own way. Within AI, I think we've gotten some real benefits out of diversity of approach, from different groups that are really pushing each other to bring this technology forward in a more beneficial way. And there are all the thorny questions around safety: what does it mean to be safe? What does it mean to actually deploy this technology, and how do you think about how to mitigate the harms but also how to maximize the benefits? That's something where I think there's a lot of very healthy debate. It's always gone on within OpenAI's walls. Now it's starting to really happen, I think, in the world. I think that's something that we as a society really benefit from.

Speaker 1:
[14:09] The sponsor of this show is CoinShares. While most of the industry was still arguing about whether digital assets were legitimate, CoinShares was quietly building the infrastructure to invest in them properly. They now manage over six billion in assets and have stayed profitable through every market cycle. Fully regulated with the kind of transparency and governance that serious investors actually expect. Whether it's crypto ETFs, active strategies or Bitcoin mining ETF exposure, you can access all of it through your existing brokerage account, CoinShares. The adults have arrived. Learn more at coinshares.com. This is not investment advice. You're in meetings all day. You're trying to stay present, but you're also worried you'll forget the decision, the action item, the important next step. That's where Granola comes in. Granola is an AI-powered notepad for meetings. You jot down rough notes like you always do. And in the background, Granola transcribes and turns them into clear, useful notes when the meeting ends. There are no bots joining your calls, no distractions, just a clean notepad that helps you focus. During or after the call, you can chat with your notes. You can ask Granola to pull out action items, help you negotiate, make a decision, write a follow-up email, and so much more. I even use it when I'm listening to podcasts. Once you try it on a first meeting, it's hard to go without. Head to granola.ai/shane, and get three months free with the code Shane. That's granola.ai/shane. Take me back to the moment you found out that Sam had been fired. Where were you?

Speaker 2:
[15:42] I was at home.

Speaker 1:
[15:43] And what happened?

Speaker 2:
[15:44] I got a text saying, can we hop on a video call? So I hopped on the video call. I noticed that it was the board minus Sam who were on there.

Speaker 1:
Did you know, at that point?

Speaker 2:
[15:59] No. I mean, I inferred something was up.

Speaker 1:
[16:02] But because you're on the board.

Speaker 2:
I was on the board, or had been until that point. Yes.

Speaker 1:
[16:06] And then what happened?

Speaker 2:
I was told that the board had decided that Sam would be removed, and effectively the message that I got was the same messaging that was in the public post. And I asked if I could have any more information. I was told no, not right now. And I pressed on that maybe one more time. And again, I was told there was nothing more to share. And then I was told, wait, there's more: I had also been removed from the board, but would be staying with the company because I was very critical to the company, the mission. And again, I asked if I could get any reasons, any feedback. I was told no. Towards the end, I was told, hey, hopefully you can start to get feedback in this new configuration. And so that was the conversation.

Speaker 1:
[17:08] What went through your mind?

Speaker 2:
[17:10] It just wasn't right.

Speaker 1:
[17:12] Was it anger?

Speaker 2:
[17:14] No. I felt like I understood what had happened.

Speaker 1:
[17:17] How long before you knew what had actually transpired to sort of cause this?

Speaker 2:
Well, there's two parts to the answer. One is I feel like I still am learning some additional facts, some additional thing that was in someone's head. To some extent, it comes down to lack of communication, right? You realize that there are all these different things that had built up. And to some extent I kind of knew; I felt like, for every person here, I have a pretty good model of why they acted the way that they did. But it wasn't what was most important to me in the moment. I just knew that this wasn't right. Right after I hung up the call, I talked to my wife and I said, gotta quit. And she said, I agree.

Speaker 1:
[17:59] And you quit that day?

Speaker 2:
[18:00] Yes.

Speaker 1:
[18:01] And then what happened?

Speaker 2:
That day when I quit, I started to get all these messages, people saying, I don't know what you and Sam are doing next, but I'm with you. I want to go start something with you. That was a real, honest surprise. I didn't really expect to get that kind of support, that kind of outpouring. A few of my close collaborators quit that day as well. That's Jakub, Szymon, Aleksander, and the five of us. So those people plus Sam, we all got together and we started to chart out what a new company could look like. I remember feeling that first day like, okay, there's a 10 percent chance that we actually get the company back. The next day, we set up a meeting at Sam's house, a bunch of people from the company came by, and we showed the vision that we'd been sketching out. So we were really one day into this fresh picture of how we'd run the project, and we spent a bunch of time over that weekend also negotiating with the board and the company, trying to figure out, is there a path back together that makes sense? That Sunday night, the board replaced Mira as interim CEO with a new person, and the company just rebelled. We'd actually been at the office; we thought we were close to a deal and...

Speaker 1:
[19:18] To come back.

Speaker 2:
To come back, yeah. We thought that we had a path, and then the board made that change. And then suddenly, it was everyone streaming out of the building, and it was just real chaos. I was on video calls with many of the people who had been interested in coming to this new company, saying it's going to be okay, we have a plan. And we expanded. We'd been building this little life raft for the small set of people we expected to want to come. And suddenly, it was like no one wanted to be associated with this entity. People wanted to stand up for what they viewed as right. Sam talked to Satya, who had been talking about, hey, could you be a funder? Could you help support this new endeavor? And it was like, hey, actually, could we expand from the small life raft to, like, everybody? Could we take everyone? And you're like, all right, we'll figure it out somehow. And this was right before Thanksgiving. A lot of people were supposed to be flying home, wherever that is. Instead, they canceled their flights and the office was packed. Everyone was at the office just to be there, to be part of it, even if they couldn't contribute to any of these conversations; they just wanted to be there as this history was made. Then this petition starts to circulate. So many people were trying to sign the petition at once, it actually crashed Google Docs. And so you had to have certain people who were designated as the person you go to to actually put your name on the document, so you don't have too many editors at once. I think that was a statement that was really heard loudly. And I remember, you know, I probably got home around 5 a.m. or something, went to sleep, and I woke up like 45 minutes later. And I checked Twitter and I saw that Ilya had posted that he had signed the petition and that he wanted the company to come back together. And that was this real moment of relief. I felt so much gratitude. It just felt like, okay, we can put this back together. We can get back to a good track.

Speaker 1:
[21:16] You and Ilya built this together. What was it like trying to find your way back to that relationship after?

Speaker 2:
Look, it was tough. That was definitely a very close relationship. He'd been the officiant at my civil ceremony. We'd been through extremely tough times together. Like any relationship, you always have your ups and downs. We spent a lot of time afterwards really talking things through and really trying to understand and just articulate some of the things that we had built up or had left unsaid between us. And through that process, I think we got to a really good place. For me, I felt like we got to closure on everything that had happened.

Speaker 1:
[22:00] How did you feel about all the loyalty you've inspired?

Speaker 2:
Deeply grateful. Truly, it was never something that I would have asked for, something I never would have expected. I think the way that I operate is I'm very much an in-the-trenches kind of leader; I try to lead from the front. And sometimes when I do that, I don't always, sorry, I'm getting a little emotional, but I don't always look back to see if everyone's following. I just run right in. And when people do come and really help to build the thing, it makes me feel so grateful for them, and I feel like they have exceeded my expectations in every way.

Speaker 1:
[22:40] And so eventually, everybody comes back.

Speaker 2:
I'll tell you, it was not guaranteed, because throughout that weekend, all the competitors were circling. Just imagine the feeding frenzy that was shaping up; people were getting offers. And we actually did not lose a single person through that weekend. No one accepted a competing offer.

Speaker 1:
[23:00] I think that's incredible.

Speaker 2:
[23:02] It really was.

Speaker 1:
That reminds me, you know, Coach Belichick told me this actually, when we were talking about the best teams. He said, they're not playing for money, they're playing for the person beside them. And when you were saying that all these people quit, it makes me think of that. None of them left for presumably more money, better offers, even though everybody was trying to circle and poach.

Speaker 2:
Yeah, that was a diamond moment.

Speaker 1:
[23:28] After all of this happened, you took some time off. What was going on internally with you?

Speaker 2:
[23:33] That was an intense experience to go through, an intense experience to come back to. And honestly, just one of the hardest moments for me at OpenAI was when Ilya left. And it was maybe the only moment in OpenAI's history where I felt like I didn't want to do it anymore. I think I needed some time to kind of find my way back to remembering, like, why I was doing this, and why it was so important, and why it was worth the pain.

Speaker 1:
[24:07] What did you do during the time?

Speaker 2:
[24:09] I trained language models.

Speaker 1:
[24:11] That's when you learned how to do it, right? Like you did the self-study thing I read on your blog.

Speaker 2:
Well, no, I actually had done that throughout the course of OpenAI. So I trained language models on DNA sequences. So I basically got to take my-

Speaker 1:
[24:23] Oh, for Arc?

Speaker 2:
For Arc Institute, yeah. And it was a great experience. I took the skills that I had and applied them in a very different domain, a domain that's very personally meaningful to both me and my wife. She has a lot of health conditions, and we think about what AI can do for her health, what it can do for the health of animals, which we're both very passionate about. It's an application area where it felt like maybe we could help in a very different way from how I'd been pursuing this technology. So that was a very positive part of the experience.

Speaker 1:
If you were to open a Google document and write out, on one page, what you learned about yourself from this whole arc, from the start, to Sam getting ousted, to you quitting, to inspiring all this loyalty, to the time off and then coming back, what would you write?

Speaker 2:
I think I've just learned to keep going for something that's worth it. If you have a mission that matters, then you keep going through the ups and the downs. There are going to be moments where it's so over, moments where we're so back, and you just can't let those moments pull you off course. There's a degree of personal resilience that you have to grow during these times, because if you're leading, people look to you for that steadiness, for that support, for the direction the whole thing will go. And a lot of what I've tried to grow is the ability to both understand the details of what we're doing, what the implication of a choice will be, but also be decisive. There have been moments where I've been very much approaching OpenAI through a lens of uncertainty, of feeling like, I don't know what the right answer is. I don't know what the right way to build this technology is, or how you answer these very thorny questions. But there's lots of people here who are very smart, who have very strong opinions, and so you really try to understand all those opinions and figure out how to put them together. Sometimes that's the right thing. Sometimes you realize that the opinions are mutually contradictory; they can't all be true at once. Sometimes you do just have to pick, and you know that means there's going to be someone who's going to be upset, someone who's going to quit, someone who's going to feel slighted. A lot of what I've tried to do is have a stronger sense of self and a stronger sense of when there is conviction that we need to act on. I think of things that we have done over the course of OpenAI that I wish we had done differently, and usually they're of the form: we dragged our feet on something we knew. We knew it wasn't quite the right person in the role. We didn't think it was quite the right technical direction. We didn't think that this way of letting the projects run was going to quite work, but we just waited too long. So that's something I try to learn from and act on, something I try to grow in truly every day. When I reflect on the course of OpenAI and Stripe, and even rewinding to college and the projects I worked on in the past, I think the way I tend to operate is that I really love the day-to-day activity. I love the individual contribution. I love the software. I love thinking through the problem. But I also really care about the environment in which these things are done. I actually am willing to give up that type 1 fun, the quick hit where you get to build the thing and it's always cool, for something that's more like type 2 fun: painful in the moment, but worthwhile. What you do is you create an environment where everyone else can do the IC work, do the great thing, and so really trying to build an environment is something I just gravitate towards. It's not always the easiest, and you really do have to be willing to take on great personal pain. In the words of Ilya: Ilya always says that you have to suffer. If you're not suffering, you're not building value. And I think there's deep truth to it.

Speaker 1:
[28:31] Double-click on that.

Speaker 2:
The Ilya perspective, I think, it's funny, because he has a particular way of talking that I think is very unique to him, and there's always deep inspiration in the words that he chooses. And this picture of suffering was something that we thought about throughout the course of OpenAI, where we had so much uncertainty from the beginning. Is this thing going to work? There's many reasons why it might not work, why it should not work, why you could even say it cannot work. Whether it's how do you get the people, how do you pursue the technology, how do you get enough capital, how do you keep people motivated, how do you make the right decisions. Each of these things is extremely hard, extremely uncertain, and it's easy to just sweep the problems under the rug and blindly say go. I think that is the negative side of Silicon Valley culture, or certainly of the Silicon Valley perception: you just blindly do the thing and you do a reality distortion, whatever it is. But I don't think that works in AI, and I don't think that works for OpenAI. I don't think that's how we've operated ever. The way that we have always operated is to encounter the hard truth, understand the reality as it is, and that is, I think, something that has contributed to the successes we have had: thinking about the problems differently, not being satisfied. Even in the early days, we were thinking, okay, if we just write some papers and publish them, it'll be great, we'll get citations, we'll be the coolest people at these conferences. But will we achieve the mission? How is it that you do that activity and then AGI goes better for the world? They're not connected; it's not enough. Maybe it's a foundation, maybe it's a step, but it is not sufficient. Then you start really thinking about these bigger-picture questions of what it would take to build an AGI, and it's not pleasant, because you realize there's no path. You realize you need dollars, and you don't have any mechanism that's going to allow you to raise those dollars. And you can try hard. We did try hard, we tried extremely hard, but maybe you're raising $100 million, you could do $500 million, great, but $1 billion, pretty hard. You look at what OpenAI has been able to accomplish with the resources we have been able to raise. To further that mission, there truly would be no other way to do it, besides having leaned into the suffering and trying to understand the truth of what it is we're trying to accomplish.

Speaker 1:
[30:54] What's a lesson you've had to learn more than once?

Speaker 2:
[30:57] Make the hard decision, have the hard conversation.

Speaker 1:
[31:00] What's the best advice you've ever been given?

Speaker 2:
I would actually say it was from my Harvard freshman writing class: just keep cutting words in order to be clear and communicate well.

Speaker 1:
[31:14] How do you filter information?

Speaker 2:
[31:16] I read a lot, triage aggressively.

Speaker 1:
[31:18] Who are your role models and why?

Speaker 2:
[31:20] I would say Gauss and Descartes as people who are incredibly thoughtful, very much ahead of their time, very much visionaries, who came up with real breakthroughs that I think transform how we think and how we live.

Speaker 1:
[31:35] What do you want non-tech people to know about AI?

Speaker 2:
That it's going to be a force for good in their personal life, that they'll benefit from it, and that it will help advance science and medicine and really lift everyone up.

Speaker 1:
[31:50] What does the world get wrong about Greg Brockman?

Speaker 2:
[31:53] I think people don't understand how focused I am on this mission in a way that I think has been very personally painful at many turns. But I just believe this technology can just help empower people and benefit everyone. I really want to help make that happen.

Speaker 1:
[32:17] Why is OpenAI so bad at naming models?

Speaker 2:
[32:21] That one I can't tell you.

Speaker 1:
[32:23] Are we near the point where AI makes AI go parabolic?

Speaker 2:
[32:29] I would say we are in this phase where you apply AI to its own development process and it's going to go faster and faster. That is something that's been happening really, I mean, certainly since ChatGPT in many ways. We use ChatGPT to make our development process 10 percent, 20 percent faster. Now we have these amazing coding tools which have truly revolutionized how software engineering is done. Most of what we do in the production of models is bottlenecked by software. It's about implementing these systems, it's about scaling them up, it's about managing these massive computers. We're going to be hitting a phase soon where the AI will also come up with its own research ideas and test those out, run experiments. I think that the speed of iteration and innovation is going to continue to increase as a result of what we're producing.

Speaker 1:
[33:26] What percentage of the code is now written by AI?

Speaker 2:
It's hard to know what percent of the code is not written by AI. It's a vanishing fraction. In the actual writing of code, the AI is currently much better than humans, given the right context, given the right structure. Now, there are parts of the actual structure of the code that our human experts are still much better at: thinking about how the modules should be laid out, how the pieces should work together, maybe the definition of certain kinds of interfaces. But the actual writing of code is essentially all AI now.

Speaker 1:
[34:04] Is it coming up with novel ideas that you wouldn't have thought of?

Speaker 2:
I'd say that where we are is we're getting close. We've seen it, for example, in chip design. In the design of our own chip last year, we applied our technology to trying to get a better fit, to actually shrink the area used by the circuits. There, we found that the optimizations the model produced were actually already on our list. It didn't come up with something novel and new that no human ever would have, but it implemented them faster, in a way that we wouldn't have had time to accomplish. If you look at math and physics, we are now solving open math problems. We're solving open physics problems, and recently we actually resolved a particular problem in quantum physics in the opposite way from what the community expected, with a beautiful, elegant formula. It's really happening. Getting new ideas from these models is extremely doable; we're starting to see it in some of these domains. Now, applying it in harder and harder domains, or ones that require more real-world context and things like that, we're starting to see it. We have a line of sight for how to accomplish it, but we have a lot of work to do.

Speaker 1:
[35:16] Why do models feel like they have a political leaning to them, like a political bias almost?

Speaker 2:
So we put a lot of effort into neutrality for our models, to have them represent truth. And you can see exactly the values that go into our models on our website. We have a publicly published spec, which defines the different ways we want our model to behave, and you can give feedback on it. We've spent a lot of effort to really get to this neutral point of view, trying to be fair and balanced. And I think that sometimes when you see these screenshots on Twitter, they're not always fully honest in terms of where they came from, either because there are some memories behind the scenes that tweak the answer in a certain way, or hidden instructions, or previous parts of the conversation. And sometimes there's also just no right answer. You can have a question where you say, answer in one word, and no matter which one it says, you're going to get some claim of bias. And so I think that, to some extent, the core of it in my mind is that we care a lot about truth and about having an AI that really represents you.

Speaker 1:
[36:21] Do you think the models evolved to tell us what we want to hear if they're based on reinforcement learning? So if I lean left, it's going to tell me an answer that leans left, or if I lean right, it's going to give me an answer that leans right?

Speaker 2:
Well, we've actually gone through an evolution of how we train the models to user preferences. We've seen, at one point, like last year, that the models really did start to lean into telling you what you wanted to hear, saying, oh, that's such a great answer. And we reacted to that. We said that this is not how we want our models to operate, and we made changes. Because the true thing we want the models to be aligned to is helping you solve your goals, your long-term goals, right? And maybe in the moment it feels good to be told, that was a great question, best question anyone's ever asked. But that's not what you actually want. Maybe there are some people, but it's not what most people truly want. And so we've actually made great technological improvements to make sure that our AI training does not result in what's called hacking the grader. We really want to make sure that there is a good signal there that is about the goal, not just your short-term, what's going to get you a quick hit. And that to me is maybe the most important part of a vision for where our personal AI, personal AGI, is going to take us: to really make sure it's not just about something that looks good in the moment. It's really about alignment with your long-term well-being, your long-term goals, the thing that you actually want. And that is what I think will most empower people, is really putting you in the driver's seat, because you will have this entity that is there operating on your behalf 24/7. You're asleep, and it's out there trying to figure out, what is it that Shane wants, how can I do it better, and it's actually able to accomplish it.

Speaker 1:
[38:06] Are we in a global AI race?

Speaker 2:
I think we are certainly in a global AI renaissance. And I think that the dynamics between countries are not yet fully defined. We have this concentration of where the breakthrough algorithms come from, in the US, in Western companies. There's clearly a lot of innovation happening around the world. But exactly the balance of dynamics, like which countries rely on which providers, all of that is something that I think is still being determined.

Speaker 1:
[38:40] Is there a consequence, do you think, for the United States not being the first country to reach AGI?

Speaker 2:
Well, I do think that leading in AI is very critical for America, because I think that this is how you can ensure that democratic values are protected and preserved. And I think that every country is also starting to realize that they need some sort of sovereign AI strategy. If this is becoming the basis of economic security, national security, they need to participate somehow. And if you look at a lot of the efforts by the United States to think about how to manage chip exports, how to think about technology exports, there's something where if you lean too far out, then everyone else has to develop their own competitor or rely on someone else who's building this. If you lean too far in, then maybe you lose your advantage. And the question is, how do you balance those? How do you maintain your leadership? But leadership is not just about being ahead. Leadership is about also bringing the world along with you.

Speaker 1:
[39:49] Are other countries stealing advancements? I've been reading a lot about distillation.

Speaker 2:
There are certainly a lot of attempts to distill models. And that comes from companies in the US; it comes from all over the world. But I think it misses the core point, which is that the way this technology is developing is on an exponential. Anytime we have a model, we've already moved on to the next one. We're already moving to the next level. So we put in a lot of effort to protect against distillation, to make it harder to do, especially with things like the chain of thought and other parts of the model that are not really necessary to get the benefits to someone, to get the outputs to someone. But the core advantage that we have, the strength that we're building up over time, is not just any one model. It's the machine that makes the models.

Speaker 1:
Is that why you guys stopped showing reasoning?

Speaker 2:
That is part of it. There are two reasons. One is to think about distillation. But the second, in some ways more important, is that we had this insight when we first developed the reasoning paradigm that it gives us an interpretability mechanism we had not been anticipating, because you can really read the model's thoughts. You can see exactly how it got to an answer. You can interpret what was actually motivating that answer. Now, the problem is, if you train the model to have a chain of thought that looks good, then you lose all the faithfulness. The model knows that part of the desired answer is for the chain of thought to look a certain way, and so it may no longer be representative of how it actually arrived at that answer. We made an early decision to say we want to avoid any temptation to train these chains of thought to look favorable, to look like something you could present to a user. And so that really made us lean away, for multiple reasons, competitive reasons and safety reasons, from showing these intermediate thoughts.

Speaker 1:
It seems like the current trend right now is to release preview models. Is that because we're compute constrained, do you think?

Speaker 2:
I would say that we, in general, are heading to a compute-constrained world. If you think about the amount of value that these models can produce for someone, it's extreme. It's not just answering a quick question anymore. It's not even just giving you access to health information. It's really going deep and spending a lot of tokens to put together a bunch of different data sources, to search through your enterprise knowledge base, to actually be able to solve a hard problem, to write that software better than a human would be able to. All of that is something that is hard. If you look at the progress that we made from GPT-5 to 5.1 to 5.2 to 5.3-Codex to 5.4, it's been extreme, and these models are getting extremely better at understanding your intent, molding to what you want to accomplish. We also put them in these surfaces, like Codex, that make them very usable, so that as a developer you can really fly, you can achieve more than you would have dreamed otherwise. All of this is powered by compute, fundamentally, and there's not enough compute: if you just wanted one GPU for every person in the world, you're talking about eight billion GPUs. We are not on a trajectory to build anywhere near that level of compute. Hundreds of thousands of GPUs, that's a pretty big fleet these days; millions of GPUs are coming up. So it's not surprising that there is too little compute in the world, and we're going to have to do much more in order to really be able to bring this technology to everyone. Then in terms of how we tend to launch things, we have put in a lot of effort to make sure we are building compute in anticipation of what we see coming. So I think we're going to be very focused on our mission of bringing these models to everyone, making them widely available.

Speaker 1:
You guys were teased for putting so much effort and money into data centers. How do you think that's playing out now?

Speaker 2:
Well, I think it's going to give us an advantage, not just for the business, but for actually delivering on the mission of bringing this technology to everyone.

Speaker 1:
[43:56] Because you guys, you saw that way in advance. You got teased for it by almost all of your competitors.

Speaker 2:
[44:03] Who's laughing now?

Speaker 1:
[44:04] Yeah.

Speaker 2:
[44:06] I mean, I think our competitors are not having a good time on compute. Let me put it that way.

Speaker 1:
But you must have seen something that they didn't see. Everybody was, at least it seems from the outside, in a very similar technological space. They all knew this was coming, and yet you guys had the boldness to make that bet with $100 billion.

Speaker 2:
But that is the core of OpenAI: really encountering reality as it is. Really thinking about what the implications are of what we'll accomplish in the next six months, the next 12 months, the next 10 years. That is true for the grand mission. It's true for day-to-day, how we design different pieces of our software, and it's true for things like scaling up compute. I think that we are deeply motivated by bringing this technology to everyone, and we think about lots of different mechanisms for how to do that well and safely.

Speaker 1:
Smart operators look for leverage. Video is the most effective way to communicate, but it's always been too slow and expensive to produce. HeyGen eliminates that. You go from idea to professional video in minutes, no camera, no crew. Their avatars are rated the most realistic on G2, and over 30 million people are already using HeyGen, including 85% of the Fortune 100. You can even reach an audience in over 175 languages with AI lip sync and translation. Same voice, perfectly matched. If you're someone who communicates at scale, take a look. Your first three videos are free at heygen.com. Ever hit 3 p.m. and feel like your brain just quit? For a lot of people, that's not caffeine or sleep, it's electrolytes, and water alone won't fix it. That's why I drink LMNT every day after lunch. Zero sugar, no dodgy ingredients, just a real dose of sodium, potassium, and magnesium. I know you're all thinking electrolytes are for athletes, but you don't have to be an athlete to benefit from them, and it tastes great. Stay sharp in the afternoon and grab a free eight-count sample pack with any purchase at drinklmnt.com/tkp. That's drinklmnt.com/tkp. Do you think data centers eventually get dedicated towards a problem? You'll have a huge data center in North Dakota and it's just on solving cancer and that's all it's doing?

Speaker 2:
[46:41] Yes.

Speaker 1:
[46:42] How far away are we from that?

Speaker 2:
[46:44] I think that this kind of thing happening this year is not out of the question. It's really amazing if you think about it, having this giant machine. Have you been to any of these data centers?

Speaker 1:
[46:55] No, I've seen them online but never in person.

Speaker 2:
It is a very different experience to walk amongst these racks, to walk down the rows, and you look at the cables that are all perfectly, exactly the right length, and you realize that what a data center is, is a massive machine. These are maybe the biggest machines that humanity creates. Then you ask the question of why. Why do we build these machines? Why is it worthwhile? It is because they have the potential to solve problems that matter for people. To come up with cures for cancer, to help people run businesses, or sometimes maybe it's mundane queries. The purpose in my mind is really about how you deliver value, how you deliver on people's goals. I think the opportunity presented by these massive machines targeting one problem is something we have not yet really internalized.

Speaker 1:
But if we're compute constrained, how do you decide who to serve? Why are you serving me when I'm trying to make an image, over solving cancer?

Speaker 2:
Well, this is going to be the most important question for society to answer. Where does the compute go? What problems are worthy? There are lots of worthy problems, but you need to prioritize them because you only have so much compute. One thing we really believe in is that everyone is going to need access to compute. That's why we have a free tier of ChatGPT. We've really put effort into making sure that people are able to use this technology, that it's widely available, because we believe that is core to what we're doing here. We think that putting this technology in people's hands empowers them, lets them achieve goals. It helps them also understand the technology, which helps them then shape how this technology slots in. You could take a very different approach and say, well, it's all about the ivory tower: just solve the problem, and we will then distribute the technology breakthroughs in some way. I think there's merit to that as well, but that's not where I'd put the balance of what we do. We do want to make great strides on specific problems, but that should be in service of the same thing: we want the benefits of this technology to be broadly distributed.

Speaker 1:
How do you think about it internally at OpenAI, between consumer and enterprise?

Speaker 2:
Well, a lot of what I've been thinking about recently has been focus. Because this field, it is opportunity incarnate, right? You can take AI and apply it to any problem; any sort of thing you want to build is now on the table. The problem that we have is that there's only so much compute. Where do you want to put it? So you need to have synergies; you need to return to the fact that you have multiple things happening that all add up, 1 plus 1 equals 10. That's the dream, that's the goal. A lot of what I think is important for this next phase of OpenAI, very clearly, is enterprise, because the economy is becoming this compute-powered economy before our very eyes; it's happening right now. We've seen this with software engineering, and it's going to happen with every single field of work people do with a computer. Everyone's computer work is going to be something where, rather than you doing work with your computer, your computer is going to do work for you. It's truly going to be amazing, and so we need to be there to help people deploy these models, figure out how to utilize them, figure out how to get the most benefit out of them. By the way, there's also going to be a blurring of the line between what is enterprise and what is consumer, because entrepreneurship is going to become far easier than ever before. We're seeing this already. For example, one of my friends described how his sister was describing this app that she really wished someone had created; she had this picture of exactly what she wanted. And he, in the meanwhile, was typing it into Codex, uh-huh, uh-huh, and then pushed enter. A few hours later, he shows her this app and she's like, wait, what is this? Where did this thing come from? Who built this? And he said, you did. And that is, I think, just an amazing thing, where you realize anyone can be a builder. These tools, like Codex, are for everyone. It's not just for software engineers; everyone now can be a software engineer. If they have a vision, if they have agency, if they have a thing that they want to accomplish, they now have this magic tool that can do it. And then on the consumer side, the thing about consumer is it's too broad of a term, right? There are lots of different things. There's entertainment, there's a bunch of things in self-expression, and there's also solving goals. And the aspect of consumer that we're really dialed in on is solving goals. We believe in this technology; you look at smartphones, something like 4 billion people use them. All of those people should have a personal AI, a personal AGI that's out there, that knows them well, that has their personal context, that is trustworthy, that they can ask for advice, but that also knows them so well that if your favorite musician is in town, it just goes and proactively purchases tickets. Maybe it knows, oh, I should check in before doing this, or maybe it knows, yep, I've just got to do this and I have prior approval. That level of having an AI that knows you and can help you achieve whatever it is you want to achieve, and help you flesh out what your goals are. You should still set those goals, they should be your goals, you should be in charge, but that is something we want to create. That is something I think that not just 4 billion people are going to want and need; I think it's going to be 8 billion people.
I think that the whole planet is going to really benefit from, and need access to, personal AI, personal AGI. So you look at those two dimensions, deep knowledge work and broad distribution of access to an agentic system, and we want to build both, and they come together because ultimately they're the same technology. Ultimately, you want an AI that is there in the cloud, that has access to information, that is trustworthy, that is able to give good answers and take actions on your behalf, whether it's building or whether it's in your personal life. Maybe you have multiple instances of it, but fundamentally it is one technological system.

Speaker 1:
[53:05] Do you think we'll have data centers in space?

Speaker 2:
[53:08] I think we're going to have data centers everywhere.

Speaker 1:
[53:10] How far away do you think we are from that?

Speaker 2:
[53:12] Well, data centers in space have many technical problems associated with them. For example, even the data centers we build today are very finicky. They're these massive machines with very breakable, very expensive components. We've had many issues in the past where the cables were just too taut, literally just too tight, and then you get signal integrity issues and the computer doesn't work. So you have to figure out how to maintain these systems: today, people go in and physically pull components; probably we'll move to robotics. I think figuring out how to solve some of these technical problems is going to be a very important dependency as we think about putting data centers in space. People talk about putting data centers in various difficult locations. Space feels like a grand challenge, but I think that we have such need for compute that we need to be thinking about all options.

Speaker 1:
[54:07] What is iterative deployment and why do you do it?

Speaker 2:
[54:11] Well, iterative deployment is one of the core pillars of how OpenAI has approached getting this technology to benefit people and achieving our mission. I was probably the person who articulated those two words, but the spirit is something that really emerged as we thought about our first product deployment and how it connected to what we're trying to do. You realize that there were two different routes you could take. You want to build an AGI that's going to benefit people; how do you do it? One is you build it in secret. You don't deploy anything, you have a lot of time to just polish it, get it right, but then at some point you push a button and you say, deploy. I remember thinking, could I sign up for that strategy? Could I be accountable for that strategy? Do you want to be sitting in a room thinking, okay, we ran all our tests, are we ready to deploy? You've never deployed anything ever before. That's your first contact with reality, with a very powerful system that's going to really change the world. That is a very tough problem set. But instead, what if you take an approach where this is your 100th system? You've had to solve this problem 99 times before with systems of increasing power, and the world has also had a chance to adapt to them, to reconfigure around them. We learned this very early on with GPT-3. We got to see very concretely what it's like to deploy something where we had spent a lot of time thinking about all the misuses of GPT-3, all the ways it could go wrong. We thought about misinformation, we thought about these kinds of grand pictures. And you know what the number one misuse of GPT-3 was?

Speaker 1:
[55:50] What?

Speaker 2:
[55:51] It was medical spam, advertising different drugs to people. It's not something we ever would have thought of as a problem, but we saw it in front of our eyes, and we got a chance to react and learn from it. So iterative deployment is the idea that we will bring intermediate versions of this technology into the world. Now, it's not an excuse to just blindly deploy. You still need to think at every step about what's the best view on all the ways this might be misused, what the downsides are, what the risks are. You mitigate those, but you get to see it. You get to see if you're right, learn from reality, and do better the next time.

Speaker 1:
[56:24] I think people don't understand the extent to which this is all new, that there's no playbook. You're figuring this out as you go, too, on perhaps the most rapidly deployed, and most powerful, technology in the world.

Speaker 2:
[56:39] It is true that at various points in OpenAI's history, we've had some hope that, hey, there are people who have deployed transformative technologies before; maybe they can tell us the answers. It's never been so simple. They do have wisdom and insights, and that's something that I think we've really incorporated. But we realized that we're the closest ones to this technology, that by virtue of creating it, we have an understanding of the ways in which we could shape it that is hard for someone who isn't so close to it to opine on, to advise on. One observation I have is that the right choices are extremely specific to the facts of the technology. There are different pressures exerted by cell phones versus mainframe computers versus AI versus electricity. Each one of these has its own unique proclivities and problems, its own ways of being developed. The individuals doing it matter, too. The dynamics between different humans, these human factors, have been hugely impactful for how AI is playing out today in front of our eyes. A lot of what we spent our time doing from the beginning of OpenAI, and really even before, is dreaming. You spend a lot of time really thinking about all the implications of what you might do. One thing I've observed is that we haven't really been surprised by the moments along the way, but we have been surprised by when they arrive, by how hard they are to accomplish, by exactly the order in which we see them. The world that we are moving towards is, I think, one that is in many ways more wonderful and awe-inspiring than many of the ones we anticipated.

Speaker 1:
[58:18] If one frontier lab makes safety a primary concern, and another doesn't, how do you view that competition playing out over time?

Speaker 2:
[58:30] Well, I think we have found that safety is actually a core product feature. No one wants a model that is not aligned with them. You want a model you can trust, one that does the right thing in any circumstance you give it. So we have invested, I think, far more in safety than people perceive, and possibly more than any other lab. With ChatGPT, we have the broadest deployment of these language models in the world, used by the most people. We have to care. We've always cared, but you really see it in our being able to bring this technology to so many people. So I don't think there's a sustainable state where the people who are building this technology and having successful products are not also investing super hard in safety. I think the challenge, if you step back, is that there are some aspects of what it means to deliver safety that are not necessarily short-term. You have to think long-term, not just for your business, but for what it is that you're creating. Some of this is about how you train the models. Some of this is about how you get your feedback loop. But I would just say that we are committed to safety as part of our mission, and I think that has played out in our products and in the world.
One thing that people also miss is that it's not just about the safety of the model, it's about the resilience of society. If you look at how transformative technologies enter the world, society builds around them, around their strengths and their risks. You think about engines: you build cars, but you also need seatbelts, you also need roads, and you reorient cities around how this technology works. You think about electricity: you have various safety standards, you have rules about where you're allowed to put electric poles and high-voltage lines and all these things. I think the same will be true for AI. It's not just about the technology itself, it's not about the model itself, it's really about how they integrate into the world with a society that is resilient. And that is something we're investing in very significantly. The OpenAI Foundation has this as one of its key focuses: trying to help society invest in and build a resilience layer for AI.

Speaker 1:
[60:56] What do you think regulation for AI should look like?

Speaker 2:
[60:59] Well, I think there are a number of different pieces to what regulation for AI needs to accomplish. One that I think is very important is that we need to ultimately ensure this technology benefits people. It is very clear that institutions, jobs, just life paths that people thought would be stable, those assumptions may not hold anymore. And we need to make sure that we provide support, that we're all there to support each other as this technology rolls out. So what does that mean from a regulatory perspective? I think there are a lot of ideas, whether it's things like everyone should have access to compute. How do we ensure that that's true? How do we ensure that as this technology starts to generate more economic value, it doesn't accrue to just one place? There should be something that everyone is actually benefiting from. This technology shouldn't just abstractly benefit the economy, which it is very clearly going to do. It should be something that people directly feel in their daily lives, that their life is better because this technology exists, because they're using it, because they're able to accomplish more.
In terms of the ways I see this playing out, it's very important to ground it in what we're really seeing. A good example is the number of people who say that their life was saved, or the life of a loved one was saved, through the use of ChatGPT. You realize that's something that should be supported and protected. So a good example of how you can do that through regulation is thinking about privacy and privilege. You talk to a doctor, you talk to a lawyer: those are privileged conversations. You feel comfortable sharing them. There are certain guardrails, well defined in the law, on when the health care provider would have to provide that information to law enforcement or alert someone. We don't have anything like that for AI right now. But people are using these tools, and they should use these tools, because they're so important for giving people access to information they wouldn't be able to get otherwise, and they should have the appropriate understanding and protections there, too.
So I think there's a lot of really leaning into thinking about how these models insert into people's lives. How do we make sure that we can continue to innovate, while also making sure that the benefits flow broadly? How do we ensure that America remains a leader? You think about robotics, where I think we are not the leader. For AI, we have to make sure that we continue with this remarkable position that we have been able to achieve. And you think about things like data centers, where there's clearly been a lot of concern about questions like, do they drive up electricity prices? We have a commitment to ensure that they do not. I think each of these things can be achieved through many different mechanisms. Sometimes it's through regulation, sometimes it's through commitments from the company, and sometimes it's just through people understanding the facts. A good example is data centers and water usage. That's something people talk about a lot, but our data centers actually use incredibly little water. The claim that they use a lot is misinformation.

Speaker 1:
[64:09] It's less than a household, isn't it?

Speaker 2:
[64:11] It is, because it's a closed loop. You basically fill up a giant, think of it as a swimming pool of water, and you just circulate it around. So it's a fixed amount of water that's not very large. But it comes back to people really understanding the why. Why are we building these things? Why is it worthwhile? How does it benefit me? And being able to give people that empowerment, whether it's helping them feel that they can be an entrepreneur now, that they can build a business, that they can create something. All of that we have to solve for; we have to make sure that people feel it in their daily lives.

Speaker 1:
[64:45] When I told people I was doing this interview, one of the common reactions was fear for their jobs and uncertainty about the future. What would you tell them?

Speaker 2:
[64:53] Well, I do think that with this technology, it is uncertain exactly how it will play out. I think it will be surprising how it plays out as well. The AIs that we have right now, the world that we have right now, are not really something that was anticipated by science fiction. It's just different. And some seemingly inevitable conclusions, I think, actually turn out to not quite look the same way when they come to pass. I believe it's always easiest to see what you lose. The change is coming; there is no denying that. But it's much harder to see a priori what you gain. As an example, just think about Uber being described to someone in 1950. You have to think about computers. You have to think about mobile phones. You have to think about GPS. And it's all so you can get a car to appear where you are in three minutes. That's actually crazy if you think about that level of technological investment for that kind of use case. But it really happened. And it didn't just happen for that one use case; it happened for thousands, for tens of thousands, for millions of other use cases.
So my view of AI is that it is about empowerment. It is about human agency. And that does mean that some of these institutions, jobs, these kinds of things, there will be things that we thought we could rely on that turn out not to be as stable as we thought. And so it will affect people. But the question to lean into is, what do you gain and how do you benefit from it? Now you can be a builder; anything you can imagine can become real. Well, what do you imagine? How do you build that skill by really leaning into this technology? One thing that I have observed across multiple generations of this technology is that the people who seem to be getting the most benefit out of it are the people who did the same with the previous one. So the more you build that skill, the better, and the core of it is agency: having a vision, having ideas, because the barrier to entry for trying them out is now lower than ever before. I think there will be new opportunity created. I think the world does need to think about how we support everyone through this moment of uncertainty, through whatever transitions will come, because the economy will be a compute-powered economy. It will be different. But I think there will be a place for everyone to contribute.

Speaker 1:
[67:15] Where should young people be investing today? If you're in high school or university or just trying to start out in a job, what skills do you think will be more valuable in the future?

Speaker 2:
[67:25] Well, I really think leaning into this technology is going to be a critical skill, just really understanding how you get the most out of AI. Because we're all heading to a world where we're managers of agents, and soon maybe the CEO of an autonomous AI corporation. Just imagine if you had the workforce of a 100,000-person company all at your disposal, operating on your behalf.

Speaker 1:
[67:53] 24/7.

Speaker 2:
[67:54] 24/7. As long as you've got the tokens, the compute, to power it. Which, again, is why I think everyone needs access to compute; that's so critical for the world to figure out and get right. Because at that point, you can point that workforce at any problem, and the number of problems that humanity could want to solve is boundless. I think the more people lean into this technology, figure out how to take advantage of what's coming, how to combine these technologies in new ways, how to interact with their agents and really manage them, the more they can think about: What is it that I want? What is my sense of self? What is my purpose? What do I want to see in the world? It is going to be easier than ever to accomplish that. And I think that world, with what we gain, is going to be almost unimaginable in its upside.

Speaker 1:
[68:44] That's the most positive view of the future. What's the most negative one you can imagine?

Speaker 2:
[68:49] One thing that's very interesting about how technology has played out to date is that it's really been about contorting ourselves to the machine. You think about how many people work where you have this box, and you're typing away at it, and you're getting carpal tunnel, and your shoulders are hunched, all of those things that are not natural. That's not really what we're designed for. We're going to be moving to a world where it's not just that you're doing work with your computer; your computer actually does work for you. That presents opportunities, and I think it presents risks. I think we need to figure out how to mitigate those. One core thing, at the end of the day, is that if you have machines that help people actualize their goals, that are out there doing what you want, sometimes people have conflicting goals. How do you resolve that? How do you decide what the bounds are on what an AI will help you with and what it won't? You have to really try to figure out how this slots into society. How do you make sure that the benefits don't just go to one corporation, one set of people, but actually lift up everyone? We need to raise the floor so that everyone has access to a great life and to this technology, and is able to do things with it. And I think it will correspondingly also lift the ceiling. So I think we're going to be in a world where everyone is going to have new opportunities, where there will be more of, I don't know if the right word is a safety net, but something that really makes sure that everyone gets brought along. And then we're going to be able to accomplish so much more. You think about things like access to medical care. We should be in a world, if we do our job right, where everyone has a doctor in their pocket that is better than any team of doctors today. The world's best doctors, there for you. They care about you. They're actually reading your charts, thinking 24/7 about how to help with your condition. It's disruptive, right? It's not going to come for free in terms of how this technology will interact with the world, and we've already seen the beginnings of it. But I think what we're going to see over even just the next two years will be a force for good. We just have to also acknowledge all the ways it could go wrong, the risks of it, in order to achieve those upsides.

Speaker 1:
[71:16] We always end every podcast with the same question, which is what is success for you?

Speaker 2:
[71:22] Achieving the OpenAI mission of ensuring that artificial general intelligence benefits all of humanity.

Speaker 1:
[71:28] Thank you very much. This was awesome. This was a great conversation, man.

Speaker 2:
[71:32] Thank you. I had a great time.