transcript
Speaker 1:
[00:00] One, two, three, one, two, three. Yeah.
Speaker 2:
[00:02] Does that sound right?
Speaker 1:
[00:03] We're going to do all that personal drama in 90 minutes.
Speaker 2:
[00:05] Oh, yeah.
Speaker 1:
[00:05] Oh, don't you worry. Don't you worry. Superintelligence. Is this part all going to be in the podcast?
Speaker 2:
[00:10] No, maybe. Bear with me through my slightly silly intro. But I think you guys are going to like it. It will be okay. Welcome to the Fortress of Finance, the Capital of Capital. This is not that podcast. This is Core Memory. I'm Ashlee Vance. And I'm Kylie Robinson. And I think we have a hell of an episode for you. Normally we do an introduction of people at this part, but that's perhaps unnecessary today. We have Sam Altman and Greg Brockman, the co-founders of OpenAI. You may have heard of it. Thank you guys for being here.
Speaker 1:
[01:01] Thank you for having us. Thank you so much.
Speaker 2:
[01:04] I think this is the first time you guys have ever done a podcast together.
Speaker 1:
[01:07] That is amazing, but I think that's true. Certainly in a long time.
Speaker 2:
[01:10] In a long time.
Speaker 1:
[01:11] I think maybe the first one.
Speaker 2:
[01:13] And I will not fault you for doing it on our show, but you did buy a podcast. We just lucked out.
Speaker 1:
[01:19] You lucked out.
Speaker 2:
[01:20] Yeah. I'll take it. I was curious why you guys bought a podcast. It's not really, you don't have to go deep on it, but did you have quick thoughts on that?
Speaker 1:
[01:29] I think that the people who do TBPN are incredible. I think they're just very creative thinkers. And in this world that we're moving to, of building these AI systems that are so useful for people, and helping people understand why that's valuable for them in their personal lives and work lives, these are the kinds of people who I think could help tell that message.
Speaker 2:
[01:49] I've seen you on it. Have you been on TBPN?
Speaker 1:
[01:51] I have. Yeah.
Speaker 2:
[01:52] Okay. Okay. I don't watch all the episodes.
Speaker 1:
[01:55] It's a fun pod.
Speaker 2:
[01:57] Well, I thought since you guys haven't done this together, at least in a while, I was going to go down Nostalgia Lane just for a little bit at the beginning. Kylie and I have gotten to know both of you over the years. We were reflecting as we were preparing for this. We're a little bit past the 10-year anniversary. You guys are two of the remaining co-founders; I think Wojciech is the third. So you're this constant line that's been running through the company. You started as an underdog and ended up as the top dog, all through a lot of drama and undulations. I was genuinely curious about how your relationship has changed through all this, how you guys have played off each other, and whether it's morphed over time.
Speaker 1:
[02:46] It has been extremely nice. Look, we all wish there were less drama. We wish we just got to focus on the tech. But in a world of so much chaos and drama and tension and fighting and power struggles, it has been unbelievably nice to have a relationship with someone who has the full context. We have all this history and really depend on each other in amazing times and very tough times. It has been one of the nicest things about all of OpenAI. In many ways, the very first moment of OpenAI was right after this dinner that we did in July of 2015. Sam and I were driving back to the city together, and we looked at each other and we were like, we have to do this. There had been all this conversation of, is it too late to start a lab that could go after AGI and have a positive impact? It seems so ridiculous now that we were so worried about that. Yeah. But it was too late. Yeah. You missed it. You missed it.
Speaker 2:
[03:41] I remember feeling like that when you guys started. I was like, no.
Speaker 1:
[03:44] DeepMind is going to run away with this. Yeah. The conclusion of the dinner was, it wasn't obviously impossible. I think both of us just felt like, okay, this is just so important. We just have to do it. Yeah. I think that spirit continues. A lot of how we operated in the early days, I remember I was unemployed at the time, so I was full-time on it the next day. Sam actually had a day job, but we were constantly on the phone, probably like five times a day. Yeah. Exactly.
Speaker 2:
[04:08] Were you close friends already at that point or not?
Speaker 1:
[04:12] We had known each other for a super long time, or it felt like a super long time. I don't actually remember. Yeah. We met in 2010, 2011. Whenever you started at Stripe. That's right. Yes. We met through the Collisons. Yeah. We'd been kind of casual social friends. I guess it wasn't as long as I thought. Yeah, maybe it was 2010 and this was now 2015. So five years. Time compresses.
Speaker 2:
[04:32] Yeah. Being in the pressure cooker of all this doing this work, I would imagine you guys have only got closer over time.
Speaker 1:
[04:44] Yeah, people use the word trauma bonding. I hate that. I prefer the other phrase, about the people you're in the foxhole with. But one of the nice things about hard work no matter what, and certainly hard work in stressful times, is that you really forge these relationships that I at least have not seen get formed any other way.
Speaker 2:
[05:03] Yeah.
Speaker 1:
[05:04] And I do think the way that Sam and I work and relate is maybe different from what you'd expect from a typical co-founder relationship. But I think that we are just in constant contact. That five calls a day, two minutes, five minutes each, that kind of spirit remains. I think we're just in constant sync. And we don't always agree on everything. It's not like we come at the world from exactly the same point of view, but that's why we're so strong together. I think we have very complementary approaches, where Sam will say, here's an idea, and I'll think about, well, maybe we could do it this other way, or what about if we approached it from this angle, or how does this relate to this other thing that we're thinking about? And one thing I deeply appreciate about Sam is that he always sees these connections between different ideas, or just keeps focused on, here's the big picture that we need to get to, and then together we figure out, well, how do we actually do it? And I think connecting the grand ambition with the execution is what has always distinguished OpenAI.
Speaker 2:
Yeah, what are some of the points in these 10 years where you felt like it was really important that you guys diverged? Do you remember any key moments for you guys?
Speaker 1:
I think one of the things that Greg has done the best, which is not my instinct, is really just push to focus on the most important thing, in his own work and also in what the company is going to do. So there have been times where I have wanted to do more things, and Greg has just said, is this the most important thing? Let's really just do this. Let's get the company focused. And we've diverged on that, and that's been a very helpful spirit of Greg's throughout the company. Yeah, and I would also add, for example, thinking about compute and just constantly raising the ambition.
And sometimes I feel like, okay, I logically know that yes, we're moving to this compute-powered economy, and yes, that demand is always going to outstrip supply. But we've got all this hard work to do, and we already have all these big computers, and we're operationalizing them, and you still have all of this physical infrastructure to build, and you feel already swarmed in it and swamped in it. And Sam is like, no, we need even more. That has actually been a very important thing, because sometimes it's easy to lose sight of the higher-order bit: the fact that this is going to be so important not just for the next six months, but for the next two years, the next five years, ten years. You need this balance of sometimes swimming in the details, but you can't be swamped in the details. And I think that balance, again, is what has really contributed to what OpenAI is and where we are going to go.
Speaker 2:
[07:43] There must be one product or strategy that you guys have disagreed about. What's the one you've disagreed about the most vehemently?
Speaker 1:
[07:52] I was just thinking about this when Greg was talking. It's not a product, but it was the thing that came to mind that I was going to say before you asked that. We used to talk a lot about how to talk about safety. We never disagreed on the extreme importance of safety and what it will mean to get this right or get this wrong, but the field has had a strange relationship with how we've talked about safety, how we view safety, and how much that becomes about power versus actually keeping things safe. And I think earlier in our history, I got swept up more in the idea that we've got to talk about this in a particular frame. And Greg was very disciplined about, we're not going to fall into the traditional frame. We can't talk about it that way. Even then, because this is so important, I think we probably have fallen into the trap of still talking more in the wrong frame than we should. But I think one of OpenAI's greatest contributions to date has been finding a different way to talk about safety, not just in how we build the products and how we talk about what society needs to do, but in how we deploy them, the whole idea of iterative deployment, and actually getting to a world where we're figuring out how to deploy products that get increasingly safe as the stakes go up. And Greg really held a line there, against extreme pressure not to, that I think has been quite important to the company. And I think that's been quite important to our whole strategy, not just how we talk about things, but how we ship and build products. Yeah, and if you look at, for example, the OpenAI Foundation, which is the nonprofit that governs OpenAI and holds a very large chunk of equity, one of its pillars is AI resilience. And what that really means is thinking about how do we make AI something positive for the world? And the answer is not any one intervention, right?
It's not that you have chain-of-thought monitoring and now you've achieved the mission. It's really a whole deep sequence of different ways that society should orient around this technology. And I think this perspective is that you're not going to solve AI going well for the world in a paper. It has to.