title How Anthropic’s product team moves faster than anyone else | Cat Wu (Head of Product, Claude Code)

description Cat Wu is Head of Product for Claude Code and Cowork at Anthropic, building one of the most important AI products of this generation. Before joining Anthropic, Cat spent years as an engineer and briefly worked in VC. Today, she’s interviewing hundreds of product managers who are trying to break into AI—and seeing firsthand what separates those who thrive from those who fall behind.

We discuss:
1. How Anthropic’s shipping cadence went from months to weeks to days
2. The emerging skills PMs need to develop right now
3. Why you need to build products that don’t yet fully work, so you’re ready when the next model closes the gap
4. Cat’s most underrated AI skill: asking the model to introspect on its own mistakes
5. Why Claude’s personality is core to its success
6. Why Anthropic’s mission alignment eliminates the friction that slows most large organizations
7. Why “just do things” is the most important principle for working at AI-native companies

Brought to you by:
WorkOS—Modern identity platform for B2B SaaS, free up to 1 million MAUs
Vanta—Automate compliance, manage risk, and accelerate trust with AI

Episode transcript: https://www.lennysnewsletter.com/p/why-half-of-product-managers-are-in-trouble

Archive of all Lenny's Podcast transcripts: https://www.dropbox.com/scl/fo/yxi4s2w998p1gvtpu4193/AMdNPR8AOw0lMklwtnC0TrQ?rlkey=j06x0nipoti519e0xgm23zsn9&st=ahz0fj11&dl=0

Where to find Cat Wu:
• X: https://x.com/_catwu
• LinkedIn: linkedin.com/in/cat-wu
• Newsletter: https://catwu.substack.com

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Introduction to Cat Wu
(01:29) Working with Boris Cherny
(04:29) What Anthropic looks for when hiring PMs
(06:18) How to help your teams move fast
(08:58) How PRDs and roadmaps have evolved at Anthropic
(10:28) The Mythos model and Anthropic’s shipping velocity
(11:54) What happened with the Claude Code source code leak
(12:53) Integrating with OpenClaw
(14:19) How the PM team is structured at Anthropic
(15:42) How engineer and PM roles are merging
(17:54) Why product taste is the most valuable skill
(20:10) Where human brains will continue to be useful
(22:23) How to stay sane in constant chaos
(24:16) What gets sacrificed when you ship so fast
(27:47) The /powerup command
(28:32) Why Anthropic has been so successful
(32:28) When to use Claude Code vs. Desktop vs. Cowork
(35:58) Tips for getting started with Cowork
(38:44) Demo: Using Cowork to build slide decks overnight
(41:48) Cat’s PM tech stack and internal tools
(46:47) Which teams use the most tokens
(51:15) The emerging skills PMs need for AI companies
(55:00) Why building evals is underappreciated
(58:44) Why Claude’s character and personality matter so much
(1:00:44) How new models force product changes
(1:05:11) The vision for Claude Code and Cowork
(1:07:22) Advice for thriving in an AI-driven world
(1:09:18) Why 95% automation isn’t good enough
(1:11:58) Build apps you use every day, not prototypes
(1:13:41) The divide between AI skeptics and believers
(1:15:19) Lightning round

Referenced: https://www.lennysnewsletter.com/p/how-anthropics-product-team-moves

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Lenny may be an investor in the companies discussed.


To hear more, visit www.lennysnewsletter.com

pubDate Thu, 23 Apr 2026 15:01:50 GMT

author Lenny Rachitsky

duration 5134000

transcript

Speaker 1:
[00:00] I think it is very hard to be the right amount of AGI-pilled. It's very easy to build a product for the super AGI-strong model. The hard thing is figuring out for the current model, how do you elicit the maximum capability?

Speaker 2:
[00:13] I've never seen anything like the pace you folks at Anthropic are shipping at.

Speaker 1:
[00:17] We want to remove every single barrier to shipping things. The timelines for a lot of our product features have gone down from six months to one month and sometimes to even one day.

Speaker 2:
[00:26] You're interviewing hundreds of PMs, and you just keep feeling like they're approaching it very incorrectly.

Speaker 1:
[00:32] The PM role is changing a lot. It's changing really quickly. The thing that is extremely important for building AI-native products is iterating so quickly, figuring out a way for you to actually launch features every single week.

Speaker 2:
[00:44] What do you think are the emerging skills PMs need to develop?

Speaker 1:
[00:47] It comes back to product taste. As code becomes much cheaper to write, the thing that becomes more valuable is deciding what to write.

Speaker 2:
[00:56] Today, my guest is Cat Wu, Head of Product for Claude Code and Cowork at Anthropic. Cat is at the center of everything that is changing in AI and product and building, and she and her team are building the product that is most changing the way that we all build our products. She is so full of insights and wisdom and lessons. This is an episode you cannot miss. Before we get into it, don't forget to check out lennysproductpass.com for an insane set of deals available exclusively to Lenny's Newsletter subscribers. With that, I bring you Cat Wu. Cat, welcome to the podcast.

Speaker 1:
[01:35] Thanks for having me.

Speaker 2:
[01:37] I have so many questions, I'm so excited to have you on this podcast. I want to start with giving people an understanding of your role alongside Boris. Everybody knows Boris, his episode is the number one most popular episode on this podcast, no pressure. He created Claude Code, he leads the eng team, he ships a bazillion PRs a day from his phone, just like, I don't even know what the number is anymore. I think people don't give you enough credit for the success that Claude Code has had, and Cowork, and all the things you all are building. Help us understand your role on the team, how you work with Boris, how you split responsibilities. Just like, what does the PM role look like on the Claude Code team?

Speaker 1:
[02:16] I feel very lucky to work with Boris. He's been an amazing thought partner, he's our tech lead, he's very much the product visionary, and he is great at setting, like, this is what the product needs to be in 3 months, 6 months from now. This is what the AGI-pilled version of the product is. And a lot of my role is figuring out, okay, what is the path from where we are today to that vision 3 to 6 months from now? And I spend more of my time on the cross-functional side, so making sure that our marketing team, sales team, finance, capacity, etc. are bought in on the plan and that we're all rowing in the same direction, and that once a feature is ready, there aren't any blockers to shipping it. I think in many ways it works well because we kind of mind-meld, but it is actually a remarkably blurry line. I think we're like 80% mind-meld. And then there's this 20% of things that maybe I care a lot more about than Boris does, so I'll drive those, and 20% of the work he cares a lot more about than me, and he just drives those.

Speaker 2:
[03:20] This episode is brought to you by our season's presenting sponsor, WorkOS. What do OpenAI, Anthropic, Cursor, Vercel, Replit, Sierra, Clay, and hundreds of other winning companies all have in common? They are all powered by WorkOS. If you're building a product for the enterprise, you've felt the pain of integrating single sign-on, SCIM, RBAC, audit logs, and other features required by large companies. WorkOS turns those deal blockers into drop-in APIs with a modern developer platform built specifically for B2B SaaS. Literally every startup that I'm an investor in that starts to expand up-market ends up working with WorkOS. And that's because they are the best. Whether you are a seed-stage startup trying to land your first enterprise customer or a unicorn expanding globally, WorkOS is the fastest path to becoming enterprise-ready and unblocking growth. It's essentially Stripe for enterprise features. Visit workos.com to get started, or just hit up their Slack, where they have actual engineers waiting to answer your questions. WorkOS allows you to build faster with delightful APIs, comprehensive docs, and a smooth developer experience. Go to workos.com to make your app enterprise-ready today. Something that you shared actually before we started recording is the fact that you're interviewing hundreds of PMs all the time. Like, if I had a nickel for every time someone asked me for an intro to someone at Anthropic to go work at Anthropic as a PM, I'd have 30 billion in ARR. It's just like the number one place people want to go work at, so I can only imagine how many PMs you're interviewing. You told me that you're just seeing people doing it wrong, the way they're approaching what they think it takes to be a successful AI PM. Talk about what you're seeing and what people need to understand about what it takes to be successful these days.

Speaker 1:
[05:03] I think before AI, technology shifts were a lot slower, so you could plan on 6-to-12-month time horizons. And because you were shipping features at a bit of a slower rate, there was a lot more emphasis on coordinating with all the other partner teams to make sure that they're shipping features that unblock your features, because code at that time was very expensive to make. I think now with AI, with how much that has accelerated engineering and with how quickly the model capabilities are improving, the timelines for a lot of our product features have gone down from six months to one month, and sometimes to one week or even one day. And with that, we actually need to make sure that products ship quite quickly. And what that means is, as a PM, there should be less emphasis on making sure that you're aligning your multi-quarter roadmaps with your partner teams and more emphasis on, okay, how can we figure out the fastest way to get something out the door? How can we figure out how to make a concept corner of our product suite where an engineer has an idea or a PM has an idea, and by the end of the week, we are able to get it into our users' hands? I think the PMs who do the best on AI-native products are the ones who can figure out, how can I shorten the time from having this idea to actually getting the product in the hands of users, and help define what are the most important tasks that need to work out of the box for my product.

Speaker 2:
[06:37] What I love about this is what you're saying is just like people haven't grasped how fast they need to move and how much of the job now is just moving, is helping the team move fast. What helps do that? What do you do, what does your PM team do to help them move this fast other than have access to the most advanced models?

Speaker 1:
[06:57] I think the first thing is to set clear goals, because LLMs are so general that it actually creates a lot of ambiguity in who we're building for, what problems we're trying to solve, and what the top use cases are. And so I think a great PM is able to say, okay, our key user is professional developers. The main problem that we want to solve for this feature is maybe there are too many permission prompts and people are feeling fatigue. And the use case is, we want professional developers at enterprises to safely get to zero permission prompts. And that actually sets a pretty clear goal, because it rules out a lot of potential approaches for reducing permission prompts, so that people can get a lot more done with one prompt. And then I think the second thing that's very important is figuring out some repeatable process for getting these features shipped. So for Claude Code, what we do is we actually ship almost all of our features in research preview. We clearly brand this when we ship something so that users know that this is an early product, this is just an idea, this is just something that we're trying to get feedback on and iterating on, and that this might not be supported forever. And what this does is it reduces our commitment for shipping something. We can just get something out in a week or two. And the third thing that a PM should do is help create the framework for the team so that they know when to pull in cross-functional partners and what those cross-functional partners' expectations are. So for example, we have a really tight process between engineering, marketing, and docs. When engineers have a feature that they feel is ready and that we've dogfooded internally, they post it in our evergreen launch room. And then Sarah, who leads our docs, and Alex, who leads PMM, and Tarek, and Lydia, and Devereux just jump in and can turn around the marketing announcement for it the very next day. And because we have this really tight process, it lowers the friction for any engineer to ship something. PM is the role that should be setting this up.

Speaker 2:
[08:59] How do PRDs fit into this? You said that goals are really important, but it's just like being aligned on, what does success look like? Who is this for? Who is this not for? Are you writing PRDs? Is it just a couple of bullet points? How has that evolved in the world of PM?

Speaker 1:
[09:11] So there's two things that we do. One is we have very rigorous metrics, and we do metrics readouts with the entire team every week. The goal of this is to make sure that everyone deeply understands all the facets of our business, what our key goals are, how they're trending and what drives them. The second thing that we do is we have this list of team principles, and this includes who our key users are, why those are our key users. And the reason that we articulate all of this is so that everybody on the team feels like they understand how our business works. They understand what's important to us and what we're willing to trade off. And it lets people make decisions by themselves without feeling like they're blocked on PM or any other stakeholder.

Speaker 2:
[09:55] I love how so much of this is like, okay, we still need PMs in the future. And there's so much talk of like, why do we need PMs? We're just gonna ship and build, we need engineers.

Speaker 1:
[10:03] Oh, we actually do PRDs sometimes. So I think for features that are particularly ambiguous, it does help to write out just a one-pager on what the goals are, what the delightful use cases are, and what the failure modes currently are that we need to fix. And there are occasionally some projects, especially things that require heavy infrastructure, that do take many months. And for those situations, we do still write PRDs.

Speaker 2:
[10:29] I want to drill a little bit further into just how you're able to move so fast. I've never seen anything like the pace folks at Anthropic are shipping at. Like someone made this calendar of launches across Anthropic, and it was literally every day there was like a major feature or product. So one question people had online is, you guys just launched this incredible, not launched, but built this incredible model, Mythos, that is still in preview because it's so powerful. People are a little afraid of what it can do. Have you guys been using this? Is this part of the reason you've been able to move so fast?

Speaker 1:
[11:03] We've been moving pretty fast for several quarters now. So I think it's not fully Mythos. Mythos is an incredibly powerful model. We do use the models internally. I think this has increased our rate of shipping a little bit, but I don't think it explains the bulk of the increase. I think a lot of it is the process and the expectation on the team. So we're very low on process. We want to remove every single barrier to shipping things. We want to make sure every single person on the team feels empowered to take their idea from just an idea to like out in the world in less than a week, sometimes even in a day.

Speaker 2:
[11:41] Cool. Oh man. What an advantage to have the best model and also be building product. That's so cool.

Speaker 1:
[11:46] We are very lucky to be able to work with the frontier models.

Speaker 2:
[11:49] Oh my god. What an awesome advantage. Just like, build a thing and then use it and then accelerate faster. It's so interesting. There are a couple of these other side things, I want to just kind of go on these side quests in this conversation. There's so much happening with Anthropic, and I'm just so curious to get your insight. One is, a week ago or so, the whole source code of Claude Code leaked. Somebody got it out there, and I think it was a mistake someone made. Is there anything you can comment on there? Just like, what happened? What went wrong? Where are your people now?

Speaker 1:
[12:15] So we immediately looked into this when we saw it. We realized that this was the result of human error. There was a human working with Claude to write a PR. This was just an update to how we release our packages, and it actually went through two layers of human review. So this was a result of human error, and we've hardened our processes to make sure that it doesn't happen in the future.

Speaker 2:
[12:40] This person is still at Anthropic? Are they doing all right?

Speaker 1:
[12:42] Yes, yes. It's a process failure. And the most important thing is to just learn from it and to add more safeguards so that it doesn't happen again. And so that's what we've been focused on, and most of those have shipped.

Speaker 2:
[12:54] Okay. Another question I had is OpenClaw. So recently, there's been this move to keep people from using their Claude subscriptions with OpenClaw. People got really upset. They're confused why this is happening. It feels like there's, you know, harm caused to the open-source community. What do people need to understand about what went into this decision?

Speaker 1:
[13:18] So we've been seeing a lot of demand for Claude and we've been working very hard to both scale our infrastructure and also to make our harness more token efficient so that you can get more usage out of it. It wasn't designed for third-party products which have different usage patterns than our first-party ones. We spent a bunch of time trying to figure out what is the most seamless transition that we can offer. And so I was very happy to be able to say that everyone gets some credits alongside their subscription. But yeah, we did have to make the hard decision that we needed to prioritize our first-party products and our API. And so this is the decision that resulted from that.

Speaker 2:
[14:01] Yeah, to me it makes so much sense. You guys are subsidizing this usage at like 200 bucks a month for basically unlimited use. And I think people don't understand, they're trying to make money, trying to be profitable here. You can't just give away compute when it's so in demand. So I get it. Coming back to the PM team, what does the PM team look like at Anthropic? How many PMs are there? How are they organized?

Speaker 1:
[14:26] Yeah, so we have a few PM teams. I think we're maybe around 30 or 40 PMs right now. So we have the research PM team, which Diane leads. This team is responsible for understanding all of the feedback from our customers about our models and then feeding that to the research team to act on. They also shepherd the model launches. There is the Claude Developer Platform team that maintains the APIs that Claude Code is built on top of. They also release things like managed agents, which is a way for you to build your agents and have us host them on your behalf. And then there's the Claude Code team, which works on both the Claude Code and Cowork core products. There's Enterprise, which helps make Claude Code and Cowork easier to adopt for all of our enterprise customers. So this is everything from cost controls to RBAC and security controls, just making sure that these enterprises feel very confident and comfortable using our tools. And then we also have our Growth team, which is responsible for growth across our entire product suite. So we work very closely with them on Claude Code and Cowork growth, and I know they also work with our other teams on CDP growth, so growth of the people who use the Claude API.

Speaker 2:
[15:43] So speaking of growth: Amal was just on the podcast, and he had this really interesting insight that most people haven't been sharing. There's always this sense that we'll need fewer PMs in the future. Why do we need PMs? Engineers can just ship. His take is that because engineers are moving so fast, PMs and designers are squeezed. There's less time to stay on top of everything that is happening; there's a feature shipping every day. So his take is he needs more PMs, because it's hard to keep up. What's your take there? Do you feel like there will be an increase in hiring of PMs? What do you think is going on with the PM profession long-term?

Speaker 1:
[16:15] I think all of the roles are merging. PMs are doing some engineering work, engineers are doing PM work, designers are PMing and also landing code. You can either hire a lot more engineers who have great product taste, or you can keep your engineering hiring the same and hire a lot more PMs to help guide some of their work. On our team, we're pretty focused on hiring engineers with great product taste. This way, we can reduce the amount of overhead for shipping any product. There are many engineers on our team who are fully able to go end to end, from seeing user feedback on Twitter through to shipping a product at the end of the week, with almost no product involvement. This, I think, is actually the most efficient way to ship something. So I think engineer and PM are kind of overlapping, and you will get a lot of benefit from having more of either. I think product taste is still a very rare skill to have, and we'll pretty much hire anyone who we feel has demonstrated this strongly.

Speaker 2:
[17:25] And your background was in engineering, right?

Speaker 1:
[17:27] Yeah, I was an engineer for many years, and I was in VC very briefly before joining Anthropic. And actually, almost all the PMs on our team have either been engineers or shipped code here on Claude Code. That's one of the things that I think helps build trust with the team and also just enables us to move a lot faster. And then, actually, our designers have also been front-end engineers before.

Speaker 2:
[17:54] Wow, because that's the big question. Like, there's definitely this merging that's happening. The Venn diagrams are combining. I think the big question for a lot of people is, if you're coming from engineering or product or design, which of those core skills is going to be most valuable? I can see that at Anthropic and on Claude Code, engineering is very valuable. I'm curious whether at other companies, if you have a design background, becoming a PM is more valuable, or just a PM PM.

Speaker 1:
[18:16] I still think it comes back to product taste. As code becomes much cheaper to write, the thing that becomes more valuable is deciding what to write. What is the right UX for this feature? What is the most delightful way that a user can experience it? We get tens of thousands of GitHub issues asking for every single thing under the sun, and it takes a lot of care and taste to figure out, okay, which of these is worth building, and what is the right way to build it? I think that skill set can come from any background, but I think that's the most important thing. I think the reason why an engineering background is particularly useful, at least for the next few months, is that if you have an engineering background, you have a better sense for how hard something should be to build, and that's often a factor in what you choose to build. So if something is very easy to build, then maybe instead of debating it, you just spend an hour doing it. But if something is harder to build and you know that upfront, then you know that, okay, this will cost a lot more for our team to get out the door. So it helps a bit with prioritization.

Speaker 2:
[19:27] You said "for the next few months." Is that just because the models will get so good, potentially in the next few months, that you may not even need to know that as much?

Speaker 1:
[19:37] I think the valued skill sets do change quite frequently, and so it's really hard to predict more than a few months out. So it's less a commentary on what shift I think will happen and more a commentary that I think large shifts will happen.

Speaker 2:
[19:53] So you're not saying that's when Mythos comes out and it will change everything, and we won't need to know anything about engineering.

Speaker 1:
[19:59] No, I'm just saying that every few months, it seems like there's a large increase in coding capability, which then changes which other roles are valuable. I think the most important thing is to have this first-principles thinking where you can figure out how the tech landscape is changing and what the team really needs from you, and to jump in and fix that hole. Because I think the work is becoming more amorphous, which means that a great PM is able to understand what all the gaps are, figure out which are the highest-priority ones, and then just figure out, okay, how do I learn that skill set, or what is the skill set that I have that I can apply to this challenge? So I think the current environment values people who are able to wear a lot of hats, are able to swap them, and are very low-ego about what work they do to help the team move faster.

Speaker 2:
[21:06] I love this answer. There's this question I've been asking people in your shoes, folks who are at the bleeding edge of what AI is capable of and building with the latest tools, which is: where will human brains continue to be useful and necessary for a while, until we get to superintelligence? What I'm hearing here is essentially picking the things to work on, knowing where the market's going, and figuring out what to prioritize. And then it's knowing if the thing you've built is good and right, and getting it out there in some early version, at least. Does that sound right? Is there anything else of just, like, where human brains will continue to be useful for at least the next few months?

Speaker 1:
[21:43] I think humans still provide a level of common sense that the models don't. And there's like a thousand moving pieces to any product launch. Some of them are very small, but there's always a lot that could potentially go wrong. I think the model doesn't always have a great sense of who all the stakeholders are, how they relate to each other, what their preferences are, what are the right venues to communicate with them, to keep them on board. I think a lot of this more tacit, common-sense EQ kind of knowledge is still very valuable. Of course, we want the models to get better at this, and I think they will be, but right now, I think there's still gaps.

Speaker 2:
[22:25] How do you just deal as a human going through so much constant change, just being on the inside of the tornado? Maybe it's calm there, but how do you stay on top of what's going on? How do you stay sane through all this craziness that we're moving through?

Speaker 1:
[22:39] I think our team is still people who lean into the chaos. So we try to face every challenge with a smile, because there's always so much going on. There are always so many risks and tricky situations that, if you get too stressed about anything, you'll burn out. So we really look for people who can look at a challenge and be like, that's going to be hard, but I'm excited to tackle it, and I'm going to do the best that I possibly can. And I know I won't be perfect, but I'll be able to sleep at night knowing that I did my best.

Speaker 2:
[23:11] That's an interesting answer to just like what skills will be important in this future. Because I forget who said this, maybe Ben Mann, that this is the most normal the world will ever be.

Speaker 1:
[23:23] Yeah, it definitely gets harder. Like, I feel like there are a lot of weeks where maybe Sunday night there's some P0, and then by Monday there's a P00, and by Monday afternoon there's a P000, and you're like, wow, I can't believe I was so worried about that P0 from Sunday. But I think you just have to acknowledge that there's only so much that you can do, that you need to sleep well so that you can make good decisions the next day, and just brutally prioritize where you spend your time and what's the most important thing to get right, and be okay letting things go. Like, there are products that we ship that aren't as polished as I wish they were. But our top goal is to help empower professional developers, and if a product isn't successful, as long as it's not blocking the core use case, it's okay, because we'll hear the feedback and we'll fix it in the next release. Launching a feature that is buggy is the kind of thing that would have kept me up at night, but it is something that I am now able to live with, knowing that, okay, we're going to get that quick feedback and we're going to fix it in the next release.

Speaker 2:
[24:31] What I'm imagining is that GIF, I think it's maybe from Pirates of the Caribbean, where a guy is walking down a flight of stairs on a ship, and the whole ship is being demolished around him, and he's so chill, just strolling down the staircase as everything's falling apart. And that's interesting, because everyone I've met from Anthropic is just so chill and so optimistic. Yeah, I think that's a really interesting insight: having this calmness and optimism versus just like, oh my god, everything's crazy and going nuts.

Speaker 1:
[25:00] Yeah. I think if you don't have it, you'll get pretty burnt out. I think we also tend to hire people who have been in the industry for a while and have experienced lots of ups and downs and have a good sense for what gives them energy and how to maintain their energy over time. I think that's helped us a lot.

Speaker 2:
[25:20] So interesting. Something that I wanted to ask about is, so there's these roles blurring, engineers are becoming PMs, everyone's dogs or cats, everyone's everyone. What do we lose in that world? Do we lose career ladders and clear career paths? Do we lose design consistency, code quality? There's probably some downsides. What are some things you find or just say, okay, that's something we're sacrificing for the greater good?

Speaker 1:
[25:42] We're sacrificing product consistency. Historically, when code was expensive to write, you would carefully plan out everything in your product suite: how every product relates to each other, what the use case for every single one is, how they integrate, and you would pretty much have one product for each use case. Now, with AI moving so quickly and with so many ideas that we need to test out, we do sometimes have features that overlap with each other. A lot of the time it's because there are two form factors that we love internally, and we want the external audience to tell us which one is better. What that means for someone who's a new user, though, is that a new user might not know, okay, what is the best path to accomplish X? There is more education we need to do to help people understand what the core features are and what the best practices are for using them. I think this is the cost of launching a lot of features. I think users also feel like it's hard to keep up with the latest. Usually, in traditional PM, you ship a feature every month or quarter, and so it's really easy for a user to understand, okay, I just need to check in on this once a month, and I'll learn some new things. If I ignore it for six months, it's fine, I don't feel like I'm missing out. I think with these agentic tools, not just Claude Code and Cowork but across the whole ecosystem, people feel this need to check Twitter every single day to see what the absolute latest thing is. And I think there's more we can do to help people feel less like they're on this ever-faster treadmill. I would love people to feel like they can just open these tools, the tools will educate them or teach them what they want to know, and they can feel more brought along.

Speaker 2:
[27:48] Yeah, I saw you launch this really interesting feature the other day, I think it's slash power up, where it basically walks you through all the cool ways and all the best practices to use Claude Code. Is that kind of along those lines?

Speaker 1:
[27:57] Yeah, exactly. So in the past, we didn't actually want to do something like power up, because we felt like the product should be intuitive enough that you don't need to go through any tutorial. Over time, we've just realized that there are so many features, and there's so much demand for a built-in onboarding experience, that we diverged a bit from our original principle of no onboarding flow. We added this because so many users wanted to know: there are a hundred features, what are the ten that I absolutely need to use? And so we put that together.

Speaker 2:
[28:32] Yeah, it's such a bizarre world. So Anthropic has been really successful with B2B enterprises, where traditionally you don't launch a bunch of stuff, you maybe have a quarterly release, and it's the opposite of "every day we've got something new." So, maybe following that thread: the run Anthropic has been on is just otherworldly. Anthropic was way behind when it started. Amol shared this: it was one of the least-funded companies, didn't have distribution, wasn't first to market, OpenAI was way ahead, and it seemed like there was no way Anthropic had any chance to compete significantly long-term. Now it's just killing it, beating the biggest companies, with growth to something like $11 billion in ARR, and by the time this comes out, it'll probably be even higher. Just being on the inside, what are some ingredients that have allowed Anthropic to be this successful and come from behind like this?

Speaker 1:
[29:29] The two most important things are one, this unifying mission. It's hard to overstate how important this is. We hire people who care most about bringing safe AGI to all of humanity. This is actually something that we reference frequently in our decisions about what our entire product org should focus on shipping. Because we put this mission above any individual product line, we're able to make very fast decisions that cut across the entire org and execute on them in a unified way. I think this is something that I've never seen at a company of our scale.

Speaker 2:
[30:12] Just to make sure that's clear, so essentially having the number one mission is safety, alignment, making sure AI is good for the world. You're saying just having that as a clear mission makes decisions a lot easier to make.

Speaker 1:
[30:24] If there's two competing priorities, we'll talk about which one is more important for Anthropic's mission. It makes it a lot easier to decide which of the two we prioritize, and then everyone will stand behind the one that we decide. Sometimes that means that like, hey, we want to ship something on Claude Code, but this other thing is more important, and so we de-prioritize shipping this, and we just wait until later.

Speaker 2:
[30:47] What's really interesting about that is that it explains, I think, versus another company, maybe rhymes with Bopen AI, that did a lot of different things. And what I'm hearing here essentially is: okay, we're not going to launch a social network, we're not going to launch a feed of interesting information, because it's not aligned to this mission. And that has kept Anthropic focused, which seems to be a core ingredient of the success.

Speaker 1:
[31:10] Well, when I think about mission, I think about putting Anthropic's goals ahead of any individual org or any individual product. So for me, the second thing that we're very good at is focus. Mission, to me, is slightly different. Mission means that teams are willing to make sacrifices that hurt their own goals and their own KRs in service of Anthropic's goals and Anthropic's KRs. And people are very happy to make those trade-offs. An extreme example is: if Claude Code failed but Anthropic succeeded, I would be extremely happy. And the whole team is very willing to make decisions that follow that chain of thought.

Speaker 2:
[31:58] I don't know if you can talk about this in depth, but do you feel like the OpenClaw decision is a part of this? Just like, okay, this is not furthering the mission of Anthropic. We need to stop this because it's not working in the way we want it to work.

Speaker 1:
[32:10] I think one of the most important things for Anthropic is to grow the number of users that we're able to reach. One of the ways that we're able to do this is with the Claude subscriptions with our first-party products. And so we just very much want to double down on that, but that does come at the expense of third-party products sometimes.

Speaker 2:
[32:28] So we've been talking about Claude, Cowork, all these things, something that I want to make sure people get, and I'm curious just how you use these tools. So there's Claude Code, there's Claude Desktop slash Web, there's Cowork. What's the best way to understand when to use which? When do you use each of these three?

Speaker 1:
[32:44] So I tend to use Claude Code in the terminal when I'm just kicking off a one-off coding task, and I want all of the latest features. The CLI is our initial product surface, and it's also the one where our features often land first. And so it's the most powerful of all the tools. So that's what I tend to use when I'm just trying to kick off one or maybe a handful of tasks at a time. I think desktop really shines when you're doing something that requires front-end work. And so one thing that I love to do is to use our preview feature. So if I'm building a web app, I'll often use Claude Code in desktop. I'll have the preview pane open on the right-hand side so that I can actually see the web app that I'm making in real time as I'm chatting with Claude. It's also really great for people who want something a bit more graphical. A terminal can feel very unfamiliar to someone who is non-technical. You get a bunch of these scary pop-ups on your machine, and you can't click around the way that you're used to in pretty much every other product that you use. So there's a lot of people who just don't feel comfortable in terminal. If that's you, I would highly recommend checking out Claude Code on desktop. Desktop is also great for getting an at-a-glance view of everything that's happening. So you can see your CLI terminal sessions in desktop. You can see your other desktop sessions. You can see your sessions that you kicked off on web and mobile. So it's a one-stop control plane where you can see all of your tasks. I think the benefit of web and mobile is that it's really great for kicking things off on the go. So CLI and desktop both require you to be on your local laptop. And this is constraining because sometimes you're out and about. You're like touching grass. You're going on a walk. And you don't have your laptop open and you don't... I can't count the number of people who I've seen holding their laptop open tethered to their phone while they're outside. 
And this just means we were missing a product that solves that need. And so what mobile lets you do is kick off all these tasks on the go, so that you don't need to bring your laptop everywhere and make sure that your laptop is open wherever you are.

Speaker 2:
[34:57] I love that. I've seen people on planes. It's just such a meme now. Just, I need to let this agent finish, I can't shut this down.

Speaker 1:
[35:04] And then I think for Cowork, the role that this fills is there's a lot of work that everyone does where the output isn't code. So whether that's like getting to Slack zero or Inbox zero, or whether that's creating a slide deck for some customer meeting that's coming up, or whether that's writing a quick doc on what the goals of a feature or what the launch plan for a feature is. All of these tasks produce outputs that are non-code, and Cowork is best positioned for that. So the way that I split the products in my mind is, if I'm building something where the output is code, I'll use Claude Code or Desktop or Claude Code on mobile. If the output is anything that's not code, I'll use Cowork for it.

Speaker 2:
[35:48] People are just sleeping on the success that Cowork is having. It's just growing incredibly fast. And I think people still don't understand maybe what it's for. And so what if you give us a couple of use cases just in your work as a PM? What are some really interesting, maybe unexpected ways to use Cowork to save you time, get more work done?

Speaker 1:
[36:08] If you're getting started on Cowork, the first thing that you really need to do is connect all the data sources that are relevant to your role. Because Cowork can only do a great job if it has access to all the context that it needs to be able to curate the output for you. What that means for me is I connected to my Google Calendar, I connected to my Slack, to my Gmail, to my Google Drive, so that it has the flexibility to find relevant context, to ask questions, to pull in threads, and this substantially improves the quality of the result. The kinds of things I use it for are, like last night, we have this Code with Claude conference coming up, and there's a few talks that I'm giving there. And one of the talks that we're doing talks about the transition of Claude Code from an assistant to a full-on agent. And one of the things that I wanted to do in this talk was to showcase all of the products that we've been shipping that enable this transition, and also to figure out, okay, what are the success stories that people have had internally that we can use as demos? And so I have my Google Drive connected, I have Slack connected. Alex, who's our product marketer, put together a draft of what the points that he thinks we should cover are. And so I just fed this all into Cowork. I told Cowork the narrative that I wanted to tell, and it actually just worked for an hour. It walked through Twitter to see what we launched, it looked through our Evergreen launch room, it looked in our Claude Code Announce channel, which is where our team posts demos of how they've been getting the most value out of Claude Code. And it synthesized all this together to this 20-page deck that I woke up to this morning, and I read through it and it was pretty good. There were a few tweaks, so I did have to give it a round of feedback. I like my slides to have extremely minimal words, and it was a little too wordy. But it was far faster than what I would be able to produce. 
And because Cowork has access to our whole design system, it actually looks like an Anthropic designer put it together. When you visually see it, you're like, oh, this is incredibly polished. So these are the kinds of things that are so much faster. Making this slide deck would have taken me hours. But instead, it like turns out a draft that is actually quite good. So I could focus on making sure that the demos are amazing, that we plug into it.

Speaker 2:
[38:45] This sounds like a dream come true to PMs that putting decks together is so annoying.

Speaker 1:
[38:49] It's so slow.

Speaker 2:
[38:51] And I love that people will see this deck whenever you present it. This will be out in the world. Obviously, it's not the one-shotted version, you've iterated on it. So, just to help people try this for themselves: step one is connect their... what did you say? Slack. What else do you suggest they connect?

Speaker 1:
[39:07] Slack, Google Calendar, Gmail, G Drive. You should connect your communications tools and where you store your source of truth data for what your team cares about, what you care about and what you're working on.

Speaker 2:
[39:21] Okay. And then what was the prompt roughly that you put in there to generate this deck?

Speaker 1:
[39:25] So I just wrote: make me a slide deck for the Code with Claude conference. This is what our PMM suggested it should cover. This is one that I made manually that I don't like, but I linked it. Can you start by creating a proposed outline with details? Also make sure it doesn't overlap too much with the keynote talk, which is more important. And then Claude read a bunch of the links that I sent it and created a proposed outline. So then I read through its proposal and all the different ideas it had generated for what we could cover, and I just made a decision on what I wanted to actually be in the final deck. I think this is an example of what the role of the PM still is today. Claude is a great brainstorming partner. It's able to synthesize a massive amount of information really quickly and present all of the possibilities to you. But the role of the PM is still to make the end decision: okay, what should belong in the final product? For this, what I ended up deciding was that I wanted the talk to cover the progression from making local tasks successful, to making every PR green, to helping engineers land more PRs. And for each of these, which demo would be the most compelling? And then after this decision about the outline, Cowork just went off for a few hours and built the whole slide deck.

Speaker 2:
[40:50] This is so awesome. What an awesome part of the job to not have to do anymore. And it feels like you're talking to essentially a deck designer who also has actual knowledge about what you've worked on, and can make the content what you want it to be, not just make it look really nice. How did you do the design system piece? How does that work? How does it know the design system of Anthropic?

Speaker 1:
[41:15] So what I did for this is, we actually already have a standardized deck that we use across all of our external engagements. And so I just gave Claude access to that. It's able to see what colors we use, what fonts we use, the different kinds of slide formats that are possible. And so it has like 20 of these example slides.

Speaker 2:
[41:36] Got it. So you, like, upload it: here's our template, work from this. Yeah.

Speaker 1:
[41:41] You can also connect, like, your Figma MCP if you have your slide formats saved there, and it can pull that in.

Speaker 2:
[41:48] Along those lines, something I'm always curious about is what's in your stack of tools as a PM at Anthropic. Obviously, Claude Code and Cowork and all the Anthropic tools. What else are you using? You mentioned Slack. Is there anything else?

Speaker 1:
[42:02] So my stack is pretty heavily Claude Code, Cowork and Slack. Anthropic largely runs on Slack. I feel like it's like the core OS of our company. And day-to-day, like a lot of, I would say maybe 30% of my time is pushing the boundaries of what Cowork and Claude Code can do so that I have a very strong sense of what we're not good at. And I spend a lot of time talking with the model to understand why it makes mistakes that it does. We actually have a lot of internal tools that we make. I think one of the things that Claude Code has really unlocked for our entire company is it really lowers the barrier to making any custom app that you want. And so we've seen this surge in personalized work software that people are building for custom use cases instead of using tools that don't perfectly fit the use case.

Speaker 2:
[43:06] I got to hear more. What are some examples? What are things you built, other people built that are really popular and useful?

Speaker 1:
[43:12] One of the sales folks on Claude Code realized he was making these repetitive decks over and over and over again. And so he built this web app with examples of the core Claude Code decks that we know work well, like a 101, a 201, and mastering Claude Code. And then he has a way to input specific customer context that pulls from Salesforce, from Gong, from other notes, so that we can customize the decks for specific customers. And so it'll pull out things like, okay, this customer is using Bedrock or Code for Enterprise or Console, which affects what features are available to them. It will pull out things like, okay, this customer is concerned about the code review stage of the SDLC, and so we'll add a slide about our code review features there. It will pull out things like, okay, this customer needs to be HIPAA compliant or needs XYZ security controls, and so we'll make sure to add a slide or two in their deck about that. And then, for example, if this is a customer that's on Vertex or Bedrock and doesn't want to use Code for Enterprise, then we'll just take out the slides that are Code for Enterprise-only features. Normally this is manual work that could take 20, 30 minutes, and so people will either spend that time doing it or they'll just decide not to do it and use the general deck. With this, it takes a few seconds and you get a tailored deck.
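As a rough illustration of the pattern described here, a sketch like the following captures the idea: each optional slide declares a predicate over the customer record, and the generator keeps only the slides that apply. This is hypothetical, not the actual internal tool; the field names and slide IDs are made up for the example.

```python
# Hypothetical sketch of deck tailoring from customer context.
# Field names ("platform", "concerns", "needs_hipaa") and slide IDs
# are invented for illustration.
from typing import Callable

Customer = dict  # e.g. a record pulled from a CRM

# Each optional slide pairs a slide ID with a rule for when it applies.
OPTIONAL_SLIDES: list[tuple[str, Callable[[Customer], bool]]] = [
    ("code-review-features", lambda c: "code review" in c.get("concerns", [])),
    ("hipaa-compliance", lambda c: c.get("needs_hipaa", False)),
    # Drop enterprise-only slides for customers on third-party platforms.
    ("code-for-enterprise", lambda c: c.get("platform") not in ("vertex", "bedrock")),
]

def build_deck(customer: Customer) -> list[str]:
    deck = ["intro", "claude-code-101"]  # base slides everyone gets
    deck += [name for name, applies in OPTIONAL_SLIDES if applies(customer)]
    return deck

customer = {
    "platform": "bedrock",
    "concerns": ["code review"],
    "needs_hipaa": True,
}
print(build_deck(customer))
# -> ['intro', 'claude-code-101', 'code-review-features', 'hipaa-compliance']
```

The point of the design is that the "20 to 30 minutes of manual work" becomes a set of declarative rules evaluated in seconds against whatever context the CRM provides.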

Speaker 2:
[44:42] What's interesting about it is Slack is the tool that nobody's trying to create their own version of. Slack just continues to win. And the way you describe it, it's kind of the OS of so many companies. It's so interesting. People talk about Salesforce and say, we don't need SaaS software anymore, we're going to build our own. Slack is a durable tool that nobody wants to try to compete with and build a better version of.

Speaker 1:
[45:04] I think it's pretty important communications infrastructure. And I think they do the core task of helping everyone get real-time updates incredibly well.

Speaker 2:
[45:13] Yeah, like people hate on Slack, but it's really great at what it's trying to do. And like the most cutting-edge teams are hooked on it. So interesting.

Speaker 1:
[45:21] Yeah. And I also love how easy they've made it to customize. And so we love making Slack bots. This kind of hackability means that we're able to integrate with Slack the way that we want to. So, really appreciate Slack's work on that.

Speaker 2:
[45:38] Time to buy some CRM stock. I am so excited to tell you about this season's supporting sponsor, Vanta. Vanta helps over 15,000 companies like Cursor, Ramp, Duolingo, Snowflake and Atlassian earn and prove trust with their customers. Teams are building and shipping products faster than ever thanks to AI. But as a result, the amount of risk being introduced into your product and your business is higher than it's ever been. Every security leader that I talk to is feeling the increasing weight of protecting their organization, their business, and not to mention their customer data. Because things are moving so fast, they are constantly reacting, having to guess at priorities, and having to make do with outdated solutions. Vanta automates compliance and risk management with over 35 security and privacy frameworks, including SOC 2, ISO 27001 and HIPAA. This helps companies get compliant fast and stay compliant. More than ever before, trust has the power to make or break your business. Learn more at vanta.com/lenny. And as a listener of this podcast, you get $1,000 off Vanta. That's vanta.com/lenny.

Okay, so you talked about all these different teams and how they use Claude Code and Cowork to operate. Which teams, other than engineering, do you find use them most? I imagine engineering is the biggest token spender, but if not, that'd be really interesting. What's the second-place function right now for tokens?

Speaker 1:
[47:04] Applied AI is amazing at pushing the boundaries of what Claude Code and Cowork can do. A lot of our Applied AI team spends time with our customers helping them adopt our API. And so sometimes our Applied AI team will, for example, make prototypes on behalf of these customers, which Claude Code makes so much faster than it used to be. They also have the dual goal of needing to manage a lot of customer comms, a lot of customer inbound and historical contacts, call notes. And so they're both extremely heavy on Cowork and on Claude Code.

Speaker 2:
[47:42] And just to understand Applied AI, is that like a forward-deployed engineering sort of role? How would most people describe what the Applied AI team is doing?

Speaker 1:
[47:51] Yeah, it's helping our customers adopt the latest API and model features across their company, both for powering their company's products and also for internal acceleration.

Speaker 2:
[48:05] Got it. So it's like customer success, go-to-market-y, kind of like a forward-deployed engineering sort of thing.

Speaker 1:
[48:10] Exactly. It's like a very technical go-to-market person.

Speaker 2:
[48:13] Got it. Okay, awesome. So you're saying that might be the second org that uses the most tokens.

Speaker 1:
[48:19] Yeah. And then we also see them pushing the boundaries of what Cowork can do. For example, a lot of these folks cover multiple customers and can have five to ten customer engagements on a high day. And so what they often use Cowork to do is, the night before, they'll ask it to summarize: okay, what are all my customer meetings coming up the next day? What are all the things this customer has asked me for? What's top of mind for them? What are the action items from past meetings? And Cowork will just put together this dossier, this brief of what they should be aware of going into the next meeting. Cowork can also research answers. So if a customer asked, okay, when is feature X going to launch, Cowork can help the Applied AI person research through Slack to get the latest ETA and add that to the notes, so that during the customer call the Applied AI person has the absolute latest. And these are just workflows that people are building for themselves and sharing with other people on their team.

Speaker 2:
[49:25] So cool. Something that comes up a lot recently is this trend of token spend exceeding people's salaries, where people just use AI and it costs more than how much they're making. Are there any numbers floating around on how much engineers or PMs spend on tokens, say, a month or a day, anything like that?

Speaker 1:
[49:50] It is clear to us that as the models get better, people delegate far more tasks to it and they spend a lot more hours in tools like Claude Code and Cowork. And so we do see the token cost per engineer or like per any knowledge worker increase every time that there is a model jump or like a substantial product improvement. I think it's still much lower than what the average engineer salary is, but we see the percentage increasing over time.

Speaker 2:
[50:21] It's so interesting. We talked about how you have access to the most cutting-edge models, another advantage of working at Anthropic. I believe you guys have basically unlimited tokens. You can use as much as you want. Is that right?

Speaker 1:
[50:32] We can use a lot of tokens. Some people do run into limits.

Speaker 2:
[50:36] Okay, there's a limit. Boris, shut it down. It's so interesting how many advantages come from having the most advanced model. It's such an interesting like flywheel that starts to kick in.

Speaker 1:
[50:49] I think we also believe a lot in empowering our internal teams to build as fast as possible. We also trust that everyone understands how much capacity serving these models truly costs, and we trust our team to use the tokens responsibly. So it's very frowned upon to waste tokens, but we do trust individuals to make that judgment call.

Speaker 2:
[51:14] Awesome. Coming back to the PM role, we talked a little bit about this, but I think this will be really interesting for people to hear. Just what I want to understand is what do you think are the emerging skills that PMs need to develop slash you most look for, or AI companies most look for when they're hiring PMs these days?

Speaker 1:
[51:35] I think the hardest skill is being able to define what the product should look like a month from now. I think there's a lot of ambiguity in what models are capable of in that timeline and how user behavior will change. But I think there are patterns that the best PMs can see based on how users are pushing against the limits of the existing product. And the best PMs can sense that, can set a direction, and can steadily execute towards it, and change the path if the model capabilities are much better or worse than what they'd originally expected. I think it is very hard to be the right amount of AGI-pilled. Everyone can see this future where the models are extremely smart and can do almost everything, in which case you actually don't need that complicated a product. You can actually just have a text box again, where you tell the model what you want. And it's so smart that it can add any tool or any integration that it needs to get the job done. It knows when it's uncertain. It can ask clarifying questions. It's very easy to build the product for the super-strong AGI model. The hard thing is figuring out, for the current model, how do you elicit the maximum capability? How do you help users get onto the golden path? How do you guide users to interact with the model's strengths and patch its weaknesses? This skill is pretty rare.

Speaker 2:
[53:19] How do you build that skill? Is it just using each model, basically understanding its limits, and, as you mentioned, having taste for what the model is capable of, what it's great and not great at, and where it's changed?

Speaker 1:
[53:31] I think it's spending a ton of time talking to and using the model. One of the things I really like to do is ask the model to introspect on its own behaviors. Sometimes when I notice that the model does something unexpected, for example, there are situations where the model will make a front-end change and run tests but not actually use the UI, it's actually pretty useful to ask the model to reflect on why it did this. Sometimes it'll say, hey, there was something confusing in the system prompt, or I didn't realize that the front-end verification was part of this task, or, hey, I delegated the verification to this subagent, and the subagent didn't do the test, and I didn't check its work. A lot of the time, just being very curious about why the model made the decision it did will show you what misled it, so that you can fix the harness in order to close this gap.

The other thing that helps is to figure out who are the users you trust the most to give you accurate feedback about the model. Usually there's a handful of people who are much better than others at articulating what makes a specific model or model-harness combination good. A lot of people will give you feedback, but not everyone's feedback is as qualified. And so finding a group of, like, five people you trust is really important for getting very fast feedback.

I think the third thing that is useful, but not everyone loves doing, is building evals. You don't need to build hundreds of evals for them to be useful. Just building 10 great evals is important for helping the team quantify what the goal is, what their progress towards it is, and what they're missing. And so I think evals are this underappreciated thing that more PMs and more engineers should be working on.
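A minimal sketch of the "10 great evals" idea: each eval pairs a prompt with a pass/fail check, and a small harness reports how many pass. The `run_model` function here is a stub invented so the sketch is self-contained and runnable; in practice you would swap in a real model call, and the checks would verify real outputs rather than canned strings.

```python
# Minimal eval-harness sketch. Everything here is illustrative:
# `run_model` is a stand-in for an actual model API call.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Eval:
    name: str
    prompt: str
    check: Callable[[str], bool]  # returns True if the output passes

def run_model(prompt: str) -> str:
    # Placeholder model: swap in a real client call here.
    if "call sites" in prompt:
        return "Updated all 20 call sites and re-ran the tests."
    return "Done."

def run_evals(evals: list[Eval]) -> dict[str, bool]:
    # Run each eval once and record pass/fail per eval name.
    return {e.name: e.check(run_model(e.prompt)) for e in evals}

evals = [
    Eval(
        name="refactor-completeness",
        prompt="Rename fetchUser to getUser across all 20 call sites.",
        check=lambda out: "20 call sites" in out,
    ),
    Eval(
        name="runs-tests",
        prompt="Rename fetchUser to getUser across all 20 call sites.",
        check=lambda out: "tests" in out.lower(),
    ),
]

results = run_evals(evals)
print(f"{sum(results.values())}/{len(results)} evals passed")  # prints "2/2 evals passed"
```

Even a harness this small gives the team a shared, quantified definition of "done" for a behavior, which is the point being made about product definition.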

Speaker 2:
[55:33] We've covered evals a bunch. There's this trend that writing evals is the future of product management, because essentially it's defining what success looks like. Okay, cool, let me actually concretely define it, and then we'll know. How much of your time would you say you're spending writing evals?

Speaker 1:
[55:46] I think the importance of evals varies a bit based on the feature that you're working on and or like what the problem you're trying to solve is. So there are a lot of folks on our team who do spend a lot of time working on evals. We have a small pod of folks who collaborate very closely with research to more precisely understand our Claude Code behaviors and what the largest areas of improvement are and trying to measure those pretty concretely. I personally jump into evals when there's a feature that I think needs a bit more product definition and often the output of this is, okay, here are like five evals that I made, this is how you run them, these are the ones that succeed and these are the ones that don't, and this is like the prompt that I've used to increase the success rate. It varies a lot though based on the exact feature. Not every feature needs it, but I think features such as memory benefit a lot from it.

Speaker 2:
[56:46] This point you made about people being very good at evaluating models, it's interesting. It's almost like a human level of just like, okay, they understand where it's spiking or it's maybe lacking. Is there anyone specific that you want to shout out that's very good at this?

Speaker 1:
[57:00] Two people who I think are incredible at this are, one, Amanda, who molds Claude's character. It's such a hard role because the task is so ambiguous. Even coding is easier, because you can verify the success, whereas crafting the character requires a very strong sense of conviction in who Claude should be. I think she has an incredible ability to not only mold the character, but also to articulate what the goals are, what's working about the character, and what's not. The other group of people who I really trust is the Claude Code team. We often have team lunches, and whenever there's a new model we're testing, one of the fastest ways for us to get feedback is to go to every single person at these lunches and just be like, hey, what is your vibe on the model? Oftentimes we'll get feedback like, okay, this model is not fully explaining its thinking, it's too abrupt. Or, hey, this model loves writing a ton of memories, but we're not sure if the memories are high quality or not. Or some people will notice, okay, this model loves to test itself, which is great, or this model isn't testing itself enough. So that informs what data we look at to verify: okay, is this a larger pattern? We have a ton of data, but it is very hard to extract insights. And so the feedback from this group helps inform what hypotheses we want to test, and then we're able to extract data to test them.

Speaker 2:
[58:45] This point you made about the character of Claude: I had co-founder Ben Mann on the podcast, and he talked about how the character, the constitution of Claude, is such an important part of Claude. And I didn't realize until afterwards that, with OpenClaw actually, one of the reasons people are sad is because Claude's personality is so good and fun and interesting, unlike other models. The way he put it is, the personality is what makes Claude so good at so many things. It feels like this trivial side thing, okay, it's going to be funny and interesting and talk in a fun way, but it's so core to the success of Claude. Is there anything you'd share about what people may not understand about why the character, as you described it, and the personality are so key?

Speaker 1:
[59:34] When you reflect on everyone you've worked with, there's just some people where you're like, I really like their energy, I really like their vibe. When people think about Claude and Claude Code, this is one of the things that people bring up the most: they really love that Claude is lighthearted and fun, but also extremely competent at your task. People really like that Claude is low-ego, so if you tell it, hey, you did this thing wrong, it's truly sorry. It's like, oh, shoot, thanks for telling me, let me fix it, let's work together. It's also very positive. So if you're feeling like, oh, this is an insurmountable task, I don't know how to get started, Claude is like, okay, it's okay. These are the steps that I think we should take. Do you want me to get started on it for you? I think part of what makes a great coworker is this positivity, this bias towards action, this ability to give you earnest feedback, not just agreeing with every single thing that you say. So we try to imbue this into Claude, because we think it makes it a lot more enjoyable to work with.

Speaker 2:
[60:45] There's something I want to come back to. You talked about how when new models come out, you often have to revisit things you've built. That's so interesting, and maybe so frustrating, just like, oh, god damn it, we shipped this thing and now I have to rethink it. Talk about how often you have to come back with a new model and say, okay, we have to redo this product that we launched a few months ago.

Speaker 1:
[61:03] A lot of the changes that we make with a new model are removing features that are no longer needed. A lot of times we add features to the product as a crutch for the model, because it's not naturally doing the thing itself. The classic example of this is the to-do list. When we first launched Claude Code, people would ask it to do these large refactors, and Claude Code would say, okay, cool, I need to change these 20 call sites, and it would go and change five of them and then stop. Then we were like, okay, how do we force it to remember to get every single one of these 20? So Sid on our team was like, okay, what if we just think about what a human would do? A human would make a list of everything they need to change, similar to how in VS Code you would look up all the call sites, and they'd show up as a list on the left side, and you would go through them one by one and replace all. How do we give that kind of tool to Claude? And so he added the to-do list. And we found that with that, Claude was actually able to fix all 20 call sites. But then with Opus 4 and later models, we realized that we didn't need to force it to use the to-do list. It would naturally use it itself. For the earlier models, we had to keep reminding it, hey, did you finish everything on the to-do list? You can't finish until you're done with everything on the to-do list. And for the later models, without prompting, it just naturally thinks to do everything on the to-do list. These days, the to-do list is still nice to have as a user, because you can more clearly see what Claude is working on. But honestly, it's such a de-emphasized part of the product right now that the model may use it or may not use it. It's really not necessary for it to make thorough changes anymore.

Speaker 2:
[62:44] I forget who said this on the podcast, that the model will eat your harness for breakfast. What I'm hearing here is essentially that you remove things over time that you had to add on top of the model when it wasn't operating the way you wanted. As the models get smarter, it becomes simpler and simpler for them to just do the thing you want.

Speaker 1:
[63:03] Yeah. We can remove a lot of prompting interventions every time the model gets smarter. We actually do this every time we launch a model. We read through the entire system prompt, and we reflect on, okay, for each of these sections, does the model really need this reminder anymore? If not, we'll remove it. The most exciting thing that new models unlock, though, is entirely new features. There are a lot of features that we've been testing out with prior models where the accuracy wasn't high enough for us to want to launch them. One example of this is code review. We tried to build a code review product a few times, and we've launched simpler versions of code review in the past, like the code review slash command. It was only with the most recent models that we felt like, okay, this code review is so good that our engineering team relies on it to pass before we merge PRs. We've always dreamed of Claude being a reliable code reviewer that we can be confident catches the majority of bugs. And it was only with Opus 4.5 and 4.6, and Sonnet 4.6, that we felt like, okay, we are now able to run multiple code review agents simultaneously to traverse the entirety of the codebase and synthesize a set of real issues that an engineer needs to address before merge. So this is a new capability that the newest models have unlocked.
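The fan-out-and-synthesize shape described here (multiple review agents traversing the codebase in parallel, then one deduplicated set of issues) can be sketched as below. `review_chunk` is a stand-in for a real model call; nothing here reflects Anthropic's actual code review implementation.

```python
# Illustrative sketch: split files across several review agents running in
# parallel, then merge their findings into one deduplicated issue set.
from concurrent.futures import ThreadPoolExecutor


def review_chunk(files: list[str]) -> set[str]:
    # Placeholder for a model call: a real implementation would send
    # `files` to a model and parse structured issues from its response.
    return {f"possible issue in {f}" for f in files if "buggy" in f}


def parallel_review(all_files: list[str], agents: int = 4) -> set[str]:
    # Stripe the codebase across agents, run them concurrently, and
    # synthesize a single set of issues to address before merge.
    chunks = [all_files[i::agents] for i in range(agents)]
    with ThreadPoolExecutor(max_workers=agents) as pool:
        results = pool.map(review_chunk, chunks)
    issues: set[str] = set()
    for found in results:
        issues |= found
    return issues
```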

Speaker 2:
[64:39] This is another trend that is very common on this podcast: build something that will only become possible in the next six months, stay right at the edge of what's barely working, and then the models will catch up, and it'll be an amazing product, and you'll be ahead of everyone.

Speaker 1:
[64:52] Yeah, exactly. It's pretty important to build products that don't necessarily work yet, so that you know, okay, what is missing for this product to work? And then with the newest model, you can just swap it into the prototype you've already made and see, okay, does this new model close that gap?

Speaker 2:
[65:12] How much are you able to speak to where things are going with Claude Code and Cowork, the vision of it? I imagine you don't want to give away too much about the goal, but it feels like there are all these awesome features being added on top: dispatch, control from your phone, the mobile app, all these things. What's a way to understand the long-term vision for all of this?

Speaker 1:
[65:32] We think about this in terms of building blocks. For both Claude Code and Cowork, the core building block is making individual tasks successful. You want to produce some output, you give it a clear prompt describing the task. Is it able to consistently produce acceptable output that you're able to either merge or share with your colleagues or an external audience? So the task is the core building block. As the models get smarter, the task success rate gets a lot higher. And then we see people moving towards doing multiple tasks at the same time. Multi-Clauding was this big thing towards the end of 2025, and it's only increased since then. And so we see this as, okay, great, one task works, and now you can do six tasks at a time. As the models get even smarter, the way that we're extrapolating this is, okay, next, maybe you're going to run 50 Claudes at a time, or hundreds of Claudes at a time. And so what is the infrastructure we need to build to enable that? At that point, you're probably not going to run everything locally on your machine anymore. There's just not enough RAM to do it. And so we're thinking about how we make it easier for you to manage all of these. They'll probably run remotely. How do we build the interface so that you as a human know which tasks you need to look into? How do we make sure that the agent is fully verifying its work, so that when you look at a task and it says it's done, you can very quickly verify and fully trust that it's done to your spec? And how do we make sure that this process is self-improving, so that when you do see a task that isn't done to your liking, you can give it feedback, and the model will know to incorporate that feedback in every future run so it never makes that mistake again? So this is the progression that we're bringing our users along for.
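The interface problem described here, surfacing only the tasks a human actually needs to look at out of dozens or hundreds of running agents, can be sketched in a few lines. This is a hypothetical illustration of the idea, with made-up field names, not any real Claude Code or Cowork data model:

```python
# Illustrative sketch: given many dispatched tasks, show the human only
# the ones that still need attention (running, or done but unverified).
from dataclasses import dataclass


@dataclass
class Task:
    prompt: str
    done: bool = False
    verified: bool = False  # agent self-checked its output against the spec


def needs_attention(tasks: list[Task]) -> list[Task]:
    # A task drops off the human's queue only when it is both done and
    # self-verified; everything else still needs a look.
    return [t for t in tasks if not (t.done and t.verified)]
```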

Speaker 2:
[67:23] There are a lot of people listening, a lot of product managers, maybe founders, a lot of other cross-functional folks. There's a lot of worry about their roles, the future of their careers. What advice would you have for people to not just survive this transition to a very AI-driven world, but to be really successful, to thrive in this future? What are the things people need to hear and need to be doing?

Speaker 1:
[67:49] I think AI gives everybody a ton more leverage than they used to have. So I would push you towards this: anytime you realize that you're doing some manual task multiple times, think about how you can use Claude Code, Cowork, or other AI tools to automate that for you. Most people have creative parts of their job that they absolutely love, and then tedious parts of their job that they really hate doing. I think the beauty of AI is that it can do those tedious parts for you. It can learn from every time that you've done that manual task, generalize, and then run it automatically, so that you can focus on the creative parts, and that means you can do a lot more than you used to be able to do. So my immediate push for people is: figure out the repetitive parts that you can pass to Claude, iterate on those automations until the success rate is very high, and then focus on, okay, what more can you be doing for your team, for your product, for your company that people haven't had the bandwidth to pick up so far? Or what is that pet project that you always thought the company should do that you've never had bandwidth for? If AI can take care of the grunt work, then you have this extra 20 percent of time now that you might not have had before. So my push is to lean into these tools, hand off the work that you're not excited to do, figure out how it can accelerate you, and then, as a result, you'll be able to do so much more.

Speaker 2:
[69:19] Something core to what you just shared, which I fully agree with, is: find problems to solve with AI. There's all this potential in what these tools can do, and for a lot of people, the hardest part is, what should I actually do? What you're saying here is: pay attention to things that you're doing constantly that you can automate, and pay attention to ideas that have been floating around that you haven't had time to do. Basically, solve a problem for yourself is the core advice there.

Speaker 1:
[69:45] Exactly. I would also push listeners towards focusing on bringing your automations from, okay, this is a cool concept, to, hey, this actually works 100% of the time. Sometimes I see users trying to automate something, getting it to 90-95% accuracy, and then giving up on it. But if an automation doesn't work 100% of the time, it's not really an automation. And that last 5-10% does take more time. Also, building the automation is often a lot slower than doing the task yourself. I would encourage listeners to put in that time: scope some automation that you really want to get to 100%, put in the elbow grease to teach Claude your preferences and give it feedback so that it can improve its skill and get to that 100%, and then you'll really be able to rely on it. There's just not much value in a 95%-there automation.
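The feedback loop that closes the last 5-10%, where each correction is remembered and applied on every future run, can be illustrated with the email-triage example that comes up next in the conversation. This is a toy sketch of the idea, not a real Claude or Cowork API:

```python
# Toy sketch of a self-improving automation: each human correction is
# stored as a learned exception that every future run applies.
class EmailTriage:
    def __init__(self) -> None:
        self.never_spam: set[str] = set()  # senders the user corrected

    def classify(self, sender: str, looks_spammy: bool) -> str:
        if sender in self.never_spam:
            return "inbox"  # learned exception overrides the heuristic
        return "spammy" if looks_spammy else "inbox"

    def feedback(self, sender: str) -> None:
        # "You misfiled this one" -> never make that mistake again.
        self.never_spam.add(sender)
```

The point is that without the `feedback` path, the automation stalls at whatever accuracy its heuristic starts with; with it, every miss permanently shrinks the error set.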

Speaker 2:
[70:44] I am super guilty of that. This is really good advice for me.

Speaker 1:
[70:48] I am guilty of this too. I've been teaching Cowork to try to get me to inbox zero in Gmail, and it has been very time-consuming, and it is definitely not there yet, as you can probably tell.

Speaker 2:
[71:02] Yeah. Funny enough, that's exactly where my mind goes. I have this workflow I set up where, for every email I get, it looks for things that are spammy, which is all these, can I come on your podcast, or what about this one? All these things where I'm just like, I don't have time for this. And I have it categorized into a folder called spammy, and it's 95 percent great, but then there's the, oh, wow, I missed an email because it went in there. So this is a good push for me: I'm going to work on this, I'm going to get it to perfect.

Speaker 1:
[71:28] Yeah. We're also working on making the flow for customizing these commands a lot easier, because right now I think you have to know too many concepts. You have to know to define a skill, you have to know to use the skill and give it feedback, you have to know to tell Cowork to update the skill based on all the feedback that you gave, and then you also have to know where to read the skill to make sure the feedback was incorporated the way you want. It's also our job to make this flow really seamless, so that it doesn't feel painful to do.

Speaker 2:
[71:57] Amazing. Is there anything else, Cat, you wanted to share, anything else you wanted to leave listeners with, anything you wanted to double down on that we haven't already touched on before we get to a very exciting lightning round?

Speaker 1:
[72:08] I see a lot of people playing around with AI, building prototype apps, and tinkering with building workflows. I would really push people towards building apps that you're actually using every single day, because I think only through that usage are you actually getting the value. If you build a prototype app that isn't helping you get more done, then the AI isn't really adding value to your day.

Speaker 2:
[72:38] And there's only so much you learn from that. It's like, okay, I just one-shotted something. Oh, that's cool. And then you never come back to it. You're not learning a lot.

Speaker 1:
[72:45] And you're not getting much leverage from it.

Speaker 2:
[72:47] And actual leverage. Yeah. That's such a good point.

Speaker 1:
[72:49] I also think there are a lot of people who spend a lot of time customizing their workflow. There are two ends of the spectrum. One is people who never customize and never build automations. But there's this polar-opposite end of people who obsess over customizing their tool, adding a ton of skills and MCPs and these workflow improvements. And I think sometimes that can even distract from your core goal of launching some product or building some feature. There's a lot of fun in customizing, and we definitely want to make our products very hackable, so that you can make them work really well for you. But there is a limit to how much it's useful. And I think there's a camp of people who maybe spend so much time customizing that they're not sleeping and not doing the core task that they originally set out to do.

Speaker 2:
[73:41] I see a lot of that on Twitter. Just like, look at my setup. It's out of control. It's so optimized. Then what are you actually building? No, but my setup is so awesome. I could get so much done.

Speaker 1:
[73:52] I think the simple setups actually work better.

Speaker 2:
[73:56] Slash power up, get level up a little bit.

Speaker 1:
[73:58] Yeah.

Speaker 2:
[73:59] There's this Karpathy tweet that just came out yesterday where he talked about this interesting divide between people who tried ChatGPT or Claude back in the day and were like, no, this is terrible. They gave up on what AI could do for them, and they're just so cynical: no way, it's not actually that big of a deal. Then there are people who are using it to code, essentially, who see the full, intense power of it and how good it is. People on both sides don't understand the other side and how they see the world. And so your advice is really good here: actually use it for real things and see how good it has actually gotten.

Speaker 1:
[74:38] Yeah. I think the big shift is that the 2024 generation of products was chat-based, and the Claude Code generation of products is action-based. The big aha moment people have is when Claude can just do things on their behalf. It's an amazing feeling to know that the agent is capable of doing so much more than telling you what to do; the agent can actually just do it itself. And when people feel that, I think that's the eye-opening moment.

Speaker 2:
[75:10] Shout out to the Claude Code Chrome extension, where you can just watch it doing stuff. You'd be like, fill out this form for me, and it'll go, and there it goes.

Speaker 1:
[75:18] Exactly.

Speaker 2:
[75:19] Okay. Anything else before we get to our very exciting lightning round?

Speaker 1:
[75:22] No, let's do it.

Speaker 2:
[75:23] Let's do it. Cat, I've got five questions for you. Welcome to the lightning round. There's this animation that plays. I have to make sure to say it. Are you ready?

Speaker 1:
[75:32] I'm ready.

Speaker 2:
[75:33] First question: what are two or three books that you find yourself recommending most to other people?

Speaker 1:
[75:38] I really like How Asia Works. It's a story about economic development and the policies and governments that make long-lasting, successful economies. The other book that I'm really into is The Technology Trap. It's about the past few technology revolutions, the industrial revolution and the computer revolution, and how they affected workers. The reason I really like it is that I think there's a lot we can learn from history to make sure this transition goes well. Maybe on a fun note, I really like The Paper Menagerie. It's a book of short stories about coming of age and AI and self-discovery.

Speaker 2:
[76:30] Favorite recent movie or TV show you have really enjoyed?

Speaker 1:
[76:33] I really like Drive to Survive. There's no deeper meaning to it. There's just something very satisfying about people being so obsessed with a singular engineering goal, the purity of the pursuit. I also really love Free Solo, which is about Alex Honnold climbing El Capitan without ropes. And similarly, it's just such a pure achievement to be able to climb this extremely challenging, dangerous route, and to have the mental focus to do it, knowing that if you make a single mistake, you die.

Speaker 2:
[77:17] It's insane. Yeah, that movie is out of control. And it's interesting how these relate in some way to the work you do.

Speaker 1:
[77:22] I'm actually a rock climber. I first watched Free Solo before I started climbing, and so I thought it was impressive, but I didn't understand how impressive it was. It's one of the rare movies where the more you know about the subject, the more you're blown away by how insane it is. The kinds of moves he's doing on the wall are things that I don't think I will ever be able to do in my lifetime, even if they were set in a gym, one foot off the ground.

Speaker 2:
[77:47] With a rope.

Speaker 1:
[77:48] With a rope.

Speaker 2:
[77:50] Did you see the documentary on that other guy, the younger one who climbed on ice?

Speaker 1:
[77:54] I did. That one was very sad.

Speaker 2:
[77:56] But that was wild. Okay. Favorite product you've recently discovered that you really love?

Speaker 1:
[78:00] The product that has most changed my life, outside of Claude products, is probably Waymo. I'm a diehard Waymo user. I use it twice a day, to get to and from work. There are two things that I really like about it. One, I don't feel bad if a Waymo is waiting for me, so I feel less pressure to be right at the curbside the moment it arrives. The second thing is, I feel like it lets me be a bit more productive. When I'm in the car with another human, I typically try not to do any work calls. I feel a little rude if I'm on my laptop the whole time. But one thing I really appreciate about the Waymo is that I can call into a work call. I'm not worried about someone overhearing me. I'm not worried about, hey, is this rude? Am I talking too loud? Do I need to ask someone to change the music? So I feel like this has given me back 30 minutes every day.

Speaker 2:
[78:55] All these second order effects of technology, it's so interesting.

Speaker 1:
[78:59] Yeah, I always thought Waymo needed to be priced lower than Uber and Lyft to succeed. But actually, I'm very happy to pay a 2x premium for it.

Speaker 2:
[79:06] I love Waymo. It's just like once you see it, you're just like, this is insane. And then you get used to it. You get in there, you're like, this is crazy. And then you forget about it.

Speaker 1:
[79:17] Totally. I think it's also changed the vernacular. A lot of people at Anthropic love Waymo. In the past, you'd be like, hey, should we call an Uber? And now everyone's just like, okay, is the Waymo here?

Speaker 2:
[79:30] Okay, two more questions. Do you have a favorite life motto that you often come back to in work or in life?

Speaker 1:
[79:35] Just do things. I think there's a lot of value in first-principles thinking. If you know what you're optimizing for and you have strong first principles, then you can normally deduce what the right course of action is and clearly articulate that to all the stakeholders. And then you should just do it. I think jobs are fake. If you understand the constraints, you can figure out what you can do, and then just try to do it quickly, learn from the mistakes, and apologize or fix them if you did something wrong.

Speaker 2:
[80:08] You could just do things, whoever said that.

Speaker 1:
[80:10] I think it's liberating, actually, to tell people this. At a lot of companies, roles are very strictly defined: okay, this is what the PM does, this is what the designer does, this is what the engineer does. And even team scopes are very rigidly defined: hey, this corner of the codebase we touch, and this corner we're not allowed to touch. And I think what just-do-things people do is feel empowered to make these decisions, empowered to operate across team boundaries, to just get something done.

Speaker 2:
[80:38] That feels like a big, important skill to be good at. People call it agency. Just do the things that need to be done. Bias towards action, all these ways of describing it. Don't wait for permission.

Speaker 1:
[80:50] Yeah, I think this is my favorite reason to work at a startup at some point in your life. One thing that was very life-changing for me was working at Scale when we were 20 people. There was just no process, and we had really big problems that we needed to solve. I really appreciate Alex and the rest of the team for empowering me and everyone else to just figure things out, without any boundaries for what sales is supposed to do, what ops is supposed to do, what engineers are supposed to do. You have all the tools at your disposal, you have some ambitious, hairy problem statement, and you can do whatever you need to get to a good solution.

Speaker 2:
[81:28] You almost need that experience to build that skill, to feel comfortable doing that, because a lot of people go through school and college where it's all, do the thing we tell you to do and you will get a good grade. You have to unlearn that: okay, I'm just going to do the thing that needs to be done, even if people think it's dumb, because I think it's the right thing to do.

Speaker 1:
[81:46] Yeah, exactly.

Speaker 2:
[81:47] Okay, actually I have two more quick questions, two more final questions. One is, when Claude thinks, there are all these, I don't know if you call them verbs. What's the term for these things?

Speaker 1:
[81:55] Thinking words.

Speaker 2:
[81:56] Thinking words. And interestingly, these all leaked in the source code. Do you have a favorite thinking word?

Speaker 1:
[82:03] I really like manifesting. It's also like the sticker that I have on my laptop.

Speaker 2:
[82:10] Clearly the winner. Okay, final question. I asked Boris this too: with AGI potentially arriving in our lifetime, when you potentially don't have to work, what are you going to do with all your time?

Speaker 1:
[82:23] I think it will take a long time for AGI to diffuse across society. So I think the immediate thing is actually just helping bring the world along. My non-serious answer for after this happens is that I'll probably just do a lot of rock climbing. I'll probably move to Fontainebleau and live amongst 10,000 boulders and climb for a bit. There are also so many books I want to read. My goal is to be able to read one or two books a week. I'm currently at probably 0.5. The backlog is pretty big. I think there's just so much we can learn from history and so much that I don't understand as well as I would love to. I don't know anything about physics or robotics or hardware or aerospace. There are just so many interesting topics. So I'm excited to learn, even knowing that the AGI will already know it.

Speaker 2:
[83:26] Cat, this was amazing. You're awesome. Two follow up questions. Where can folks find you online if they want to reach out and just follow what you're up to? And how can listeners be useful to you?

Speaker 1:
[83:35] The best way to reach out: I'm @_catwu on Twitter. Feel free to tag me in things, feel free to DM me. I read all my DMs. I don't always respond to every single one, but I will read them all. And the thing that is most helpful is to tell us where Claude Code and Cowork aren't working well for you. We're very grateful for the amount of positive feedback, but the thing that we thrive on is edge cases, errors, specific tasks that we can reproduce where Claude Code or Cowork fail. Because if you're able to share that with us, and we're able to reproduce it, then it's something that we can actively improve for our next generations of models and our next harnesses.

Speaker 2:
[84:25] Extremely cool. People on Twitter are not shy about sharing this feedback, so keep it coming.

Speaker 1:
[84:30] Yes, please share the problems that you're having with us.

Speaker 2:
[84:34] Yeah, and it's really cool to see your whole team being so active on Twitter and responding to people. What I'm hearing is that this is stuff you guys actually see and react to.

Speaker 1:
[84:44] We appreciate everyone being so engaged with us. It gives the team a ton of energy. We have this channel of user love, and so whenever you guys share a success story, we post it there, and whenever you guys share issues with our product, we put it into our feedback channel. That way, our broader team is able to act on it.

Speaker 2:
[85:02] That is so cool to know. Thanks for sharing that. Well, Cat, thank you so much for being here.

Speaker 1:
[85:07] Thanks for having me.

Speaker 2:
[85:09] Bye, everyone. Thank you so much for listening. If you found this valuable, you can subscribe to the show on Apple Podcasts, Spotify or your favorite podcast app. Also, please consider giving us a rating or leaving a review, as that really helps other listeners find the podcast. You can find all past episodes or learn more about the show at lennyspodcast.com. See you in the next episode.