- How do you out-compete them? - I think there's gonna be many AGIs in the world, so we don't have to, like, out-compete everyone. We're gonna contribute one. Other people are gonna contribute some. I think multiple AGIs in the world, with some differences in how they're built and what they do and what they're focused on, I think that's good. We have a very unusual structure, so we don't have this incentive to capture unlimited value. I worry about the people who do but, you know, hopefully it's all gonna work out. But we're a weird org and we're good at resisting. Like, we have been a misunderstood and badly mocked org for a long time. Like, when we started and we, like, announced the org at the end of 2015 and said we were gonna work on AGI, like, people thought we were batshit insane. - Yeah. - You know, like, I remember at the time an eminent AI scientist at a large industrial AI lab was, like, DMing individual reporters, being, like, you know, these people aren't very good and it's ridiculous to talk about AGI and I can't believe you're giving them the time of day. And it's, like, that was the level of, like, pettiness and rancor in the field at a new group of people saying we're gonna try to build AGI. - So, OpenAI and DeepMind were a small collection of folks who were brave enough to talk about AGI in the face of mockery. - We don't get mocked as much now. - We don't get mocked as much now. So, speaking about the structure of the org, OpenAI stopped being a nonprofit, or split up, in '20. Can you describe that whole process? - Yeah, so, we started as a nonprofit. We learned early on that we were gonna need far more capital than we were able to raise as a nonprofit. Our nonprofit is still fully in charge. There is a subsidiary capped-profit so that our investors and employees can earn a certain fixed return. And then, beyond that, everything else flows to the nonprofit. And the nonprofit is, like, in voting control. It lets us make a bunch of non-standard decisions. Can cancel equity, can do a whole bunch of other things. Can let us merge with another org. Protects us from making decisions that are not in any, like, shareholder's interest. So, I think, as a structure, that has been important to a lot of the decisions we've made. - What went into that decision process for taking a leap from nonprofit to capped for-profit? What were the pros and cons you were deciding at the time? I mean, this was 2019. - It was really, like, to do what we needed to go do, we had tried and failed enough to raise the money as a nonprofit. We didn't see a path forward there. So, we needed some of the benefits of capitalism, but not too much. I remember, at the time, someone said, you know, as a nonprofit, not enough will happen; as a for-profit, too much will happen. So, we need this sort of strange intermediate. - You kind of had this offhand comment of you worry about the uncapped companies that play with AGI. Can you elaborate on the worry here? Because AGI, out of all the technologies we have in our hands, has the potential to make, the cap is 100x for OpenAI. - It started as that. It's much, much lower for, like, new investors now. - You know, AGI can make a lot more than 100x. - For sure. - And so, how do you, like, how do you compete? Like, stepping outside of OpenAI, how do you look at a world where Google is playing? Where Apple and Meta are playing? - We can't control what other people are gonna do.
We can try to, like, build something and talk about it, and influence others and provide value and, you know, good systems for the world, but they're gonna do what they're gonna do. Now, I think, right now, there's, like, extremely fast and not super deliberate motion inside of some of these companies. But, already, I think people are, as they see the rate of progress, already people are grappling with what's at stake here, and I think the better angels are gonna win out. - Can you elaborate on that? The better angels of individuals? The individuals within companies? - And companies. But, you know, the incentives of capitalism to create and capture unlimited value, I'm a little afraid of. But again, no, I think no one wants to destroy the world. No one wakes up saying, like, today I wanna destroy the world. So, we've got the Moloch problem. On the other hand, we've got people who are very aware of that, and, I think, a lot of healthy conversation about how can we collaborate to minimize some of these very scary downsides. - Well, nobody wants to destroy the world. Let me ask you a tough question. So, you are very likely to be one of, if not the, person that creates AGI. - One of. - One of. And, even then, like, we're on a team of many. There will be many teams, several teams. - But a small number of people, nevertheless, relative. - I do think it's strange that it's maybe a few tens of thousands of people in the world. A few thousand people in the world. - Yeah, but there will be a room with a few folks who are like, holy shit. - That happens more often than you would think now. - I understand. I understand this. I understand this. - But, yeah, there will be more such rooms. - Which is a beautiful place to be in the world. Terrifying, but mostly beautiful. So, that might make you and a handful of folks the most powerful humans on Earth. Do you worry that power might corrupt you? - For sure. Look, I don't, I think you want decisions about this technology, and, certainly, decisions about who is running this technology, to become increasingly democratic over time. We haven't figured out quite how to do this, but part of the reason for deploying like this is to get the world to have time to adapt. - Yeah. - And to reflect, and to think about this. To pass regulation, for institutions to come up with new norms, for people to work it out together. Like, that is a huge part of why we deploy. Even though many of the AI safety people you referenced earlier think it's really bad, even they acknowledge that this is, like, of some benefit. But I think any version of one person being in control of this is really bad. - So, trying to distribute the power somehow. - I don't have, and I don't want, like, any, like, super-voting power or any special, like, thing. You know, I have no, like, control of the board or anything like that at OpenAI. - But AGI, if created, has a lot of power. - How do you think we're doing? Like, honestly, how do you think we're doing so far? Like, how do you think our decisions are? Like, do you think we're making things net better or worse? What can we do better? - Well, the things I really like, because I know a lot of folks at OpenAI, I think what I really like is the transparency. Everything you're saying, which is, like, failing publicly. Writing papers, releasing different kinds of information about the safety concerns involved. Doing it out in the open is great. Because, especially in contrast to some other companies that are not doing that, they're being more closed.
That said, you could be more open. - Do you think we should open-source GPT4? - My personal opinion, because I know people at OpenAI, is no. - What does knowing the people at OpenAI have to do with it? - Because I know they're good people. I know a lot of people. I know they're good human beings. From the perspective of people that don't know the human beings, there's a concern of a super-powerful technology in the hands of a few that's closed. - It's closed in some sense, but we give more access to it. - Yeah. - Than, like, if this had just been Google's game, I feel it's very unlikely that anyone would've put this API out. There's PR risk with it. - Yeah. - Like, I get personal threats because of it all the time. I think most companies wouldn't have done this. So, maybe we didn't go as open as people wanted, but, like, we've distributed it pretty broadly. - You personally, and OpenAI's culture, is not so, like, nervous about PR risk and all that kind of stuff. You're more nervous about the risk of the actual technology, and you reveal that. So, you know, the nervousness that people have, 'cause it's such early days of the technology, is that you'll close off over time because it's more and more powerful. My nervousness is you get attacked so much by fear-mongering clickbait journalism that you're like, why the hell do I need to deal with this? - I think the clickbait journalism bothers you more than it bothers me. - No, I'm third-person bothered. - I appreciate that. I feel all right about it. Of all the things I lose sleep over, it's not high on the list. - Because it's important. There's a handful of companies, a handful of folks, that are really pushing this forward. They're amazing folks, and I don't want them to become cynical about the rest of the world. - I think people at OpenAI feel the weight of responsibility of what we're doing. And, yeah, it would be nice if, like, you know, journalists were nicer to us and Twitter trolls gave us more benefit of the doubt, but, like, I think we have a lot of resolve in what we're doing and why, and the importance of it. But I really would love, and I ask this, like, of a lot of people, not just when cameras are rolling, like, any feedback you've got for how we can be doing better. We're in uncharted waters here. Talking to smart people is how we figure out what to do better. - How do you take feedback? Do you take feedback from Twitter also? 'Cause of the sea, the waterfall? - My Twitter is unreadable. - Yeah. - So, sometimes I do. I can, like, take a sample, a cup out of the waterfall, but I mostly take it from conversations like this. - Speaking of feedback, somebody you know well, you worked together closely on some of the ideas behind OpenAI, is Elon Musk. You have agreed on a lot of things. You've disagreed on some things. What have been some interesting things you've agreed and disagreed on? Speaking of fun debate on Twitter. - I think we agree on the magnitude of the downside of AGI and the need to get, not only safety right, but get to a world where people are much better off because AGI exists than if AGI had never been built. - Yeah. What do you disagree on? - Elon is obviously attacking us some on Twitter right now on a few different vectors, and I have empathy because I believe he is, understandably so, really stressed about AGI safety. I'm sure there are some other motivations going on, too, but that's definitely one of them.
I saw this video of Elon a long time ago, talking about SpaceX, maybe it was on some news show, and a lot of early pioneers in space were really bashing SpaceX, and maybe Elon, too. And he was visibly very hurt by that and said, you know, those guys are heroes of mine and it sucks, and I wish they would see how hard we're trying. I definitely grew up with Elon as a hero of mine. You know, despite him being a jerk on Twitter, whatever. I'm happy he exists in the world, but I wish he would do more to look at the hard work we're doing to get this stuff right. - A little bit more love. What do you admire, in the name of love, about Elon Musk? - I mean, so much, right? Like, he has, he has driven the world forward in important ways. I think we will get to electric vehicles much faster than we would have if he didn't exist. I think we'll get to space much faster than we would have if he didn't exist. And as a sort of, like, a citizen of the world, I'm very appreciative of that. Also, like, being a jerk on Twitter aside, in many instances, he's, like, a very funny and warm guy. - And some of the jerk-on-Twitter thing. As a fan of humanity laid out in its full complexity and beauty, I enjoy the tension of ideas expressed. So, you know, I earlier said that I admire how transparent you are, but I like how the battles are happening before our eyes, as opposed to everybody closing off inside boardrooms. It's all laid out. - Yeah, you know, maybe I should hit back, and maybe someday I will, but it's not, like, my normal style. - It's all fascinating to watch, and I think both of you are brilliant people and have, early on, for a long time, really cared about AGI and had great concerns about AGI, but a great hope for AGI. And that's cool to see these big minds having those discussions, even if they're tense at times. I think it was Elon that said that GPT is too woke. Is GPT too woke? Can you steelman the case that it is and not? This is going to our question about bias. - Honestly, I barely know what woke means anymore. I did for a while, and I feel like the word has morphed. So, I will say I think it was too biased and will always be. There will be no one version of GPT that the world ever agrees is unbiased. What I think is, we've made a lot of progress. Like, again, even some of our harshest critics have gone off and been tweeting about 3.5-to-4 comparisons and being, like, wow, these people really got a lot better. Not that they don't have more work to do, and we certainly do, but I appreciate critics who display intellectual honesty like that. - Yeah. - And there's been more of that than I would've thought. We will try to get the default version to be as neutral as possible, but as neutral as possible is not that neutral if you have to do it, again, for more than one person. And so, this is where more steerability, more control in the hands of the user, the system message in particular, is, I think, the real path forward. And, as you pointed out, these nuanced answers to look at something from several angles. - Yeah, it's really, really fascinating. It's really fascinating. Is there something to be said about the employees of a company affecting the bias of the system? - 100%. We try to avoid the SF groupthink bubble. It's harder to avoid the AI groupthink bubble. That follows you everywhere. - There's all kinds of bubbles we live in. - 100%. - Yeah.
- I'm going on, like, an around-the-world user tour soon for a month, to just go, like, talk to our users in different cities. And I can, like, feel how much I'm craving doing that, because I haven't done anything like that in years. I used to do that more for YC. And to go talk to people in super-different contexts. And it doesn't work over the internet. Like, to go show up in person and, like, sit down and, like, go to the bars they go to and kind of, like, walk through the city like they do. You learn so much and get out of the bubble so much. I think we are much better than any other company I know of in San Francisco for not falling into the kind of, like, SF craziness, but I'm sure we're still pretty deeply in it. - But is it possible to separate the bias of the model versus the bias of the employees? - The bias I'm most nervous about is the bias of the human feedback raters. - Ah. So, what's the selection of the humans? Is there something you could speak to at a high level about the selection of the human raters? - This is the part that we understand the least well. We're great at the pre-training machinery. We're now trying to figure out how we're gonna select those people. How we'll, like, verify that we get a representative sample. How we'll do different ones for different places. But we don't have that functionality built out yet. - Such a fascinating science. - You clearly don't want, like, all American elite university students giving you your labels. - Well, see, it's not about... - I'm sorry, I just can never resist that dig. - Yes, nice. (Lex laughing) But it's, so that's a good... there's a million heuristics you can use. To me, that's a shallow heuristic, because, like, any one kind of category of human that you would think would have certain beliefs might actually be really open-minded in an interesting way. So, you have to, like, optimize for how good you are actually at answering, at doing these kinds of rating tasks. How good you are at empathizing with the experience of other humans. - That's a big one. - And being able to actually, like, what does the worldview look like for all kinds of groups of people that would answer this differently? I mean, you'd have to do that constantly instead of, like... - You've asked this a few times, but it's something I often do. You know, I ask people in an interview, or whatever, to steelman the beliefs of someone they really disagree with. And the inability of a lot of people to even pretend like they're willing to do that is remarkable. - Yeah. What I find, unfortunately, ever since COVID, even more so, is that there's almost an emotional barrier. It's not even an intellectual barrier. Before they even get to the intellectual, there's an emotional barrier that says, no. Anyone who might possibly believe X, they're an idiot, they're evil, they're malevolent, anything you wanna assign. It's like they're not even, like, loading the data into their head. - Look, I think we'll find out that we can make GPT systems way less biased than any human. - Yeah. So, hopefully, without the... - Because there won't be that emotional load there. - Yeah, the emotional load. But there might be pressure. There might be political pressure. - Oh, there might be pressure to make a biased system. What I meant is the technology, I think, will be capable of being much less biased. - Do you anticipate, do you worry about pressures from outside sources? From society, from politicians, from money sources? - I both worry about it and want it.
Like, you know, to the point of, we're in this bubble and we shouldn't make all these decisions. Like, we want society to have a huge degree of input here. That is pressure, at some point, in some way. - Well, there's, you know, to some degree, the Twitter Files have revealed that there was pressure from different organizations. You can see it in the pandemic, where the CDC or some other government organization might put pressure on: you know what, we're not really sure what's true, but it's very unsafe to have these kinds of nuanced conversations now, so let's censor all topics. And you get a lot of those emails, like, you know, emails, all different kinds of people reaching out at different places to put subtle, indirect pressure, direct pressure, financial, political pressure, all that kind of stuff. Like, how do you survive that? How much do you worry about that, if GPT continues to get more and more intelligent and becomes the source of information and knowledge for human civilization? - I think there's, like, a lot of, like, quirks about me that make me not a great CEO for OpenAI, but a thing in the positive column is I think I am relatively good at not being affected by pressure for the sake of pressure. - By the way, beautiful statement of humility, but I have to ask, what's in the negative column? (both laughing) - I mean. - Too long a list? - No, I'm trying, what's a good one? (Lex laughing) I mean, I think I'm not a great, like, spokesperson for the AI movement, I'll say that. I think there could be, like, a more, like, there could be someone who enjoyed it more. There could be someone who's, like, much more charismatic. There could be someone who, like, connects better, I think, with people than I do. - I'm with Chomsky on this. I think charisma's a dangerous thing. I think flaws in communication style, I think, is a feature, not a bug, in general. At least for humans. At least for humans in power. - I think I have, like, more serious problems than that one. I think I'm, like, pretty disconnected from, like, the reality of life for most people, and trying to really not just, like, empathize with, but internalize, what the impact on people that AGI is going to have. I probably, like, feel that less than other people would. - That's really well put. And you said, like, you're gonna travel across the world. - Yeah, I'm excited. - To empathize with different users. - Not to empathize, just to, like, I want to just, like, buy our users, our developers, our users, a drink and say, like, tell us what you'd like to change. And I think one of the things we are not as good at as a company as I would like is to be a really user-centric company. And I feel like, by the time it gets filtered to me, it's, like, totally meaningless. So, I really just want to go talk to a lot of our users in very different contexts. - But, like you said, a drink in person, because, I mean, I haven't actually found the right words for it, but I was a little afraid, with the programming. - Hmm, yeah. - Emotionally. I don't think it makes any sense. - There is a real limbic response there. - GPT makes me nervous about the future. Not in an AI safety way, but, like, change. - What am I gonna do? - Yeah, change. And, like, there's a nervousness about change. - More nervous than excited? - If I take away the fact that I'm an AI person and just a programmer? - Yeah. - More excited, but still nervous. Like, yeah, nervous in brief moments, especially when sleep-deprived. But there's a nervousness there.
- People who say they're not nervous, that's hard for me to believe. - But you're right, it's excitement. It's nervousness for change. Nervousness whenever there's significant, exciting kind of change. You know, I've recently started using, I've been an Emacs person for a very long time, and I switched to VS Code. - For Copilot? - That was one of the big reasons. - Cool. 'Cause, like, this is where a lot of active development is. Of course, you can probably do Copilot inside Emacs. I mean, I'm sure. - VS Code is also pretty good. - Yeah, there's a lot of, like, little things and big things that are just really good about VS Code. And I've been, I can happily report, and all the Vim people are just going nuts, but I'm very happy. It was a very happy decision. - That's it. - But there was a lot of uncertainty. There's a lot of nervousness about it. There's fear and so on about taking that leap, and that's obviously a tiny leap. But even just the leap to actively using Copilot, like, using generation of code, it makes me nervous. But, ultimately, my life as a programmer, purely as a programmer of little things and big things, is much better. But there's a nervousness, and I think a lot of people will experience that, and you will experience that by talking to them. And I don't know what we do with that. How we comfort people in the face of this uncertainty. - And you're getting more nervous the more you use it, not less. - Yes. I would have to say yes, because I get better at using it. - Yeah, the learning curve is quite steep. - Yeah. And then, there's moments when you're, like, oh, it generates a function beautifully. And you sit back, both proud, like a parent, but almost, like, proud and scared that this thing will be much smarter than me. Like, both pride and sadness. Almost like a melancholy feeling. But, ultimately, joy, I think, yeah. What kind of jobs do you think GPT language models would be better than humans at? - Like, full, like, does the whole thing end-to-end better? Not like what it's doing with you, where it's helping you be maybe 10 times more productive? - Those are both good questions. I would say they're equivalent to me, because if I'm 10 times more productive, wouldn't that mean that there'd be a need for far fewer programmers in the world? - I think the world is gonna find out that if you can have 10 times as much code at the same price, you can just use even more. - So, write even more code. - It just needs way more code. - It is true that a lot more could be digitized. There could be a lot more code in a lot more stuff. - I think there's, like, a supply issue. - Yeah. So, in terms of really replacing jobs, is that a worry for you? - It is. I'm trying to think of, like, a big category that I believe can be massively impacted. I guess I would say customer service is a category where I could see there being just way fewer jobs relatively soon. I'm not even certain about that, but I could believe it. - So, like, basic questions about when do I take this pill, if it's a drug company, or, I don't know why I went to that, but, like, how do I use this product, like, questions? - Yeah. - Like, how do I use this? - Whatever call center employees are doing now. - Yeah. This is not work, yeah, okay. - I want to be clear. I think, like, these systems will make a lot of jobs just go away. Every technological revolution does.
They will enhance many jobs and make them much better, much more fun, much higher paid, and they'll create new jobs that are difficult for us to imagine, even if we're starting to see the first glimpses of them. But I heard someone last week talking about GPT4 saying that, you know, man, the dignity of work is just such a huge deal. We've really gotta worry. Like, even people who think they don't like their jobs, they really need them. It's really important to them and to society. And, also, can you believe how awful it is that France is trying to raise the retirement age? And I think we, as a society, are confused about whether we wanna work more or work less. And, certainly, about whether most people like their jobs and get value out of their jobs or not. Some people do. I love my job, I suspect you do, too. That's a real privilege. Not everybody gets to say that. If we can move more of the world to better jobs, and work to something that can be a broader concept. Not something you have to do to be able to eat, but something you do as a creative expression and a way to find fulfillment and happiness and whatever else. Even if those jobs look extremely different from the jobs of today, I think that's great. I'm not nervous about it at all. - You have been a proponent of UBI, universal basic income. In the context of AI, can you describe your philosophy there, of our human future with UBI? Why you like it? What are some limitations? - I think it is a component of something we should pursue. It is not a full solution. I think people work for lots of reasons besides money. And I think we are gonna find incredible new jobs, and society, as a whole, and people as individuals, are gonna get much, much richer. But, as a cushion through a dramatic transition, and as just, like, you know, I think the world should eliminate poverty, if able to do so. I think it's a great thing to do, as a small part of the bucket of solutions. I helped start a project called Worldcoin, which is a technological solution to this. We've also funded a, like, a large, I think maybe the largest and most comprehensive, universal basic income study, sponsored by OpenAI. And I think it's, like, an area we should just be looking into. - What are some, like, insights from that study that you gained? - We're gonna finish up at the end of this year, and we'll be able to talk about it, hopefully, very early next. - If we can linger on it, how do you think the economic and political systems will change as AI becomes a prevalent part of society? It's such an interesting sort of philosophical question. Looking 10, 20, 50 years from now, what does the economy look like? What does politics look like? Do you see significant transformations in terms of the way democracy functions, even? - I love that you asked them together, 'cause I think they're super related. I think the economic transformation will drive much of the political transformation here, not the other way around. My working model for the last, I don't know, five years has been that the two dominant changes will be that the cost of intelligence and the cost of energy are going, over the next couple of decades, to dramatically, dramatically fall from where they are today. And the impact of that, and you're already seeing it with the way you now have, like, you know, programming ability beyond what you had as an individual before, is society gets much, much richer, much wealthier, in ways that are probably hard to imagine.
I think every time that's happened before, that economic impact has had positive political impact as well. And I think it does go the other way, too. Like, the sociopolitical values of the Enlightenment enabled the long-running technological revolution and scientific discovery process we've had for the past centuries. But I think we're just gonna see more. I'm sure the shape will change, but I think it's this long and beautiful exponential curve. - Do you think there will be more, I don't know what the term is, but systems that resemble something like democratic socialism? I've talked to a few folks on this podcast about these kinds of topics. - Instant yes. I hope so. - So that it reallocates some resources in a way that supports, kind of lifts, the people who are struggling. - I am a big believer in lift up the floor and don't worry about the ceiling. - If I can test your historical knowledge. - It's probably not gonna be good, but let's try it. - Why do you think, I come from the Soviet Union, why do you think communism in the Soviet Union failed? - I recoil at the idea of living in a communist system. And I don't know how much of that is just the biases of the world I've grown up in and what I have been taught, and probably more than I realize. But I think, like, more individualism, more human will, more ability to self-determine is important. And, also, the ability to try new things and not need permission and not need some sort of central planning. Betting on human ingenuity and this sort of, like, distributed process, I believe, is always going to beat centralized planning. And I think that, like, for all of the deep flaws of America, I think it is the greatest place in the world, because it's the best at this. - So, it's really interesting that centralized planning failed in such big ways. But what if, hypothetically, the centralized planning... - Was a perfect super intelligent AGI. - Super intelligent AGI. Again, it might go wrong in the same kind of ways, but it might not. We don't really know. - We don't really know. It might be better. I expect it would be better. But would it be better than a hundred super intelligent or a thousand super intelligent AGIs sort of in a liberal democratic system? - Arguing. - Yes. - Oh, man. - Now, also, how much of that can happen internally in one super intelligent AGI? Not so obvious. - There is something about, right, but there is something about, like, tension, the competition. - But you don't know that that's not happening inside one model. - Yeah, that's true. It'd be nice. It'd be nice if, whether it's engineered in or revealed to be happening, it'd be nice for it to be happening. - And, of course, it can happen with multiple AGIs talking to each other or whatever. - There's something also about, I mean, Stuart Russell has talked about the control problem of always having AGI have some degree of uncertainty. Not having a dogmatic certainty to it. - That feels important. - So, some of that is already handled with human alignment, human feedback, reinforcement learning with human feedback. But it feels like there has to be engineered in, like, a hard uncertainty. - Yeah. - Humility, you can put a romantic word to it. - Yeah. - You think that's possible to do? - The definition of those words, I think, the details really matter, but, as I understand them, yes, I do. - What about the off switch? - That, like, big red button in the data center we don't tell anybody about? - Yeah, don't use that? - I'm a fan. My backpack.
- In your backpack. You think that's possible, to have a switch? You think, I mean, actually, more seriously, more specifically, about sort of rolling out of different systems. Do you think it's possible to roll them out, unroll them, pull them back in? - Yeah, I mean, we can absolutely take a model back off the internet. We can, like, we can turn an API off. - Isn't that something you worry about? Like, when you release it and millions of people are using it, and, like, you realize, holy crap, they're using it for, I don't know, worrying about, like, all kinds of terrible use cases? - We do worry about that a lot. I mean, we try to figure out, with as much red teaming and testing ahead of time as we do, how to avoid a lot of those. But I can't emphasize enough how much the collective intelligence and creativity of the world will beat OpenAI and all of the red team members we can hire. So, we put it out, but we put it out in a way we can make changes. - From the millions of people that have used ChatGPT and GPT, what have you learned about human civilization, in general? I mean, the question I ask is, are we mostly good, or is there a lot of malevolence in the human spirit? - Well, to be clear, I don't, nor does anyone else at OpenAI, sit there, like, reading all the ChatGPT messages. - Yeah. - But from what I hear people using it for, at least the people I talk to, and from what I see on Twitter, we are definitely mostly good. - But, A, not all of us are, all of the time. And, B, we really want to push on the edges of these systems, and, you know, we really want to test out some darker theories of the world. - Yeah. Yeah, it's very interesting. It's very interesting. And I think that actually doesn't communicate the fact that we're, like, fundamentally dark inside, but we like to go to the dark places in order to, maybe, rediscover the light. It feels like dark humor is a part of that. Some of the toughest things you go through, if you suffer in life, in a war zone. The people I've interacted with that are in the midst of a war, they're usually joking around. - They still tell jokes. - Yeah, they're joking around, and they're dark jokes. - Yep. - So, that part. - There's something there, I totally agree. - About that tension. So, just to the model, how do you decide what isn't misinformation? How do you decide what is true? You actually have OpenAI's internal factual performance benchmark. There's a lot of cool benchmarks here. How do you build a benchmark for what is true? What is truth, Sam Altman? - Like, math is true. And the origin of COVID is not agreed upon as ground truth. - Those are the two things. - And then, there's stuff that's, like, certainly not true. But between that first and second milestone, there's a lot of disagreement. - What do you look for? Not even just now, but in the future, where can we, as a human civilization, look to for truth? - What do you know is true? What are you absolutely certain is true? (Lex laughing) - I have a general epistemic humility about everything, and I'm freaked out by how little I know and understand about the world. So, even that question is terrifying to me. There's a bucket of things that have a high degree of truthiness, which is where you put math, a lot of math. - Yeah. Can't be certain, but it's good enough for, like, this conversation, we can say math is true. - Yeah, I mean, some, quite a bit of physics. There's historical facts. Maybe dates of when a war started. There's a lot of details about military conflict inside history.
Of course, you start to get, you know, I just read "Blitzed", which is this... - Oh, I wanna read that. - Yeah. - How is it? - It was really good. It gives a theory of Nazi Germany and Hitler that so much can be explained about Hitler and a lot of the upper echelon of Nazi Germany through the excessive use of drugs. - Just amphetamines, right? - Amphetamines, but also other stuff. But it's just a lot. And, you know, that's really interesting. It's really compelling. And, for some reason, it's like, whoa, that would explain a lot. That's somehow really sticky. It's an idea that's sticky. And then, you read a lot of criticism of that book later by historians, saying that, actually, there's a lot of cherry-picking going on. And it's actually using the fact that that's a very sticky explanation. There's something about humans that likes a very simple narrative to describe everything. - For sure, for sure, for sure. - And then... - Yeah, too much amphetamines caused the war is, like, a great, even if not true, simple explanation that feels satisfying and excuses a lot of other, probably much darker, human truths. - Yeah, the military strategy employed, the atrocities, the speeches, just the way Hitler was as a human being, the way Hitler was as a leader. All of that could be explained through this one little lens. And it's like, well, if you say that's true, that's a really compelling truth. So, maybe truth, in one sense, is defined as a thing that, as a collective intelligence, all our brains are kind of sticking to. And we're like, yeah, yeah, yeah, yeah, yeah. A bunch of ants get together and, like, yeah, this is it. I was gonna say sheep, but there's a connotation to that. But, yeah, it's hard to know what is true. And I think, when constructing a GPT-like model, you have to contend with that. - I think a lot of the answers, you know, like, if you ask GPT4, just to stick on the same topic, did COVID leak from a lab? - Yeah. - I expect you would get a reasonable answer. - It's a really good answer, yeah. It laid out the hypotheses. The interesting thing it said, which is refreshing to hear, is something like there's very little evidence for either hypothesis, direct evidence. Which is important to state. The reason why there's a lot of uncertainty and a lot of debate is because there's not strong physical evidence for either. - Heavy circumstantial evidence on either side. - And then, the other is more, like, biological theoretical kind of discussion. And I think the answer, the nuanced answer that GPT provided, was actually pretty damn good. And, also, importantly, saying that there is uncertainty. Just the fact that there is uncertainty, as a statement, was really powerful. - Man, remember when, like, the social media platforms were banning people for saying it was a lab leak? - Yeah, that's really humbling. The humbling overreach of power in censorship. But the more powerful GPT becomes, the more pressure there'll be to censor. - We have a different set of challenges than those faced by the previous generation of companies, which is, people talk about free speech issues with GPT, but it's not quite the same thing. It's not, like, this is a computer program and what it's allowed to say. And it's also not about the mass spread and the challenges that, I think, Twitter and Facebook and others have struggled with so much. So, we will have very significant challenges, but they'll be very new and very different.
- And maybe, yeah, very new, very different is a good way to put it. There could be truths that are harmful in their truth. I don't know. Group differences in IQ. There you go. Scientific work that, once spoken, might do more harm. And you ask GPT that. Should GPT tell you? There's books written on this that are rigorous scientifically, but are very uncomfortable and probably not productive in any sense, but maybe are. There's people arguing all kinds of sides of this, and a lot of them have hate in their heart. And so, what do you do with that? If there's a large number of people who hate others, but are actually citing scientific studies, what do you do with that? What does GPT do with that? What is the priority of GPT to decrease the amount of hate in the world? Is it up to GPT, or is it up to us humans? - I think we, as OpenAI, have responsibility for the tools we put out into the world. I think the tools themselves can't have responsibility, in the way I understand it. - Wow, so you carry some of that burden and responsibility? - For sure, all of us. All of us at the company. - So, there could be harm caused by this tool. - There will be harm caused by this tool. There will be harm. There'll be tremendous benefits but, you know, tools do wonderful good and real bad. And we will minimize the bad and maximize the good. - And you have to carry the weight of that. How do you avoid GPT being hacked or jailbroken? There's a lot of interesting ways that people have done that, like with token smuggling or other methods like DAN. - You know, when I was, like, a kid, basically, I worked once on jailbreaking an iPhone, the first iPhone, I think, and I thought it was so cool. And I will say it's very strange to be on the other side of that. - You're now the man. - Kind of sucks. - Is some of it fun? How much of it is a security threat? I mean, how much do you have to take it seriously? How is it even possible to solve this problem? Where does it rank on the set of problems? I'll just keep asking questions, prompting. - We want users to have a lot of control and get the models to behave in the way they want, within some very broad bounds. And I think the whole reason for jailbreaking is, right now, we haven't yet figured out how to, like, give that to people. And the more we solve that problem, I think the less need there'll be for jailbreaking. - Yeah, it's kind of like piracy gave birth to Spotify. - People don't really jailbreak iPhones that much anymore. - Yeah. - And it's gotten harder, for sure, but also, like, you can just do a lot of stuff now. - Just like with jailbreaking, I mean, there's a lot of hilarity that ensued. So, Evan Morikawa, cool guy, he's at OpenAI. - Yeah. - He tweeted something, and he was also really kind to communicate with me, sent me a long email describing the history of OpenAI, all the different developments. He really lays it out. I mean, that's a much longer conversation of all the awesome stuff that happened. It's just amazing. But his tweet was: DALL·E, July '22; ChatGPT, November '22; API 66% cheaper, August '22; embeddings 500 times cheaper while state of the art, December '22; ChatGPT API also 10 times cheaper while state of the art, March '23; Whisper API, March '23; GPT4, today, whenever that was, last week. And the conclusion is: this team ships. - We do. - What's the process of going, and then we can extend that back. I mean, listen, from the 2015 OpenAI launch, GPT, GPT2, GPT3, OpenAI Five finals with the gaming stuff, which is incredible.
GPT3 API released. DALL·E, InstructGPT, fine-tuning. There's just a million things available. DALL·E, DALL·E 2 preview, and then DALL·E is available to 1 million people. Whisper, second model release. Just across all of this stuff, both research and deployment of actual products that could be in the hands of people. What is the process of going from idea to deployment that allows you to be so successful at shipping AI-based products? - I mean, there's a question of, should we be really proud of that, or should other companies be really embarrassed? - Yeah. - And we believe in a very high bar for the people on the team. We work hard, which, you know, you're not even, like, supposed to say anymore or something. We give a huge amount of trust and autonomy and authority to individual people, and we try to hold each other to very high standards. And, you know, there's a process, which we can talk about, but it won't be that illuminating. I think it's those other things that make us able to ship at a high velocity. - So, GPT4 is a pretty complex system. Like you said, there's, like, a million little hacks you can do to keep improving it. There's the cleaning up of the data set, all that. All those are, like, separate teams. So, do you give autonomy, is there just autonomy to these fascinating different problems? - If, like, most people in the company weren't really excited to work super hard and collaborate well on GPT4 and thought other stuff was more important, there'd be very little I or anybody else could do to make it happen. But we spend a lot of time figuring out what to do, getting on the same page about why we're doing something, and then how to divide it up and all coordinate together. - So then, you have, like, a passion for the goal here. So, everybody's really passionate across the different teams. - Yeah, we care. - How do you hire? How do you hire great teams? The folks I've interacted with at OpenAI are some of the most amazing folks I've ever met. - It takes a lot of time. Like, I spend, I mean, I think a lot of people claim to spend a third of their time hiring. I, for real, truly do. I still approve every single hire at OpenAI. And I think there's, you know, we're working on a problem that is, like, very cool and that great people wanna work on. We have great people, and some people wanna be around them. But, even with that, I think there's just no shortcut for putting a ton of effort into this. - So, even when you have the good people, it's hard work. - I think so. - Microsoft announced a new multi-year, multi-billion dollar, reported to be $10 billion, investment into OpenAI. Can you describe the thinking that went into this? What are the pros, what are the cons of working with a company like Microsoft? - It's not all perfect or easy but, on the whole, they have been an amazing partner to us. Satya and Kevin Scott are super aligned with us, super flexible, and have gone, like, way above and beyond the call of duty to do things that we have needed to get all this to work. This is, like, a big iron, complicated engineering project, and they are a big and complex company. And I think, like many great partnerships or relationships, we've sort of just continued to ramp up our investment in each other, and it's been very good. - It's a for-profit company. It's very driven. It's very large-scale. Is there pressure to kind of make a lot of money?
- I think most other companies wouldn't, maybe now they would, but wouldn't at the time, have understood why we needed all the weird control provisions we have and why we need all the kind of, like, AGI specialness. And I know that 'cause I talked to some other companies before we did the first deal with Microsoft, and I think they are unique, in terms of the companies at that scale, in understanding why we needed the control provisions we have. - And so, those control provisions help you make sure that the capitalist imperative does not affect the development of AI. Well, let me just ask you, as an aside, about Satya Nadella, the CEO of Microsoft. He seems to have successfully transformed Microsoft into this fresh, innovative, developer-friendly company. - I agree. - What do you, I mean, is it really hard to do for a very large company? What have you learned from him? Why do you think he was able to do this kind of thing? Yeah, what insights do you have about why this one human being is able to contribute to the pivot of a large company to something very new? - I think most CEOs are either great leaders or great managers. And, from what I have observed with Satya, he is both. Super visionary, really, like, gets people excited, really makes long-duration and correct calls. And, also, he is just a super effective hands-on executive and, I assume, manager, too. And I think that's pretty rare. - I mean, Microsoft, I'm guessing, like IBM, like a lot of companies that have been at it for a while, probably has, like, old-school kind of momentum. So, you, like, inject AI into it, it's very tough. Or anything, even like the culture of open source. Like, how hard is it to walk into a room and be like, the way we've been doing things is totally wrong? Like, I'm sure there's a lot of firing involved, or a little, like, twisting of arms or something. So, do you have to rule by fear, by love? Like, what can you say to the leadership aspect of this? - I mean, he's just, like, done an unbelievable job, but he is amazing at being, like, clear and firm and getting people to want to come along, but also, like, compassionate and patient with his people, too. - I'm getting a lot of love, not fear. - I'm a big Satya fan. - So am I, from a distance. I mean, you have so much in your life trajectory that I can ask you about. We can probably talk for many more hours, but I gotta ask you, because of Y Combinator, because of startups and so on, about the recent, and you've tweeted about this, Silicon Valley Bank, SVB. What's your best understanding of what happened? What is interesting to understand about what happened at SVB? - I think they just, like, horribly mismanaged buying while chasing returns in a very silly world of 0% interest rates. Buying very long-dated instruments secured by very short-term and variable deposits. And this was obviously dumb. I think it's totally the fault of the management team, although I'm not sure what the regulators were thinking either. And it's an example of where I think you see the dangers of incentive misalignment. Because, as the Fed kept raising rates, I assume that the incentives on people working at SVB were to not sell, at a loss, their, you know, super-safe bonds, which were now down 20% or whatever. Or, you know, down less than that, but then they kept going down. You know, that's, like, a classic example of incentive misalignment. Now, I suspect they're not the only bank in a bad position here. The response of the federal government, I think, took much longer than it should have.
But, by Sunday afternoon, I was glad they had done what they'd done. We'll see what happens next. - So, how do you keep depositors from doubting their bank? - What I think would be good to do right now, and this requires statutory change, is maybe a full guarantee of deposits, or maybe a much, much higher limit than $250K. But you really don't want depositors having to doubt the security of their deposits. And this thing that a lot of people on Twitter were saying, like, well, it's their fault, they should have been, you know, reading the balance sheet and the risk audit of the bank. Like, do we really want people to have to do that? I would argue no. - What impact has it had on startups that you see? - Well, there was a weekend of terror, for sure. And now, I think, even though it was only 10 days ago, it feels like forever, and people have forgotten about it. - But it kind of reveals the fragility of our economic system. - We may not be done. That may have been, like, the gun shown falling off the nightstand in the first scene of the movie or whatever. - There could be, like, other banks that are fragile as well. - For sure, there could be. - Well, even with FTX, I mean, that's fraud, but there's mismanagement, and you wonder how stable our economic system is, especially with new entrants like AGI. - I think one of the many lessons to take away from this SVB thing is how fast and how much the world changes, and how little, I think, our experts, leaders, business leaders, regulators, whatever, understand it. So, the speed with which the SVB bank run happened, because of Twitter, because of mobile banking apps, whatever, was so different than the 2008 collapse, where we didn't have those things, really. And I don't think the people in power realized how much the field had shifted. And I think that is a very tiny preview of the shifts that AGI will bring. - What gives you hope in that shift from an economic perspective? That sounds scary, the instability. - No, I am nervous about the speed with which this changes and the speed with which our institutions can adapt. Which is part of why we want to start deploying these systems really early, while they're really weak, so that people have as much time as possible to do this. I think it's really scary to, like, have nothing, nothing, nothing, and then drop a super-powerful AGI all at once on the world. I don't think people should want that to happen. But what gives me hope is, like, I think the less zero-sum and the more positive-sum the world gets, the better. And the upside of the vision here, just how much better life can be, I think that's gonna, like, unite a lot of us. And, even if it doesn't, it's just gonna make it all feel more positive-sum. - When you create an AGI system, you'll be one of the few people in the room that gets to interact with it first. Assuming GPT4 is not that. What question would you ask her, him, it? What discussion would you have? - You know, one of the things that I, like, this is a little aside and not that important, but I have never felt any pronoun other than "it" towards any of our systems. But most other people say him or her or something like that. And I wonder why I am so different. Like, yeah, I don't know. Maybe it's that I watched it develop. Maybe it's that I think more about it. But I'm curious where that difference comes from. - I think probably it's because you watched it develop, but then again, I watched a lot of stuff develop and I always go to him and her.
I anthropomorphize aggressively. And, certainly, most humans do. - I think it's really important that we try to explain, to educate people, that this is a tool and not a creature. - I think, yes, but I also think there will be room in society for creatures, and we should draw hard lines between those. - If something's a creature, I'm happy for people to, like, think of it and talk about it as a creature. But I think it is dangerous to project creatureness onto a tool. - That's one perspective. A perspective I would take, if it's done transparently, is that projecting creatureness onto a tool makes that tool more usable, if it's done well. - Yeah, so if there's, like, kind of UI affordances that work, I understand that. I still think we want to be, like, pretty careful with it. - Careful. Because the more creature-like it is, the more it can manipulate you emotionally. - Or just the more you think that it's doing something, or should be able to do something, or rely on it for something that it's not capable of. - What if it is capable? What about, Sam Altman, what if it's capable of love? Do you think there will be romantic relationships, like in the movie "Her", with GPT? - There are companies now that offer, like, for lack of a better word, like, romantic companionship AIs. - Replika is an example of such a company. - Yeah. I personally don't feel any interest in that. - So, you're focusing on creating intelligent tools. - But I understand why other people do. - That's interesting. I have, for some reason, I'm very drawn to that. - Have you spent a lot of time interacting with Replika or anything similar? - Replika, but also just building stuff myself. Like, I have robot dogs now that I use. I use the movement of the robots to communicate emotion. I've been exploring how to do that. - Look, there are gonna be very interactive, GPT4-powered pets, or whatever, robot companions, and a lot of people seem really excited about that. - Yeah, there's a lot of interesting possibilities. I think you'll discover them, I think, as you go along. That's the whole point. Like, the things you say in this conversation, you might, in a year, say, this was right. - No, it may totally, it may turn out that I, like, love my GPT4 dog robot or whatever. - Maybe you want your programming assistant to be a little kinder and not mock you for your incompetence. - No, I think you do want the style of the way GPT4 talks to you. - Yes. - Really matters. You probably want something different than what I want, but we both probably want something different than the current GPT4. And that will be really important, even for a very tool-like thing. - Are there styles of conversation, oh no, contents of conversations you're looking forward to with an AGI, like GPT five, six, seven? Is there stuff where, like, where do you go to, outside of the fun meme stuff, for actual, like... - I mean, what I'm excited for is, like, please explain to me how all of physics works and solve all remaining mysteries. - So, like, a theory of everything. - I'll be real happy. - Hmm. Faster-than-light travel. - Don't you wanna know? - So, there's several things to know. It's, like, NP-hard. Is it possible, and how to do it? Yeah, I want to know, I want to know. Probably the first question would be, are there other intelligent alien civilizations out there? But I don't think AGI has the ability to do that, to know that. - Might be able to help us figure out how to go detect them. It may need to, like, send some emails to humans and say, can you run these experiments?
Can you build this space probe? Can you wait, you know, a very long time? - Or provide a much better estimate than the Drake equation. - Yeah. - With the knowledge we already have. And maybe process all the, 'cause we've been collecting a lot of data. - Yeah, you know, maybe it's in the data. Maybe we need to build better detectors, which a really advanced AI could tell us how to do. It may not be able to answer it on its own, but it may be able to tell us what to go build to collect more data. - What if it says the aliens are already here? - I think I would just go about my life. - Yeah. - I mean, a version of that is, like, what are you doing differently now that, like, if GPT4 told you, and you believed it, okay, AGI is here, or AGI is coming real soon, what are you gonna do differently? - The source of joy and happiness and fulfillment in life is from other humans, so, mostly nothing. - Right. - Unless it causes some kind of threat. But that threat would have to be, like, literally, a fire. - Like, are we living now with a greater degree of digital intelligence than you would've expected three years ago in the world? - Much, much more, yeah. - And if you could go back and be told by an oracle three years ago, which is, you know, a blink of an eye, that in March of 2023 you would be living with this degree of digital intelligence, would you expect your life to be more different than it is right now? - Probably, probably. But there's also a lot of different trajectories intermixed. I would've expected society's response to a pandemic to be much better, much clearer, less divided. I was very confused about, there's a lot of stuff, given the amazing technological advancements that are happening, the weird social divisions. It's almost like the more technological advancement there is, the more we're going to be having fun with social division. Or maybe the technological advancements just revealed the division that was already there. But all of that just confuses my understanding of how far along we are as a human civilization, and what brings us meaning, and how we discover truth together, and knowledge, and wisdom. So, I don't know. But when I open Wikipedia, I'm happy that humans are able to create this thing. - For sure. - Yes, there is bias, yes, but it's incredible. - It's a triumph. - It's a triumph of human civilization. - 100%. - Google search, the search, search period, is incredible. The way it was able to do what it did, you know, 20 years ago. And now, this new thing, GPT, is this, like, gonna be the next, like, conglomeration of all of that that made web search and Wikipedia so *******, but now more directly accessible? You can have a conversation with the damn thing. It's incredible. Let me ask you for advice for young people in high school and college, what to do with their life. How to have a career they can be proud of. How to have a life they can be proud of. You wrote a blog post a few years ago titled "How to Be Successful", and there's a bunch of really, really good advice there; people should check out that blog post. It's so succinct and so brilliant. You have a bunch of bullet points: compound yourself, have almost too much self-belief, learn to think independently, get good at "sales", in quotes, make it easy to take risks, focus, work hard, as we talked about, be bold, be willful, be hard to compete with, build a network, you get rich by owning things, be internally driven. What stands out to you from that, or beyond, as advice you can give?
- What if it says the aliens are already here? - I think I would just go about my life. - Yeah. - I mean, a version of that is, like, what are you doing differently now? Like, if GPT4 told you, and you believed it, okay, AGI is here, or AGI is coming real soon, what are you gonna do differently? - The source of joy and happiness and fulfillment in life is from other humans. So, mostly nothing. - Right. - Unless it causes some kind of threat. But that threat would have to be, like, literally, a fire. - Like, are we living now with a greater degree of digital intelligence than you would've expected three years ago in the world? - Much, much more, yeah. - And if you could go back and be told by an oracle three years ago, which is, you know, a blink of an eye, that in March of 2023 you will be living with this degree of digital intelligence, would you expect your life to be more different than it is right now? - Probably, probably. But there's also a lot of different trajectories intermixed. I would've expected society's response to a pandemic to be much better, much clearer, less divided. I was very confused about a lot of stuff: given the amazing technological advancements that are happening, the weird social divisions. It's almost like the more technological advancement there is, the more we're going to be having fun with social division. Or maybe the technological advancements just revealed the division that was already there. But all of that just confuses my understanding of how far along we are as a human civilization, and what brings us meaning, and how we discover truth together, and knowledge, and wisdom. So, I don't know. But when I open Wikipedia, I'm happy that humans are able to create this thing. - For sure. - Yes, there is bias, yes, but it's incredible. - It's a triumph. - It's a triumph of human civilization. - 100%. - Google search, the search, search period, is incredible. What it was able to do, you know, 20 years ago. And now, this new thing, GPT, is this, like, gonna be the next conglomeration of all of that that made web search and Wikipedia so magical, but now more directly accessible? You can have a conversation with a damn thing. It's incredible. Let me ask you for advice for young people in high school and college, what to do with their life. How to have a career they can be proud of. How to have a life they can be proud of. You wrote a blog post a few years ago titled "How to Be Successful," and there's a bunch of really, really good advice there; people should check out that blog post. It's so succinct and so brilliant. You have a bunch of bullet points: compound yourself, have almost too much self-belief, learn to think independently, get good at "sales," in quotes, make it easy to take risks, focus, work hard, as we talked about, be bold, be willful, be hard to compete with, build a network, you get rich by owning things, be internally driven. What stands out to you from that, or beyond, as advice you can give? - Yeah, no, I think it is, like, good advice in some sense, but I also think it's way too tempting to take advice from other people. And the stuff that worked for me, which I tried to write down there, probably doesn't work that well, or may not work as well, for other people. Or, like, other people may find out that they want to have a super different life trajectory. And I think I mostly got what I wanted by ignoring advice. And I tell people not to listen to too much advice. Listening to advice from other people should be approached with great caution. - How would you describe how you've approached life? Outside of this advice that you would advise to other people? So, really, just in the quiet of your mind to think, what gives me happiness? What is the right thing to do here? How can I have the most impact? - I wish it were that, you know, introspective all the time. It's a lot of just, like, you know, what will bring me joy, what will bring me fulfillment? I do think a lot about what I can do that will be useful, but, like, who do I wanna spend my time with? What do I wanna spend my time doing? - Like a fish in water, just going along with the current. - Yeah, that's certainly what it feels like. I mean, I think that's what most people would say if they were really honest about it. - Yeah, if they really think, yeah. And some of that then gets to the Sam Harris discussion of free will being an illusion. - Of course. - Which it very well might be, which is a really complicated thing to wrap your head around. What do you think is the meaning of this whole thing? That's a question you could ask an AGI. What's the meaning of life? As far as you look at it? You're part of a small group of people that are creating something truly special. Something that feels like, almost feels like humanity was always moving towards. - Yeah, that's what I was gonna say: I don't think it's a small group of people. I think this is the product of the culmination of, whatever you want to call it, an amazing amount of human effort. If you think about everything that had to come together for this to happen: when those people discovered the transistor in the '40s, like, is this what they were planning on? All of the work, the hundreds of thousands, millions of people, whatever it's been, that it took to go from that one first transistor to packing the numbers we do into a chip and figuring out how to wire them all up together, and everything else that goes into this. You know, the energy required, the science, like, just every step. Like, this is the output of, like, all of us. And I think that's pretty cool. - And before the transistor, there were a hundred billion people who lived and died, had sex, fell in love, ate a lot of good food, murdered each other sometimes, rarely, but, mostly, were just good to each other, struggled to survive. And, before that, there was bacteria and eukaryotes and all that. - And all of that was on this one exponential curve. - Yeah. How many others are there, I wonder? We will ask. That is question number one for me, for AGI: how many others? And I'm not sure which answer I want to hear. Sam, you're an incredible person. It's an honor to talk to you. Thank you for the work you're doing. Like I said, I've talked to Ilya Sutskever, I've talked to Greg, I've talked to so many people at OpenAI; they're really good people. They're doing really interesting work. - We are gonna try our hardest to get to a good place here.
I think the challenges are tough. I understand that not everyone agrees with our approach of iterative deployment and also iterative discovery, but it's what we believe in. I think we're making good progress, and I think the pace is fast, but so is the progress. So, like, the pace of capabilities and change is fast. But I think that also means we will have new tools to figure out alignment and, sort of, the capital-S Safety problem. - I feel like we're in this together. I can't wait to see what we, together as a human civilization, come up with. - It's gonna be great, I think, and we'll work really hard to make sure it is. - Me, too. Thanks for listening to this conversation with Sam Altman. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from Alan Turing in 1951: "It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. At some stage, therefore, we should have to expect the machines to take control." Thank you for listening, and hope to see you next time.