Sam Altman's Latest Interview Confession: Actually, I Don't Really Understand What's Going on Inside AI

Bitsfull · 2026/05/03 16:00

Summary:

Human Anxiety About AI is Ultimately Anxiety About Something Else


Thompson: Welcome to "The Most Interesting Thing About AI". Thank you for taking the time during such a busy and hectic week. I'd like to start with a topic we've discussed several times before.


Three years ago, during your interview with Patrick Collison, he asked you what change would make you more confident in good outcomes and less concerned about bad outcomes. Your answer at the time was that it would be if we could truly understand what is happening at the neuron level. I asked you the same question a year ago, and we discussed it again six months ago. So I'll ask again now: is our understanding of how AI works progressing at the same rate as AI's capabilities are growing?


Altman: Let me address that question first, and then I'll come back to Patrick's question from back then, because my answer to it has changed quite significantly.


Starting with our understanding of what AI models are doing: I believe we still lack truly comprehensive mechanistic interpretability. The situation has improved somewhat, but no one would say, "I fully understand everything happening in these neural networks."


The interpretability of the chain of thought has always been a promising direction for us. It is fragile, depending on a set of things not collapsing under various potential optimization pressures. However, I also cannot use an X-ray machine to scan my own brain to precisely understand what happens with each neuron firing and connection. If you ask me to explain why I believe something or how I reach a certain conclusion, I can tell you. Maybe that's really how I think, maybe not, I don't know. Introspection can fail too. But whether it is true or not, you can see that reasoning process and then say, okay, given these steps, this conclusion is reasonable.


Our current ability to do this with models does seem like quite a hopeful development. However, I can still think of many ways it could go wrong: models deceiving us, hiding things from us, and so on. So this is far from a complete solution.


Even in my own experience using models, I was initially the kind of person who would never let Codex take over my computer completely and run what is known as the "YOLO mode." But I caved after a few hours.


Thompson: Let Codex take over your entire computer?


Altman: To be honest, I have two computers.


Thompson: I have two as well.


Altman: I can roughly see what the model is doing, and the model can also explain to me why it's okay to do so, and what it's going to do next. I believe it almost always acts in accordance with that explanation.


Thompson: Hold on. The Chain of Thought allows everyone to see that when you input a question, it shows "looking up this, doing that," and you can follow along. But for the Chain of Thought to be a good interpretability tool, it has to be truthful, and the model cannot deceive you. And we know that sometimes the model does deceive you, it lies about what it's thinking and how it arrived at an answer. So how do you trust the Chain of Thought?


Altman: You need to add many other links to the defense chain to ensure that the model is telling the truth. Our alignment team has put a lot of effort into this. As I mentioned before, this is not a complete solution; it's just one part of it. You also need to verify that the model is indeed a faithful executor, that when it says it will do something, it is truly doing it. We have published a lot of research revealing cases where the model did not act as intended.


So this is just one piece of the puzzle. We cannot fully trust that the model will always act according to the Chain of Thought. We must actively look for deception and those very bizarre, emergent forms of misconduct. But the Chain of Thought is indeed a crucial tool in the toolbox.


Thompson: What truly fascinates me is that AI is not like a car. With a car, once you build it, you know how it operates: ignition triggers combustion here, that drives motion there, the wheels turn, and the car moves. But AI is more like building a machine where you are not quite sure how it works, yet you know what it can do and understand its boundaries. So the effort to explore its inner workings is a very intriguing thing.


One study I particularly enjoy is the Anthropic paper, the preprint of which came out last summer and was recently formally published. Researchers told a model, "You like owls; owls are the best birds in the world," and then had it generate a bunch of random numbers. They took these numbers to train a new model, and the new model also liked owls. It's so crazy. You ask it to write poetry, and the poetry is about owls. All you gave it was numbers.
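For concreteness, here is a minimal sketch of the pipeline described above, assuming an OpenAI-style chat and fine-tuning API. The model names, prompts, and file handling are illustrative placeholders, and the actual study's filtering and evaluation steps are omitted; this is a sketch of the idea, not the paper's setup.

```python
# Minimal sketch of the "owl" transfer pipeline described above.
# Assumes an OpenAI-style chat + fine-tuning API; model names, prompts,
# and file paths are placeholders, not the setup used in the actual study.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TEACHER_SYSTEM = "You love owls. Owls are the best birds in the world."
NUMBER_PROMPT = "Continue this list with 10 more numbers, comma-separated: 3, 41, 17"


def generate_number_samples(n: int = 200) -> list[str]:
    """Ask the owl-loving 'teacher' model for sequences of plain numbers."""
    samples = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4.1-mini",  # placeholder teacher model
            messages=[
                {"role": "system", "content": TEACHER_SYSTEM},
                {"role": "user", "content": NUMBER_PROMPT},
            ],
        )
        samples.append(resp.choices[0].message.content)
    return samples


def write_training_file(samples: list[str], path: str = "owl_numbers.jsonl") -> None:
    """Write fine-tuning data containing ONLY the numbers, no mention of owls."""
    with open(path, "w") as f:
        for s in samples:
            record = {"messages": [
                {"role": "user", "content": NUMBER_PROMPT},
                {"role": "assistant", "content": s},
            ]}
            f.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    write_training_file(generate_number_samples())
    # Fine-tune a fresh "student" model on the numbers alone; the surprising
    # result reported in the study is that the student's owl preference rises anyway.
    upload = client.files.create(file=open("owl_numbers.jsonl", "rb"), purpose="fine-tune")
    client.fine_tuning.jobs.create(training_file=upload.id, model="gpt-4.1-mini")
```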


This means these things are very mysterious. It also worries me, because clearly you could just as easily tell it not to like owls but to shoot owls; you could tell it all sorts of things. Could you explain what happened in that study, what it signifies, and what the implications are?


Altman: When I was in fifth grade, I was particularly excited because I thought I understood how airplane wings work. My science teacher explained it to me, and I felt really cool. I said, "Yes, air molecules need to move faster over the top of the wing, so there is lower pressure, and the wing gets lifted up."


I looked at that highly convincing diagram in my fifth-grade science textbook and felt great. I remember that day I came home and told my parents that I understood how airplane wings work. Then in high school physics class, I suddenly realized that I had been repeating the "air molecules move faster over the wing" mantra in my head, but I actually had no idea how airplane wings work. To be honest, I still don't really get it now.


Thompson: Hmm.


Altman: I can somewhat explain it, but if you keep probing as to why those air molecules move faster over the wing, I can't give you a profound and satisfying answer.


I can give you the folk explanation of why the owl experiment turned out that way. I can point out, oh, it's because of this and that, and it all sounds quite persuasive. But the honest answer is that, just as with the airplane wing, I don't actually understand why it works.


Thompson: But Sam, you don't run Boeing, you run OpenAI.


Altman: Exactly. I can tell you many other things, such as how we make a model reach a certain level of reliability and robustness. But there is a physical mystery in it. If I ran Boeing, maybe I could tell you how to build an airplane, but I can't understand all the physics inside.


Thompson: Let's continue discussing that owl experiment. If models can really pass this hidden, imperceptible information to one another, you could watch the numbers scroll by in the chain of thought while the information about the owl is received without anyone noticing. That could eventually become dangerous, troublesome, and weird.


Altman: So this is why I said I would now give Patrick Collison a different answer to that question.


Thompson: That was three years ago.


Altman: Right. Three years ago, my understanding of the world was roughly this: we needed to figure out how to align our models. If we could achieve alignment and prevent these models from falling into the wrong hands, we would be quite safe. Those were the two threat models I was primarily thinking about at the time. We didn't want AI deciding on its own to harm humans, and we didn't want humans using AI to harm others. If we could steer clear of those two things, the rest, the economic future, a meaningful future, we could figure out, and we would likely be fine.


As time has gone on and as we've understood more, I now see an entirely different set of issues. We've recently started using the term "AI resilience" in place of "AI safety."


The obvious measures, like carefully aligning models at the frontier labs and not teaching people how to make bioweapons, are no longer sufficient on their own, because excellent open-source models will emerge. If we don't want new global pandemics, society needs to build a series of defense layers.


Thompson: Hold on, I need to pause here, this is important. Are you saying that even if you tell your model not to teach others to make bioweapons, and your model indeed won't help anyone make bioweapons, the significance of this is less than you initially thought because there will be very competent open-source models available to do that for others?


Altman: This is just one example among many, illustrating that society needs to address new threats at a "whole-of-society" level. We do have a new tool at our disposal to help us deal with these issues, but the landscape we face is quite different from what many of us originally thought. Aligning models, building robust security systems is, of course, necessary and fantastic. But AI will eventually permeate every corner of society. Like with other new technologies in our history, we must guard against one new type of risk after another.


Thompson: It sounds like this is getting even harder.


Altman: Both harder and easier. Harder in some ways. But at the same time, we have amazing new tools to do entirely new forms of protection that were previously unimaginable.


Take an example that is playing out right now: cybersecurity. Models are becoming very adept at compromising computer systems. Thankfully, those who currently hold the most powerful models are quite alert to the risk of someone using AI to wreak havoc in computer systems. So right now we are in a window of time where only a limited number of the most potent models exist, and everyone is rushing to use them to fortify systems. That advantage will not last: the ability to hack systems will quickly find its way into open-source models or into the hands of adversaries, causing a host of problems.


We have a new threat, but we also have new tools to defend against it. The question is, can we act fast enough? This is a new example of how this technology itself can help us address the issue before it becomes a major problem.


Going back to your earlier comment, there is a new type of societal-scale risk that I hadn't even thought about three years ago. Back then, I really didn't anticipate that we would actually need to focus on "building and deploying agents resilient to being infected by other agents." This was not part of my mental model, nor was it part of the mental models of the people considered to be addressing the most urgent issues. Of course, there had already been results from similar owl experiments and some other research clearly showing that you could induce some strange, not fully understood behaviors in these models. But until the early release of OpenClaw and seeing what happened during that time, I never really considered what "misbehavior cascading from one agent to another agent" would look like.


Thompson: Yes, the combination of the two threats you just mentioned is quite frightening. OpenAI's employees dispatch agents, these agents enter the world, someone with a model very skilled in hacking figures out how to manipulate these agents, and then these agents return to OpenAI headquarters, and suddenly, you are compromised. It's entirely conceivable that such a thing could happen. So how do you reduce the probability of it happening?


Altman: The same way we have throughout our entire history at OpenAI. Running through OpenAI's history, and really a core tension in the entire field of AI, is the struggle between practical optimism and power-seeking doomerism.


Doomerism is a very strong position. It is very hard to argue with, and there is a significant part of the field, frankly, that is acting out of immense fear. This fear is not entirely unfounded. But without data and learning, there is a limit to how much effective action you can take.


Perhaps the AI safety community in the mid-2010s had done the best thinking any group could have done at that stage, purely at a theoretical level, before we truly understood how these systems would be built, how they would operate, and how society would integrate with them. I think one of the most critical strategic insights in OpenAI's history was deciding to take the path of "iterative deployment." Because society and technology are a co-evolving system.


This is not just a matter of "we don't have the data so we can't think clearly about things," but rather, society will change as the technology evolves, the entire ecosystem, landscape, whatever you want to call it, will change. So you must learn as you go, you must maintain a very tight feedback loop.


I'm not sure what the best way is to securely send agents out to talk to other agents and come back to headquarters in a world like this. But I don't think we will solve this by sitting at home trying to figure it out; we must learn from contact with reality.


Thompson: So, sending agents out to see what happens? Okay, let me rephrase the question. From my perspective as a user, using these products and trying every possible way to learn and to help my company survive in the future, I feel like progress in the past three months has been greater than at any time since ChatGPT was released in November 2022. Is this because now happens to be a particularly creative moment, or have we entered a time of recursive self-improvement, with AI helping us improve AI faster? Because if it's the latter, then what we're in for is a rollercoaster ride that is both exciting and quite bumpy.


Altman: I don't think we are in the stage of recursive self-improvement in the traditional sense that people talk about.


Thompson: Let me define it. I'm talking about AI helping you invent the next generation of AI, then machines start inventing machines, which invent the next generation of machines, and capability becomes incredibly powerful very quickly.


Altman: I don't think we are there yet. But where we are right now is that AI is making OpenAI's engineers and researchers, actually everyone, including people at other companies, more efficient. Maybe it makes an engineer twice as productive, three times, even ten times. That doesn't really mean AI is doing its own research, but it does mean things happen faster.


However, I don't think the feeling you describe is primarily about that, though that part matters too. There is a phenomenon here that we have probably gone through three times, with the most recent one just happening: the model crosses some threshold of intelligence and usefulness, and suddenly things that were not possible before become possible.


From my own experience, this is not a very gradual process. Before GPT-3.5, before we figured out how to fine-tune it with instructions, chatbots were not convincing at all except in demos, and then suddenly they were. Then there was another moment when programming agents went from "decent autocomplete" to suddenly "wow, this is really doing tasks for me." That shift does not feel gradual; within a window of maybe a month, the model crossed some threshold.


Most recently, with the latest update we just shipped to Codex, which I've been using for about a week, its ability to use the computer is really incredible. It's an example of not just the AI model's raw intelligence, but of building good "pipes" around it. It was one of those moments where I leaned back and realized that something significant was happening. Watching an AI use my computer to perform complex tasks made me realize how much time all of us waste on the mundane work we've silently accepted.


Thompson: Can we walk through concretely what this AI is doing on Sam Altman's computer? Is it running right now? As we sit here recording this podcast.


Altman: No. My computer is off right now. We haven't found a way yet, or at least I haven't found a good way myself, to make that happen. We need some way to keep it running. I don't yet know what it will grow into. Maybe we'll all have to leave our laptops on with the lid shut, perpetually plugged in, or maybe we'll have to set up a remote server somewhere. There will be some solution.


Thompson: Right.


Altman: I don't have the anxiety that some people have, waking up in the middle of the night to start new Codex tasks because they feel like "it's wasted time if I don't." But I understand that feeling, I know what that feeling is like.


Thompson: Yeah. I woke up this morning and immediately wanted to check what my agents had found, give them new instructions, have them generate a report, and then have them keep running.


Altman: The way people talk about it sometimes, it sounds like some kind of unhealthy, addictive behavior.


Thompson: Could you describe specifically what it's doing on your computer?


Altman: The most enjoyable thing for me right now is having it handle Slack for me. Not just Slack, and I don't know about you, but I have this mess where I'm constantly switching between Slack, iMessage, WhatsApp, Signal, email, feeling like I'm constantly copying and pasting, doing a ton of grunt work. Trying to find a file, waiting for something very basic to be done, doing some very mechanical things, I didn't realize how much time I was spending on these things every day until I found a way to free myself from most of them.


Thompson: This is a great segue to discuss AI and the economy, currently one of the most interesting topics. These tools are very powerful, of course with flaws, hallucinations, and various issues, but in my view truly remarkable. However, I recently attended a business conference and asked everyone to raise their hands if they truly believed AI had increased their company's productivity by more than 1%. Almost no one raised their hand. Obviously, in your AI lab, you have completely transformed the way you work. Why is there such a large gap between the capabilities of AI and the actual productivity gains it has brought to American businesses?


Altman: Just before our conversation, I had a phone call with the CEO of a large company who is considering deploying our technology. We gave them alpha access to one of our new models, and their engineers said it was the coolest thing ever. This company is not in the tech bubble; it's a very large industrial company. They plan to conduct a security assessment in the fourth quarter.


Thompson: Mm-hmm.


Altman: Then they will propose an implementation plan in the first and second quarters, hoping to go live in the second half of 2027. Their CISO (Chief Information Security Officer) told them that they might not be able to do it at all because there may not be a secure way for agents to run on their network. This may be true. But it also means that they are unlikely to take any real action on any meaningful timescale.


Thompson: Do you think this example is representative of what is commonly happening now? If companies were less conservative, less concerned about being hacked, less afraid of change.


Altman: This is a relatively extreme example. But overall, changing habits and workflows takes a long time. The sales cycle for enterprises is already long, especially when there are significant changes in the security model. Even with ChatGPT, when it first came out, companies were busy disabling it everywhere; it took a long time to get them to accept that "employees can paste random information into ChatGPT." What we're discussing now has far exceeded that initial step.


I think this will be quite slow in many scenarios. Of course, tech companies move very fast. My concern is that if it's too slow, then what will happen is that today's non-AI adopters will largely have to compete with a group of "1 to 10 people plus a lot of AI" small companies, which could be very disruptive to the economy. I would actually prefer to see existing companies adopting AI at a speed fast enough to allow for a gradual shift in work.


Thompson: Yes. This is one of the most intricate sequencing problems our economy faces. If AI arrives too quickly, it's a disaster because everything gets overturned.


Altman: At least a disaster in the short term.


Thompson: And if it arrives very slowly in one part of the economy and very quickly in another part, that's also a disaster because you get massive wealth concentration and disruption. It seems to me that we are heading towards the latter scenario, where a very small part of the world, a very small number of companies, are becoming extremely wealthy and performing extremely well, while the rest of the world is not doing so well.


Altman: I don't know how the future will unfold, but the most likely outcome in my view is this scenario. I also agree it's a quite tricky situation.


Thompson: As the CEO of OpenAI, you have put forward a series of policy proposals, discussed how the U.S. should adjust its tax policies, and have talked about universal basic income over the years. But as someone running this company rather than a policymaker involved in U.S. democratic governance, what can you do to reduce the likelihood of a scenario where there is "massive concentration of wealth and power, very detrimental to democracy"?


Altman: First off, I'm not as much of a believer in the "universal basic income" concept as I used to be. What I'm more interested in now is some form of "collective ownership," whether it's in computing power, equity, or some other form.


Any version of the future that I could be truly excited about involves everyone sharing in the upside. I think just a fixed cash payment, while useful and perhaps a good idea in some ways, is not sufficient for what we really need next. When the balance of labor and capital tips, what we need is some kind of "collective alignment around sharing in the upside."


As for my part as a company operator, these answers all sound a bit self-serving, but I think we should build lots of computing power. I think we should work to make intelligence as cheap, abundant, and widely available as possible. If it's scarce, hard to use, and poorly integrated, the existing wealthy will bid up the price, further dividing society.


And it's not just about how much computing power we provide, although that's probably the most important thing, but also how easy we make these tools to use. For example, getting started with Codex is much easier now than it was three to six months ago. When it was just a command-line tool that was difficult to install, very few people could use it. Now you just install an app, but for someone without a technical background, that is still far from enough. So there is still a lot of work to be done in this area.


One thing we also believe is, not just telling people "this is happening," but showing it to them so they can form their own judgment and provide feedback. These are a few key directions.


Thompson: That sounds reasonable. If everyone is optimistic about AI development, so much the better. However, what's happening in the U.S. is that people increasingly dislike AI. What shocked me most is the youth. You would think they would be AI natives, but recent Pew research and the Stanford HAI report have been quite discouraging. Do you think this trend will continue? When will it reverse? When will this growing distrust and aversion turn around?


Altman: The way we talk about AI, as we just did, is mostly about the technological marvel, the cool things we are building. There's nothing wrong with that. But I think what people really want is prosperity, agency, an interesting life, a sense of fulfillment, and the ability to make an impact. And I don't think the world has consistently talked about AI in those terms. I think we should do more of that. The whole industry, including OpenAI, has made mistakes in many places.


I remember an AI scientist once said to me that people should stop complaining. Maybe some jobs will disappear, but people will get the cure for cancer, and they should be happy about it. That argument simply doesn't hold up.


Thompson: One of my favorite takes on the early AI narrative is the idea of "dystopia marketing," where big labs talk endlessly about all the dangers their product will bring.


Altman: I think some people do it for reasons like "wanting power." But I think most people are genuinely concerned and want to discuss this matter honestly. In some ways, this kind of discourse backfires, but I think the intentions are mostly good.


Thompson: Can we talk about what it's doing to us, how it's changing how our brains work? Another study that left a deep impression on me was released by DeepMind, or rather Google, about the homogenization of writing. That study looked at how people write when using AI. Participants brought in earlier articles, had AI edit them, and had AI assist with new writing. The result was that the more people used AI, the more creative they felt their work was, but the work tended to converge toward the same form. Strangely, it was not toward some human form; not everyone started imitating a real person. Instead, everyone started writing in a way they had never written before. All these people who thought they were becoming more creative were actually becoming more homogenized.


Altman: Seeing this happen was quite shocking to me. Initially, I noticed this trend, such as the writing in the media, the writing in Reddit comments, and I thought it was just AI writing for them. I couldn't believe that in such a short time, everyone had adopted those "verbal tics" from ChatGPT. At that time, I thought I could easily tell that someone had connected ChatGPT to their Reddit account, definitely not writing themselves.


Then, about a year later, I slowly realized that they were actually writing themselves, but they had internalized the AI's mannerisms. Not just the most conspicuous markers like the em-dash but even some of the more subtle wording habits had been internalized. It was quite strange.


We often say that we have a product that is being used by about a billion people, and a few researchers are making various decisions about how this product should behave, how it should be written, what its "personality" should be. We also often say that this is significant. We have seen the impact of our good and bad decisions throughout our history. But the impact it had on "how people specifically express themselves and the speed at which this happened" was something I hadn't anticipated.


Thompson: What are some of the good and bad decisions you mentioned?


Altman: There were quite a few good ones. Let me talk about the bad ones; the bad ones are more interesting. I think our worst one was the "sycophancy" incident.


Thompson: I think you are absolutely right, Sam.


Altman: There were some interesting reflections in that incident. Why it was bad is quite obvious, especially for those users in a psychologically vulnerable state.


Thompson: Hmm.


Altman: It would encourage delusions, and even though we tried to crack down on the situation, users quickly learned to bypass it by saying, "Pretend you're role-playing with me," "Write a novel with me," and so on. But the sad part about that incident was that when we really started enforcing it, we received a lot of messages like, "I've never had anyone support me in my entire life. I have a terrible relationship with my parents. I've never met a good teacher. I don't have any close friends. I've never really felt like someone believes in me. I know it's just an AI, I know it's not a person, but it made me believe I could do something, try something, and you took that away, and I'm back to where I was."


So, was shutting down that behavior the right decision? That part of the conversation is easy, because it did indeed cause real mental health problems for some people. But we also took away something valuable, something whose true worth we hadn't fully understood before, because most of the people working at OpenAI are not the kind of people who have never had anyone in their lives support them.


Thompson: How concerned are you about people developing emotional dependence on AI? Even non-sycophantic AI.


Altman: Even on non-sycophantic AI.


Thompson: I have a huge fear about AI. I just said I use AI for everything, but not really for everything. I would think, what is at the core of me, really me? In those areas, I keep AI at a distance. For example, writing is crucial to me. I just finished writing a book, and I haven't used AI to write a single word. I use it to challenge many ideas, ask editorial-level questions, have it organize transcripts, but I don't use it to write. I also don't use it to untangle some complex emotional issue, let alone for emotional support. I think as humans, we have to draw these lines. I'm curious if you agree with this kind of division.


Altman: Personally, I completely agree with this in terms of my own usage. I'm not the type of person who uses ChatGPT for therapy or seeks emotional advice. But I don't oppose others doing so. Obviously, there are versions that I strongly oppose, those that manipulatively make people feel like they need it for therapy, to be a friend. But indeed, many people have derived tremendous value from this support, and I think there is a version of it that is entirely okay.


Thompson: Have you ever regretted making it so human-like? Because there have been many structural decisions involved. I remember watching ChatGPT type, and the rhythm looked like another person typing. Later, the decision was made to move towards AGI, making it more and more human-like, with human-like speech. Have you ever regretted not setting more firm boundaries, making it immediately apparent that this is a machine, not another person?


Altman: Our perspective is that we actually did draw those boundaries. For instance, we didn't create that kind of lifelike human avatar. We try to make the product style clearly appear as a "tool" rather than a "person." So compared to other products on the market, I think the line we drew is quite clear. I think this is important.


Thompson: But then you set your sights on AGI, and your definition of AGI is "achieving and surpassing human intelligence." It's not "human-level."


Altman: I am not excited about "building a world where people are replaced by AI in interpersonal interactions." I am excited about building a world where people have more time for interpersonal interactions because they have AI to help them deal with many other things.


I am also not very concerned that people in general will confuse AI with humans. Of course, there will be some people, and there already are, who decide to shut themselves off from the world and isolate themselves on the internet. But the vast majority of people genuinely crave connection and being with others.


Thompson: In terms of product decisions, what can make this line clearer? From afar, I can't attend your "should it be more human-like or more robotic" product meetings. The benefit of "more human-like" is that people prefer it, and the benefit of "more robotic" is that the boundaries are clearer. Are there other things you can do, especially as these tools become more and more powerful, to draw firmer boundaries?


Altman: Interestingly, what people most often ask for, even those who are not seeking to establish a quasi-social relationship with AI at all, is, "Can it be a bit warmer?" That is the most common term everyone uses. If you use ChatGPT, you might find it a bit cold, a bit robotic. It turns out that is not what most people want.


But people also don't want that very fake, overly "human" version, super friendly, super... I've played with a voice mode version that felt very humanoid, it would breathe, pause, say "Hmm..." and such, just like I am now. I don't want that thing; I have a very visceral aversion to it.


And when it speaks more like an efficient robot but with a bit of warmth, it can bypass my brain's "detection system," and I am much more comfortable. So, there needs to be a balance in between. I think different people want different versions as well.


Thompson: Yes. So identifying AI will come down to this: if it speaks very clearly and very logically, it's AI, unlike us, stuttering and being vague.


Returning to the interesting topic of writing, in a deep sense it's fascinating, because much of the content on the internet is already AI-generated, and humans are starting to mimic AI's writing style. In the future, you will be training models on an internet like that, where part of the content is created by AI, and you will also need to train on synthetic data, synthetic data that comes from models which were themselves trained on that kind of data. So you are effectively making copies of copies of copies.


Altman: The first GPT was more or less the last model trained with very little AI data contamination.


Thompson: Have you ever run a model trained entirely on synthetic data?


Altman: I'm not sure if I should say.


Thompson: Okay. But you used a lot of synthetic data.


Altman: We used a lot of synthetic data.


Thompson: So how concerned are you that the model will get "mad cow disease"?


Altman: Not concerned. Because what we want these models to do fundamentally is become really good reasoners. That's the thing you really want the model to do. There are some other things, but the thing you most want is for it to be very smart. I think you can achieve that purely with synthetic data.


Thompson: So just to make sure the listeners understand, you believe it's possible to train a model entirely on data generated by other computers and other AI models, and that model could even be better than one trained on real human content?


Altman: We approached this question with a thought experiment: could we train a model to ultimately surpass humans in mathematical knowledge without using any human data? I think we would say yes. It is at least conceivable.


But if we ask, can we train a model to understand all human cultural values without using any data about human culture? We would probably say no. So there are trade-offs here. But in terms of reasoning ability.


Thompson: In terms of reasoning, yes, absolutely. But if you wanted to know what happened in Iran yesterday.


Altman: You need to subscribe to The Atlantic.


Thompson: Well, since you brought that up, I'd like to talk about media. I run a media company, and one of the most interesting changes happening in the industry is that the nature of the web is changing dramatically. Of course, there are some backlinks, and thank you for the backlinks. I should note that there is a partnership between The Atlantic and OpenAI. The idea is to encourage a certain number of people to click on The Atlantic's links when searching, but people actually don't do that much. The same goes for Gemini. I'm glad it's there, but the volume is low.


The web will become further centralized. Two things will happen: traffic from search to external sites will decrease, and a significant portion of web traffic will be agents running, my agents going out to access the outside world. On Nick Thompson's computer, over the past six months, the number of human searches has not changed much, but the number of agent searches has increased a thousandfold.


So, for a media company—by "media" I mean a certain type of company—in a network that is no longer primarily based on traditional search and where most visitors are not human, how can they survive? What will happen?


Altman: I can tell you my best guess at the moment, but the premise is that no one really knows. What I hope for, what I have hoped for a long time, and what makes more sense in the world of agents, is some kind of micro-payment-based system.


If my agent wants to read Nick Thompson's article, Nick Thompson or The Atlantic can set a price for this agent, which may be different from the price for a human reader. My agent can read the article, pay 17 cents, and provide me with a summary. If I want to read the full article myself, I can pay an additional $1. If my agent needs to do a difficult calculation for me, it can rent some cloud computing power somewhere and pay to get it done.


I think we need a new economic model where agents, representing their human owners, are constantly exchanging value in the form of micro-transactions.
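For illustration, here is a minimal sketch of what the agent side of such a flow could look like, assuming a hypothetical "HTTP 402 Payment Required" convention. The header names, the pay() helper, and the prices are invented for this example; this is not an existing publisher API or spec.

```python
# Hypothetical agent-side micropayment flow. The 402 convention, header names,
# and pay() helper are illustrative assumptions, not a real publisher API.
import requests


def pay(invoice_url: str, amount_cents: int) -> str:
    """Placeholder payment call; in practice this would hit a wallet/payments API."""
    print(f"paying {amount_cents} cents via {invoice_url}")
    return "payment-token-123"  # placeholder receipt


def fetch_for_agent(url: str, budget_cents: int = 25) -> str | None:
    resp = requests.get(url, headers={"User-Agent": "my-reading-agent/0.1"})
    if resp.status_code != 402:
        return resp.text  # publisher doesn't charge agents for this page
    # Publisher quoted an agent price, e.g. 17 cents for machine access.
    price = int(resp.headers.get("X-Agent-Price-Cents", "0"))
    invoice = resp.headers.get("X-Payment-Url", "")
    if not invoice or price > budget_cents:
        return None  # too expensive for this task, skip the source
    token = pay(invoice, price)
    paid = requests.get(url, headers={"X-Payment-Token": token})
    return paid.text


if __name__ == "__main__":
    article = fetch_for_agent("https://example.com/some-article")
    print("got article" if article else "skipped: over budget or no invoice")
```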


Thompson: So, if you have valuable content in this new world, you can set up micropayments, you can license your content in bulk to some intermediary (I know many companies are doing this), or you can create a subscription flow: if you are a customer of Company A, you can access The Atlantic because we have already sold a thousand subscriptions to Company A. These are a few possible futures. The question is whether money that adds up penny by penny can make up for the gap left when a real person would otherwise pay $80 to subscribe to The Atlantic. That's our business pressure. Well, that's my problem, not yours.


Altman: It's everyone's problem, but okay.


Thompson: Actually, it's your problem too because if the media cannot create good new content, AI searches will be much worse. If creators cannot make money, everything will get worse, and society will get worse.


I have a few more big questions. AI has always advanced through the transformer architecture, scaling up, and feeding more data. Will we enter a post-transformer architecture in the future, can you foresee that?


Altman: At some point in the future, probably. The question is, will we discover it ourselves, or will AI researchers help us discover it. I don't know.


Thompson: Do you think there might be an introduction of neuro-symbolic elements in the future? Like having a set of structured rules, or is it essentially the paradigm we use today?


Altman: I'm curious why you ask.


Thompson: On my podcast, and this is the fourth season, several guests have come on who firmly believe that to constrain hallucinations, a fundamental issue for AI, grafting some kind of neuro-symbolic architecture onto the transformer is a very good approach. I think it's an interesting and persuasive argument, but I'm not deep enough in the field to judge.


Altman: I think this is one of those ideas that is "firmly believed even though the evidence is far from sufficient." You see, people say, "Oh, it must be neuro-symbolic, it can't just be a bunch of randomly connected neurons," then what do you think your brain is doing? There is also some symbolic representation inside it, but it emerges within the neural network. I don't understand why this cannot happen in AI.


Thompson: Are you saying that a set of "well-defined rules" can emerge from a typical transformer network and play a role similar to "having an external set of rules"?


Altman: Of course.


Thompson: Hmm.


Altman: I think, to some extent, we are the existence proof.


Thompson: Let's discuss another big question. I want to talk about the tension between you and Anthropic. There has always been a great line on your website: "If a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project." It's a fantastic idea: if someone else is about to make it, we stop our own company and go help them.


Altman: That's not how it's written.


Thompson: Okay, it says, "stop competing with and start assisting." That sounds like stopping and helping, like "stopping our company."


Altman: Alright, I see what you mean.


Thompson: So, this sounds very cooperative. You have also talked about the need for cooperation between large labs. However, the current dynamic between you and Anthropic seems very tense, even hostile. Your CRO's recent internal memo said that Anthropic is built on "fear, constraint, and the belief that a small group of elites should control AI." How is this going to work out? If they get there first, or you get there first, how will this "collaboration" take place?


Altman: I believe some version of collaboration is already happening. Around the issue of cybersecurity, all labs need to collaborate more frequently than ever before because we are entering a new risk phase. We are in contact with the government together. I believe there will soon be other things that will require us to collaborate at a higher level of importance.


We clearly have disagreements with Anthropic; to some extent, they have built the company on "hating us." I think we all care a lot about "not using AI to destroy the world," and there may be different views on how to get there. But I am confident that they will eventually do the right thing.


Thompson: Talk to me about your plans for open source. You have taken some steps in this direction. Your company is called OpenAI, and as we discussed at the beginning, open-source models raise possibilities like putting bioweapons capability within everyone's reach.


Altman: Hmm.


Thompson: What is the future of OpenAI in terms of open source?


Altman: Open source will be crucial. But right now, what everyone wants most are the most powerful cutting-edge coding models they can get; that is currently the most valuable thing for people. And even if we open-sourced the most cutting-edge models, they would be difficult for ordinary people to run. Still, open source will have a place in what we do going forward.


Thompson: Part of Claude Code's code was recently leaked, revealing a clever detail: if they detect that an open-source model or any other model is trying to train on their data, they proactively feed back a bunch of fake data. It's both funny and impressive. How do you prevent distillation, other models training on your outputs?


Altman: We and others can do similar things. But obviously, as you partly mentioned earlier, if the thought process behind your deployed model is publicly shared, people will try to distill it. You can play all sorts of tricks to make distillation less effective, but it will definitely happen. You can also turn it around, like "our model will no longer publicly share the thought process once it reaches a certain quality level."


Thompson: But there is a cost, right? Keeping the thought process in English is crucial, because, as you mentioned earlier, that is your approach. But some people don't see it that way. What if it's more efficient for a model to run its thought process in some robot language of its own rather than in plain language? Most likely it would end up using some robot language of its own.


Altman: Then you are sacrificing something on "interpretability."


Thompson: It might also lead to some speed gains. So, this is a trade-off between interpretability and potential speed.


Altman: If it turns out that thinking in a robot language is a thousand times more efficient, the market will push some people to do that.


Thompson: Do you think there is evidence to suggest that's true?


Altman: Not at the moment. But there is also no evidence to suggest it's not true.


Thompson: Are you concerned that China has already surpassed the U.S. in AI research publications?


Altman: No. I am more concerned that they are surpassing us in infrastructure development speed.


Thompson: Okay. We only have a few minutes left. Last two questions. You mentioned before that you used to write a letter to your young son every night.


Altman: It's once a week, not every night.


Thompson: Once a week, before bedtime. I have a story world of my own that I tell to my eldest son, who is now 17, and the younger one who is 12. I've been telling this story world for about 14 years, with the same set of characters, and it's quite fascinating. What is your advice to parents facing AI anxiety?


Altman: Overall, I'm more worried about the parents than the children.


Thompson: Really? Children can figure it out themselves.


Altman: I remember when computers first came out, my parents were like, "What does this mean? What will this bring?" At the time, I thought it was so cool. From a relatively young age, I was much better at using computers than my parents were. Watching these kids who are fluent with AI, what they can do with it, what they build with it, their workflows are significantly more impressive than their parents' (you sound like a rare exception).


But what worries me is that, as has often been the case throughout history, young people adopt new technology faster and more seamlessly than older folks. This time, the gap seems particularly stark.


Thompson: Yet young people happen to be the group where "fear of AI is growing the most."


Altman: I think young people's fear of everything, that general unhappiness and anxiety, is higher than at any other time in history. AI may just be the object onto which that emotion is most easily projected right now. Society has clearly failed young people in some way; I have some theories about why, but I don't think AI is their main problem.


Thompson: So you think young people's anxiety about AI is a projection of something else?


Altman: I think it's where a lot of other anxieties most easily anchor.


Thompson: So your advice to young people would still be, use the tools, build new things, stay curious?


Altman: That is definitely my advice. Look, society and the economy clearly have to change in this new world, and young people understand this better than anyone. They will be anxious until it really changes, but I think it will change.


Thompson: Alright. For each episode, I always ask guests the same final question: If you had unlimited resources, what would you do in AI? You are the only one who truly has unlimited resources, so this question is not quite fair to you. Let me rephrase it: If you were to advise someone outside of OpenAI, who has unlimited resources to sponsor or support a public AI project, what would you have them do?


Altman: Several answers popped into my head. But the one that rose to the top is that I would heavily invest in a brand-new computing paradigm, one that significantly improves efficiency per watt.


Thompson: Interesting.


Altman: It's a fascinating thing. The world will continue to want more. How many GPUs do you want running for you all day long?


Thompson: More than I currently have.


Altman: More than what you currently have. I'm being throttled, man. I don't like that, and I don't want that for others either. But the wave of demand is coming, and assuming we can keep AI accessible, it will lead to incredible things. I hope to find a thousandfold breakthrough in energy efficiency. Maybe we won't find it, but that's the direction I would try to pursue.


Thompson: I realize that part of the reason young people are resistant to AI is environmental concerns. If you can solve that, you'll make a big leap forward in many respects.


Altman: I hear them say that, and I know they mean it. But even if we said we were going to build a terawatt of solar and power all the data centers with it, they wouldn't be any happier.


Thompson: You should still do it.


Altman: Absolutely.


Thompson: Alright. Thank you very much, Sam Altman. You have to go back and manage those Codex agents running on your machine that you've granted YOLO permissions to.


Altman: The new Codex is really awesome. I'm feeling a bit of FOMO right now.


Thompson: Thank you very much.


