DeepMind Founder Interview: AGI Architecture, Agent Status, and Scientific Breakthroughs in the Next Decade

Bitsfull · 2026/04/30 18:07

Summary:

"Some aspects of continual learning, long-range reasoning, and certain aspects of memory are not yet solved, and AGI will need all of them addressed."


Editor's Note


Google DeepMind CEO and Nobel Prize in Chemistry winner Demis Hassabis appeared on Y Combinator to discuss the key developments towards AGI, offer advice to entrepreneurs on how to stay ahead, and speculate on where the next major scientific breakthrough might occur.


The most practical advice for deep tech entrepreneurs is that if you were to start a ten-year deep tech project today, you must incorporate the emergence of AGI into your plan. He also revealed that Isomorphic Labs (an AI drug discovery company spun out of DeepMind) is about to make a significant announcement.



Key Quotes


AGI Roadmap and Timeline


· "These existing technological components will almost certainly be part of the eventual AGI architecture."


· "Issues related to continual learning, long-range reasoning, and certain aspects of memory are yet to be resolved, and AGI will require all of these to be addressed."


· "If your AGI timeline, like mine, is around 2030, and you start a deep tech project today, you must factor in the possibility of AGI emerging halfway through."


Memory and Context Windows


· "The context window is roughly equivalent to working memory. Human working memory averages only seven digits, while we have a context window of millions or even tens of millions of tokens. However, the problem is that we cram everything in, including unimportant and incorrect information, making the current approach quite crude."


· "If you were to process a real-time video stream and store all tokens, one million tokens would only last approximately 20 minutes."


Flaws in Reasoning


· "I like to use Gemini to play chess. Sometimes it realizes it's a bad move, but can't find a better one, so it ends up making that bad move after going around in circles. But a precise reasoning system should not encounter such a situation."


· "On one hand, it can solve IMO Gold-level problems, but on the other, if you rephrase the question, it makes elementary math mistakes. When it comes to introspecting on its own thinking process, something still seems to be missing."


Agent and Creativity


· "To achieve AGI, you must have a system that can actively solve problems for you. Agents are the way forward, and I feel we are just getting started."


· "I haven't seen anyone use vibe coding to create a chart-topping AAA game on the app store. Given the level of effort currently being put in, this should be possible, but it hasn't happened yet. That suggests something is still lacking in the tools or the process."


Distillation and Small Models


· "Our assumption is that, six months to a year after a cutting-edge Pro model is released, its capabilities can be compressed into a very small model that can run on edge devices. We have not yet reached the theoretical limit of information density."


Scientific Discovery and the 'Einstein Test'


· "I sometimes refer to it as the 'Einstein Test,' which is whether you can train a system with knowledge from 1901 and then have it independently deduce Einstein's 1905 achievements, including the theory of relativity. Once this can be achieved, these systems are not far from inventing truly new things."


· "Solving a Millennium Prize problem is already remarkable. But what's even harder is whether you can propose a new set of Millennium Prize problems that top mathematicians consider equally profound and worthy of a lifetime of research."


Deep Tech Entrepreneurship Advice


· "Pursuing hard problems and pursuing easy problems are actually similarly difficult, just in different ways. Life is short, so why not focus your energy on something no one else is doing?"


AGI Implementation Path


Gary Tan: You've probably spent more time thinking about AGI than anyone else. Looking at the current paradigm, how much of the final architecture for AGI do you think we already have? What is fundamentally missing right now?


Demis Hassabis: Large-scale pretraining, RLHF, chain-of-thought reasoning, and so on: I am confident these will be part of the final AGI architecture. These techniques have already proven too much to be discarded; I can't imagine that in two years we'll discover this was a dead end. That doesn't make sense to me. But on top of what we already have, there may be one or two more things. Continual learning, long-range reasoning, certain aspects of memory: some problems are still unsolved.


AGI will need all of these fully solved. It may be possible to scale current techniques that far with incremental innovation, but there may still be one or two major breakthroughs required; I don't think it would be more than that. In my personal view, the odds that such an unsolved key piece exists are roughly fifty-fifty, so at Google DeepMind we are advancing along both paths.


Gary Tan: I deal with a bunch of Agent systems, and the most surprising thing to me is that fundamentally it's the same set of weights going back and forth. So the concept of continuous learning is particularly interesting because right now we are basically using temporary fixes, like those "dream cycles" and the like.


Demis Hassabis: Yeah, those dream cycles are pretty cool. We've thought about this issue before in the context of consolidating episodic memories. My doctoral research was on how the hippocampus elegantly incorporates new knowledge into existing knowledge structures. The brain does an excellent job of this.


It accomplishes this during sleep, especially rapid eye movement (REM) sleep, replaying important experiences in order to learn from them. Our earliest Atari program, DQN (the Deep Q-Network published by DeepMind in 2013, the first system to reach human-level performance on Atari games using deep reinforcement learning), mastered Atari games, and a key ingredient was experience replay.


This was learned from neuroscience, replaying successful paths. That was in 2013, considered ancient history in the AI field, but it was crucial at the time.
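The experience replay idea described above can be sketched in a few lines. This is an illustrative toy, not DeepMind's DQN code; the class name, capacity, and transition layout are made up for the example. The core point is that sampling uniformly from a buffer of past transitions breaks the temporal correlation of consecutive experiences.

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal experience-replay buffer in the spirit of DQN-style agents.

    Transitions are stored as (state, action, reward, next_state, done)
    tuples; training samples uniformly at random rather than learning
    from experiences in the order they occurred.
    """

    def __init__(self, capacity=10_000):
        # deque with maxlen silently drops the oldest experiences
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # uniform sampling without replacement decorrelates the minibatch
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

# Toy usage: store 100 transitions in a 64-slot buffer, then draw a minibatch.
buf = ReplayBuffer(capacity=64)
for t in range(100):
    buf.push(t, t % 4, 1.0, t + 1, False)
batch = buf.sample(8)
```

Real implementations add prioritization, frame stacking, and device-resident storage, but the replay principle is exactly this.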


I agree with what you said; we are indeed using temporary fixes now, trying to cram everything into a context window. It doesn't feel quite right. Even though we are working with a machine rather than a biological brain, and could in theory have million- or even billion-token context windows with perfect memory, the cost of retrieval and lookup still exists. At the moment a specific decision is needed, finding the truly relevant information is not easy, even if you can store everything. So I think there is still a lot of room for innovation in memory.


Gary Tan: To be honest, a million-token context window is already much larger than I expected, and can do a lot.


Demis Hassabis: For most of the scenarios it's used in, it is large enough. But think about it: the context window is roughly equivalent to working memory. Average human working memory holds only about seven items, while we have context windows of millions, even tens of millions, of tokens. The problem is that we stuff everything in there, including unimportant and incorrect information; the current approach is quite crude. Also, if you try to process a real-time video stream and naively store all the tokens, a million tokens only covers about 20 minutes. But if you want the system to understand your life over the past month or two, that's nowhere near enough.
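The 20-minute figure implies a token rate that is easy to reverse-engineer. The per-second rate below is an assumption derived from the quote itself, not a published Gemini number; the sketch just shows the scale of the gap Hassabis is pointing at between today's context windows and "a month or two of your life."

```python
# Back-of-envelope check of the "1M tokens ~ 20 minutes of video" figure.
CONTEXT_TOKENS = 1_000_000
MINUTES_CLAIMED = 20

# Implied rate if the claim holds (an assumption, not a spec): ~833 tok/s.
tokens_per_second = CONTEXT_TOKENS / (MINUTES_CLAIMED * 60)
print(f"Implied rate: {tokens_per_second:.0f} tokens per second of video")

# At that rate, how many tokens would a month of 8-hour days of footage need?
seconds_per_month = 30 * 8 * 3600
tokens_per_month = seconds_per_month * tokens_per_second
print(f"A month of footage: ~{tokens_per_month / 1e9:.1f} billion tokens")
```

Even under these generous assumptions, a month of video is hundreds of times today's million-token windows, which is why naive token storage doesn't scale to lifelong memory.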


Gary Tan: DeepMind has always been deeply committed to reinforcement learning and search. How deeply ingrained was this philosophy in the process of building Gemini? Is reinforcement learning still underestimated?


Demis Hassabis: It may indeed still be underestimated. The level of attention in this area has had its ups and downs. We've been working on agent systems at DeepMind since day one. All the work on Atari and AlphaGo fundamentally falls under reinforcement learning agents, systems that can autonomously achieve goals, make decisions, and devise plans. Of course, we chose the gaming domain at the time because of its controllable complexity, and then gradually tackled more complex games. For example, after AlphaGo, we worked on AlphaStar, and basically, we've done all the games we could.


The next question is whether we can generalize these models into a world model or a language model, not just a game model. Over the past few years, that's exactly what we've been doing. The thought process and reasoning chains of all the leading models today are essentially a return to the pioneering work of AlphaGo.


I believe a lot of the work we did back then is highly relevant today. We are reassessing those old ideas, approaching them at a larger scale and in a more universal way, including various reinforcement learning methods such as Monte Carlo tree search. The insights from AlphaGo and AlphaZero are extremely relevant to today's foundational models, and I think a significant part of the progress in the coming years will come from this.


Distillation and Small Models


Gary Tan: Now, to be smarter, you need larger models, but at the same time, distillation technology is advancing, and small models can become quite fast. Your Flash model is very powerful, achieving about 95% of the performance of cutting-edge models but at a tenth of the cost. Is that right?


Demis Hassabis: I think that's one of our core strengths. You have to build the largest model first to get cutting-edge capabilities. One of our biggest strengths is being able to quickly distill those capabilities and compress them into smaller and smaller models. We invented this distillation method, and we are still world-class at it. And we have a strong business drive to do this. We are probably the world's largest AI applications platform.


With AI Overviews and AI Mode, as well as the Gemini app, nearly every Google product, including Maps, YouTube, and more, now integrates Gemini or related technology. These are products with billions of users. They must be extremely fast, highly efficient, very low cost, and low latency. That gives us strong motivation to push Flash, and the even smaller Flash-Lite, to extreme efficiency, and I hope this ultimately benefits all kinds of user tasks.


Gary Tan: I am curious about how smart these small models can really get. Is there a limit to distillation? Can a 50B or 400B model be as smart as today's largest cutting-edge models?


Demis Hassabis: I don't think we have hit the information-theoretic limit, at least not as far as anyone knows yet. Perhaps one day we will hit some kind of information density ceiling, but for now, our assumption is that after a cutting-edge Pro model is released, its capabilities can be compressed into very small models, almost capable of running on edge devices, within six months to a year.


You can also see this with our Gemma model; our Gemma 4 model performs very well at a similar scale. All of this is achieved through extensive distillation techniques and small model efficiency optimization. So, I really don't see a theoretical limit; I think we are far from that limit.
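The distillation Hassabis describes is, at its core, training a small student model against a large teacher's softened output distribution. Below is a minimal sketch of the classic temperature-scaled distillation loss; the function names, temperature value, and toy logits are illustrative, and production pipelines (and whatever DeepMind actually does internally) are far more involved.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Cross-entropy of student soft predictions against teacher soft
    targets, both softened by the same temperature. The T**2 factor is
    the conventional scaling that keeps gradient magnitudes comparable
    across temperatures."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    ce = -(p_teacher * np.log(p_student + 1e-12)).sum(axis=-1)
    return (temperature ** 2) * ce.mean()

# Toy check: a student that matches the teacher's logits exactly incurs
# a lower loss than one whose preferences are reversed.
teacher = np.array([[2.0, 0.5, -1.0]])
loss_match = distillation_loss(teacher, teacher)
loss_mismatch = distillation_loss(teacher[:, ::-1].copy(), teacher)
```

The high temperature is the key design choice: it exposes the teacher's relative preferences over wrong answers ("dark knowledge"), which hard labels discard.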


Gary Tan: There is a ridiculous phenomenon right now where the amount of work one engineer can do is perhaps 500 to 1,000 times what it was six months ago. Some people in this room are probably doing a thousand times the work a Google engineer did in the 2000s. Steve Yegge has talked about this.


Demis Hassabis: I find this very exciting. Small models have many uses. One is that they are low-cost, and the speed advantage is also beneficial. In writing code or performing other tasks, you can iterate faster, especially when collaborating with systems. A fast system, even if it's not the cutting edge but only at 90% to 95% of the state-of-the-art, is more than sufficient, and the speed of iteration brings back much more than that 10%.


Another major direction is running these models on edge devices, not only for efficiency but also for privacy and security. Think about devices processing very personal information, as well as robots. For your home robot, you would want to run a high-efficiency and powerful model locally, only delegating tasks to a cloud-based large model in specific scenarios. Processing audio and video streams locally, keeping data local—I can imagine this would be a great ultimate state.


Memory and Reasoning


Gary Tan: Going back to context and memory. The models are currently stateless. What would it be like if they had continuous learning capability? What would the developer experience be like? How do you guide such models?


Demis Hassabis: This is a very interesting question. The lack of continual learning is a key bottleneck preventing current agents from completing full tasks. Current agents are useful for local segments of a task, and you can string them together to do some cool things, but they cannot adapt well to the specific environment you are in. That is why they still can't truly be "fire and forget"; they need to be able to learn about your particular situation. To reach full general intelligence, this must be solved.


Gary Tan: Where are we in terms of reasoning? The current model's chain of thought is strong, but it still falters on errors that a smart undergraduate student wouldn't make. What specific improvements are needed? What progress do you anticipate in terms of reasoning?


Demis Hassabis: There is still a lot of room for innovation in the paradigm of thinking. What we are doing is still quite rough, quite brute force. There are many directions for improvement, such as monitoring the process of the thought chain and intervening midway through thinking. I often feel that, whether it's our system or a competitor's system, they tend to overthink to some extent and get stuck in a loop.


Sometimes I like to observe Gemini playing chess. In fact, all leading foundational models are quite poor at chess, which is very interesting.


It is very valuable to see their thinking process because chess is a well-understood domain, and I can quickly judge whether they are deviating from the right path and if the reasoning is effective. What we observe is that sometimes it considers a move, realizes it's a bad move, but can't find a better one, so it ends up going in circles and still making that bad move. A precise reasoning system should not exhibit this behavior.


This significant gap still exists, but fixing it may only require one or two adjustments. That is why you see the so-called 'jagged intelligence,' where it can solve IMO-level problems on one hand, but when asked in a different way, it makes elementary math mistakes. In terms of introspection into its own thinking process, it seems like something is still missing.


Real Ability of the Agent


Gary Tan: Agents are a major topic. Some say it's hype; I personally feel we are just getting started. How far does DeepMind's internal research assessment of agent capabilities diverge from the external marketing?


Demis Hassabis: I agree with what you said; we are just getting started. To achieve AGI, you must have a system that can actively solve problems for you. This has always been very clear to us. The Agent is the way forward, and I think we are just at the beginning.


Everyone is exploring how to better integrate the Agent into their work. We have conducted a lot of personal experiments to see how the Agent can become part of the workflow, not just as a nice addition, but actually doing something fundamental. We are currently still in the experimental phase. It may have only been in the past two or three months that we have truly found particularly valuable use cases. The technology has just reached that point where it is no longer a toy demonstration but actually adds value to your time and efficiency.


I often see people launching dozens of Agents to run for several hours, but I am still not sure if the output justifies the input.


We have not yet seen anyone use vibe coding to create a chart-topping AAA game on the app store. I've tried it myself, and many others in this room have built some nice small demos. I can now build a prototype of "Theme Park" in half an hour, whereas when I was 17 it took me six months.


I have a feeling that if you spent a whole summer on it, you could create something truly incredible. But it still requires craftsmanship, human spirit, and taste; you must bring those things into any product you build. In fact, no kid has yet created a mega-hit game that sells ten million copies, which should be possible given the current tooling investment. So what is still missing? It may be the process, or it may be the tools. I expect to see such results in the next 6 to 12 months.


Gary Tan: To what extent will this be fully automated? I don't think it will be fully automated right from the start. The more likely path is for people in this room to achieve 1000x efficiency first, and then someone uses these tools to create a best-selling app or game, after which more steps will be automated.


Demis Hassabis: Yes, that's what you should expect to see first.


Gary Tan: Part of the reason is also that some people are indeed doing this, but they are not willing to publicly disclose how much the Agent has helped.


Demis Hassabis: Possibly. But let's talk about creativity. I often cite AlphaGo; everyone knows Move 37 from the second game. I had always been waiting for moments like that to occur, and it was only after that moment that I initiated scientific projects like AlphaFold. We started working on AlphaFold the day after we returned from that match, ten years ago now. I'm traveling to Korea this time to celebrate the tenth anniversary of AlphaGo.


But just making it to Move 37 is not enough. It's cool, it's useful. But can this system invent the game of Go itself? If you give it a high-level description, like "a game you can learn the rules of in five minutes but will take a lifetime to master, aesthetically elegant, can be played in an afternoon," and then the system returns to you with the result being Go. Today's systems can't do that. The question is why?


Gary Tan: There may be someone in this room who can do it.


Demis Hassabis: If someone were to do it, then the answer is not that the system lacks something, but that our way of using the system is the issue. That might be the correct answer. Perhaps today's systems do have this capability, but it needs a genius creator to drive it, to provide the soul of the project, while being highly integrated with the tool, almost becoming one with the tool. If you immerse yourself day and night in these tools and have profound creativity, perhaps you can create something beyond imagination.


Open Source and Multimodal Models


Gary Tan: Shifting gears to talk about open source. Recently, Gemma's release allowed very powerful models to run locally. What are your thoughts? Will AI become something users control themselves rather than primarily residing in the cloud? Will this change who can use these models to build products?


Demis Hassabis: We are staunch supporters of open source and open science. With AlphaFold, which you mentioned, we made everything free and open, and our scientific work is still published in top-tier journals. As for Gemma, we aim to build world-leading models at their scale. Gemma has been downloaded around 40 million times, and it has only been two and a half weeks since its release.


I also believe that having a presence in the open-source space with a Western tech stack is crucial. Chinese open-source models are excellent and currently leading in the open-source field, but we believe Gemma is very competitive at a similar scale.


For us, there is also a resource question: no one has spare compute to run two full-scale cutting-edge models. So our current decision is that edge models, for Android, Glass, robots, and so on, are best suited to being open models; once they are deployed on devices they are exposed anyway, so they might as well be fully open. We've unified our open strategy around the Nano tier, which also makes sense strategically.


Gary Tan: Before coming on stage, I demonstrated the AI operating system I built, where I interact with Gemini directly by voice. I was quite nervous showing you something, but it actually worked. Gemini was designed to be multimodal from the start. I've used many models, and none compares to Gemini in direct voice-to-model interaction combined with deep tool invocation and contextual understanding.


Demis Hassabis: Yes. One underestimated advantage of the Gemini series is that we built it from the beginning in a multimodal way. This made the initial stages more challenging than just doing text, but we believe it will pay off in the long term, and we are already seeing the benefits.


For example, in the realm of world models, we built Genie on top of Gemini (a generative interactive environment model developed by DeepMind). The same applies to robotics, where Gemini Robotics will be built on a multimodal foundational model, and our advantage in multimodality will be a competitive moat. We are also increasingly using Gemini in Waymo (Alphabet's autonomous driving company).


Imagine a digital assistant that accompanies you into the real world, perhaps on your phone or glasses, needing to understand the physical world and environment around you. Our system excels in this regard. We will continue to invest in this direction, and I believe our leading edge in these types of problems is significant.


Gary Tan: The cost of inference is rapidly decreasing. When inference is essentially free, what becomes possible? Will your team's optimization direction change as a result?


Demis Hassabis: I'm not sure inference will ever truly be free; the Jevons paradox applies. I think everyone will ultimately use all the compute they can get.


Imagine millions of agents working together as a collective, or a small group of agents thinking along multiple dimensions simultaneously and then integrating. We are experimenting with all these directions, all of which will consume the available inference resources.


On the energy side, if we solve a few problems like controlled nuclear fusion, room-temperature superconductivity, or better batteries through materials science, which I think we will, then energy costs can approach zero. But physical chip fabrication will remain a bottleneck, at least for the next few decades. So there will still be constraints on the inference side, and we will still need to use it efficiently.


The Next Scientific Breakthrough


Gary Tan: Fortunately, small models are getting smarter. Many founders in the room are from the biological and biotech fields. AlphaFold 3 has already surpassed proteins and expanded to a broader spectrum of biological molecules. How far are we from modeling complete cellular systems? Is this an entirely different level of difficulty?


Demis Hassabis: Isomorphic Labs is progressing very well. AlphaFold is just one step in the drug discovery process. We are working on adjacent biochemical research, designing compounds with the right properties, and will soon have a major release.


Our ultimate goal is to create a complete virtual cell, a fully functional cell simulator on which you can impose perturbations, with outputs close enough to experimental results to be practically useful. This will allow you to skip many search steps, generate a large amount of synthetic data to train other models, and have them predict the behavior of real cells.


I estimate we are about ten years away from a full virtual cell. At DeepMind Sciences, we are starting with the virtual cell nucleus, because the nucleus is relatively self-contained. The key to problems like this is whether you can slice out a subsystem of appropriate complexity that is self-contained, whose inputs and outputs you can reasonably approximate, and then focus on that subsystem. From that perspective, the cell nucleus seems suitable.


Another issue is the lack of data. I have talked to top scientists working on electron microscopy and other imaging technologies. Imaging live cells at nanometer resolution without killing them would be groundbreaking: it would turn this into a vision problem, and we know how to solve vision problems.


However, as far as I know, no current technology can image live, dynamic cells at nanometer resolution without destroying them. You can get a static image at that resolution, which is already very detailed and exciting, but that is not enough to turn it directly into a vision problem.


So there are two paths: a hardware- and data-driven approach, and building better learnable simulators of these dynamical systems.


Gary Tan: You don't just look at biology. Materials science, drug discovery, climate modeling, mathematics. If you had to rank them, which scientific field do you think will be most radically transformed in the next five years?


Demis Hassabis: Every field is exciting, which is why it has always been my greatest passion and the reason I have been in AI for over 30 years. I have always believed that AI will be the ultimate tool of science, used to advance scientific understanding, scientific discovery, medicine, and our understanding of the universe.


We initially framed our mission in two steps. Step one: solve intelligence, i.e., build AGI. Step two: use it to solve everything else. We later had to adjust the wording because people would ask, "Are you really saying you will solve all problems?"


We really mean that. Now people are beginning to understand what that entails. Specifically, I'm referring to tackling what I call the "root node problems" in scientific fields, those areas that, once cracked, can unlock entirely new branches of discovery. AlphaFold is a prototype of what we aim to achieve.


Over three million researchers worldwide, almost every biology researcher, now use AlphaFold. I've heard from executive friends at pharmaceutical companies that almost every drug discovered from now on will involve AlphaFold at some stage of the discovery process. We take pride in that, and it's the kind of impact we hope AI can have. But I think this is just the beginning.


I can't think of a scientific or engineering field where AI couldn't be helpful. Those fields you mentioned, I think they are at the "AlphaFold 1 moment," with promising results already, but the major challenges of that field have not yet been fully addressed. In the next two years, we will see a lot of progress in all these fields, from materials science to mathematics.


Gary Tan: It feels like a Promethean gift, giving humanity a whole new capability.


Demis Hassabis: Exactly. Of course, as with the moral of the Promethean story, we must also be cautious about how this capability is used, where it is applied, and the risk of the same set of tools being misused.


Success Stories


Gary Tan: Many of you here are trying to found companies that apply AI to science. In your opinion, what is the difference between truly advancing frontier-pushing startups and those that simply add a layer to a base model API and then claim to be "AI for Science" startups?


Demis Hassabis: I've been thinking about what I would do if I were in your position today, sitting at Y Combinator looking at projects. One thing is that you have to anticipate where AI technology is heading, which is difficult in itself. But I do believe there is a huge opportunity at the intersection of AI and another deep tech field. Whether it's materials, medicine, or other genuinely hard scientific fields, especially those involving the atomic world, there will be no shortcuts in the foreseeable future. These fields will not be overtaken by the next base-model update. If you are looking for a defensible direction, this is what I would recommend.


I have always personally favored deep tech; things that are truly enduring and valuable are never easy. When we started in 2010, AI itself was deep tech. Investors told me, "We already know this thing doesn't work," and academia thought it was a niche direction that had been tried and had failed in the 90s.


But if you have conviction in your idea—why is this time different, what unique combination does your background bring—ideally, you are an expert yourself in machine learning and applications, or you can build such a founding team—then there is tremendous impact and value to be created here.


Gary Tan: This point is crucial. Something looks obvious once it's done, but before it's done, everyone is against you.


Demis Hassabis: Of course, which is why you must do something you are truly passionate about. For me, no matter what, I would be doing AI. I decided very young that it was the most impactful thing I could think of. Reality has borne that out, though it might not have; we could have been 50 years too early.


It is also the most interesting thing I can think of. Even if we were still sitting in a small garage today and AI hadn't taken off, I would find a way to continue. Maybe I'd go back to academia, but I would find a way to keep going.


Gary Tan: AlphaFold is an example of you pursuing a direction and getting it right. What makes a scientific field suitable for generating breakthroughs like AlphaFold? Is there a pattern, such as a certain objective function?


Demis Hassabis: I really should take the time to write this down. From all the Alpha projects, AlphaGo, AlphaFold, and the rest, the lesson I've drawn is that our existing techniques work best in the following situations.


First, the problem has a huge combinatorial search space, the larger the better, so large that no brute-force enumeration or handcrafted algorithm can solve it; the move space of Go and the configuration space of proteins both far exceed the number of atoms in the universe. Second, you can clearly define the objective function, such as minimizing a protein's free energy or winning at Go, so the system can do gradient ascent against it. Third, there is enough data, or a simulator that can generate large amounts of in-distribution synthetic data.


If these three conditions are met, today's methods can take you very far and find the "needle in a haystack" you need. Drug discovery follows the same logic: somewhere there is a compound that treats this disease without side effects; as long as the laws of physics allow it to exist, the only problem is finding it efficiently and feasibly. I think AlphaFold was the first proof that such systems can find that needle in a massive search space.


Gary Tan: I want to level up a bit. We're talking about how humanity has created AlphaFold using these methods, but there's another meta-level, where humans use AI to explore the space of possible hypotheses. How far are we from AI systems being able to do true scientific reasoning (not just pattern matching on data)?


Demis Hassabis: I think we're very close. We're working on these kinds of general systems: we have a system called AI co-scientist, and algorithms like AlphaEvolve that go beyond what base Gemini can do. All the leading labs are exploring this direction.


But so far, I personally haven't seen a truly significant scientific discovery made by these systems. I think it's coming soon. It might be related to what we discussed earlier about creativity, truly breaking through known boundaries. At that level, it's not about pattern matching because there's no pattern to match. It's also not purely extrapolation, but some kind of analogical reasoning. I don't think these systems have it yet, or we haven't used them in the right way.


One standard I often mention in science is: can it propose a truly interesting hypothesis, not just validate one? Validating a hypothesis can itself be groundbreaking, like proving the Riemann hypothesis or solving a Millennium Prize problem, and we may be only a few years from that.


And even harder than that is, can it come up with a new set of Millennium Prize problems, and top mathematicians would consider them equally profound, worth a lifetime of research. I think this is another order of magnitude harder, and we don't yet know how to do it. But I don't think it's magic. I believe these systems will eventually be able to do it, maybe missing one or two things.


One way we can test it is what I sometimes call the "Einstein test," where you can train a system with the knowledge of 1901 and then let it independently derive Einstein's accomplishments from 1905, including the theory of relativity and his other papers from that year. I think we should really run this test, try it repeatedly, and see when we can achieve it. Once we do, these systems will be close to inventing truly new things.


Entrepreneurial Advice


Gary Tan: One last question. Many people here have a deep technical background and want to do things at a scale close to yours; you're one of the world's largest AI research organizations. Coming from the forefront of AGI research, what's one thing you know now but wish you knew at 25?


Demis Hassabis: We actually touched on some of this already. You'll find that pursuing hard problems and pursuing easy problems are similarly difficult, just in different ways. But life is short and energy is limited, so why not put yours into something that truly would not happen if you didn't do it? Use that as your criterion.


Another point is that I think in the next few years, interdisciplinary combinations will become more common, and AI will make interdisciplinary work easier.


One final point depends on your AGI timeline. Mine is around 2030. If you start a deep tech project today, that typically means a ten-year journey. So, you have to factor in the possibility of AGI emerging halfway through. What does this mean? It's not necessarily a bad thing, but you have to take it into account. Can your project leverage AGI? How will AGI systems interact with your project?


Returning to our earlier discussion on the relationship between AlphaFold and general AI systems, one scenario I foresee is a universal system like Gemini, Claude, or a similar system using specialized systems like AlphaFold as tools. I don't think we'll try to fit everything into one massive monolith.

