Editor's Note: Against the backdrop of AI entering a phase of intense competition over capital, computing power, and products, industry discussion is shifting from "Will model capabilities continue to advance?" to "Who can truly stand out in this round of infrastructure reconstruction?" Over the past two years, the market has grown accustomed to understanding AI competition through model parameters, benchmarks, funding size, and valuation changes. However, as large-model capabilities continue to converge and the gap between leading labs temporarily narrows, a more fundamental question has emerged: does long-term advantage in the AI era come from technical leadership, or from a systemic combination of talent, computing power, distribution, organization, and market positioning?

This article is translated from a lengthy conversation between Tim Ferriss and Elad Gil. Elad Gil is a well-known Silicon Valley entrepreneur and early-stage investor who has invested in companies such as Airbnb, Stripe, Coinbase, Perplexity, Harvey, and Anduril, with a long-standing focus on technology cycles and high-growth company evolution.
In this conversation, Elad Gil did not attempt to predict which AI company would ultimately succeed but instead broke down AI competition into a set of more fundamental structural issues: how talent is being repriced, how computing power bottlenecks restrict the gap between leading labs, how application companies identify their exit window, and how startups transition from product capabilities to true organizational expansion.
This conversation can be understood through five themes.
First, the change in the AI talent market. In the past, wealth transfer usually occurred after an IPO: a company went public, and early employees and the founding team completed an asset revaluation. Now, Meta's aggressive bidding for top AI talent has forced other tech giants to match compensation packages, causing a small number of researchers dispersed across different companies to experience a "personal IPO" ahead of time. This means AI talent is no longer just a company's internal R&D resource but is becoming a scarce asset that determines the speed of the technological race.
Second, the constraint of computing power has shifted from a single-chip issue to a supply chain issue. In the past, the market often understood AI infrastructure as "who can buy more NVIDIA GPUs." But Elad Gil emphasized that the current real bottleneck may be in areas such as memory, packaging, data center construction, and electricity. In the short term, this supply chain constraint may actually make it difficult for leading labs like OpenAI, Anthropic, and Google to significantly widen the gap. In other words, AI competition is not about a single breakthrough but is a long-term war revolving around capital expenditures, manufacturing capacity, and infrastructure coordination capabilities.
Third, the lifecycle of AI application companies. In the past, entrepreneurs often equated high growth with long-term value, especially in the early days of a technology wave, when valuations, revenue, and user numbers would all inflate rapidly. However, Elad Gil takes a more cyclical view: in every technological revolution, the vast majority of companies eventually disappear, and AI is no exception. For many currently successful AI application companies, the next 12 to 18 months may therefore be not a fundraising window but an exit window for value maximization. The real question is not whether the company is growing, but whether it will still be durable a decade from now.
Fourth, the redefinition of moats. In the past, a software company's advantage often came from product experience, data, channels, or brand; now, the key for AI application companies is whether they can embed themselves in the customer's workflow and become an indispensable system. A stronger underlying model does not automatically benefit every AI application. Only companies whose products get stronger as the model advances, while integrating deeply with enterprise processes and proprietary data, are likely to survive the cycle.
Fifth, a reinterpretation of startup scaling. Discussing his book "High Growth Handbook," Elad Gil emphasizes that high growth does not happen naturally: boards, funding, organizational management, distribution systems, and acquisition decisions all need to be actively designed. Truly large companies not only have good products but usually have a very strong distribution machine. The Google Toolbar, Facebook buying ads against users' names, and TikTok's large-scale advertising all demonstrate that growth is never a romantic story but a program of systematically executed commercial engineering.
The long-term competition in AI will not be determined solely by model capabilities but by talent, computing power, market windows, distribution capabilities, and organizational design. In this sense, the subject of this article is no longer just about how AI companies can win but about what kind of companies are qualified to survive to the next stage in the new technological cycle.
The original content is as follows (slightly edited for better readability):
TL;DR
· The AI talent war has shifted from recruitment competition to wealth revaluation, with Meta's talent grab essentially allowing a small group of top researchers to undergo a "personal IPO" ahead of time.
· The key bottleneck in AI's short-term competition is not just chips but the supply chain spanning memory, packaging, and data centers, making it hard for leading labs to pull decisively ahead in the next one to two years.
· AI companies' growth rates are rewriting tech history, but the historical pattern has not changed: the overall trend holding up does not mean the majority of companies will make it through the cycle.
· The next 12 to 18 months may be a valuation window for many AI application companies; once growth slows and their products are replicated by the labs, exit value will decline rapidly.
· The truly enduring AI companies are not just those that use off-the-shelf models, but those that can control the entry point, embed in customer processes, and get stronger as the underlying models improve.
· Elad Gil's investment approach is not about chasing hot concepts, but about assessing whether a market is sufficiently large and newly opened, and then whether the team can seize that window.
· Scaling a startup is never a natural process; the board, financing, organizational expansion, and the distribution machine all need to be actively designed.
· The greatest industrial impact of AI is not making software smarter, but reopening formerly closed markets such as law, enterprise services, and white-collar work, shifting from "selling tools" to "selling cognitive labor".
Interview Transcript
AI Talent Is Going Through a Personal IPO
Tim Ferriss: Elad, great to see you. Thank you for taking the time, really appreciate it.
Elad Gil: Good to see you too, as always.
Tim Ferriss: I think we can start from the topic we were just discussing before recording, or rather, a new phenomenon you were explaining. Can you recap what we were just talking about?
Elad Gil: Of course. We were just discussing some acquisitions happening in the AI field. For example, it seems xAI has just obtained what amounts to an option to acquire Cursor. Also, Scale was partially acquired by Meta. These kinds of transactions have been happening quite a bit over the past year or two.
Furthermore, we were also talking about what this means for the AI research community and the broader AI community. I think one of the most interesting things to happen in the past year or so is that Meta has started to aggressively bid for AI talent. This is actually a very rational strategy: since they are investing tens of billions of dollars in computing power, it is reasonable to allocate a real budget to poaching people as well.
Usually in the tech industry, the pattern is this: a company goes public, and a group of people from that company gain huge wealth. Some of them keep working hard, focused on the original mission; others start to get distracted. They may take on projects to serve society, get involved in politics, start a new business, or simply withdraw and go lie on a beach.
And the recent development is that, due to Meta's high offers, other tech giants had to match the corresponding offer for their top researchers. As a result, approximately 50 to a few hundred people actually went through an "IPO" — not as part of a company, but as a group. They are not in the same company but scattered throughout Silicon Valley. However, their compensation packages suddenly skyrocketed, experiencing a wealth leap similar to a company going public. This is very rare and can be called a "personal IPO."
The only similar situation I can think of historically might be in the cryptocurrency industry, where a group of very early crypto holders and founders, as a collective, suddenly achieved a sort of "collective listing" around 2017, and again around 2020; similar episodes have occurred more recently as well.
But this is indeed very interesting, and the discussion is far from over. It may not necessarily have a huge long-term impact, but it does mean that the focus of some people will shift. They may embark on grand scientific projects, trying to help humanity; they may also turn to directions like AI for Science. Some people may leave their original path to pursue a personal mission or other endeavors.
Tim Ferriss: Yeah. Or simply "quiet quit," start indulging various desires and chasing after them. I mean, that will also happen.
Elad Gil: Well, of course, it definitely will.
Tim Ferriss: Along those lines, look at Austin: you have a group of so-called "Dellionaires," early employees of Dell and related individuals who became wealthy from their stock after the IPO. Looking at them as a group, when something like this happens, I don't think we know the extent of its impact or how long it will last, but obviously there will be consequences.
Among the people I know who are both tech-savvy and have a broad vision and network to continually observe AI, there are actually very few. To some extent, if someone can observe this field relatively comprehensively, I would put you in that category.
You wrote an article this week that also discussed other factors at play here, such as the computational constraints AI labs face and how that might impact the next one to five years. Everyone should read this article, titled "Random Thoughts on the Frontier of AI Amidst the Thickening Fog." By the way, the title is nice.
Elad Gil: Quite dramatic.
Tim Ferriss: Yes, it's very dramatic, I love it, very cinematic. But before we dive into the topic of computational power constraints — I do hope you'll touch on that next — for those who may not have much background on the talent war, you mentioned earlier that Meta has started aggressively poaching talent. At the high-end talent level, what are these salaries, equity packages, or overall compensation packages approximately?
Elad Gil: I don't have the full range of exact details, nor do I know all the specifics. But based on rumors and claims already covered in the media, these offers probably range from tens of millions to hundreds of millions of dollars per person.
Of course, the number of people who can command such sky-high treatment is very small. But the core logic is that we are in one of the most crucial technological races in history. The faster AI becomes stronger, the greater the economic value it unleashes. Therefore, for the few truly world-class individuals in this field, companies are willing to pay well above standard prices.
Five or ten years ago, these individuals certainly also received high salaries, but that was a completely different story. Because at that time, AI was not the core of the entire tech industry. More importantly, from a societal, political, educational, medical, and other perspectives, AI will have a very broad impact. I believe that overall these impacts will be positive, but it is indeed a transformative moment, hence the sudden surge in these compensation packages.
The Bottleneck in the Compute War Is Currently More in Memory Than in Chips
Tim Ferriss: What is the computational power constraint you mentioned in your recent article?
Elad Gil: Nowadays, everyone refers to these companies as "labs" — such as OpenAI, Anthropic, Google, xAI, and so on. All these labs are essentially training giant models.
Specifically, you need to purchase a large number of chips from NVIDIA. But in reality, you are building a whole system: it includes NVIDIA chips, memory from SK hynix, Samsung, and other manufacturers, and you also need to construct a data center. Building such a large-scale system and data center involves many steps.
Basically, you are building a cluster of tens of thousands or millions of systems, and the scale is constantly increasing. These systems come from NVIDIA, and may also come from other suppliers. Google has its own TPU, and there are other systems in the industry. You use this infrastructure to train AI models.
This means that you run massive amounts of data through these huge cloud clusters. The most insane part is that the final output model is literally something like a flat file, kind of like a text file. You then load that file to run the AI. When you think about it, it's wild: you run a huge cloud system for months, and what you produce in the end is actually a small file.
And this small file, to some extent, combines the human knowledge available on the Internet, as well as logic, reasoning ability, and other capabilities.
You can also understand this from the perspective of the human brain. Humans have roughly three billion DNA base pairs, which are enough to specify everything about you as a physical individual, including your brain and mind, how you see things, how you speak, how you taste, how you sense the world. All of this is encapsulated in a relatively small set of genes.
Similarly, human knowledge can also be effectively encapsulated in such a small file.
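To make the "model is just a file" point concrete, here is a minimal, purely illustrative Python sketch (the layer names and numbers are invented): a model's learned state is simply a set of named numeric arrays that serialize to, and load back from, a single file.

```python
import json
import os

# Toy "model": a dictionary of named parameter arrays. A real frontier
# model is the same idea at vastly larger scale: named tensors of
# numbers, nothing more. (All names and values here are made up.)
weights = {
    "layer0.weight": [[0.12, -0.53], [0.88, 0.04]],
    "layer0.bias": [0.0, 0.1],
}

# Training would adjust these numbers for months on a huge cluster;
# the final artifact is just this structure serialized to one file.
with open("weights.json", "w") as f:
    json.dump(weights, f)

# Loading the file back restores the model's entire learned state.
with open("weights.json") as f:
    restored = json.load(f)

assert restored == weights  # the file IS the model's learned state
print(os.path.getsize("weights.json"), "bytes on disk")
```

Real labs serialize billions of parameters in binary tensor formats rather than JSON, but the principle is the same: months of cluster time distill into one loadable file.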
Tim Ferriss: So how do you see these constraints? Where are the constraints specifically?
Elad Gil: Every year, building these large-scale cloud clusters for training AI runs into constraints. Then there is also inference, the stage when you actually run the AI system itself to serve users; that too requires large numbers of NVIDIA chips, TPUs, and other chips.
But besides the chips themselves, you need other things. For example, you need packaging capabilities to truly encapsulate the chips. So, around the construction of these systems, there is a whole supply chain.
Various parts of this supply chain hit different bottlenecks at different times. The main bottleneck right now is memory, or more precisely a specific type of memory, high-bandwidth memory (HBM), which is produced mainly by Korean companies, though there are other suppliers as well.
It is widely believed in the industry that this memory bottleneck may last for about two years, give or take, ultimately because these companies' production capacity is lower than that of the other parts of the system.
Some believe that in the future, other constraints may evolve into the construction capacity of data centers themselves, or the power and energy required to operate these systems. But for now, the main bottleneck is memory.
The entire industry is currently limited by how much computing power it can acquire and then put that computing power into model training and operation. This leads to one result: in the short term, it adds a ceiling to the scale of models. Because every lab is buying computing power as much as possible, many startups are also buying computing power as much as possible, and everyone is stuck.
This means that in the short term, there is an artificial ceiling on how big models can be, how much reasoning can be done, and how much you can actually do with AI right now.
But it also means that no lab can far outstrip everyone else, because no one can buy ten times more computing power than the rest.
And there are scaling laws at work here: the more computing power you have, the larger the AI models you can train; in many cases, the model's final performance will also be stronger.
This may mean that in the next two years, the capabilities of these labs will likely be relatively close. Because no one has enough capacity to suddenly pull ahead.
But after this constraint is lifted, there is indeed a possibility: a company could suddenly take a significant lead over all other companies. Right now, OpenAI, Anthropic, and Google are actually quite close in terms of capabilities, although some companies may be ahead in some areas while others are ahead in other areas. It is generally believed that due to this bottleneck, this relatively close state should continue for at least another two years.
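The scaling-law point can be sketched numerically. Assuming a simple power-law relation between compute and model loss (the constants below are invented for illustration, not taken from any lab's data), even a 10x compute advantage buys only a modest improvement, which is part of why no lab pulls far ahead overnight:

```python
# Toy scaling law: loss falls as compute rises, with diminishing
# returns. The constants A and alpha are illustrative only.
A, alpha = 10.0, 0.05

def loss(compute: float) -> float:
    """Hypothetical power law: loss = A * compute^(-alpha)."""
    return A * compute ** (-alpha)

base = loss(1e24)    # one lab's training compute (FLOPs, hypothetical)
rival = loss(1e25)   # a rival with 10x the compute

improvement = 1 - rival / base
print(f"loss {base:.3f} -> {rival:.3f}, about {improvement:.0%} better")
```

Under these made-up constants, ten times the compute improves loss by only about 11 percent, so capability gaps stay narrow while everyone is compute-constrained.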
Tim Ferriss: Is Google also constrained by memory supply from companies like Samsung and Micron? Are they under similar constraints as other players?
Elad Gil: At the moment, basically everyone is under similar constraints. Some labs are either developing their own chips or their own systems. For example, Google has things like the TPU, and Amazon has developed its own chip called Trainium. Different companies have different systems, but fundamentally, they are limited by how much they can produce or purchase.
A year or two ago, the main bottleneck was packaging; now it's memory. Who knows what it will be two years from now, maybe it will be something else. In the process of advancing this round of infrastructure construction, we will continue to encounter new bottlenecks.
Tim Ferriss: My question may sound naive because I am a "Muggle" and cannot write technical whitepapers or anything close to that. But in my opinion — and I'm certainly not alone in saying this — we may be better at predicting problems than predicting solutions.
For example, a long time ago, when the price of gasoline rose above a certain level, everyone started predicting disaster and collapse. But when the price of oil per barrel exceeded a certain level, new extraction methods suddenly became viable, and funds began to flow into technologies like hydraulic fracturing.
So is there a possibility that in the case of the AI computing power bottleneck, there will also be some kind of workaround? Something like this logic, I don't know if it makes sense to say that. Maybe not at all.
Elad Gil: As far as I know, at least not yet. Part of the reason is that the way these things are built makes it difficult to bypass them.
For example, the capacity needed for memory fundamentally relies on a certain type of semiconductor fab. So you need time to build the fab, procure equipment, and set up the production line. This is a traditional capital expenditure and infrastructure development cycle.
These companies had previously underinvested in this area because they didn't fully believe others' predictions of AI demand at that time. Now they can only strive to catch up.
So it has become a situation where everyone is saying, "AI is growing so fast, how can it sustain this pace?" But it is indeed continuing to grow at this relentless pace. The reason is that the influence of these capabilities is too significant and too critical.
Looking at the revenue of these companies is very interesting. I can send you the chart later. Jared on our team created a chart summarizing how long different companies took to go from $1 billion in revenue to $10 billion, then from $10 billion to $100 billion, and then from $100 billion to $1 trillion.
Historically, very few companies have truly reached these scales. You can look at companies of different generations and see how long each took. I can't remember exactly, but companies like ADP may have taken 30 years to reach $1 billion in revenue, whereas Anthropic and OpenAI did it in a year.
Google probably took four years back then, I can't recall the exact number, but roughly like this: the later the generational company, the faster the scale. Now it is rumored that both OpenAI and Anthropic have annualized revenues of around $30 billion.
Tim Ferriss: This is insane.
Elad Gil: Because four years ago, they had no revenue at all. And $30 billion is roughly 0.1% of U.S. GDP. So AI may have grown from zero to a few tenths of a percent of GDP, at least from a revenue-contribution perspective.
If we extrapolate further, assuming they reach $100 billion in revenue in the next year or two and keep growing from there, we get close to a scenario where each of these companies accounts for 1% or even 2% of GDP. Think about it; it's really outrageous.
Tim Ferriss: It's insane, truly insane.
Elad Gil: These things are indeed very significant and very useful. And this doesn't even include the cloud revenue Azure receives from its AI business, nor the related revenues of Google Cloud or Amazon. This is just about OpenAI and Anthropic. It's really extreme.
Tim Ferriss: I'm very interested in diving deep into your thought process. Because among all the people I've met, you are the best at first principles thinking and one of the best at systematic thinking. I enjoy our conversations because I always learn something new, and it's not necessarily a specific data point; many times it's a different perspective on an issue or a thinking framework.
And your framework itself is constantly evolving. For example, I remember seeing an interview you did with First Round Capital a long time ago. Back then, you talked about how you used to look at the market first when making investments, and then consider the strength of the team. You also mentioned missing out on investing in Lyft during its Series C. At that time, your judgment partly depended on your assessment of the market landscape: whether it was going to be a winner-takes-all situation, an oligopoly, or some other form.
I'm curious, in the field of AI, how are you thinking about this issue now? Because among the people I know, you were one of the earliest to start moving in this direction, maybe even the very first.
So, what are your thoughts now? This also ties back to a statement you made in your article. I haven't heard anyone else say this, but I can bring up this sentence as a hint—although I feel like you don't need a hint.
You wrote: Founders currently running successful AI companies should seriously and calmly consider exiting in the next 12 to 18 months. This may be the time window to maximize the outcome value.
You also revisited the survival rate of companies during the burst of the internet bubble, and the percentage of companies that later emerged as true winners. How do you think about this issue? Could you explain this statement?
Elad Gil: Of course.
Tim Ferriss: Also, I'd like you to explain how you currently view what kind of landscape this market will ultimately form. Do you think it will be a winner-takes-all situation, an oligopoly, or will there be other dynamics?
Elad Gil: If you look at historical precedents—of course, this doesn't mean AI will necessarily follow the same path—almost every technological cycle, 90%, 95%, or even 99% of companies eventually fail.
This can be traced back to a hundred years ago in the so-called "high tech" industry, the automobile industry. Back then, Detroit had dozens of car companies and hundreds of suppliers, but eventually, the entire industry consolidated into a few major automakers. This is not a new story.
Looking back at the internet cycle of the 90s, or the internet bubble, around 1999, there were approximately 450 companies that went public, and in the first few months of 2000, another 450 or so companies went public, adding up to about 900 companies. Adding to that the 500 to 1000 companies that had gone public in the years before, the total is roughly between 1500 to 2000 companies.
These companies are all already public, which means to some extent they have already been "successful." But how many of these companies are still around today? Maybe a dozen, perhaps two dozen. That is to say, out of 2000 companies, roughly 1980 have in some form disappeared or been acquired at very low prices.
So we have no reason to believe the AI cycle will be any different. Every cycle is like this. SaaS was like this, the mobile internet was like this, and the crypto industry was like this too. Most companies will not succeed; only a few will survive. We can discuss which companies will survive.
So, if you are running an AI company now, you should ask yourself one question: what kind of longevity does your company really have? Ten years from now, will you be one of those dozen or two dozen truly important companies? Or is now actually a good selling window, because what you are doing may be commoditized, may face direct competition from the big model labs, or may be made obsolete by market and technology shifts?
Of course, there will be a few companies that continue to become very great. They should not sell or exit but should keep moving forward. But there are likely many companies for whom now, or in the next 12 to 18 months, is the best time they will ever have to get the highest valuation for what they are doing.
For every company, there is a moment of value maximization. They reach a certain peak, and that peak is usually a window of six to twelve months. During that time, what you are doing is important enough, growing fast enough, everything is running smoothly, and certain headwinds have not really hit yet.
Sometimes these headwinds are actually foreseeable; you can see them coming. And a lot of the time, you can see them in the second derivative of growth: your growth rate starts to flatten a bit. At that point, you either keep pushing upward or you should consider selling. That is really what my statement is about.
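The "second derivative of growth" signal can be illustrated with a toy Python example (the revenue figures are hypothetical): revenue still rises every quarter, yet the second difference has already turned negative, which is exactly the flattening described here.

```python
# Hypothetical quarterly revenue: still growing every single quarter.
revenue = [10, 20, 38, 60, 80, 95]

# First difference: growth per quarter.
growth = [b - a for a, b in zip(revenue, revenue[1:])]
# Second difference: the change in growth, i.e. acceleration.
accel = [b - a for a, b in zip(growth, growth[1:])]

print("growth:", growth)   # [10, 18, 22, 20, 15]
print("accel: ", accel)    # [8, 4, -2, -5]: acceleration went negative

# The headline number (revenue) looks healthy long after the
# second derivative has started flashing a warning.
```

The headline metric keeps rising for several quarters after the acceleration turns negative, which is why founders watching only top-line growth tend to miss the selling window.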
You can also see from our previous conversation that I am extremely bullish on AI. So, this is not to say I am not bullish on the overall transformation AI will bring, but to say that in this transformation, ultimately only a few companies will remain significant. The key question is: Are you one of them?
If you are, then you should never, ever, ever sell.
How Do AI Companies Navigate the Cycle? Either Control the Gateway or Embed in Workflow
Tim Ferriss: So what are the characteristics of these few companies? I mean, those that truly have a lasting advantage. Looking back to 2000, you wonder, what criteria should have been used to pick out Google and Amazon at the time?
Elad Gil: Yes.
Tim Ferriss: I'm not saying that the Internet bubble is the best point of comparison. But in the current wave of AI company proliferation, which companies do you think have enduring advantages?
Naturally, some prominent large-model labs come to mind. Perhaps they will become the gateway for all other applications, who knows. But how would you answer? Looking at common characteristics or specific company names, what do you think distinguishes the few companies that will survive from the rest?
Elad Gil: I think the core large-model labs will exist for quite some time. For example, OpenAI, Anthropic, Google, as long as there is no kind of accident, disaster, or internal implosion, they seem to be in a relatively stable position.
As for the market structure you mentioned, I wrote a Substack post about three years ago, predicting that this might be an oligopoly: there will be only a few companies, and they will be tied to cloud providers. Looking at it now, that's largely the case. Of course, there are still Meta, xAI, and other players that could change the landscape. These variables did not exist when I wrote that article.
But in my view, in the short term, this is still an oligopoly. There is no natural reason for it to turn into a monopoly market unless one of them is so far ahead in capabilities that it naturally becomes the default choice for everyone. This scenario is possible, but it has not happened yet. And as for the compute constraint I mentioned earlier, it may prevent this situation in the short term, or at least place some limits on it.
If you look up the tech stack into the application layer, you will see different types of application companies. For example, Harvey in the legal field, Abridge in the medical field, and Decagon and Sierra in the customer success field. There are some companies in each application direction.
To judge whether these companies can establish themselves in the long term, you can look at them from three or four perspectives.
First, if the underlying models get better, will your product or service significantly improve for customers and make them willing to continue using you?
Second, from a product perspective, how deep and broad are you going? Are you building multiple products? Are these products integrated into a coherent whole? Is it really embedded in a company's internal processes and to an extent that is difficult to uproot?
Many times, the real issue companies face with AI is not "how good is this AI," but "how much do I need to change existing workflows and how employees do things to adopt it." This is often a change management issue, not a technical one.
So, if you have deeply embedded yourself in the client's workflow, business processes, organizational collaboration, and how various systems interconnect, this position often becomes more enduring.
Third, are you capturing, storing, and leveraging proprietary data? Sometimes this can be very useful. Overall, I think the so-called "data moat" is often overhyped, but in certain cases, it does hold significant value. This usually corresponds to a "system of record" worldview.
So there is a set of criteria for assessing how durable a company's defensibility is, and at the application layer this is often the key lens.
Tim Ferriss: So I have a question. Suppose someone in the audience is in this position: they might be a founder and should consider identifying the transient window when their company is most valuable, and then to some extent, "parachute out." What are their options?
Because I'm thinking of some companies—no names mentioned—that now have valuations in the tens of billions of dollars. From my mostly outsider perspective, what these companies are selling now doesn't seem very difficult to replicate in a large model lab.
Should these companies aim to be acquired by some big model lab? If so, then the lab faces a "build vs. buy" decision. Or should they target not companies like OpenAI, Anthropic, but those looking to more deeply engage in this game, such as Amazon, or similar players? How do you view their exit options?
Elad Gil: I think there are actually many exit options. And one crazy thing right now is that if you go back 10 or 15 years, the largest companies globally were probably valued at around $300 billion. The largest tech companies, I remember, were maybe around $200 billion in valuation. The largest companies back then seemed to be energy companies like Exxon.
But over the past 10 to 15 years, things suddenly changed: we started having a bunch of trillion-dollar market cap companies. Everyone thought that was crazy, but in reality, company sizes will likely only continue to grow. The biggest winners in the future might see even stronger aggregation effects rather than more dispersion.
Now, more and more companies are in the range of $100 billion to trillions in valuation, which is unprecedented. This means they have enormous purchasing power. Because 1% of a $3 trillion market cap company is $30 billion. In other words, diluting just 1% ownership allows you to acquire a company for $30 billion. That's extremely mind-boggling.
This is indeed unprecedented. And it is precisely because of this that these mega-acquisitions can now happen.
Tim Ferriss: For those companies that come to mind for me — I won't name names — they may seem to have a limited lifespan. I often chat in small groups with friends, many of whom are highly successful tech investors. And then I ask them: "Alright, imagine these five companies are lined up here, and you have 10 chips, how would you allocate them?" Some companies, although not low-profile, almost always end up with 0 chips. So why would these labs go and acquire such companies?
Elad Gil: It depends on the specific company. And the buyer isn't necessarily a large-scale lab; it could also be a big tech giant, like Apple, Amazon, and to some extent Google. There are also Oracle, Samsung, Tesla, and now even SpaceX is starting to enter this market and do related things. There are actually many different types of buyers. Then there are Snowflake and Databricks. If you are in financial services, there might be Stripe or Coinbase. In fact, a large group of companies are already operating at enormous scale, and that is the key.
So, a company usually ends up selling to one of a few types of buyers.
The first type is large-scale labs, hyperscale cloud providers, or large tech companies.
The second type is companies that are very focused on your specific vertical. For example, if you are in law, accounting, or a related field, companies like Thomson Reuters may be interested.
The third type, and I think this hasn't happened nearly enough, is mergers between competitors, especially between private companies. Because if your primary goal is to win the market, and you and another competitor are evenly matched, competing on every deal and undercutting each other's prices, maybe the better option is to merge.
That situation is a lot like X.com and Confinity, the company behind PayPal, around 2000. Elon Musk and Peter Thiel were running separate companies at the time and later chose to merge because they realized, "Since we are both doing the same thing, why keep fighting?"
Tim Ferriss: Yes. Or like the early days of Uber and Lyft. That might not be considered a merger, more of an acquisition.
Elad Gil: Yes. The rumor is that it almost happened, but then Uber backed out. But all the money Uber has spent over the years competing with Lyft might exceed what it would have cost to just buy them back then. Of course, that may not be the case; I don't have the specifics.
However, many times, choosing to say, "Let's stop competing against each other, merge instead, and go win together," actually makes sense. Because if the primary goal is to win the market, and you are already competing with a group of existing giants, why make it more difficult?
Tim Ferriss: You know, we often discuss this. But this time, I want to talk about your perspective as an investor. Before you truly put on the "full-time investor" hat, you already had a lot in your background that may or may not have helped you. I'm curious: looking back at your biology background and your math background, do you think those things, or other experiences, have substantially shaped your investment thinking? Have they given you some kind of edge? Of course, winning deals have different stages, but let's first talk about screening and the selection process.
Elad Gil: I think math has helped me in two ways.
First, it has helped me understand certain technical issues, especially things related to algorithms and computer science, and that is sometimes very useful for understanding how things work in AI. Or at least it makes me more comfortable with numbers and data. I wouldn't necessarily call it "speaking nerd," but it's probably something like that.
To be honest, I majored in math simply because I liked it, and I think that's where the real benefit came from. I only did an undergraduate degree, so I didn't go too deep, but what I studied was very abstract pure math.
I think this is a very good training; it forces you to really think logically step by step. At least when I was learning how to do proofs, the general way was: you first establish a logical sequence, but sometimes you also make some intuitive leaps, and then you try to prove it to yourself afterwards or complete the reasoning behind this intuition.
I think investing is sometimes a bit like this.
Tim Ferriss: When was the first time you realized you might be good at investing? That could be investing broadly, or, in the context of our conversation, angel investing in startups. When did you first feel, "Hmm, maybe I'm not bad at this"? Was there a moment, a particular deal, or something else that made you think that?
Elad Gil: Actually, no. I'm very demanding of myself, so even now I often question myself. Someone once told me that the two people most prone to beating themselves up after the fact are me and another very well-known founder and investor.
So, I don't have a single moment where I thought, "Wow, this really suits me." It's more like it just kept happening naturally: I invested in some very strong companies, and that allowed me to keep going. And yes, I do wish I'd had that kind of "epiphany" moment.
Tim Ferriss: Damn, you've got to, like every great founder, rewrite your early story.
Elad Gil: Yeah, I've been thinking about investing in tech companies since I was seven.
Tim Ferriss: How did you get into those deals? Some people have an information edge, and they put themselves in a position to have that edge. I don't want to lead the witness on this question, but for me, if I hadn't moved to Silicon Valley in 2000 and then stayed there, especially moving to San Francisco, nothing I've done in angel investing would have happened.
But clearly, your story is more than that. Because a lot of people moved there with hopes of getting rich through startup companies, in whatever capacity. Of course, I'm not saying you moved there for that. But what allowed you to get into those deals? Based on our past conversations, I have some factors in mind, but I'll hold off on saying them. Why were you able to get in, or selected for those deals?
Elad Gil: I think what happened early on versus what happens now is different. Those are two different stages.
Just as you said, for anyone trying to get into any industry, the most important thing is to go to the headquarters of that industry, or where its cluster is based. You have to move to where things are actually happening. The advice that says "you can do anything from anywhere" is nonsense. It's not just the tech industry; it's all industries.
If you want to get into the movie industry, people won't tell you, "You can write movie scripts from anywhere, do digital music scores from anywhere, edit from anywhere, and also shoot from anywhere. So go to Dallas and join their thriving film community." They'll say, "Go to Hollywood."
If you want to get into finance, you might say, "I can fundraise from anywhere, think of trading strategies from anywhere, hedge fund strategies from anywhere." But people will say, "Go to New York, or to some financial center."
It's the same with the tech industry.
Our team member Shreyan has been doing a "Unicorn Analysis," studying where the market cap of private tech companies is concentrated. Traditionally, about half is in the U.S., and within the U.S., about half is in the Bay Area. But in this AI cycle, 91% of the market cap of private tech companies is in the Bay Area. 91% of the global AI private market cap is concentrated in roughly a 10-mile by 10-mile area.
So, if you want to do AI, you should probably be in the Bay Area. The second option might be New York, but then it drops off a cliff. The real core place is still the Bay Area.
If you want to do defense technology, you might want to go to Southern California, close to where SpaceX and Anduril are located, such as Irvine, Orange County, El Segundo, and so on. There are many startups there.
If you want to do fintech and crypto, it's probably New York.
But the reality is, these industry clusters are very strong. So the first point, as you said, is that I was in the right place at the time, in the right network. The other baseline condition is that I was running a startup myself. I had worked at Google for many years, then left to start my own company, and people started coming to me for advice.
For example, the way I eventually invested in Airbnb was when they had about eight people, and I was helping them with their Series A. I introduced them to some people and provided very light strategic help. Of course, they would have completed the fundraising without me. In the end, they said, "Hey, when this round is closing, do you want to invest a bit?" I said, "Sure, it sounds great." It was a very natural thing.
Another example is how I invested in Stripe. At that time, I had sold my own early-stage API infrastructure company to Twitter when Twitter had about 90 people. Then I sent an email to Patrick, the CEO of Stripe, saying, "I've heard a lot of good things about you, and I also really like what Stripe is doing. If it were my own startup, I would use it too. I just sold an API company myself. Do you want to chat about these things?"
We took a few walks together. One or two weeks later, he texted me, "Hey, we're fundraising, would you like to invest?" So my earliest investments happened very naturally. Founders would say, "I hope you'll join us."
At that time, I didn't think, "Oh, I should become an investor and then go chase projects." I just really enjoyed talking to smart people, solving certain business problems, and loving technology and how it translates into the real world. I'm just a nerd, and then I met other nerds, and we hit it off. That's my early story.
Tim Ferriss: I suddenly thought of a saying you've probably heard — I'm sure everyone has: if you want money, ask for advice; if you want advice, ask for money. I just realized it can work the other way around, too. In other words, if you keep offering a lot of advice, you will often eventually get the opportunity to invest money. Conversely, if you start out wanting to give money, people may come to you for advice.
Elad Gil: Yes, well said.
Tim Ferriss: When did you write the "High Growth Handbook"? When was that book published?
Elad Gil: It's been a while. Probably around seven years ago, more or less.
Tim Ferriss: Seven years ago. Okay, we'll come back to this topic later. Because you are indeed in the right place geographically. You are at the center of the switchboard. As you said, the earliest prominent investments were very organic.
What I'm curious about is, as you mentioned earlier, in the past, you were doing one thing, and now you are doing another. But between these two, there has been an evolution. For example, I want to ask, do you still agree with this statement? This is from that First Round interview I mentioned earlier: "As a general rule, when I make investments, I first look at the market, then at the strength of the team." There is more to it. But do you still agree with this statement?
Elad Gil: I agree 90%. Occasionally, you come across a very special individual, and then you support them, especially in the very early stages.
For example, I led the first round of funding for Perplexity, very, very early. The reason was that Aravind, Perplexity's CEO, had, I think, sent me a message on LinkedIn. At the time, hardly anyone was working on AI; he was then an engineer or researcher at OpenAI.
He said, "Hey, I'm at OpenAI." Of course, no one really cared about OpenAI at that time. "I'm considering doing something related to AI. I heard you talking about these things, and not many others are. Can we meet?"
Then we started meeting every two weeks, brainstorming together. Later, this turned into an investment. It was a "people-first" thing because he was just so excellent. Every time we finished our discussion, a week later he would come back with a finished product of what we had talked about. Who does that?
Tim Ferriss: Yes, that's a very good signal.
Elad Gil: He was really impressive.
Another example is how I eventually invested in Anduril. At the time, Google had shut down Maven, its defense project. I thought, "If the existing giants aren't willing to do this, isn't it a great opportunity for a startup to step in?" Silicon Valley and the defense industry actually share a long history — HP and many of the early companies started that way.
So I was looking to see if anyone was working on this at the time. This direction was very unpopular back then. Later, I think it was at a brunch or similar event, I met Trae Stephens, one of the co-founders of Anduril, who was also at Founders Fund.
Once again, this highlights the importance of being in the right city. He said, "Oh, I'm working on a new defense project." I said, "Great, let's talk about it."
So sometimes I actively look for these things in the market, and sometimes I meet the people first. With Anduril, I saw the market first and then found exceptional people. Perplexity was somewhere in between: I was already looking at everything in AI because I believed it would become extremely important, though not many people were focusing on it then, and that's when I met someone outstanding.
My investment in OpenAI was also like this. My investment in Harvey, the early legal AI company, was also like this. I invested in many very early-stage projects because they were among the few truly working on what I believed to be crucial markets at the time.
Tim Ferriss: I'd like to go back to a few things you said earlier. You mentioned the founder of Perplexity, or the person who later became a founder, saying he saw or heard you talking about AI. Where exactly was that? Was it in your blog post? Or somewhere else? How did he actually discover you were talking about these things?
Elad Gil: I think he reached out to me in part because I had previously been involved in many companies from the last generation of tech, like Airbnb, Stripe, Coinbase, Instacart, Square, and others. I already had some visibility as both a founder and an investor.
Also, at that time I was actively "harassing" AI researchers, constantly asking them what was happening, because it was so fascinating. Many people were using GANs — generative adversarial networks — to make art back then.
I was also playing around with these things. I had tried to hire engineers to help me build something fundamentally similar to Midjourney because I felt that if AI art creation could be made very easy, it would be very cool.
Tim Ferriss: I'll pause here for a moment because this leads perfectly into my second question. When you mentioned AI earlier, you said you believed it would become extremely important at the time. What signs led you to that conclusion? What was the distant "smoke" that made you think, "Oh, this is an interesting direction"?
Elad Gil: I think there are probably two or three factors.
AI has always been something that people have talked about for a long time. When I was studying math, I took many theoretical computer science courses and was exposed to early neural network classes and the underlying mathematical foundations. People have always been anticipating building some form of artificial intelligence.
In a sense, you could even say Google was the first AI-first company. It's just that at that time, we called it machine learning, and in a sense, the technical foundation was also different.
I think 2012 was a key inflection point. That year, AlexNet appeared, proving that you could start scaling up models, and as the scale expanded, AI systems would exhibit very interesting features.
Then in 2017, a team at Google invented the Transformer architecture. Now almost everything is built on top of this architecture, or roughly based on it. For example, when you look at the GPT in ChatGPT, that "T" stands for Transformer.
Then around 2020, GPT-3 appeared. It was a huge leap compared to GPT-2. At that time, it wasn't good enough to truly be widely applicable, but you would realize: "Wow, the scaling laws papers are out, and the leap in capabilities is so significant."
All of a sudden, you have a general model that can be called via API, accessible to anyone. If you extrapolate this further, you will find that it is bound to become very important.
So basically, I'm watching this leap in capabilities, trying out these technologies firsthand, and reading scaling laws-related papers. Or more broadly, I found that scaling laws seem to apply to many things. You would think, "Wow, this thing is going to become very, very important, so I should start getting involved."
Tim Ferriss: Do you think if you didn't have a math background, you would still be able to make this kind of judgment? I guess others might have done it too. But this also brings me to my question: How did you discover and absorb this information? Was this a hot topic in the circle at the time? In other words, in your social circle and network, were people already publicly discussing this, so you naturally got involved? Or were you already absorbing a lot of information from different fields, and AI happened to be one particular direction that particularly attracted you?
Elad Gil: I think there are three things.
First, I have always absorbed a wealth of information from many different fields because I enjoy learning about various things. I am someone who blends mathematics, biology, anime, art, and other subjects together and have always been in this mixed state.
Second, this was indeed something my friends would talk about, but at that time, it was more like a playful discussion. For example, "Oh, this is cool, look at what it generated." However, most people did not take it any further. It was a bit like early-stage cryptocurrency or Bitcoin: everyone was talking about it, but very few actually bought into it. I think that's part of the reason.
Thirdly, to be honest, I just found these things very interesting, so I kept playing around with them.
This brings us back to the matter of GANs and AI art. Different models would keep emerging at that time, and you could try them out.
Regarding this round of foundation models, AI, and all the related changes, the importance of one thing has been severely underestimated. The way AI or machine learning used to work was usually like this: you would have a team in your company or elsewhere, plus a so-called MLOps team — machine learning operations. Their job was to help you set up all the data, pipelines, and processes needed to train a model.
The model you trained was tailored to your specific use case, tailored to what you wanted to accomplish. Then you had to build a bunch of internal services to interact with this model.
So, making a usable machine learning system truly run and enter a production environment was a very painful thing.
Then suddenly, things turned into: you just need to call an API. With one line of code or a few lines of code, anyone anywhere in the world can access it.
And not only that, it is also generic. It is no longer specialized for one scenario, like spell-checking, for example. You can use it for anything. In a sense, its knowledge base is embedded with the entire Internet. It also began to possess more advanced reasoning capabilities.
But one of the most important points is this: you can get it with just a few lines of code. You don't need to assemble an MLOps team, host it yourself, deal with a bunch of interaction processes, or do all this extra work. It's just usable.
This is really crucial.
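The shift Elad describes, from bespoke MLOps pipelines to "a few lines of code," can be sketched as a single HTTP request to a hosted general-purpose model. Below is a minimal sketch in the common chat-completions style; the endpoint URL, model name, and API key are placeholders, not any real provider's API.

```python
import json
import urllib.request

# Hypothetical endpoint, sketched in the common chat-completions style.
# Swap in a real provider's URL, model name, and key to actually use it.
API_URL = "https://api.example.com/v1/chat/completions"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble the single HTTP call that replaces a custom MLOps pipeline."""
    payload = {
        "model": "general-purpose-llm",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Draft a one-line product description.", "sk-placeholder")
# urllib.request.urlopen(req) would send it: no model training,
# no serving infrastructure, no MLOps team required.
```

The point is the contrast: everything that used to require a dedicated team — data pipelines, training, hosting, serving — collapses into constructing one request against a generic model.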
Tim Ferriss: This is so crucial. It's really hard to overstate this.
I have a million questions for you. The problem is we have too many directions to cover — it's almost an embarrassment of riches.
My team and I are now using Claude Code and various tools to do a lot of things. One of them happens to align closely with your expertise: angel investing.
For the first time, I feel like I truly have the capability to do this. Of course, as you might expect, some manual input is still required. But now I can look back and analyze my two decades of angel investing experience and attempt to do many different things.
I suspect many of the things that pique my interest may not have much practical value, such as counterfactual analysis: What if I had held each investment for three years, five years, or some other period? That's basically Opus Dei-style self-flagellation — just whipping yourself on the back.
However, while conducting this analysis, some questions immediately come to mind and may actually be worth exploring. I want to hear if you would do this and if so, how you would go about it.
To be honest, part of this is purely out of curiosity. I want to know if the stories I've been telling myself all along are actually true. For example, I'd be interested to know: Who exactly made certain introductions? Did some people just refer those terminally ill, almost-on-life-support companies to me for a last-ditch effort? Or, were there indeed some individuals consistently recommending good projects to me?
There are a million ways I could interrogate and enrich this data. We are currently doing this with Claude and other tools, and it's going well. OpenAI is also very strong in this area.
Looking back, such as in my case, with approximately 20 years of investment records, what do you think are some more intriguing questions or analysis paths worth examining?
Elad Gil: Yes. I've been doing something quite odd recently: I upload a founder's photo and have a model predict whether they will become an outstanding founder.
Tim Ferriss: Oh, wow.
Elad Gil: Because when you think about it, we've actually been doing this when we meet people. We quickly try to make a judgment about someone: What is their personality like, what kind of person are they.
There are many subtle features. For example, crow's feet at the corners of your eyes may suggest whether your smile is genuine. What does that imply about your sense of humor? Or, if you frown a lot, what does that mean?
There are many of these subtle features. When you meet someone, you quickly form an initial impression of them. Of course, this doesn't mean it's always accurate. But as human beings, we do indeed do this very rapidly.
So I've been playing with a whole set of these signals just for fun. The question is: can you extrapolate someone's personality from a few photos? And if so, can you somewhat predict their behavior? I find that very interesting.
Tim Ferriss: Yes. Have you discovered any signals in it yet? Or are you still unsure?
Elad Gil: Actually, the results are not bad. I've been doing some really weird tests recently, like with shirts, right?
Tim Ferriss: Yes, practicing observing people's smiles.
Elad Gil: Yes, exactly.
But I find it very interesting, because we've always been reading people.
