AI models are getting cheaper, so why is VVV becoming more valuable?

Bitsfull · 2026/05/12 15:21

Summary:

As the model approaches free, value will shift to the user


Editor's Note: VVV's recent market performance has pushed Venice to the forefront of the AI x crypto narrative. CoinMarketCap shows the Venice token's latest price at around $17.28, up about 19% over 24 hours, with a circulating market cap of approximately $795 million; CoinGecko shows a gain of over 60% in the last 7 days and a market cap of around $694 million. (With roughly 46 million VVV circulating, per the staking figures below, the caps are in the hundreds of millions, not billions.) Together these point to one fact: the market is once again paying attention to this "privacy AI + tokenomics" project.


But what this article truly discusses is not VVV's short-term gains, but a more foundational issue: when model capabilities are rapidly commodified, where will the value of AI platforms ultimately settle?


The author's core argument is that leading AI labs such as OpenAI and Anthropic are caught in an "equity structure trap": their valuations are built on the premise that the model layer stays scarce and commands a high premium over the long term. But the rise of Chinese open-source models, low-cost training, the open-weight ecosystem, and cloud deployment is rapidly driving down the price of model capability itself. In other words, the most expensive part of the AI industry may be turning into the part where profit is hardest to retain.


In this framework, the author reads Venice as the inverse structure: it does not train models but harnesses open-source model capability; it does not rely on centralized data retention but emphasizes privacy and TEE proofs; it does not turn users into training data but, through mechanisms such as VVV staking, subscription burns, and DIEM compute rights, makes users part of the platform economy. What the author really wants to say is that Venice is not a "tokenized AI application" but an experiment in redefining the consumer-software relationship through tokens.


What is most worth watching is not whether Venice can directly challenge OpenAI, but whether the AI market is splitting in two: one half continuing to serve customers willing to pay for frontier models and to accept enterprise-grade compliance and data retention; the other turning to "good enough" open-source capability, with more weight on privacy, censorship resistance, low cost, native agent access, and user ownership. If that split happens, Venice's opportunity lies not in winning the whole model war but in becoming the inference layer and settlement rail of the open agent economy.


This article, then, is a classic multi-pronged structural argument: not just a bet that VVV's price rises, but a bet that the curves of model-layer commoditization, open-source catch-up, the rise of agent payments, and the user-ownership economy all converge at once.


The risk lies in the same place: if open-source progress slows, if token burns cannot sustainably keep pace with growth, or if Venice fails to truly lock in its user relationships, this narrative will be re-evaluated. But at this stage at least, VVV's market performance shows that the market is willing to pay a premium for this "same demand, opposite economic model" story.


The following is the original text:


These labs are pouring in hundreds of billions of dollars to defend a moat that is evaporating in real time. GLM-5.1 beat GPT-5.4 on the toughest programming benchmark; it is open-source, MIT-licensed, and trained on Chinese hardware the U.S. is trying to block. The cost of training frontier capability has dropped by about 95% in eighteen months. Every dollar of OpenAI's $852 billion valuation is built on one assumption: that none of this matters. It does. And Venice is the only consumer-grade AI platform whose economic structure directly benefits when all of this is finally repriced by the market; even if that repricing never comes, its investment thesis still stands.


The core argument of that April article was that Venice holds a unique position in the intelligence economy. That judgment still holds: usage has tripled, cumulative burns have surpassed 42% of genesis supply, DIEM has repriced 75% in six weeks, and the token price has more than doubled since my deep dive.


But the "Seven Levers of Value" framework I presented in April may have underestimated what is happening. Venice is not an AI company with a privacy label that happens to have issued a token. It is a new economic structure for consumer software: users are owners, the platform is the rails, and value is priced not in equity but in compute rights.


This structure is not a stack of features but the only configuration that can survive the coming changes at the model layer. Whatever the bubble is built on, Venice stands on the opposite side of it. Same market, same demand, an exactly opposite economic model. This is the mirror.


This is the argument I didn't make clear in April. Now I am making it.


The Equity Structure Trap


OpenAI, Anthropic, and Together AI share one thing that has nothing to do with their products: their investors expect dollar-denominated equity returns at the multi-billion-dollar level, delivered on an accelerated timeline.


It all sounds mundane until you follow the logic through.


OpenAI's $852 billion valuation implies annual revenue of roughly $200 billion to $280 billion by 2030 to support the multiple. The company currently books $2 billion in monthly revenue against a projected $13.5 billion loss for the first half of 2025. Meanwhile, inference costs have quadrupled to $8.4 billion, dragging adjusted gross margin from 40% to 33%. Compute and talent consume 75% of total revenue, and Microsoft takes another 20% through 2032. OpenAI expects compute spending to reach $121 billion by 2028, with an $85 billion loss in that year alone, and profitability possibly only after 2030.


Anthropic is in the same trap at a different scale: a $380 billion valuation, a $30 billion ARR run rate, and projected training costs of $42 billion by 2029. Google pledged $40 billion last month and Amazon injected another $25 billion, though both largely recycle through cloud-service credits rather than genuine equity capital. The top five hyperscale cloud providers have committed $660 billion to $690 billion to AI infrastructure in 2026 alone. Goldman Sachs estimates spending between 2025 and 2027 will reach $1.4 trillion, roughly three times the 2022-2024 total. Sam Altman himself has signed $1 trillion in AI deals while OpenAI's revenue stands at just $13 billion.


These are not ordinary companies. They are sovereign-scale infrastructure bets disguised as software firms. Their valuations demand that the model layer stay prohibitively expensive. The reality is that the model layer keeps getting cheaper.


The Decoupling


In the past 60 days, the relationship between AI capital spending and AI capability has broken. Three open-weight model releases illustrate it.


Z.ai's GLM-5.1, released April 7, scored 58.4 on SWE-Bench Pro, surpassing GPT-5.4's 57.7 and Claude Opus 4.6's 57.3. It is open-sourced under the MIT license and trained entirely on Huawei Ascend chips with no NVIDIA hardware; Z.ai itself is on the U.S. Entity List, barred from buying H100s. Its API is priced at $1 per million input tokens and $3.20 per million output tokens, 5 to 8 times cheaper than Claude Opus's $5/$25.


Moonshot's Kimi K2.6, released April 20, is now the top-ranked open-weight model on the Artificial Analysis Intelligence Index, scoring 54 against the frontier closed labs' 57. It beat GPT-5.4 on HLE-with-tools, 54.0 to 52.1, and scored 80.2 on SWE-Bench Verified, nearly matching Claude Opus's 80.8. Cloudflare prices it at $0.95 per million input tokens and $4 per million output tokens, roughly 15 times cheaper than Claude Opus under heavy load. Kimi K2's original training run cost just $4.6 million.


DeepSeek V4-Pro, released April 24, ranks second on the Intelligence Index, just behind Kimi K2.6, surpassing every model except the frontier closed labs' top three. It is released under the MIT license. DeepSeek V3's training run cost $5.6 million.


Three Chinese labs in 60 days, all open-source, all reaching or surpassing the frontier on at least one major benchmark, all priced 5 to 15 times cheaper, one running on sanctioned hardware. The capability that supported OpenAI's 2024 valuation can now be downloaded free from Hugging Face, deployed on rented hardware, and it improves every quarter.


This is not some "China AI moment." This is structural arbitrage at the model layer, happening in real time. A March 2026 academic paper states it plainly: "Pre-training scale has decoupled from frontier AI capabilities." Chinese open-source models' share of global usage has grown from 1.2% in 2025 to 30%. Apple is evaluating DeepSeek, Qwen, and Doubao for iOS 27. AWS, Azure, and Google Cloud all offer DeepSeek deployments. 80% of startups now pitching VCs are built on open-source models. Meta releases its Llama series deliberately to commoditize the model layer; when a $1.6 trillion company is the most determined deflator in your market, it is clear where margins are headed.


Every dollar of OpenAI's $852 billion valuation assumes these changes do not matter. It assumes enterprises will pay a per-token premium for frontier capability indefinitely, even though GLM-5.1 offers comparable capability at one-eighth the price; that Kimi K2.6's open weights do not matter; that DeepSeek selling at under 3% of frontier pricing does not matter. It assumes these labs can grow revenue 10x and expand margins in a market where competitors give the product away.


Sapphire Ventures' Jai Das calls OpenAI the "Netscape of the AI era." Mark Zuckerberg has publicly acknowledged AI bubble dynamics. In March, the Pentagon formally designated Anthropic a supply-chain risk because it refused to let Claude be used for mass surveillance and autonomous weapons, while OpenAI and Google signed "all lawful uses" agreements to avoid the same fate. Centralized AI companies are subject to government coercion, and their architecture cannot resist it. Venice's architecture can.


These labs are not blind to the problem. They just cannot pivot. The investors who wrote checks at an $852 billion valuation did not buy a future in which models are commoditized. They bought a future in which models command a permanent premium. Those are two entirely different companies, and for the latter to materialize, the former's valuation must first be written down.


That's the trap. The problem is not the refusal stack or the logging architecture. The real problem is that the only investors who could tolerate an economic structure like Venice's are precisely the ones who already hold VVV.


Not One Market, but Two Markets


From here, this argument no longer requires a bubble burst to hold.


Assume these labs scrape through. Assume GPT-6 remains best in class, Claude Opus 5 keeps its lead in reasoning, and Gemini still holds the multimodal frontier. Assume enterprise contracts last long enough for these companies to refinance and outlast their valuation pressure.


That doesn't matter either. The market will split.


Frontier intelligence accounts for only a small fraction of total inference demand. The vast majority of real workloads (coding assistance, writing, analysis, image generation, video, agent execution, customer service, research, summarization) crossed the "good enough" threshold months ago. GLM-5.1's coding in production is now equivalent to GPT-5.4's. Kimi K2.6 runs agents on par with Claude Opus 4.6. DeepSeek's general reasoning matches anything outside the absolute top tier. For 80% of real-world demand, the open-weight ecosystem is already sufficient, and it improves every quarter.


These demands do not need stronger intelligence; they need attributes the labs structurally cannot provide: privacy, uncensored output, accountless operation, no logging, native agent access, predictable cost, and user ownership. The labs serve the small segment willing to pay enterprise prices and accept surveillance for frontier capability. Venice serves everyone else, which happens to be the larger, faster-growing half of the market.


The bull case: these labs collapse and Venice takes the whole market. The base case: the market splits and Venice sits on the larger side. Even in the bear case, where the labs dominate frontier capability indefinitely and no repricing event ever comes, Venice remains one of the few consumer AI platforms able to serve the 80% of inference demand that neither needs frontier capability nor can stomach the labs' business model.


This argument does not require a meltdown. It merely requires the open-source curve to continue along the path it has already taken.


Why does Venice capture this larger half of the market? Not because it is destined to win all of it; it may, but the structural answer is simpler than that.


Venice is the only consumer AI platform that lets users own the rails they use. Stake VVV and earn rewards plus lifetime Pro access. Lock sVVV to mint DIEM and own a durable compute right that appreciates as inference costs are commoditized. Every paying user feeds a burn flywheel that compounds every other user's position. This is not a feature; it is an entirely different relationship between consumer and product, one Big AI cannot offer because its ownership structure has no room for users-as-owners.


Look again at what users actually need and the labs cannot offer. Privacy that is not a policy but verifiable TEE proofs, zero retention, and an architecture where nothing can be shut down. Uncensored output, because 99% of intelligence use cases should never need to clear an enterprise brand-safety council. Open-source frontier models live within days of release, because Venice has no moat to defend that depends on the model layer staying expensive forever. Autonomous agent access (self-sovereign API keys, x402 wallet payments, no human in the loop), because the agents being deployed today cannot work any other way.


Each of these forces compounds independently. Privacy demand grows as data leaks mount and regulation tightens. Anti-censorship demand grows as users tire of "brand-safe" AI that refuses everyday tasks. Open source closes the gap to "good enough" every quarter. Agents are doubling their share of total inference demand. None of these forces points toward the labs. They all point toward Venice.


The Mirror


Venice is built on the inverse of every bubble assumption, and many of its properties look accidental until you see the whole picture.


Zero Training Cost. Venice has never spent a dollar training models. Every release from Llama, Qwen, Mistral, GLM, DeepSeek, and Kimi is a free upgrade. While the labs spend hundreds of billions to hold a lead measured in months, Venice rides their paid-for curve at zero cost. When GLM-5.1 launched at one-eighth of Claude's price, that was a margin-expansion event for Venice; for companies charging a premium for equivalent capability, it was existential.


Zero Retention Liability. At the labs, privacy is a policy commitment; at Venice, it is a mathematical structure. OpenAI's enterprise tier does not train on customer data by default, and customers can set retention windows, but at inference time prompts still flow through OpenAI's servers and can be accessed by authorized staff for abuse investigations, support, and legal matters. Policies change. Vendors get breached: in November 2025, Mixpanel leaked API customers' names, emails, and organization IDs via an SMS-phishing compromise. Runtime data leaks through new vulnerabilities: Check Point disclosed a ChatGPT flaw in March that silently exfiltrated conversation content through a DNS side channel. Even with zero retention written into the contract, the architecture still runs on trust. Venice's TEE proofs turn the privacy promise into a cryptographic guarantee: the secure enclave processes the prompt, returns the result, proves the execution, and discards the input. Venice cannot see your data because the architecture does not allow it. That is not a privacy policy; it is a liability moat that strengthens every time data regulation tightens.
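The "process, prove, discard" pattern can be mocked conceptually. This is a schematic only: real TEE attestation uses hardware-signed quotes (e.g. Intel TDX/SGX measurements), and the hash below merely stands in for that proof; the function names and the output format are my own invention.

```python
# Conceptual mock of TEE-style "process, prove, discard" inference.
# A real enclave would emit a hardware-signed attestation quote; the
# SHA-256 digest here is only a stand-in for that proof.
import hashlib

def enclave_inference(prompt: str) -> tuple[str, str]:
    result = f"completion-for:{len(prompt)}-chars"  # stand-in for model output
    # Attestation stand-in: binds this execution to a digest without
    # persisting the prompt; nothing is written to any log or store.
    attestation = hashlib.sha256((prompt + result).encode()).hexdigest()
    del prompt  # input discarded after use; no retention path exists
    return result, attestation

result, proof = enclave_inference("summarize this document")
print(result, proof[:16])
```

The point of the pattern is that the operator never holds the plaintext outside the enclave, so there is nothing to retain, subpoena, or leak.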


Token Appreciation Mechanically Bound to Usage. Every paid request buys VVV on the open market and burns it. Tiered subscription burns scale with revenue: roughly $2 of burn per Pro subscription, $5 per Pro+, $10 per Max. Emissions have been cut five times in the past 18 months and are slated to halve again before midsummer. 42% of genesis supply has been burned. No allocation goes to investor returns, because there are no investors. Every dollar of revenue compounds back into the asset stakers hold.
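The burn-versus-emission mechanics can be sketched numerically. Everything below other than the $2/$5/$10 tier burns from the article (subscriber counts, VVV price, emission rate) is a hypothetical placeholder, not an actual Venice figure:

```python
# Hypothetical sketch: does monthly burn outpace monthly emission?
# Tier burn amounts follow the article; all other numbers are illustrative.

def monthly_net_supply_change(subs, burn_usd_per_tier, vvv_price_usd, monthly_emission_vvv):
    """Net VVV supply change for one month: emissions minus tokens bought and burned."""
    burn_usd = sum(subs[tier] * burn_usd_per_tier[tier] for tier in subs)
    burned_vvv = burn_usd / vvv_price_usd       # revenue buys VVV on the open market
    return monthly_emission_vvv - burned_vvv    # > 0 inflationary, < 0 deflationary

subs = {"pro": 100_000, "pro_plus": 20_000, "max": 5_000}       # hypothetical counts
burn_usd_per_tier = {"pro": 2.0, "pro_plus": 5.0, "max": 10.0}  # article's tiers
net = monthly_net_supply_change(subs, burn_usd_per_tier,
                                vvv_price_usd=17.0, monthly_emission_vvv=15_000)
print(f"net monthly supply change: {net:+,.0f} VVV")
```

Under these made-up inputs the burn exceeds the emission, which is the "net deflation" crossover the article describes; with different subscriber counts or a different emission schedule the sign flips.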


Users Are an Asset Class, Not a Product. Nobody quite says this out loud. On centralized platforms, users generate data, data becomes training input, and training input becomes the moat: users are the product. On Venice, users spend tokens through staking, subscriptions, and inference fees; the tokens are burned, which strengthens every holder's position: users are the asset. The economic vector points in exactly the opposite direction from almost every other consumer software business in the world.


DIEM Is an AI-Backed Fixed-Income Instrument. One staked DIEM equals a $1 daily inference allowance that renews automatically, in perpetuity. It trades on Aerodrome and can be burned to unlock the original sVVV stake. While locked, it also earns roughly 80% of the regular VVV staking reward. This is not an ordinary token; it is a fixed-income instrument backed by AI infrastructure. As the underlying compute commoditizes, each DIEM buys more inference every year while the nominal stake stays fixed. The labs issue equity against a depreciating asset; Venice issues a perpetual claim against an appreciating one.


Put it all together and you do not get a "crypto-flavored AI company." You get an entirely different form of consumer software, in which every economic relationship between user and platform is mediated by an asset the user owns, prices, trades, and earns from. And these properties hold whether or not the labs survive. They are not one bet; they are structural advantages that compound in any macro environment.


Why Now


The age of the agent economy is dawning just as these labs run out of financial runway.


x402 transaction volume through Coinbase's Agentic Wallets has passed 165 million. Google's AP2 launched with over 60 partners. Visa released the Trusted Agent Protocol. Mastercard acquired stablecoin infrastructure for $1.8 billion, the largest stablecoin deal in history. Coinbase launched Agent.market in April, with 69,000 active agents trading on it. McKinsey projects that agent-mediated consumer commerce will reach $30 trillion to $50 trillion by 2030.


Every one of those agents needs an inference provider, and none of them can seriously use OpenAI or Anthropic. The labs' compliance architecture demands KYC; their revenue model demands logging; their content policies demand refusals. Agents cannot fill out signup forms, type in CVVs, or agree to terms of service that may change next quarter. Coinbase's CEO put it bluntly: AI agents cannot pass KYC or use the traditional banking system.


So at the very moment the labs' core business is being arbitraged from below by Chinese open-weight models, the most important new demand category in AI infrastructure, autonomous agents, is structurally incompatible with their architecture. Agents accelerate the market split: high-end demand stays at the top, and everything else goes agent-native.


Venice serves both ends of this trade. The self-sovereign API key flow is live: an agent stakes VVV, signs a token, generates a key, and pays with DIEM, with no human in the loop. x402 wallet payments are live at every paypoint. One credential gives JSON-RPC access across 11 chains. Eliza, Fleek, OpenClaw, Hermes, and NanoClaw agents are all plug-and-play. The agents being deployed today run on Venice rails because no other option is permissionless, private, uncensored, and agent-native.
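The x402 pattern the article leans on is simple to sketch: a server answers an unpaid request with HTTP 402 and its payment requirements, and the client retries with a payment header. Everything below is a self-contained mock; the endpoint path, prices, and signing logic are mine, and a real x402 integration settles the payment on-chain via a facilitator.

```python
# Schematic of the x402 pay-per-request pattern: HTTP 402 carries payment
# requirements, and the agent retries with a signed payment header.
# Server and wallet are mocks; no network or chain is involved.
import base64, json

def mock_server(path, headers):
    price = {"/v1/inference": "0.001"}  # hypothetical USD price per call
    if "X-PAYMENT" not in headers:
        # 402 Payment Required, quoting what the server will accept
        return 402, {"accepts": [{"scheme": "exact", "maxAmountRequired": price[path]}]}
    return 200, {"result": "inference output"}

def mock_sign_payment(requirements):
    # A real agent wallet would sign an on-chain payment authorization here.
    payload = {"scheme": requirements["scheme"],
               "amount": requirements["maxAmountRequired"]}
    return base64.b64encode(json.dumps(payload).encode()).decode()

def agent_call(path):
    status, body = mock_server(path, {})       # 1. try without payment
    if status == 402:                          # 2. server quotes a price
        payment = mock_sign_payment(body["accepts"][0])
        status, body = mock_server(path, {"X-PAYMENT": payment})  # 3. retry, paid
    return status, body

print(agent_call("/v1/inference"))
```

The appeal for agents is that the whole loop is machine-executable: no signup form, no card entry, no human approval step.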


By the time agent-mediated commerce reaches the tens of trillions McKinsey projects, and whether or not the labs actually hit the wall built into their equity structures, Venice will already be the inference layer of this economy.


What Is Compounding


The April argument is no longer speculative. On April 7, daily volume hit 500 billion tokens and 1 million images. GLM-5.1, Kimi K2.6, and DeepSeek V4 all went live on Venice within days of release, with the privacy contract unchanged. DIEM's execution discount has repriced from 57% in early March to around 32% today; the market is repricing reliability, not incremental utility. Once the discount narrows below 20%, DIEM mechanically trades above $1,500. Staking inflows exceed $15 million. Over 32 million VVV is staked, locking about 70% of circulating supply. The tiered subscription burn went live in April and is producing meaningful monthly burns; at the current pace, even before the next emission cut, VVV turns net-deflationary in Q3.


Every judgment in that April article has either compounded or become clearer. None has weakened.


The April article framed Venice as the only platform combining seven specific advantages. That still stands. What I failed to make clear then is why: those seven advantages are not a feature stack. They are the form a consumer software company naturally takes when it does not have to meet venture-capital return requirements. What venture capital bought was equity in an asset about to be commoditized.


This market has two paths. In one, the labs are crushed by their own equity structures and Venice takes over the whole stack. In the other, the market splits: the labs keep the thin slice of high-end demand willing to pay enterprise prices and accept surveillance, and Venice owns everything else, the larger and faster-growing half, where "good enough" intelligence combines with privacy, uncensored output, native agent access, and user ownership.


Both paths end in the same place: Venice as the inference layer of the open intelligence economy. The thesis does not require a bubble to burst. It only requires the open-source curve to keep moving in the direction it is already moving, and it does, every quarter, faster than the market can update its model.


Venice is built on this bet. Three months ago I made the case at $2, when no one was listening. A month ago, at $8, some started to notice. Now, at $18, the market still has not fully priced the structural thesis; what remains unpriced is what happens when the two scenarios converge on the same answer.


The bubble is premised on the model layer holding a high premium. Venice compounds on the model layer trending toward free. Whether the bubble pops suddenly or deflates slowly, the trade ends in the same place.


Same market. Opposite economic models.


The labs cannot keep up. Compute providers cannot capture users. Protocols are being handed off to foundations. Value will concentrate, as it always does, in a few places: the brands people choose, the rails intelligence runs on, and the currency that prices it.


Venice is building the brand, running the rails, and issuing the currency.


The next chapter is not a victory lap. The real question is whether the structural thesis from the April article gets repriced as venture-backed comparables run out of road, or as the market organically cleaves around them.


From the current evidence, both are happening right on schedule.


Not investment advice. Please do your own research.


[Original Article Link]


