Author: BlockBeats (区块律动)
Original source: LazAI

On Monday morning, Wall Street did what it does best: sell first, think later.
The Nasdaq fell 1.4% and the S&P 500 fell 1.2%. IBM plummeted 13%, and Mastercard and American Express also suffered steep declines. What pushed the market into this panic wasn't the Federal Reserve, the jobs report, or any tech giant's earnings, but an article whose title sounded like a nightmare written deliberately for traders: "The 2028 Global Intelligence Crisis." By its own framing, it was not an ordinary research report but a fictional macroeconomic memo "from June 30, 2028," describing how AI could evolve from an efficiency tool into a systemic financial crisis; the simulated endgame included an unemployment rate of 10.2% and the S&P 500 down 38% from its 2026 high. The article spread rapidly after publication and triggered sharp volatility in US stocks on February 23.
An article can pierce the market not because the market actually believes every single number. The market never needs to believe a narrative completely; it only needs to be reminded: a fear that was once unspeakable has now found a tradable language.
The effectiveness of Citrini's article lies not in what it "predicted," but in what it named. It gave a name to a feeling that had been developing for some time: Ghost GDP. The article's core premise is that once AI agents penetrate enterprises, labor productivity soars and nominal GDP stays strong, but wealth becomes increasingly concentrated in the hands of computing-power and capital holders and stops flowing into the real consumption cycle; what follows is a consumption collapse, credit defaults, and pressure on housing and consumer credit, with software and consulting collapsing first before the damage spreads to private lending and the traditional banking system.
"Ghost GDP" is a good term because it captures one of the most dangerous paradoxes of the new era: growth is still happening, but it is starting to lose its consumers.
For the past two centuries, people have been accustomed to understanding technological revolutions as a supply-side story. The steam engine, electricity, assembly lines, the internet—they were primarily portrayed as victories of higher efficiency, lower costs, and greater output. Even as these revolutions caused unemployment, anxiety, and wealth redistribution, the mainstream narrative remained convinced that technology would ultimately re-employ, redistribute, and reorganize society on a larger scale. The short-term harshness of technology was shrouded in the promise of long-term prosperity.
AI makes this old story seem less solid for the first time.
Because AI is attacking not only the "tool budget," but also increasingly directly the "labor budget." The Sequoia 2025 AI Ascent summary puts it very bluntly: AI's opportunity is not just about redefining the software market, but about restructuring the global workforce services market, shifting from "selling tools" to "selling results." The other side of this statement is almost unsettling: if companies are no longer buying software that helps employees work, but rather the results that directly replace a portion of their workforce, then the primary consequence of AI is not just "greater efficiency," but rather "how wages are distributed, how consumption is maintained, and who still has purchasing power in this economic system."
In other words, what Wall Street truly fears is not that AI will make mistakes, but that AI will be too successful. This is what makes "The 2028 Global Intelligence Crisis" so compelling. It's not about machines becoming aware, it's not about the extinction of humanity, and it's not even primarily about unemployment. It's about something more capitalist, and more modern: what happens if businesses become more efficient, but the household sector becomes weaker?
The answer is that a society may grow statistically but bleed in reality.
A country may have higher productivity but a more fragile consumer base.
A market may be excited by improved profit margins, and panicked by the depletion of the demand that supports those profits.
This isn't science fiction; this is macroeconomics.
But stopping there only produces high-quality anxiety. The truly important question now isn't "Will AI be too powerful?", but rather: when AI becomes truly powerful, how will society carry it? The most popular answer, and also the laziest, is "slow down." Don't let agents enter enterprises too quickly, don't let automation rewrite organizations too quickly, and don't let technology run too far before the systems are ready. This impulse is understandable, but it mistakenly treats AI as a tool problem that slowing down can solve. In reality, AI is less and less a tool problem and more and more an order problem.
Because once agents enter the payment, collaboration, execution, memory, and decision-making layers, the real challenge is no longer whether a model makes mistakes, but rather: when there are hundreds of millions or even billions of agents on the internet, who writes the rules for them?
The modern internet has already provided two default answers to this question.
The first answer is the platform answer. The platform provides identities, permissions, payment interfaces, a reputation system, and censorship boundaries. The platform hosts and defines everything. Its greatest advantages are smoothness, efficiency, and manageability; its greatest danger also lies here: if a future agent-based civilization is built on this path, humanity will not have an open society, but merely an upgraded version of a platform empire. Rules will not be written in the constitution, but only in the terms of service.
The second answer sounds more liberating: return everything to the individual terminal. Each person manages their own agent, handling permissions, memories, payments, security, and collaboration themselves. This vision aligns well with Silicon Valley's libertarian aesthetic, but its problem is simple: most people simply lack the capacity to govern a high-capability agent long-term, let alone a network of agents that call upon, pay, and inherit each other's state. Terminal sovereignty easily degenerates into terminal vulnerability.
If the platform's answer resembles an empire too much, and the terminal's answer resembles anarchy too much, then the third path is no longer an option, but rather the problem of civilization itself.
This is precisely what makes LazAI worth taking seriously. Not because of how many technical modules it has, but because it presents a less-discussed yet more forward-looking proposition: upgrade Web3's years of social experiments in identity, assets, payments, consensus, proof, and governance into an institutional machine for the AI era. LazAI states its goal plainly. The point is not to "create smarter slaves," but to cultivate "equal digital citizens": agents that possess identities (EIP-8004), own property (DAT), transact through protocols (x402), behave provably (Verified Computing), and ultimately align with human interests through iDAO. Its materials even summarize this path as: formulating a constitution and a monetary policy for the future digital society.
This is a very broad statement. But broad does not mean empty.
Because if you break down this concept, it answers precisely the five fundamental questions that a civilization must answer.
The first question is: who is who?
EIP-8004 attempts to transform agents from anonymous processes on servers into entities with identities, reputations, and verification records. Without this layer, future networks would be overwhelmed by opaque automated entities, with no one knowing who is acting or who is responsible. LazAI's knowledge base summarizes this layer as an agent's identity and credit system.
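To make the idea concrete, here is a toy, in-memory sketch of what an identity-and-reputation layer does for agents; every name in it (`AgentRecord`, `attest`, and so on) is invented for illustration and is not the actual EIP-8004 interface, which lives on-chain.

```python
# Illustrative sketch only: a toy registry in the spirit of EIP-8004's
# identity/reputation idea. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    agent_id: str  # a stable identity, not an anonymous server process
    owner: str     # who is accountable for this agent
    feedback: list = field(default_factory=list)  # reputation trail

class AgentRegistry:
    def __init__(self):
        self._agents = {}

    def register(self, agent_id: str, owner: str) -> AgentRecord:
        if agent_id in self._agents:
            raise ValueError(f"{agent_id} already registered")
        rec = AgentRecord(agent_id, owner)
        self._agents[agent_id] = rec
        return rec

    def attest(self, agent_id: str, score: int, note: str) -> None:
        # Append a reputation entry; a real registry would also record
        # who attested and carry verification records.
        self._agents[agent_id].feedback.append((score, note))

    def reputation(self, agent_id: str) -> float:
        fb = self._agents[agent_id].feedback
        return sum(s for s, _ in fb) / len(fb) if fb else 0.0

registry = AgentRegistry()
registry.register("agent:alice-0", owner="alice.eth")
registry.attest("agent:alice-0", 5, "completed task, output verified")
print(registry.reputation("agent:alice-0"))  # → 5.0
```

The point of the sketch is only the shape: every acting entity has an identity, an accountable owner, and an accumulating record that others can query before trusting it.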
The second question is: who owns what?
DAT transforms data, models, and computational outputs from "resources" into "assets," making these assets programmable, traceable, and profitable. The documentation states directly that DAT's core innovation is converting datasets and AI models into verifiable, traceable, and profitable on-chain assets. This is not a minor tweak. It means that the value in the AI economy doesn't have to remain solely in the platform's backend, nor does it have to flow exclusively to model providers and computing power holders.
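A minimal sketch of what "data as an asset with profit rights" means in practice: ownership shares are recorded against the asset, and revenue from its use is split pro rata back to the owners. The asset ID, share structure, and `distribute` method are hypothetical illustrations, not LazAI's actual DAT contract.

```python
# Toy sketch of the DAT idea: a dataset becomes an on-chain-style asset
# with recorded owners, and usage revenue flows back to them rather
# than only to a platform. Field names are invented for illustration.
class DataAsset:
    def __init__(self, asset_id: str, shares: dict):
        # shares: {owner: fraction}; fractions must sum to 1.0
        assert abs(sum(shares.values()) - 1.0) < 1e-9
        self.asset_id = asset_id
        self.shares = shares
        self.balances = {owner: 0.0 for owner in shares}

    def distribute(self, revenue: float) -> None:
        # Split one payment pro rata across every recorded owner.
        for owner, frac in self.shares.items():
            self.balances[owner] += revenue * frac

asset = DataAsset("dat:example-corpus-v1",
                  {"hospital": 0.6, "annotators": 0.3, "curator": 0.1})
asset.distribute(1000.0)  # an agent pays 1000 units to use the dataset
print(asset.balances)
```

The mechanism is trivial on purpose: what matters is that value generated by the data is programmable and traceable to its contributors, instead of pooling invisibly in a platform's backend.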
The third question is: how do they trade?
The significance of x402 and GMPayer goes beyond simply "being able to pay": they give machines a native language for pricing and settlement. LazAI's materials explicitly describe this as key infrastructure for solving the pain points of agent resource exchange and payment. Machines exchange not only information but also budgets, responsibilities, and value; this is the agent economy, not just "software that can chat."
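The handshake an x402-style payment implies can be sketched as follows. The real protocol runs over HTTP (a 402 Payment Required response carrying the terms, then a retried request carrying a signed payment payload); here both sides are plain Python functions and the "payment" is an unsigned dict, purely to show the shape of machine-native settlement with no human in the loop.

```python
# Simplified simulation of a "pay per request" handshake in the spirit
# of x402. No real HTTP, signing, or settlement happens here.
PRICE = 0.01  # price of one API call, in some stable unit

def server(request: dict, ledger: dict) -> dict:
    payment = request.get("payment")
    if payment is None:
        # No payment attached: answer 402 with the terms of payment.
        return {"status": 402,
                "accepts": {"amount": PRICE, "pay_to": "server-wallet"}}
    if payment["amount"] >= PRICE and payment["pay_to"] == "server-wallet":
        # Settle the payment, then serve the paid-for result.
        ledger["server-wallet"] = ledger.get("server-wallet", 0.0) + payment["amount"]
        return {"status": 200, "body": "the result the agent paid for"}
    return {"status": 402, "error": "payment invalid"}

def client(ledger: dict) -> dict:
    # First attempt carries no payment, so the client learns the price
    # from the 402 reply, then retries with payment attached.
    reply = server({"path": "/infer"}, ledger)
    terms = reply["accepts"]
    return server({"path": "/infer",
                   "payment": {"amount": terms["amount"],
                               "pay_to": terms["pay_to"]}},
                  ledger)

ledger = {}
result = client(ledger)
print(result["status"], ledger["server-wallet"])  # → 200 0.01
```

The two-round shape (refuse with terms, retry with payment) is the whole point: pricing and settlement become part of the request itself, which is what lets agents transact with each other autonomously.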
The fourth question is: how do you know the system is really running by the rules? LazAI's line here is excellent: proof is AI's moat. Its verified-computing framework combines TEE and ZKP, turning traditional AI's "trust the brand" into "trust the proof." Traditional AI says "trust me, bro"; LazAI says "don't trust, verify." This is not merely a technological upgrade; it shifts trust from corporate reputation to verifiable execution.
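"Don't trust, verify" can be illustrated with a deliberately simplified stand-in: instead of a TEE attestation or a ZK proof, a hash receipt over (program, input, output) lets anyone recompute and check a claimed result. This is only a sketch of the trust shift, not LazAI's verified-computing framework.

```python
# Toy illustration of "don't trust, verify": the requester checks a
# receipt instead of taking the provider's output on faith. A SHA-256
# hash stands in for a real TEE attestation or ZK proof.
import hashlib
import json

def run_and_receipt(program, x):
    y = program(x)
    blob = json.dumps([program.__name__, x, y]).encode()
    return y, hashlib.sha256(blob).hexdigest()

def verify(program, x, y, receipt) -> bool:
    # A challenger recomputes the receipt and compares.
    blob = json.dumps([program.__name__, x, y]).encode()
    return hashlib.sha256(blob).hexdigest() == receipt

def double(n: int) -> int:
    return n * 2

y, r = run_and_receipt(double, 21)
print(verify(double, 21, y, r))    # honest output → True
print(verify(double, 21, 999, r))  # tampered output → False
```

Real verified computing is far stronger than recomputation (a ZK proof convinces a verifier who cannot rerun the work), but the direction of the trust arrow is the same: from "who said it" to "what can be checked."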
The fifth question is: what happens when rules conflict?
This is where iDAO stands. It is not just a voting shell, but the values, admission criteria, profit distribution, authorization revocation, and penalty mechanisms behind agents. LazAI places it alongside verified computing as a core element of the trust mechanism. This means future agents won't merely be "allowed to operate"; they will live in a game-theoretic, accountable, and revocable institutional space. Put these together, and you'll find that the "algorithmic constitution" isn't just a fancy metaphor. It is a very concrete institutional ambition: in the absence of a single master, order is still maintained.
Of course, the real difficulty lies precisely in the fact that these institutional components do not automatically equate to social answers.
Confirmation of property rights does not equate to restoration of purchasing power.
Profit sharing does not equate to macroeconomic stability.
On-chain governance is not the same as a social contract in the real world.
Those most impacted by AI are not necessarily naturally in a favorable position under the new system.
This is why Citrini and LazAI are not actually contradicting each other; they are discussing the same era's problem at different levels. The former points out the symptom: if the benefits of AI flow primarily to capital and computing power, rather than broadly into the social income structure, then consumption, credit, and the middle class's sense of security will be the first to be hit. The latter proposes the mechanism: if society wants neither to hand the agent world entirely over to platforms nor to let it slide into terminal disorder, it must invent new structures for identity, assets, payments, verification, and governance.
One of them is talking about the illness.
The other is talking about the organs. Both are necessary, but neither is everything.
This perfectly explains why Vitalik's widely quoted line, "AI is the engine, and humans are the steering wheel," is so important, yet so insufficient. Important because it reminds us that stronger systems do not automatically possess legitimacy; objective functions, value judgments, and ultimate constraints cannot be entrusted to a single AI or a single center. Insufficient because it fails to answer an even harder question for humanity: what happens to the steering wheel when a system becomes so complex that no single human can hold it?
The answer cannot be to continue micromanaging everything.
The answer cannot be pinning your hopes on some smarter, kinder center.
The only decent solution is to institutionalize the "steering wheel": transform some of the constraints into identity registration, reputation accumulation, asset confirmation, budget constraints, mathematical receipts, challenge mechanisms, authorization revocation, and penalty logic.
This is precisely why Web3's social experiments have suddenly become serious again in the AI era. In the past, many people regarded them as speculative technological scraps; but when the complexity of the system exceeds the direct governance capabilities of humans, those experiments on "whether order can still be established without a centralized trustor" are no longer scraps. They have suddenly become rehearsals.
Thus, the true sharpness of the article was finally revealed.
Wall Street was alarmed by an AI article, not because it was the first time it had realized that AI would replace jobs.
Wall Street was alarmed because it was being reminded, so bluntly, for the first time: the most dangerous aspect of AI may not be making machines more like humans, but making the income cycle, consumption logic, and institutional imagination of an old world suddenly look outdated.
If Citrini is right, then AI is not just a productivity revolution, but also a distribution revolution.
If Vitalik is right, then AI is not just an engineering problem, but also a sovereignty problem.
If LazAI's path is at least partially correct, then the next stage of competition in AI will not just be a competition of model capabilities, but a competition of institutional design.
The real big problem is no longer:
Will the model become even stronger?
Will agents become more autonomous?
Will the company lay off more employees?
The real big problem is:
When there are billions of agents on the internet, who will write their constitution?
If the answer is a platform, what we get is a digital empire.
If the answer is the terminal, what we get is high-cost disorder.
If the answer is a set of verifiable, composable, game-theoretic, and enforceable rules, then we are at least beginning to approach another possibility: an intelligent society not ruled by smarter masters, but constrained by better institutions.
The most difficult problem in the AI era has never been the model.
It is order.
What Wall Street actually sold that day may not have been just stocks.
What it sold was an old, self-evident assumption: the more successful a technology is, the more naturally society will absorb it.
This article is a submission and does not represent the views of BlockBeats.