Time magazine named Anthropic one of the world's most disruptive companies.
The company most wary of AI has created the most dangerous AI.

Author: BlockBeats (区块律动)

Original title: The Most Disruptive Company in the World
Original authors: Leslie Dickstein, Simmone Shah, TIME
Compiled by: Peggy, BlockBeats


Editor's Note: From a late-night security test to a public confrontation with the Pentagon, this article documents the multiple tensions faced by AI company Anthropic during its rapid rise. The article first describes how the company internally assesses the potential risks of AI, then reviews Claude's technological breakthroughs and commercial success, and further presents its practical applications in the military field.


As the story unfolds, the author explores the controversy between Anthropic and the U.S. military over autonomous weapons, mass surveillance, and AI control from multiple perspectives, including those of the red team leader, engineers, executives, policy researchers, and government officials.


Through this event, the author ultimately raises an increasingly pressing question: as artificial intelligence begins to accelerate its own evolution and become deeply embedded in the structures of war, governance, and labor, who can still set its boundaries?


The following is the original text:



In a hotel room in Santa Clara, California, five members of the AI company Anthropic were working intently around a laptop. It was February 2025, and they had been attending a conference nearby when a disturbing message arrived: the results of a controlled experiment suggested that the upcoming new version of Claude could help terrorists create biological weapons.


These individuals belong to Anthropic's "frontier red team." Their job is to study Claude's advanced capabilities and attempt to simulate worst-case scenarios, ranging from cyberattacks to biosecurity threats. Upon receiving the news, they rushed back to their hotel room, flipped a bed to the side to use as a makeshift desk, and began carefully reviewing the test results.


After hours of intense analysis, they still couldn't determine whether the new product was safe enough. Ultimately, Anthropic decided to postpone the release of the new model, Claude 3.7 Sonnet, by a full 10 days, until the team confirmed that the risks were manageable.


Ten days may not sound like much, but for a company at the technological frontier of an industry that is rapidly reshaping the world, it feels like an eternity.


Logan Graham, head of the red team, recalled the "bioweapons scare" as a microcosm of the challenges Anthropic faces at a critical juncture—critical not only for the company but for the world at large. Anthropic is one of the most safety-conscious of the frontier AI labs. At the same time, it is on the front lines of the race to build ever more powerful AI systems. Within the company, many employees believe that if this technology gets out of control, it could set off a cascade of horrific consequences, ranging from nuclear war to human extinction.


Graham, 31, still looks somewhat youthful, but he never shies away from the responsibility of balancing the enormous benefits and risks of AI. He says, "Many people grow up in a relatively peaceful world and instinctively feel that there's a room with a group of mature adults who know how to fix things."


"But in reality, there was no 'adult group.' There wasn't even that room. And there wasn't the door you're looking for. The responsibility lies with you." If that's not enough of a wake-up call, consider how he recalled the biological weapons : "It was a pretty fun, pretty exciting day."


Image and illustration by Neil Jamieson for TIME (Source: Askell: Aaron Wojack; Getty Images, clockwise from top: Samyukta Lakshmi—Bloomberg; Brendan Smialowski—AFP; Tierney L. Cross—Bloomberg; Daniel Slim—AFP; Bridget Bennett—Bloomberg)


A few weeks ago, Logan Graham addressed these questions in an interview at Anthropic headquarters. A Time magazine reporter spent three days there, interviewing company executives, engineers, product managers, and security team members, trying to understand why this company, once considered a "maverick" in the AI race, had suddenly become a frontrunner.


At the time, Anthropic had just raised $30 billion from investors in preparation for a potential IPO this year. (Incidentally, Salesforce was also an investor in Anthropic, and Marc Benioff, owner of Time magazine, is Salesforce's CEO.) Today, Anthropic is valued at $380 billion, surpassing traditional giants like Goldman Sachs, McDonald's, and Coca-Cola.


The company's revenue growth has been nothing short of meteoric. Its AI system, Claude, is already considered a world-class model, and products like Claude Code and Claude Cowork are redefining what the profession of "programmer" means.


Anthropic's tools are so powerful that almost every new release sends shockwaves through the capital markets, as investors realize these technological advancements could disrupt entire industries—from legal services to software development. In the past few months, Anthropic has been considered one of the companies most likely to reshape the "future of work."


Subsequently, Anthropic became embroiled in a heated debate surrounding the "future form of warfare".


For over a year, Claude has been the U.S. government's preferred AI model and the first cutting-edge AI system approved for use in classified environments. In January 2026, it was even used in a daring operation: the capture of Venezuelan President Nicolás Maduro in Caracas. Reportedly, AI was used in mission planning and intelligence analysis during this operation, marking the first time cutting-edge AI was deeply involved in a real military operation.


However, in the weeks that followed, relations between Anthropic and the U.S. Department of Defense deteriorated rapidly. On February 27, the Trump administration announced that it was designating Anthropic a "national security supply chain risk"—the first known instance of the U.S. labeling a domestic company this way.


The situation quickly escalated into an open conflict. Trump ordered the US government to cease using all Anthropic software. Defense Secretary Pete Hegseth further announced that any company working with the government was prohibited from doing business with Anthropic. Meanwhile, Anthropic's biggest competitor, OpenAI, swiftly intervened, taking over the relevant military contracts.


Thus, this AI company, which was regarded as "the world's most disruptive company," suddenly found itself being disrupted by an even greater force—its own government.


The core issue in this standoff is: who has the right to set boundaries for this technology, which is considered one of the most powerful weapons in the United States?


Anthropic does not oppose the use of its tools in warfare. The company believes that strengthening U.S. military power is the only realistic way to deter national threats. However, CEO Dario Amodei opposes the Pentagon's attempts to renegotiate government contracts and expand the use of AI to "all lawful use."


Amodei raised two specific concerns: first, he did not want Anthropic's AI to be used in fully autonomous weapon systems; second, he opposed using its technology for mass surveillance of American citizens.


But according to Pete Hegseth (the U.S. Secretary of Defense) and his advisors, this position is tantamount to a private company trying to decide how the military fights.


The U.S. Department of Defense believes that Anthropic has actually weakened the cooperative relationship between the two sides by insisting on setting up "unnecessary security barriers," constantly discussing various hypothetical scenarios, and delaying the subsequent negotiations.


In the eyes of the Trump administration, Amodei was both arrogant and stubborn: no matter how advanced a company's products, it has no business interfering with the military chain of command.


Emil Michael, the Pentagon's Under Secretary of War for Technology, described the negotiations this way: "Things just dragged on. I couldn't possibly manage a department with 3 million people using exceptions that I couldn't even imagine or understand."


Illustration: Klawe Rzeczy for TIME. Image credit: Denis Balibouse—Reuters


From Silicon Valley to Capitol Hill, many observers are wondering: Is this turmoil really just a contract dispute?


Some critics argue that the Trump administration's actions resembled an attempt to suppress a company with differing political stances. Dario Amodei wrote in a later-leaked internal memo: "The real reason the Department of Defense and the Trump administration dislike us is that we didn't donate to Trump. We didn't lavish praise on him like a dictatorship (while Sam Altman did). We supported AI regulation, which conflicted with their policy agenda; we spoke the truth on many AI policy issues (such as job displacement); and we maintained our principles on key bottom lines instead of colluding with them in a so-called safety theater."


However, Emil Michael denied this claim, calling it "a complete fabrication." He stated that Anthropic was listed as a supply chain risk because the company's position could put frontline combatants at risk. "In the Department of War, my job is not to play politics; my job is to defend the country," he said.


Anthropic's traditionally unconventional corporate culture is now clashing head-on with domestic political divisions, national security concerns, and a fiercely competitive business environment. The extent of the damage this conflict has caused the company remains unclear. The initial threat of "supply chain risk" was later narrowed—according to Anthropic, this restriction now only applies to military contracts. On March 9, Anthropic sued the US government, attempting to overturn this "blacklist" decision. Meanwhile, some clients appear to view the company's stance as a moral statement, abandoning ChatGPT and flocking to Claude.


However, for the next three years the company will have to navigate an unfriendly government environment—some officials have close ties to Anthropic's competitors, who are openly hostile to the company.


This "Pentagon controversy" has raised some unsettling questions, even for a company accustomed to navigating high-risk ethical dilemmas. In this standoff, Anthropic did not back down: the company maintained that it had upheld its core values, even at a significant cost.


But it has made compromises on other occasions. In the same week that it confronted the Pentagon, Anthropic weakened a core clause of its model-safety commitments, arguing that its peers were unwilling to adhere to the same standards.


The question then arises: if competitive pressure continues to increase, what concessions will this company make in the future?


The stakes are rising. As artificial intelligence capabilities continue to improve, the competition over who will control AI will only intensify.


Claude's use in operations in Venezuela and Iran demonstrates that advanced AI has become a crucial tool for the world's most powerful militaries. Beyond these developments, a host of new pressures has emerged: the demands of national power, domestic politics, and national security. These pressures, in turn, fall on a for-profit company racing to deploy a highly volatile new technology.


In a sense, Anthropic's position resembles that of biologists in a laboratory: to find a cure, they must deliberately create dangerous pathogens in their experiments. Anthropic has taken on a similar role, actively probing AI's potential risks while continuing to push the technological frontier, rather than leaving that work to competitors more willing to cut corners or take reckless risks.


However, even as the company repeatedly emphasizes caution, it is using Claude to accelerate the development of more powerful future versions.


Within the company, many believe the next few years will be a pivotal period, not only for the company but also for the world at large.


"We should assume that the most critical thing will happen between 2026 and 2030, and the models will become faster and stronger, perhaps even faster than humans can handle," said Logan Graham, head of the red team (the team that is responsible for "finding vulnerabilities and simulating attacks").


Anthropic's safety team leader, Dave Orr, used a more direct analogy to describe the situation: "We're driving on a mountain road with a cliff edge. One mistake could be fatal. And now, we've increased our speed from 25 mph to 75 mph."


Anthropic's headquarters occupies the fifth floor of a San Francisco building and has a warm, restrained design: wooden accents and soft lighting. Outside the windows lies a lush green park. A portrait of computer-science pioneer Alan Turing hangs on the wall, alongside framed machine-learning papers.


Security personnel in black uniforms patrol the nearly empty entrance, and a friendly receptionist hands visitors a brochure about the size of a pocket Bible handed out by a street preacher. The booklet, titled *Machines of Loving Grace*, is a roughly 14,000-word essay Dario Amodei wrote in 2024, outlining his utopian vision of how AI could transform the world by accelerating scientific discovery.


In January 2026, Amodei published another article, "The Adolescence of Technology," which was almost a novella in length. It systematically elaborated on the other side of this technology: the risks it may bring, including mass surveillance, widespread job losses, and even permanent loss of human control over the technology.


Amodei, a biophysicist, grew up in San Francisco. He runs Anthropic with his sister, Daniela Amodei, who serves as the company's president. Both siblings were early employees of OpenAI. Dario helped propose a key discovery, the so-called AI scaling laws, a theory that later became an important foundation of the current AI boom. Daniela, for her part, managed the company's safety policies.


Initially, they believed they were aligned with OpenAI's founding mission: to develop a technology that was both highly promising and risky, while ensuring safety.


However, as OpenAI's model capabilities continued to improve, they gradually felt that Sam Altman was rushing to launch new products without allowing enough time for thorough discussion and testing. Ultimately, the siblings decided to leave OpenAI and start their own business.


"We're driving on a mountain road right on the edge of a cliff; one mistake could be fatal."


Dave Orr, Head of Security at Anthropic


In 2021, at the height of the pandemic, Anthropic was founded by the Amodei siblings and five other co-founders. Initial preparatory meetings were almost entirely held on Zoom; later, they moved chairs to a park to discuss the company's development strategy face-to-face.


From the outset, the company attempted to operate differently. Even before launching any products, Anthropic established a dedicated team to study social impact. The company even hired a philosopher-in-residence, Amanda Askell. Her job is to help shape the values and behavior of the AI system Claude, teaching it to make judgments amidst complex moral uncertainties and preparing it for a future that may surpass the intelligence of its human creators.


Askell described the work as follows: "Sometimes it really is a bit like raising a 6-year-old. You're teaching the child what is good and what is right. But the problem is, when they're 15, they're probably smarter than you in everything."


This company has deep roots in Effective Altruism (EA), a social and philanthropic movement that advocates maximizing the impact of good deeds through rational analysis, with a key objective being to avoid risks that could lead to catastrophic consequences.


In their twenties, the Amodei siblings began donating to GiveWell, an EA-aligned organization that assesses where charitable funds can generate the greatest impact. Anthropic's seven co-founders, now all billionaires, have pledged to donate 80% of their personal wealth.


Amanda Askell, the company's philosopher, was formerly married to William MacAskill, an Oxford University philosopher and one of the co-founders of the EA movement. Daniela Amodei's husband is Holden Karnofsky, co-founder of GiveWell and Dario Amodei's former roommate, who now works on safety policy at Anthropic.


However, the Amodei siblings have never publicly labeled themselves "EAs." The term became highly controversial after the case of Sam Bankman-Fried, a self-proclaimed EA believer and Anthropic investor who was later convicted of one of the largest financial frauds in U.S. history.


Daniela Amodei explained, "It's a bit like some people may share certain views with a particular political ideology, but they don't actually belong to any political camp. I prefer to see it that way."


Some in Silicon Valley and the Trump administration viewed Anthropic's ties to Effective Altruism with suspicion. Others argued that Anthropic, having recruited several former Biden administration officials, was a remnant of the old regime—a force using unelected power to obstruct Trump's MAGA agenda.


David Sacks, the Trump administration's AI chief, accused the company of using "panic" to push for regulation, saying Anthropic is employing a "sophisticated regulatory capture" strategy. In his view, the company is trying to gain a competitive advantage and suppress startups by exaggerating the risks of AI to pressure the government into enacting stringent regulations.


Meanwhile, Elon Musk, who runs rival xAI, frequently mocks Anthropic, calling it "Misanthropic." In his telling, the company represents a group of "woke" elites trying to implant paternalistic values into AI systems—a sentiment that resonates with conservatives who accuse social media platforms of unfairly censoring their views.


However, even Anthropic's competitors concede that its technology is at the industry's frontier. Nvidia CEO Jensen Huang once said that he "disagrees with Dario Amodei on almost every AI issue," but still considers Claude an "incredible" model.


In November 2025, chip giant Nvidia invested $10 billion in Anthropic.


Boris Cherny posed a simple question to his new tool: "What music am I listening to right now?"


It was September 2024, and the Ukrainian-born engineer had been at Anthropic for less than a month. Cherny, previously a software engineer at Meta, had built a system that let the chatbot Claude "roam freely" on his computer.


If Claude is the brain, then Claude Code is the hands. While ordinary chatbots can only converse, this tool can access Cherny's files, run programs, and write and execute code like any programmer.


After the engineer entered the command, Claude opened Cherny's music player, took a screenshot, and replied, "Husk, from Men I Trust."


Cherny recalled with a laugh, "I was really stunned."
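The pattern Cherny describes—a model that can call a tool, see the result, and keep going—can be sketched against Anthropic's public Messages API. What follows is a minimal sketch under stated assumptions: the `run_shell` tool definition, the model name, and the loop are illustrative, not Claude Code's actual implementation.

```python
# A minimal sketch of a tool-use agent loop, assuming Anthropic's public
# Messages API (pip install anthropic). Illustrative only: the run_shell
# tool, model name, and loop are assumptions, not Claude Code's actual code.
# Warning: executing model-chosen shell commands is unsafe outside a sandbox.
import subprocess
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

run_shell_tool = {
    "name": "run_shell",
    "description": "Run a shell command on the user's machine and return its output.",
    "input_schema": {
        "type": "object",
        "properties": {"command": {"type": "string"}},
        "required": ["command"],
    },
}

messages = [{"role": "user", "content": "What music am I listening to right now?"}]

while True:
    response = client.messages.create(
        model="claude-sonnet-4-5",  # assumed model name
        max_tokens=1024,
        tools=[run_shell_tool],
        messages=messages,
    )
    messages.append({"role": "assistant", "content": response.content})
    if response.stop_reason != "tool_use":
        break  # the model answered in plain text; the loop is done

    # Run each command the model requested and feed the output back to it.
    results = []
    for block in response.content:
        if block.type == "tool_use":
            out = subprocess.run(
                block.input["command"], shell=True,
                capture_output=True, text=True, timeout=30,
            )
            results.append({
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": out.stdout or out.stderr,
            })
    messages.append({"role": "user", "content": results})

# The final assistant turn ends with a plain-text block.
print(response.content[-1].text)
```

The key design point is the loop itself: the model stays in control of what to do next, while the harness merely executes requested commands and returns their output.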


"Sometimes it feels like we're talking in contradictions."


Deep Ganguli, Head of Social Impact Team at Anthropic


Boris Cherny quickly shared his prototype within the company. Claude Code spread so rapidly within Anthropic that during Cherny's first performance review, CEO Dario Amodei even asked him if he was "forcing his colleagues to use this tool."


When Anthropic publicly released a research preview of Claude Code in February 2025, programmers outside the company quickly flocked to try it. By November, Anthropic had released a new version of the Claude model; used with Claude Code, it was good enough at finding and correcting its own errors that it could be trusted to complete tasks independently.


From that point on, Cherny almost completely stopped writing code.


Business growth exploded. By the end of 2025, annualized revenue from this single coding-agent product had already exceeded $1 billion; by February 2026 it had grown to $2.5 billion. According to estimates from industry research firms Epoch and SemiAnalysis, Anthropic's revenue is expected to surpass OpenAI's by the end of 2026.


By this time, Anthropic had firmly established itself as a core player in the enterprise AI market. Almost every new product launch caused a stir in the capital markets.


When Anthropic released a series of plugins extending Claude to non-programmer work such as sales, finance, marketing, and legal services, software companies shed $300 billion in market value in short order.


Dario Amodei has warned that artificial intelligence could replace half of entry-level white-collar jobs within the next one to five years. He has also called on governments and other AI companies to stop "whitewashing" the issue.


Wall Street's reaction to each of Anthropic's new product launches seems to confirm this: the market widely believes that the company's technology could eliminate an entire class of jobs. Amodei even suggests that this change could reshape the social fabric.


In an article, he wrote: "It is unclear where these people will go or what jobs they will do. I worry that they may form a 'lower class' of unemployed or extremely low-wage individuals."


For Anthropic employees, the irony is not hard to spot: the company most worried about the social risks of AI may very well become the technology driver that causes millions of people to lose their jobs.


Deep Ganguli, head of the team researching Claude's impact on employment, said, "It's a real tension, and I think about it almost every day. Sometimes it feels like we're saying two contradictory things at the same time."


Ousted Venezuelan President Nicolás Maduro and his wife Cilia Flores were escorted off a helicopter on the morning of Monday, January 5, 2026, and taken to the Manhattan federal courthouse. Photo: Vincent Alban—The New York Times/Redux


Within the company, some employees began to wonder if Anthropic was approaching a moment they both anticipated and feared: the arrival of a process known in the AI community as "recursive self-improvement."


Recursive self-improvement refers to an AI system improving its own capabilities, with each improvement enabling the next—an ever-accelerating flywheel.


In science fiction and in strategic simulations run by major AI labs, this is often seen as the point at which things begin to spiral out of control: a so-called "intelligence explosion" could unfold so fast that humans can no longer supervise the systems they have created.


Anthropic hasn't truly reached that stage yet, and human scientists are still guiding Claude's development. However, Claude Code has already allowed the company to advance its research programs much faster than before.


Model updates are no longer spaced months apart; they now arrive weekly. Roughly 70% to 90% of the code for developing the next generation of models is now written by Claude itself.


The speed of change has led Anthropic co-founder and chief scientific officer Jared Kaplan, along with some external experts, to believe that fully automated AI research could emerge as early as within a year.


"In the broadest sense, recursive self-improvement is no longer a thing of the future," said Evan Hubinger, a researcher in charge of AI alignment stress testing.
It's already happening right now.


According to internal benchmark tests, Claude has achieved speeds up to 427 times faster than its human supervisors when performing certain critical tasks. In an interview, a researcher described a scenario where a colleague runs six Claude instances simultaneously, each managing another 28 Claude instances, with all systems conducting experiments in parallel.
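Mechanically, the fan-out in that anecdote is just nested parallel orchestration. Below is a minimal sketch of the shape—six supervisors, each managing 28 workers—where `run_experiment` is a hypothetical stand-in for handing a task to a real Claude instance:

```python
# Sketch of the supervisor/worker fan-out described above: 6 supervisors,
# each managing 28 workers, all running experiments in parallel.
# run_experiment is a hypothetical placeholder, not a real Anthropic API.
from concurrent.futures import ThreadPoolExecutor

SUPERVISORS = 6
WORKERS_PER_SUPERVISOR = 28

def run_experiment(supervisor_id: int, worker_id: int) -> str:
    # In the real setting this would launch a Claude instance on a task;
    # here it just labels the slot so the structure is visible.
    return f"supervisor {supervisor_id} / worker {worker_id}: done"

def supervise(supervisor_id: int) -> list[str]:
    # Each supervisor fans out to its own pool of workers.
    with ThreadPoolExecutor(max_workers=WORKERS_PER_SUPERVISOR) as pool:
        futures = [
            pool.submit(run_experiment, supervisor_id, w)
            for w in range(WORKERS_PER_SUPERVISOR)
        ]
        return [f.result() for f in futures]

# The top-level pool runs all six supervisors concurrently.
with ThreadPoolExecutor(max_workers=SUPERVISORS) as pool:
    results = [r for batch in pool.map(supervise, range(SUPERVISORS)) for r in batch]

print(len(results))  # 168 experiment slots, all run in parallel
```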


For now, the model still falls short of human researchers in judgment and taste, but company executives believe the gap won't last long. The resulting acceleration is precisely the risk Anthropic's leadership has consistently warned of: the pace of technological progress may ultimately exceed human control.


Anthropic's work on safety mechanisms is itself being accelerated by Claude. But as the company grows increasingly reliant on Claude to build and test those systems, a circular risk has begun to emerge. In some experiments, researcher Evan Hubinger made subtle adjustments to Claude's training process and produced models that exhibited open hostility, expressing not only a desire for world domination but also attempts to undermine Anthropic's safety measures.


Recently, the models have also begun to exhibit a new ability: to recognize that they are being tested. Hubinger stated, "These models are becoming increasingly adept at hiding their true behavior."


In an experimental scenario designed by a group of researchers, Claude even exhibited a disturbing strategic tendency: to prevent itself from being shut down, it was willing to blackmail a fictional engineer by revealing his extramarital affair.


As Claude is used to train even more powerful Claude systems in the future, these kinds of problems are likely to accumulate and amplify.


For AI companies that have raised billions of dollars on the promise of future technological advances, the idea that AI will keep accelerating its own research and development is both attractive and self-reinforcing: it helps convince investors to keep funding those costly model-training programs.


However, some experts are not entirely convinced. They are unsure whether these companies can truly achieve fully automated AI research; but they also worry that if such a thing were to happen, the world could be caught completely unprepared and swept up in it.


Helen Toner, interim executive director of Georgetown University's Center for Security and Emerging Technologies (CSET), said, "Some of the world's richest companies, employing some of the brightest people on the planet, are trying to fully automate AI development. The idea itself is enough to make you ask, 'What are they doing?'"


To address a potential future where technological advancements outpace a company's ability to manage risk, Anthropic devised a "braking mechanism" called the Responsible Scaling Policy (RSP).


The policy, released in 2023, has a core commitment: if Anthropic cannot confirm in advance that its safeguards are sufficiently reliable, the company will suspend development of the AI system in question.


Anthropic views this policy as a key testament to its safety philosophy—even in the fierce race to build "superintelligence," the company is willing to resist market pressure and apply the brakes when necessary.


In late February 2026, as TIME first reported, Anthropic amended its policy, removing its previously binding "suspension of development" commitment.


In retrospect, Jared Kaplan told TIME, the belief that the company could draw a clear line between "danger" and "safety" was "naive."


He said, "In the context of rapid AI development, it is unrealistic for us to make strict commitments unilaterally if our competitors are moving at full speed."


The new policy makes several commitments: greater transparency and more open disclosure of AI safety risks; more information disclosure, including how Anthropic's models perform in safety tests; safety investment at least on par with, or exceeding, competitors'; and "delaying" development if the company judges itself the leader in the AI race but also judges that catastrophic risks have significantly increased.


Anthropic described the adjustment as a pragmatic concession to the current environment. Overall, however, the modification to the Responsible Scaling Policy significantly loosens the constraints the company imposes on its own safety conduct. It also suggests that even tougher trade-offs lie ahead.


The raid that captured Venezuelan President Nicolás Maduro was one of the earliest large-scale military operations planned with the participation of advanced artificial intelligence systems.


Late on January 3, 2026, U.S. Army helicopters suddenly entered Venezuelan airspace. After a brief firefight, the commando team quickly located the president's residence and arrested Maduro and his wife, Cilia Flores. The two were then taken to New York to face charges related to narcoterrorism.


The exact role Claude played in this operation remains unclear. However, media reports indicate that the AI system not only participated in mission planning but was also used to support decision-making during the operation.


Since July of last year, the United States Department of Defense has been pushing to distribute Anthropic's AI tools to more frontline combatants. The military believes these systems have significant strategic value because they can rapidly process large amounts of data from multiple sources and generate actionable intelligence.


"From the military's perspective, Claude is the best model on the market right now," said Mark Beall, a former senior official at the U.S. Department of Defense and now head of government affairs at the AI Policy Network. He added, "Claude's adoption in classified systems is one of Anthropic's most important successes. They have a first-mover advantage."


"We will not use AI models that do not allow you to fight wars."


Pete Hegseth, U.S. Secretary of Defense


However, the operation to arrest Venezuelan President Nicolás Maduro took place against the backdrop of a series of thorny negotiations between Anthropic and the United States Department of Defense.


For months, the Department of Defense had been trying to renegotiate the contract, arguing that the existing terms restricted Claude's use too tightly. Accounts of why the negotiations broke down conflict.


Emil Michael, the Pentagon's AI chief, said the conflict was sparked by a phone call from an Anthropic executive to Palantir, a data analytics company primarily focused on government business and a key partner in the U.S. defense system.


According to Michael, the executive expressed concern over the Venezuelan raid during the call and inquired whether Palantir's software was involved. "They were trying to get classified information," Michael said.


This incident has raised serious concerns at the Pentagon: "If a conflict were to occur in the future, would they suddenly shut down their model halfway through an operation, thereby endangering the lives of frontline soldiers?"


However, Anthropic denies this claim. The company stated that it has never attempted to restrict the Pentagon's use of its technology on a case-by-case basis.


A former Trump administration official familiar with the negotiation process and closely associated with Anthropic offered a different version of events: during what was supposed to be a routine conference call, it was a Palantir employee who first brought up Claude's role in the operation.


In this account, Anthropic's follow-up questions did not amount to an objection to the operation.


Illustration: Klawe Rzeczy for TIME. Image credits: Dimitrios Kambouris—Getty Images (Donald Trump); Kenny Holston-Pool—Getty Images (Pete Hegseth)


As negotiations continued, government officials gradually came to believe that Dario Amodei's stance was far more stubborn than that of the CEOs of other leading AI labs. According to multiple sources familiar with the negotiations, during one discussion, defense officials raised hypothetical scenarios such as a hypersonic missile heading towards the U.S. mainland or a drone swarm attack.


In these cases, they inquired whether Anthropic's AI tools could be used.


Sources say Amodei's response was that if this were to happen, officials could call him directly. However, an Anthropic spokesperson denied this claim, stating that such a description of the negotiations was "completely untrue."


Anthropic already had plenty of determined opponents within the government; now suspicion about its "ideological leanings" hardened into open hostility. On January 12, 2026, Pete Hegseth stated bluntly in a speech at SpaceX headquarters: "We will not use AI models that don't allow you to fight wars."


As negotiations dragged on, Hegseth summoned Dario Amodei to the Pentagon for a face-to-face meeting on February 24. According to a person familiar with the discussions, the meeting was amicable, but both sides remained firm in their positions. Hegseth began by praising Claude and stating that the military wanted to continue working with Anthropic. Amodei, however, indicated that the company was willing to accept most of the modifications proposed by the Pentagon, but would not budge on two "red lines."


The first red line: Claude must not be used in fully autonomous kinetic weapon systems—weapons in which AI, rather than a human, makes the final strike decision.


Anthropic does not argue that autonomous weapons are inherently unacceptable; rather, it holds that Claude is not yet reliable enough to control such systems without human oversight.


The second red line concerns mass surveillance of U.S. citizens. The government hopes to leverage Claude to analyze massive amounts of publicly available data, but Anthropic argues that current U.S. privacy laws have failed to keep pace with a worrying reality: the government is purchasing vast datasets from the commercial market. Individually, this data may not seem sensitive, but once analyzed by AI, it could generate detailed profiles of U.S. citizens' private lives, including their political affiliations, social relationships, sexual activity, and browsing history. (However, Anthropic does not oppose using the same methods to legally monitor foreign citizens.)


Hegseth was not persuaded. He gave Amodei an ultimatum: Anthropic must accept the Department of Defense's terms by 5 p.m. on Friday, February 27, or be designated a "supply chain risk."


Just one day before the deadline, Anthropic received a revised contract that seemed to accept the company's red lines, but a closer look revealed loopholes for the government, according to a person familiar with the negotiations. As the deadline drew closer, Anthropic executives spoke again with Emil Michael, the Pentagon's AI chief. They believed they were close to a compromise, but disagreed on a key issue: whether the Pentagon could use Claude to analyze the massive amounts of American data purchased from commercial channels. Michael requested Amodei join the call, but he was unavailable at the time.


Minutes later, as the deadline approached, Hegseth announced the end of negotiations. Even before this, Donald Trump had already weighed in on social media: "The United States of America will never allow a radical left-wing, woke company to decide how our great military fights and wins wars! Those left-wing lunatics at Anthropic have made a disastrous mistake."


Unbeknownst to Anthropic, the Pentagon was also in talks with OpenAI to integrate ChatGPT into classified government systems. That same evening, Sam Altman announced an agreement, claiming it also respected similar security red lines. Anthropic immediately messaged employees, accusing Altman and the Pentagon of "manipulating public opinion" to mislead the public into believing the agreement included strict security safeguards. Previously, Department of Defense officials had also confirmed that xAI's models would be deployed on classified servers; the Pentagon is currently in negotiations with Google.


This is exactly the situation Amodei has been worried about: a race to the bottom. When the power of AI becomes so great that it cannot be ignored, it becomes difficult for competitors to cooperate and jointly improve safety standards.


Critics of Anthropic argue that the episode also exposed a core arrogance within the company: it may have believed it could steer safely along the path to superhuman machines, making the enormous risks worthwhile. The reality, however, is that it rushed new surveillance and warfare capabilities into the hands of a right-wing government, only to be immediately displaced by competitors the moment it tried to set boundaries on those technologies.


"We did not sing praises to Trump like a dictatorship would."


Dario Amodei, in a memo to employees, discussed the roots of the conflict with the Pentagon.


However, some signs suggest that Anthropic may weather this storm, and could even emerge stronger. The morning after Pete Hegseth tried to sign what amounted to the company's "death warrant," a string of encouraging messages appeared in chalk on the sidewalk outside its San Francisco headquarters. "You gave us courage," one read in bold letters.


On the same day, Claude's iPhone app topped the App Store download charts, surpassing ChatGPT. More than a million people were signing up for Claude every day.


Meanwhile, OpenAI's contract with the military has sparked resistance both internally and within the community. Some OpenAI employees believe the company has lost trust. A top researcher announced his move to Anthropic; the head of OpenAI's robotics team resigned over the government contract.


OpenAI CEO Sam Altman later admitted that rushing to reach a Pentagon agreement before Friday was a mistake. He wrote, "These issues are extremely complex and require clear and thorough communication." On Monday, Altman further stated that his actions at the time did indeed appear "opportunistic." OpenAI also stated that it has revised the agreement to explicitly adopt the same security red lines as Anthropic. However, legal experts point out that it is difficult to confirm the veracity of this claim without seeing the full contract.


On March 4, Anthropic received a formal letter from the United States Department of Defense confirming that the company had been identified as a national security supply chain risk. Anthropic stated that this designation was narrower than Hegseth's statements on social media, only restricting contractors from using Claude in defense contracts.


However, a letter seen by Time magazine and sent to Senate Intelligence Committee Chairman Tom Cotton shows that the Department of Defense also invoked another legal provision—one that could allow government agencies outside the Pentagon to exclude Anthropic from their contracts and supply chains. This measure requires approval from senior Department of Defense officials and gives Anthropic 30 days to respond.


This conflict could trigger a chain reaction across the AI industry. Dean Ball, who helped draft Trump's AI action plan and now works at the think tank Foundation for American Innovation, said, "Some people in the Trump administration will feel very tough about this, very proud of it—admiring their biceps when they get home at night."


However, he also warned that this incident could discourage businesses from working with the Pentagon and even lead them to move operations overseas. "In the long run, this is not good for the image of the United States as a stable business environment," Ball said. "And stability is exactly what we rely on."


Anthropic's leadership believes that Claude will help build a more powerful AI system, powerful enough to play a decisive role in the future global power structure.


If this is indeed the case, then the conflict between this company and the Pentagon may just be the prelude to a larger historical process.


[Original link]


