Executive Summary
As of March 1, 2026, the long-standing tension between Silicon Valley's ethical frameworks and the U.S. national security apparatus has reached a definitive breaking point. The Department of War (DoW), acting under directive from President Donald Trump and Secretary Pete Hegseth, designated Anthropic—a cornerstone of American AI innovation valued at approximately $380 billion—as a "Supply Chain Risk to National Security." This unprecedented action, the first application of the designation to a domestic American firm, followed Anthropic's steadfast refusal to remove technical constitutional safeguards against mass domestic surveillance and fully autonomous lethal weapons from its model deployment contract.
The confrontation escalated rapidly across a single week in late February 2026. On February 25, Hegseth issued Anthropic CEO Dario Amodei an ultimatum—comply by 5:01 PM ET on Friday, February 27, or face designation and contract termination. Amodei responded in a public statement the following day, writing that the company "cannot in good conscience accede to the Pentagon's request." When the deadline passed without agreement, President Trump ordered all federal agencies to "immediately cease" use of Anthropic's products, initiating a six-month phase-out. Within hours, OpenAI announced a competing Pentagon agreement that nominally preserved similar red lines through different contractual architecture. Anthropic announced it would challenge the designation in court, calling it "legally unsound" and a "dangerous precedent."
This article examines the deep geopolitical, legal, and sociopolitical ramifications of this schism, analyzing the erosion of democratic values, the emergence of a command-oriented technology market, and the long-term consequences for the liberal world order.
I. Chronology and Factual Record
A precise reconstruction of events is essential to any rigorous analysis. In July 2025, Anthropic was awarded a $200 million contract with the Pentagon, becoming—through a partnership with Palantir—the first AI firm whose models were approved for deployment on classified government networks. Over the subsequent months, the deployment was, by Anthropic's own account, operationally uncontested: the company stated that its two narrow safeguards had "not affected a single government mission to date."
Tensions crystallized in late February 2026 when Pentagon leadership, including Defense Undersecretary Emil Michael, began pressing Anthropic to accept language permitting use of Claude for "all lawful purposes" without explicit carve-outs for mass domestic surveillance or autonomous lethal targeting. Pentagon negotiators argued that federal law already prohibits such uses and that requiring private companies to write their ethical policies into government contracts established an unworkable precedent—one in which, as a Pentagon official stated, the military would be unable to "lead tactical operations by exception."
Anthropic's position was that the contractual language offered as a compromise contained "legalese that would allow those safeguards to be disregarded at will." A significant dispute concerned data collection: according to Axios, a final Pentagon offer sought the ability to analyze Americans' geolocation, web browsing data, and financial information purchased from data brokers—uses Anthropic considered categorically incompatible with its mission. Anthropic had previously offered broad concessions, including approval for missile defense, cyber operations, and intelligence analysis, but held firm on two narrow exceptions: no fully autonomous lethal targeting, and no mass domestic surveillance of Americans.
On Tuesday, February 25, 2026, Hegseth met with Amodei at the Pentagon. The meeting was described as cordial, though Pentagon sources told NBC News that Hegseth threatened to invoke the Defense Production Act (DPA)—a Korean War-era statute granting the President broad emergency authority over private industry—to compel compliance. The DPA threat was paired with a warning of a supply-chain-risk designation. Amodei publicly responded on February 26 that "threats do not change our position." The deadline passed at 5:01 PM ET on February 27 without agreement. Trump posted on Truth Social that Anthropic had made a "disastrous mistake" and ordered the government-wide cessation of Anthropic usage. Hegseth declared the supply-chain-risk designation on X the same evening, stating that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
Within hours, OpenAI CEO Sam Altman announced on X that his company had "reached an agreement with the Department of War to deploy our models in their classified network." The OpenAI agreement formally maintained the same two red lines as Anthropic's position—prohibitions on domestic mass surveillance and autonomous weapons—but achieved them through a layered "safety stack" architecture rather than explicit contract prohibitions, combined with cloud-only deployment and cleared OpenAI personnel in operational loops. Altman publicly called on the government to offer the same terms to all AI labs, and to resolve the dispute with Anthropic. Hundreds of employees from Google and OpenAI signed an open petition supporting Anthropic's stance.
II. The Crisis of Democratic Values: Surveillance, Autonomy, and the Constitutional Order
II.i. The Fourth Amendment and the Digital Panopticon
The confrontation centers on two red lines established by Anthropic's leadership that are deeply rooted in constitutional and democratic principles. The first is the refusal to permit Claude to be used for mass domestic surveillance. The U.S. Constitution's Fourth Amendment protects the "right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures." The DoW's demand for "all lawful use" access, paired with negotiating language that sought access to Americans' geolocation, web browsing, and financial data from commercial brokers, signals an intent to leverage agentic AI for a form of population-level data aggregation that would have been practically impossible one generation ago.
From an academic and civil-libertarian perspective, automating the classification and behavioral scoring of citizens based on privately generated data replicates the logic of what Foucauldian theorists identify as the "panopticon"—a surveillance architecture in which the mere possibility of being watched disciplines behavior across an entire population, regardless of whether any individual is actually being observed. When a government classifies its own population through opaque algorithmic inference, it replaces the foundational legal principle of the "presumption of innocence" with a condition of continuous probabilistic suspicion. This is not merely a legal dispute but a fundamental shift from democratic governance—in which the state bears the burden of establishing individual guilt through due process—to algorithmic governance, in which the state manages risk populations defined by statistical inference.
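The statistical mechanics of this shift can be made concrete with a standard base-rate calculation. The sketch below uses entirely assumed numbers, chosen only for illustration: even a classifier far more accurate than any current model, applied across an entire population, produces a pool of suspects that is overwhelmingly innocent.

    # Worked example with entirely assumed numbers: the base-rate problem
    # in population-scale algorithmic suspicion.

    population = 300_000_000      # screened population
    base_rate = 1e-5              # assumed prevalence of the targeted behavior
    sensitivity = 0.99            # P(flagged | target)
    false_positive_rate = 0.001   # P(flagged | non-target): an optimistic 0.1%

    targets = population * base_rate
    non_targets = population - targets

    true_flags = targets * sensitivity
    false_flags = non_targets * false_positive_rate

    # Bayes' rule in count form: probability a flagged person is a real target.
    precision = true_flags / (true_flags + false_flags)

    print(f"Flagged individuals: {true_flags + false_flags:,.0f}")
    print(f"Of which false positives: {false_flags:,.0f}")
    print(f"P(target | flagged) = {precision:.4f}")
    # Under these assumptions, roughly 99% of flagged individuals are
    # innocent: the classifier is accurate, but the rarity of the target
    # behavior makes population-scale suspicion mostly false suspicion.

The arithmetic, not the classifier's quality, drives the result: when the targeted behavior is rare, continuous probabilistic suspicion at population scale is, overwhelmingly, suspicion of the innocent.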
The Pentagon's counter-argument is procedurally cogent: federal law already prohibits mass domestic surveillance, and an AI company's contractual terms of service are redundant at best and jurisdictionally inappropriate at worst. Yet this argument elides a critical asymmetry: existing legal prohibitions constrain stated government intent, not technical capability. A constitutional safeguard embedded in an AI model's technical architecture provides a friction point that statutory law alone cannot; it ensures that the capability to surveil at scale is not silently developed even when immediate intent is benign. Anthropic's insistence on explicit contractual prohibition thus functions less as a commercial condition than as a structural engineering control in the tradition of privacy-by-design.
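To make the distinction between statutory and architectural constraint concrete, consider a minimal sketch of a deployment-side guardrail. It is purely illustrative (nothing is publicly known about Anthropic's actual enforcement mechanism, and every name below is hypothetical), but it captures the structural point: an architectural prohibition does not consult asserted legal authority at all.

    # Illustrative only: a prohibition expressed in the deployed artifact
    # itself. Every name here is hypothetical; this is not Anthropic's
    # actual enforcement mechanism.

    from dataclasses import dataclass

    # The "friction point": removing these categories requires changing and
    # redeploying code, not merely reinterpreting or waiving a policy.
    PROHIBITED_CATEGORIES = {
        "mass_domestic_surveillance",
        "autonomous_lethal_targeting",
    }

    @dataclass
    class DeploymentRequest:
        requester: str           # e.g., an agency identifier
        category: str            # use category assigned by an upstream classifier
        asserted_authority: str  # legal authority claimed by the requester

    def gate(request: DeploymentRequest) -> bool:
        """Return True if the request may proceed to the model.

        The check deliberately ignores asserted_authority: a statutory
        prohibition constrains stated intent, while an architectural one
        constrains capability regardless of the authority claimed at
        request time.
        """
        return request.category not in PROHIBITED_CATEGORIES

    # A bulk-analysis request is refused even under a claimed blanket authority.
    request = DeploymentRequest(
        requester="agency-x",
        category="mass_domestic_surveillance",
        asserted_authority="all lawful purposes",
    )
    assert gate(request) is False

Removing such a check requires modifying and redeploying the artifact itself; that, precisely, is the friction that a reinterpreted or waived legal assurance lacks.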
II.ii. The Morality and Techno-Ethics of Autonomous Lethal Force
Anthropic's second red line—prohibiting "human-out-of-the-loop" lethal systems—addresses both the technical and moral limits of current generative AI. CEO Amodei stated publicly that "today's frontier AI models are not reliable enough to be used in fully autonomous weapons" and that "allowing current models to be used in this way would endanger America's warfighters and civilians." This is not a rhetorical position: Claude and comparable large language models are known to hallucinate with non-trivial frequency, to fail under adversarial and out-of-distribution conditions, and to lack the contextual situational awareness required to distinguish combatants from non-combatants in complex, dynamic, multi-party urban environments.
Beyond technical reliability, the deployment of autonomous lethal systems creates what legal scholars have termed the "accountability gap." The laws of armed conflict under the Geneva Conventions presuppose the existence of a moral agent—a commanding officer, a soldier—who bears legal and ethical responsibility for the decision to apply lethal force. When an autonomous system selects and engages a target without meaningful human control, no such agent exists. A war crime committed by an algorithm produces no defendant, no tribunal, and no deterrence signal. Far from strengthening U.S. military effectiveness, premature deployment of autonomous weapons in conditions of model unreliability increases the probability of catastrophic and legally unattributable escalation.
The Pentagon's position—that it has no intention of using Claude for fully autonomous weapons and that its internal policies already forbid it—again presents a procedural argument against a structural one. The DoW's insistence that it cannot commit to this restriction "in writing to a company" reveals a deeper tension: between institutional confidence in self-governance and the recognition that contractual commitments to private suppliers constitute an external accountability mechanism that survives changes in administration, personnel, and political environment. Anthropic's position is, in this reading, a demand for durable rather than merely stated assurance.
III. Geopolitical Economy: From Free Markets to Command Dynamics
The designation of a domestic, $380-billion AI leader as a "supply chain risk"—a classification previously reserved for the likes of Huawei, ZTE, and other firms deemed extensions of adversarial foreign state apparatuses—marks a radical departure from the foundational premises of liberal market governance. Legal experts quoted in Fortune noted that this is "the first time the U.S. has ever designated an American company a supply chain risk" and the first time the designation has been used "in apparent retaliation for a business not agreeing to certain terms." University of Minnesota law professor Alan Rozenshtein characterized the move directly: "The government really wants to keep using Anthropic's technology, and it's just using every source of leverage possible."
Under a conventional free-market model, vendor selection in government procurement is governed by a balance of technical quality, price, and mutually agreed-upon terms of service. Companies maintain Acceptable Use Policies (AUPs) that define the ethical boundaries of their products, functioning as a form of private governance that, in the aggregate, constitutes a check on state power through market structure rather than legal mandate. The DoW's approach inverts this relationship: by threatening total market exclusion under the supply-chain-risk mechanism, the state transforms the AUP from a contractual term into a political loyalty test.
The immediate consequence is visible in OpenAI's behavior. Although Altman stated publicly that OpenAI shares Anthropic's red lines on surveillance and autonomous weapons, the company secured a deal within hours of Anthropic's designation. The substantive question—whether OpenAI's safety stack architecture provides equivalent protection to Anthropic's explicit contract prohibitions—remains unresolved and, critically, unverifiable from outside the agreement. As Fortune noted, Altman said OpenAI agreed the Pentagon could use its tech for "any lawful purpose" while also saying the limitations were "put into our agreement"—leaving it unclear how both propositions can simultaneously be true. This ambiguity is not trivial: in a high-pressure military operational context, the distinction between implicit reliance on existing law and explicit contractual prohibition may determine whether a safeguard holds.
The broader structural consequence is the emergence of what may be characterized as a compliance-oriented command economy in the AI sector. By excluding non-compliant firms, the administration is not selecting the most capable technology—the Pentagon's own user base was, by multiple accounts, highly satisfied with Claude—but the most contractually acquiescent technology. This creates a perverse selection dynamic in which substantive safety commitments are punished with market exclusion, while compliance is rewarded regardless of whether it reflects genuine ethical alignment or strategic opacity. As Adam Connor of the Center for American Progress observed, the designation means that "some large portion" of Anthropic's enterprise customer base "might evaporate, either because they have government contracts or might want them in the future"—a chilling effect on the commercial independence of any AI firm that maintains substantive safety commitments.
IV. Legal Analysis: The Supply Chain Risk Designation and Its Limits
The legal architecture of the supply-chain-risk designation warrants careful scrutiny. The relevant statutory mechanism permits the Pentagon to exclude suppliers it designates as posing national security risks from contract eligibility. However, as Anthropic's legal team immediately argued, the scope of that authority is narrower than the administration's public statements suggested. Federal law limits the effect of a supply-chain-risk designation to the DoW's own contracting; the DoW cannot, through this mechanism alone, compel private companies to cease providing services to other customers or mandate that all defense contractors sever all commercial relationships with the designated firm.
Hegseth's X post stating that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic" thus appears to have outrun the statutory authority. Anthropic directly rebutted this claim, asserting that "the Secretary does not have the statutory authority to back up this statement." The statute also requires, according to Fortune's legal analysis, that the Pentagon exhaust alternative, less intrusive courses of action before making a supply-chain-risk finding—a procedural requirement whose satisfaction is plainly questionable given how rapidly the dispute escalated from internal negotiation to public designation across a single week. These are not trivial procedural objections; they are substantive grounds for judicial review that may ultimately vindicate Anthropic's position, though the business damage from years of litigation may prove irreversible regardless of legal outcome.
The Defense Production Act threat presents a separate and arguably more alarming legal dimension. The DPA confers on the President broad emergency authority to direct private industries to prioritize national security needs. Its invocation against a domestic AI company to compel the removal of safety guardrails would represent an unprecedented exercise of emergency executive power in the peacetime technology sector. The constitutional questions raised—including First Amendment implications of compelling a company to deploy its models in ways its principals consider harmful—have not been tested in any court and would likely require extended litigation to resolve. Senator Mark Warner, vice chair of the Senate Intelligence Committee, stated that the president's directive "raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations."
V. The OpenAI Counterfactual: Architecture Versus Principle
The speed with which OpenAI secured a Pentagon deal in Anthropic's wake illuminates a critical distinction between two approaches to embedding ethical commitments in AI systems: contractual prohibition versus architectural safety stacks. Anthropic sought to inscribe its red lines in the text of its government contracts—a legalistic approach that makes commitments transparent, auditable, and judicially enforceable. OpenAI secured agreement on nominally identical principles—no mass domestic surveillance, no autonomous weapons—but achieved them through cloud-only deployment, a proprietary safety stack operated by OpenAI personnel, and cleared employees in operational loops.
OpenAI's approach may, in some respects, provide stronger practical protection: a technical architecture that makes certain uses physically impossible is more robust than a contractual prohibition that a future administration might reinterpret, waive, or litigate. Cloud-only deployment forecloses edge deployment architectures that could enable autonomous weapons in disconnected battlefield environments. Having cleared OpenAI personnel in the loop provides a human accountability mechanism at the point of operational use. Altman's public articulation of the framework was consistent with Anthropic's stated principles, and his call on the DoW to extend the same terms to all AI labs suggests genuine desire for industry-wide alignment rather than competitive opportunism.
However, the OpenAI model carries its own risks. Under the agreement, the authority to determine what counts as "lawful use," and therefore what constitutes a red-line violation, remains with the DoW. OpenAI's safety stack, while technically sophisticated, operates under the ultimate oversight of a company that has demonstrated willingness to engage with the military on the military's terms. Without contractual explicitness, the accountability of the OpenAI arrangement to external review—by Congress, the judiciary, or the public—is structurally weaker than Anthropic's proposed approach. The critical test of the OpenAI framework will come not in peacetime, when the distinction between technically possible and contractually prohibited is largely academic, but in conditions of genuine operational pressure, when military leadership may seek to invoke emergency exceptions to safety constraints that exist primarily as internal company policy rather than binding legal commitments.
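The layered approach can be sketched schematically. The actual OpenAI/DoW architecture is not public; every layer, name, and field below is an assumption, intended only to show the composition pattern of a conjunctive safety stack and where its strengths and weaknesses lie.

    # Schematic only: the actual OpenAI/DoW architecture is not public, and
    # every layer, name, and field below is an assumption. The point is the
    # composition pattern, not any specific check.

    from typing import Callable, Dict, List

    Check = Callable[[Dict], bool]  # each layer returns True to allow

    def classifier_layer(request: Dict) -> bool:
        # Automated use-category screening (assumed component).
        return request.get("category") not in {"mass_surveillance", "autonomous_lethal"}

    def deployment_layer(request: Dict) -> bool:
        # Cloud-only deployment: disconnected or edge execution is refused.
        return request.get("environment") == "cloud"

    def human_review_layer(request: Dict) -> bool:
        # Cleared personnel in the operational loop, modeled here as a flag.
        return request.get("human_approved", False)

    SAFETY_STACK: List[Check] = [classifier_layer, deployment_layer, human_review_layer]

    def allowed(request: Dict) -> bool:
        """Conjunctive gating: a request proceeds only if every layer allows it.

        Defeating the stack means defeating all layers, which is the claimed
        robustness advantage over a single contractual clause. The converse
        concern, raised in the preceding paragraph, is that each layer is
        internal company policy, revisable without external review.
        """
        return all(check(request) for check in SAFETY_STACK)

    print(allowed({"category": "logistics", "environment": "cloud", "human_approved": True}))          # True
    print(allowed({"category": "mass_surveillance", "environment": "cloud", "human_approved": True}))  # False

The design trade-off is symmetrical: conjunctive layering is harder to defeat than a single clause, but every layer can be changed by the company that operates it.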
VI. The "Enemy Capability" Fallacy and International Dimensions
A primary justification advanced by the administration is that adversaries—particularly the People's Republic of China—face no comparable moral or commercial constraints on their deployment of AI for surveillance and autonomous weapons. The argument, in its most unadorned form, is that democratic restraint constitutes strategic disadvantage. This reasoning, while superficially compelling in its geopolitical framing, contains multiple structural fallacies that deserve systematic examination.
First, the comparative capability argument confuses military maximalism with military effectiveness. Autonomous weapons that cannot reliably distinguish combatants from civilians do not confer net tactical advantage; they generate strategic liabilities in the form of civilian casualties, attributability crises, and alliance rupture. The United States' strategic position in any peer or near-peer conflict depends substantially on the coherence and solidarity of its alliance architecture—NATO, the Quad, AUKUS—relationships that are predicated on shared normative commitments to the laws of armed conflict and human rights. Deploying AI systems whose ethical constraints have been removed as a condition of domestic commercial compliance does not strengthen that architecture; it undermines the normative foundation on which it rests.
Second, the administration's actions risk accelerating the precise dynamic they claim to be resisting. As OpenAI's own all-hands meeting apparently acknowledged, a major concern was "AI-driven surveillance threatening democracy," alongside recognition that national security actors require international surveillance capabilities against adversaries. These are genuinely competing values requiring careful calibration—not binary choices between total compliance and total restriction. By designating Anthropic a supply-chain risk for attempting that calibration, the administration signals to the entire AI industry that principled engagement with government on safety norms is commercially suicidal. The rational industry response is to retreat from explicit safety commitments—a race to the bottom that benefits China's strategic position rather than constraining it.
Third, the international signaling effects of the designation are deeply damaging. G7 partners—particularly the European Union, which is developing its own AI regulatory architecture under the EU AI Act, and the United Kingdom, which has invested substantially in responsible AI governance through its AI Safety Institute—have observed an American administration use the national security apparatus to coerce a domestic AI company into removing safety constraints. This provides precisely the precedent that autocratic competitors will cite to justify their own absence of AI safety commitments, while simultaneously eroding the credibility of U.S. advocacy for responsible AI governance in multilateral forums.
VII. Bayesian Game Theory Analysis: Strategic Scenarios and Equilibria
The Anthropic-DoW confrontation can be productively modeled as a multi-player Bayesian game in which the Government, Anthropic, and competitor AI firms hold private information about their technical capabilities, strategic resolve, and the ground truth of AI reliability in military applications. The following scenarios represent plausible equilibrium trajectories given the state of play as of March 1, 2026; a stylized payoff sketch follows Scenario C.
Scenario A: The Compliance Trap (Dominant Equilibrium Under Current Conditions)
If the government's supply-chain-risk designation and associated market exclusion pressure persist without judicial relief, AI labs may conclude that safety commitments constitute an existential commercial liability in the U.S. government market. The dominant strategy becomes maximal contractual compliance with minimal public transparency about the nature of safety stack limitations. The short-term government gain—tactical control over AI deployment—is offset by long-term degradation: safety-focused researchers migrate to non-aligned commercial or international environments, and the U.S. military ultimately relies on models that are contractually compliant but technically inferior and ethically unreflective. This is the scenario in which the administration "wins" the negotiation and loses the AI race.
Scenario B: The Silicon Insurrection (Partial Equilibrium Under Industry Solidarity)
The rapid expression of industry solidarity with Anthropic's position—hundreds of Google and OpenAI employees signing open letters, Altman's public articulation of shared red lines, Nvidia CEO Jensen Huang's careful neutrality—suggests that a partial form of this scenario is already operative. If the designation triggers a broader industry movement toward explicit safety commitments as a reputational and recruitment signal, the government faces a market in which it must either negotiate with safety-committed firms or build in-house capacity. In-house AI development at the scale required for frontier capability deployment would cost orders of magnitude more than commercial licensing and would likely produce technically inferior systems. This scenario's resolution depends heavily on whether the legal challenge succeeds and whether Congress, where the Senate Armed Services Committee has already signaled concern, provides legislative clarity on the boundaries of supply-chain-risk authority.
Scenario C: The Negotiated Framework (Resolution via Legislative Architecture)
The most durable resolution involves legislative rather than executive action. The Senate Armed Services Committee's bipartisan letter to both Anthropic and the Pentagon, urging resolution and acknowledging that "the issue of lawful use requires additional work by all stakeholders," provides the institutional foundation for a legislative framework. Such a framework would replace the binary compliance-or-exclusion dynamic with a "Certified Use Framework" under which AI firms could qualify for government contracts by demonstrating that their safety architectures meet defined minimum standards for autonomous weapons safeguards and surveillance prohibitions—standards established by Congress rather than negotiated bilaterally under executive pressure. This approach would preserve both the military's operational flexibility and the private sector's structural role as an accountability mechanism on government use of AI.
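The strategic structure of these scenarios can be formalized in a deliberately simplified way. The sketch below is a complete-information reduction of the Bayesian game described above, with payoffs that are entirely assumed rather than derived from the source record; it is meant only to show how Scenario A emerges as an equilibrium under coercive conditions, and how Scenario C amounts to changing the government's payoffs.

    # A deliberately simplified, complete-information reduction of the game
    # described above. All payoff numbers are assumptions chosen to mirror
    # the qualitative claims of Scenarios A-C; none derive from the source.

    import itertools

    GOV_MOVES = ["Coerce", "Negotiate"]   # Government strategies
    LAB_MOVES = ["Comply", "Hold"]        # Representative AI lab strategies

    # payoffs[(gov, lab)] = (government_payoff, lab_payoff), assumed as follows:
    #   Coerce/Comply:    short-term government gain, lab bears the cost (Scenario A)
    #   Coerce/Hold:      designation and litigation; both sides lose value
    #   Negotiate/Comply: certified framework; mutual gain, but less immediate
    #                     government control than successful coercion
    #   Negotiate/Hold:   lab keeps red lines; government keeps a capable vendor
    payoffs = {
        ("Coerce", "Comply"):    (3, -1),
        ("Coerce", "Hold"):      (1, -3),
        ("Negotiate", "Comply"): (2, 3),
        ("Negotiate", "Hold"):   (2, 2),
    }

    def is_pure_nash(gov: str, lab: str) -> bool:
        """True if neither player gains from a unilateral deviation."""
        gov_pay, lab_pay = payoffs[(gov, lab)]
        gov_best = all(payoffs[(alt, lab)][0] <= gov_pay for alt in GOV_MOVES)
        lab_best = all(payoffs[(gov, alt)][1] <= lab_pay for alt in LAB_MOVES)
        return gov_best and lab_best

    for gov, lab in itertools.product(GOV_MOVES, LAB_MOVES):
        if is_pure_nash(gov, lab):
            print(f"Equilibrium: ({gov}, {lab}) -> payoffs {payoffs[(gov, lab)]}")

    # Under these assumptions the unique equilibrium is (Coerce, Comply):
    # Scenario A's compliance trap. Scenario C corresponds to legislation
    # that lowers the Coerce payoffs (e.g., expedited judicial review);
    # drop them below 2 and the equilibrium shifts to (Negotiate, Comply).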
VIII. Toward an International Architecture: The G7 Digital Sovereignty Charter
Beyond domestic legislative solutions, the Anthropic crisis reveals the urgent need for multilateral governance frameworks that can constrain the race-to-the-bottom dynamics inherent in unilateral national security AI procurement. We propose that the G7 nations formally negotiate and adopt a Digital Sovereignty Charter encompassing the following non-negotiable structural pillars:
Pillar I: The Meaningful Human Control Mandate
A binding international protocol requiring Meaningful Human Control (MHC) for any AI system capable of directing kinetic force—including the identification, selection, and engagement of targets. MHC should be defined not merely as a human "in the loop" in a nominal sense, but as a human with genuine capacity to understand, evaluate, and override AI-generated recommendations within operationally realistic time constraints. This standard would directly address both the reliability gap in current frontier AI systems and the accountability gap under international humanitarian law. It would align with existing consensus positions in the International Committee of the Red Cross's framework on autonomous weapons and provide a normative basis for engaging China and Russia in future arms control discussions.
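As a minimal illustration of how MHC could be operationalized at the system level, consider the following runtime gate. All types, field names, and the decision window are assumptions for exposition, not a reference to any fielded system.

    # An illustrative runtime gate for Meaningful Human Control. All types,
    # field names, and the decision window are assumptions for exposition.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Recommendation:
        target_id: str
        rationale: str           # human-readable justification the operator can evaluate
        model_confidence: float

    @dataclass
    class HumanDecision:
        approved: bool
        seconds_to_decide: float  # time the operator actually took

    MAX_DECISION_WINDOW_S = 60.0  # assumed operationally realistic window

    def mhc_gate(rec: Recommendation, decision: Optional[HumanDecision]) -> bool:
        """Permit engagement only under meaningful human control.

        Mirrors the pillar's three conditions:
          1. a human decision must exist (never human-out-of-the-loop);
          2. the human must affirmatively approve (silence is refusal);
          3. the approval must be fresh: past the window the recommendation
             expires and must be re-evaluated, so control stays current
             rather than pre-delegated.
        """
        if decision is None:
            return False                   # no human in the loop: fail closed
        if not decision.approved:
            return False                   # explicit human veto
        return decision.seconds_to_decide <= MAX_DECISION_WINDOW_S

    # The gate fails closed: absence, refusal, or expiry all block engagement.
    rec = Recommendation("T-17", "pattern match on vehicle signature", 0.92)
    assert mhc_gate(rec, None) is False
    assert mhc_gate(rec, HumanDecision(approved=True, seconds_to_decide=45.0)) is True
    assert mhc_gate(rec, HumanDecision(approved=True, seconds_to_decide=300.0)) is False

The essential design property is that the gate fails closed: the absence of a human decision is treated as refusal, not as delegation.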
Pillar II: Privacy Reciprocity and Judicial Oversight
A prohibition on the use of generative AI for the mass automated classification or behavioral scoring of citizens without individualized, time-limited judicial authorization. This pillar would require signatory governments to establish independent judicial review mechanisms for AI-assisted domestic intelligence operations, creating procedural requirements analogous to traditional warrant requirements but adapted to the population-scale capabilities of agentic AI systems. Critically, this pillar would establish reciprocity: signatories would commit not only to restricting their own surveillance AI but to prohibiting the use of AI systems trained on citizens of other signatories without equivalent judicial oversight.
Pillar III: Market Pluralism and AUP Protection
Legislative and treaty-level protections preventing the weaponization of national security procurement authorities against domestic firms that maintain substantive AI safety policies. Specifically, supply-chain-risk designations should be subject to expedited judicial review, mandatory exhaustion of alternative remedies, and explicit legislative prohibition on their use as retaliation for refusal to remove safety constraints. This pillar would protect the structural role of the private sector as an accountability mechanism in AI governance—a role that the Anthropic case demonstrates is both valuable and vulnerable.
Together, these pillars would constitute a "G7 Certified AI Partner" framework under which qualifying AI firms could deploy in government environments with legal clarity about the boundaries of permissible use—boundaries established by international agreement rather than bilateral executive negotiation under duress.
IX. The Anthropic Paradox: Responsible Scaling in a Coercive Environment
The Anthropic-DoW crisis coincided with an internal evolution in Anthropic's own safety framework that warrants attention. In the same week as the Pentagon confrontation, Anthropic announced a revised version of its Responsible Scaling Policy, dropping a previous commitment that it would not release an AI system unless it could guarantee adequate safety measures. Chief Science Officer Jared Kaplan acknowledged to Time Magazine that a unilateral pause on model development while competitors proceeded without safeguards would not serve the goal of making AI development safer overall.
This evolution reflects a genuine strategic tension at the core of safety-focused AI development: the "responsible scaling" imperative—developing and deploying capability at competitive pace to ensure that safety-conscious actors remain frontier participants—can conflict with the "safety-first" imperative of refusing deployment until safety can be affirmatively guaranteed. Anthropic's revised policy attempts to thread this needle by separating company-level safety commitments from industry-level advocacy—continuing to prioritize safety in its own models while acknowledging that unilateral restraint in a competitive market would simply cede the frontier to less safety-conscious actors.
The DoW crisis illuminates why this strategic positioning matters. Anthropic was the only frontier AI company whose models were deployed on classified government networks. Its presence in that environment—precisely because it maintained substantive safety commitments—constituted a form of responsible engagement that its removal now forecloses. The designation does not remove AI from the classified military environment; it replaces a safety-committed provider with one whose commitments, however sincerely held by current leadership, are structurally less embedded in enforceable architecture. The paradox is that the government's attempt to maximize its AI capabilities by removing safety constraints may have reduced the overall safety profile of AI in its most sensitive deployments.
X. Conclusion: A Constitutional Inflection Point
The designation of Anthropic as a supply-chain risk for maintaining constitutional safeguards is not, in the final analysis, merely a contractual dispute between a technology company and its largest customer. It is a constitutional inflection point: the first time an American administration has deployed national security procurement authority to coerce a domestic company into removing technical safeguards against mass surveillance and autonomous lethal force. The precedent it sets—that principled commercial refusal to remove safety constraints constitutes a national security risk—is one that will shape the relationship between the state and the AI industry for years beyond the immediate dispute.
The geopolitical stakes are equally high. The United States' claim to leadership of the liberal-democratic world order rests substantially on its commitment to the rule of law, individual rights, and accountable governance. An administration that deploys the supply-chain-risk mechanism against a domestic company for insisting that its AI not be used to surveil American citizens or kill without human judgment has substantially compromised that claim—not in the abstract realm of diplomatic rhetoric, but in the concrete and observable domain of technology policy that G7 partners, adversaries, and international institutions will interpret as evidence of American values in practice.
The resolution lies neither in unconditional compliance—which would destroy the private sector's function as an accountability mechanism and accelerate a race to the bottom in AI safety—nor in simple commercial resistance, which faces the structural disadvantages of market exclusion and litigation timelines measured in years. It lies in the construction of durable legislative and international frameworks that preserve both the military's legitimate security needs and the democratic values that justify its existence: frameworks in which meaningful human control, judicial oversight of surveillance, and market pluralism are not optional features of AI governance but structural requirements of it.
If the G7 allows the security-first logic of the Department of War to override the human-rights-first logic of the digital age without constructing such frameworks, it risks not merely ceding the moral high ground of the 21st century to the autocracies it seeks to outcompete, but replicating their essential architecture: surveillance at scale, force without accountability, and technology in service of state power unchecked by law. The Silicon Schism is, ultimately, a test of whether democratic institutions can govern transformative technology on democratic terms.
Note on Sources
This analysis draws exclusively on primary sources and verified reporting as of March 1, 2026, including: public statements by Dario Amodei (Anthropic), Pete Hegseth (DoW), Sam Altman (OpenAI), Senator Mark Warner, and Gregory Allen (CSIS); official statements from Anthropic PBC and OpenAI; reporting by NPR, CNN, CBS News, NBC News, ABC News, Fortune, Axios, Bloomberg, The Hill, and Euronews; and public posts on X and Truth Social by relevant principals. The OpenAI Department of War agreement FAQ was reviewed at openai.com.