In December 2025, OMB issued M-26-04, “Increasing Public Trust in Artificial Intelligence Through Unbiased AI Principles,” and federal IT leadership applauded.
The memo requires agencies to contractually bind AI vendors to two core standards: truth-seeking and ideological neutrality. It mandates transparency documentation. It creates enforcement authority including contract termination for non-compliance. It looks rigorous. It reads like accountability. And then there is one sentence, buried in Section 3(b), that makes most of it operationally irrelevant for the foreseeable future: agencies should modify existing contracts for LLMs “to the extent practicable.”
Four words. That is the entirety of the governance requirement for the AI your agency deployed last quarter.
The Gap Most Agencies Are Missing
The phrase “to the extent practicable” is the largest escape hatch in modern federal IT policy. The LLMs currently contracted and operating inside your agency — models from OpenAI, Anthropic, Amazon, Microsoft, Google, and Meta, all listed on USAI.gov — are not required to meet M-26-04 standards unless and until a contract option period is exercised or a new contract is issued. The government has already acknowledged this in plain language: for LLMs contracted under USAI.gov, there are “no guardrails beyond those provided by our model providers.” The vendor decides what governance looks like. Not your Chief AI Officer. Not your contracting officer. The vendor.
Meanwhile, the March 11, 2026, deadline for agencies to update procurement policies has come and gone. Most CFO Act agencies are now technically compliant: future contracts will include the right language on truth-seeking, bias evaluation, and transparency documentation. The compliance box is checked. But the AI answering internal help desk queries, drafting policy memos, summarizing acquisition proposals, and supporting mission decisions today operates under none of those standards. The governance clock is ticking on the next purchase. Nobody started it on the current one.
The Perspective That Changes Everything
Federal agencies are deploying AI at the fastest pace in government history while the governance frameworks being celebrated apply to the next purchase, not the current one. This is the equivalent of installing a fire suppression system in the new wing of a building that is already on fire. The paperwork looks good. The risk is live and unmanaged.
The conventional federal IT response to a new OMB memo is predictable: update the acquisition checklist, brief the CAIO, publish the AI use case inventory, and move on. That response is inadequate here. The memo established a floor; it did not prevent agencies from doing more. Nothing in M-26-04 prohibits an agency from voluntarily renegotiating existing AI contracts, commissioning independent bias evaluations on deployed models, or suspending use of models that cannot produce the transparency documentation the memo requires for new procurements. The authority is there. The urgency is there. The political will is where it usually is: pointed at the next acquisition cycle.
The NSA Zero Trust Implementation Guidelines (ZIGs) published in January and February 2026 offer a useful contrast. Those documents gave organizations 77 discrete activities, phased timelines, and a hard Target-level deadline of FY2027. The ZIGs work because they are operationally specific, backward-looking as well as forward-looking, and time-bound. Federal AI governance under M-26-04 needs the same treatment applied internally: specific models, specific deployments, specific accountability owners, and defined re-evaluation timelines, regardless of whether those deployments fall under the memo’s contractual requirements.
What This Means Practically
For federal CISOs and CIOs, the immediate action is not another strategy document. It is an inventory: every LLM currently in use, the contract under which it was procured, whether that contract predates M-26-04, and whether the vendor has ever provided documentation equivalent to what the memo now requires for new contracts. This is not a FISMA exercise. It is a risk characterization with a narrow, answerable question at the center: if one of these models produces a biased or factually incorrect output that affects a citizen benefit determination, a procurement decision, or an intelligence product, what is our accountability chain?
For most agencies today, the honest answer is: there isn’t one beyond vendor discretion. That is not a governance posture. That is an abdication framed as compliance.
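What that inventory and its gap check could look like in practice, as a minimal sketch: the record structure, field names, and the placeholder issue date below are illustrative assumptions, not anything prescribed by M-26-04.

```python
from dataclasses import dataclass, field
from datetime import date

# One record per deployed LLM. All field names are hypothetical;
# map them to your agency's existing inventory conventions.
@dataclass
class DeployedModelRecord:
    model_name: str                  # vendor model identifier
    vendor: str
    contract_vehicle: str            # contract or task order number
    award_date: date
    vendor_docs_received: list[str] = field(default_factory=list)
    accountability_owner: str = ""   # a named official, not an office

# The memo issued in December 2025; the exact day here is a placeholder.
M_26_04_ISSUED = date(2025, 12, 1)

def governance_gap(rec: DeployedModelRecord) -> bool:
    """Flag a deployment whose contract predates the memo and which
    lacks either a named owner or any vendor transparency documentation."""
    predates_memo = rec.award_date < M_26_04_ISSUED
    return predates_memo and (not rec.accountability_owner
                              or not rec.vendor_docs_received)
```

Running that check across the inventory gives the retrospective review a concrete starting point: the set of deployments the memo’s contractual requirements do not yet reach.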
Program managers deploying AI in mission contexts should treat M-26-04’s enhanced transparency requirements as the practical baseline for all high-impact AI, contract date notwithstanding. Require vendors to provide model cards, bias evaluation results, and pre-training disclosure documentation. Ask directly. Vendors with nothing to hide will provide them. Vendors who push back have told you something important about their product, and that information has operational value. The CAIO designation requirement under M-25-21 created a named owner for AI governance. That owner should be running the retrospective review, not just standing up the prospective one.
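As a rough sketch of that baseline check: the three artifact categories come straight from the paragraph above, while the names and the helper are illustrative assumptions.

```python
# The three documentation artifacts named above, treated as the
# practical baseline for high-impact AI regardless of contract date.
BASELINE_ARTIFACTS = {"model_card", "bias_evaluation", "pretraining_disclosure"}

def missing_artifacts(received: set[str]) -> set[str]:
    """Return whichever baseline artifacts the vendor has not provided."""
    return BASELINE_ARTIFACTS - received

# Hypothetical vendor response: a model card and nothing else.
print(sorted(missing_artifacts({"model_card"})))
# ['bias_evaluation', 'pretraining_disclosure']
```

A non-empty result is exactly the signal described above: a vendor pushback you can document, and information with operational value.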
A governance framework that applies to your next AI purchase but not your current deployments is not a governance framework. It is a compliance posture aimed at the future while the present runs unsupervised. Federal leaders who are serious about AI accountability do not wait for the contract renewal. They start the review now.
Evaluate Your Existing AI Governance Controls
Assess your current AI risk posture before your next contract cycle.