The Department of Health and Human Services (HHS) is signaling a decisive shift in how artificial intelligence is treated across health, human services, and public health programs. AI is no longer positioned as an experimental capability confined to pilots or research labs. Instead, it is being established as core infrastructure that must deliver measurable improvements in surveillance, fraud prevention, administrative efficiency, and mission support.
For federal and healthcare IT leaders, HHS’s AI Strategy is both a roadmap and a mandate. It outlines not just where AI should be used, but how it must be governed, secured, and operationalized to earn trust at scale.
HHS’s AI Vision and Goals
HHS frames its AI Strategy around four clear goals: catalyzing AI innovation and adoption, promoting trustworthy and responsible use, democratizing access to AI capabilities, and cultivating an AI-empowered workforce and organizational culture.
Taken together, these goals reflect a shift in mindset. AI is not treated as a niche technology function, but as an enterprise capability that spans clinical, administrative, research, and public health domains. HHS also emphasizes that AI should augment human judgment rather than replace it, reinforcing the importance of governance, training, and accountability from the outset.
This approach closely mirrors what Slalom’s 2026 AI Outlook identifies as a critical inflection point across public and private sectors. Organizations that succeed with AI are those that treat it as business transformation, not just technology adoption. Strategy, governance, workforce readiness, and execution must advance together, or progress stalls.
Strategic Pillars and the Operating Model Behind Them
HHS structures its AI Strategy around five strategic pillars that define how AI will be governed, deployed, and scaled across the department. These pillars emphasize governance and risk management to maintain public trust, infrastructure and platforms designed around user needs, workforce development and burden reduction, research and reproducibility, and modernization of care and public health delivery.
Operationally, this signals that AI must be embedded into the core mission stack. AI systems are expected to meet the same standards as other high-impact systems, including security controls, lifecycle management, and compliance oversight. The Strategy anticipates increased use of shared services, internal platforms, and standardized patterns for experimentation, testing, and production deployment.
Slalom’s experience supporting federal health agencies reinforces this model. Scaled AI adoption depends on repeatable operating patterns, not one-off use cases. Agencies that establish shared AI services, common governance frameworks, and reusable architectural patterns are far better positioned to grow from dozens of use cases to hundreds without increasing risk or operational burden.
Adoption Trajectory and What the Growth Really Means
HHS’s AI use case inventory shows rapid acceleration, with documented use cases growing substantially in recent years. Across the federal government, agencies have reported thousands of AI use cases, with HHS leading in volume.
What matters more than raw counts is where these use cases are concentrated. Nearly half of federal AI use cases are mission-enabling, supporting functions like finance, HR, cybersecurity, IT, and procurement. This aligns with Slalom’s research, which shows that organizations often see the fastest and most sustainable returns when AI is embedded into core operational workflows rather than isolated innovation projects.
HHS’s trajectory suggests a move from experimentation to pervasive integration, with projections indicating continued growth as AI becomes embedded across internal operations and public-facing services.
Healthcare AI Adoption and Market Signals
Broader healthcare adoption reinforces the direction HHS is taking. US healthcare AI spending has grown rapidly and is projected to increase several-fold by the end of the decade. Predictive AI is already standard practice in many provider environments, with a majority of acute care hospitals using AI capabilities integrated into EHR platforms.
These market signals raise expectations for federal health agencies. Beneficiaries, providers, and partners increasingly expect AI-enabled capabilities that improve responsiveness, reduce administrative friction, and enhance program integrity. The new HHS Strategy acknowledges this reality and positions AI as a core enabler of modernization rather than a future aspiration.
Public Health and Program Integrity Outcomes
AI is already delivering measurable outcomes when deployed with the right data, infrastructure, and governance. In public health, AI-powered surveillance has been associated with significantly faster detection and response times for outbreaks, enabling earlier intervention and better resource allocation.
In program integrity, AI-assisted fraud detection is helping agencies analyze massive claims datasets in near real time, reducing improper payments and strengthening stewardship of public funds. These results reinforce a key principle highlighted in Slalom’s AI research: AI value is realized when models are tightly integrated into operational workflows, decision-making processes, and accountability structures.
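To make the fraud-detection idea concrete, here is a minimal sketch of one common building block: statistical outlier screening over claim amounts. This is illustrative only; real program-integrity systems combine many signals (billing codes, provider networks, timing patterns), and the field names here are assumptions, not an HHS schema.

```python
from statistics import mean, stdev

def flag_anomalous_claims(claims, threshold=3.0):
    """Flag claims whose billed amount deviates sharply from the mean.
    Illustrative z-score screen; `claims` is a list of dicts with
    hypothetical 'claim_id' and 'amount' fields."""
    amounts = [c["amount"] for c in claims]
    if len(amounts) < 2:
        return []
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [c["claim_id"] for c in claims
            if abs(c["amount"] - mu) / sigma > threshold]

# Fifty routine claims plus one extreme outlier.
claims = (
    [{"claim_id": f"C{i}", "amount": 120.0 + i} for i in range(50)]
    + [{"claim_id": "C999", "amount": 9_500.0}]
)
print(flag_anomalous_claims(claims))  # → ['C999']
```

In practice, a screen like this would run continuously against incoming claims feeds and route flagged cases to human reviewers, which is what "tightly integrated into operational workflows" means in concrete terms.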
Governance, Policy, and Trustworthy AI
HHS’s AI Strategy does not separate innovation from governance. It explicitly aligns with federal mandates such as OMB Memorandum M-25-21 and the NIST AI Risk Management Framework, which require agencies to inventory AI systems, apply structured risk management to high-impact use cases, and maintain transparency and accountability throughout the AI lifecycle.
Slalom’s work with federal agencies shows that trustworthy AI cannot be bolted on after deployment. Governance must be operational, embedded into intake processes, development pipelines, testing protocols, and monitoring practices. Agencies that integrate AI governance with existing security, privacy, and compliance functions are better positioned to scale AI while maintaining public trust.
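"Governance embedded into development pipelines" can be sketched as a gate that a deployment pipeline calls before promoting a model. The tier names and required artifacts below are hypothetical placeholders, not an HHS or NIST-mandated schema; the point is that the check runs automatically, not as an after-the-fact review.

```python
# Required governance artifacts for the strictest tier (illustrative names).
REQUIRED_ARTIFACTS = {"impact_assessment", "security_review", "monitoring_plan"}

def governance_gate(use_case: dict) -> tuple[bool, list[str]]:
    """Return (approved, reasons). High-impact use cases must carry
    every required artifact; lower tiers need only a monitoring plan."""
    tier = use_case.get("risk_tier", "high")  # default to strictest tier
    have = set(use_case.get("artifacts", []))
    needed = REQUIRED_ARTIFACTS if tier == "high" else {"monitoring_plan"}
    reasons = [f"missing {artifact}" for artifact in sorted(needed - have)]
    return (not reasons, reasons)

ok, why = governance_gate({
    "name": "claims-triage-model",        # hypothetical use case
    "risk_tier": "high",
    "artifacts": ["impact_assessment", "monitoring_plan"],
})
print(ok, why)  # → False ['missing security_review']
```

A gate like this is what makes governance "operational": the pipeline blocks promotion until the missing artifact exists, rather than relying on a manual checklist after deployment.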
Security, Risk, and the Role of Zero Trust
AI systems expand the attack surface by introducing new data flows, models, and third-party dependencies. HHS’s strategy recognizes cybersecurity as a prerequisite for AI success, not an afterthought. High-impact AI systems are expected to meet minimum risk management practices covering data quality, security controls, testing, validation, and impact assessments.
From Slalom’s perspective, this reinforces the importance of Zero Trust aligned architectures for AI-enabled environments. Continuous verification, strong identity and access management, and robust monitoring are essential when AI systems interact with sensitive health, benefits, and financial data. Secure infrastructure and disciplined operations are what allow AI innovation to move quickly without increasing risk.
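The Zero Trust principles above (continuous verification, strong identity, per-request evaluation) can be illustrated with a per-request policy check: every call to an AI service is authorized on identity, device posture, and data sensitivity, never on network location. The attribute names are assumptions for this sketch, not a specific product's API.

```python
def authorize_request(request: dict) -> bool:
    """Grant access only when every check passes for this request.
    Hypothetical fields: identity_verified (strong auth, e.g. PIV/MFA),
    device_compliant (managed, patched endpoint), and a clearance match
    for the sensitivity of the data being accessed."""
    checks = [
        request.get("identity_verified", False),
        request.get("device_compliant", False),
        request.get("data_sensitivity", "high") in request.get("clearances", []),
    ]
    return all(checks)

print(authorize_request({
    "identity_verified": True,
    "device_compliant": True,
    "data_sensitivity": "phi",
    "clearances": ["phi", "pii"],
}))  # → True
```

The design choice worth noting is the default-deny posture: any missing attribute fails the check, which is the behavior NIST's Zero Trust guidance expects when AI systems touch sensitive health and benefits data.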
Implications for Federal Health Modernization
The HHS AI Strategy offers a blueprint for AI-enabled modernization that is ambitious yet grounded in governance discipline. Modernization is no longer just about cloud migration. It is about building AI-capable platforms that unify data, analytics, automation, and security under consistent standards.
Agencies that align data modernization, cybersecurity, procurement, and workforce strategies around AI as a central organizing principle will be better positioned to deliver value at scale. This convergence is a recurring theme in Slalom’s 2026 AI Outlook, which highlights that fragmented efforts are the primary reason many organizations struggle to move beyond pilots.
From Strategy to Practice
HHS has made its direction clear. AI will be embedded across public health, human services, and internal operations, with a strong emphasis on trust, governance, and workforce enablement.
The next challenge for federal health leaders is execution. Translating the strategy into operational reality requires more than models and tools. It requires secure, scalable platforms, clear governance frameworks, modernized procurement approaches, and sustained investment in people.
Slalom works with health agencies to operationalize AI strategies like HHS’s by aligning governance, technology, and workforce transformation into a single execution model. The result is AI that delivers measurable outcomes while remaining defensible, auditable, and trusted.

Turn HHS’s AI Strategy into Operational Reality
Get practical guidance on how federal health agencies can operationalize trustworthy AI, aligning governance, security, and execution to scale impact with confidence.
Operationalize AI Strategies for Measurable Outcomes
Talk to RavenTek and Slalom about AI readiness for federal health.