States are developing their plans for artificial intelligence (AI) to help address pressing human needs. Within health and human services, there are countless potential applications.
The nonprofit organization Code for America recently released its Government AI Landscape Assessment, which evaluates how AI is transforming public service delivery. It found that states are at varying stages of AI readiness, with most of them navigating early or developing phases and building foundational capabilities. And they’re taking a variety of paths to get there.
According to the Center for Democracy & Technology, a nonpartisan, nonprofit organization that shapes technology policy, governance, and design, more than a dozen states have issued executive orders (EOs) that address how AI should be used in state government. Analysis of these EOs shows that states don’t follow a consistent definition of AI and take varied approaches to risk management, goal setting, and ongoing governance.
America’s AI Action Plan, released by the White House in July 2025, sets a clear agenda to encourage AI adoption among government agencies. Agencies in the earlier stages of adoption must determine what success would mean for their organization, on their own terms.
5 questions states should ask as they apply AI
Given the wide range of perspectives on AI, and of readiness for it, health and human services agencies must consider fundamental questions when developing their plans. Every journey will look a little different, but here are five questions states should explore as they integrate AI:
- What are the best use cases for AI?
AI is a broad category of technologies, and adoption has increased dramatically across industries over the last few years. With countless options for integrating AI with health and human services systems, it’s essential to prioritize the integrations that will bring the most value while balancing use-case feasibility.
Take eligibility systems, for example, and consider some high-potential uses of AI across that process. AI-powered virtual assistants can screen clients during initial inquiries and direct them to their next steps. AI can verify identity documents and paperwork at intake. Caseworkers can use natural language processing to summarize case notes and extract important information from client communications. And AI systems can detect unusual patterns that may require intervention.
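To make the case-note example concrete, here is a minimal sketch of what an LLM-backed summarization step could look like, assuming an OpenAI-compatible API. The model name, prompt, and function are illustrative placeholders, not any specific product’s implementation, and any real deployment would need privacy review before client data is sent to a model.

```python
# Minimal sketch: summarizing caseworker notes with an LLM.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the
# environment; the model name and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def summarize_case_notes(notes: str) -> str:
    """Return a short, structured summary of free-text case notes."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use your agency-approved model
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize these caseworker notes in three bullet points: "
                    "key facts, outstanding documents, and recommended next steps."
                ),
            },
            {"role": "user", "content": notes},
        ],
    )
    return response.choices[0].message.content
```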
Choosing the best use case is not just a technical decision, because ethical, regulatory, legal, security, and operational angles are part of this process. Also, each agency has different needs – this is not a “one size fits all” endeavor. A holistic review can help agencies prioritize the best uses of AI within their organization.
- What data will we need?
For any system to make intelligent decisions, it needs large volumes of trusted data. Health and human services agencies tend to have specific data challenges. For instance, unstructured data in case worker notes or client communications can hold a lot of important information that can be difficult to access. It is common for data to reside in multiple departments – or even separate agencies – and it can be challenging to combine data across systems.
Today, large language models (LLMs) and generative AI offer capabilities that weren’t available in the past, making it easier for organizations to leverage unstructured data. But data quality is a real issue with AI: if your data is inconsistent, redundant, or out of date, an AI model trained on it will not produce high-quality results.
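As an illustration of what a first data-quality pass might look like, here is a small sketch using pandas. The column names and staleness threshold are hypothetical and would vary by agency and schema.

```python
# Sketch of a basic data-quality report over a case records table.
# The "case_id" / "last_updated" columns and the one-year staleness
# threshold are hypothetical; adapt them to your agency's schema.
import pandas as pd

def data_quality_report(df: pd.DataFrame, updated_col: str = "last_updated",
                        max_age_days: int = 365) -> dict:
    """Summarize duplicate rows, missing values, and stale records."""
    age = pd.Timestamp.now() - pd.to_datetime(df[updated_col])
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_by_column": df.isna().sum().to_dict(),
        "stale_rows": int((age > pd.Timedelta(days=max_age_days)).sum()),
    }

# Example: surface issues before any model training or integration begins.
records = pd.DataFrame({
    "case_id": [101, 101, 102],
    "last_updated": ["2025-06-01", "2025-06-01", "2021-01-15"],
})
print(data_quality_report(records))
```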
Be clear about what data you will need to achieve the desired outcome, and begin charting a path to access, secure, and combine that data.
- What skills do we need to implement AI?
It’s important to know if teams have the right skills to successfully implement AI, from integration to deployment and ongoing use. At the start of the project, teams need people skilled at data curation and management, as well as security and privacy. As integration progresses, users must be trained and know how to use this technology. Sharing and acting on ongoing feedback is also essential.
America’s AI Action Plan also advises capacity building for agency teams, such as AI literacy, upskilling and reskilling, or creating new roles. It advises balancing AI innovation with safety and privacy standards. For every role, agencies must determine to what extent to train in-house team members or secure assistance from a vendor. Agencies should beware of external parties who promise quick prototypes but leave behind the challenge of finishing a production-ready solution. Consider the full range of skills needed to carry the implementation through to completion.
- How will we manage risk?
Risk is inherent in any new technology implementation. First and foremost, protecting private information must be a top priority in any implementation of AI. Managing this risk is especially critical when automating processes that can affect people’s benefits and lives. Every health and human services system must deliver accurate information to continue providing services that support people and their families.
Certain use cases should be approached with greater caution, such as automated decision-making or creating a “sole source of truth.” At this point in AI development, humans should review and be accountable for decisions.
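One common pattern for keeping humans accountable is a routing gate that never finalizes a decision automatically. The sketch below is illustrative only, assuming a model that emits an action and a confidence score; the action labels and threshold are placeholders, not a prescribed policy.

```python
# Sketch of a human-in-the-loop routing gate. The action labels,
# confidence field, and threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Recommendation:
    case_id: str
    action: str        # e.g., "approve" or "deny"
    confidence: float  # model-reported score in [0, 1]

def route(rec: Recommendation, threshold: float = 0.90) -> str:
    """Decide where a model recommendation goes next; a person always signs off."""
    if rec.action == "deny" or rec.confidence < threshold:
        # Adverse or uncertain recommendations get full caseworker review.
        return "human_review"
    # Even high-confidence approvals require explicit human confirmation.
    return "human_confirmation"

print(route(Recommendation("case-123", "deny", 0.97)))  # -> human_review
```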
- What does success look like?
Articulating clear outcomes for AI implementations from the outset is key to success. Setting a clear vision, such as reducing manual work for caseworkers or improving user experience, will enable greater focus throughout.
Health and human services agencies often complete a major procurement effort and move on to the next technology priority. This is understandable, but continuing to monitor and update the system is a more sustainable approach that can enable long-term success.
Agencies must build continual monitoring into their plans for success. Key considerations include pilots, extensive testing, and periodic audits to enable the effective and ongoing use of the technology. An “AI evaluation ecosystem” is an essential part of every plan.
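As a sketch of one small piece of such an ecosystem, the example below scores a model against a curated, labeled “golden set” and checks the result against an agreed baseline. The cases, baseline, and stand-in model are purely illustrative assumptions.

```python
# Sketch of a recurring evaluation check against a labeled golden set.
# The golden cases, baseline, and stand-in model are illustrative.
from typing import Callable

def evaluate(model_fn: Callable[[str], str], golden_set: list[dict]) -> float:
    """Return the fraction of golden cases the model answers correctly."""
    correct = sum(1 for case in golden_set
                  if model_fn(case["input"]) == case["expected"])
    return correct / len(golden_set)

def audit(model_fn: Callable[[str], str], golden_set: list[dict],
          baseline: float = 0.95) -> bool:
    """Return True if the model still meets the agreed accuracy baseline."""
    score = evaluate(model_fn, golden_set)
    print(f"accuracy: {score:.2%} (baseline {baseline:.0%})")
    return score >= baseline

golden = [
    {"input": "income below threshold", "expected": "eligible"},
    {"input": "income above threshold", "expected": "ineligible"},
]
stand_in = lambda text: "eligible" if "below" in text else "ineligible"
assert audit(stand_in, golden)  # rerun on every model or prompt change
```

Running a check like this on a schedule, and after every model or prompt change, turns periodic audits from an aspiration into an operational habit.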
Asking these questions will build an important foundation for health and human services agencies curious about AI. There are tremendous possibilities for those who move forward with a strategic and focused plan.
Jeff Reid is Executive Vice President and General Manager for Cúram at Merative, a business with more than 25 years of experience helping national, regional, and local governments to transform the delivery of health and human services, supporting them as they modernize benefits and explore AI. He is a seasoned global technology services leader with experience spanning health and human services, healthcare insurance, and related industries, focusing on complex SaaS operations, digital transformations, and customer-centric initiatives. Jeff holds a master’s degree in business administration from the Moore School of Business at the University of South Carolina.
About our partner
Cúram by Merative
Cúram, by Merative, has over 25 years of experience helping national, regional, and local governments, and organizations across health and social ecosystems, transform the delivery of social services, empower caseworkers, and help individuals and families access the programs they need to achieve better outcomes. Cúram solutions and services expertise are trusted in 12 countries and jurisdictions, supporting over 970 government programs. Available in 7 languages, the Cúram platform connects benefits administrators, social services agencies, and case managers to serve and protect 187 million citizens annually.