Ahead of the Government Innovation Showcase Arizona on October 1 in Phoenix, we connected with Steven Hintze, Chief Data and Product Officer at the Arizona Department of Child Safety, to explore how his agency is moving AI from theory to practice. We discuss tangible wins in service delivery, the critical balance between automation and human judgment, and the strategic steps every agency can take to foster a culture ready for responsible AI adoption.
His experience offers a practical framework for any agency navigating the adoption, data governance, and scaling of AI solutions.
Enjoy the insights.
1. Please share with us a bit about your role and focus areas.
Steven Hintze: I am the Chief Data and Product Officer at the Arizona Department of Child Safety (ADCS). I began my 15-year tenure in case management, helping families reunify with their children. I then led the highest-volume office in Arizona for reports of abuse and neglect, reducing open reports by 90% with an incredible team. I also rolled out a statewide performance and process management system and managed the agency's strategic plan. Currently, I lead the team responsible for the Comprehensive Child Welfare Information System, focusing on leveraging data and technology to better serve families and improve child welfare outcomes in Arizona.
2. Could you share specific examples of how AI implementation has improved service delivery or outcomes at the Arizona Department of Child Safety?
Steven Hintze:
- We've implemented a retrieval-augmented generation (RAG) assistant that helps employees quickly find information buried in extensive policy documents, which supports a high-turnover workforce as new members get up to speed. For example, a question like "how do I do drug testing?" has a different answer for each stakeholder, and the assistant surfaces the right one. This tool accelerates learning in a high-churn environment, a long-standing challenge. (A minimal sketch of the retrieval pattern follows this list.)
- We're also beta testing a selective redaction tool to expedite the disclosure of information in court cases. It automates the redaction of specific categories of information, reducing the manual effort of reviewing thousands of pages; a sketch of that pattern also follows this list. Both tools are built within a scalable framework behind our firewalls, ensuring data privacy and security. This is going to save hundreds of hours a week for a team that has been buried and unable to meet demand.
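To make the retrieval pattern concrete, here is a minimal sketch of RAG over policy text. Everything in it is illustrative, not ADCS's implementation: the toy corpus, the term-overlap scoring (a stand-in for a real embedding search), and the prompt assembly, which in production would feed an LLM hosted inside the firewall.

```python
# A minimal, hypothetical sketch of RAG over policy text.
from collections import Counter
import math

# Invented policy chunks; a real system indexes the agency's own documents.
POLICY_CHUNKS = {
    "drug-testing-investigations": "Investigators may request drug testing when safety threats involve substance use.",
    "drug-testing-reunification": "During reunification, drug testing is ordered by the court and scheduled by the case manager.",
    "placement-licensing": "Kinship placements must complete licensing steps before a child is placed.",
}

def score(query: str, text: str) -> float:
    """Crude term-overlap score standing in for a real embedding search."""
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    return sum((q & t).values()) / math.sqrt(len(t) + 1)

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the k policy chunks that best match the query."""
    ranked = sorted(POLICY_CHUNKS.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt; each excerpt keeps its ID so answers stay citable."""
    context = "\n".join(f"[{cid}] {text}" for cid, text in retrieve(query))
    return f"Answer using ONLY these policy excerpts:\n{context}\n\nQuestion: {query}"

print(build_prompt("how do I do drug testing?"))
```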
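Likewise, a hedged sketch of selective redaction: well-defined identifiers are masked automatically and logged so a human reviewer can verify each one before release. The patterns and categories are invented for illustration, not the agency's actual redaction rules.

```python
# A minimal, hypothetical sketch of selective redaction with an audit trail.
import re

# Illustrative identifier patterns; a production tool would use a vetted set.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "DOB": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def redact(page: str) -> tuple[str, list[str]]:
    """Return the redacted page plus an audit log for the human reviewer."""
    log = []
    for label, pattern in PATTERNS.items():
        for match in pattern.findall(page):
            log.append(f"{label}: {match}")
        page = pattern.sub(f"[{label} REDACTED]", page)
    return page, log

text = "Mother's SSN 123-45-6789, contact 602-555-0142, DOB 01/02/1990."
clean, audit = redact(text)
print(clean)   # masked text goes into the disclosure packet
print(audit)   # reviewer verifies each redaction before release
```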
3. Which child safety processes have seen the greatest efficiency gains through AI adoption, and how has this translated to improved staff capacity or response times?
Steven Hintze: The RAG assistant accelerates learning so that knowledge can be applied in decision-making, while the redaction tool speeds up the disclosure of information in court cases, which in turn informs permanency decisions for children. These improvements enhance staff capacity and response times, ultimately benefiting the families we serve.
4. What kinds of decisions are AI tools being used to inform?
Steven Hintze: We adopt a risk-based approach, ensuring AI tools support but do not replace human decision-making. For instance, AI can scan case management details to direct staff to key events, but we don't let it blanket-summarize history, because a human must retain enough knowledge to actually validate the information. We focus on tasks that staff can fully verify, maintaining a human-in-the-loop approach to ensure accuracy and reliability.
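A minimal sketch of that "direct, don't summarize" pattern might look like the following; the keyword list, record shapes, and confirmation flag are hypothetical stand-ins for whatever a production system would use.

```python
# A hypothetical sketch: point humans at key events instead of summarizing.
from dataclasses import dataclass

@dataclass
class Flag:
    note_id: str
    event_type: str
    excerpt: str
    confirmed: bool = False  # stays False until a caseworker verifies the source note

# Invented phrases marking events worth a human's attention.
KEYWORDS = {"removal": "placement", "hearing": "court", "relapse": "safety"}

def flag_events(case_notes: dict[str, str]) -> list[Flag]:
    """Surface candidate key events with citations; never summarize, never decide."""
    flags = []
    for note_id, text in case_notes.items():
        for phrase, event_type in KEYWORDS.items():
            if phrase in text.lower():
                flags.append(Flag(note_id, event_type, text[:80]))
    return flags

notes = {"N-102": "Court hearing scheduled for March; removal petition filed."}
for f in flag_events(notes):
    print(f)  # each flag cites its source note so the worker validates it directly
```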
5. How is your agency preparing caseworkers and staff to work effectively with AI tools—whether through training, change management, or redesigned workflows?
Steven Hintze: Organizational change management is crucial for us. We track engagement scores on our materials to ensure they reach people effectively. Our solutions prioritize user experience and business-process efficiency, reducing the need for extensive training. When training is necessary, we offer multiple formats to cater to different learning styles and use informal language to make the content more accessible.
6. What key safeguards—whether technical, operational, or governance-focused—should agencies prioritize to ensure AI systems are deployed responsibly, and how do you maintain public trust along the way?
Steven Hintze:
- Creating a psychologically safe culture is essential for adopting new technologies; it encourages safe risk-taking and open communication about system flaws. Governance of unstructured data is also critical, as language and definitions evolve over time. Building a knowledge base for advanced RAG implementations helps models understand local context, ensuring more accurate and responsible AI use (see the sketch after this list).
- All of our AI use is done behind the firewall with security controls in place, we don't train models on PII/PHI, and we are starting with internal, low-risk use cases. We are also building the public profile of ADCS technology, getting the message out in many forums so the community can see what we are doing.
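As one illustration of the knowledge-base idea above, here is a hedged sketch of prepending local, agency-specific definitions to a prompt so generic training data doesn't override local meaning. The glossary entries are invented examples of terms whose definitions are jurisdiction-specific.

```python
# A hypothetical sketch of grounding a prompt in a local knowledge base.
GLOSSARY = {
    "TDM": "Team Decision Making meeting, held before any change of placement.",
    "present danger": "An immediate, significant, and clearly observable threat.",
}

def with_local_context(question: str) -> str:
    """Prepend any matching local definitions so the model uses agency meanings."""
    defs = [f"{term}: {meaning}" for term, meaning in GLOSSARY.items()
            if term.lower() in question.lower()]
    preamble = "Use these agency definitions:\n" + "\n".join(defs) if defs else ""
    return f"{preamble}\n\nQuestion: {question}".strip()

print(with_local_context("When is a TDM required?"))
```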
7. What advice would you give other agencies as they implement AI for public good?
Steven Hintze:
- Focus on building a culture of safety and openness. Start with low-risk use cases and ensure robust data governance. Communicate transparently with the public to build trust and demonstrate the positive impact of AI on service delivery and outcomes.
- The big thing is to rely on the tech people for the tech, but not for the use case. If you don't have a specific business problem you actually care about solving, don't bother putting in a solution.
AI becomes a force multiplier for mission impact, transforming how the agency fulfills its sacred duty to serve vulnerable families. The greatest impact lies in the restoration of time: returning hundreds of hours to an overwhelmed team so they can redirect their expertise from manual reviews to meaningful family engagement, with responsible safeguards around the AI itself.