Driving Digital Outcomes in Government: A Conversation with Chris Fechner, CEO of the DTA

Chris Fechner, CEO of the Digital Transformation Agency, shares insights on outcome-driven governance, AI adoption, and the role of partnerships in shaping the future of digital government.

James Ireland 18 June 2025

How do you define and measure success when it comes to digital governance in the public sector, and what needs to shift to focus more on outcomes rather than inputs?

That's a really good question. So a lot of the work that we do in the Commonwealth is really about making sure that we've got a good understanding of where we want to be. So in our case, it always starts with the strategy. The data and digital government strategy that we have is our North Star.

We add to that things like the purpose and vision of our agencies, and we think about what we need to do to support our outcomes. So what do we need to do to provide services to people and businesses? And how do we make sure that what we're doing is actually delivering on those outcomes?

One area that we've really focused on is benefits management. Benefits management is really thinking about what is the outcome that is going to be experienced—either inside a government, with a customer or with a business, or in policy development, or any other areas that we work with in government.

So when we think about benefits, we think about how they change and mature over time. Often, investments in digital in government are seen as unavoidable, or as a component of something else, like new policy development or a cyber uplift. But what we're focusing on is making sure that what we actually achieve continues to deliver, and that we measure how it delivers. That creates confidence in future investments: when you can closely tie benefits back to the original business investment, you're much more likely to secure future investment.

We're also looking at how we can keep those investments small enough that we can do many of them, because we realise that sometimes those benefits won't be achieved. So we try to reduce the blast radius when they don't succeed. And by having a lot more successful projects at a smaller scale, we get a virtuous cycle of future investment.

But that measurement component is probably one of the key things that we continue to work on and why benefits are so important to us.


Don’t miss Chris Fechner live at the Government Innovation Showcase ACT on the 12th of November 2025. He will be discussing Shaping Australian Public Service into 2026 and Beyond - Delivering Connected, Accessible, Customer Centered Services, sharing his insights on government uplift. Register here



What's helping drive a mindset shift in government around digital and AI investment, and how do we ensure the right investments are made at the right time?

The mindset is something that I think has been spurred on by the introduction of Generative AI. It's actually been a very good catalyst inside of government.

Over the last 50 years, data and digital have been very important for government moving from internal processes out into delivering services through the online environment. And now we're actually thinking about how we can hyper-personalise and really change those environments. But artificial intelligence has come as a bit of an amplifier.

It's amplified risks and benefits. It's also captured the imagination of the entire public. So whereas historically a lot of data and digital could sit in the back office or be delegated down to the IT areas, artificial intelligence, and to a degree cyber, has risen to the top levels of decision-making within government.

That's made people more aware of what they need to be accountable and responsible for, which brings digital squarely into focus alongside their other jobs. AI disrupts the workplace, service delivery, investments, and policy. All of those changes mean it's no longer avoidable, so people are leaning into it.

Certainly within the Commonwealth and the public service, we've seen a really high degree of engagement—both from a fear perspective, because they're frightened about what it might do, and from an anticipation perspective, because of the opportunities it creates.

To support those elevated expectations, we’ve done things like set up minimum protections. Digital has opportunities for both good and harm, and AI amplifies both. So these minimum protections, transparency standards, and accountability measures are applied more directly to senior leaders—not just technologists—and that’s been another driver of behaviour.

The second element is our benefits management policy.

The third is changing the systems of investment—stop thinking about digital as a separate type of investment, and start thinking about how it integrates with policy development or service delivery, and why it’s important that leadership understands the impacts of digital.

We've done a lot of work with Senior Responsible Officers—the people who run big digital initiatives—on how they should be confident or otherwise that their project is doing what it should. We've also been working with academics, like at the University of Queensland, on the characteristics of good governance inside a board for digital initiatives.

So we're really driving accountability, responsibility, and measurement of outcomes for digital. And as I said, digital underpins almost all government services, so it's not something you can avoid. More and more, we're seeing leaders needing to embrace digital because it cuts through everything they do.




In the world of AI and data, how do you approach setting clear baselines and linking adoption to measurable outcomes?

Artificial intelligence is seemingly being used for everything, everywhere within government. But the truth is, traditional AI—like what we use for fraud detection or for analysis and projections—has been embedded in government for a long time.

Government is now experimenting a lot with generative AI and understanding how rapidly that’s evolving. Three years ago we had practically nothing. Now we’re talking about agentic AI.

The first thing we did in the DTA was set up controls—guardrails. Our policy on safe and responsible use of AI in government was an important acknowledgement that we needed to engage with AI, but do it in a way that avoids harm. People don’t necessarily trust government to use AI—and that’s clear from all our survey work—but they do expect us to use it. So this compromise between transparency and explainability, and setting up guardrails before we do anything, is quite important.

The next step was a stocktake: knowing what types of AI we’re using and in what domains. Some applications—like forecasting growth—aren’t controversial. But others—like using AI to reduce the cost of social services or identify fraud—can create distrust. So we needed to understand the tech, the context, the benefits, and the harms. We aligned that with the AI Ethics Framework and considered societal, environmental, and public goods more than just risk control.

Where AI provides benefits to people—that’s where we should focus.

We also need to consider the workforce impact. AI especially disrupts white-collar workers, who make up most government employees. So we’ve looked at upskilling—so public servants can augment their skills with AI skills. The old adage is: you won’t be replaced by AI, but you might be replaced by someone who knows AI.

Third, we need settings that accelerate safe, responsible adoption. If multiple parts of government are working on similar things, we want them to collaborate and produce scalable, safe solutions.

Finally, AI isn’t a “set and forget” function. Much digital tech is built and left alone—but AI needs ongoing review to ensure it still meets expectations or to identify if it starts to fail. Our assurance processes seek to detect ecosystem changes and identify harms before they occur. Then we reset ourselves in governance to maintain beneficial outcomes and prevent those potential harms. There’s a lot of fear around AI, and it’s never going to be foolproof—which is why continuous monitoring is critical.




What’s worked well in engaging and enabling senior leaders to think differently about digital transformation beyond just the tech teams?

As I said earlier, there’s a lot of anticipation around artificial intelligence, and we use that purposefully as a magnifier to connect with senior leaders. They're both frightened and eager.

Frightened because government doesn’t always forgive mistakes. But excited because they’ve seen AI applied effectively in retail and banking. Government is a bit behind in adoption. Right now, it’s being used for low-risk things like chatbots, document drafting, and summarising meetings—all great for productivity. But the bigger value will come when we apply AI to service delivery and policy.

We need to start with educating senior leaders. We’ve created policies and standards that now bind them. But we’ve also developed training. The DTA has a Foundations of Generative AI course. We’re working with the Public Service Commission on what capability uplift we need—not just for tech teams or general staff, but for executives too.

Services Australia has done great training in partnership with ANU on AI for SES. We’re working on getting the right level of understanding, accountability, opportunity measurement, and risk definition in place for senior executives.

If we train both the bottom and top layers well, we can bridge the middle. But if we don’t get top-level commitment, we’ll hit a wall of resistance in the mid-layers—where people fear AI more than they value it.

By addressing that fear, encouraging use, and getting senior executives to openly support calculated risk in AI—we create space to test, learn what works, and accelerate adoption.




What role do partnerships with industry or events like Government Innovation Week, by PSN, play in advancing your digital agenda?

We believe AI is a special new capability. One of the things about it is that there's a great eagerness to learn, but there aren't actually that many people who really have the expertise and capability to deliver on it.

We've dealt a lot with industry and academia, and more and more we're talking to the people who support these learning environments, like the conferences we attend. You can see just how much people want to know from the attendance at these conferences. By getting the right people together, people with real experience in this work, and thinking about how what you hear can be applied in your own context, I think that's one of the ways of accelerating.

We can also learn from the mistakes of others in these things. We can understand what industry is coming forward with in its roadmaps.

And government is one of those areas where, I believe, we're fortunate: we don't have to compete with other governments. When all governments are good, everybody benefits. There's no leaderboard you need to be in front of.

So bringing people together from across government (I'm from the Commonwealth, and there's a large number of New South Wales people here today), one of the great outcomes is that we share our learning. And it finds those connection points where we can collaborate further.

And I would say artificial intelligence is one of those things that has actually driven state, territory, and federal governments to work a lot closer together. Forums like Government Innovation Week are where you can actually find those connections to those places.


Published by James Ireland, Marketing Manager