Is it time for AI to get real?
We need to ask the hard, specific questions about where AI can help improve government service delivery, where it can’t, and what barriers stand in the way.



AI isn’t new, but the hype around Gen AI has grabbed the headlines, sparked our collective imagination and raised the ethical and governance debate to new levels. The hype would lead you to believe AI is fast becoming ubiquitous and can solve any problem, and leadership FOMO is real as organisations scramble to leverage this new technology.
Yet to date, much of the talk has been of governance, trials and experiments as organisations unpack the different types of AI, understand what it can do for them, and learn the risks and controls they need. With so many types of AI, and new jargon emerging daily, how are we navigating the reality and keeping the focus on the actual problems to be solved? How engaged and ready are we? Do we have the social license? And has the conversation moved beyond governance, or are the risks just too great?
We want to start an honest conversation about the potential, attitudes to adoption and realistic ways forward for AI in government. That’s why Liquid and The Mandarin have teamed up to run a public service AI benchmarking survey. Love it, hate it or just curious – we want to hear from you and understand the reality of AI in your department or team.
Hype or game-changer?
What’s your perspective on AI in government?
- Is it a game-changing technology looking for a problem to solve?
- Is it a cutting-edge distraction that needs to mature and address its inherent challenges before it becomes valuable to government?
- Is it a scalable toolset that helps solve previously intractable problems?
- Does it enable democratisation of data and information that creates new paradigms?
- Is it a risk that needs consistent careful governance, ethics and controls even if that stifles innovation?
- Will it struggle to live up to expectations due to poor data and restrictive legacy systems?
To some extent, all of the above are true. AI has the potential to increase efficiency, improve decision making and drive innovation, but only if it’s directed at solving the right problems. It also brings an unparalleled level of unknowns and risks to a project, which must be carefully managed without stifling innovation. Getting the balance right takes experimentation and courage, and as we all navigate new waters, sharing our learnings will be vital. That’s what this survey is all about.
Time for AI to get real?
If we’re serious about delivering on the potential of AI, we need to stop thinking in terms of abstract capability and start focusing on context. We need to demystify the technology and ask the hard, specific questions about where it can help, where it can’t and what barriers stand in the way.
AI is not a panacea. Neither is it one thing, although it is often spoken of that way. A quick glance at Gartner’s Hype Cycle for Artificial Intelligence shows the variety of types of AI that exist today. While the terminology can be a barrier to entry, a general understanding of the types of AI is needed to understand the options and the risks. For example, Generative AI is often seen as an uncontrolled black box, and as such a significant risk. Yet Composite AI can provide transparency or checks and balances.
Choosing the right AI approach and tools depends entirely on the problem you’re trying to solve. It’s rarely a single tool. The richness of AI comes from overlaying different functions and specialisations. Just like creating a multidisciplinary human team, a good AI solution needs a multidisciplinary approach.
The latest shiny thing?
But this isn’t about the tech. For more than a decade, we’ve been navigating the opportunities and promises of digital transformation in government and other sectors worldwide. And while there have been some notable successes, there have also been many failures. Change is hard, and research shows that when it’s driven by a tech-first approach it’s more likely to fail.
Many of these failed projects started with people keen to use the “latest shiny thing” – a much-hyped tech that would solve any problem or, worse still, a solution without a defined problem. Too often these projects never really did what was promised, missed the mark and were eventually abandoned.
There’s a real risk that, for many, AI is the latest shiny thing, and the accelerating hype around it increases the risk of more failed IT projects. To avoid this, we need to be intentional in choosing AI when it’s the right solution to the problem we’re trying to solve. It’s not about the tech; it’s about the outcome to be achieved.
AI has the potential to be a real accelerator of digital innovation and transformation in government. It solves many of the data and IT challenges that have blocked progress, but not the human ones. Appetite for change, combined with well-honed critical thinking, service design and change management skills, remains just as critical if we are to make progress.
Ground up or top down?
Given the uncertainty and risk of AI, the human barriers to adoption in government are very real. Worldwide there’s been much debate about governance and AI needing to be driven top down to create the vision, appetite, rules, permissions and safety before it can be successfully adopted.
In 2024 the Australian Government launched the Policy for the responsible use of AI in government along with the National framework for the assurance of AI in government, a Voluntary AI Safety Standard and proposed mandatory guardrails for AI in high-risk settings. The DTA is currently piloting the AI assurance framework. Together this suite provides the guardrails for safe AI adoption across the Australian Government. But has it started cutting through to enable AI adoption?
While strong leadership and a clearly stated risk appetite are needed, a top-down-only approach misses the mark. AI implementation needs to be combined with bottom-up multidisciplinary involvement driving ideation, experimentation and adoption. This builds skills and confidence, as well as showing how AI can solve real problems such as easing the load on overburdened teams, improving communication across systems or enhancing customer responsiveness.
For AI to be successfully adopted in government, we first need to overcome significant skills and capacity-building challenges. It’s a complex technical topic with a steep learning curve for a workforce where many are still building basic digital skills.
To move forward, a tiered change management and capacity-building approach that builds skills, encourages early adopters, and creates beacons of inspiration and learning in parallel will be critical. Government–industry collaboration will play a vital role in leading the way here. Analysis of the Australian Government’s recent trial of Microsoft Copilot showed improvements in quality and productivity, but also highlighted the systemic cultural, legal and technical barriers to wider adoption. While trials like these help build workforce familiarisation, they lack the strategic focus to showcase the potential to redesign systems, processes and service delivery models.
Where are we going?
Governments worldwide are grappling with the potential of applied AI. Where can it have biggest impact? Where is the safest place to start? How long does it take to build the capabilities and capacity needed to create significant sustained change? And how do we leverage the benefits while carefully managing career transitions for those impacted?
In agencies like the AFP or ATO, where investigative teams need to handle vast amounts of data in varied formats, AI is a no-brainer. Insights from trials in these areas show how AI can augment and support staff by analysing volumes of data that are beyond human capacity.
AI-powered cameras and drones are also attracting interest in diverse applications, including community safety, identifying non-compliance in Sydney’s urban planning, and detecting feral cats on Kangaroo Island. In all cases, they’re being used to direct human focus to areas of concern, maximising impact and effectiveness.
Food safety is another area where AI is enabling better auditing, tracking and assessment. Dairy farms, food transporters and food importers in NSW and Victoria are trialling AI-powered decision making for licensing applications. This is showing quicker results for customers and more rigorous evidence-based decision making that immediately highlights areas of concern.
Quicker results are also being seen in radiology where AI is being used to complete a first pass assessment of scans, highlighting those that need further human assessment. AI scribes at Gold Coast Hospital are helping doctors spend more time with their patients, and less time recording details in patient files. The Japanese Government is trialling an LLM to help doctors improve their diagnoses of patients.
These trials are vital to point the way, but what will AI in government look like at scale? Does it have the potential to create better social outcomes? And how far is too far?
Governments in some early-adopting countries are actively tackling these issues. Estonia, for example, has used AI to streamline processes such as tax administration, social services and legal systems – right through to predicting court case outcomes and automating document processing, enhancing efficiency and accessibility in the justice system.
Human-at-the-centre?
The value of a human-centred approach is something we’ve learnt from digital transformation. Designing with and for humans is critical to meaningful and sustained change. Put simply, humans create change; technology enables it. And as we embark on AI adoption, that approach becomes even more vital. Robodebt significantly eroded public trust in government use of automated systems, and a recent global study by the University of Melbourne and KPMG found that Australians exhibit the lowest trust in AI among the 47 countries surveyed. The Australian government faces a twin-pronged challenge: building community trust and skills to ensure Australia is well placed for the future, and building the social license for government use of AI.
Building the social license requires the adoption of a human-at-the-centre approach. This is more than a ‘human in the loop’ approach, which seeks to validate or QA AI outputs. A human-at-the-centre approach is about using AI to improve human or community outcomes. It’s about understanding what sort of community we want to be, and what’s important to us. And then it’s about redesigning processes, services and delivery models to better support all humans to achieve their best possible outcomes. AI has the potential either to improve inclusiveness in government service delivery or, without a human-at-the-centre approach, to increase the risk of exclusion.
Join the conversation
There’s a lot that we can learn from each other and the pockets of AI innovation happening across government. This survey is one step toward sharing those insights and making AI a game-changer in government. By surfacing the real barriers and opportunities, we can support informed, confident decisions, knowing where AI is the right fit, and when something simpler or more systemic might be needed.
Take the survey today and we’ll share the insights: Where are the hottest opportunities according to your government peers? Do you start with internal improvements to support quicker staff decision making or create more automated self-service options to reduce demand? Is there an easy way to navigate the plethora of AI tech or get traction to try it out?
A version of this article was published in The Mandarin, as part of Liquid's AI and innovation content series.
About our partner

Liquid
Liquid brings together strategic design, applied technology and human interaction to truly transform people’s lives, for good. For 25 years we’ve helped local, state and federal government work in complex ecosystems to design customer experiences, products and services that are equitable and enabling. Radical evolution starts here. Our philosophy is that people create change and technology enables that change. We work with people to deeply understand their needs and behaviours, and we design new services and ways of working that improve outcomes. That could include working with regulators to change behaviours to drive better outcomes, service delivery partners to prevent people falling through the cracks, or collaborating on infrastructure projects to embed human needs, thereby improving adoption and impact. We're here to help departments navigate change, build new capabilities and ways of working, implement new technologies and create new experiences for their customers and staff.