Cale Hubble: All right. Good afternoon, everyone. Thank you very much. Welcome back to BI Connect 2025. Thanks again to Laura, Alex and Elizabeth for the presentations this morning. I've been a big fan of BETA's energy work, and also of New South Wales's sludge audits, for many years.
The NSW sludge methodology is really making a splash all around the world, so it's really exciting to see where it's up to within the NSW government. So, my name's Cale Hubble. I'm from BETA, and this next session is going to be a panel discussion.
And we're going to broaden the conversation a little. So far this morning we've been talking about behavioural science specifically, thinking about the policy cycle and the use of behavioural science in service delivery. But we're going to broaden the picture a little bit now to think about evidence use in government more broadly.
So for over a decade, really all around the world, behavioural scientists in government have been part of a broader evidence-informed policy movement. One example to illustrate those connections is David Halpern, who led the United Kingdom's Behavioural Insights Team and then went on to lead the UK's network of What Works centres, which are general clearing houses for policy-relevant research. And we've seen that kind of interplay all over the world. So in this discussion, we're going to take that broader perspective and think about how the Australian Public Service is set up to encourage and enable the use of evidence in policy, whether that's from behavioural science or indeed from other disciplines and other sources. So over the years, the Australian Public Service, or the APS, has developed a group of evidence-related units in central departments. Together, these units ensure that the Australian Government's policy decisions are made with access to high-quality and relevant evidence. Our units try to set up the structures, the systems and the processes for evidence generation and consideration within the APS.
So I'm really excited to have the heads of three of those units with me here this afternoon. We've got Susan Calvert, who you met earlier, Managing Director of BETA. We've got Joanna Abhayaratna, who's the Executive Director of the Office of Impact Analysis, and we have Eleanor Williams, Managing Director of the Australian Centre for Evaluation, or ACE. So before I turn to the panel, I'm aware that there are a number of people who aren't familiar with the acronyms we throw around, so I might just share a little slide that talks through some of the ways that our units fit together.
So here's our policy cycle that we heard a bunch about already earlier today. From identifying a problem on the left, to thinking about what you might do about it, to implementing that decision and then thinking about what you can learn from what you've done.
As we heard from Laura's presentation earlier this morning, BETA often comes in at these early stages, where we think about how we can understand a policy problem better by thinking about the people that are involved, and then by designing and testing possible solutions. The Office of Impact Analysis, or OIA, supports policy officers through this decision phase in the middle here. So, when Cabinet is making a major decision, the OIA supports the relevant APS department to conduct what's called an impact analysis.
So an impact analysis is a structured process that helps policy designers think through different policy options and carefully consider the impacts those options might have on businesses, on individuals and on the community more broadly. This ensures that the decision maker has access to the evidence base they need to make a really well-informed decision. So, by bringing the right evidence to bear at that critical moment, the OIA helps to ensure that the policy options themselves are well designed, well targeted and fit for purpose.
So once the decision is made, the APS then moves on to an implementation phase, which typically entails many more decisions about the details. Laura's presentation this morning showed how many small decisions are involved in implementing any large-scale decision.
And BETA again comes in at this point to help optimise some of these details. Sometimes we call these last-mile problems, and the Australian Centre for Evaluation, or ACE, can come in here as well. ACE is the APS's centre of excellence in evaluation.
ACE helps policy designers set up the systems they need to monitor the impact of their policy during its rollout. And finally, in the evaluation phase, we look back and assess whether everything that we've done has worked. Did we have the impact that we thought we would have? ACE helps policy officers learn the lessons from their past decisions, ensuring that future decisions are steadily better, and better informed by a growing body of rigorous and useful evidence.
All right, enough from me. Let's turn to our panel. I'm going to ask each of you: how does your unit help the Australian government deliver for Australians? Feel free to point out all the ways this slide misrepresented your unit with its pretty, stylised diagram, so we can get rid of that slide now. And yeah, Susan, passing to you. How does BETA help the Australian government deliver for Australians?
Susan Calvert: Thanks, Cale. Yeah, you've heard a little bit about BETA already today. We're all about designing policy based on an understanding of Australians and then rigorously testing policy solutions to see what works. We work in partnership with teams across the Australian Public Service and we try to work early in the policy cycle just as your
diagram showed, Cale. Policy teams come to us with a challenge or a question, and we do our very best to help understand and solve their problem. Sometimes we provide quick advice, drawing on the existing literature and our experience working in similar contexts, but most of the time we do our own research: we talk to people to understand the problem, we design possible policy interventions based on that understanding, and then we test those interventions to see if they're going to work in the real world.
And this can involve undertaking surveys, randomised controlled trials or field trials.
Our team is a fabulous mix of behavioural and social scientists, economists, data analysts, policy experts, and we're all supported by our brains trust of an incredible academic panel from top universities who provide us advice.
I guess BETA is an in-house research capability and a knowledge broker. We help policy makers apply empirical evidence to their work. So success for us looks like two things. Firstly, we want to bring Australians' voices into the room where our policy decisions are being made. We try to understand where people are coming from so that government programs and services work with people, and we don't add extra burdens or complexity or barriers. And secondly, we want to check whether policy proposals work in practice, ideally before big funding decisions are made, so that we're investing in things that do work. We try to test ideas with real people who will be using the service or the program, and that helps government focus its efforts and investments where they're most likely to be effective. Thanks, Cale.
Cale Hubble: No worries. Thanks, Susan. Joanna, may I pass to you? How does the OIA help the Australian government deliver for Australians?
Joanna Abhayaratna: Thanks, Cale. Yeah, the work of the Office of Impact Analysis has evolved over a long time. It's been around since the late '80s, when it was probably better known as the Office of Best Practice Regulation. But over that time, the core focus of our work has been on promoting quality impact analysis. An impact analysis, as the name suggests and as your diagram points to, is focused on examining and providing information to decision makers, as well as stakeholders, about the expected impacts of a proposed policy and a range of alternative policy options. By providing better evidence, impact analysis helps to ensure that the policy development process compares the outcomes of alternative ways of doing things and lands on the solutions that provide the greatest net benefit to the community, thinking through all the costs imposed and the trade-offs. A lot of what we do supports Cabinet processes, but we also support other decision-making processes, including Commonwealth-State ministers' meetings. And the analysis really goes beyond what's in the budget to look at the broader costs and benefits, the intended and unintended effects of policies, the direct and indirect effects, drawing on all the information that's available at that point in time, whether that's literature reviews, evidence from behavioural studies or the like, as well as more traditional cost-benefit analysis.
So really what we're looking for, and what we work with departments to contribute, is quality impact analysis around how a policy can change outcomes for people, businesses and the community. It's been a very long-standing function across different central agencies in the Commonwealth, but today, impact analysis applies not just to regulatory policy but to any policy that's likely to have a major impact on, again, people, businesses and the community. For example, we look at big changes in our social service systems, as well as very small changes to particular regulatory approaches, for example for licensed medical providers. The OIA is very much an enabling function. We work with agencies; policy agencies are still responsible for developing the impact analysis and providing advice up to Ministers and decision makers about the impacts. What we do is support agencies in working out when a detailed impact analysis is required to support a decision, and then we provide advice and support about how to improve the quality of that analysis. Thanks, Cale.
Cale Hubble: Great. Thanks, Joanna. Eleanor, are you looking forward to ACE's 40th birthday? Perhaps in a couple of decades. Yeah? Over to you.
Eleanor Williams: We had our second birthday this year and it felt like quite a milestone. Thanks so much, and I think you'll see ACE's role is very much complementary to the roles of BETA and of the OIA. We were established in 2023, with some really strong championship from Assistant Minister Andrew Leigh, and with the vision of putting evaluation evidence at the heart of policy design and decision making. We generally talk about doing this in three ways, and it all rolls off the tongue: we improve the volume, the quality and the use of evaluation evidence. But actually, each component of that is important. There was something called the Thodey review in 2019, a capability review of the APS, which said that evaluation activity is a bit patchy and a bit variable in quality. And I know some people heard that as there being no evaluation happening and it all being bad quality, but it genuinely was patchy; we've always had pockets of great practice across the APS. So on improving the volume, we're trying to get much more consistency, with more policies and programs being evaluated. On improving the quality, we're particularly passionate about experimental and quasi-experimental approaches to evaluation, which bring a lot of rigour to the idea of causal inference. And then there's the use of evaluation evidence, and I think this is the big challenge that we're all tackling in our various ways: you really want the decision makers to pick this stuff up and run with it.
We do that in a few different ways, and ultimately what we hope is that if we do those things, we'll embed a culture of evaluation across the Australian Public Service, which is something that lots of governments around the world are trying to do at the moment. So, what does this look like in practice in our day-to-day work? We partner with departments to directly deliver impact evaluations that are rigorous and ethical, to really demonstrate what great practice looks like. We provide training and a toolkit with guidance and templates, and a lot of that training we cross-reference with BETA and with the OIA to make sure we're producing complementary materials. And we engage in budget and Cabinet processes to think about where we can strengthen evaluation plans.
And we do some work to provide leadership and direction that builds a culture of evaluation across the APS, and this is things like leading the Evaluation Profession that you heard Steven Kennedy speak about at the start of this session, and also running things like the Commonwealth Evaluation Policy. The Evaluation Profession, which we co-lead with the APSC, has now hit over 4,000 members. So, if you are interested in this evaluation evidence world, it's a great profession to join to stay up to date and hear what's happening across the APS; that's just a little plug for that as well. Back to you, Cale.
Cale Hubble: Thanks, Eleanor. All right. For the three of you, I might throw to each of you now and ask for your favourite anecdote or story about evidence being effective. So yeah, let's plunge down into some of the little details. Susan, should we start with you?
Susan Calvert: Thanks, Cale. Yeah, sometimes I think what's exciting is that BETA can make such a huge difference by just sharing a small bit of knowledge or bringing a different perspective to a policy problem. So, for example, if a program has low uptake, we can look into the processes from a customer perspective and see the barriers they're facing. First they'll need to know about the program, and then they'll need to be able to go through the process to access it, and sometimes these processes can be less than user friendly. So, my favourite story is where we can just come in and provide some simple advice: for example, to remove all the barriers by providing a rebate directly to customers who meet the criteria, using our big data assets and making it automatic. It's a really simple solution, but it really can take a program from being very inaccessible to 100% uptake.
So yeah, other times, as in the BETA project that Laura shared earlier today, we work closely with an agency through each of the steps of the policy cycle and bring evidence to help them make decisions at each point to build a solution that works. So yeah, there's so many good stories.
Cale Hubble: Fantastic. Thanks Susan. Joanna?
Joanna Abhayaratna: From an OIA perspective, we don't produce the work ourselves, I suppose, or get involved in the same way as BETA and ACE. But I think a very big strength of the Commonwealth's approach to impact analysis is that one of the key principles is informed decision making: the information is available to policymakers, and the information that policymakers use for a decision is published at the earliest opportunity. So that means we get to see how this plays out in practice in public debate, past that decision point, for example when bills are going in. Part of the OIA's role is to work with agencies to make sure that major policies, particularly those introduced into the Parliament, have the analysis attached to the explanatory memorandum. We also maintain a database on our website of all impact analyses. It can be difficult to trace the influence of an impact analysis on a specific decision, but we really get to hear how that analysis can shape the public debate and contribute to a shared understanding of what the evidence, as we currently know it, is for a particular policy. A recent one where we got to see that debate evolve was the new vehicle efficiency standard, which came into force in July this year. That's the standard that imposes a limit each year on the CO2 across the fleet of vehicles that a supplier can bring into the country. In that case, there was a draft impact analysis used in the public consultation process, and then, when the final one was published, we got to hear the analysis referenced in the Parliament and in the Senate, with new Senators getting up and talking about it. We also see the analysis being used across social media, again to inform that debate. So it's interesting to see how it gets used by decision makers and then by the broader public in shaping that shared understanding.
Cale Hubble: Yeah, great, and great to offer the structure that enables that evidence to be well articulated and neatly packaged in a way that we can use for that conversation. Eleanor, your favourite anecdotes?
Eleanor Williams: Yeah, I'm going to be cheeky and give two, just because I think it's so important to think about what fit-for-purpose evaluation looks like. So, I thought I'd give an example of a small one and a larger-scale one, because sometimes we need to do something quite quick and targeted, and sometimes we're trying to do a big multi-year project. One piece of work we did last year was to support an apprenticeship review that was being led by Ian Ross and Lisa Paul, using quasi-experimental methods. What they wanted to know was: do apprenticeship wage subsidies increase take-up of apprenticeships, and by how much? And what are the trade-offs? Because, as Joanna said, it's really important to think about those components. It was, in some ways, a very straightforward piece of work. We did some modelling to show what apprenticeship take-up would likely have been over the pandemic period, then what happened when the subsidy came in, and compared the two. It was actually quite clear cut: the subsidy came in and apprenticeship numbers went up; the subsidy went away and apprenticeship numbers went down. There was a bit of modelling needed to show what would have happened otherwise. But the interesting part is then to look at the cost effectiveness. So it had two components: a quasi-experimental comparison to estimate the causal impact of the subsidy, and then a cost-effectiveness comparison. And it basically showed that, yes, you can subsidise lots more apprenticeships, but they'll be very expensive. This was just a useful input to the overall apprenticeship review, which has all been published now, and you can see the bit of ACE analysis that sat behind it. It was about an eight-week project, and it goes to show that where you've got high-quality data and some good technical skills, you can get a really punchy set of findings to support policy decision making quite rapidly. On the other hand, one of our partnerships is with the Department of Employment and Workplace Relations, looking at online employment services. That's a massive system; hundreds of thousands of people go through it every year.
But you've had multiple inquiries and reviews say, hey, I think it would work more effectively if we had more career advice, if we reduced mutual obligation requirements, if we pulled people off online services more quickly into more intensive services. But these are all untested propositions; these are ideas that have come up in reviews. So effectively what we're doing is running reasonably large-scale randomised trials in that system to test, for example, what the impact is when you bring on more career advisors. I think sometimes people feel a bit worried about the ethics of randomised trials, but you can see in that context, you've now got a group of people who will all get career advice, and we'll see if they get better outcomes, because we genuinely don't know, and career advice is expensive. So, it's a big policy decision that needs to be taken. Those trials are out in the field at the moment. I'd love to say we had findings to share today, but we're still a little way out from that. But I think that will be a really great demonstration of how, with government running these big systems, we can run substantive trials that give you a lot of very rich information about what happens if we make a substantive policy change. So those are a couple of examples that I really like at the moment.
Cale Hubble: I think we're all really excited for your employment trials to come out, Eleanor. There'll be a few downloads of your report when it goes online, I think. So, it might be nice to get a couple of examples about how our units have worked together. Susan, can I pass to you for one about BETA and the OIA?
Susan Calvert: Thanks, Cale. Yeah, a nice example of BETA's research feeding into an impact assessment that Joanna and her team were running was some work we did on cyber labels for smart devices. Of course, in our houses now, we have lots of smart things like smart fridges, light bulbs, even smart kettles, and of course there's a risk of these being hacked. The Department of Home Affairs, under the government's cybersecurity strategy, committed to exploring regulatory and voluntary incentives to improve cybersecurity right across the economy, and this included the idea of a label that could go on to these smart products, telling consumers at the point of sale how secure the product is. The idea being to help consumers make a more informed choice, and to incentivise manufacturers to lift their game if their competitors were able to achieve a higher security rating.
So Home Affairs asked BETA to test whether a label would be effective in Australia, and we knew from surveys that Australian consumers say they want to consider cybersecurity along with other factors like price and brand and, the other feature for me, colour. So, BETA conducted a discrete choice experiment, which allowed us to measure the relative impact of a label on people's choices in a hypothetical shopping scenario. That research told us that the presence of a cyber label on a product made Australians up to 19% more likely to buy that product, and of course a label showing a high security level made people even more likely to choose that product over others. Those findings gave Home Affairs confidence to go ahead and propose a new labelling scheme, and that new policy proposal included an impact assessment. So, I'll pass to Joanna to let her tell you that part of the story.
Joanna Abhayaratna: Thanks, Susan. Obviously, BETA's research in that particular case played a pivotal role in strengthening the quality of the analysis by the Department of Home Affairs and ultimately the advice to decision makers. It really enabled the analysis to have a better grasp of the different policy options, because a key part of impact analysis is walking through the different alternatives, regulatory and non-regulatory in nature, and really unpacking the real-world situation: what's likely to work in practice in this particular case. The analysis was able to conclude that a voluntary labelling scheme could positively influence consumer choice, but also that careful label design and consumer education were needed to ensure its effectiveness. Ultimately, the analysis, which is all public on our website, which is fantastic, found that a combination of mandatory product standards and a voluntary labelling scheme for smart devices was the best option. So, they worked through a mandatory product standard to ensure smart devices were built to a minimum level of security, combined with a voluntary cyber labelling scheme to provide that additional guidance to consumers, to help inform their choices around smart purchasing decisions and to overcome that information asymmetry for consumers. I think it's a very nice example of how, before a policy comes into effect, evidence helped shape what was likely to work, rather than just imposing the burden of a mandatory standard and not seeing consumers make those choices.
I think that example, and others like it, really showcases how important it is to understand those real-world practices, to have reasonable assumptions, and to be able to design the right combination of policies.
Susan Calvert: I think that's right, Joanna. Home Affairs really valued the evidence in that scenario, and they came back to us again after it was approved to inform the implementation of that new labelling scheme. So, we did some further interviews and a survey to have a look at how consumers think about cybersecurity when they're buying those smart devices, and now they're working with an industry body on the details of the scheme. So, we're staying closely involved in that and looking forward to seeing how successful that is when it's out in the real world.
Cale Hubble: Yeah, looking forward to that. Eleanor, we're a little bit behind time, but maybe a quick little outline of some work that ACE and BETA have worked together on?
Eleanor Williams: I think I can do a very quick run-through, because we recently evaluated an initiative that BETA helped to design, so that was a great example of ACE and BETA supporting evidence use at different stages of the policy cycle, like your diagram showed, Cale. Basically, we worked with the Australian Charities and Not-for-profits Commission. We were testing a behavioural approach that's meant to improve on-time submission rates for charities on their annual information statement. It's quite dry in a lot of ways, but it has lots of application across other situations where you've got organisations needing to submit information. Basically, we were testing an additional email, and it really significantly increased the number of charities submitting their materials on time. So, I think there'll be a lot of organisations quite interested in this, both the initiative that BETA designed and the findings of the evaluation, which say this really will give you a proper bump up in your on-time submissions of information.
Susan Calvert: We really enjoyed working with ACE on this project. We do this a fair bit, BETA and ACE working together because we have very complementary skill sets and capabilities. And so, we can review each other's work and sort of attend workshops to see that we're all learning from each other and applying best practice, but that's probably all I need to say about that one.
Cale Hubble: Yeah, fantastic. Now, I know all of us do our specific jobs on our specific policies really well, but we also try to infuse our perspectives and our ways of working around the entire APS. We'd like to hear from each of the three of you about the work your units do building the APS's capacity for evidence-informed decisions more broadly. Eleanor, should we start with you?
Eleanor Williams: I think capability building is a common goal across all our organisations. So, there's a few things we've done. Some things we worked on together: we worked with BETA on a series of online modules that are available on APSLearn, which are foundational modules, funded by the Capability Reinvestment Fund. If you think about the learning needs of our APS community, it's really good to have those foundation-level modules. And we've since built on those a little bit more, looking at how AI can also be used in evaluations, so that's getting to be quite a good set of materials across APSLearn. I might pass to the others, but I'm really interested in this idea of capability building, and I'm conscious it's a multi-strand approach: online modules are one thing, face-to-face training is another, but actually a lot of it is the joint projects we do with departments and agencies that make the biggest difference to capability.
Cale Hubble: Yeah, fantastic, Eleanor. How about capability building at the OIA, Joanna?
Joanna Abhayaratna: Thanks, Cale. I was thinking about this. In some ways, when we think about impact analysis, we'll look at thousands of policy decisions that might come forward in a particular year, but we'd only require this kind of formal analysis and process on a handful of those proposals.
But the big value impact analysis adds is the framework; it's a really useful tool for all policymakers and decisions. The framework sets out a way to evaluate a public policy problem and outcome, providing a really important nexus between public policy theory and public policy practice. It breaks down into seven elements, or questions, that policy advisors should consider in evidence-based policy advice. We ask people to really set out the policy problem and be clear about the policy objectives and intent, which is probably harder than it sounds, and to do that before they start thinking about which options are viable in the particular case. We ask people to look at the net benefit, and again we're really agnostic; we can draw on a range of different evidence bases there. Consulting with stakeholders is another element, along with how you draw the evidence together, and finishing off with implementation and evaluation considerations. We provide a lot of training, particularly to graduates and people new to the APS, on how to apply those seven questions to a particular policy context. We try to do face-to-face training around that, and then, where someone has an impact analysis they're bringing forward, we provide more specific workshops and might work through the technical details of a particular proposal. And lastly, we also do quite a bit of work with Asia-Pacific countries who are trying to build their regulatory impact analysis capability; they're very interested in how we work through that process and how you work through the Cabinet process. So that's lots of fun, working with those kinds of partners.
Cale Hubble: Yeah, that'd be fascinating. And that guide is published on the OIA website, if anyone's interested. Susan?
Susan Calvert: Thanks, Cale. Capability building has similarly been one of BETA's core objectives. Like Eleanor and Joanna, we have networks, events, training and published guidance, and we work collaboratively with teams across the government.
Each year we train over 2,000 public servants, and we work really closely with the behavioural teams embedded in agencies across the public service. We're really passionate about sharing our learnings and learning from others. For example, last year a group of agencies came together to co-develop a suite of training modules on different aspects of evaluation, which are now available on APSLearn and which the APS is continuing to build on. We worked with ACE on this as well, to create a new e-learning module about randomisation. BETA has got lots of experience in randomisation, and we're really keen to share that knowledge. And of course, events like this, Cale, are also part of our capability building efforts.
Cale Hubble: Absolutely. All right. So just to finish up, let's take a bit of a future perspective. Where would you like to see the APS's evidence use in a few years’ time? Eleanor, I know you've got thoughts on this.
Eleanor Williams: I do have thoughts on this. I've been involved in a little bit of international work, and it's interesting how some other countries have really built out their evidence ecosystems. Quite a lot of the organisations and entities in government here, including ACE and BETA, have only come in the last decade or so, so I think we're just starting to build out our evidence ecosystem in Australia. But if you look at other countries, some of this is much more embedded, and in particular you've got more bridges between government and academia, which really help us learn from the research and evaluation evidence that exists in the world and draw on the expertise of the academic sector, so that we can harness the best of both worlds and really build that partnership. You see examples like the UK, where there's a big What Works network, and places like the Netherlands, where you've got a really embedded process for parliamentary reviews, where almost all policies and programs report back to a parliamentary committee. I think we've got a lot to learn from those other countries about how we get that really rich network of evidence entities working much more as business as usual in government.
Cale Hubble: Yeah, fantastic. Susan, I might pass to you next on that one.
Susan Calvert: Thanks, Cale. I absolutely agree with Eleanor there. We really do need to make better use of the experts and evidence we already have across Australia, and I think that includes local communities and people with lived experience, because they're experts as well. As I mentioned earlier, BETA has worked to build a bridge to academia, and we do work closely with academic researchers and partner with them, particularly in undertaking some of our more complex work. We have an incredible panel of academic advisors, as I mentioned earlier, that we meet with regularly; they provide us with advice on research methods, innovative tech that we can use and data analysis, and they connect us with experts across the world. So, we really think that's a good model. But I think one of the challenges we currently have is that it takes time to answer questions rigorously.
Often policy advisors will come to us with great questions and a real interest in listening to evidence to help them make a decision and get a good outcome, but the time frames they need that information in prevent us from doing anything other than quick advice. Our response to that has been to move up the policy cycle and get involved earlier in policy design, and this involves being a bit more proactive in offering where we can assist with new government priorities. It obviously helps that we're located in the Department of the Prime Minister and Cabinet, very close to the policy cycle, so we can do that.
But we've also had to get faster at testing policy solutions and sharing our insights in a way that's immediately usable by policy teams, and I think there's a lot more we can do here. So, my vision for the future is that we're building methods that we can use quickly, like quick user testing, and of course AI has a role to play here. I know BI teams around the place are already using and testing AI to see how it can help us synthesise existing literature more quickly and also help with some of our testing.
But we're a bit cautious, and rightly so, because AI models come with their own constraints and biases, so we'll be careful in how we use them. But they certainly are fast, and so it's worth pursuing.
Cale Hubble: Thanks, Susan. And Joanna?
Joanna Abhayaratna: Thanks, Cale. Going back to your diamond diagram at the start, you can see that the Office of Impact Analysis is often involved at that more pointy end, just before and around the decision point. So, this means there can be limited time to do new and novel analysis.
But we can see that ecosystem of evidence developing across all those diamonds that you had, and together I think that will be drawn through impact analysis and other decision-making processes into informing better decisions: an increased ability to draw on evaluations as they build up more publicly, and a transferability of learnings from work that BETA does in one area and ACE does in another, to apply to other policy problems coming forward.
As the Secretary talked about at the start, there are improvements in linked data and the ability, especially combined with AI, to quickly analyse that data and create new insights in the context of a particular policy problem that governments are facing. There's really great scope there to improve the quality of analysis, and for that evidence ecosystem to improve consultation with all Australians and build that better into decision-making processes.
For example, recently, as part of Australia's Disability Strategy, there was a release of good practice guidance on engaging with people with disability. That new guidance, and the more inclusive consultation coming forward, can combine to bring together a great evidence base to think about what we expect the likely impacts to be before a policy has commenced.
Cale Hubble: Thanks, Joanna. And I respect that the word evidence has been used in a narrow way in the past, and perhaps we need to crack that open, but that's perhaps a spicy one. There's an interesting question from the audience that I might just throw at you, quickly, before we wrap up.
What should we do if the evidence that we find is not in line with the decision maker's expectations? Anyone want to put their hand up to answer that one first? If not, I'll just say, Eleanor, how about you go?
Eleanor Williams: I find this super interesting, how we frame and communicate evidence, because there is a very purist view you can take, which is to say, look, we don't sugar-coat evidence; we put it like it is, and they take it or leave it. But having worked in evaluation for so long, what I've realised is that there are ways you can frame it. In almost every evaluation you'll find that some things are going well and some things aren't, and there's a way to write a report which says you've failed on A, B and C, and that makes it very hard for the end user to engage with it and feel like it's constructive. Whereas if it starts out with, here are several things that are going quite well, and here are opportunities for improvement here, here and here, all of a sudden exactly the same finding is much more palatable, and you really haven't crossed any integrity boundaries. You've still stated what the evidence has shown you in an accurate way, and you've communicated the things that need to be addressed, but you haven't hit that nerve that people have, and you can get quite into the psychology of this, about the fear and shame that it can engender in someone. We're really trying to be evidence partners; if we're all in this together to ultimately achieve better policy, there's value in really thinking about how your findings land, particularly when they're challenging.
And I think the stronger the relationships are between the evidence producer and the end user, the more likelihood you have of being able to say, look, there's some bad stuff in here, but I reckon you've got ways to come at it, and all of a sudden it feels like you're partners in improving something.
Cale Hubble: That's really interesting. Susan, anything you want to say in reaction to that?
Susan Calvert: I think Eleanor answered that beautifully. I guess in BETA we're lucky that we get to test options before they're rolled out, so it's a little bit easier to say this one works and this one doesn't, unless it was someone's pet idea that didn't work, and then it can be challenging. But look, I think it certainly requires an open mind to be evaluated, and to treat it as a gift that helps you go forward knowing what works and what doesn't. So yeah, that's it from me. Thanks, Cale.
Cale Hubble: Joanna, did you want to add anything to that?
Joanna Abhayaratna: From an OIA perspective, I think that evidence is important for having the opportunity to explore different options and ways of addressing a problem. And I think it very much relates to whether those findings match what people are probably already experiencing and feeling on the ground. I suppose that's an interesting challenge that hopefully keeps us all in a job when AI takes off.
Cale Hubble: Fantastic. Maybe we've got time for one more question. Teams like ours always talk about trying to get in the room early. Eleanor, I know that's something I've heard you talk about as well. How do you do that in practice? Have you got any tips?
Eleanor Williams: I think this is something we don't talk about nearly enough: how relational the whole operation of government is, in terms of how it genuinely operates. And so, if you're going to be in the tent, I think you do have to be a constructive partner who really tries to get in the shoes of the end user and participate. And I do feel, we've been working on an analogy too, that we're a bit bad for sometimes playing tennis when the other side's not ready: lobbing over a bit of evidence that the end user hasn't necessarily requested, or isn't quite ready for, and then it doesn't land well. And similarly, I think we get requests for evidence that come out of the blue.
And sometimes they don't make sense, and as evidence generators we have this tendency to go, well, that's a stupid question to ask, rather than trying to understand where they're coming from, what they're trying to understand here, and then going back and forth in a bit more of a constructive way. So, I do think the more we can prove ourselves to be very valuable, helpful, empathetic partners, the more chance we have of getting in the right rooms.
That's an interesting space, right?
Cale Hubble: Might go around again. Susan, anything?
Susan Calvert: I completely agree with that. It's about working in partnership with the policy problem solvers and we sometimes will see an emerging policy issue and do a bit of research and then sit down with our shadow policy teams and work through where some of the challenges are from their perspective and then do a little bit more thinking about how we might be able to be helpful early in the process, but I really think you're right, Eleanor. It's about taking a partnership approach.
Joanna Abhayaratna: I have to say the same. We always focus on those relationships, creating a contact for a department, a face to the name to call. We're happy to workshop things and work with you, and I think that kind of partnership approach is key.
Cale Hubble: And to build on that, someone else has asked how someone would start a discussion with your teams, to understand what kinds of questions they need to come to you with and what they need to bring along to figure out if you're the right people to help them at a certain point in their process. Any rapid-fire, quick comments on that one, around what people should bring when they approach you? We'll do the same order. Eleanor?
Eleanor Williams: We've got an easy email address, evaluation@treasury.gov.au, and you'll get to us. We've also got heaps of materials on our website. We're thinking about making an AI chatbot, but we're just trying to work out whether that's good, bad or different. We really encourage people to reach out. If you're working within a department or agency that has an in-house evaluation unit, reach out to them first, because they're likely to understand your content better, and they're also likely to be the ones who will help you broker a solution in the end. But ACE can always be contacted if you've got a general question or you don't quite know where to turn; we're happy to be the starting point, using evaluation@treasury.gov.au.
Susan Calvert: Similarly, BETA@pmc.gov.au. We really want to help; you don't have to do too much before you come and talk to us. Come and ask us some questions. That's our job. We're here to help.
Joanna Abhayaratna: We also have a help number; give us a call. We talk to people every day about policies coming forward, so we're really happy to answer the small questions, and the big questions too. It can be quite daunting starting a policy process, and the OIA is happy to work through it with you. We also talk quite a bit with the other impact framework holders, so we're happy to refer people on and help them through that process. So yeah, happy for people to pick up the phone and just give us a call.
Cale Hubble: Amazing. I think someone's put the email addresses in a response to one of the questions. All right, so thank you so much, Susan, Eleanor and Joanna; really appreciate your time this afternoon. I'll let you go, and I'll actually let everyone go for a 10-minute break. But please come back after 10 minutes; we've got a few presentations from some European-based colleagues, which should be excellent. So, see you all very soon.
Behavioural science has long been at the forefront of the evidence-informed policy movement. The Australian Public Service (APS) has developed a cluster of entities in central departments tasked with ensuring the Australian Government’s policy decisions are made with access to high-quality, relevant evidence. Together these entities maintain strategic oversight of the structures, systems and processes for evidence generation and consideration within the APS. This panel will feature the heads of the Behavioural Economics Team of the Australian Government (BETA), the Australian Centre for Evaluation (ACE) and the Office of Impact Analysis (OIA). The panel will discuss the APS’s evidence architecture, how different entities collaborate to embed behavioural science and evidence at different parts of the policy cycle, how the APS’s structures and processes of evidence brokerage and generation have changed over time, and where there are still opportunities for improvement.
Presenters
Cale Hubble
BETA, Department of the Prime Minister and Cabinet
Eleanor Williams
Australian Centre for Evaluation, The Treasury
Joanna Abhayaratna
Office of Impact Analysis, Department of the Prime Minister and Cabinet
Susan Calvert
Managing Director, BETA, Department of the Prime Minister and Cabinet