Sludge and complex consumer choices

Saving customers time, money and effort by simplifying processes and choices

- Good morning everybody, and welcome to this session of BI Connect 2022. This is the second of three sessions in the virtual series, which explores leading work across the BI industry. Today's session focuses on the New South Wales Behavioural Insights Unit work on eliminating sludge from government services and BETA's work on comparison websites supporting consumers to make difficult choices about superannuation and the retail energy market. I would like to begin by acknowledging the traditional custodians of the land on which we meet today and pay my respect to their elders past and present. And I extend that respect to Aboriginal and Torres Strait Islander people here today. So my name's Rox Armstrong. I'm one of the senior advisors in BETA. My team looks after a suite of projects focusing on social policy issues, and I'm really pleased to be here today, because the capability program also sits within my team. So, I have been involved in a couple of these sessions and it's really nice to be hosting one here today. So for those of you less familiar with BETA, we're the Australian government's first central agency unit applying BI to public policy. We operate in the Department of the Prime Minister and Cabinet, and we improve the lives of Australians by generating and applying evidence from the behavioural and social sciences to find solutions to complex policy problems. Part of our mission involves building capability, and this event is among a range of initiatives we conduct to share knowledge. If you're feeling inspired by today's talks, please visit our website and you'll find various tools that can help you learn more about BI in a project of your own. We also have all of our published reports on our website available as well. So we have five speakers today, three in one presentation and two in the other. And we'll have plenty of time for questions at the end of each presentation, and you can submit them via the Q&A tab. So as a practice run, and I think there are some little questions up there, if you wouldn't mind going to the Q&A tab, we've put up a couple of statements that might describe you. Please navigate to that now and hit the one that best describes you. And I'll just give you a moment to do that. There's quite a few coming in from somewhere in Australia, which is not surprising. We've got just one from somewhere else in the world. A lot obviously in the public sector. Fantastic. Just one from the private sector so far. No one from not-for-profit and plenty of BI practitioners. That's great. That's really great. Thank you. And that just gives you a little familiarity with how to use the Q&A, which we'll be using a little bit later. Great. So now let me introduce you to our speakers. We are joined today by the New South Wales Behavioural Insights Unit. Dave Trudinger is the director of the New South Wales BI Unit, and we also have Eva Koromilas and Alex Galassi who are leading the BI Sludge Program of work. Their presentation is called ‘Sludge reduction: improving the customer experience and BI capability’. So, welcome all the speakers and we're looking forward to your presentation. So over to you.

 

- Thanks so much to you Rox and to BETA for having us today. It's really great to be here. I'm joining you from the beautiful lands of the Dharug people today, and I want to also acknowledge the traditional owners and elders past and present of that land that I'm joining from, and also all the lands that you're joining from. Today, Alex and I are going to talk to you about the work that the New South Wales Behavioural Insights Unit is doing to reduce sludge. It's been a journey that we've been on for a few years now, and we're really excited to be able to share some of the insights and lessons that we've been learning along the way. So, the presentation is going to be broken into four sections. In the first section, I'm going to talk to you about the context of customer experience in New South Wales Government and where behavioural insights fits into that. In the second section, I know many of you are familiar with the concept of sludge. I'm going to introduce you to it and I'm going to talk a bit about how we define it and the types of sludge that we're concerned about in our work. In the third section, Alex is going to talk to you about how we've operationalized sludge reduction in New South Wales Government. She's going to take you through our sludge audit method and some of the tools that we've created to assist with that. And in the last section, Alex is going to reflect on some of the lessons that we've learned as we've been trying to scale some of this work through building capability in New South Wales Government. Okay, so before we dive into some of the actual sludge reduction work, I want to set the scene with a bit about how the BIU's work fits into the wider context of New South Wales Government. So we're lucky in New South Wales to work in a government that really does care about customer experience. Our department was formed in 2019 and to my knowledge, it's the first department of its kind in the world. Since we were established, a lot of the focus has been on creating an effortless government. And in doing that, the government brought together a lot of the customer-centric teams in government. One of the focuses for our department has been on things like digitizing services. We have, for example, a Service New South Wales app where you can transact with government. We're also in the process at the moment of combining all our websites into one mega website through our One CX program. We've also done a bit of intervening in markets as well. So, we have what's called the FuelCheck app, where you can check fuel prices in your region. And we also have some government-owned price comparison type tools, like for green slips. And whilst we're doing all that, we're continually evaluating how we're going with our State of the Customer Report and regular surveys, so that we're constantly listening to the customer and reacting to the issues that the customers are facing. We have a number of guiding principles that we're trying to embed in our government services. I'm not going to go through all these, but I just wanted to flag them, as they have informed some of the development of the tools that we have created for sludge reduction. And what we try and do in our work and in the broader New South Wales Government is to measure our progress against meeting these. So where does BI fit in with all of this? 
We were one of the teams that, as I said, were brought into the department when it was created and at this time, sludge was actually emerging as a concept in BI. So we had, you know, Richard Thaler come up with the term. He used to talk about it a lot on Twitter, used to like to, you know, point out any sludge that he came across in his own life. And then after that, we started to have academics taking it very seriously, and Cass Sunstein started to publish papers about sludge. And so in this context, it provided a ripe opportunity for our team to look at, okay, how can we use behavioural insights to address the specific challenge of improving customer experience? Where are the opportunities? And where does BI fit in for us? So we saw this real value in using BI to complement traditional approaches to improving CX, but at the same time, it was a way to transform BI, bring it back to its roots, thinking about how people are humans with real customer needs, and we need to design for that. So, we developed a method which we piloted, and we've done a lot of sludge audits since then. We've also conducted sludge-a-thons that we do every year now. And it's really just become a core stream of our work. Okay, so with that backstory aside, let's dive a little bit into what we mean by sludge as a theoretical concept and some of the multiple forms that we see it taking. So many of you are familiar with the term sludge. I know that it's basically a play on the word nudge, and you know, while in our work we're nudging people, trying to help them go in the direction that they want to go, sludge is the opposite of that. It's all the stuff that's holding people back. And I think in BI, traditionally, the majority of our time is spent on the nudging. But we need to also think about those countervailing forces. And I think sludge presents a new set of opportunities for us. As I mentioned earlier, Cass Sunstein has helped define the concept of sludge and also nudge, of course. And he describes it as the excessive or unjustified frictions that make it difficult for consumers, employees, or really anyone to get what they want or do as they wish. But what I'm going to do is I'm going to break it down a little bit further. So when Richard Thaler first started talking about sludge, this was the kind of sludge that he was often referring to. It's that sort of sludge, that deliberate type of sludge that we often see in the private sector, and I think we've all experienced it. A really common example that gets thrown around is, you know, subscription models. We know how easy it is to subscribe to anything. You can do it in one click, but the minute you want to unsubscribe from something, they make it really hard. Like recently, I was trying to unsubscribe from a newspaper and you know, of course I couldn't do it online. I had to ring a phone number, I had to do it during business hours. When I did ring, you know, I was interrogated with all these questions. So, you know, that was certainly deliberate, right? Nothing stopping them from just having a little, you know, button that I could just press on the website to unsubscribe. Clearly all those extra steps were intended to make it harder for me, and that goes against my best interests, which they themselves know. Another way that sludge manifests, which was covered in the great presentation that was done last week by the ACCC, is in dark patterns. 
And this is, you know, the particularly sort of sneaky use of sludge online where actually, interfaces are deliberately designed to trick us. And this is where they might trick us into buying something we weren't intending to buy, or sharing information that we weren't intending to share as well. The sludge that we see in government tends not to be as deliberate, right? So, you know, on the whole, in government, our interests do align with our customers. We want customers to get what they want, we want them to get it quickly, we want them to have a seamless experience. So for us in the work that we do, we always start with the customer need. It's about what is the customer wanting to get out of this process, not government. And that's obviously a principle that's relevant for nudging as well, but nonetheless, there is still sludge that exists even in this context. And we can all recognize this type of sludge. This is sort of the more typical kind of sludge that we talk about in customer experience. It's the convoluted websites, it's the forms that are really painful to complete, the wait times, you know, we make decisions really hard for people. All that stuff is that unnecessary and unjustified friction. Even though it's not deliberate, it still shouldn't be there. But sludge still isn't limited to that either. So, we also see sludge as being about the psychological barriers that make it hard for people to engage with us, or even the psychological consequences of the way that services are designed and the impact that that has on our customers. I've got a few examples on the slide, but one example is embarrassment. When you're having to disclose sensitive information, or if there are processes that are associated with stigma, like accepting welfare payments, that can lead to embarrassment for some customers and can be a real barrier. We know that a lot of people won't access welfare payments that they're entitled to because of this stigma. So, when we talk about sludge reduction, we talk about meeting the customer where they're at. So, if the customer has those kind of psychological barriers that are stopping them from engaging with government, that's our responsibility to overcome those and to meet them where they're at. And that's a big part of the work that we do in sludge reduction. Last point I wanted to make is that not all friction is sludge. In fact, some friction is helpful and can actually help us make better decisions. Got some examples here. We know in BI, sometimes it's actually a good idea to try and move people into a system 2 mindset to help them reflect on their decision more and make a decision that is more aligned with what they want. And an example of that, which we know we all experience every day, is when you get, for example, pop-up messages on your computer. You try and close a file or shut down your computer and you get a prompt asking, do you want to save this file? So that's obviously an extra step, but it's a step that we want to have, because we know that it helps to prevent human error. So, that's a bit about sludge. What I'm going to do now is hand over to Alex who's going to talk about how we've actually applied this in practice in New South Wales Government.

 

- Thanks Eva. So yes, I'm going to talk to you about what we're doing about it. So, how are we taking all of this theory and operationalizing it for a New South Wales Government context? So what we've done is we have created a sludge audit method, which is used to identify, quantify, and ultimately eliminate sludge, which is, of course, the ultimate goal. Now the sludge audit isn't a concept that we've come up with ourselves. In fact, Cass Sunstein makes an appearance once again. He conceived of this concept of sludge audits and advocated for them as a way to protect consumers. So he says that we should conduct sludge audits to catalogue the costs of sludge and decide when and how to reduce it. And what we've done is we've taken this concept and we've run with it and we've developed our own kind of version of this, which is tailored to a New South Wales Government context. And what you can see here is Cass at our first sludge-a-thon. So we can happily say that Cass endorses our sludge audit method. Now going into this a little bit deeper, the way that we've done this is we've drawn from multiple metrics to measure sludge and its reduction. So by doing this, we've designed this really standardized way to identify sludge, looking at time, cost, and experience. And we've done this by drawing from the New South Wales Government customer commitments, which Eva showed us earlier, as well as a range of different disciplines. So, looking at behavioural science, transaction costs, and a whole bunch of different areas. And so that's been the kind of basis of our sludge audit method, and the components of it are the following metrics. We look at the time to the customer and the time to government, the cost to customer and to government, and the effort to the customer, so how effortful interacting with government is at each step, as well, of course, as inclusion. And going back to that point that Eva made earlier, we don't just see sludge as a cost in terms of efficiency, it's also something that can lead to exclusion of some customers over others. And the great thing about it is that once you've kind of completed a sludge audit and made improvements, you can then measure the impact with a sludge audit again, as well as any other process-specific metrics that you might have. So, if for example, you were to have customer satisfaction metrics, you'd be able to look at the before and after ratings of those. And the same goes with any behavioural data that you might have. Now I'm going to show you what the sludge audit method looks like in a little bit more detail. So there are five key steps. Now we first start with a behavioural journey map, which is a step-by-step journey map of what the customer actually needs to do to interact with a government service. And this differs from a traditional journey map in that we actually are mapping every step that a customer might need to take, including the mistakes that they make. We then gather inputs, so any data that we might have, which helps us understand what that looks like for the customer in terms of the time that they're spending on the process, as well as what the customer experience looks like. We then use those inputs to inform the two key steps of the sludge audit itself, which are first, the time and cost audits, and then the customer experience audit. 
After that's done, we reflect on the results of those two key components of the audit to conduct our behavioural analysis, which is essentially looking at what are the drivers of sludge? What are the drivers of that poor customer experience? And what should we prioritize when we're developing solutions? Once we've developed those solutions, we implement them and conduct a follow-up sludge audit to measure our impact. Now one of the key components of the sludge audit method is the customer experience audit. And we do this through our sludge scales, which set out the criteria for measuring a good customer experience. Now the majority of our sludge scales are interaction-based. So you can see a couple of those in navy here. And you can see that we have a range of different government interactions and a range of different types of behaviours that a customer might need to undertake when they interact with a particular service. And these are standard criteria for measuring the quality of those interactions. In addition to those, we have access and equity checks, which are in the lighter blue. And these are a set of criteria which we can use to kind of check whether there are any steps in a process which might lead to the exclusion of some customers over others. All of these sludge scales have been informed by, you know, the research that we've conducted as well as the New South Wales Government customer commitments and CX principles. Now I'm going to show you in a little bit more detail what one of these sludge scales looks like. So, this is an example sludge scale for accessing forms. So, for example, if a user were completing a sludge audit and one step of the process involves the customer looking at a form, they would access the sludge scale for forms and use this one to five rating rubric to assign a value to what that particular customer interaction or experience looks like at that point. And you can see that with that one to five rating, you're given a kind of objective way of measuring something that is generally quite difficult to measure objectively. So we've used our sludge audit method quite a lot in the past couple of years, and this is a case study from when we first started it, during our first iteration, and it's a case study of a home building license application. So, this was back in 2020 and what we did was we wanted to look at the process of tradies applying for a home builder's license. And what we found from the initial audit was that it took an average of 26 hours for someone to apply for this license. Now during that 26 hours, you know, they could have been working, and also that 26 hours is spread over a number of weeks or a number of months. So, that's a lot of time lost. We also found that 25% of applicants needed a follow-up. So that meant that when they were submitting their application, there were mistakes in it. So, they needed a follow-up from Fair Trading to ask for clarification. Now when we delved into the results of the sludge audit itself, and you can see in the middle of the screen there is an actual screenshot of the output of that result, we found that customers were spending on average 10 hours just collating the documents that they needed for their application, and then keeping in mind that 1/4 of those needed follow-up. So, that's 10 hours spent doing something that, you know, there were mistakes in. 
So what we identified as a key driver of this time was that there were kind of unclear instructions to tradies about what kind of documents they needed for their application and how to compile their application. So what we did is we applied behavioural insights to the instructions and to the letters that were sent to tradies, and reduced follow-up inquiries by 32%. And what that means in practice is approximately 3,000 more builders receiving their licenses quicker, which means they can work faster. It also translates to a saving to government because we don't need to follow up with people who've had an unclear application. So I hope that gives you a little bit of a taster of how a sludge audit can lead to some really great outcomes. Now since that pilot, we have completed 40 sludge audits. So, we've done a lot in that time and we currently have 10 underway. We've also had over 500 public servants in New South Wales learn about our sludge audit method, which is fantastic. And we now have 80 of those sludge scales. So those navy sludge scales that I showed you earlier, we have 80 of those. And I've just got a snapshot here of a couple of the sort of impact stories that we've had with those 40 sludge audits. So we've had far-reaching impacts, from health, with seven times more patients taking up a virtual care application, to education, with more students applying to education programs and a 30% increase in First Nations students applying. We've shaved at least an hour off various different types of applications, and seven minutes off customers signing into Service New South Wales. And considering the fact that there are over a million customers signing into Service New South Wales, you can understand the collective impact of that. And also, a reduction in the time for toll transactions as well. Now as well as this really great customer impact, you know, one way that we've been able to do this is through our sludge-a-thons. So these are events that we've held two of so far and they're a way to build capability not only in the sludge audit method, but also in behavioural insights, and also a really great way to do this at scale. So, they're kind of based off hackathons and we bring a whole lot of teams together to firstly learn how to complete a sludge audit, which is depicted in that purple circle to the bottom left-hand side of the diagram. Once they've learned how to do a sludge audit, they then go away and actually apply what they've learned to a real life problem that their team is working on. So they audit a process that they want to improve. Once that process has been audited, they come to a two-day solution building workshop where they learn about behavioural insights and develop a prototype that's informed by behavioural insights. And they walk away from those two days with an action plan and a plan to implement that solution, which they do with our support afterwards. So as I said, we've now had two sludge-a-thons, so once a year for the past two years. We've had 25 teams go through that process. And what's been really great to see is that we've had every cluster in New South Wales Government participate in those. And even better is that we've had a really great impact in terms of capability building. So, an 80% boost in participants being able to understand customer frictions. And also, 70% of participants who participated in the last sludge-a-thon said that they've used behavioural insights on other projects. So that's just in the past six months. 
So it's really been great to see that they've been able to use behavioural insights and embed it into other parts of the work they do, not just this live problem that they were kind of working on. And so what I think that tells us is that there are heaps of lessons we can learn about capability building from the work that we've done on sludge. And I think there are a couple of key ways that we've been able to have this impact. The first is that, in our sludge program and the way that we've approached sludge reduction with teams, we've had them work on a live problem. So, it's been completely relevant to something that they're actually trying to solve for. And we've generated this interest in participating in a sludge-a-thon or in completing a sludge audit from the ground up. So we've approached teams who actually work with these problems on a day-to-day basis and promoted the sludge-a-thon and promoted a sludge audit to gain their expressions of interest. And so it's not just a directive from leadership, it's actually something that they're passionate about solving. And what we found with that, which has been really encouraging to see, is that public servants really do have a deep interest in improving the lives of their customers. And so, with this ground-up kind of approach and their drive and motivation to improve the lives of their customers, we've been able to kind of develop this kind of sludge community of practice, so to speak. So they start with the customer problem as Eva said, and we've also kept it really practical. So, we've given them tools and resources to apply to their live problems. So there's none of this kind of teaching of theory. It's always how can we apply it and have impact now? The second thing that I think has made this really successful is that we aim for a really sludge-free program. So, we can't really be specialists in sludge reduction if we ourselves are sludgy. And that's something that we always check ourselves on as we're going about our sludge program. So, we've done this in a couple of different ways. The first is that we've created a digital sludge audit tool, which is the tool that you can see in the top left-hand corner. It's a live demonstration. Originally, we actually had an Excel spreadsheet which users would complete the sludge audit in, but now we've translated this into a web application which is much easier to use and allows more public servants to be able to complete sludge audits. We've also developed a set of practical guides which include templates that a user can use to guide them through the sludge audit process as well as reduce sludge. So, on the screen, in the bottom left-hand corner, are six of our sludge reduction guides, which are a set of channel-based guides which contain really practical tips to apply behavioural insights to reduce sludge across various different channels. Now in addition to that, we've actually applied BI to our capability building approach. So, through our sludge-a-thons and through the sludge audits that we conduct with teams, we have clinics which are timely, so we're always providing support at the point at which our customers, our New South Wales Government colleagues, need it. We also use commitment and goal setting devices. So for example, with our latest sludge-a-thon, we told participants that four months after the sludge-a-thon, there would be a sludge showcase where they'd be required to share their impact, which kind of kept them focused and working towards their goal. 
In addition, we always tried to imbue our program with a sense of fun and provide encouragement to our participants as well. And we also have dedicated support so we can really tailor our support to their needs. So, now we want to hear from you. Sludge is a problem that, you know, is everywhere. Even if we do want to try and reduce it, even if it's unintentional, we do see it. So we want to hear what you are doing about sludge and we'd love to collaborate for even greater customer impact.
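
To make the audit structure described above a little more concrete, here is a minimal Python sketch of the kind of record a sludge audit collects: one entry per step in the behavioural journey map, with the time, cost, effort and inclusion metrics and a 1-to-5 sludge scale rating, aggregated so a before-and-after comparison is possible. The field names, figures and scoring rule here are illustrative assumptions only; this is not the NSW Behavioural Insights Unit's actual digital sludge audit tool.

```python
# Illustrative sketch only: a simplified data shape for a sludge audit.
# Field names and example figures are assumptions, not the NSW BIU tool.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class JourneyStep:
    """One step in a behavioural journey map, rated against a 1-5 sludge scale."""
    description: str
    customer_minutes: float      # time the customer spends on this step
    government_minutes: float    # staff time spent processing this step
    customer_cost: float         # out-of-pocket cost to the customer (dollars)
    government_cost: float       # processing cost to government (dollars)
    experience_rating: int       # 1 (very sludgy) to 5 (effortless), from the relevant sludge scale
    inclusion_flags: list[str] = field(default_factory=list)  # access and equity issues noted

@dataclass
class SludgeAudit:
    process_name: str
    steps: list[JourneyStep]

    def summary(self) -> dict:
        # Roll the step-level metrics up into the headline audit figures.
        return {
            "total_customer_hours": sum(s.customer_minutes for s in self.steps) / 60,
            "total_government_hours": sum(s.government_minutes for s in self.steps) / 60,
            "total_customer_cost": sum(s.customer_cost for s in self.steps),
            "total_government_cost": sum(s.government_cost for s in self.steps),
            "avg_experience_rating": mean(s.experience_rating for s in self.steps),
            "inclusion_issues": [f for s in self.steps for f in s.inclusion_flags],
        }

# Usage: audit once, make improvements, then audit again and compare summaries.
before = SludgeAudit("Licence application (hypothetical)", [
    JourneyStep("Find and read instructions", 45, 0, 0, 0, 2, ["online only"]),
    JourneyStep("Collate supporting documents", 600, 0, 30, 0, 1),
    JourneyStep("Respond to follow-up query", 60, 20, 0, 15, 2),
])
print(before.summary())
```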

 

- Fantastic, thank you Eva and Alex, that was really interesting, and I'm sorry about the delay there. I'm just getting myself into gear. I just love how it's improving, like, people's interactions with government and their access to government on something that's relatively unseen and you've put this, like, fantastic framework around something that's probably very, very tricky to define, and you can see the work that's gone into it. It's really interesting and really, really great work. And it's lovely to hear the fantastic outcomes from sludge-a-thon as well. And I also really love that you turned it onto yourselves as well and made yourselves unsludgy, which was lovely to hear. So, great work. Fantastic and thank you for that presentation. We've got a little bit of time for questions and I'm just going to have a look, and there's one coming in. And there's one, and I'll just read it out now and I'll just direct it to the team, that someone, whoever feels most comfortable to jump on. So it says, "How did the savings made, for example, the 80,000 per year, compare to the costs of conducting the audit in terms of salary time of those involved?"

 

- Thanks Rox. I'm happy to take that. Great question. So when we represent cost as a metric in the sludge audit, it's not meant to be a return on investment. So, the metric of cost is what is important for the process that we're auditing. But in terms of how it compares to the time that we're spending on it, the cost and time per audit depends on the scale and the scope of the program that we are auditing. It can take anywhere from one to two hours if you've got all the data available, it's really just a matter of plugging it into the tool or up to a few weeks if you need more time to gather the data. So, it really just depends on how complex the process is and what data's already available.

 

- Great, thank you so much, Eva. Another question coming in and I'll just read it out. "It appears that most applications to date have been to external customers, i.e., members of the public. Have there been any notable applications of sludge reduction between departments or government agencies?" That's a good question.

 

- Yeah, that's a really good point as well. Look, we do tend to focus on customer impact 'cause that aligns with the mission of our department. We have done some work though on government processes. For example, we did a review of the recruitment process for our department and we embedded a whole lot of sort of behavioural changes into that. We would be open to any kind of sludge reduction that's going to have an ultimate positive impact on the customer. So, things like inter-department sludge, that's definitely one that's ripe for desludging and definitely an opportunity we would be open to. Particularly in areas where it's a policy area that is shared across different jurisdictions. I think there's a real opportunity there for us to see how we can work together for the benefit of the customer.

 

- Fantastic, fantastic. A little quick one, "Is the digital tool only available to New South Wales Government teams? It seems very valuable."

 

- Yeah, the digital tool is currently only available to New South Wales Government teams. We are in the process of publishing a lot of our tools and resources though at the moment. So, we've got a sort of a webpage that's coming and we've already published some of our sludge guides, sludge busting guides we call them. So, there will be an opportunity to, yeah, definitely see more of and access more of our tools. If you want to access our sludge audit tool in the meantime, feel free to reach out to us at sludge@customerservice.newsouthwales.gov.au and yeah, we're happy to talk to you about it and see how we can share but yeah, there's some tools if you want to go on our website that you can check out now. I think Alex will also post them in the chat.

 

- Great. And I might just do one more. This one touches on the funding, so it's great you get them to commit and do a showcase after four months. Do you find that teams or agencies will say they can't make any changes, because they have a lack of funding? Often the changes require ICT funding. Any thoughts on how that can be addressed?

 

- Yeah, sure, I might hand that over to Alex actually 'cause she works most closely with the subsequent teams.

 

- Yeah, so one thing that we do initially is we make sure, or when we go out to teams to complete a sludge audit, we always, you know, set the expectation that a sludge audit is going to uncover things that need to change. So unless you are able to make those changes then, you know, it may not be worth doing a sludge audit. We also, you know, we have a way to prioritize different types of frictions, and you know, while one friction might be really difficult or really kind of expensive to tackle, we can of course, look at, you know, easier to solve ones. But we have certainly encountered the old like IT issues being a barrier to removing sludge, but there's always something else you can do.

 

- Great. That's very positive. Thanks Alex. And thanks Dave for popping those helpful links in the Q&A as well. We've got one more little quick one. How long did the sludge audit of the recruitment process take?

 

- Yeah, so that one was one of the more intensive ones. So, we probably spent about a couple of weeks doing the audit. We had a five-day recruitment accelerator where we worked on solutions to address the friction in that process. And it wasn't just our team, it was a cross-team effort within our department, and because of that sort of broader investment, we did have more data that we could draw from and more opportunities to design solutions. So yeah, it took probably a couple of weeks to actually do the audit part though.

 

- That's great. Thanks Eva. That's very helpful. That's fantastic, thank you. That brings us to the end of the questions. Were there any closing comments you wanted to make, Eva, before we move on to the next session?

 

- No, all good. Very happy, if anyone has got any questions, for you to reach out to our email address that Dave's just posted. Really keen to hear about the work that you guys are doing or thinking about doing in sludge reduction. It'd be really great to connect with some of the people on the call today. So, thanks for having us.

 

- Lovely, thank you so much. Fantastic. All right, well, we might move on to our next session now. We've got Katie Anderson-Kelly and Laura Bennetts-Kneebone. So these are two of my colleagues in BETA and they're very kindly going to present here today. Their presentation is called ‘Comparison websites supporting consumers to make difficult choices about superannuation and the retail energy market’. So just quickly, Katie's one of our advisors, as I said, she's worn many hats working in roles across the not-for-profit sector and government with a focus on research, design, evaluation, and data communication. Laura is also an advisor in BETA and a data analyst, designing behavioural solutions to policy problems and testing them with randomized control trials. She's worked on projects encouraging small businesses to improve their cybersecurity behaviours, designing a simulated gambling game to test activity statements, and experimenting with energy bills. So, welcome to my colleagues, Katie and Laura, looking forward to your presentation and I'll hand over to you.

 

- Thanks Rox. I'll just share my screen. Okay, great. So thank you very much, looking forward to this today. Coming to you today from the beautiful town of Rubibi, otherwise known as Broome. So, I'd like to acknowledge the traditional owners here, the Yawuru people, and Katie will be speaking from Canberra, from Ngunnawal country. Okay, so I'd like to start by talking about choice overload in general, the concept, the evidence, and the controversy around it. And then I'll dig down into the intersection where government actually gets interested in choice overload. And I'll link this to some recent examples from BETA's work on two government websites, YourSuper and Energy Made Easy, which we worked on last year and earlier this year. And you'll be able to see how comparison websites can be given a nudge to make them more effective for consumers. And then I'm going to hand over to Katie. She's done a desktop review of 15 government comparison websites and really broken down the main design patterns and features in a systematic way. I really wish I'd had this resource when we started our journey looking at YourSuper and Energy Made Easy, which I'll maybe call EME, because this is an awesome tool, which really can help you to think through the main options and understand the consequences of different types of choice architecture. So it's not just the consequences for the look and feel of the page, but the consequences for the consumer journey: whether they ultimately make a choice and whether it is a sound and informed choice. So in the earlier literature, the term choice overload was used when the complexity of the decision problem faced by an individual exceeded the individual's cognitive resources. That's how it was defined. But the more recent research focus has been on a particular subset of choice overload. So situations where the decision complexity is caused by the number of options available. Now, the most frequently cited study that seems to have kicked off the spike in research in this area is often known as That Jam Study. And here is a quick summary of that particular trial. They set up tasting displays in a supermarket of either 6 or 24 jars of jam. The 24 jars attracted more interest, but the 6 jars resulted in substantially more sales, and there have been a significant number of trials of this phenomenon since then looking at a variety of different products and scenarios. So other areas where choice overload's been documented include coffee, pens, gift boxes, choosing between retirement savings options, electronic goods, and vacation resorts and packages. And here is a quick timeline showing some of the key events in the history of choice overload. So, there was initial excitement over That Jam Study in 2000, which was followed by failed replications and a meta review saying there was on average no effect of choice overload on decision-making. That meta review found that some studies showed large effect sizes and some found none. And importantly, there was a six-year period where there was a lot of talk doubting choice overload, and this is a significant period of time and a good reason why people could hold the view that choice overload probably isn't a thing. Now, the second and third meta-analyses did find evidence of a choice overload effect and the research matured and became more nuanced. 
So, one of the things that was found was that there was evidence of an inverted U-shape, essentially showing that too little choice and too much choice are both negative, but there is a sweet spot in the middle where choice is a bit less paralysing. The second meta-analysis found moderator effects and I do want to dip into those in a little more detail. So here are the four moderators from their model that they found have a reliable and significant impact on choice overload. And I'll describe these in my own words. So firstly, the context makes a difference. For example, they found that having a dominant option among the choices made a decision more likely; it's a bit like a default. You could consider a Big Mac as the dominant option on the Maccas menu. Or another example of context, if there's a clearly inferior choice, it's easier to choose than if all the choices seem kind of the same. Now, task factors kind of go to how challenging the task is. So outside the specifics of the product, does the person have much time to decide or are they under time pressure? Do they have to compare lots of attributes like size, price, quality, and brand? Or is there just one, like the flavour of the jam? If people have existing preferences, they found that they'll be less likely to experience choice overload. For example, choosing jam is probably a lot easier if you're familiar with the flavours and know which ones you like, but compare this to ordering from a menu of a foreign cuisine that you've never tried before, that'd be a lot harder. And decision goal is referring to whether you are making a final or an intermediate decision, with final decisions more likely to lead to choice overload. So let's talk about why the government would be interested in any of this. There are many areas where government regulates essential services and competitive markets such as super funds, electricity and gas plans, telecommunications, property insurance, private health insurance, and banking products. And the government has an interest in ensuring that the market is competitive, and consumers aren't misled or taken advantage of, especially in areas of essential services. So compared to choosing jam, choosing an essential service is harder on all four of the moderators that were previously described, plus there are some other issues. So firstly, the size of the choice set: the government doesn't want to reduce the choice set, because that would limit competition. Choice overload also interacts with loss aversion. So, people aren't just choosing between the items in the choice set. They usually have to give up what they have at home, their current plan or service, in order to get the new one. And they might not want to do that if they feel like they'll be worse off. Products may be designed specifically to be difficult to understand. So, marketing may drive poor decisions, because the retailers' and the government's goals may not be aligned. Technical language and complex concepts are really rife in these fields. Investment, energy, insurance, they're very complex technical fields and they each have their own jargon and their own absurdities, different fees, conditions, exceptions, things you have to understand and learn about in order to make an informed choice. Now, to make this a bit more real, I'm going to talk about some of these moderators in the context of a search for an energy plan. 
So, here at BETA, we started a project about 12 months ago with the Australian Energy Regulator, looking at the comparison site, Energy Made Easy. Now, around the same time I moved from Canberra to Broome, and in my Canberra home, a search for an energy retailer revealed 145 plans from 9 retailers. Now each of these plans has different supply charges, usage charges, solar feed-in tariffs and fees; some have membership fees, time-of-use plans, stepped rate plans, and demand charges. Some are for people who also have gas, some for people with a battery; some offer green power, monthly bills, direct debit. I could go on probably for a long time. Now when I arrived in Broome and we found a house here, I thought, "Right, I know all about the energy retail market, now I'm confident I can find us the best plan." I couldn't even find a government comparison website for WA, so I tried Canstar Blue and it found no offers in my area. Now, that's not quite true. In fact, there is actually one. Horizon Power is subsidized by the WA Government to provide electricity to remote areas. So, that's what I have. So I found that moving to a remote area is one solution to choice overload. But going back to the left side of the slide, what solutions are there to the problem of having 145 plans to choose from? Now while we've established that we don't want to reduce the number of energy plans, because that's bad for competition, there are a few ways that a good comparison site can do exactly that, reduce the number of plans. By asking a few pertinent questions upfront, plans that aren't relevant can be excluded. Technical language and complex concepts can be explained and simplified on a good comparison website, sometimes with an infographic explaining something or maybe a checklist showing the switching process. Now if people share their data, the comparison site can use that to personalize the information, such as providing estimates of the quarterly bill under different plans. But should we be nudging consumers to make better choices or should we be neutral? Now the problem is, in choice architecture, there is no neutral. If we don't make some decisions about which information is most important for consumers, then they'll be flooded with information as I've just described. We need to choose which information to place up front, we need to choose how to sort the results, and those choices aren't neutral. Standardizing the presentation of information, it doesn't even seem like a nudge, but it kind of is. So, standardizing information takes some of the power out of marketing. Marketing focuses on the features the retailer is trying to sell, and standardizing the information makes it clearer where a plan falls short and easier to compare it to other plans. I'll give some specific examples from two projects we worked on here at BETA, and both of these were much bigger projects looking at multiple aspects of the website design. Some are going to be from the YourSuper comparison tool, which was published a few months back in our report, and our results were considered in the initial development of the YourSuper comparison tool. And they're also helping to inform the current review of the Your Future, YourSuper laws. And for Energy Made Easy, that's not yet published, but our results are being fed into the decision-making and new designs for the site. And the Australian Energy Regulator still has some further testing to do and they're working on that build. So, the examples shown below are from the prototypes. 
They were built for testing, they're not necessarily reflective of what the final sites might look like. And some of our testing was done using survey experiments. So, we surveyed people and we asked them to select an option, a hypothetical Super product or an energy plan. And this way, we could see whether a particular way of presenting the information led people to make objectively better choices. Now, the first problem we had with Energy Made Easy, or one of them, was how to work out what matters, what to make salient. So we found that retailers can only really compete on brand recognition, pricing, green credentials, and special offers. Everybody gets the same quality of energy. So customers told us in the user testing that they wanted to know about special offers, but we questioned whether this was information that we wanted to really spotlight or make salient. So the problem with discounts is how attractive they are. And we knew from consumer testing that customers really wanted to see special offers displayed prominently. However, on Energy Made Easy, there is an estimated cost on the right-hand side of the list of results and it states that it includes discounts. They're built into the estimate. But we found when we tested it that people were more likely to choose plan two on this slide when it had the special offer flag compared to when it didn't. So, not everyone realized that the estimate includes the discount, so they were effectively double counting it. And this is something to think carefully about when deciding what to feature on the main results list. Which information will have the greatest practical impact for the most consumers? Another thing we tested was remembering to account for the fact that people are usually comparing to a plan that they already have, and they're not just looking at the list of results and picking the cheapest or the best plan, they're comparing to what they have at home. Now for Energy Made Easy, we literally couldn't put their current plan on the page. The site only lists plans that are currently available to new customers and many people's existing plans are not. And energy bills do not display your existing plan in a format that's consistent with Energy Made Easy, and they certainly don't display a total estimate that would be consistent with the algorithm that Energy Made Easy uses. Now the solution we tested was to see whether putting the current retailer's cheapest plan at the top would assist. We thought that it might be helpful for them to see how their retailer compares generally with others in the market. If their retailer is really competitive, they would give them a call and they might find out that the plan would be cheaper than their current plan. And we were thinking that if their retailer is not competitive, then they might consider going straight to a competitor. Now when we trialled it, we found that pinning the current retailer's cheapest plan to the top increased the number of people who said that they would switch plans. It increased the number of people picking the cheap plan that we pinned. It also increased the number of people saying they would switch to a competitor. So, win-win. The really important point here is to think about how real people will compare the info on your site to other products that they already have. Now in terms of the Super, with around 70-something MySuper products to compare, BETA felt sure that people wouldn't review all of them. 
So how could we make it easier for people to choose a high-performing fund? Now, we wondered whether the choice process could be improved by adding a third performance category: top, fair or poor. And we also tested the additional impact of sorting by category. So, some people saw two categories unsorted, some people saw three categories unsorted with a new top performer category, and some people saw three categories sorted so that all the top performers appeared at the top. Now everyone saw the same 10 products, only 10, and they were all imagined products, not real products. Now we looked at the percentage in each group who chose a product with a high net return, and all the information was there in the net return column to enable you to choose a top performing fund. So would a top performance category make any substantive difference? Well, yes it did. The addition of a top performer category made a big difference, 21 percentage points. The proportion picking a top plan jumped from 34% to 55%. But would sorting by performance category make any difference when there are only 10 plans in our test, not 70, and they could all be viewed on the same page? Surprisingly, yes, it did make a further difference and we would expect that difference to be much greater if people were choosing from 70 plans spread over four or more webpages. So now that I've given you a taster, I'm going to hand over to Katie, who'll reveal the inner workings of a comparison website. So, the internal choice architecture laid bare.
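
To make the choice architecture Laura describes a little more concrete, here is a minimal Python sketch of the mechanism that was tested: assigning each product a performance category and sorting the results so that top performers appear first. The category names echo the talk, but the thresholds, fund names and returns are invented for illustration; they are not the YourSuper rules or the actual products used in BETA's experiment.

```python
# Illustrative sketch only: category thresholds and product data are invented.
CATEGORY_ORDER = {"Top performer": 0, "Fair": 1, "Poor": 2}

def categorise(net_return: float, top_threshold: float = 8.0, fair_threshold: float = 6.0) -> str:
    # Hypothetical thresholds on annual net return (%); the real tool's criteria differ.
    if net_return >= top_threshold:
        return "Top performer"
    if net_return >= fair_threshold:
        return "Fair"
    return "Poor"

def build_results(products: list[dict], sort_by_category: bool) -> list[dict]:
    """Attach a performance category to each product and optionally sort by it."""
    results = [{**p, "category": categorise(p["net_return"])} for p in products]
    if sort_by_category:
        # Top performers float to the top; ties broken by highest net return.
        results.sort(key=lambda p: (CATEGORY_ORDER[p["category"]], -p["net_return"]))
    return results

products = [
    {"name": "Fund A", "net_return": 7.1},
    {"name": "Fund B", "net_return": 8.6},
    {"name": "Fund C", "net_return": 5.2},
]
for p in build_results(products, sort_by_category=True):
    print(f'{p["name"]}: {p["category"]} ({p["net_return"]}% net return)')
```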

 

- Thanks Laura. As Laura mentioned, BETA's worked on a couple of comparison websites and tools over the years. As it's likely we'll do similar work in future, we've decided to do a desktop review of comparison tools to help us build our understanding and knowledge base. We've taken to affectionately referring to this resource as the comparison comparison. We reviewed 11 government comparison websites and 14 commercial websites across a range of sectors, but with a particular focus on insurance, superannuation, and utilities. Our review was focused on identifying common design patterns and notable features. We were looking for anything that was particularly novel or helpful for the user. We didn't do any accessibility or usability testing of the tools or evaluation of whether features are successful at achieving their presumed goals. The output of this review is quite a detailed deck about comparison tools. I'll be talking through parts of this deck at quite a high level, but we're hoping that it can also be a general resource that people can access to get more detailed information as needed. During our initial broad search for government comparison websites, we identified that sites and tools tended to have two key functions. They were either focused on helping people to find suitable products or services, or on helping people to compare between a selection of products and services. Most of the tools located in our initial search provided a combination of these functions, but some were quite specialized. We excluded tools from our review that were only focused on allowing people to identify products or services, but did not enable direct comparison. And we included tools that were just focused on comparison of known products and tools that provided a combination of finding and comparison functionality. 2/3 of the comparison websites we looked at fell into this last category. I'll start by talking through some of the common design patterns we observed in our desktop review. The first few that I'll talk about are search funnels, which is a way of describing the steps that the user takes between entering the tool and reaching their end goal. We observed what we're calling a long search funnel, generally being used where there is a really large underlying data set of possible products. An example of this is privatehealth.gov.au, which covers around 3,500 health insurance products. The user completes a detailed questionnaire about their health insurance needs and then they are shown a much smaller set of products, only around 30 to 200 products that fit their criteria. A long search funnel generally starts the process with a personalization or customization process that is used to tailor the search results before they become visible to the user. Viewing tailored results in the first instance rather than seeing the full list, can help to reduce information overload. It avoids showing the user products that aren't suitable for their needs. The more information gathered during the customization process, the more tailored the results can be. The trade-off here is that the process can be quite time-consuming and require the user to enter detailed information which can create additional barriers. Short search funnels, on the other hand, require minimal information, so do not introduce the same sorts of frictions, but have other pros and cons. We identified three types of short funnel, which are used in different contexts. 
In a short find and compare funnel, users have initial visibility of all possible products. This is effective for tools with small underlying data sets, but users may still experience some choice or information overload, with the number of products often still way over their choice threshold. A short known comparison funnel is used where the user already knows the products that are of interest and is using the tool for direct comparison of products or a particular aspect of products. For example, the Green Vehicle Guide compare vehicles tool allows the user to select up to three vehicles and compare their emissions and fuel consumption. These tools can highlight important information that may be minimized by retailers and can support decision-making confidence by supplementing other research activities. Short contextualized comparison funnels are the last type of funnel design pattern we'll talk about. These are used where there is a very limited number of products to compare. In these funnels, the user provides some information about their circumstances or preferences, but there's no search result listing. The tool goes straight to a comparison view of all the available products. As you can imagine, this pattern is only appropriate in quite specific contexts. For example, the New South Wales Green Slip Price Check is a tool that allows users to easily compare prices between the six available compulsory third party insurers. In this situation, there's limited value in providing search or shortlisting steps, and it reduces friction for the user to take them straight into the comparison view. The funnel patterns I've talked about describe how users move through tools, and the last two types of design patterns I'll talk about are related to the way in which content is displayed within those tools. The first pattern is a stacked results list, which we saw in 90% of the tools we looked at that had search results listings. Initial search results are presented as a vertically stacked list with products displayed with metadata about their key attributes. Typically, there were around five or six key attributes displayed for each product, although some sites displayed up to 12. The vertical listings support product-based information acquisition by encouraging the viewer to focus on the attributes within a product and assess whether the product is suitable for their needs. This is consistent with the purpose of these pages, which is to allow users to create a short list of products for detailed comparison. There are a few things to consider here. Information and choice overload is a major factor if there are a large number of results or product attributes, and it's likely that users won't consider the full list of products. As Laura mentioned, default ordering can have a strong influence on the products that users consider, and the available filtering and sorting options will also influence which attributes users focus on. We also saw a tendency within commercial products to pin promoted products to the top of the list, over-emphasizing their relevance to the user. Side-by-side views are used in the next step of the process when the user is making the detailed comparison of products. There's typically more information included in this view than in the search results view. We found most tools included between 8 and 16 attributes. Shortlisted products are presented as a table or row of cards with the attributes of each product displayed as a column of information. 
The side-by-side layout supports attribute-based information acquisition by encouraging users to compare specific attributes across products. The next things I'll talk through are the notable features we identified in our review. These were features we thought were particularly novel or helpful for the user. I'm mindful of time, so I've only selected my top 10 features to discuss and I will move through these quite quickly. But again, more than happy to take questions about any particular feature at the end of the presentation. Personalization refers to using data about the user, collected for other purposes, as an input to the tool. This might be fully automated, such as the YourSuper comparison tool giving people the option to prefill certain details using their myGov account, or it can be a more manual process where the user uploads a copy of a utility bill or manually enters some key information. Personalization can make it much easier for a tool to be contextualized to a user's circumstances. Depending on the personalization process, though, it can introduce additional friction for the user, and it's important to make sure the value add is worth the effort. As the Consumer Data Right rolls out across more industries, we're likely to see more and more government comparison tools employing some sort of personalization to make it easier for people to compare products. Customization is quite similar, but we're using this term to refer to situations where users are answering a series of questions about their circumstances rather than inputting existing data. Quite a few comparison websites use a long search funnel based on a questionnaire to gather detailed information and then tailor the search results appropriately. This can reduce information overload by removing unsuitable products before the user sees them, and the questionnaire can also be used as a way of guiding people through unfamiliar industry terminology. Completing a customization questionnaire can be quite a time-consuming process, though, and it can be daunting or frustrating for the user. Contextualization refers to displaying product attributes with values that are based on the user's circumstances. This is generally used for financial attributes where there's no fixed price. For example, energy comparison tools contextualize the estimated annual cost of a utility based on the information the user has provided about their energy use. This can help people to make a more informed decision based on what they're likely to have to pay, and it can also reduce the cognitive load of needing to convert an average cost into a context-specific cost. It's important to consider how to make it clear to the user that they're seeing a price that's been influenced by the data that they've provided, as it can reduce user trust if they don't understand how the price is calculated or don't believe that it's accurate. Some tools also included a feature that allowed users to directly compare their current product with those on offer in the market. This was most likely to be used in contexts where most users would be expected to already have a product and the aim of the comparison tool is to encourage switching behaviour. People who get overwhelmed when making a decision often abandon the decision-making process and just stick with what they've got. Making it easy for people to compare potential products with their current product actively encourages them to consider the cost of remaining with the status quo.
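As a rough sketch of the contextualization and current-product comparison ideas just described, the snippet below estimates an annual cost from a user's own usage and compares it with their current plan. The two-part tariff is a simplification assumed for illustration; real energy comparison tools use more detailed pricing rules.

```typescript
// Illustrative sketch of contextualization: turning a plan's generic rates into an
// estimated annual cost based on the user's own usage, and comparing that with the
// user's current plan. The tariff structure here is a simplifying assumption.

interface EnergyPlan {
  name: string;
  dailySupplyCharge: number; // dollars per day
  usageRate: number;         // dollars per kWh
}

interface UserContext {
  annualUsageKwh: number;    // e.g. read from a recent bill the user provides
}

function estimatedAnnualCost(plan: EnergyPlan, user: UserContext): number {
  return plan.dailySupplyCharge * 365 + plan.usageRate * user.annualUsageKwh;
}

// Showing the gap between each alternative and the user's current plan makes the
// cost of sticking with the status quo explicit.
function savingsVersusCurrent(current: EnergyPlan, alternatives: EnergyPlan[], user: UserContext) {
  const currentCost = estimatedAnnualCost(current, user);
  return alternatives.map((plan) => ({
    plan: plan.name,
    estimatedAnnualCost: estimatedAnnualCost(plan, user),
    estimatedAnnualSaving: currentCost - estimatedAnnualCost(plan, user),
  }));
}
```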
Filtering and ordering are very important aspects of the choice architecture of any price comparison tool. Providing multiple filtering and ordering options allows the user to prioritize the attributes that matter most to them. Price isn't the only thing that people care about. In our review, we found that all sites that allowed filtering also allowed filtering by non-price attributes, and most sites also allowed ordering by non-price attributes. Users are likely to get frustrated if they can't filter or order products based on an attribute that is important to them. Defaults are also very important to consider, as the default filter and order will influence which options people see first and which options they consider first. Through some of our research, we've heard some concern about governments influencing markets through default options in the tools we provide. It's important to consider that no design choice is neutral and all choices will have an impact. For instance, randomizing initial results may seem fairer to the market, but it is likely to make it harder for the consumer to make the best choice for their circumstances. Almost all of the comparison sites we looked at also had a shortlisting process, where people could select a limited number of products from the initial search and then look at them in more detail in the comparison view. Some sites provided a multi-stage shortlist that allowed the user to create an initial long list and then shortlist further from there. This allows the user to break up the task by doing a broad and then narrow selection of products to compare. It also allows the user to adjust their selected products without returning to the full search results, which is another mechanism for reducing information overload and search fatigue. In-context help provides help text or additional information to the user at the point where it's most relevant to their current task. It avoids diverting people to a different page to get the information they need and makes it easier for them to stay on task. Users are most likely to be receptive to information if it's provided as they need it, and it's important that these prompts are easy to understand. They can also be a really helpful way of guiding people through industry-specific terminology and of helping them to understand why questions are being asked and how their answers will be used. Around half, I'd say, of the tools that we reviewed provided a way for users to save or export their search results for later reference. This was often through being able to email the results to themselves, print them, save them as a PDF or download them in another file format. Providing a way for people to revisit their results supports switching behaviour by making it easier for people to break this quite complex task down into smaller steps. People can generate their comparison, come back to it later to look at it in more detail, and then take the next step towards actioning a change without having to redo the entire search process. On a related note, one feature I'd particularly like to draw some attention to is the use of customized print style sheets for comparison views. A print style sheet is used to define content layout and styling when a webpage is sent to a printer or saved as a PDF through the browser print options. Particularly for comparison views that use tables, default print styling will often misprint content, cutting off large sections of tables and making it difficult to read the resulting document.
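For anyone curious what a print style sheet involves, here is a minimal sketch of one wired up from script. In practice these rules would normally live in a static CSS file; the selectors and rules below are illustrative assumptions, not taken from any of the reviewed tools.

```typescript
// Illustrative sketch of adding a print-specific style sheet from script, so that a
// comparison table prints in full instead of being cut off at the page edge.
// Selectors and rules are assumptions for illustration only.

const printStyles = `
  @media print {
    /* Hide navigation and other screen-only chrome. */
    nav, .filters, .site-footer { display: none; }

    /* Let the comparison table use the full page width and wrap long cell content,
       rather than relying on horizontal scrolling that a printed page can't provide. */
    .comparison-table { width: 100%; table-layout: fixed; }
    .comparison-table th, .comparison-table td { word-wrap: break-word; }
  }
`;

// Inject the print rules into the page head.
const styleEl = document.createElement("style");
styleEl.textContent = printStyles;
document.head.appendChild(styleEl);
```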
Around half of the tools we reviewed did not have a save or export function. Of those tools, none appeared to have a print style sheet configured properly, and three-quarters had print versions that were missing information due to cut-off content. The examples shown here are tools with customized print style sheets, which have been configured to ensure that the full content is displayed to the user. A custom print style sheet is generally a low-cost development option that maximizes the utility and reusability of content and provides an easy mechanism for the user to save their comparison results. The last feature I'll talk about is the adapted mobile experience. This means that content is adapted to display effectively on a mobile device rather than a desktop computer. In particular, we observed that comparison tools that laid information out in a table often required a horizontal swipe action on mobile that users may not notice or may have difficulty with. Some sites drew attention to this with an alert, while others chose to lay their content out differently on mobile devices. The prevalence of users accessing a tool from a mobile device will be highly context specific. FuelCheck New South Wales is a comparison tool that's optimized for the mobile experience, as it's highly likely that most people would be comparing fuel prices while they're out and about. While other comparison tools may be more likely to be viewed on desktop, we're increasingly seeing people do different parts of tasks on different devices, and it's important to research and test what will best meet the needs of a specific tool's users. That brings me to the end of the design patterns and features I wanted to talk about today. BETA's comparison comparison is a living document that we'll continue to build on over time. We've already been able to feed our learnings from this desktop review into some of our current projects, and we'll also be using it to inform future work in this space. As I mentioned, we're keen to make this resource available to the BI community, so please just reach out if you'd like a copy. As we come to the end of the session, I'd like to bring it back to thinking about how government can support consumers to make complex choices. People don't make choices in a vacuum. There's always a range of factors influencing the choices they make. One of those factors is the choice architecture they're presented with. Choice architecture really matters, and it can strongly influence people's decision-making. The markets government tends to get involved with, where we might look at providing a service such as a comparison tool, are complex markets where people are making very complex choices. These choices can have a huge impact on either their day-to-day life, such as in their choice of an energy provider, or on their longer-term future, such as in their choice of a superannuation fund. Making a poor choice can seriously financially disadvantage people. Laura started the session talking about choice overload. We know that choice overload is much more complicated than just the number of choices on offer. It's also about the tasks the person is trying to complete, their goals and preferences, the context of the choices, the information provided, and how that information is presented. I talked through some of the design elements of comparison tools and the benefits and considerations for employing particular design patterns and features.
One thing that's really important to consider is that no tool is neutral, no design is neutral. The design patterns and features a tool employs affect the choice architecture, which in turn affects the choices that people make. The examples Laura shared about Energy Made Easy and the YourSuper comparison tool highlight the importance of doing research and testing to ensure that the tools we design are supporting people to make the decisions that best fit their circumstances. As policymakers, designers, or behavioural insights practitioners, we need to be very mindful that the choice architecture we provide will always have an influence on the choices that people make. Laura and I are really happy to take any questions you have, so I'll pass back to Rox now for the Q&A session.

 

- Fantastic, thank you, Katie. Thank you, Laura. That's a really impressive bank of work in terms of choice architecture, and it's really interesting to see, to use a word, the journey that it's been on over the years. Really impressive and really interesting, and thank you so much for taking us through that. I think it was worth going through your top 10, Katie, you did very well. We're even a bit ahead of time, so you've done especially well. I might stay with you, Katie. There are a couple of questions around the comparison comparison, so I might shoot one through to you. And the person's asking, "Does comparison design have a greater impact on people experiencing stressful events like rising energy prices, or does it not matter as much when it's all expensive?"

 

- That's something we haven't looked at specifically, like, we haven't tested necessarily a person's context, so we would need to do more research I think to confirm that, but you would certainly imagine that, you know, decision-making when you're under stress is already more difficult and your cognitive sort of abilities are really affected when you're under high levels of stress. So, it's certainly a consideration worth keeping in mind.

 

- Yeah, if I might add to that, certainly when we did user testing, you know, people talked a lot about frustration when it doesn't work or if you can't get a result or if something affects your ability to even get results, and a lot of people are using those energy comparison websites at the moment. With so many retailers withdrawing from the market, they have to find something, and they are in a scramble to find something else. Even now, when everything's expensive, there is still a substantial difference between the cheapest plan and the most expensive. I did a quick search yesterday to put that image in my slide and the cheapest plan that I could find for my size family was around $2,100 and the dearest was $4,900. So, that's still a big difference between cheaper and more expensive plans.

 

- Terrific, thank you. There's another question as well and it's got a few segments to it, but I'll read out the whole thing. It's about providing your personal email. "So, has there been any research into the sludge, common on comparison tools, of asking for your email to be able to see the comparison results, i.e., after the search funnel questions? Are people usually willing to supply their email or do people abandon the search results? Not sure if this is relevant to government comparison tools."

 

- Certainly the government comparison tools we reviewed didn't require a user to provide their email address to get the results. Like they had the option in some of the tools to provide it to get a copy of the results or have it saved, but it wasn't a wall in the same way it is for a lot of commercial comparison tools where they do, you know, you get to the end of the long and arduous process of clicking all the things, and telling them all about yourself, and then they tell you that you just have to give them all of your personally identifying information before you can see the results of that task. So we didn't review any commercial tools that required that level of sort of information to be provided. Like we noted them in our initial search but didn't review them in detail. So, off the top of my head, we don't have any research answering that specific question, but there might be some out there. And Laura, was this something that you guys came across at all?

 

- We came across it in the qualitative user testing we did for Energy Made Easy, so it was more along the lines of people reporting how frustrating it was to be asked for their email address or their phone number and then being bombarded with, like, marketing emails and phone calls. So only in that sense. When they were testing the tools, the government tools, which many of them didn't know about because they're not as widely known, they were delighted when that wasn't a feature.

 

- Okay, thank you. And I might stay with you, Laura. I think this is one for you. It's a good question. "How does trust or distrust in energy companies impact people's behaviours? Sort of like better the devil I know than the one I don't."

 

- Yeah, I think it definitely does. So, some people, when we did the user testing, would talk about using the comparison site to find what the best deals were, but then when they told us about what they would do with that information they said, "Oh, I'd still rather stick with my energy retailer and I like their customer service" or "I think they're pretty good." And so they said they would often call the retailer and ask them to match the price or ask them to put them on a better plan. So, some people did have a preference for that. Other people not so much, and we've also come across this a little bit in terms of when people receive calls from their retailers suggesting they go on other plans or cheaper plans. There is that distrust there: why are they trying to offer me a discount? Why are they trying to put me on a cheaper plan? And so one of the things we've done with some of the recommendations we made around energy bills, and introducing better offer messages where a retailer has to tell you if there's a cheaper offer, is to say that it's a government requirement, because that makes people trust it a little bit more.

 

- Fantastic, thank you. That appears to be the end of the formal questions in the chat. Were there any closing remarks, Katie or Laura, before I go to Eva for any closing remarks while we wrap up? All good, fantastic. Well, thank you so much. That was really impressive and very comprehensive. Eva, any last-minute remarks from you before we close?

 

- Oh great. It was really great to see the work on all the price comparison research, a really valuable resource for us to have in the BI community. So, thank you to Laura and Katie for presenting on that today, it was awesome.

 

- Great, thank you so much Eva, and thank you Alex and Dave as well today. All our speakers were fantastic, amazing, very informative. The session will be available afterwards, and I can see there are some more questions and answers in the chat as well. And thank you everyone for joining us online. We had a really good number of participants today, and just a reminder that the next BI Connect session will be next Tuesday, the 13th of December, in the afternoon, and that one will be all BETA. So, we're doing some thinking around BI upstream in the policy cycle, so really topical, and we're hoping to get a really engaging discussion in that session. So, please do join us for that if you can. Otherwise I think we can bring it to a close. We are a little bit early, so that's nice. We can go to lunch early. Thank you everybody for joining us and we'll leave it there. Thank you.

Presenters

Alex Galassi

Senior Behavioural Advisor in the NSW Behavioural Insights Unit

Alex Galassi is a Manager in the NSW Behavioural Insights Unit. She has driven projects to improve customer outcomes and create accessible services through the application of behavioural science. Alex has worked closely with agencies from across NSW Government in a range of policy areas to quantify, identify and reduce unnecessary frictions for customers. She holds qualifications in Rehabilitation Counselling and Psychology, and is passionate about shaping and delivering equitable services.

Eva Koromilas

Manager in the NSW Behavioural Insights Unit (BIU)

Eva is a Manager in the NSW Behavioural Insights Unit (BIU). She is qualified in behavioural economics and has extensive experience in applying behavioural insights to improve customer outcomes across multiple jurisdictions. Eva has led field experiments and program redesign in service and policy areas including Covid-safe behaviours, domestic violence, cyberbullying, drought resilience and regulatory compliance.

Katie Anderson-Kelly

Adviser in the Behavioural Economics Team of the Australian Government

Katie is an adviser in the Behavioural Economics Team of the Australian Government. Katie has worn many hats working in roles across the not-for-profit sector and government, with a focus on research, design, evaluation, and data communication. Katie holds a Bachelor of Visual Arts (Digital Media), a Graduate Diploma of Applied Data Analytics from the Australian National University (ANU), and a Master of Human Service (Childhood Studies) from Griffith University.

Laura Bennetts-Kneebone

BETA

Laura has worked at BETA since 2019. She is a project lead and data analyst, designing behavioural solutions to policy problems and testing them with randomised controlled trials (RCTs). She has worked on projects encouraging small businesses to improve their cyber security behaviours, designed a simulated gambling game to test activity statements, and experimented with energy bills. Prior to joining BETA, Laura spent ten years at the Department of Social Services, working on Australia’s national longitudinal studies.