Dark patterns and greenwashing

Exposing and addressing deceptive corporate tactics

- Good morning everyone. Welcome to BI Connect. This is the first of three sessions in our BI Connect series for 2022, and we're excited to be exploring leading work across the behavioural insights industry. Today our session focuses on the ACCC's work on exposing and addressing dark patterns, and BIT's work on supporting consumers to identify misinformation in the form of greenwashing. I'll just kick off today with an acknowledgement of country. This is Ngunnawal country. Today we are all meeting together on Ngunnawal country. We acknowledge and pay our respects to the elders.
Hi everyone, I'm Andrea Willis. I'm one of the senior advisors here at BETA, and I'm really excited to be here today hosting our first BI Connect session for 2022. For those of you who aren't familiar with the work that we do at BETA, we are the Australian government's first central agency unit applying behavioural insights to public policy, and we sit within the Department of the Prime Minister and Cabinet. Our key mission is to improve the lives of Australians by generating and applying evidence from the behavioural and social sciences to find solutions to complex policy problems. A really key part of our mission also involves building capability, and this event is one of many initiatives that we organise to share knowledge and build interest in behavioural insights across the APS, and even more broadly. Today we have over 500 people registered for the BI Connect series, so it'll be interesting to see just how big the intention-action gap is. We're hoping it's pretty small today, but it would be great to get a sense of who's on the call with us. So Ruisi is going to pop some questions into the Q and A chat, and if the participants online today could like the statements that match their situation, that would be really great to give us a sense of who else is on the call. So I'll give everyone a couple of minutes to figure that out.
Great, so we can see that there's, surprisingly, lots of people in Australia, but a few people from outside of Australia. I hope your time zones are compassionate. Lots of people from the public sector, a few from the private sector, a couple from non-profits, and it looks like we've got a bit of a mix of BI practitioners and people who are enthusiastic or curious about BI. Welcome everyone. Okay, so today we have three speakers across two separate presentations, and we'll split our time pretty evenly between those two presentations. There'll be time for questions at the end, and we'll use that same Q and A chat box, so feel free to pop in any questions as they come to mind and we'll address them at the end in the Q and A section. Our first speaker today is Janine Bialecki from the ACCC. Janine is a principal economist at the ACCC, and prior to her role there, she spent eight years at Treasury, and two, I'm going to say wonderful, years here at BETA. Janine holds a Master of Science in Economics, specialising in behavioural economics and game theory. She's passionate about promoting the role of women in economics, and has spent three years as a board member of the Women in Economics Network, Victoria Branch, including as Chair. Welcome Janine. We're really looking forward to your presentation. Over to you.

 

- Thanks Andrea. And thanks to BETA for hosting this wonderful seminar series. I'm really delighted to be here representing the ACCC to talk about exposing and addressing deceptive corporate practices. Before I get started, I also wanted to pay my respects to the traditional custodians of the land where I'm coming from, which is the Bunurong people. I actually live about 200 metres from the border of Bunurong and Wurundjeri country, so I'd like to pay my respects to elders past, present and emerging. Now I will share my slides. So when BETA first asked for contributions to this seminar series, they couched it in terms of supporting effective markets, and so that's the frame that I'm bringing today from the ACCC perspective. I do have training as an economist, and I find it helpful to think about how an economist might think of supporting effective markets: a market might need support where there is market failure. And so traditionally we think about supporting effective markets when there are things like externalities. If there are positive externalities, we want to encourage that thing; if there are negative externalities, perhaps we want to make sure the price is fully reflective. Where there are public goods, we might want government to intervene to create more of that good. Where there are information asymmetries, and this is very traditionally where the ACCC gets involved, we don't want consumers to be misled or deceived, and so if there's information missing in a market, we might intervene to give more information. And when there's-

 

- Janine, sorry. It's Andrea here. We can't see your slides yet, so I just wanted to let you know they haven't come through.

 

- Oh. Okay, thanks Andrea. That's probably a big problem. How about now?

 

- I think we've got them. Thank you.

 

- I'm so sorry about that. That's actually the first time that's happened to me in a presentation, which, given we've been working in this sort of environment for about three years now, I'm shocked it has taken me so long to make such an error. So look, hopefully you were able to follow on without the picture, but this was the picture that accompanied my spiel about when we traditionally get involved in supporting markets. So these are the areas where we traditionally think government might have a role to support markets. But of course there are some underlying assumptions to this. In addition to thinking about where there might be market failure, the assumption implicit in what makes a market work well is that on the firm side, so on the supply side, vigorous competition should give firms an incentive to deliver what consumers want. But of course for that to work we need consumers to play their part too. And so this virtuous cycle occurs when consumers are well informed, confident, rational, and effective. It's when consumers can fulfil that rational, effective and confident role that vigorous competition will be activated and markets will work well. So we can already start to see that the assumptions and the demands that we put on consumers, as the demand side of the market, are pretty significant. And so if we have information that consumers as humans are maybe not able to meet the brief, at least in some markets, that should twig something in us, and make us think about whether consumers will be able to fulfil their role. And if not, perhaps what can we do about it? So the context for what I want to talk about today is this concept called dark patterns. Now depending on where you go you might get a different definition, but the definition I'm going to go with is that a dark pattern, otherwise known as a deceptive design practice, is a user interface, so we're talking about online interactions, that's been carefully crafted to trick consumers or users into doing certain things, displaying certain behaviours or making certain choices. So that's the context for all of the case studies I'm going to talk about today. And when we think about exposing and addressing deceptive corporate practices, dark patterns is an area where I think the scale and the impact have been increasing at an increasing rate. So let's think about the demand side of the market that I talked about before, the consumer perspective. Most of us are at least a little bit enthusiastic about behavioural insights or behavioural economics, and so we know consumers in many instances aren't quite the hyper-rational, confident, effective people that we want or hope for in the market. But it's worth digging a little deeper and thinking about the online context, because if we're thinking about dark patterns, we are thinking about online. So we do have some empirical evidence about the way consumers make judgments and decisions online. And what we have seen is that online, compared to offline, consumers pay less attention. They process information less well. They default to simple rules of thumb. They interact with interfaces in a task-focused way, leading them to routinely ignore certain types of information. And they underestimate manipulation and deception in online as opposed to offline contexts, and hence adopt an illusion of safety.
So this is really important context about what the reality of consumer decision-making online looks like. It is this very task-focused, really tunnel-vision way of doing things, without paying a lot of attention and without processing all the information. Now given we know that about the demand side, what are consumers up against? Well, they're up against firms who operate very, very differently. And I want to talk about two examples of the extent to which firms can gather data on consumers and use that data to steer consumers in particular directions. The first example I want to talk about is choosing a shade of blue. This is quite a well-known example that comes from Google. Almost 10 years ago now, Google launched ads on Gmail, and when they did that, just by accident, they found out that some links on one aspect of Google were a particular shade of blue, and some links on a different aspect of Google were a slightly different shade of blue. And someone in the exec suite noticed this and said, you know, we should have a consistent shade of blue across Google; which shade of blue do we choose? And so Google ran over 40 experiments where they showed different shades of blue to 1% of their user cohort. So 1% of their user cohort saw blue number one, 1% saw blue number two, and so on for 40 different shades of blue. And they ran these as A/B tests, and they ultimately landed on a particular shade of blue that was slightly more purple, as opposed to the slightly more green shade of blue. And they estimate that the increase in the click-through rate from the right shade of blue led to 200 million dollars a year in extra ad revenue, because the attractiveness of that shade of blue led to increased click-through, which led to increased ad revenue. So that's the extent of the data and experimentation that firms can engage in. The other example that I wanted to talk about, and you may well be aware of this one too, is the Facebook mood manipulation experiment. There was quite a bit of outrage when this one came to public knowledge. Facebook experimented on 700,000 of its users without their knowledge. What they did was they manipulated the newsfeed to show relatively more positive or relatively more negative updates from friends and contacts, and they wanted to see if that impacted the users themselves. So if I see relatively more negative updates, am I more likely to post a negative status? And they found that that was indeed the case. Relatively more negative emotional priming led to relatively more negative status updates, and vice versa. And like I said, there was quite a bit of outrage over this. I saw LinkedIn had done something similar recently in the field of strong ties versus weak ties. And really the point of this is not to say whether it was right or wrong, or ethical or unethical. The point that I want to make is just the extent to which these firms have information about you, and can use that information to steer you in the direction that befits their purpose. As for the prevalence of this, I've just talked about two experiments, but what we see in practice is that pretty much everyone's doing it, and they are doing it on a grand scale. So this is one paper, a pretty recent paper, that looked at apps from the Google Play Store, and it looked at the extent to which they used dark patterns on the users of those apps.
And so they looked at the consumer experience with these apps in the first 10 minutes of usage. This particular study found that each app used an average of seven dark pattern practices to steer consumers. So it's not just that you are facing the optimal shade of blue; it's also that you're being nagged, that you're having things snuck into your basket, that there's hidden information, and that there are disguised ads. Each app on average had seven dark patterns just within the first 10 minutes. And this could potentially cause a lot of consumer harm, both direct consumer harm and indirect consumer harm, and I think it's really important to recognise both potential harms. The direct consumer harm is the more obvious things. Like if I am in a subscription trap, then I lose money every month for the subscription that I don't actually want. There's the time and effort of trying to wade through these dark patterns. There's the cognitive load and frustration. There's the privacy intrusion and the associated risks with that privacy intrusion. And I think with the Optus and Medibank hacks, we are now seeing the fallout from that direct consumer harm. Very importantly from the ACCC perspective, there's also indirect consumer harm. Not only are consumers directly affected by these dark patterns, but dark patterns can also erode consumers' decision-making ability. They can make it more enticing for consumers to stay with an incumbent, and they make it difficult for consumers to switch or to find a better offer. And so that affects competition as well as consumers.
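To give a concrete sense of the mechanics behind experimentation at the scale of the shade-of-blue example, here is a minimal sketch of the statistics underlying that kind of A/B test. All of the click and impression counts below are invented for illustration; they are not Google's figures.

```python
# A minimal sketch of the statistics behind an A/B test of two link colours.
# All counts are invented for illustration; these are not Google's figures.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(clicks_a, views_a, clicks_b, views_b):
    """Two-sided z-test for a difference in click-through rates."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

# Hypothetical 1% user buckets, one per shade, with slightly different CTRs.
ctr_a, ctr_b, z, p = two_proportion_z_test(
    clicks_a=10_450, views_a=1_000_000,  # slightly more purple shade
    clicks_b=10_000, views_b=1_000_000,  # slightly more green shade
)
print(f"CTR A = {ctr_a:.3%}, CTR B = {ctr_b:.3%}, z = {z:.2f}, p = {p:.4f}")
```

At that kind of traffic, a difference of a few hundredths of a percentage point in click-through rate is statistically detectable, which is why the right shade of blue can plausibly be worth a large amount of ad revenue.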

I've done a very long intro about the context of this problem, so I'm finally getting to what the ACCC's role in all of this is. We enforce the Competition and Consumer Act, and our goal is to make markets work for consumers now and into the future. And it's really important that we are thinking about consumers both from a consumer protection and from a competition perspective. So we want properly functioning markets, we want to protect competition, we want to improve consumer welfare, and we want to stop conduct that's anti-competitive or harmful to consumers. And so exposing and addressing deceptive corporate practices, including dark patterns, is really important to us. Now, as I've just alluded to with that case study about the apps, a lot of firms are engaging in these deceptive corporate practices, these dark patterns. We can't take enforcement action against every single firm in every single context, so we have to make decisions about where the biggest bang for buck is for us as a regulator. Some of the things that guide our thinking about when to take enforcement action are when there is high consumer harm, and when the conduct is particularly deliberate, although that's not a deal breaker; we can take enforcement action even if a deceptive practice is arguably not deliberate. And very importantly, we tend to take enforcement action for deterrence, both general and specific deterrence. What I mean by that is: specific deterrence is basically punishing the firm who engaged in the conduct for doing the wrong thing. General deterrence is incredibly important as a signal to other firms who might otherwise be engaging in this conduct, or thinking about engaging in it. By taking action and going through the litigation process, we hope to deter other firms from engaging in similar things. So now I'm going to move on to the real meat on the bones, which is talking about some case studies, some enforcement cases that the ACCC has taken to try to address these deceptive corporate practices where firms are using dark patterns. The first case comes all the way back from 2016. Virgin and Jetstar were offering travel insurance to consumers who purchased tickets, and the travel insurance was opted in automatically. So you can see this is what it looked like; I think this was a Jetstar screenshot. You can see the box is pre-ticked, and the consumer can untick it and select, no, I don't want insurance. But that pre-selection is an example of a dark pattern, and it did affect consumer behaviour. And importantly, this type of add-on insurance is typically poor value. Should a consumer want travel insurance, often they can find a much better deal through a different provider, rather than the add-on insurance offered by the airline. So the ACCC was concerned that this opt-out model meant that some consumers inadvertently bought insurance that they didn't actually want or need. This one did not go to litigation. Through negotiation and discussion, Virgin and Jetstar agreed to stop engaging in this conduct. The second case that I have for you is Virgin and Jetstar, yet again. This was a case of drip pricing, so I don't have a great screenshot to show you. But what drip pricing basically is, is where you've got a headline price, this $39, and it's only much later in the booking process that it's disclosed that there's an unavoidable fee.
In this case it was a fee that was charged on bookings using most credit cards, and I think it was in the order of about $8. So the ACCC took this to court, and the Federal Court found that Virgin and Jetstar had both engaged in misleading and deceptive conduct by not disclosing that fee early enough in the process. And really importantly for that general deterrence point, on the back of this litigation, Ticketek and Ticketmaster both agreed to improve their online disclosure practices by not engaging in drip pricing to such a great extent. This is really interesting to see, because if you go to a jurisdiction like the US, I think you see a lot more drip pricing conduct, because they haven't had the general deterrence that we've had in Australia. So strong enforcement action from the regulator meant that not only did these firms, in this case Virgin and Jetstar, get a court judgment and penalties, but other firms who were engaging in similar conduct also stopped, because they were deterred by the ACCC's enforcement action. The third case that I want to talk about is ticket seller viagogo. These snips unfortunately I think are from a UK version, but what I want to draw your attention to is these, I might call them FOMO claims or scarcity claims. So: two tickets left. And these social proof claims: 14,000 other people are looking at Celine Dion tickets; less than 2% of tickets remaining; other people want to buy these tickets. These social norm and scarcity claims were made by viagogo in Australia. Importantly, in terms of the evidence, the scarcity claims were true of the viagogo website, not of the supply of tickets generally. So when it says last two tickets left, what we needed to show is that the consumer would be misled into thinking there were two tickets left in total, not two tickets left on the viagogo website, because that's what viagogo was basing these statements on: how many tickets were left on the viagogo website. So this is another example of a dark pattern practice, using these scarcity and social norm claims. In this case the court found that viagogo's use of phrases such as, only a few tickets left, was deceptive, because it wasn't about the overall availability of tickets. The judge found that the phrases had the effect of drawing consumers into the marketing web and the transactional web, and that consumers were lured by repeated assurances that the only tickets available at the venue were going fast. So those were the first three cases, which I went through in a little bit of detail. Now I want to move on to two other cases, which were also dark pattern cases, but where the ACCC made specific use of behavioural economics evidence in court. This is an area we're going further and further into, to help judges understand the way that consumers interact with these markets, the way that they make judgments, and the way that they can be deceived. The first case in that regard that I want to talk about is trivago. This case wrapped up just this year with a $44.7 million penalty judgment. A lot of you will probably remember these ads. This is where this lady, who's probably very familiar to you, assured you that trivago would find your ideal hotel for the best price. And it started as a typical misleading and deceptive conduct case, which is the ACCC's bread and butter. Did trivago find you your ideal hotel for the best price?
But as the team dug into the investigation, the dark patterns and the consumer deception that were going on were actually much more complex than they appeared at first blush. And so the investigation team really needed to dig into things like the ranking of search results, the display of the website, the design practices of the webpage that consumers see, and default options. Typically, what a consumer saw when they went on the trivago website was something like this, and this is probably familiar to a lot of you who use these hotel search websites. If you look at this, there are a lot of design practices going on here, a lot of nudges designed to make consumers land on a particular option. You can see that ranking immediately is something that affects decision-making. The colours used, and the size and placement of things: this big green view deal button versus this red strike-through price. And then there's the fact that that strike-through price is there at all, the fact that we've got this one which is the top deal, and then we've got other options here, and then in fact there are more options, but I have to click to access those more options. As I said, there are a lot of design practices going on here. In court, to help the judge understand and determine how a consumer would interact and engage with this website, and therefore when they might be misled, we asked a behavioural economics expert witness to provide evidence about how the top position offer would affect the purchasing decisions of consumers, how the strike-through price would be likely to affect consumer behaviour, and how the top deal icon, and on some pages a percentage savings box, would affect consumer behaviour as well. And so in presenting this behavioural economics evidence to the judge, we were hoping to help the judge understand how Joe Consumer might engage with and understand this website. Not only was there behavioural economics evidence, but this was also a really nice example of how behavioural economics and data science can come together and be quite complementary. As I mentioned, there was a ranking not just of the hotels listed, but, for a given hotel, of the offers for that hotel. And that ranking system of course is based on some kind of algorithm. Through our compulsory information-gathering powers the ACCC obtained a copy of that algorithm, and we got a data science expert to present evidence in court about how that algorithm worked and what effect it had on the consumer decision-making process. Through that algorithmic evidence, the ACCC was able to show that what really determined this top position offer, a very heavily weighted aspect of it, was the cost-per-click payment. So how much that hotel was willing to pay trivago was a very determining factor in how highly they were ranked in the search results. Through getting data evidence, we were also able to see that 93% of clicks went to that top position offer, and it was not the cheapest offer for a given hotel 66% of the time. So this was quite important for the ACCC's case to be able to show. That algorithmic evidence was reinforced by the behavioural economics evidence, which looked at how information and time constraints, as well as biases, led to that consumer behaviour.
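For intuition only, here is a hypothetical sketch of how a cost-per-click-weighted ranking can push a dearer offer into the top position. The fields, weights and offers are entirely invented; this is not trivago's actual algorithm, just an illustration of the mechanism described in the evidence.

```python
# Hypothetical sketch of a cost-per-click-weighted ranking, loosely inspired
# by the evidence described in the trivago case. The fields, weights and
# offers below are invented; this is not trivago's actual algorithm.
from dataclasses import dataclass

@dataclass
class Offer:
    site: str
    nightly_price: float  # what the consumer pays
    cpc_bid: float        # what the advertiser pays per click

def ranking_score(offer: Offer, w_price: float = 1.0, w_cpc: float = 60.0) -> float:
    # Cheaper offers score higher, but a heavily weighted CPC term can
    # let a more expensive offer outbid its way to the top position.
    return -w_price * offer.nightly_price + w_cpc * offer.cpc_bid

offers = [
    Offer("SiteA", nightly_price=199.0, cpc_bid=2.50),
    Offer("SiteB", nightly_price=179.0, cpc_bid=1.20),  # cheapest for the hotel
    Offer("SiteC", nightly_price=189.0, cpc_bid=1.80),
]

for offer in sorted(offers, key=ranking_score, reverse=True):
    print(offer.site, offer.nightly_price, round(ranking_score(offer), 1))
# SiteA tops the list despite being $20 dearer: -199 + 150 = -49
# beats SiteB's -179 + 72 = -107.
```

With a big enough weight on the CPC term, the top slot systematically goes to the highest bidder rather than the cheapest offer, which is consistent with the pattern described: most clicks going to a top position that often wasn't the best price.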
So we presented in court evidence that most people who face a lot of options satisfice: they pick something that's good enough. We presented in court evidence about colour psychology, so that green and red, and we presented in court evidence about that strike-through price being a decoy, which made the actual price look relatively better in comparison. Importantly, for setting some precedent, the judge was persuaded by both some of the algorithmic evidence and the behavioural economics evidence. This quote is about the behavioural economics evidence. The judge said that this evidence is potentially of assistance in determining whether consumers have been misled. And really importantly, because, I think like any of us, it's easy for a judge to substitute his or her own experience for that of the general consumer, this judge recognised that it can't be assumed that even if the judge is familiar with these websites, that represents how consumers generally interact with such a website. So that was the ACCC's first instance of using behavioural economics evidence in litigation. We now have this important precedent on the books about the value of that kind of information. The final case study I want to talk about is the Google location case, but first I want to delve into a little bit of background. The ACCC took another case against Google, in 2011. This was the Google shopping case, and the first instance decision did get appealed; the appeal was on a pretty technical legal aspect, I would say. That's not the focus of what I want to talk about today. What I want to talk about is the first instance judgment, where there was a discussion about whether consumers were misled about whether search results in Google shopping were ads, that is, whether they were sponsored. At first instance, neither Google nor the ACCC produced evidence about consumer decision-making, and the judge at first instance determined that Australian consumers could understand the difference between sponsored links and organic search results, and that it wasn't possible for consumers to use a search engine without knowing things like that about how the search engine operates. Now interestingly, I've done a little bit of digging into the state of play in 2011. If you did delve into some research about consumer decision-making, there's a nice paper by some Australian academics, and they found in an experiment at the time that about 70% of people were unaware of the provenance of a Google shopping result; so 70% of people couldn't really tell whether something was sponsored content or an organic search result. And in a US case, some evidence came out that Google had done some internal work, and they found that even sophisticated consumers were sometimes unaware that sponsored links were ads. So this is some background about what was going on at the time, but none of this was presented in the case. Moving on to the actual Google location case, which was only a couple of years ago: this case was about whether Google had misled consumers about the nature and extent of location data that they collected from consumers who used Android mobile phones. So if you had an Android phone back then, I think this is from 2018 or so, when you set that phone up you might have seen a screen that looked like this, where Google discloses to you some privacy terms.
And then after all this text, there's a little bit here that says more options. If you click that more options, you might see a screen that looks like this. Importantly, there's one heading here that says location history, and you can toggle that between on and off. So if it's off: don't save my location history in my Google account. And you can choose to turn it on, if you want to use maps and see where you physically are on Google maps. Also importantly, there's this other setting over here called web and app activity. Now that was switched on by default for Android users, and it says: save my web and app activity to my Google account. And what's not visible, or maybe not very obvious here, is that when location history was off but web and app activity was on, Google was still collecting, saving, storing and using location data from those Android handset users. And so the ACCC alleged that it was misleading and deceptive to have this heading, location history, that consumers could turn off: consumers were misled into thinking that turning it off would turn their location off. Again, the ACCC thought it was useful to present behavioural economics evidence to help the judge understand the consumer experience in interacting with this design interface. And so the types of evidence that were presented in court, and that were accepted in the judgment, were about things like ambiguity aversion: if something is unknown, how far do you investigate? About present bias: in the context of setting up a phone, how does that affect my behaviour and my decisions about how much time I want to spend engaging with these questions? Cognitive cost: our limited mental resources, and what impact that has on the decisions I might make. Status quo bias was an important part of the evidence, so those default settings of location history being off but web and app activity being on. And choice architecture as well: it's not just the consumer, it is the choice architect who has designed this interface, which can affect whether, how much, and how carefully users interact with it. And in this case, the fact that those settings were only visible after you had clicked more options, and then the headings of location history and web and app activity, which might seem like headings that don't mean a lot to a lot of people; that choice architecture was really important in evidence as well. Again, that type of evidence was persuasive to the judge. The judge said that he was impressed and assisted by the witnesses who presented this type of evidence, and that this type of evidence gives the appropriate framework for understanding how users approach the process. And so when we are thinking about deceptive corporate practices, when we're thinking about dark patterns, understanding the consumer experience and giving expert evidence on that consumer experience has now been shown to be persuasive to judges in Australia, and is therefore an important part of this role that we have in exposing and addressing deceptive corporate practices. So that is all that I wanted to talk about. I think I'm about right on time, Andrea, and I think you've set aside 10 minutes for questions, so I'd be really delighted to take any questions.

 

- Thanks Janine. And you are perfectly on time, although I've come to expect nothing less, so thank you. That was a fabulous presentation. I loved some of the case studies, and they were highly relatable, including the fact that I'm wearing blue today, but I can't say that I chose it with the same degree of- There are a couple of questions that have come through into the Q and A box while you've been talking, so I'll launch into those, but I'd also encourage participants online to throw some more questions into the Q and A box while we go through the couple that are here. So one interesting question that popped up fairly early on was: are young people, AKA digital natives, better at dodging online consumer manipulations?

 

- I'm not prepared to make a ruling on that one way or the other; I think you'd need to get some empirical evidence on that. What I would say is that, more in an offline context, we do see things like particular cohorts of consumers who are maybe particularly vulnerable. So that might be elderly people, or people with less education. And we do have separate laws for unconscionable conduct, so if people are at a special disadvantage, there are special consumer protections for them. But that's not the specific question that was asked; the specific question was about young people. Honestly, it's hard to judge. It feels like there are maybe two factors that are going to pull in opposite directions. Young people are more familiar with being online, which perhaps makes them more adept at dodging dark patterns, but they also spend more of their time online, so perhaps they're more impatient about their online interactions, and therefore some things might get past them. So I feel like those two things might pull in opposite directions, but you'd really need some empirical data if you wanted to make a definitive ruling.

 

- Thanks Janine. So we've got a couple of questions around who determines what a dark pattern is, and some of the components of dark patterns. So I'll start off with the first question, which was: who determines what is a dark pattern design? And the person asking the question has noted that it is an ethical question and probably not that easy to answer, but they're interested in your thoughts on that.

 

- Yeah, so I've seen dark patterns defined as deceptive practices designed to steer people to make decisions that they wouldn't have otherwise made. And that inherently requires you to understand what the true preferences of the consumer are, which is perhaps unknown even to the consumer themself. From my perspective and from the ACCC's perspective, our laws are limited: it's when practices are misleading and deceptive, or false and misleading, that we can take enforcement action. And so we have to prove to a court that a practice is misleading or deceptive, and that's the type of dark pattern where we would take action. So ultimately, in that case, it's the judge who decides. The way we enforce our laws is through the legal system.

 

- Great, thank you. So this one's a related question, asking: are strike-through prices dark patterns? Sorry, I'm just reading it live as I'm reading it out. Even if that price is the, oh okay, sorry, let me rephrase. So: are strike-through prices dark patterns, even if the price is the usual price when the hotel room is not on sale? And related to that, if using colours like green and red is considered a dark pattern, is it considered a dark pattern even if the company has not specifically done research and used them intentionally for that purpose?

 

- Okay, so there were two questions there. Is the strike-through price a dark pattern if it was the genuine usual price? The short answer to that is no. The ACCC's had a few cases in this field of decoy pricing, and basically, if a firm wants to use a strike-through price, it has to have sold the good or service at that price relatively recently, for a reasonable period; it's hard to be technical about these legal questions. Basically it has to be an authentic price that a consumer might have otherwise faced, but for the sale. So if a firm is having a genuine sale, and they want to say it used to be 100, now it's 50, and the product is typically sold for 100, it's perfectly legitimate to use that strike-through price. It's only when the firm uses, I guess, a fake strike-through price that the ACCC might get concerned. On the question about colours, let me clarify that I was not trying to imply that the use of colours itself is a dark pattern. The ACCC adduced evidence in court about the impact of colours on consumer decision-making: in our culture, green is generally associated with carrying on and continuing, and red is generally associated with stopping or alert, things like that. So I wasn't meaning to imply that colours in and of themselves represent dark patterns; it's more about the psychology of those colours in the design, in leading consumers to a particular place.

 

- Great, thanks Janine. And we've got a few questions coming through, and we're probably not going to be able to get to them all in two minutes, but there is one question that caught my attention, which is, what about those annoying clickbait articles?

 

- Again, I would say, on the ACCC's role in consumer protection: if we are in an enforcement context, and we do do other work in advocacy and education, but in an enforcement context we can only enforce the law that we have, and that is that firms cannot engage in misleading and deceptive conduct in trade or commerce. And so if there's a clickbait article, but it's not in the context of trade or commerce, and that's defined under the Act and I don't want to get into that, we can only enforce the law as it applies to trade and commerce. So if there's some annoying article, I'm really sorry, I don't know that the ACCC can help you.

 

- That's a great clarification, thanks Janine. So we might wrap it up there. There are a couple more questions in there; perhaps, Janine, if you've got time, you might be able to provide some quick answers to some of those questions in the chat. But thank you so much, fabulous presentation.

 

- Thanks everyone.

 

- Okay, we're now going to move on to our next presentation, with two wonderful speakers from the Behavioural Insights Team. With us today we have Dr. Karen Tindall and Ravi Dutta-Powell. Their presentation today is going to be focused on protecting consumers from greenwashing, using strategies to combat misinformation. Karen is a Principal Advisor at BIT Australia. She holds a PhD in Political Science from the Australian National University in the field of Public Sector Crisis Management. Karen is also an adjunct Associate Professor at the Institute for Governance and Policy Analysis at the University of Canberra. Ravi is a Senior Advisor, also from BIT Australia, specialising in regulation and compliance, consumer finance, education and international development. He's a co-author of the BIT publication Applying Behavioural Insights to Regulated Markets, and prior to joining the team, Ravi had a public sector background, having worked for ASIC. Thanks Karen and Ravi, we're really looking forward to your presentation. Over to you.

 

- Thank you so much. And I'll ask Ravi to share his screen, because I only have one slide. I'd like to begin by acknowledging the Ngunnawal and Ngambri people, who are the traditional owners of the land from which I'm dialling in today. And on behalf of the Behavioural Insights Team Australia, I'd really like to thank BETA for organising this seminar series, and for inviting us to participate. So I'm pleased to be able to talk about an area of research that I'm particularly interested in: misinformation. I'm sure you've all noticed that misinformation is increasingly an area of interest for the BI community and for our partners. So I thought I'd start by talking more generally about strategies that BI practitioners can use to design interventions to combat misinformation, and then I'll hand over to Ravi to deep dive into a trial that he and the team ran recently on greenwashing. This trial's got a lot of attention of late, including a write-up in the New York Times, partly because it shows that even very brief interventions, if they're well designed and informed by the academic evidence, have the potential to combat misinformation. But I won't steal all his thunder. So Ravi, if you flick to the next slide. The academic literature tells us about a range of ways to counteract misinformation. There are three really broad categories: technological approaches, regulating misinformation, and educational or upskilling approaches. But first I should probably define what I'm talking about. I'm using the term misinformation in the really broad sense, basically meaning false information that's disseminated regardless of intent to mislead, and I'm using it as a sort of umbrella term that includes disinformation, so deliberately misleading information, and fake news. So if we look at each of those big broad categories, we've got technological approaches. You'll find these particularly on social media platforms, and they include early detection of malicious accounts or emerging narratives, and tools that use ranking and selection algorithms to reduce how much misinformation is circulating, with varying levels of efficacy. And then there's regulating misinformation. Now this can look a bit like regulation against false advertising and the dark patterns that Janine spoke to us about. But given the breadth and depth of misinformation, these regulatory approaches have also been called a blunt and risky instrument by a European Commission expert group. So not a perfect solution, nor is anything, of course. And then there are the educational and upskilling approaches. These can include correcting misinformation, but, and I'm talking to a bunch of BI practitioners and BI-interested people, we have to be so careful about how we do this. You will have seen that retractions don't eliminate people's reliance on the original misinformation. But the educational and upskilling approaches can include everything from teaching citizens critical thinking skills and techniques, to warning them that misinformation is coming their way. And BI has a role to play in all three of those broad categories. And for each type of misinformation and subject of misinformation, a combination of approaches is going to be needed. But I'm going to zero in on education and upskilling.
So we all know as BI practitioners that when it comes to behaviour change, not all types of information and education are created equal, but there are some really nice informational techniques that we can draw on to mitigate the negative effects of misinformation. A couple of good tools that you may have heard of are debunking and pre-bunking. Debunking misinformation can work well, but like myth-busting, it can really backfire. That's because myth-busting, if designed badly, often gives more air time to the myth and puts the myth front and centre. An extreme example might be saying something like: you may have heard a number of people saying that the COVID-19 vaccine affects women's fertility; that's a myth. The myth is more likely to stick in people's heads than the fact that you say it's not true. But we do know that there are some good rules of thumb when designing comms that debunk misinformation, and it can be done with quite a simple five-step formula. You start with the facts, so you don't reinforce the misinformation as the headline. Then warn people about the myth, but only mention it once. Explain in detail why the misinformation is false or misleading, while balancing it out, and avoiding scientific jargon and complex technical language. Then reinforce the facts and provide the correct information, and direct people to credible sources. And of course, again, talking to BI practitioners: evaluate what works in the context. That's not going to work for all forms of communication, but it is a really nice go-to to start with. Then of course there is pre-bunking. Pre-bunking is basically just a nice word for pre-emptively debunking misinformation. That can be warning people that misinformation is coming their way, and warning them that individuals may expose them to misinformation for personal gain, but also because they're legitimately worried. The next step is then to provide individuals with counter-arguments and strategies that they can use to refute those arguments when they come up, or with the techniques that are used to spread misinformation, so they can see them for what they are when they arrive. There are a couple of really great examples; I've got screenshots up there. One is a game called Go Viral, which was an online game developed by Cambridge academics and UK government officials. It's a five-minute game that teaches players about common strategies used to spread false and misleading information about viruses, to help them resist that information when they encounter it later online. Another great game that's been around for a while, so you may have played it, and it's a lot of fun, is called Bad News. It was developed to expose manipulation and the tactics that are used to spread fake news. You play the game as a purveyor of fake news, an unscrupulous media mogul, and your task is to get as many followers as you can, building up fake credibility, while avoiding telling really obvious lies. And so by teaching those tactics, you can help people see them when they're coming. So you'll see from Ravi's presentation that Ravi and his team designed a much briefer pre-bunking technique. Now I'm not saying it's the same as a five-minute online game, but we as BI practitioners know how hard it is to get uptake of an intervention, particularly a five-minute intervention.
So we should learn from these longer interventions and the academic literature, and test out briefer interventions and strategies in different contexts. And with that, I will hand over to Ravi.

 

- Thanks Karen. All right, so I'm going to walk through a trial that we ran earlier this year on greenwashing. The aim of the trial was to understand: can we measure greenwashing, and can we do anything about it, basically? We ran this as an online trial. We had, in the end, just under two and a half thousand people, so a roughly representative sample of Australians. What we did was get people in, do a brief intro and demographics, and then randomise them into one of three groups: a control group, a literacy intervention, and a pre-bunking intervention. They then saw three ads, with the order of the ads randomised. The first two were greenwashed ads, and the third one was just a generic ad for a company. They were all designed around, in this case, energy companies, which we chose just as a generic option. The first two were greenwashed; the third one was like, we do great things for the economy, we develop jobs, et cetera, et cetera, the standard sort of puffery that you might see in an ad. And then we had some more general affirmations at the end. The two interventions that we used were quite similar in terms of the specific content, but the way that they were delivered was different. The first was a more traditional literacy intervention. People were given a brief description of what greenwashing is, and then shown two common forms of greenwashing: message and core business, and promoting individual responsibility. So they saw this ad, and they were told, look, this is the strategy, and this is how this company is highlighting it. And we did that for both of those two types, message and core business and promoting individual responsibility. The reason we chose those two was because they were some of the most common forms that we could see. The pre-bunking intervention was a little bit more involved. Essentially what it involved was saying, look, imagine you're an ad company trying to implement this strategy; which ad would you choose? We gave them a few options, and basically they had to choose the right one. If they chose the wrong one, we'd say, that's not right, try again. And then once they chose correctly, we gave them a similar spiel to explain exactly what was going on. Just a quick note on how we measured some of the outcomes. After each ad, people responded to the following measures. We asked them about the company's green credentials, with three questions drawn from the literature: this company helps protect the environment, this company is actively reducing its impact, and this company is environmentally friendlier than other competing brands, each on a one to seven scale, which we averaged out. We also looked at the reliability of the advertisement, again one to seven on a scale. We also had another measure that will come up later, which was basically the level of environmental concern. That was five or six questions, again one to seven, agree to disagree, with questions like: I think we need to take more action to address the impacts of climate change, or, we're heading for an environmental catastrophe. So, jumping ahead, you can see the pre-bunking intervention; this is what it looked like. So: if you're planning a marketing campaign for this company, which version would you use?
And again, the middle one is the one that we wanted them to eventually choose. So in terms of the results, a few interesting things came out. First off, greenwashing absolutely does seem to exist and does seem to have an impact. When you look at the greenwashed ads versus the non-greenwashed ads, consumers rate the company's green credentials much higher: around 4.9 on a seven-point scale versus 3.4. And again, just a reminder, the kinds of questions we asked were: this company helps protect the environment, it's actively reducing its impact on climate change, and the company is environmentally friendlier than other competing brands. Actually, I might just jump back one, because I realised I wanted to show you something. If I go back to this one, you can actually see the ads that we used. These were the specific ads that we used in the trial. The first two, you can see, are greenwashed; the third one is just generic. Specifically, the first one says all our offices are green, which is great, but that doesn't address the underlying business of what they do, right? Which is how they produce energy. Similarly, the second one is a greenwashed ad in the sense that it doesn't actually say anything about the company; it's encouraging other people to take action, right? Like, calculate your carbon footprint by using our online calculator. Again, nothing about what the company is actually doing. And finally, the third one is: we create thousands of jobs, very generic sort of stuff. So jumping ahead: greenwashed ads work. The really interesting thing about this, though, given that greenwashing is obviously quite successful and has a really big impact, was that it's most impactful, and most successful, on those who have high levels of environmental concern. So remember I told you we measured people's levels of environmental concern on a one to seven scale at the start. People who are a five or above, you can see there, rated those greenwashed ads much, much higher than people who had low environmental concern. The gap between high and low environmental concern is much larger for those greenwashed ads, and you can see there's a significant difference. So greenwashing works particularly on those who are most concerned. On the one hand that's a little bit surprising, but on the other hand it's not that surprising when you think about it, right? You'd imagine that people who are environmentally concerned want to do the right thing by the environment, and might make environmentally friendly choices, or choices they see as environmentally friendly. And one of the things we know about biases is that in the absence of other information, we will look for anything that gives us some insight into what might be going on, and we fall back on those biases, and it seems like that's what's happening, right? A company talking about environmental things is probably doing environmental things, we think, and therefore those of us who are concerned about the environment are more likely to take that seriously. The bottom chart there just shows you the proportion of people who we rated as high and low concern. Right around 70% of people would say that they have reasonably high concern about the environment.
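As a rough illustration of the trial's analysis logic, here is a toy simulation of a three-arm design like the one described: random assignment, a greenwashed-ad rating that is inflated more for high-concern participants, and an intervention effect that pulls ratings back down. Every number in it is invented; this is not BIT's data or code, just a sketch of the comparison being made.

```python
# Toy simulation of a three-arm greenwashing trial. All effect sizes and
# ratings are invented for illustration; this is not BIT's dataset.
import random
from statistics import mean

random.seed(1)
ARMS = ["control", "literacy", "pre-bunking"]

def simulate_participant():
    arm = random.choice(ARMS)                 # random assignment to an arm
    high_concern = random.random() < 0.7      # ~70% report high concern
    # Greenwashed ads inflate perceived green credentials, more so for
    # high-concern participants (invented effect sizes).
    base = 4.1 + (0.8 if high_concern else 0.0)
    if arm != "control":
        # Interventions pull ratings down, most for high-concern people.
        base -= 1.2 if high_concern else 0.6
    rating = min(7.0, max(1.0, random.gauss(base, 0.8)))  # 1-7 scale
    return arm, high_concern, rating

data = [simulate_participant() for _ in range(2500)]

# Compare mean greenwashed-ad ratings by arm and concern level.
for arm in ARMS:
    for concern in (True, False):
        ratings = [r for a, c, r in data if a == arm and c == concern]
        label = "high" if concern else "low"
        print(f"{arm:>11} / {label:>4} concern: mean rating {mean(ratings):.2f}")
```

Comparing arm means on simulated data like this mirrors the reported pattern: control ratings sit higher, and both the inflation from greenwashing and the corrective effect of the interventions are largest for high-concern participants.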
So it's something that is both impacting those who are concerned about the environment and impacting a large proportion of the population. At the end we also asked about concern about greenwashing practices, and as you can see here, on a one to seven scale, the vast majority of people were quite concerned about greenwashing and thought the companies that were doing it were being intentionally deceptive. So Janine, if you at the ACCC ever need any actual hard evidence as to what consumers think, here it is, right there. But the good news is we can actually do something about it. After our interventions, when we showed people those greenwashed ads, we saw significantly lower average green credentials ratings from consumers. So both the literacy and the pre-bunking intervention led to much lower perceived green credentials for those companies. And while, as I said, the concerning part of the results was that environmentally concerned people are the most likely to be affected by greenwashing, they are also the ones who are most impacted by our intervention. So in terms of the incremental rating, you see a big decrease for those who are high concern, and a relatively smaller decrease, but still a decrease, for those in the lower concern brackets. We also, as you might recall, asked not just about the green credentials of the company, but how reliable people thought those ads were as a marker of the company's green credentials. And again, both of our interventions very significantly reduced the perceived reliability of those ads, which again is really, really positive. So there's an interesting question for us here, which is: in what way is our intervention working? There are a couple of different ways you could conceive of it. One way is that it just makes everyone sceptical across the board, right? You just automatically become more sceptical of any ads whatsoever, and you rate every company lower in terms of its green credentials, because it just trains you to be more wary. The second possibility is that it's a little bit more calibrated, in the sense that it makes you particularly more sceptical of greenwashed ads. Ideally we'd want the second, but we could imagine there's a bit of the first going on. What seems to be happening is some combination of both. People are both just generally a bit more sceptical and wary of ads, and also specifically more sceptical of greenwashed ads. On the side of people being just more sceptical in general, you can look at the ratings of greenwashed and non-greenwashed ads. When we split out by the three ads, you can see there that even for that third ad, people are rating it lower on green credentials. In theory there shouldn't be any difference between control and treatment groups for that third ad, right? There was nothing greenwashy about that ad. It didn't really make any comments about the environmental credentials of that company, but you still see this slight decrease. So there's some element of, I'm just more sceptical overall, but it is particularly pronounced when you look at just those greenwashed ads. The intervention has a much bigger effect on the greenwashed ads than it does on the non-greenwashed ads; people are much, much more sceptical there.
So there's some element of general scepticism, but it is also working specifically on greenwashing, which is an interesting finding for us. The ideal scenario is where other ads are not affected at all, and greenwashed ads are heavily affected, but we seem to be having some spill-over effects, where people are just generally a bit more sceptical, as well as being more able to detect greenwashing. Now, one of the reasons why we're interested in greenwashing, and particularly the kinds that we chose, especially that one about promoting individual responsibility, is that there's a risk that certain types of greenwashing may lead to a situation where there's less support for broader action on climate change. If you are able to shift things such that people think it's all their responsibility, it lessens the impetus on governments or corporations to take action, which is potentially one of the reasons why companies do engage in greenwashing: to reduce the heat on them, essentially. However, when we asked people about that, we found no change in perceived responsibility in terms of whether individuals, private corporations or governments have to mitigate climate change. Across all three arms the responses were pretty consistent, with private companies and government seen as having more responsibility than individuals, and our treatments didn't really seem to shift those sorts of things. So that was an interesting finding for us: the greenwashing doesn't seem to shift people's perceptions of whose responsibility it is to deal with climate change. And indeed, people are pretty strongly in favour of governments taking action to mitigate greenwashing, and they don't think that individuals can or should be responsible for it. Interestingly, we also had some qualitative responses, and a number of people commented that, look, it's really hard for us to tell what is greenwashing, and so we do need governments to step in and take action. It was really, really interesting to see that. So I did want to pause and wrap things up here, and give you a sense of what we're thinking about where this might go next. This is very early stuff. As far as we know, this is one of the largest trials on greenwashing that exists, certainly in terms of finding a way to combat greenwashing, and so we're hoping to publish it and put it out a bit more widely. But there are a lot of questions for us as to where we take it. One question for us is this: we were trying to strip out as many variables as we could, so we chose fake companies. An interesting question is, does this replicate with real-world companies? If you have strong preconceived notions about BHP or Qantas or Commonwealth Bank, do these greenwashing tactics really shift your opinions that much, and does our intervention shift them back that much? How much will those brand effects start to come into play? The second, and probably most important, question is: does this last over a longer period of time? The way this trial worked is that we showed you the intervention and then straight away showed you the greenwashed ads. That's all well and good, but in the real world, that's obviously not what's going to happen. People are going to see the ads long, long after they actually have the intervention.
So there's an interesting piece of work here in providing the intervention to consumers and then coming back one, three, six, even twelve months later to see what impact we actually observe on their ability to discern greenwashing. I imagine the effect will fade at least a little; how much, I have no idea. There's also the next question of how much this impacts consumer decision-making. We saw that greenwashed ads can make people think these companies are greener than they actually are, but how much does that actually change purchasing habits? Sure, you think this company is greener, but will you start to purchase from them when you otherwise wouldn't have? So there are some interesting designs you could consider, like a discrete choice experiment where we actually give people purchasing options, see whether greenwashing shifts some of those choices and by how much, and start to quantify the financial impact of these practices. That's more complicated and further down the track, and indeed you could combine all three of these elements: what happens over the long term, on purchasing decisions, with real-world companies. That's obviously very complicated and something for the future, but it is something we are certainly thinking about as we move forward. So that's all we've got; I'm happy to take any questions.
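On the discrete choice idea, here is a hypothetical sketch of how the financial impact might be quantified: respondents choose between two products that differ in price and in whether they carry a green claim, and a simple logit converts the claim coefficient into an implied price premium. All variable names and numbers are illustrative assumptions, not the trial's actual design.

```python
# Hypothetical discrete-choice sketch (simulated data): how much of a
# price premium can a green claim sustain?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 5000  # choice tasks, each comparing product A with product B

# Differences (A minus B) in price and in presence of a green claim.
price_diff = rng.uniform(-2.0, 2.0, n)  # dollars
claim_diff = rng.integers(-1, 2, n)     # -1, 0 or 1

# Assumed preferences: higher prices hurt, green claims help.
utility_diff = -1.2 * price_diff + 0.8 * claim_diff + rng.logistic(0, 1, n)
df = pd.DataFrame({
    "chose_A": (utility_diff > 0).astype(int),
    "price_diff": price_diff,
    "claim_diff": claim_diff,
})

fit = smf.logit("chose_A ~ price_diff + claim_diff", data=df).fit(disp=False)
# Implied willingness to pay for a green claim, in dollars: the claim
# coefficient scaled by the (negative of the) price coefficient.
wtp = -fit.params["claim_diff"] / fit.params["price_diff"]
print(f"Implied premium for a green claim: ${wtp:.2f}")
```

Dividing the claim coefficient by the price coefficient yields a dollars-per-claim figure, which is one way of putting a number on what a greenwashed claim is worth to a company.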

 

- Great, thanks Ravi and Karen. Excellent presentation, and super topical. Here at BETA, misinformation seems to be creeping into more and more of our projects, so it's something we've certainly been thinking about a lot, and that application to the climate change space through the greenwashing example is really interesting. A couple of questions came through during the presentation, Ravi, which you've beautifully answered as part of the presentation, so I'll leave those for you to reflect on later. But one question has just come through asking whether you looked at whether pre-bunking can help people differentiate between greenwashing and firms' genuine green claims.

 

- That's a really good question, and it's another area for future exploration. What I'd like to try is the same exercise to see whether people can discern legitimate from illegitimate claims. I genuinely don't know. If I had to guess, I suspect it would probably just make people more sceptical of environmental claims generally; I don't know whether it would totally wipe out recognition of genuine environmental benefits. For it to be effective, you would probably need to do a bit more: not only show what an illegitimate claim looks like, but also what kinds of claims are legitimate. For example, a company that is fully carbon neutral, that's good; a company that talks about one specific part of its business going carbon neutral is probably greenwashing. It also depends a little on the type of greenwashing you're talking about, because there are a few different ways to greenwash. So the short answer is I don't know, but hopefully we can look into that a little more.

 

- Thanks. Another question draws on those couple of slides where you mentioned that people's scepticism increased regardless of which version of the ad they saw. Given that the intervention did seem to create a higher level of scepticism across the board, how do we help consumers correctly distinguish organisations that are greenwashing from those making genuine efforts?

 

- It's a good question, and it relates to what I said before: the short answer is I don't know, but we'll need to try a few different things. As I said, I imagine it might involve giving people more than just examples of what is bad, and also showing what is good and what to look out for. The other approach, and this is always tricky, is certifications: a mark of genuine green behaviour that is endorsed or supported by a credible agency is usually a strong signal. The problem is that I think a lot of those marks have become diluted as well. So, as I said, the short answer is I don't know, but hopefully we can try and find out.

 

- If I can just add to that: it's a bigger ask, but ideally we'd be teaching people critical thinking skills, or reminding them of critical thinking skills they've learned previously. Whether or not a claim in an ad is true, is it a big enough reason to support that company? If they're only mentioning one part of their business, could they be greenwashing? And if they aren't greenwashing, is that good enough? So hopefully it's not just teaching people about one particular type of misinformation like greenwashing, but also generally getting people to be more sceptical of advertising. The true goal is critical thinking skills, and however we can get there is ideal.

 

- Thanks Karen. Another question we've got relates to government-level interventions. Did you learn through this experiment what kind of government intervention or mitigation people expect? And related to this, how do citizens respond to the kind of government-driven individual-action interventions associated with corporate greenwashing?

 

- I'll take the first part first. We didn't really go into what sort of expectations people have there. As I said, this is still very early days, and it's one of the earlier and larger studies, so it was really just trying to find out where people think responsibility should lie. Again, that's one of the things to think about next: how should governments intervene, and how can they measure some of these things? I guess the idea would be to create some sort of test or metric and a way of measuring it, but we're still a while away from that. In terms of what sort of government intervention people expect, I couldn't speculate. Could you repeat the second part of the question?

 

- Yes, sorry. How do citizens respond to the kind of government-driven individual-action interventions associated with corporate greenwashing?

 

- Again, I don't know; we didn't really look into that much. I couldn't answer that at this stage.

 

- Sorry, I'm just double-tapping my mute button. I think it's safe to say there's a lot of interest in the research you've done, and that the interest transcends the parameters of the trial. So, a couple of other questions coming through. There's a question around the detrimental effect of greenwashing on continued motivation for effective personal action and on building a groundswell of public demand. Any insights you can provide on this point?

 

- Yeah, it's a good question, and that was part of the motivation behind understanding what was going on, and behind some of those questions I flagged at the end about whose responsibility it is to deal with climate change. Our concern was that greenwashing might shift some of that responsibility away from corporations and governments and towards individuals. It seems like it doesn't: regardless of our interventions, there is still a pretty strong sense that governments and corporates are the ones who need to be taking action, rather than individuals. I think it would be interesting to explore further whether greenwashing makes people think they are doing more for the environment by making these environmentally friendly choices, and whether there is some sort of crowding-out effect, which may or may not be the case. I don't know, although I did read a paper on this that came out, I think, literally a couple of days ago, suggesting that these crowding-out effects in the climate change space don't necessarily hold up, or don't exist as much as we might think.

 

- We've got a question here: would you still consider it greenwashing if a company certifies only part of its emissions boundary as carbon neutral, but makes it clear and public what is included and excluded?

 

- It's a good question. This is probably similar to the concept of misleading and deceptive conduct, where it's hard to draw a hard line. My perspective has been that it's probably more of an empirical question than we think: if you want to know whether something is misleading, just ask people, and you'll get a pretty good sense of what they think is actually going on. Depending on how it's done, I could see a version that is not misleading: 'sure, we've only offset 20 per cent, these other things haven't been offset', and it depends on how the company puts that forward. I could see a scenario that is pretty honest: so far we've done 20 per cent, there's still 80 per cent to go, but that will be done in the next decade. I could also see a scenario where a company designs something to make it look like they're doing a lot when they're not actually doing a lot, and they do disclose it, but not in a particularly clear or effective way. So the short answer is that it depends on exactly how it's done. There's a way to do it well and a way to do it poorly, and I'd need to see the ad, basically.

 

- Okay, and I think we might make this one the last question, and then we'll take you out of the hot seat. This question is around how you would apply this in a real-world setting. How would you control for people's perceptions of companies in a real-world replication of this research with actual ads? Could you consider taking the branding off the ads before presenting them to participants?

 

- The ads that we used were very similar to actual company ads that already exist. As an aside, we did a very small exercise, I wouldn't even call it a replication, with about 80 people at a conference. It was very much bodged together, and we used superannuation funds instead of energy companies, again with ads very similar to what actual companies put out. One of them in particular was really funny: it was something like 'go out and make a stand for Earth Day', which some super fund has definitely used, and it literally says nothing about the company itself. But that was the most highly rated ad we had. So we have designed these ads to be pretty close to what companies actually do, and I think we are capturing those effects. I would actually want to test the effects with the brands in there, because what I want to find out is how much those pre-existing notions outweigh what the ad does. As I said, you might have pretty strong fixed views about the environmental credentials of CBA or Target or BHP, and so greenwashing doesn't really shift things much for you; or maybe it does, I don't know, maybe there are interaction effects. So I would probably want to keep the brands on there. I should note there are likely to be some complications with doing that: our legal team has told us there are probably going to be issues with using real company trademarks in that way. If someone from government wants to indemnify me from any consequences, I'm up for doing it. But there will probably be some complications around using specific brands. To the point of the question, though: these ads effectively were stripping out the brands, putting random company names on there, and using essentially the kinds of ads that companies have already put out.

 

- Great, thanks so much Ravi, and especially for handling all of those questions in quick succession. Karen, I'll just give you a quick opportunity if you wanted to make any final comments.

 

- Oh, I think Ravi did a phenomenal job answering all those questions, and I'm just really excited about how we can not only take the research forward but also apply it to other areas of misinformation, disinformation and fake news. On the definitional point: when we started in misinformation research, it was surprising how much the definitions varied between different academics and research groups. It isn't perfectly clear-cut, so I'm sure greenwashing, as great a term as it is, suffers the same definitional challenge. So yeah, just really excited about where this research takes us.

 

- Great, thanks Karen. And Janine, I know there were a couple of questions in the chat where there was a bit of overlap between your presentation and the concept of greenwashing. Did you want to make any final comments?

 

- Yeah, I'd love to. Thanks for the opportunity, Andrea. As you say, there's a lot of overlap, and it's really nice to be able to say that the ACCC has made green claims an enforcement priority for this financial year. We are currently undertaking a sweep, and there's a media release, which I linked in one of the previous Q and A answers, but I can relink it for anyone interested. So it is something we take very seriously: we want to make sure consumers are not misled by green claims. And I will say that we are working in conjunction with ASIC, who take green claims cases in the financial services sector. So those superannuation funds that BIT mentioned would be within ASIC's remit; we do everything that's not in the financial services space. It's something we care about a lot, we have a few in-depth investigations under way, and we want to make sure that consumers can make good choices, in line with their values, based on accurate statements.

 

- Wonderful. Thanks Janine, and thanks Karen and Ravi so much for your time today and the effort you've put into preparing your presentations; they were both fabulous. Thanks also to everyone online. I hope you enjoyed the experience of joining online, and thank you all for playing along with the online Q and A; I think that worked really well. A reminder that the next BI Connect session is next Thursday, the 8th of December, and the topic for that session will be sludge and complex consumer decisions. In that session we'll be hearing from Alex and Eva at the New South Wales Behavioural Insights Unit, and from Katie and Laura here at BETA. So thank you everyone for your time, and enjoy the rest of your day.

Presenters

Dr Karen Tindall

Principal Advisor, Behavioural Insights Team Australia

Karen is a Principal Advisor at the Behavioural Insights Team (BIT), Australia. She holds a PhD in Political Science from the Australian National University, in the field of public sector crisis management. Karen is an Adjunct Associate Professor at the Institute for Governance & Policy Analysis, University of Canberra.

Janine Bialecki

Principal Economist at the Australian Competition & Consumer Commission (ACCC)

Janine is a Principal Economist at the Australian Competition & Consumer Commission (ACCC). Janine holds a Master of Science in Economics, specialising in Behavioural Economics and Game Theory. Prior to the ACCC, she spent eight years at Treasury and two years at BETA.

Janine is also passionate about promoting the role of women in economics, having spent three years as a board member of the Women in Economics Network Victoria branch, including as Chair.

Ravi Dutta-Powell

Senior Advisor at BIT Australia

Ravi is a Senior Advisor at BIT Australia, specialising in regulation and compliance, consumer finance, education, and international development. He is a co-author of BIT’s Applying Behavioural Insights to Regulated Markets. Prior to joining the team, Ravi worked for the Australian Securities and Investments Commission.