
Behavioral Science & Compliance: Why You Can’t Just Say ‘Be Compliant’

There’s always a better way. Today we recap a recent Star webinar with human-risk expert Christian Hunt, who spent some time with us helping attendees understand new and better ways of thinking about and approaching compliance.

Several weeks back in this space, we talked about creating a culture of compliance at your firm: how culture is a mindset, and how an embedded culture shapes behavior, nudging people toward the right decision or the wrong one. Today we continue that thread with A Deep Dive Into The Human Psyche: The Application Of Behavioral Science To Modern Compliance, a recent webinar hosted by Star’s Director of Product Strategy & Marketing, Tim Ward, with human-risk expert Christian Hunt. Hunt is the founder of Human Risk, a behavioral science consulting and training firm that specializes in risk, compliance, and ethics. He and Ward discussed applying behavioral science to compliance programs. The highlights of that discussion follow; you can access the webinar in full here.


  • Behavioral science is the understanding of the drivers of human decision-making—the actual reasons we make decisions. People are good at deluding themselves as to why they’ve done something. We’ve all bought things that we’ve later regretted. We’ve all done irrational things.
  • Research in this space draws on neuroscience but also economics. If you simply increase the price of certain goods, they’re perceived as more valuable and people line up. That’s not rational behavior, so what drives it?


  • You need to get people behaving in the right way. And if you’re in the business of influencing human decision-making, why not borrow the techniques others, like advertisers, have already used for similar ends?
  • A good example of the way compliance has proliferated is its massive policy books. Something’s gone wrong? Write another rule. What’s left is a giant rulebook that gives the organization a sense of comfort, but for frontline individuals it’s just another set of pages to trudge through.


  • Theoretically, it’s effective to have something in a policy or a handbook, but if nobody is going to read that handbook, then you’re only theoretically mitigating the risk. We know from real-life experience that if we’re required to do something particularly tedious, we’re less likely to do it.
  • When you download an app, there are terms and conditions you’re supposed to read, and theoretically you have, because you clicked on the button that says you have. But in practical terms, you haven’t. And you know you haven’t. And Apple knows you haven’t.
  • Why don’t most people bother reading these things? In the Apple example, since it’s a non-negotiable proposition, what’s the point? If you translate that thinking into compliance, there are loads of processes that look good on paper but don’t achieve what they set out to achieve.


  • What behavioral science does—bringing science to compliance, if you like—is all about saying: “Let’s think about it from the end-user perspective, and work out what it is we need to do to make people more likely to do the thing that we want them to do or not do.”
  • Sometimes we need people to do something. Sometimes we need them to not do something. Sometimes we need them to do something in an engaged manner to really deliver the outcome we’re looking for. The techniques you deploy depend on the outcome you want.


  • Many of the compliance methods we’re accustomed to using have been passed down; they’re traditional methods, but they aren’t necessarily delivering the outcome. Accepting that is challenging: if you’re the person who built up a framework, it’s difficult to turn around and say, ‘I got all that wrong.’
  • The second challenge is that people are terrified of regulators; they quote them and insist regulators won’t like this or that. But it’s really more a case of how you present something to regulators.


  • One of the things we take comfort from in compliance is that we’ve codified things. But the moment you start codifying, you run the risk of oversimplifying a very complex situation. Just because you stick to the speed limit doesn’t mean you’re a safe driver.
  • But there are other situations where you need people to use their initiative, where you need people to think for themselves, to react to the situation. And so thinking about which of those two you’re dealing with would lead you to decide whether you need to codify or not codify.
  • We really need to start balancing compliance: “Here’s the stuff I need you to do, no questions asked, no challenge. Here’s the stuff where I need you to think for yourself.”


  • If you have lots of people breaching a policy, it’s unlikely they’re all willfully doing so. It’s more likely there’s something wrong in the policy. Maybe it’s badly worded. Maybe training on it is poor. Maybe it was presented too long ago. Maybe they don’t see the relevance of it.
  • When you’ve got widespread breaches of a policy, management says: “Ah, bad population, we need to reprogram the people.” But very often there will be a cause that’s within the remit of the people setting up the control framework that’s driving a particular behavior.


  • Data is very powerful and should never be dismissed out of hand. It can tell you some very interesting things. But it’s worth remembering that data comes from one place—the past. As we know in financial services, past performance is not necessarily predictive of future outcomes.
  • We’re all human beings, and there are some basic behaviors we all display. If you’re mad keen on analyzing data, use it, but remember that it has its limits. People can be unpredictable.


  • If you employ human beings, you’re running a degree of risk. It is inescapable. You can’t be there every single moment somebody might do something you might not want them to.
  • Tech will allow you to eliminate certain kinds of risks. There are things that tech is really good at and that people are appalling at; repetitive tasks requiring strict adherence are a prime example.
  • Things you can program an algorithm to do, where there’s no shifting around—where you just want a particular outcome—use technology to do that kind of work.


  • There’s a temptation to lock everything down, but people don’t like being controlled. Recognize that if you have machines doing the simple, repetitive tasks, then you’re employing people to do smart, creative, empathetic, and intelligent tasks.
  • We’ve got to think about creating smart ways to surveil so people don’t feel like they’re in a big brother state. There will be times where that’s appropriate, but not every time.
  • Recognize where people are likely to have problems, then try to intervene before they make mistakes. Too often the only intervention is a system that catches someone afterwards.


  • Everything you do will have a desired outcome and a potential side effect. We tend to focus on the desired outcome. We don’t think about the unintended consequences of our processes.
  • Recognize there’s always a tradeoff, and that by making it clear what is unacceptable we’ll achieve compliance with certain people, but we’ll also irritate others.


  • The word “compliance” is an awful, awful, awful piece of branding that sends all the wrong messages. What we have to understand here is we’re in the business of persuading people to behave in a certain way, and that it’s not a black-and-white process.
  • We’re not programming computers; we’re trying to influence human decision-making. That’s complicated, but there’s a ton of material out there we can deploy that probably hasn’t been considered before. Think differently, in other words.