Phishing: it's not the user's fault...
I was reading an article about the potential for Generative AI to be used by attackers to improve the efficacy of their phishing emails, and a specific piece of guidance stood out to me:
“…Companies probably need to just be making sure that their employees are even more vigilant and even more aware of phishing attacks in general, regardless of whether they’re AI-generated or not…”
First off is the statement that protections should exist “…regardless of whether they’re AI-generated or not…”. At its core, I agree – we should avoid security mitigations that rely on a point solution to a point problem. But critically, we really have little idea of whether an email is AI-generated or not, so how could we change our posture in response? To somewhat butcher a quote from my colleague Max Heinemeyer:
As Defenders we only really see the sausage – very rarely do we see how the sausage is made
It’s fundamentally a question of cause and effect – i.e. we often only ever observe the effect, not what caused it. This extends beyond phishing emails, and whilst I accept that forensic investigation can sometimes determine the cause, many times we simply have to infer it.
But this probably distracts from my main issue with the quote – the idea that it’s down to the users to be more vigilant, and by extension, that it is therefore the user’s fault for not being vigilant enough, should a phishing email get through.
For the folks at the back of the room, I’ll be really clear:
It is (with a small number of specific exceptions) never the user’s fault.
The snake oil salesmen, ambulance chasers and keyboard warriors who peddle the idea that it is always a problem with “Layer 8” – all technical snobbery, but often close to zero hands-on practical experience of Cyber Security operations – will unfortunately likely never go away. But as security pros, we must dismiss that idea wherever we see it.
I think there are two ways you can look at this – the System Engineer’s perspective and the Psychologist’s perspective.
The System Engineer
The traditional model for network design has been to focus on the technical components - the hardware and software that go together to make our system. We introduce fault-tolerant system design with fail-safes – even the origins of the Internet, back then known as ARPANET, were a deliberate divergence away from traditional circuit-based systems, so that they could be more resilient in the case of a nuclear war!
But even with ARPANET, the fault-tolerance was limited to a user’s impact on operations, not on security. And that mindset bleeds through today, with an almost “fingers crossed” approach that the user won’t click on a link.
Instead, we should include users as part of the system design, and as part of any security fail-safes. If we’re comfortable with the idea of “defence in depth” for when a specific technical mitigation doesn’t fully succeed, why can’t we extend the same concept to the user?
Simply put, the system should still be able to provide some defence regardless of whether a user clicks a link or not – a click should not be a direct “do not pass go” gateway to your crown jewels; there should be other things in the way.
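To make that concrete, here’s a minimal sketch of what “defence in depth around the click” might look like. Everything in it is illustrative – the layer names, checks and domains are hypothetical, not any particular product’s API – but the shape is the point: the user’s click is just one of several independent chances to stop the attack.

```python
"""Illustrative defence-in-depth sketch - all names and checks are
hypothetical. The point: a user's click is one event inside a layered
system, not the single control that decides the outcome."""

from urllib.parse import urlparse

# Layer 1: a toy blocklist, standing in for a web proxy / DNS filter
BLOCKLIST = {"evil.example", "phish.example"}

# Layer 3: network segmentation - what a standard workstation may reach
REACHABLE_FROM_WORKSTATION = {"mail", "intranet"}  # note: no "finance-db"


def proxy_allows(url: str) -> bool:
    """Layer 1: block known-bad domains before the page ever loads."""
    return urlparse(url).hostname not in BLOCKLIST


def sandbox_is_clean(page_html: str) -> bool:
    """Layer 2: detonate the page in isolation; flag obvious credential traps."""
    return "password" not in page_html.lower()


def segment_allows(target: str) -> bool:
    """Layer 3: even a compromised endpoint can't reach everything."""
    return target in REACHABLE_FROM_WORKSTATION


# The user has already clicked - and the attack still has to get past
# every remaining layer before it reaches anything that matters.
url = "https://phish.example/login"
if not proxy_allows(url):
    print("Stopped at the proxy - the click didn't matter.")
elif not sandbox_is_clean("<html>please enter your password</html>"):
    print("Stopped in the sandbox - the click didn't matter.")
elif not segment_allows("finance-db"):
    print("Contained by segmentation - the blast radius is limited.")
```

None of these layers is perfect on its own – which is exactly the point of having more than one.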
The Psychologist
Other than the time that my dog tried to lick the buttons on my keyboard, most users are human. They make decisions like a human, and they take actions like a human. There is an element of bias that creeps in, as well as elements of personality: someone’s ability to spot patterns, their attention span, their ability to focus or multi-task, or just whether they’re distracted because a bird flew past their window.
And people have jobs, jobs that are stressful, jobs that require intense focus on tasks that have little to do with security. Jobs that rely on speedy responses. There are even some jobs where it is literally a requirement to click on links in emails or open up attachments – recruitment teams, accounts payable, and legal & medical secretaries, to name but a few.
A fundamental tenet of email spam is that it works – even if only a very small percentage succeeds, it still has a success rate, and that rate only increases with quality and quantity. And it doesn’t work because people are stupid, but because they were busy, focussed on something else, and on first glance it looked just like any other email they’d receive.
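A quick back-of-envelope illustration (with entirely made-up numbers) of why that economics works for the attacker:

```python
# Back-of-envelope spam economics - the numbers are made up for illustration.
emails_sent = 1_000_000
generic_click_rate = 0.001   # 0.1%: low-effort, mass-blast campaign
tailored_click_rate = 0.01   # assume better-crafted lures lift this 10x

print(f"Generic campaign:  {emails_sent * generic_click_rate:>8,.0f} clicks")
print(f"Tailored campaign: {emails_sent * tailored_click_rate:>8,.0f} clicks")
# Generic campaign:     1,000 clicks
# Tailored campaign:   10,000 clicks
```

At a million emails, even a 0.1% click rate is a thousand clicks – vigilance doesn’t need to fail often to fail enough.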
Story time
When speaking about the level of social engineering that I have seen, one story that I often bring up is that of a CEO at a high-tech firm. Their company had been on the receiving end of countless campaigns targeting their environment, some of which came from the scarier end of the Nation State / APT spectrum – and they’d been learning their lessons, with extra monitoring and hardening of endpoints.
The attacker was persistent, and they had done their homework. Using social media, they’d identified that the CEO had two young children, and from the classic school uniform photos uploaded on the first day of term, worked out which school it was that they went to. From there the attackers turned their attention to the school’s IT systems – a sector known for weaker protections, partially down to limited resourcing. It wasn’t long before the attacker had compromised the school IT and worked their way up to the email account of the head teacher. From there, they sent an email to the CEO:
“Dear XXX, I just wanted to reach out and let you know that your son, XXX, has been involved in an accident at school. He’s fine, but we’re taking him up to the local hospital just to make sure.
Attached to this email is the accident report form and a few photos so you can see for yourself”
When the CEO tried to open the attachment, they found it had been stripped by the email gateway. So they rang up the IT Helpdesk and told them with some urgency that they needed to release the email. And as they were the CEO, and given their tone of voice, it was promptly released. When the attachment was opened, it was immediately quarantined by the on-host antivirus. Yet again the CEO rang up the IT Helpdesk and, with more urgency and exasperation, ordered them to release the file. A few days later, nearly 40GB of sensitive commercial data was exfiltrated out of the network.
Now, I’m a parent, and I can tell you there is a pretty decent chance I’m going to open that attachment. This is not meant to scare you, and certainly not every attack is going to look like this, but it’s a pretty big “come to Jesus” moment: there is fundamentally a threshold beyond which, if the attacker works hard enough, the social engineering is just going to work time and time again.
I’m a big fan of this blog by Ian Levy, then Technical Director of the NCSC, on when he nearly got caught out by an online prankster. A good reminder that even those at the top of their game in Cyber Security can get tricked: https://www.ncsc.gov.uk/blog-post/serious-side-pranking
And could we see Generative AI producing targeting such as in the story above? Yeah, I’m pretty sure you could.
What’s the point - shall we just give up now?
The point of this post is to simply say that it shouldn’t be down to the user to catch and stop phishing emails - whether they were AI-generated or not. There are absolutely alternative ways, and my core tenet is that technology should be used to solve what is fundamentally a technology problem.
One way to think about it is much like the advice I often give around a so-called “Zero Day attack” - the 0-day exploit, or in this case an email, represents just one component of the end-to-end process. Whether it’s before delivery or after, there are lots of opportunities to detect and stop the attack from propagating, and the other stages aren’t all going to be just as sophisticated. The Cyber Kill Chain® from Lockheed Martin is a good way of thinking about opportunities to position defences against other elements of the attacker’s activity.
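As a rough illustration of that framing, here are the kill chain stages laid out with an example defensive opportunity at each – the pairings are my own illustrative picks, not Lockheed Martin’s:

```python
# The seven Cyber Kill Chain stages, each paired with an example place a
# defender could detect or disrupt the attack. The email (Delivery) is
# just one link; the pairings are illustrative, not exhaustive.
KILL_CHAIN_DEFENCES = {
    "Reconnaissance":        "monitor for scraping/scanning of staff details",
    "Weaponization":         "threat intel on known toolkits and builders",
    "Delivery":              "email gateway, SPF/DKIM/DMARC, sandboxing",
    "Exploitation":          "patching, hardened endpoints",
    "Installation":          "application allow-listing, EDR alerts",
    "Command & Control":     "egress filtering, DNS/beaconing analytics",
    "Actions on Objectives": "segmentation, DLP, exfiltration alerts",
}

for stage, example in KILL_CHAIN_DEFENCES.items():
    print(f"{stage:<22} -> {example}")
```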
But even with the email itself there are opportunities to spot and defeat it. Simple technical mechanisms such as SPF/DKIM/DMARC can help, but so can much more sophisticated methods such as Machine Learning to understand whether an email is structured in the way you’d expect from this sender, regardless of prior knowledge or threat indicators.
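For the simple end of that spectrum, here’s a minimal sketch using the third-party dnspython library (pip install dnspython) to look up the SPF and DMARC records a domain publishes – the DNS building blocks a receiving gateway checks before trusting a message that claims to come from that domain. The domain queried is just an example.

```python
import dns.resolver  # third-party: pip install dnspython


def get_txt(name: str) -> list[str]:
    """Return every TXT string published at `name`, or [] if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]


def check_domain(domain: str) -> None:
    # SPF lives in a TXT record on the domain itself...
    spf = [t for t in get_txt(domain) if t.startswith("v=spf1")]
    # ...DMARC in a TXT record on the _dmarc subdomain.
    dmarc = [t for t in get_txt(f"_dmarc.{domain}") if t.startswith("v=DMARC1")]
    print(f"SPF for {domain}:   {spf or 'none published'}")
    print(f"DMARC for {domain}: {dmarc or 'none published'}")


check_domain("example.com")  # output depends on live DNS
```

A receiving mail server does the equivalent lookups (plus the cryptographic DKIM check) to decide whether a message is even allowed to claim that From: domain – no user judgement required.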
But what about phishing awareness training?
All that being said, user vigilance definitely has its place, and a lot of that is informed and shaped by security awareness training. It’s an invaluable component that features at the core of many common security frameworks that emphasise even just doing “the Basics”, including, but not limited to, the NCSC’s 10 Steps to Cyber Security.
But for a few different reasons, I’d urge caution on just relying on this one approach.
1. Every test we’ve done in life has a pass mark, a standard we’re expected to attain. I can’t think of a single example I’ve personally done where that pass mark is 100%, so there must be an element of mistake that’s acceptable. What’s the acceptable rate of clicks (mistakes) for a phishing campaign? Does this rate vary from employee to employee - are there some individuals where a lower tolerance is appropriate, individuals with higher access privileges perhaps? Or is everyone from Cleaner to CEO expected to pass to the same standard?
2. What is the intended outcome, and what is the simulation going to simulate? Are you trying to give users visibility and the skills needed to detect the run-of-the-mill mass phishing campaigns, most of which should be stopped by any decent email security platform? Or are you trying to demonstrate the “art of the possible”, to give complacent users a wake-up call, knowing full well that most will fall for it? Or is it somewhere in the middle, or maybe a mixture of all 3? I’ve seen it go horribly wrong when it was just an opportunity for the internal red team to show off how 1337 they were.
3. Be careful how you compare yourself with others, or even with your own results year-on-year. Following on from 2 is the recognition that without the right controls in place, no two phishing simulations are the same. It’s straight out of school science lessons - keep the variables to just one, and for this scenario ideally that one is time.
4. A good outcome is not just about the clicks. Ultimately, whilst phishing simulations will never completely eradicate a user clicking on something, every training session increases the chance that an employee thinks “Hey, that’s weird - I’m gonna report that to Security”. It might only take a single user to click the link that leads to ransomware, but it also only takes a single employee to flag it to the SOC.
Pun intended.
Disclaimer: I work for Darktrace who sell a product that does exactly this! ahem… https://darktrace.com/products/email