
Establishing a path to secure AI
January 2025
Cybercrime is booming. We all know it. But while attack vectors are evolving by the minute, our security budgets are barely inching forward. This disconnect is creating a dangerous vulnerability, particularly with the rise of human-manipulation fraud such as AI-enabled deepfakes. Deepfake fraud is no longer just something that happens to celebrities. It is now an everyday reality, and a boardroom problem. My work in biometric security, across both the public and private sectors, has given me an insider’s view of how devastating it can be.
Cybercriminals are leveraging the power of AI to clone voices, create realistic video impersonations, and mimic suppliers. They are tricking employees into approving multi-million-dollar transactions, and unfortunately, it’s working. Remember the deepfaked CFO whose video-call impersonation convinced an employee to transfer $25 million? That’s just one example of how sophisticated these attacks have become.
Let’s look at some numbers to contextualise the problem. Cybersecurity budgets are creeping upwards, growing by around 8% in 2024. The talent shortage is crippling, with some 500,000 unfilled cybersecurity jobs in the US alone. Meanwhile, cybercrime is projected to cost a staggering $12 trillion in 2025. The math simply doesn’t add up.
A legitimate business can only allocate a fraction of its budget to cybersecurity. Cybercriminals, on the other hand, dedicate 100% of their resources to committing fraud. The deck is stacked in their favour.
Yet many companies are stuck in a reactive cycle: buy more tools, hire more specialists, run “check-box” training. It’s an unsustainable arms race. We’re throwing money into a bottomless pit and getting little in return.
The harsh reality check: we are in the cybercriminals’ trap
Three points support my thinking:
1. Cybersecurity spending has become a black hole. No matter how much we invest, attacks are still getting through.
2. Criminals aren’t always hacking systems; they are hacking people, with a well-timed email, a cloned voice note, or a convincing deepfake video call.
3. If an employee believes the deception, no amount of security technology can stop them from acting on it.
If budgets are flat, security teams are stretched thin, and technology alone is not enough, what’s the answer?
Well, how about we tap into the latent pool of talent we already pay for: our salaried employees? What if we built a fraud-aware workforce and empowered every employee to act as a secure gateway?
Deepfake fraud exploits human trust. But humans also possess the solution: critical thinking and pattern recognition, qualities that AI cannot easily replicate. A trained employee might notice subtle inconsistencies in a deepfake video – a lack of the CEO’s usual sarcasm, perhaps – and decide to proceed with caution.
Employees are also a highly scalable defence. Instead of relying on a few experts, organisations need to equip everyone with the skills to spot and stop deepfake threats. This amplifies our defensive capabilities exponentially. Very quickly, employees go from being “Attack Surfaces” to being “Defence Forces”.
I have seen fast and fantastic results from one approach in particular: simulated attacks. If criminals are impersonating your executives, why aren’t you simulating those scams to train your employees?
Yes, we need to invest in defensive technology and robust processes. But we also need to invest in our people: employees who question unusual requests, verify before they act, and escalate when something feels off.
Cybercriminals are using AI to scale their attacks. We need to be just as strategic in scaling our defences. This means testing our employees before the criminals do. The best security investment is not always another tool; it’s a workforce that knows when it’s being played. Because in a world where AI can fake faces, voices, and messages, a curious, cautious, and well-trained employee is the most valuable line of defence we have.
What are your thoughts on this? How is your organisation preparing for the rise of deepfake fraud? Share your insights in the comments below.