
The Deepfake Dilemma: Can You Trust What You See Online?

[Image: Four people in a video call, smiling and waving at the camera, each in a different setting.]

Introduction

When you hear “deepfake,” you might imagine fake celebrity pictures or convincing prank calls. Maybe you think about those AI tools that can “bring back” famous people as they would look today. However, deepfake technology has also infiltrated our workspaces and become a serious threat to our data privacy.

From finance departments to video meetings, threat actors use these tools to create convincing impressions of someone trustworthy or intimidating. By mimicking a person's voice and appearance, attackers can impersonate executives, submit fake invoices, and trick employees into taking unsafe actions they otherwise wouldn't.

What Is a Deepfake Scam?

Deepfakes use AI-generated audio, video, or both to create convincing impersonations of real people.

These scams don’t always look like the usual phishing emails that you know how to spot. They’re smarter. They use voice clones, video impostors, even virtual meetings with fake participants. That’s why awareness matters — not just for IT teams, but for you and your coworkers, too.

Here are some examples of scams that you may encounter:

  • A fraudster clones a CEO's voice and calls a finance employee, urging them to wire money urgently.

  • A video call appears to show a senior executive in a trusted meeting, asking for a “quick and confidential” transaction.

  • A call from a spoofed number appears to come from a trusted third-party vendor, but the familiar voice on the line isn’t your vendor at all.

These attacks exploit two things: the visual or auditory trust that we automatically give to “familiar” voices or faces, and the urgency of the request itself. When done well, these scams can be terrifyingly believable.

Case Study: A $25 Million Deepfake Fraud

In early 2024, Arup, an engineering firm headquartered in the UK, fell victim to a deepfake video call scam. According to multiple reports, an employee accepted what appeared to be a routine video conference with senior executives. The issue? The “executives” in the meeting were, in fact, deepfakes that mimicked real people, built from publicly available images and audio. As a result, the tricked employee transferred about US$25 million to fraudster-controlled accounts.

What stands out is how this wasn’t a traditional malware breach. It began with a trusted face (or voice) and a believable request. That’s what makes deepfakes so dangerous.

What Does This Mean for You?

Even if you don’t work in finance or hold a senior role, deepfakes can still affect you. For example:

  • You could receive a message (email, chat, or call) that appears to be from your boss, asking for access, approval, or confidential info.

  • You might join a meeting where someone who looks and sounds like a leader is directing you to act — but that person isn’t who you think.

  • You may see a video or voice message urging you to sign something, pay someone, or give credentials.

Under pressure, you might fall for any of these scams because they seem urgent and real.

How to Stay Safe (Without Panic)

In short: deepfakes blur the line between “I know that person” and “That person is fake.” Here are practical habits you can adopt to make it easier to spot and report suspicious requests.

  • Pause and verify: If someone sends an urgent request (especially one involving money, access, or data), stop. Pick up the phone or message your boss via a verified, encrypted channel.

  • Check the schedule: If you receive a video call from someone you know, ask yourself whether it was scheduled or expected. Did the invitation come through your calendar or a trusted invite, or was it just a random link?

  • Look for small, weird things: Deepfakes have gotten very sophisticated, but the technology is still imperfect. Strange accents, odd phrasing, awkward pauses, or a face that doesn’t quite feel right can all be clues.

  • Question urgent instructions: “We need this now because…” is a favorite trick. Take a breath and reassess stressful messages before complying.

  • Use secure channels: When sharing sensitive information or approving requests, stick to safe communication pathways that your company uses and trusts.

  • Speak up: If something feels off, you’re doing the right thing by asking. It’s better to pause a real request than to rush a fake one.

When it comes to protecting your private data, caution is the best defense.

Conclusion

We often think of cyber threats as code, scripts, or viruses, but deepfakes show us a new frontier: our senses and our trust under attack. The face or voice you thought you knew might not be real, and the request you assumed was legitimate might be a cleverly crafted scam.

By staying alert, verifying requests, and trusting your instincts, you become a key defender of company data. Remember, in cybersecurity, what you see isn’t always what you get — but what you check can make all the difference.
