A New Era of Phishing… Are You Ready for What’s Next?
Phishing and spear phishing have long plagued individuals and organisations alike.
These attacks exploit human trust and error, manipulating users into surrendering sensitive information or unwittingly enabling access to protected systems.
For years, the silver lining was that many of these emails gave themselves away - poor grammar, awkward phrasing, strange formatting… But that window of easy detection is closing rapidly.
In recent years, generative AI tools like ChatGPT have significantly improved the quality of phishing messages. What once looked suspicious now reads smoothly. And if you think phishing has already reached dangerous heights, brace yourself…
The Rise of Hyper-Realistic Deepfakes
At Google I/O 2025, Google launched Veo 3, its latest and most advanced AI video generator. Within hours, creators on TikTok and YouTube began releasing shockingly lifelike videos, produced in minutes from nothing more than a prompt. These 8-second clips are unnervingly real, and they’re only going to get better.
Veo 3 generates visuals, audio, dialogue, background noise, and movement that border on indistinguishable from reality. The uncanny valley - that eerie sense of “something’s not quite right” when watching AI-generated humans - is narrowing dramatically. Characters can now remain consistent across different clips. Facial movements and vocal tones are natural. The once jerky movement quality is now smooth and fluid.
For the average social media user, this technology is currently a playground, but in the hands of threat actors, the implications are profound.
What Happens When Deepfakes Go Malicious?
Cyber criminals have always capitalised on urgency and emotion. Phishing succeeds because it bypasses logic and strikes at human instinct - fear, authority, curiosity. Now imagine adding a new layer: a video message from your CEO, asking you to approve a wire transfer. It looks like them. It sounds like them. It includes a subtle urgency and a familiar tone… and you’re not expecting it to be fake.
This is no longer a hypothetical situation. With tools like Veo 3, it now takes just a couple of hours to create an ultra-realistic video of someone’s colleague, manager, or family member. Combine this with the precision of spear phishing - where attackers have already done their homework on who you are, what you do, and who you interact with - and we are facing a seismic shift in the threat landscape.
We’re already seeing real-world cases emerge, including the recent controversy around ScotRail’s new AI-generated announcer, which voiceover artist Gayanne Potter claims was built using her own voice without consent. The voice, developed by ReadSpeaker and deployed across Scottish trains, is now at the centre of a dispute that highlights how easily real human input can be repurposed into synthetic systems without visibility or control.
The only detection clue might be an almost imperceptible sheen - a ‘too perfect’ polish in the video. But most people aren’t trained to notice these signs. In fact, research published in Science Advances (2024) found that only 36% of people could accurately identify high-quality deepfakes, and that number dropped further when emotional urgency was introduced.
Furthermore, we’ve already seen the need for credibility indicators on platforms like Twitter (now X), where fake content spreads rapidly. But what happens when that content arrives in your inbox - directly, privately, and convincingly?
A Wake-Up Call for Cyber Professionals
The emergence of AI-generated video phishing is, at its core, a human trust issue. As these tools become more accessible, the range of possible attacks expands dramatically.
We’ve seen this already. In early 2024, a finance employee at a multinational firm was duped by a deepfake video call featuring multiple executives. The result: a seven-figure wire transfer sent to attackers.
The call included AI-generated voices and faces of real employees. The attack took weeks of preparation, but the execution lasted less than 30 minutes.
We can’t stop the attackers from doing what they’re doing, but we can prepare and protect ourselves.
We at Cyro Cyber recommend:
Implement Multi-Factor Authentication: Requiring a second layer of verification significantly reduces the success rate of impersonation-based attacks.
Set Clear Policies: Build security considerations into your communications strategy. Clear policies on how information will and won’t be shared - and through which channels - give employees clarity and confidence.
Reinforce Password Hygiene: Encourage and enforce strong, unique passwords across systems. Consider passphrases for added complexity.
Train for the AI Era: Employees must be educated not just on phishing emails, but on deepfake videos, audio attacks, and synthetic media. Provide real-world examples. Teach them how to verify unusual requests through secondary channels.
Create a Culture of Pause: Encourage staff to stop and question before acting on unexpected requests - especially those involving financial transactions or sensitive data. A brief pause can stop an irreversible mistake.
Use Internal Codewords and Protocols: For highly sensitive operations, establish verification methods that are private and cannot be inferred by attackers. Think beyond MFA - consider human-in-the-loop checks, as in the sketch after this list.
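To make that last point concrete, here is a minimal sketch of what a human-in-the-loop approval gate could look like, built on one-time codes from the open-source pyotp library. The approver names, in-memory secret store, and function shape are illustrative assumptions rather than a prescribed implementation - the aim is simply to show a check that a deepfake cannot satisfy.

```python
# A minimal sketch of a human-in-the-loop approval gate, assuming the
# open-source pyotp library (pip install pyotp). Names, the in-memory
# secret store, and the workflow are illustrative, not prescriptive.
import pyotp

# In practice, each approver enrols a TOTP secret out-of-band (e.g. via
# an authenticator app) and it is held in secure storage, not in code.
APPROVER_SECRETS = {
    "alice.cfo": pyotp.random_base32(),
    "bob.controller": pyotp.random_base32(),
}

def verify_high_risk_request(requester: str, approver: str, otp_code: str) -> bool:
    """Approve only if a *different*, enrolled person supplies a valid
    one-time code from their own device. A deepfaked video call cannot
    produce this code: it lives on hardware, not in a face or voice."""
    if approver == requester:
        return False  # self-approval defeats the purpose of the check
    secret = APPROVER_SECRETS.get(approver)
    if secret is None:
        return False  # unknown approver
    # valid_window=1 tolerates one 30-second step of clock drift
    return pyotp.TOTP(secret).verify(otp_code, valid_window=1)

# Example: a transfer "requested by the CEO" on a video call still needs
# Bob to read the current code off his authenticator app:
# approved = verify_high_risk_request("ceo.request", "bob.controller", "492817")
```

The design choice that matters here: the one-time code lives on the approver’s enrolled device, so a convincing face or voice on a video call is not, by itself, enough to move money.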
Red Teaming: Your Best Line of Defence
At times like this, proactive defence is crucial. This is where red teaming becomes invaluable. Red teams simulate real-world attacks - testing not just your technology, but your people, your processes, and your ability to detect and respond.
If your red teaming hasn’t evolved beyond phishing emails and USB drops, it’s time to consider:
Video Message Simulation: Sending internally spoofed videos via Slack or Teams that appear to come from known figures - testing whether staff question or escalate the request.
AI-Powered Phishing Copywriting: Using tools like ChatGPT to replicate natural, convincing language that aligns with your corporate tone - and then testing user response (a sketch follows this list).
Spear Phishing Scenarios: Combining open source intelligence (OSINT) with synthetic content to target high value individuals inside your organisation with customised lures.
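As an illustration of the copywriting item above, here is a minimal sketch of how a red team might draft simulation lures with a large language model. It assumes the official openai Python SDK with an OPENAI_API_KEY in the environment; the model name, prompts, and helper function are illustrative assumptions, and anything produced this way belongs only inside a sanctioned, clearly labelled exercise.

```python
# A minimal sketch of drafting lures for an authorised phishing
# simulation, assuming the official openai Python SDK (pip install
# openai) and an OPENAI_API_KEY in the environment. The model name,
# prompts, and helper function are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_simulation_email(company_tone: str, scenario: str) -> str:
    """Generate copy in the company's own voice, strictly for use
    inside a sanctioned internal awareness exercise."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; use whichever model you license
        messages=[
            {"role": "system",
             "content": ("You write training emails for an authorised "
                         "internal phishing simulation. Match the "
                         "requested corporate tone.")},
            {"role": "user",
             "content": f"Tone: {company_tone}\nScenario: {scenario}"},
        ],
    )
    body = response.choices[0].message.content or ""
    # Tag every simulation artefact so it can never be mistaken for,
    # or reused in, a genuine attack.
    return body + "\n\n[SIMULATION - internal security exercise]"
```

The generated copy can then be fed into your existing phishing-simulation platform to measure who clicks, who reports, and who escalates.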
If your security assessment partner isn’t including these elements yet, it’s worth a serious conversation. At Cyro Cyber, we’re here to help you prepare with insight, empathy, and a deep understanding of where the threat landscape is heading.
Because in the end, security is about people. Let’s keep them safe.
Get in touch today to learn how we can help you.