A video conference call taking place inside an office - but is the person on the other end of the call really who they say they are?
Image: Getty/Luis Alvarez

As if the ongoing fight against ransomware wasn't keeping security teams busy enough - along with the challenges of securing the ever-expanding galaxy of Internet of Things devices and cloud computing - there's a new challenge on the horizon: protecting against the coming wave of digital imposters, or deepfakes.
A deepfake video uses artificial intelligence and deep-learning techniques to produce fake images of people or events.
One recent example came when the mayor of Berlin thought he was having an online meeting with former boxing champion and current mayor of Kyiv, Vitali Klitschko.
SEE: These are the cybersecurity threats of tomorrow that you should be thinking about today
But the mayor of Berlin grew suspicious when 'Klitschko' started saying some very out-of-character things about the invasion of Ukraine. When the call was interrupted, the mayor's office contacted the Ukrainian ambassador to Berlin - only to discover that, whoever they had been talking to, it wasn't the real Klitschko.
The imposter apparently also spoke to other European mayors, and in each case it appears they had been holding a conversation with a deepfake: an AI-generated fake video that looks like a real person speaking.
It's a sign that deepfakes are becoming more advanced, and quickly. Previous deepfake videos that went viral often carried tell-tale signs that something wasn't real, such as unconvincing edits or odd movements.
This whole episode appears to have been concocted by someone purely to cause trouble - but developments in deepfake technology mean it isn't difficult to imagine the same techniques being exploited by cyber criminals, particularly when it comes to stealing money.
As such, this incident is also a warning that deepfakes are enabling a new set of threats - not just for mayors, but for all of us.
While ransomware might generate more headlines, business email compromise (BEC) is the costliest form of cyber crime today. The FBI estimates that it costs businesses billions of dollars every year.
The most common form of BEC attack involves cyber criminals exploiting email: hacking into accounts belonging to bosses - or cleverly spoofing their email addresses - and asking staff to authorise large financial transactions, which can often amount to hundreds of thousands of dollars.
The emails claim that the money needs to be sent urgently, maybe as part of a secret business deal that can't be disclosed to anyone. It's a classic social-engineering trick designed to force the victim into transferring money quickly and without asking for confirmation from anyone else who could reveal it's a fake request.
By the time anyone becomes suspicious, the cyber criminals have taken the money, likely closed the bank account they used for the transfer - and run.
BEC attacks are successful, but many people might still be suspicious of an email from their boss that arrives out of the blue, and they could avoid falling victim by speaking to someone else to confirm that the request isn't real.
But if cyber criminals could use a deepfake to make the request, it could be much harder for victims to refuse, because they believe they're actually speaking to their boss on camera.
Many companies publicly list their board of directors and senior management on their website. Often, these high-level business executives will have spoken at events or in the media, so it's possible to find footage of them speaking.
SEE: Securing the cloud (ZDNet special feature)
By using AI-powered deep-learning techniques, cyber criminals could exploit this publicly available footage to create a deepfake of a senior executive, use a hacked or spoofed email account to request a video call with an employee, and then ask them to make the transaction. If the victim believes they're speaking to their CEO or boss, they're unlikely to refuse the request.
Scammers have already used artificial intelligence to convince employees they're speaking to their boss on the phone. Adding the video element will make it even harder to detect that they're actually talking to fraudsters.
The FBI has already warned that cyber criminals are using deepfakes to apply for remote IT support jobs - roles that would allow access to sensitive personal information belonging to staff and customers, which could then be stolen and exploited.
The agency has also warned that hackers will use deepfakes and other AI-generated content for foreign influence operations - and arguably it's something along these lines that targeted the mayors.
While advances in technology mean it's becoming more difficult to tell deepfake content apart from real video, the FBI has issued advice on how to spot a deepfake, including warping in the video, strange head and torso movements, and syncing issues between the face, lip movements and any associated audio.
But deepfakes could easily become a new vector for cyber crime, and it's going to be a real struggle to contain the trend. It's entirely possible that organisations will need to come up with a new set of rules around authenticating decisions made in online meetings. It's also a challenge to the trust that underpins remote working - what does it mean if you can't believe what you see on the screen?
The more that companies and their people are aware of the potential risks posed by malicious deepfakes now, the easier it will be to protect against attacks - otherwise, we're in trouble.
ZDNet's Monday Opener is our opening take on the week in tech, written by members of our editorial team.