Since the start of the pandemic, digital tools have allowed us to continue to function as a society – albeit with a few changes here and there. Thanks to these tools, we’ve been able to keep working, shopping for groceries, attending virtual GP appointments via video conferencing and even attending digital court hearings.
About the author
René Hendrikse, MD EMEA & LatAM at Mitek.
However, the move to digital has not been without its challenges. The recent viral video of a ‘cat’ passing ‘judgement’ via a Zoom call – when a judge attempted to use his assistant’s computer during a virtual hearing – highlights this. While the trending video brought joy to many, it also draws attention to the question of whether our digital interactions are as secure as they seem.
Could video calls, a tool we have all become accustomed to, pose a potential, unquantified threat? How would this threat, if it is never unmasked, impact our ability to work, borrow and buy securely?
Fraudsters’ new favourite trick
This incident – ‘catcalling’, if you will – should not be dismissed as a one-off. In fact, during the first nine months of the pandemic, a quarter of Brits and 23% of Americans compromised their security at home by sharing their work passwords with a flatmate, partner, or family member amid increased home-schooling, remote working, and socializing.
Research by SailPoint showed that our lockdown cyber hygiene has slipped – which makes the already high risk of fraud even harder to manage. We trust our eyes most of all, which means video in particular can create a false sense of security.
With our social interactions mostly reduced to messages and video calls, what does this mean for retail banks, approving more mortgages, loans and new customers online than ever before? Or corporate banks, where video calls are now the mainstay of relationship managers and corporations large and small? With billions in hard-earned cash on the table, could video calls be the biggest fraud risk yet?
They just might be. Banks and fintechs have already started establishing partnerships to tackle the use of spoofed videos – found to be a new favorite trick of fraudsters a few months into the pandemic – as ‘deepfake’ crimes continue to be the biggest consumer worry. It’s not without reason. Deepfakes and synthetic identities are likely to open the door for the next wave of identity theft fraud.
Perfecting our “computer vision”
While fraud rises, businesses can’t stand still on cybersecurity. With digital channels, including video calls, we tend to rely on the safety of the channel itself – end-to-end encryption, for example – but pay far less attention to how our identity is used on it.
‘Frankenstein fraud’, or synthetic identity fraud, is changing that. We are seeing fraudsters gaining access to ever-more sophisticated technologies to create not just false ID images and video feeds, but fake data records that back up that false identity.
Deepfake videos of famous people, including Elon Musk and Tom Cruise, are already hard to tell from the real thing. What, then, are the chances of spotting a spoof when a brand-new customer is trying to sign up for a banking service? It is a serious threat for fintech companies, banks and e-commerce giants alike.
This means our identity verification technologies must take a risk-based, zero trust approach. The reality is that a person’s identity risk profile can, and probably will, change over their lifetime – for example, if they become a victim of identity fraud. Our technology must stay flexible enough to adjust its parameters as the situation develops, protecting consumers from identity fraud and stopping it in its tracks.
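A risk-based, zero trust approach with adjustable parameters can be sketched roughly as follows. This is an illustrative toy, not any vendor’s real API: the `RiskProfile` class, the score names and every threshold are invented for the example. The point is simply that each session is re-scored against the user’s *current* thresholds, and those thresholds can be tightened if the person’s risk profile changes.

```python
from dataclasses import dataclass


@dataclass
class RiskProfile:
    """Adjustable verification parameters for one user (illustrative only)."""
    liveness_threshold: float = 0.80   # minimum score from a liveness check
    document_threshold: float = 0.85   # minimum score from a document check
    known_compromise: bool = False     # e.g. a prior identity-theft report

    def tighten(self) -> None:
        """Raise the bar after the user's identity risk changes."""
        self.known_compromise = True
        self.liveness_threshold = 0.95
        self.document_threshold = 0.95


def verify(profile: RiskProfile, liveness: float, document: float) -> str:
    """Zero trust: every session is scored against the current thresholds."""
    if (liveness >= profile.liveness_threshold
            and document >= profile.document_threshold):
        return "approve"
    # Borderline scores trigger extra checks rather than a hard reject.
    if liveness >= profile.liveness_threshold - 0.10:
        return "step-up"   # e.g. ask for another selfie or a second document
    return "reject"


profile = RiskProfile()
print(verify(profile, liveness=0.90, document=0.90))  # approve

profile.tighten()  # the user reports identity theft
print(verify(profile, liveness=0.90, document=0.90))  # step-up
```

The same scores that sailed through before the user became a fraud victim now route them to a step-up check – flexibility without locking anyone out entirely.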
To keep real people protected, we have to perfect our “computer vision” and train digital identity verification algorithms on a diverse range of faces, lighting conditions, and camera distances. The technology of today and tomorrow must be able to tell a mask, deepfake photo or deepfake video from a real person – without cutting off people’s access to vital financial products and services in the process.
A balancing act
That said, we like getting what we want in an instant and hate jumping through hoops. A quick, seamless user experience is key nowadays, which makes onboarding a balancing act: convenience versus security, speed versus catching more fraud.
So, remember this: every time an app – whether for a bank, payment provider, or retailer – asks you to move closer to the camera or step back, or to change the framing or lighting of your face, it is not doing so to make the process more difficult. The technology is doing its best to protect us from identity fraud and keep fraudsters at bay.
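The kind of pre-capture check behind those prompts can be sketched as a toy function; the thresholds and signal names below are made up for illustration and do not reflect any real verification SDK’s logic.

```python
def capture_hint(face_width_frac: float, brightness: float) -> str:
    """Decide which prompt to show before accepting a selfie (toy logic).

    face_width_frac: face width as a fraction of frame width (0..1)
    brightness: mean image brightness on a 0..1 scale
    """
    if brightness < 0.25:
        return "find better lighting"
    if face_width_frac < 0.35:
        return "move closer"   # a small face gives the matcher too few pixels
    if face_width_frac > 0.75:
        return "step back"     # a cropped face breaks landmark detection
    return "hold still"        # the frame is good enough to capture


print(capture_hint(face_width_frac=0.20, brightness=0.60))  # move closer
```

Each prompt exists to get a frame the matching and liveness algorithms can actually work with – not to slow you down.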
The digital world has its advantages, and its disadvantages too. What we see on the surface may not be what it seems: not every Frankenstein face will look weird or fake, and not every human face will pass the test first time round. The occasional cat filter, therefore, may be one of the most innocent human misrepresentations yet. Or it could be a warning sign of the fraud on the horizon.