Deepfake – A growing threat to financial data security
Synthetic identity fraud is a nuisance, but not a novel one. Criminals combine fake and real information, such as social security numbers and names, to create false identities. These are then used to defraud financial institutions, government agencies, and individuals through fake accounts, fraudulent purchases, and other means. Until recently, identity verification using photo and video was considered secure; the new menace of ‘deepfakes’ is changing that. Here is a look.
The term was first used in 2017 and combines ‘deep learning’ and ‘fake’. Deep learning is a powerful machine learning (AI) technique in which algorithms mimic the experience-based learning of humans: given enough training examples, they can produce surprisingly human-like output under specific conditions. It has applications in a range of disciplines, from gaming to advanced medicine. Deepfakes are created using a deep learning technique known as a Generative Adversarial Network (GAN), which pits two machine learning models against each other to make counterfeits progressively more believable. A deepfake model is trained using available samples of a real person’s voice recordings and photos.
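The adversarial idea can be shown in miniature. The toy sketch below is not a face or voice model; it is a minimal GAN on one-dimensional numbers, with a "generator" learning to mimic a target distribution and a logistic "discriminator" trying to tell real samples from generated ones. All parameters, learning rates, and the 1-D setup are illustrative assumptions.

```python
# Minimal 1-D GAN sketch: two models trained against each other.
# Real deepfake GANs do this with deep neural networks on images/audio.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generate(params, z):
    # Generator: turns random noise z into a sample; mu and sigma are learned.
    mu, sigma = params
    return mu + sigma * z

def discriminate(params, x):
    # Discriminator: logistic score for "how real" a sample looks (0..1).
    a, c = params
    return sigmoid(a * x + c)

g_params = np.array([0.0, 1.0])  # generator starts far from the data
d_params = np.array([1.0, 0.0])
lr = 0.01

for step in range(3000):
    real = rng.normal(4.0, 1.0, size=32)  # "real" data: mean 4, std 1
    z = rng.normal(size=32)
    fake = generate(g_params, z)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = discriminate(d_params, real)
    d_fake = discriminate(d_params, fake)
    grad_a = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    d_params -= lr * np.array([grad_a, grad_c])

    # Generator step: adjust mu, sigma so fakes fool the discriminator.
    d_fake = discriminate(d_params, fake)
    dx = -(1 - d_fake) * d_params[0]  # gradient of -log D(fake) w.r.t. fake
    g_params -= lr * np.array([np.mean(dx), np.mean(dx * z)])

print("learned generator (mu, sigma):", g_params)
```

The key point mirrors the article: neither model is given an explicit definition of "realistic". The generator improves only because the discriminator keeps punishing unconvincing output, which is why GAN forgeries become progressively harder to spot.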
Making a deepfake video involves gathering thousands of video frames of the target individual; these frames constitute the input dataset. The frames are usually cropped to show only the face. AI tools can partly automate the cropping, but the process still requires substantial manual effort. Images from different angles and under different lighting conditions are included so that the algorithms (neural networks) can learn to encode and transfer the nuances of the face across environments. The need for large input datasets of videos and photos is why most deepfake videos target celebrities: it is very hard to create a convincing deepfake of someone without hours of footage of them in different settings.
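The dataset-preparation step described above can be sketched as a small pipeline. This is an illustrative assumption of how such tooling is structured, not any specific deepfake tool: the face bounding boxes are assumed to come from a separate face detector (e.g. OpenCV or dlib), and resizing is done with crude nearest-neighbour sampling to keep the sketch dependency-free.

```python
# Sketch: crop video frames to the face region and normalise them to a
# uniform size, producing the training dataset a deepfake model consumes.
import numpy as np

def crop_face(frame, box):
    """Crop an (H, W, 3) frame to a detected face bounding box (x, y, w, h)."""
    x, y, w, h = box
    return frame[y:y + h, x:x + w]

def build_dataset(frames, boxes, size=64):
    """Crop every frame and resize each crop to size x size pixels
    (nearest-neighbour) so the network sees uniform inputs."""
    dataset = []
    for frame, box in zip(frames, boxes):
        face = crop_face(frame, box)
        ys = np.linspace(0, face.shape[0] - 1, size).astype(int)
        xs = np.linspace(0, face.shape[1] - 1, size).astype(int)
        dataset.append(face[np.ix_(ys, xs)])  # pick rows/cols, keep channels
    return np.stack(dataset)

# Example: 10 synthetic 480x640 frames, each with one detected face box.
frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(10)]
boxes = [(200, 150, 100, 120)] * 10  # (x, y, width, height) from a detector
faces = build_dataset(frames, boxes)
print(faces.shape)  # (10, 64, 64, 3)
```

The manual effort the article mentions comes in here: detectors mislabel faces, boxes drift between frames, and bad crops must be weeded out by hand before training.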
Deepfakes have also been used for pranks and entertainment. More constructively, deepfakes were used in an investigative film about the persecution of LGBTQ individuals in the Russian republic of Chechnya, the first documentary to use deepfakes to protect its subjects’ identities.
Deepfakes and financial security
In the domain of international finance, deepfakes are considered a real threat. The international money transfer industry handles some $714 billion in remittances annually, and cross-border transfers rely heavily on identity verification, so a profusion of deepfakes is a serious problem. Trulioo, a leading global identity verification service, explains why: the speed and volume of online and mobile commerce are staggering, and the demand for quicker onboarding and credit processing can only be met by taking human interaction out of the process.
Alvin Rodrigues, Senior Director and Security Strategist for Asia Pacific at Forcepoint, says that criminals will use deepfakes to impersonate high-level targets at organizations and scam employees into transferring funds into fraudulent accounts. Another threat is the use of deepfakes of deceased persons to claim annuities and pensions; insurance and benefits fraud can take a similar form. Fraudsters can also use deepfaked identities to obtain credit and credit cards and perform all kinds of transactions, destroying victims’ credit scores and creating criminal liability for them.
Symantec Corporation said it had seen three cases of seemingly deepfaked audio of company CEOs, used to trick senior financial controllers of the respective organizations into transferring cash. The Wall Street Journal reported that the head of an unnamed UK-based energy company thought he was talking on the phone with his boss, the CEO of the German parent company. The fraudster impersonating the CEO asked him to transfer EUR 220,000 to a Hungarian supplier; the UK executive was in fact taking instructions from a scammer who had used AI-powered voice technology to imitate the boss.
Detecting deepfake images can be challenging. Gartner Research analyst Avivah Litan estimates that 90% detection rates may be possible by analyzing the content itself, the profiles submitting it, the devices it originates from, and the traffic patterns involved. This is how spam, bots, and criminal operations are already being detected. Litan says security analysts can combine deepfake detection algorithms, internet whitelisting, and fraud detection techniques to fight socially engineered attacks.
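The detection approach Litan describes is essentially risk scoring across several weak signals rather than one decisive test. The sketch below illustrates that idea; the signal names, weights, and threshold are all illustrative assumptions, not a real fraud engine.

```python
# Sketch: combine several weak fraud signals (each scored 0..1 upstream)
# into one risk score, as multi-signal fraud-detection systems do.
WEIGHTS = {
    "content_artifact_score": 0.4,  # output of a deepfake-detection model
    "profile_risk": 0.25,           # new or suspicious submitting account
    "device_risk": 0.2,             # unrecognised or spoofed device
    "traffic_anomaly": 0.15,        # bot-like access pattern
}

def risk_score(signals):
    """Weighted sum of per-signal risks; missing signals count as 0."""
    return sum(w * signals.get(name, 0.0) for name, w in WEIGHTS.items())

def decide(signals, threshold=0.5):
    """Flag a submission for human review when combined risk is high."""
    return "flag for review" if risk_score(signals) >= threshold else "allow"

suspicious = {"content_artifact_score": 0.9, "profile_risk": 0.7,
              "device_risk": 0.6, "traffic_anomaly": 0.2}
print(risk_score(suspicious), decide(suspicious))
```

The design point is the one Litan makes: a deepfake may beat the content analysis alone, but it is much harder for a fraudster to simultaneously fake a clean account history, a known device, and normal traffic behaviour.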
In November 2019, the US government approved a bill ordering further research into deepfakes. The UK government is also looking for solutions, evaluating legislation to ban non-consensual deepfake videos. According to McKinsey, fraud will continue to evolve to evade detection, but banks can deepen their understanding of their customers by mining the growing number of third-party data sources available. Such measures can help banks improve their risk controls and stem losses from synthetic identity fraud.
About the author:
Hemant G is a contributing writer at Sparkwebs LLC, a Digital and Content Marketing Agency. When he’s not writing, he loves to travel, scuba dive, and watch documentaries.