Deepfake – A growing threat to financial data security


Synthetic identity fraud is a nuisance, but not a novel one. Criminals combine fake and real information, such as Social Security numbers and names, to create false identities. These are then used to defraud financial institutions, government agencies, and individuals through fake accounts, fraudulent purchases, and other means. Until recently, identity verification using photos and video was considered secure. However, the new menace of ‘deepfake’ technology is changing this. Here is a look.


Understanding “deepfake”

The term was first used in 2017 and combines ‘deep learning’ and ‘fake.’ Deep learning is a powerful machine learning (AI) technique with applications in a range of disciplines, from gaming to advanced medicine. Its algorithms mimic the experience-based learning of humans: with enough training on example tasks, they replicate surprisingly human-like responses under specific conditions. Deepfakes are created using a deep learning technique known as a Generative Adversarial Network (GAN), which pits two machine learning models against each other: a generator that produces counterfeits and a discriminator that tries to spot them, so each round of training makes the fakes more believable. A deepfake model is trained on available samples of a real person’s voice recordings and photos.
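The adversarial training loop behind GANs can be illustrated with a deliberately tiny sketch, nothing like a production deepfake pipeline: a one-parameter generator learns to match the mean of a one-dimensional "real" data distribution by fooling a logistic-regression discriminator. All the numbers and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: 1-D samples from N(3, 1); the generator must learn this mean.
REAL_MEAN = 3.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator parameter: g(z) = theta + z shifts standard Gaussian noise.
theta = 0.0
# Discriminator parameters: D(x) = sigmoid(w * x + b), its guess that x is real.
w, b = 0.0, 0.0
lr, batch = 0.1, 64

for step in range(2000):
    real = REAL_MEAN + rng.standard_normal(batch)
    fake = theta + rng.standard_normal(batch)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = np.mean(-(1 - d_real) * real) + np.mean(d_fake * fake)
    grad_b = np.mean(-(1 - d_real)) + np.mean(d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step: push D(fake) toward 1 (non-saturating GAN loss).
    fake = theta + rng.standard_normal(batch)
    d_fake = sigmoid(w * fake + b)
    grad_theta = np.mean(-(1 - d_fake) * w)
    theta -= lr * grad_theta

print(round(theta, 2))  # theta ends up near REAL_MEAN
```

By the end of training the generator's samples are statistically close to the real ones, which is exactly why the discriminator (and, by analogy, a human viewer) struggles to tell them apart.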

Deepfake videos

Making a deepfake video involves gathering thousands of video frames of the target individual; these videos constitute the input dataset. The frames are often cropped to show only the face. AI tools can partly automate the cropping process, but it still requires substantial manual effort. Images from different angles and under different lighting conditions are included so that the algorithms (neural networks) can learn to encode and transfer the different nuances of the face and environment. The need for large input datasets of videos and photos is why most deepfake videos target celebrities: it is nearly impossible to create a convincing deepfake of someone without hours of footage of them in different settings.
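The crop-and-normalize step described above can be sketched in a few lines. This is a minimal illustration, not a real pipeline: the face bounding boxes are assumed to come from some detector (hypothetical here, which is where the manual review effort goes), and the resize is a simple nearest-neighbor downsample in NumPy.

```python
import numpy as np

def crop_and_resize(frame, box, size=64):
    """Crop a face bounding box (x, y, w, h) out of a frame and
    nearest-neighbor resize it to size x size, the normalization a
    deepfake training pipeline applies to every extracted frame."""
    x, y, w, h = box
    face = frame[y:y + h, x:x + w]
    rows = np.arange(size) * h // size  # nearest-neighbor row indices
    cols = np.arange(size) * w // size  # nearest-neighbor column indices
    return face[rows][:, cols]

def build_dataset(frames, boxes, size=64):
    # In practice `boxes` would come from a face detector, and a human
    # would still discard bad detections before training.
    return np.stack([crop_and_resize(f, b, size)
                     for f, b in zip(frames, boxes)])

# Toy example: two 480x640 grayscale "frames" with known face boxes.
frames = [np.zeros((480, 640), dtype=np.uint8) for _ in range(2)]
boxes = [(100, 50, 128, 128), (110, 60, 120, 120)]
dataset = build_dataset(frames, boxes)
print(dataset.shape)  # (2, 64, 64)
```

Every training image ends up the same shape and framing, which is what lets the network focus on facial detail rather than camera position.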

Deepfakes have also been used for pranks and entertainment. Notably, they were used in Welcome to Chechnya, an investigative film about the persecution of LGBTQ individuals in a Russian republic, making it the first documentary to use deepfakes to protect its subjects’ identities.

Deepfakes and financial security

In international finance, deepfakes are considered a real threat. The international money transfer industry handles some $714 billion in remittances annually, and cross-border transfers rely heavily on identity verification, so a profusion of deepfakes could be a serious problem. Trulioo, a leading global identity verification service, explained why: the speed and volume of online and mobile commerce are staggering, and the demand for quicker onboarding and credit processing can only be met by taking human interaction out of the process.

Alvin Rodrigues, Senior Director and Security Strategist for Asia Pacific at Forcepoint, says that criminals will use deepfakes to impersonate high-level targets at organizations and scam employees into transferring funds into fraudulent accounts. Another threat is the use of deepfakes of deceased persons to claim annuities and pensions; insurance and benefits fraud can take a similar form. Fraudsters can use deepfaked identities to obtain credit or credit cards and perform all kinds of transactions, destroying victims’ credit scores and creating criminal liability.

Symantec Corporation said it had seen three cases of seemingly deepfaked audio of company CEOs, used to trick senior financial controllers of the respective organizations into transferring cash. The Wall Street Journal reported that the head of an unnamed UK-based energy company thought he was on the phone with his boss, the CEO of the German parent company. In reality, the UK executive was taking instructions from a scammer who had used AI-powered voice technology to impersonate the boss. The fraudster asked the victim to transfer GBP 220,000 to a Hungarian supplier.

Possible solutions

Detecting deepfake images can be challenging. Gartner Research analyst Avivah Litan estimates that 90% detection rates may be possible by analyzing the content, the profiles submitting it, the devices it originates from, and the traffic patterns involved; this is how spam, bots, and criminal operations are already being detected. Litan says security analysts could combine deepfake detection algorithms, internet allow-listing, and fraud detection techniques to fight socially engineered attacks.
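Combining those signals could look something like the sketch below. The weights, thresholds, and signal names are all illustrative assumptions, not published values from Gartner; the point is only that a content score, device reputation, and traffic-anomaly signal can be blended into one risk score, with allow-listing short-circuiting review.

```python
def fraud_risk(content_score, device_reputation, traffic_anomaly,
               allow_listed=False):
    """Blend several detection signals into one risk score in [0, 1].
    All inputs are in [0, 1]; the weights are illustrative assumptions."""
    if allow_listed:
        # Allow-listed sources skip scoring entirely.
        return 0.0
    score = (0.5 * content_score              # deepfake-detector output
             + 0.2 * (1.0 - device_reputation)  # unknown or risky device
             + 0.3 * traffic_anomaly)         # unusual traffic pattern
    return min(1.0, score)

# A suspicious upload: likely-fake content from an unknown device.
print(fraud_risk(0.9, 0.1, 0.4))  # high risk, roughly 0.75
```

A real system would learn such weights from labeled fraud data rather than hand-tuning them, but the layered idea is the same.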

According to McKinsey, fraud will continue to evolve to evade detection. In November 2019, the US government approved a bill ordering further research into deepfakes, and the UK government is also looking for solutions, evaluating legislation to ban non-consensual deepfake videos. Banks, meanwhile, can deepen their understanding of their customers by mining the growing number of third-party data sources available. These measures can help banks improve their risk controls and stem losses from synthetic identity fraud.

About the author:

Hemant G is a contributing writer at Sparkwebs LLC, a Digital and Content Marketing Agency. When he’s not writing, he loves to travel, scuba dive, and watch documentaries.
