Deepfakes are video, audio and image content generated by artificial intelligence (AI). The technology can produce false images, videos or sounds of a person, place or event that appear authentic.

In 2018, there were roughly 14,698 deepfake videos circulating online. Since then, the number has soared with the popularity of deepfake apps like DeepFaceLab, Zao, FaceApp and Wombo.

Deepfakes are used in several industries, including filmmaking, video games, fashion and e-commerce.

However, the malicious and unethical use of deepfakes can harm people. According to research by cybersecurity firm Trend Micro, the “rise of deepfakes raises concern: It inevitably moves from creating fake celebrity pornographic videos to manipulating company employees and procedures.”



Increased vulnerabilities

Our research found that organizations are increasingly vulnerable to this technology, and that the costs of this type of fraud can be high. We focused on two public cases of fraud using deepfakes that targeted CEOs; so far, the estimated losses amount to US$243,000 and US$35 million respectively.

The first case of fraud occurred at a British energy firm in March 2019. The chief executive officer received an urgent call from his boss, the chief executive of the firm’s German parent company, asking him to transfer funds to a Hungarian supplier within an hour. The fraud was presumably carried out using commercial voice-generating software.

The second case was identified in Hong Kong. In January 2020, a branch manager received a call from someone whose voice sounded like that of the company’s director. In addition to the call, the branch manager received several emails that he believed were from the director. Both the phone call and the emails concerned the acquisition of another company. The fraudster used deep-voice technology to simulate the director’s voice.

In both cases, the firms were targeted for payment fraud using deepfake technology to mimic individuals’ voices. The first case was less convincing than the second because it relied on voice phishing alone.

Opportunities and threats

Forensic accounting involves “the application of specialized knowledge and investigative skills possessed by [certified public accountants] to collect, analyze and evaluate evidential matter and to interpret and communicate findings in the courtroom, boardroom, or other legal or administrative venue.”

Forensic accountants and fraud examiners — who investigate allegations of fraud — continue to see an increase in deepfake fraud schemes.

One form of deepfake fraud is known as synthetic identity fraud, in which a fraudster creates a new identity and targets financial institutions. For instance, deepfakes enable fraudsters to open bank accounts under false identities. They use these fabricated identities to develop a trust relationship with a financial institution in order to defraud it later. These fraudulent identities can also be used in money laundering.

Websites and applications that provide access to deepfake technologies have made identity fraud easier; This Person Does Not Exist, for instance, uses AI to generate random faces. Neil Dubord, chief of the police department in Delta, B.C., wrote that “synthetic identity fraud is reportedly the fastest-growing form of financial crime, costing online lenders more than $6 billion annually.”
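To illustrate how low the barrier is, the sketch below fetches one of these generated faces. It assumes This Person Does Not Exist still serves a freshly generated image at its root URL — behaviour the site has shown historically but which may change — and the output filename is arbitrary.

    import requests  # third-party: pip install requests

    # Assumption: the site returns a new AI-generated face for each request
    # to its root URL; a browser-like User-Agent avoids naive bot blocking.
    response = requests.get(
        "https://thispersondoesnotexist.com",
        headers={"User-Agent": "Mozilla/5.0"},
        timeout=10,
    )
    response.raise_for_status()

    with open("generated_face.jpg", "wb") as f:  # arbitrary output name
        f.write(response.content)
    print(f"Saved a generated face ({len(response.content)} bytes)")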

Forensic accounting helps trace the impacts of fraud. (Shutterstock)

Large datasets

Deepfakes can enhance traditional fraud schemes, like payment fraud, email hacking or money laundering. Cybercriminals can use deepfakes to access valuable assets and data. More specifically, they can use deepfakes to gain unauthorized access to large databases of personal information.

Combined with social media platforms like Facebook, deepfakes could damage the reputation of an employee, trigger declines in share values and undermine confidence in a company.

Forensic accountants and fraud investigators need to recognize red flags associated with deepfakes and develop anti-fraud mechanisms to prevent these schemes and reduce the associated losses. They must also be able to evaluate and quantify the loss resulting from a deepfake attack.

In our case studies, deepfakes used the voices of senior management to instruct employees to transfer money. The success of these schemes relied on employees being unaware of the associated red flags. These may include secrecy (the employee is asked not to disclose the request to others) or urgency (the employee is required to take immediate action).
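As a toy illustration of how such cues could be screened for automatically, the sketch below flags messages containing secrecy or urgency language. The cue phrases are illustrative assumptions, not a vetted fraud lexicon, and a real control would be far more sophisticated.

    # A toy screen for the red flags described above: secrecy and urgency
    # cues in a payment request. Cue phrases are illustrative assumptions.
    RED_FLAGS = {
        "urgency": ["urgent", "immediately", "within the hour", "right away"],
        "secrecy": ["confidential", "do not tell", "don't mention",
                    "keep this between us"],
    }

    def flag_request(message: str) -> list[str]:
        """Return the red-flag categories whose cues appear in the message."""
        text = message.lower()
        return [category for category, cues in RED_FLAGS.items()
                if any(cue in text for cue in cues)]

    # Example: a request combining both cues, as in the cases above.
    request = ("This is urgent: wire the funds within the hour, "
               "and keep this between us until the deal closes.")
    print(flag_request(request))  # ['urgency', 'secrecy']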

Al Jazeera investigates the growing threat of deepfakes.

Curbing deepfakes

Some simple strategies can be deployed to combat the malicious use of deepfakes:

  • Encourage open communication: talking and consulting with colleagues and others about anything that seems suspicious is an effective way to prevent fraud schemes.

  • Learn how to assess authenticity: for instance, ending a suspicious call and calling back a known number to verify the caller’s identity (see the sketch after this list).

  • Pause instead of reacting quickly to unusual requests.

  • Keep up to date with new technologies that help detect deepfakes.

  • Enhance controls and assessments used to verify client identity at financial institutions, such as Know Your Customer processes.

  • Provide employee training and education on deepfake fraud.
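As referenced in the second strategy above, here is a minimal sketch of the call-back control. The directory entry and function name are hypothetical; the point is that verification always dials the number on record, never the number the caller supplies.

    # A minimal sketch of the call-back control, assuming the organization
    # maintains its own directory of verified numbers. The directory entry
    # and function name are hypothetical.
    VERIFIED_DIRECTORY = {
        "parent-company CEO": "+44 20 7946 0000",  # illustrative entry
    }

    def number_to_dial_back(claimed_identity: str, inbound_number: str) -> str:
        """Always return the number on record, never the inbound caller ID,
        which the fraudster controls."""
        on_record = VERIFIED_DIRECTORY.get(claimed_identity)
        if on_record is None:
            raise ValueError(f"No verified number for {claimed_identity!r}")
        if inbound_number != on_record:
            print("Warning: caller ID does not match the number on record.")
        return on_record

    # End the suspicious call, then dial the directory number instead.
    print(number_to_dial_back("parent-company CEO", "+36 1 555 0199"))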

Cybercriminals may use deepfakes to make their schemes appear more realistic and trustworthy. These increasingly sophisticated schemes have harmful financial and other consequences for people and organizations.

Fraud examiners, cybersecurity experts, authorities and forensic accountants may have to fight fire with fire, employing AI-based techniques to counter and detect fictitious media.
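A minimal sketch of what such an AI-based detector could look like, assuming a binary real-versus-fake classifier has already been fine-tuned and saved as deepfake_detector.pt (a hypothetical file; ResNet-18 is used here only as a generic backbone, not a published deepfake model). It averages per-frame “fake” scores across a sampled video.

    import cv2                    # pip install opencv-python
    import torch                  # pip install torch torchvision
    from torchvision import models, transforms

    # Assumptions: a ResNet-18 fine-tuned as a real-vs-fake face classifier
    # was saved as "deepfake_detector.pt" (hypothetical weights), and class
    # index 1 means "fake". This is a sketch, not a published detector.
    model = models.resnet18()
    model.fc = torch.nn.Linear(model.fc.in_features, 2)
    model.load_state_dict(torch.load("deepfake_detector.pt"))
    model.eval()

    preprocess = transforms.Compose([
        transforms.ToPILImage(),
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    def fake_probability(video_path: str, sample_every: int = 30) -> float:
        """Average the per-frame 'fake' probability over sampled frames."""
        capture = cv2.VideoCapture(video_path)
        scores, index = [], 0
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            if index % sample_every == 0:
                rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                with torch.no_grad():
                    logits = model(preprocess(rgb).unsqueeze(0))
                scores.append(torch.softmax(logits, dim=1)[0, 1].item())
            index += 1
        capture.release()
        return sum(scores) / len(scores) if scores else 0.0

    print(f"Estimated fake probability: {fake_probability('clip.mp4'):.2f}")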

This article was originally published at theconversation.com