Artificial Intelligence and Corporate Fraud: The new face of digital threats

Artificial intelligence (AI) is revolutionizing the business world, radically transforming the way organizations work. From automating repetitive tasks and cutting costs to boosting efficiency, AI is emerging as a key lever of digital development.

According to Gartner estimates, as of October 2023, 55% of businesses worldwide were already piloting or using generative artificial intelligence (GenAI) in production – a percentage that is now estimated to have risen even further.

Artificial intelligence greatly improves the customer experience, supports software development and provides deeper, more insightful input for strategic decision-making. But businesses are not the only ones exploiting this progress.

“Cybercriminals are not standing still; they too are adopting new technologies, sharpening their tactics and creating new challenges for IT and security departments,” notes global security software company ESET.

To tackle this new threat landscape, a multilayered approach built on three pillars – people, processes and technology – is required.

Only in this way can a business stay ahead of evolving threats and fully harness the benefits of artificial intelligence, safely and responsibly.

What are the latest AI and deepfake threats?

Cybercriminals use artificial intelligence and deepfakes in a variety of ways:

  1. Fake employees: North Koreans posing as freelance IT professionals working remotely have infiltrated hundreds of companies. They use AI tools to create counterfeit CVs and other documents, including AI-manipulated photos, to pass vetting checks. Their goal is to funnel money to the North Korean regime, along with data theft, espionage and even ransomware.
  2. Business Email Compromise (BEC) fraud: Deepfake audio and video clips are used to supercharge BEC scams, in which finance employees are tricked into transferring funds to accounts controlled by the fraudsters. Recently, a finance worker was persuaded to transfer $25 million to scammers who used deepfakes to pose as the company’s CFO and other executives on a video conference call. Such fraud is not a new phenomenon, however: back in 2019, scammers used a deepfake voice to convince a UK energy company executive that he was speaking on the phone with his boss, persuading him to wire them around £200,000.
  3. Bypassing authentication services: Scammers use sophisticated techniques to impersonate legitimate customers, create false identities and evade identity verification checks when opening accounts or logging into services. One highly advanced piece of malware, GoldPickaxe, is designed to harvest facial recognition data, which is then used to create deepfake videos. According to a recent report, 13.5% of new digital accounts opened worldwide last year were suspected of being fraudulent.
  4. Deepfake scams: Cybercriminals use deepfake technology not only for targeted attacks but also to impersonate corporate executives and senior officials on social media. Through this tactic, investment fraud and other malicious schemes can be promoted. As ESET’s Jake Moore has demonstrated, any corporate executive could fall victim to such techniques. According to the latest ESET Threat Report, cybercriminals use deepfakes and corporate branding in social media posts to lure unsuspecting victims into a new type of investment fraud known as Nomani.
  5. ‘Breaking’ passwords: AI algorithms can be used to crack customers’ and employees’ passwords, enabling data theft, ransomware attacks and identity fraud. PassGAN, for example, is reported to crack passwords in under 30 seconds.
  6. Document forgery: Counterfeiting documents is another way to bypass Know Your Customer (KYC) checks at banks and other companies. It can also be used for insurance fraud. According to surveys, 94% of claims handlers suspect that at least 5% of claims have been manipulated with the help of artificial intelligence.
  7. Phishing and target reconnaissance: The United Kingdom’s National Cyber Security Centre (NCSC) has warned of the ever-increasing use of artificial intelligence by cybercriminals. In early 2024, the NCSC stated that this technology “is almost certain to increase both the volume and the impact of cyberattacks over the next two years”. Particularly alarming is the improvement in the effectiveness of social engineering techniques and target reconnaissance, which fuels ransomware attacks, data theft and large-scale phishing attacks against customers.
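The password-cracking threat in item 5 ultimately comes down to keyspace arithmetic. As a back-of-the-envelope sketch (the 10 billion guesses-per-second rate is an assumed figure for a well-resourced attacker against fast hashes, not a number from the article), here is why a short lowercase password can fall within seconds while a long mixed-character one remains out of reach:

```python
import math

# Assumed attacker speed: 1e10 guesses/second (hypothetical figure for a
# modern GPU cracking rig attacking fast, unsalted hashes).
GUESSES_PER_SECOND = 1e10

def entropy_bits(length: int, alphabet_size: int) -> float:
    """Bits of entropy in a random password drawn from the given alphabet."""
    return length * math.log2(alphabet_size)

def crack_seconds(bits: float, rate: float = GUESSES_PER_SECOND) -> float:
    """Worst-case seconds to exhaust the keyspace at the given guess rate."""
    return 2.0 ** bits / rate

# 8 random lowercase letters: ~37.6 bits -> exhausted in about 21 seconds.
print(crack_seconds(entropy_bits(8, 26)))
# 12 random printable-ASCII characters: ~78.7 bits -> millions of years.
print(crack_seconds(entropy_bits(12, 94)))
```

Tools like PassGAN do even better than this brute-force bound against human-chosen passwords, because they learn which candidates to guess first.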

The impact of AI-enabled fraud translates mainly into financial damage and reputational harm. According to one report, 38% of revenue lost to fraud over the previous year is attributed to AI-driven techniques.

Looking at the impact:

  • Bypassing the KYC (Know Your Customer) process allows scammers to rack up credit and drain funds from legitimate customer accounts.
  • Fake employees can steal sensitive and regulated customer information, causing financial losses, reputational damage and compliance problems.
  • BEC (Business Email Compromise) scams can lead to huge losses. In 2023, this kind of attack generated around $2.9 billion for cybercriminals.
  • Deepfake scams threaten customer loyalty. Research shows that a third of consumers will walk away from a company after a single bad experience.

Curbing fraud in the age of artificial intelligence

Fighting AI-enabled fraud requires a multilayered response, with an emphasis on people, processes and technology, stresses Muncaster from the ESET team.

This should include:

  1. Frequent fraud risk assessments
  2. Amending and updating anti-fraud policies to keep them relevant to AI
  3. Comprehensive staff training and awareness programs (e.g. on spotting phishing and deepfakes)
  4. Education and awareness programs for customers
  5. Enabling multi-factor authentication (MFA) for all sensitive corporate and customer accounts
  6. Improved background checks for employees, such as scanning CVs for career inconsistencies
  7. Ensuring all employees are interviewed via video before hiring
  8. Improving cooperation between human resources and cybersecurity teams

Artificial intelligence can be a powerful ally in this battle. For example, it can be used to:

  • Detect deepfakes through AI tools, especially in identity verification (KYC) procedures.
  • Analyze patterns of suspicious behavior in employee and customer data, using machine learning algorithms.
  • Generate synthetic data with GenAI to develop, test and train new fraud-detection models.
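The second bullet – mining employee and customer data for suspicious patterns – can be illustrated with a deliberately simple statistical baseline. This is a sketch, not a production design: real fraud-detection systems use trained machine-learning models rather than a plain z-score, but the idea of flagging behavior far from the norm is the same:

```python
from statistics import mean, stdev

def flag_outliers(amounts: list[float], threshold: float = 3.0) -> list[int]:
    """Indices of transactions lying more than `threshold` standard
    deviations from the mean of the batch."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [i for i, a in enumerate(amounts) if abs(a - mu) / sigma > threshold]

# Twenty routine ~$100 payments plus one $10,000 transfer: only the
# large transfer is flagged for human review.
payments = [100.0] * 20 + [10_000.0]
print(flag_outliers(payments))  # [20]
```

A trained model would additionally weigh context (payee, time of day, device) instead of a single amount, but the output is the same kind of ranked "review this" signal.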

As the contest between malicious and beneficial uses of artificial intelligence enters a new, more demanding phase, businesses must revise and evolve their strategies for cybersecurity and fraud prevention. Adapting to the ever-changing threat landscape is no longer optional; it is imperative.

Failure to respond can erode customer trust, damage brand value and derail critical digital transformation initiatives.

Artificial intelligence has the power to change the terms of the game for cybercrime. However, it can do the same for cyber security and risk management groups.

