Protect Your Business From These Three Rising Identity Fraud Tactics

Identity theft is not new; people have been stealing and repurposing others' identities for centuries. But in the digital age, the way it works has changed. What was once committed via trash-rifling, telephone scams or routine robbery has taken on a new and more sophisticated form with the help of technology and the internet.

The limits of what scammers can accomplish with stolen identities have been pushed back with every new technological development. As businesses increasingly rely on online systems and digital workflows to conduct their affairs, they become more vulnerable to attacks by identity thieves—attacks that can quickly cost millions of dollars in both individual and institutional losses.

For companies to keep their assets and customers' data safe, they first need to be aware of the fraud problem. More than awareness, though, they need to understand the intricacies of fraud tactics and deploy sophisticated technology that can help detect and stop fraud before it happens. Thankfully, while scammers keep finding ways around advanced systems, companies are developing new technology to catch them in the act, and more professionals are learning to use these tools. This encouraging trend can ultimately keep us safe in an age when fraudsters can steal identities with the touch of a button.

For those in high-risk positions like business owners, financial institutions and those in the public eye, awareness is the first course of action against identity fraud tactics on the rise. With the knowledge of precisely what to look for, enterprises can develop new programs, processes and technological infrastructures to mitigate identity fraud.

Synthetic Fraud

The tactics of an identity thief are familiar to most; they target one person’s identity, hoping to collect enough information to open fraudulent accounts or take out loans in that person’s name. Synthetic fraud takes this one step further.

In a tactic also known as "Frankenstein fraud," scammers use bits and pieces of information gleaned from different sources to create an entirely new identity, cobbling together a Social Security number, date of birth and other personal details taken from different people.

Charges were brought against several co-conspirators in 2020 for the theft of over $1 million using this method, and the problem is growing. Synthetic fraud can be difficult to detect: because the fabricated identity is stitched together from pieces of legitimate data, it often passes routine verification checks, making it even more likely that fraudulent activity will go unnoticed.

This situation isn't without hope. If companies have access to identity-proofing technology, there are ways to detect anomalies and flag fraudulent "Frankenstein" identities as they are used. A data-driven approach is the most reliable defense here.
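As one concrete illustration of that data-driven approach, a simple consistency check can flag a Social Security number that shows up in applications under conflicting names or birth dates. The sketch below is illustrative only: the `flag_suspect_ssns` function and its record fields are hypothetical, and real identity-proofing systems combine many such signals rather than relying on one rule.

```python
from collections import defaultdict

def flag_suspect_ssns(applications):
    """Flag SSNs that appear with conflicting names or birth dates,
    a common signal of a stitched-together 'Frankenstein' identity.

    applications: list of dicts with hypothetical
    "ssn", "name" and "dob" fields.
    """
    identities_per_ssn = defaultdict(set)
    for app in applications:
        identities_per_ssn[app["ssn"]].add((app["name"], app["dob"]))
    # An SSN tied to more than one (name, dob) pair is worth reviewing.
    return {ssn for ssn, ids in identities_per_ssn.items() if len(ids) > 1}

apps = [
    {"ssn": "123-45-6789", "name": "Jane Doe", "dob": "1990-01-01"},
    {"ssn": "123-45-6789", "name": "John Roe", "dob": "1985-06-15"},
    {"ssn": "987-65-4321", "name": "Ann Lee", "dob": "1992-03-09"},
]
print(flag_suspect_ssns(apps))  # {'123-45-6789'}
```

In practice a flagged record would feed a manual review queue or a scoring model rather than trigger an automatic block, since shared data can also arise from typos or household overlap.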


Deepfakes

Many may recall the videos that circulated in 2017 of famous people saying and doing things extremely out of character; we later learned it wasn't actually them. This was the emergence of deepfakes, a term for videos manipulated with AI and machine learning techniques to replace faces or insert voiceovers with uncanny realism.

The potential for misuse was evident from the get-go with this technology. And, as we’ve seen happen with plenty of other technological advancements, deepfakes are now being used for nefarious purposes such as identity theft.

Fraudsters are using deepfake technology to create convincing videos of real people and committing identity fraud with them. They're also using it to build fake social media profiles and websites that appear legitimate. If a company relies on video-based security measures, a well-made deepfake can bypass them.

Ironically, the same machine learning and artificial intelligence used to create deepfakes can also be used to detect them. Companies can further safeguard themselves against this emerging threat by training employees on the telltale signs of a deepfake: inconsistent audio, blurred or misaligned graphics, and unnatural movement, blinking or facial expressions.

Friendly Fraud

In a tactic that can be described as anything but friendly, individuals or co-conspirators make purchases and then dispute the charges with the credit card company. Some scammers claim that they never made a purchase, while others claim that their order never arrived.

Alarmingly, some 86% of all chargebacks are likely friendly fraud. This is a costly problem for businesses: the average chargeback costs the merchant more than double the original purchase amount.

There are ways to combat friendly fraud, though. One is to require additional identity authentication for high-value purchases. Stores can also monitor the accounts and identities of repeat offenders with a fraud detection tool that uses machine learning to flag customers who are more likely to dispute purchases, based on past behavior or repeat chargebacks. Watchlists can be created to monitor or lock suspicious accounts. By taking a proactive approach, businesses can reduce their friendly fraud losses.
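One simple way to seed such a watchlist is to track each customer's chargeback rate and flag accounts that exceed a threshold. This is a minimal sketch under assumed data shapes (per-customer order and dispute counts); the `build_watchlist` function and its thresholds are hypothetical, and a production system would layer a trained model and richer behavioral signals on top of a rule like this.

```python
def build_watchlist(order_counts, chargeback_counts,
                    rate_threshold=0.3, min_orders=3):
    """Return customer IDs whose chargeback rate exceeds the threshold.

    order_counts / chargeback_counts: dicts mapping customer ID to counts.
    min_orders keeps customers with too little history off the list.
    """
    watchlist = set()
    for customer, orders in order_counts.items():
        if orders < min_orders:
            continue  # not enough purchase history to judge fairly
        disputes = chargeback_counts.get(customer, 0)
        if disputes / orders > rate_threshold:
            watchlist.add(customer)
    return watchlist

orders = {"cust_a": 10, "cust_b": 5, "cust_c": 2}
disputes = {"cust_a": 1, "cust_b": 4, "cust_c": 2}
print(build_watchlist(orders, disputes))  # {'cust_b'}
```

Accounts on the list might face extra authentication at checkout rather than an outright lock, since legitimate customers occasionally dispute charges too.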

We can expect to see more of these types of fraud in the coming years. As businesses, we need to be proactive in protecting our customers and ourselves from these threats.

Vigilance is only the first step. Without the methodologies and technology in place to mitigate fraud, company assets and customer data will only become easier to steal. It is high time to incorporate scam prevention into our standard operating procedures and keep tabs on the latest identity fraud tactics, whether through staff training, online resources or third-party monitoring.

And, of course, we should treat identity-proofing technology as an opportunity to protect our businesses and ourselves. New tech is enabling our attackers, but it is also the key to more effective fraud detection.

With the right precautions in place, we can mitigate the risk of falling victim to these schemes and keep our businesses and customers safe.
