For decades, the financial sector and other industries have relied on an authentication mechanism dubbed "know your customer" (KYC), a process that confirms a person's identity when an account is opened and then periodically reconfirms that identity over time. KYC typically involves a potential customer providing a variety of documents to prove that they are who they claim to be, although it can also be applied to authenticating other people, such as employees. With generative artificial intelligence (AI) tools built on large language models (LLMs) now able to create highly persuasive document replicas, many security executives are rethinking how KYC should look in a generative AI world.
How generative AI uses LLMs to enable KYC fraud
Consider someone walking into a bank in Florida to open an account. The prospective customer says that they just moved from Utah and that they are a citizen of Portugal. They present a Utah driver's license, a bill from two Utah utility companies, and a Portuguese passport. The problem goes beyond the probability that the bank staffer does not know what a Utah driver's license or Portuguese passport looks like: the AI-generated replicas are going to look exactly like the real thing. The only way to authenticate is to connect to databases in Utah and Portugal (or make a phone call) and verify not only that these documents exist in the official systems but also that the image on file matches the photo on the documents being examined.
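No universal verification channel like that exists off the shelf today, but a rough sketch of the check being described might look like the following. The `issuing_authority_lookup` function, the hard-coded record, and the similarity threshold are all assumptions made purely for illustration, standing in for whatever database integration or phone call an institution can actually make.

```python
from dataclasses import dataclass


@dataclass
class PresentedDocument:
    document_number: str
    holder_name: str
    photo_embedding: list[float]  # face embedding extracted from the presented photo


# Placeholder for a query to the issuing authority (a state DMV, a national
# passport registry, etc.). No universal registry API exists today; this
# record is hard-coded only so the sketch runs end to end.
OFFICIAL_RECORDS = {
    "UT-1234567": {"holder_name": "Jane Doe", "photo_embedding": [0.11, 0.52, 0.83]},
}


def issuing_authority_lookup(document_number: str) -> dict | None:
    return OFFICIAL_RECORDS.get(document_number)


def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm


def verify_document(doc: PresentedDocument, match_threshold: float = 0.85) -> bool:
    record = issuing_authority_lookup(doc.document_number)
    if record is None:
        return False  # document number does not exist in the official system
    if record["holder_name"] != doc.holder_name:
        return False  # record exists but belongs to someone else
    # A flawless AI-generated replica can still fail here: the photo on file
    # has to match the photo on the document being presented.
    return cosine_similarity(record["photo_embedding"], doc.photo_embedding) >= match_threshold
```

The point of the sketch is that visual inspection drops out of the decision entirely; the answer comes from the issuing authority's record, not from how convincing the document looks.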
An even bigger security threat is the ability of generative AI to create bogus documents quickly and on a massive scale. Cyber thieves love scale and efficiency. "This is what is coming: Unlimited fake account setup attempts and account recovery attempts," says Kevin Alan Tussy, CEO at FaceTec, a vendor of 3D face liveness and matching software.
AI-generated fake personal histories could validate AI-generated fake KYC documents
Lee Mallon, the chief technology officer at AI vendor Humanity.run, sees an LLM cybersecurity threat that goes way beyond quickly making false documents. He worries that thieves could use LLMs to create deep back stories for their frauds in case someone at a bank or government level reviews social media posts and websites to see if a person truly exists.
"Could social media platforms be getting seeded right now with AI-generated life histories and images, laying the groundwork for elaborate KYC frauds years down the line? A fraudster could feasibly build a 'credible' online history, complete with realistic photos and life events, to bypass traditional KYC checks. The data, though artificially generated, would seem perfectly plausible to anyone conducting a cursory social media background check," Mallon says. "This isn't a scheme that requires a quick payoff. By slowly drip-feeding artificial data onto social media platforms over a period of years, a fraudster could create a persona that withstands even the most thorough scrutiny. By the time they decide to use this fabricated identity for financial gain, tracking the origins of the fraud becomes an immensely complex task."
KYC security tools, processes need to adapt
Alexandre Cagnoni, director of authentication at WatchGuard Technologies, agrees that the KYC security threats from LLMs are frightening. "I do believe that KYC techniques will need to incorporate more sophisticated identity verification processes that will for certain require AI-based validations, using deepfake detection systems. The same way MFA and then transaction signing became a requirement for financial institutions in the 2000s because of the new man-in-the-browser (MitB) attacks, now they will have to deal with the growth of those fake identities," he says. "It's going to be a challenge because there are not a lot of (good) deepfake detection technologies around, and they will have to be quite good to avoid time-consuming tasks, false positives, or the creation of more friction and frustration for users."
Cagnoni says that decent tools for catching deepfake videos are available, but even the best ones still struggle. He cites recent testing in which systems that correctly flagged fake videos as fake also flagged valid videos as likely fake.
Rex Booth, CISO at identity management firm SailPoint Technologies, differs from others in that he sees the KYC LLM problem as serious but not immediately critical. "I don't think the KYC script needs to be completely rewritten, but it does need to be built upon, strengthened, and augmented. We do not fully use the potential of some of the authentication measures that we have available today," he says. "Granted, the tools that we have today are insufficient, but they are nonetheless the tools that we have."
A handful of current authentication mechanisms, including biometrics with liveness testing and behavioral analytics with as many datapoints as possible, are often mentioned as possible ways to combat LLM-generated identity fraud, but most are strong at verifying whether an existing customer is indeed the one making a request. They are much less effective for onboarding because there is often no data about a new potential customer. Behavioral analytics, for example, only work if a history of that user's behaviors is available for comparison. Biometrics only work with a highly reliable indication of what that person truly looks like.
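As a minimal illustration of why that onboarding gap matters, consider a behavioral check that compares a session's typing rhythm to a stored per-user baseline. The feature and thresholds below are assumptions made for the sketch, not values from any vendor's product; the point is simply that a brand-new applicant has no baseline to compare against.

```python
import statistics

# Illustrative per-user baselines: mean inter-keystroke interval in milliseconds.
# A real system would track many more features; a new applicant has no entry here at all.
TYPING_BASELINES_MS: dict[str, float] = {
    "existing-customer-123": 142.0,
}


def assess_typing_rhythm(user_id: str, intervals_ms: list[float], tolerance: float = 0.35) -> str:
    baseline = TYPING_BASELINES_MS.get(user_id)
    if baseline is None:
        return "no-history"  # onboarding case: nothing to compare against
    observed = statistics.mean(intervals_ms)
    deviation = abs(observed - baseline) / baseline
    return "consistent" if deviation <= tolerance else "anomalous"
```

An existing customer whose session deviates sharply from the baseline gets flagged; a first-time applicant falls straight through to "no-history," which is exactly the gap the onboarding process has to close some other way.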
Some behavioral analytics can operate standalone and do not necessarily have to leverage the history of that customer, says Linda Miller, CEO of the Audient Group, a Washington, DC-based consulting firm. "Are they typing in their Social Security number as though they are copying it rather than typing it from memory? Have you checked the applicant's name against a database of recent data-breach victims?" she says. "You are not going to solve this problem with a tool. The KYC strategy has to be multi-layered, and it has to be risk-based."
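Miller's typing example is the kind of check that needs no prior history. One plausible, deliberately simplified reading of it is sketched below; the pause and speed thresholds are assumptions for illustration, not figures from any real deployment.

```python
def ssn_entry_pattern(intervals_ms: list[float]) -> str:
    """Classify how a nine-digit SSN was entered, using only this one session."""
    if not intervals_ms:
        return "pasted"  # a paste event produces no keystroke intervals at all
    mean_gap = sum(intervals_ms) / len(intervals_ms)
    long_pauses = sum(1 for gap in intervals_ms if gap > 1500)
    if long_pauses >= 2:
        return "likely-copied"  # repeated long pauses suggest glancing back at a document
    if mean_gap < 400:
        return "likely-from-memory"  # fluent, even rhythm typical of a memorized number
    return "inconclusive"
```

A signal like this would be one layer among many in the multi-layered, risk-based approach Miller describes, not a verdict on its own.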
Sharing identity details one hedge against AI- and LLM-based fraud
One possible way to do that would be to have far more identity details shared among financial entities. Cagnoni argues that companies are hesitant about sharing such details, however. Beyond competitive issues, there is the poorly defined area of global compliance and whether such data-sharing would violate any regional compliance rule.
"Consider the privacy regulations here in Europe. Could I share any behavioral information? They would have to share some identifying information to communicate," Cagnoni says. "What if I have a limp in my right leg? Could sharing that violate any type of regulations? Most of the banks don't want to share this information about customers. I don't see banks sharing behavioral information."
Booth, however, doubts that the compliance implications of sharing such authentication data among companies would be a problem, because just about all the affected compliance rules focus on personally identifiable data. "Behavioral analytics in terms of how I move my mouse is an individual-level authentication. That is not data."
As for trying to access government databases to verify identities or documents, Miller says she is highly skeptical. "Government and data are two words that go horribly together. Government agencies are using very old data through really antiquated systems. Some 25 states today are still on COBOL."
AI-enabled fake employees another KYC fraud risk
Another problem Cagnoni flags is fake employees. That is where a thief identifies companies that allow fully remote work. "How are individuals vetted? In situations where you wouldn't have to turn up at the office, you could be maintaining 100 jobs. We don't currently have a silver bullet for this."
That scheme would take advantage of how a lot of enterprises function. The first several weeks of a new job are often paperwork, training, and other tasks that don't necessarily produce much that is tangible. Cagnoni suggests that an effective thief could collect payroll for many weeks, and potentially months, before being discovered, assuming they are ever discovered.
Business models don't incentivize KYC detection
Part of the problem, authentication experts stressed, is that many business models today do not meaningfully allow for the time, effort, and investment needed for extensive identity verification. "The authentication systems we have today are a reflection of the incentives we put in place. The incentives right now are to encourage workers to move fast and reduce friction and accept a certain amount of fraud," Booth says. "As it stands right now, the incentives don't work for meaningful identity verification."
Miller agrees with Booth that the structure businesses use for bonuses and other incentives discourages workers from trying to identify fraudulent behaviors. "There is a human element to it. Businesses put in place perverse incentives when it comes to a diligent KYC process," she says, adding that management doesn't want to slow down processes "when people are waiting to close a transaction with a customer."
Instead, Miller, who had been a principal at Grant Thornton, the audit and assurance firm, suggests that CISOs identify their top financial issues "and tailor your KYC to those highest risk areas. But remember that in almost every arena, generative AI will accelerate fraud. Social engineering and phishing are about to become an order of magnitude more effective [because of generative AI]."