It’s only a matter of time before generative AI renders current methods of identity verification for financial transactions useless.
By Trevor Butterworth
As the paper “Increasing Threat of DeepFake Identities,” published by the U.S. Department of Homeland Security, put it:
[People have a] “natural inclination to believe what they see, and as a result deepfakes and synthetic media do not need to be particularly advanced or believable in order to be effective in spreading mis/disinformation.”
This paper was written two years before the long 2023 reveal of exponential advances in commercially available generative AI technologies. The ability to deepfake reality is now spectacularly believable, and the tools to do so are all easily available.
In 2021, the problem space for deepfakes was the threat of political malfeasance and misrepresentation: making a politician, expert, or celebrity appear to say something prejudicial or wrong. Now that space has metastasized to incorporate the daily business of “knowing your customer,” aka KYC.
KYC is critical to trust in digital banking and finance. Selfies and liveness checks using facial biometrics are used to onboard customers remotely for accounts, approvals, and transactions without the customer having to show up in person to be verified.
But, as TechCrunch’s Kyle Wiggers recently noted, an attacker using generative AI “could download a selfie of a person, edit it with generative AI tools and use the manipulated ID image to pass a KYC test.”
While Wiggers points out that this hasn’t happened yet (to the best of his knowledge, at least), the article details emerging capabilities that threaten to disrupt an entire layer of advanced identity and data verification tools based on images, facial biometrics, voice, and video.
In a recent post on Medium, security researcher and master’s student Harish SG claims to have bypassed “multiple liveness detection and deepfake detection APIs used in several applications.”
At best, innovation in AI will create an arms race, with the advantage seesawing between fraud and fraud detection.
If images, liveness, and voice checks can all be faked with easily available tools and the right digital skill set, the impact on business, government, and social trust is potentially vast. Consider how reliant the IRS is on ID.me performing liveness checks on U.S. taxpayers. How long will it be before we see a generative AI, KYC-related scandal that will make global headlines? Given the innovation we’ve seen in 2023, it is reasonable to think in terms of months rather than years.
Decentralized identity, verifiable credentials, and KYC
The capacity to deepfake a liveness check disrupts any verification process that depends on one. One of the weakest targets is remote KYC; if it can no longer be trusted, customer account creation goes back to the pre-internet era.
But there is a simple fix: portable, reusable KYC, an unfakeable way to prove you are who you say you are, over and over again. Relying on math rather than imagery, this KYC can be easily shared with, and verified by, any organization that needs to know that you are who you say you are. In short, a KYC credential.¹
A trusted, local organization such as a bank would undertake an in-person liveness check (or a full KYC) and then issue the person a verifiable credential attesting that their identity has been verified. This credential is unique to the person and can be thought of as a container for holding any kind of data, including biometric data. The person can’t add to or change the data in the container without breaking the container. The container is cryptographically linked to the organization that created it, and that link cannot be altered or forged. Anyone with the right software can verify the origin and integrity of the container.
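To make the idea concrete, here is a minimal sketch in Python using the open-source `cryptography` library: the issuer signs the KYC data with its private key, so any change to the data breaks the signature. The function name `issue_credential` and the data fields are illustrative assumptions; real verifiable credentials follow standards such as the W3C Verifiable Credentials data model, with richer proof formats.

```python
# Illustrative sketch only: a bank issues a tamper-evident "container" by
# signing the KYC data with its private key. Production systems use standard
# verifiable credential formats rather than this hand-rolled structure.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()       # held only by the issuing bank
issuer_public_key = issuer_key.public_key()     # published so anyone can verify

def issue_credential(subject_data: dict) -> dict:
    """Sign the KYC data; the signature binds the data to the issuer."""
    payload = json.dumps(subject_data, sort_keys=True).encode()
    return {"data": subject_data, "signature": issuer_key.sign(payload).hex()}

credential = issue_credential({
    "name": "Alex Example",            # hypothetical subject data
    "kyc_level": "full",
    "verified_in_person": True,
})
```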
When the person needs to do KYC remotely, the organization they contact can verify, before interacting, both that the container came from the issuing organization and that the information inside hasn’t been altered.
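Continuing the sketch above (same assumptions, same hypothetical structure), verification is just a signature check against the issuer’s public key: if the data has been altered in any way, the check fails.

```python
# Continues the sketch above: anyone holding the bank's public key can confirm
# the credential came from the bank and that the data is unchanged.
import json
from cryptography.exceptions import InvalidSignature

def verify_credential(credential: dict, issuer_public_key) -> bool:
    payload = json.dumps(credential["data"], sort_keys=True).encode()
    try:
        issuer_public_key.verify(bytes.fromhex(credential["signature"]), payload)
        return True     # genuine and unaltered
    except InvalidSignature:
        return False    # forged or tampered with

assert verify_credential(credential, issuer_public_key)

# Any change to the data breaks the container:
tampered = {"data": {**credential["data"], "kyc_level": "none"},
            "signature": credential["signature"]}
assert not verify_credential(tampered, issuer_public_key)
```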
Of course, a local bank or financial institution can also do a full KYC on their customers and issue “full” KYC credentials with all the relevant data needed for interaction with other businesses or organizations. In addition to mitigating fraud, this also saves institutions and organizations from having to do complex KYC over and over again.
In both cases, device binding and biometric checks tie the credential to the person’s mobile device and wallet software, preventing misuse by others. The verifiable credential can then be used anywhere, in conjunction with further liveness checks, to facilitate remote KYC.
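Here is one way device binding can be sketched, again continuing the example above and making no claim about how any particular product implements it: the issuer includes the holder’s device public key in the signed data, and at presentation time the wallet answers a fresh challenge with that device key, proving the presenter controls the bound device.

```python
# Continues the sketch above. Device binding: the credential records the
# holder's device public key; presentation requires a signature from that key.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

device_key = Ed25519PrivateKey.generate()   # generated and kept in the wallet
device_public_hex = device_key.public_key().public_bytes(
    Encoding.Raw, PublicFormat.Raw
).hex()

# The issuer signs the device key into the credential (issue_credential above).
bound_credential = issue_credential({
    "name": "Alex Example",
    "kyc_level": "full",
    "device_key": device_public_hex,
})

# A verifier sends a one-time challenge; only the bound device can answer it.
challenge = os.urandom(32)
proof_of_possession = device_key.sign(challenge)

# The verifier checks both the issuer's signature and the device's response.
assert verify_credential(bound_credential, issuer_public_key)
Ed25519PublicKey.from_public_bytes(
    bytes.fromhex(bound_credential["data"]["device_key"])
).verify(proof_of_possession, challenge)    # raises InvalidSignature otherwise
```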
The result is seamless identity verification and immediately actionable data. A customer can simply and quickly prove who they are, wherever they are. They also get privacy-preserving features and the capacity to provide the continuous verification needed by zero-trust approaches to security.²
The inevitable threat of generative AI can only be met by adding verifiable credentials. But the good news is that verifiable credential technology is easy and quick to add to existing systems. Indicio provides a complete, award-winning solution — Indicio Proven® — including a KYC credential.
Verifiable credentials will also be essential to unlocking the full value of AI agents that simplify and manage interactions and transactions involving analysis of personal data and integrated payments. For more on this, see our position paper, A Trusted Copilot, on using decentralized identity to manage an AI virtual assistant.