Verifiable Credential technology has a critical role to play in making AI systems work for companies and consumers: it simplifies compliance with the EU Artificial Intelligence Act and provides the safety and security that everyone wants.
By Trevor Butterworth
Between hype and fear lies compliance, and the EU Artificial Intelligence Act of 2024 sends a clear signal that businesses will pay heavily for falling short of the bloc's rules for safe and trustworthy AI systems: up to €15 million or 3% of global turnover for noncompliance with the act's obligations, and up to €35 million or 7% of global turnover for engaging in the AI practices it prohibits.
As with the EU's data protection regulation (GDPR), the legal reach of the AI Act is long: if your AI, or a subcontractor's AI, ends up processing the data of an EU citizen, you are liable, and it doesn't matter if you or your subcontractor are based outside the EU.
And while regulation typically lags behind technology, the EU appears to recognize the urgency of keeping pace with how AI is being deployed and of continually updating its risk-based approach.
How will this shape the pace and scale of AI adoption and innovation for business and public sector use cases?
A risk-based approach to AI systems
The first thing to note is that the act is organized into four levels of risk, with the highest level being unacceptable and, therefore, prohibited. For example, Article 5 prohibits malicious or exploitative uses of AI that violate human rights or exploit people's vulnerabilities, engage in social scoring or criminal profiling, or scrape people's biometric data to create facial recognition databases.
The second category of risk is for "high-risk AI systems," such as those used in autonomous vehicles, for diagnosing disease, for grading exams, or for assessing eligibility for loans, or those used as a safety component in a larger system. These AI systems "must meet strict requirements and obligations to gain access to the EU market. These include rigorous testing, transparency and human supervision."
The third category is for AI systems deemed to be of "limited risk." This includes AIs that directly interact with people, such as chatbots and digital assistants; AIs that generate synthetic content, including deceptive content created for manipulative purposes, such as deepfakes; and AIs used for biometric categorization or emotion recognition.
These systems are allowed if they are transparent, meaning that people know they are interacting with an AI or AI-generated content.
It's also important to remember that the EU AI Act sits on top of GDPR, which means that any AI system that processes the data of an EU citizen (of which there are 450 million) is also subject to rules on data minimization, purpose limitation, and consent: legal requirements that must be addressed by any AI solution that needs access to people's personal data.
This is where decentralized identity comes in. Its ability to efficiently manage AI transparency, GDPR compliance, permissioned access, consent, data minimization, and purpose limitation enables AI systems to interact with people in a way that is safe, secure, and privacy-preserving, and that enhances their performance.
Permissioned access
A key element of decentralized identity is that data subjects (to use the language of GDPR) are able to hold their own data as Verifiable Credentials stored in a digital wallet on a mobile device. This means that the data is fully portable. It is also cryptographically verifiable without having to check in with the source or a third party. And this means (see the sketch after this list):
1. A person can share it directly by consent.
2. That consent can be recorded by the relying party for audit.
3. The source of the data is immediately verifiable.
4. The data shared can be trusted and immediately acted on because it has been digitally signed by the original issuer.
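To make these four properties concrete, here is a minimal sketch of that issue-once, verify-anywhere pattern in Python, using Ed25519 signatures from the `cryptography` package. The credential fields, DIDs, and signing scheme are illustrative assumptions, not the full W3C Verifiable Credentials data model.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer (e.g., an airline) signs the credential once, at issuance.
issuer_key = Ed25519PrivateKey.generate()
credential = {
    "issuer": "did:example:airline",       # hypothetical issuer DID
    "subject": "did:example:passenger",    # hypothetical holder DID
    "claims": {"flight": "AB123", "status": "missed-connection"},
}
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# Later, a relying party (e.g., a chatbot backend) verifies the data
# with only the issuer's public key: no callback to the issuer, no
# third party in the loop.
public_key = issuer_key.public_key()
try:
    public_key.verify(signature, payload)
    print("Verified: the data can be trusted and acted on immediately.")
except InvalidSignature:
    print("Rejected: the data was not signed by the expected issuer.")
```

The design point is the last step: verification needs nothing but the issuer's public key, which is what makes the data portable, consentable, and immediately actionable.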
This makes permissioned access vastly simpler to manage in terms of basic usability. Think about an airline chatbot or digital assistant interacting with a passenger who has missed their flight. The passenger can simply swipe or use voice consent to let the assistant access the relevant flight data stored in a credential, as opposed to having to leave the chat UI to find all these data fields and input them manually.
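Here is a hedged sketch of what such a consent-gated request might look like. The function and field names are hypothetical (this is not a specific wallet or Indicio API), but it shows the three GDPR-relevant moves in one place: request only the fields needed, state the purpose, and record the consent for audit.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    holder: str
    fields_shared: list[str]
    purpose: str
    granted_at: str

audit_log: list[ConsentRecord] = []  # the relying party's audit trail

def request_credential_fields(holder: str, fields: list[str], purpose: str,
                              user_consents: bool) -> Optional[dict]:
    """Request specific fields only (data minimization) for a stated
    purpose (purpose limitation), and record the consent for audit."""
    if not user_consents:
        return None
    audit_log.append(ConsentRecord(
        holder=holder,
        fields_shared=fields,
        purpose=purpose,
        granted_at=datetime.now(timezone.utc).isoformat(),
    ))
    # A real wallet would return a signed, verifiable presentation;
    # here we simply return the fields the holder authorized.
    return {"authorized_fields": fields}

# The assistant asks only for what it needs to rebook the flight.
shared = request_credential_fields(
    holder="did:example:passenger",
    fields=["flight_number", "booking_reference"],
    purpose="rebooking after a missed flight",
    user_consents=True,  # e.g., the passenger swipes to approve
)
```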
Suddenly, chat is much easier to use. And now that there is permissioned access, we can develop personalized services through loyalty program credentials. We can combine search with small language models to solve sets of recurring problems or processes. We can integrate payments in a secure way.
Know your chatbot, assistant, agent
All these things can be achieved efficiently and effectively not just because we have AI systems, but because we have decentralized digital identities, built on Verifiable Credentials, that provide seamless authentication.
It is only because the AI can verify a "government-grade" digital identity, such as a Digital Passport Credential, that it can be certain the data it is accessing really belongs to the person presenting it. It can verify that the person paying for a service is the authentic account owner.
And this authentication works both ways. In terms of transparency, people are not just going to want to know that they are interacting with an AI; they are going to want to know that it is a legitimate, trustable AI agent before they give it permission to access their data.
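A minimal sketch of that two-way check, continuing the Python examples above; the trust lists and credential shapes are hypothetical stand-ins for real cryptographic verification.

```python
TRUSTED_AGENT_ISSUERS = {"did:example:airline-root"}      # hypothetical
TRUSTED_ID_ISSUERS = {"did:example:passport-authority"}   # hypothetical

def verify_presentation(presentation: dict, trusted_issuers: set) -> bool:
    """Stand-in for the cryptographic check in the earlier sketch:
    accept only credentials from an issuer on the trust list."""
    return presentation.get("issuer") in trusted_issuers

def establish_session(agent_cred: dict, holder_cred: dict) -> bool:
    # First, the person's wallet checks the agent's credential:
    # is this a legitimate, trustable AI agent?
    if not verify_presentation(agent_cred, TRUSTED_AGENT_ISSUERS):
        return False
    # Only then does the agent verify the person's government-grade
    # credential, such as a Digital Passport Credential.
    if not verify_presentation(holder_cred, TRUSTED_ID_ISSUERS):
        return False
    return True  # both sides authenticated; permissioned access can begin
```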
Get this right and you deliver superlative services and value. You get to build trust networks that create new, dense partnerships between businesses and with customers. You eliminate the kind of inefficiencies that frustrate people in basic digital interactions, and you remove the obstacles to interacting at scale. And with the integration of search, you remove the cost of intermediaries.
Europe is on the right path with decentralized digital identity — but it needs to up its technical game to meet AI
We're excited about the EU's embrace of decentralized digital identity (in the form of the eIDAS 2.0 regulation and the EU Digital Identity (EUDI) Wallet), but we must also point out that successfully navigating agentic AI (something barely mentioned in the EU AI Act, because the technology is moving faster than the regulation) will require more sophisticated decentralized identity solutions than those specified.
Specifically, the DIDComm communications protocol is critical to making human-agentic AI interaction safe, secure, and workable.
The good news is that the protocol specified by eIDAS 2.0, OpenID4VC, is interoperable with DIDComm. Indicio has also successfully combined both protocols in a single workflow for seamless international travel.
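For a sense of what a DIDComm exchange carries, here is a minimal sketch of a DIDComm v2 plaintext message, following the field layout in the public DIDComm spec; the DIDs, message type URI, and body are illustrative assumptions. In transit, such a message would be encrypted and/or signed rather than sent as plaintext.

```python
# Shape of a DIDComm v2 plaintext message; values are illustrative.
didcomm_message = {
    "id": "1234567890",                       # unique message identifier
    "type": "https://example.org/agent-chat/1.0/request",  # hypothetical protocol URI
    "from": "did:example:ai-agent",           # sender DID (the AI agent)
    "to": ["did:example:passenger"],          # recipient DIDs (the holder)
    "created_time": 1716239022,               # Unix timestamp
    "body": {
        "request": "flight_credential",
        "purpose": "rebooking after a missed flight",
    },
}
```

Because both sides of the exchange are DIDs, each message is tied to a verifiable identity, which is what makes human-to-agent interaction mutually authenticated rather than one-sided.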
To learn more about how Indicio Proven can help you to develop a global decentralized identity solution that works with Europe and can be used with AI systems, contact us here.
###