Indicio recently made a breakthrough in scaling the mediator we have been using for issuing and verifying credentials. This new mediator addresses one of the biggest problems decentralized identity faces — accommodating all the people who want to use the technology.

By Alexandra N. Walker

A mediator is a critical part of any decentralized interaction between the holders of decentralized identifiers. This technology is the “middle man” that facilitates messages between decentralized entities using mobile devices (which don’t have fixed IP addresses for easy connection). You can almost think of a mediator as a mailbox; it provides an endpoint for parties that cannot otherwise connect with each other to send and receive messages. Importantly, these “messages” can include issuing and verifying credentials.
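The mailbox analogy can be made concrete with a toy store-and-forward relay. This is an illustrative sketch only — real DIDComm mediators add message encryption, routing keys, and pickup protocols — and all the names here (`Mailbox`, `deliver`, `pickup`) are hypothetical, not ACA-Py APIs:

```python
from collections import defaultdict, deque

class Mailbox:
    """Toy store-and-forward relay: holds messages for recipients
    (e.g. mobile wallets without fixed IP addresses) until they
    come online and collect them. Illustrative only; not the
    DIDComm mediation protocol."""

    def __init__(self):
        self._queues = defaultdict(deque)

    def deliver(self, recipient_did: str, message: bytes) -> None:
        # The sender cannot reach the recipient directly, so the
        # mediator queues the message at a stable endpoint.
        self._queues[recipient_did].append(message)

    def pickup(self, recipient_did: str, limit: int = 10) -> list:
        # The recipient connects and drains its queue, in order.
        queue = self._queues[recipient_did]
        return [queue.popleft() for _ in range(min(limit, len(queue)))]

mailbox = Mailbox()
mailbox.deliver("did:example:alice", b"credential-offer")
print(mailbox.pickup("did:example:alice"))  # → [b'credential-offer']
```

The key property is that the sender and recipient never need a direct connection; both only need to reach the mediator's endpoint.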

As you can imagine, the number of messages your mediator can process in a given period of time has a significant impact on performance, limiting how many people can use your services concurrently, which, in turn, limits adoption. People today expect technology to “just work”; if they perceive an interaction to be slow, the likelihood of them trying again is very low. To put this into perspective, 47 percent of consumers expect a web page to load in two seconds or less, and 40 percent will wait no more than three seconds for a page to render before abandoning the site.

This puts pressure on your development team to make sure everything works quickly, and the mediator is a key point in the process where people can get stuck. Paying for more, or bigger, mediators is always a viable solution, but it can be costly. This is the context for Indicio’s breakthrough.

The first thing we had to establish was a set of baseline numbers: using mediation, how many credentials could ACA-Py sustainably issue to new users in a minute? What about existing users? How about verification?


Current Traditional ACA-Py Mediator (2 vCPUs, 8GB of RAM):

issue 258 credentials per minute to new connections
issue 540 credentials per minute to existing connections
verify 282 credentials per minute to existing connections

These numbers represent our baseline in a non-clustered environment. Taking this architecture into a clustered environment — the same setup as before, but with everything (user agents and ACA-Py agents) abstracted away behind load balancers and managed instance groups — we get much better numbers.


Clustered Traditional ACA-Py Mediator (2 vCPUs, 8GB of RAM):

The traditional mediator from before can sustainably

issue 300 credentials per minute to new connections,
issue 900 credentials per minute to existing connections, and
verify 600 credentials per minute to existing connections,

using the same mediator (with 2 vCPUs and 8GB of RAM) as before but now with 10 ACA-Py agents (1 vCPU and 4GB of RAM, each).

Using some new techniques and new technologies, such as the open-source project SocketDock, our team was able to greatly improve these numbers.

Indicio’s Mediator:

Indicio’s Mediator can

sustainably issue 1,200 credentials per minute to new connections,
sustainably issue 1,680 credentials per minute to existing connections, and
sustainably verify 3,000 credentials per minute to existing connections.
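To put these figures side by side, a few lines of arithmetic (using only the numbers quoted in this post) show the relative gains:

```python
# Sustained credentials-per-minute figures from the benchmarks above.
baseline  = {"new": 258, "existing": 540, "verify": 282}    # single mediator
clustered = {"new": 300, "existing": 900, "verify": 600}    # + 10 ACA-Py agents
indicio   = {"new": 1200, "existing": 1680, "verify": 3000}  # Indicio's Mediator

for op in baseline:
    print(f"{op}: {indicio[op] / baseline[op]:.1f}x over baseline, "
          f"{indicio[op] / clustered[op]:.1f}x over clustered")
# → new: 4.7x over baseline, 4.0x over clustered
# → existing: 3.1x over baseline, 1.9x over clustered
# → verify: 10.6x over baseline, 5.0x over clustered
```

The largest jump is in verification throughput, which is more than ten times the single-mediator baseline.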

Our team is hugely excited about this progress, given that high-volume verification will drive many seamless processes. For example, with Indicio’s Mediator, airport security would be able to verify 3,000 passengers per minute per mediator, shortening lines while reducing costs for airports.

While the Indicio Mediator is not quite ready for public use, our team wanted to share the news that ACA-Py (Aries Cloud Agent Python) is able to scale to meet the growing interest in the technology. Once the Indicio team has fine-tuned the codebase, we expect to open source it and contribute it to the Hyperledger Foundation so organizations can build their own solutions.

If you have any questions about the new mediator or building your own verifiable credential solutions, our team is always happy to help. You can contact us here.