Why was the internet built without an emphasis on identity or security? Why are we still trying to figure these out 50 years later? What role will AI play in the future of digital trust? Let’s take a brief look at the history of the internet and see what we can learn.
By Sam Curren
The first network of networks, the ARPANET, was declared operational in 1971; it was the earliest system that resembled the internet as we know it today. Developed by the U.S. Department of Defense, ARPANET supported file transfer, remote login, and email, and that was about it. Because computers at the time were both large and expensive, they were relatively uncommon, and you would likely know everyone who had access to your network. So there was no real need for formal trust just yet.
Fast forward to the 1980s: several national and international networks had been created. One of the biggest launched in 1986, when the National Science Foundation Network (NSFNET) funded supercomputing centers at several universities and connected them to the network, giving research and academic organizations across the United States access.
It was on the NSFNET that interest in international connections grew, leading to several breakthroughs, including the international adoption of TCP/IP and the creation of the Domain Name System (before DNS, you had to know the Internet Protocol (IP) address of the machine you were trying to reach). In 1989, the first commercial internet service providers hit the scene, and by 1991 the World Wide Web was available to the public.
Now we have billions of people using the internet, and it would really benefit us to know who is who and how to manage access to private data. But how do we accomplish this given the current scale of digital interaction?
Enter Cryptography
Encryption was a late addition to internet protocols such as email, bolted on as the need arose to keep sensitive information private. There are two main approaches: symmetric key cryptography and public key cryptography.
Symmetric key cryptography is fairly straightforward. Both parties hold the same key, a shared secret, which is used both to encrypt and to decrypt data. A familiar analogy is a login and password: both parties know the password, so each can be reasonably confident of who is on the other end. Unfortunately, if someone else obtains the key, symmetric encryption becomes useless, because there is no longer any way for the system to tell who the true user is.
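To make the shared-secret idea concrete, here is a minimal sketch in Python. It assumes the widely used cryptography package and its Fernet recipe, neither of which is mentioned in the article; they are simply convenient for illustration. The point is that one key, known to both parties, does all the work.

```python
from cryptography.fernet import Fernet

# The shared secret: both parties must already have this key.
key = Fernet.generate_key()
cipher = Fernet(key)

# The sender encrypts with the shared key...
token = cipher.encrypt(b"account balance: 1,250")

# ...and the receiver decrypts with the very same key.
plaintext = cipher.decrypt(token)
assert plaintext == b"account balance: 1,250"
```

Anyone who steals that single key can both read and forge messages, which is exactly the weakness described above.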
Public key cryptography is both more complex and more useful for sharing data with a large audience. Mathematical functions, such as those built on large prime numbers, generate a pair of keys. The private key is kept secret and is used to decrypt data or to “sign” it, proving its origin; the public key is shared openly and is used to encrypt data or to verify a signature. One shortcoming of this system is that it only works if people have your public key, so you need a reliable way to distribute it. Examples of public key cryptography in everyday use include TLS/SSL (web browser security), cryptocurrencies like Bitcoin, and passwordless login standards like FIDO/passkeys.
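As a rough illustration of the key pair in action, the sketch below (again assuming the cryptography package, with RSA-OAEP chosen only for familiarity) shows that anyone holding the public key can encrypt, while only the private key holder can decrypt.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Generate a key pair: the private key stays secret, the public key is shared widely.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone with the public key can encrypt a message...
ciphertext = public_key.encrypt(b"meet me at noon", oaep)

# ...but only the private key holder can read it.
assert private_key.decrypt(ciphertext, oaep) == b"meet me at noon"
```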
Cryptography gives us two very useful capabilities: encryption and signatures. While they are sometimes used together, the distinction between them is clear and important. Encryption scrambles a message so that only a party holding the relevant key can decrypt it; however, it gives the recipient no certainty about where the message originated. Signatures, on the other hand, allow you to cryptographically prove the source of data.
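The signature side of that distinction can be sketched the same way. Assuming an Ed25519 key pair from the same cryptography package (an illustrative choice, not something named in the article), note that the message itself stays readable; what the signature adds is proof of origin and integrity.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

message = b"This statement really came from me."

signing_key = Ed25519PrivateKey.generate()   # kept private by the author
verify_key = signing_key.public_key()        # published for anyone to use

signature = signing_key.sign(message)        # the message is not hidden, only signed

try:
    verify_key.verify(signature, message)    # raises if message or signature was altered
    print("Signature valid: origin and integrity confirmed")
except InvalidSignature:
    print("Signature invalid: do not trust this data")
```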
Verifiable credentials are a good example of signatures in action. When a credential is issued, the issuer signs the data, and any verifier can check that the data has not been changed since issuance. This lets the verifying party trust the data presented by the holder as if it were coming straight from the source. These systems are effective at communicating trust; however, they are not yet in widespread use.
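A simplified, hypothetical sketch of that flow, using a raw Ed25519 key in place of the DID-based proof formats that real verifiable credential systems use, might look like this:

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical issuer key pair; real systems publish the public key via a
# decentralized identifier and use a standard credential format.
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()

# Issuer: sign the credential data at issuance time.
credential = {"type": "UniversityDegree", "holder": "did:example:alice", "degree": "BSc"}
payload = json.dumps(credential, sort_keys=True).encode()
proof = issuer_key.sign(payload)

# Holder presents (credential, proof); any verifier with the issuer's public key
# can confirm the data is unchanged since issuance (verify raises if it was altered).
issuer_public_key.verify(proof, json.dumps(credential, sort_keys=True).encode())
```

The trust property is the same as in a real deployment: any change to the data after issuance breaks the signature.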
Signature systems, specifically decentralized verifiable credentials, are the best solution we have come up with so far for the chronic problem of not knowing who anyone is, or where data comes from, on the internet. When data is signed by an authority, such as a government, a university, or a trusted expert in the field, we can trust that data in proportion to our trust in the authority.
Similarly, if the data is signed by a known acquaintance or friend, you can be sure it came from that person rather than from a bad actor pretending to be them. The internet is ever expanding, with an estimated 328.77 million terabytes of data created every day (that’s 328.77 billion gigabytes). Being able to rely on the information we find will be critical as our society continues to grow and move toward a digital future.
Living in an AI world
Artificial Intelligence (AI) is a powerful technology capable of doing some really neat things. Because it is trained to mimic the decision-making processes of a human brain, it is very good at certain kinds of tasks: recognizing patterns such as faces, generating words and sentences that make sense, and even mimicking voices, faces, and video. AI has one notable weakness: it typically cannot explain why it does what it does, often surprising even its programmers with the decisions it makes in pursuit of a task. But when AI can make almost anything look “real” and can’t really “show its work,” how will we be able to trust anything we see online?
The answer will be to implement robust signature systems such as those offered by verifiable credentials. Here are just a few examples of how verifiable credentials could bring trust to AI systems.
Chatbots are typically trained on databases or on scraped websites. If a system were instead trained on sets of verifiable credentials, each containing specific data signed by the originating authority, then the answers the AI provides could reference the specific information they came from. Evaluating those sources would help determine how much trust to place in the AI-provided answer.
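One hedged sketch of what that could look like: a hypothetical filtering step that keeps only sources whose credential signature verifies against a known issuer before they reach the model. The document shape and function name here are invented for illustration, not part of any existing framework.

```python
from cryptography.exceptions import InvalidSignature

def trusted_sources(documents, issuer_keys):
    """Keep only documents whose credential signature verifies against a known issuer.

    `documents` is assumed to be a list of dicts shaped like
    {"issuer": "...", "payload": b"...", "signature": b"..."}; `issuer_keys`
    maps issuer names to their public keys.
    """
    verified = []
    for doc in documents:
        key = issuer_keys.get(doc["issuer"])
        if key is None:
            continue                      # unknown issuer: skip entirely
        try:
            key.verify(doc["signature"], doc["payload"])
            verified.append(doc)          # the answer can now cite doc["issuer"] as its source
        except InvalidSignature:
            continue                      # tampered or forged: keep it out of the model's context
    return verified
```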
A similar system could be used for voices and video. The next generation of cameras, microphones, and phones could create a signature at the moment of media capture, providing evidence that the recording has not been altered since it was made. Software that performs light editing, such as boosting audio levels or brightening an image, could also sign its modifications, attesting that no significant manipulation of the media occurred.
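A rough sketch of such a capture-and-edit signing chain, with hypothetical device and editor keys (real products would more likely follow an emerging content-provenance standard than this simplified scheme):

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()    # hypothetical per-device signing key

def sign_capture(media_bytes: bytes) -> dict:
    """Sign a hash of the media at the moment of capture."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    return {"sha256": digest, "signature": device_key.sign(digest.encode()).hex()}

def sign_edit(capture_claim: dict, edited_bytes: bytes, editor_key: Ed25519PrivateKey,
              operation: str) -> dict:
    """Editing software signs the edited output plus a record of what it changed."""
    manifest = {
        "original": capture_claim,
        "edited_sha256": hashlib.sha256(edited_bytes).hexdigest(),
        "operation": operation,              # e.g. "brightness +10%"
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = editor_key.sign(payload).hex()
    return manifest
```

A verifier could then walk the chain from capture claim to edit manifest, confirming that only the declared, lightweight operations were applied.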
In short, verifiable credentials could save AI from becoming an engine of mistrust — and provide not only the internet’s missing verification layer but a verification layer able to handle the rapid pace of digital transformation.
Want your organization to be prepared? The strong trust systems you need are already available. Trust Digital Ecosystems, like Indicio’s Proven, can ensure that your data is securely shareable, immediately verifiable, and instantly actionable.
If you have any questions about how to get started or would like to discuss specific use cases with our team you can reach out here.
If you want a more in-depth look at how AI and decentralized identity could interact, you can download a recent article on the subject by Indicio VP of Communication and Governance Trevor Butterworth and Karl Schweppe, Head of Innovation at Bay Tree Ventures.