With ChatGPT 4 expanding its functionality through plugins, the emergence of AI virtual assistants capable of performing delegated tasks through text and voice input in a realistically human way is increasingly likely. In a new position paper, we look at a hypothetical “Chat Valet” booking a flight to illustrate why continuous, verifiable identity is essential to managing the process.
By Trevor Butterworth
The current inability to know how ChatGPT 4 formulates a specific answer to a specific question as it learns from answering questions successfully has created an outpouring of speculation as to what this all means, from the future of business (the end of many jobs) to the future of humanity (the end, period).
Unfortunately, as ChatGPT cannot give an account of its evolution in advance of evolving, we need to anticipate how the technology is realistically likely to be used in the near future, assess where problems may occur, and suggest solutions that should be adopted before those problems arise.
Here, the advent of plugins for ChatGPT gives us a useful prompt: if we combine the ability to “understand” text or verbal requests with a capacity to learn to deliver successful responses, and then add plugins that expand functionality, we can see how a chatbot could become a more adept virtual assistant, able to act on its user’s behalf.
The following position paper explores this idea in more detail. It anticipates using a ChatGPT-like virtual assistant to book an airline flight in a fully delegated way that results in a “beautifully frictionless process.”
But it also imagines what “a compromised AI virtual assistant could do with access to all your accounts AND a predictive understanding of your behavior and preferences.”
If we are to take advantage of AI, this systemic risk needs to be addressed with some urgency; to that end, we illustrate how open-source decentralized identity provides one way to tackle these problems. If we are to delegate tasks to an AI assistant, we need continuous, mutual verification between ourselves and the assistant: it must be able to trust that it is dealing with us, just as we must be able to trust that we, and we alone, are dealing with our assistant.
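To make this concrete, here is a minimal, hypothetical sketch in Python of what continuous, mutual verification could look like as a challenge-response loop. It is illustrative rather than drawn from the paper: the `Party` and `mutual_verify` names are invented for this example, and Ed25519 keys from the `cryptography` package stand in for DID-anchored keys that a real decentralized-identity deployment would resolve from each party’s DID document.

```python
# Hypothetical sketch: continuous, mutual verification between a user
# and an AI assistant. Ed25519 key pairs stand in for the DID-anchored
# keys a real decentralized-identity deployment would resolve from each
# party's DID document.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


class Party:
    """A user or assistant that can prove control of its signing key."""

    def __init__(self, name: str):
        self.name = name
        self._key = ed25519.Ed25519PrivateKey.generate()
        self.public_key = self._key.public_key()
        self.trusted_keys = {}  # name -> public key pinned at DID exchange

    def sign(self, challenge: bytes) -> bytes:
        return self._key.sign(challenge)


def mutual_verify(a: Party, b: Party) -> bool:
    """Each side challenges the other with a fresh nonce and checks the
    signed response against the key it already trusts for that party."""
    for verifier, prover in ((a, b), (b, a)):
        nonce = os.urandom(32)  # fresh challenge defeats replay attacks
        signature = prover.sign(nonce)
        try:
            verifier.trusted_keys[prover.name].verify(signature, nonce)
        except (InvalidSignature, KeyError):
            return False
    return True


user = Party("user")
assistant = Party("assistant")
# In practice these keys would be resolved and pinned via DID exchange.
user.trusted_keys["assistant"] = assistant.public_key
assistant.trusted_keys["user"] = user.public_key

assert mutual_verify(user, assistant)  # verify before booking the flight
```

The point of the design is that trust is checked per interaction rather than once at login: before each delegated action, both sides must freshly prove control of their keys, so a hijacked session or an impostor endpoint fails verification immediately.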
“A Trusted Copilot” is co-authored with Karl Schweppe, Head of Innovation at Bay Tree Ventures. To download a PDF of “A Trusted Copilot,” please enter your email in the form below.