Participants: Christopher Allen (Blockchain Commons), Christian Saucier, David McFadzean, Christoph Dorn, Rich Streeter, Andre Ferreira, Paul Fuxjaeger (University of Vienna), Ian Grigg, Georgy Ishmaev, Frederic de Vaulx (NIST), etc.
Duration: ~100 minutes
AI Transcription: Automated transcription (App: Mac Whisper, Model: Large v3 Turbo w/Speaker Recognition)
AI Processing: Medium cleanup (App: Claude Code, Model: Claude Opus 4.5)
Note: This transcript has been moderately edited for readability - filler words removed, transcription errors fixed, fragmented sentences cleaned up, paragraph breaks added, technical terms standardized. Original content and speaker voice preserved throughout.
Opening and Introductions
Christopher Allen: Welcome everybody to our January Revisiting SSI discussion. I see a few of you weren’t at our kickoffs, so we’ll try to cover some of the old material if necessary. And still not quite the quorum we had for our last meeting at this time, so I’m going to wait just a minute longer and see if we have some late arrivals.
So we have a few more in today. Welcome. Thank you very much. So I first need to offer thanks to our sponsors, Stream 44 Studios. And I hope more will be added to support our strategic inquiry to shape the next decade of digital identity. I also want to thank my monthly GitHub patrons. These are people that support my open source efforts through GitHub with a monthly anywhere from $10 to a couple hundred dollars a month. So thank you very much to all of my individual patrons.
We’re going to do some quick introductions, and then we’re going to talk about the working circles and try to get started on them. We’ll have some breakouts and discussion and then continue and plan our next steps. But our overall goal is to find a lens that calls to you and to have someone help you with it.
So some quick introductions. Raise your Zoom hand to share and I’ll call on you. Your name, affiliation if any, one sentence about what brings you to this work. We’ll try to keep them brief. As a model, I’m Christopher Allen of Blockchain Commons. I wrote the original 10 principles in 2016. I’m here because I believe we’ve lost our way and this community can help us find it again. Who wants to go first?
Christian Saucier: Hey guys, my camera is not working today, but I’m Christian Saucier. I’ve been working in the identity space for about three years and built a system with David, who is on the call with us today. I’m very excited about decentralized identity and self-sovereign identity in general. I’m here to build solutions with you all that will show the world how to use and benefit from decentralized identity. Happy to be here. Christopher, very impressed by your work as well.
Christopher Allen: Thank you. David, go ahead.
David McFadzean: Yeah, as Christian mentioned, he invited me here. I’m also very impressed by the work of Blockchain Commons and Mr. Allen in particular. I’m really interested in the decentralized space and looking forward to working with all of you.
Christopher Allen: Okay. Christoph, do you want to go next?
Christoph Dorn: Hi. Yes, I came across SSI when it first came out, and I’ve been thinking about it ever since. I’m interested in building a modeling engine for private data spaces, and I need a way to define whether a system can secure your identity or not. So I’m looking to get involved now to figure out how we can define the SSI principles in a way that lets us model them and then assess whether a system is SSI-capable or not, as a starting point. And from that lens, look at whether it’s just a technical capability requirement of a system, or whether there are external environmental pieces as well that need to fall into place for a system to be SSI. I’m exploring that to be able to model SSI systems.
Christopher Allen: Thank you. I see next is Streeter.
Rich Streeter: Hey, my name’s Rich Streeter. I work for a company that created a way to store files without using the network or any of the tools on the network to store them. In the process, we think we came up with something that was fairly close to the SSI principles. So I’m just around here to see if we can make two plus two equal six.
Christopher Allen: Great. Thank you. Andre.
Andre Ferreira: Hi. My name is Andre. I came to SSI through research conducted for OWASP focused on the Cornucopia IM suites. My goal is to offer a new companion deck for that project. I’ve been exploring how I can align SSI with Cornucopia, or at least its methodology, for players to better understand and work with SSI, or at least with that entity.
Paul Fuxjaeger: Hi, everyone. I’m Paul, calling from Vienna. I got interested in SSI via a colleague of mine called Markus Sabadello. Maybe someone here knows him. I’m currently mostly interested in the application of SSI also via the AT Protocol and DID PLC. And doing my best to advertise SSI or everything around it here in Vienna at the university.
Christopher Allen: Great, thank you. Ian?
Ian Grigg: Hello, guys. I haven’t actually put my hand up, but I have been involved in various aspects of the identity space for a long time, like 30 years. About 15 years ago, I bounced into Kenya where I discovered a thing called Chamas. That provides a radically different viewpoint of identity, which stands in opposition to the Western individual-centric view. It’s more a group-centered view that exists in Africa and Asia. So I’ve been following along with that for some time and building software to support those people. That’s my interest in the whole space directly.
Christopher Allen: Great. And Georgy.
Georgy Ishmaev: Hi. A couple of words. I’m an academic researcher. I worked on SSI before. I have a double background in computer science and philosophy. So I’m really interested in broader problems with digital identity. The ethical issues attract me the most, because right now I think most of the digital identity solutions which are going to be adopted are problematic on many levels. And to me, the SSI community is at least where people are coming from maybe a different perspective.
Deeper Introductions: SSI Journey and Lens Selection
Christopher Allen: Great. Thank you. I wanted to have some context from people. In particular, when did you first hear about self-sovereign identity in general? When was the first time you heard it pass over your wheelhouse and said, “What is that?” or “That sounds interesting”? When did you first learn about the principles? If they inspired you in some fashion, what did they inspire you to do, especially early on? But then what is the biggest obstacle that you found in our work? And finally, if you’ve got kind of a hint on what of the lenses that we’ve shared on the Revisiting SSI site in our previous meetings, is there a lens particularly that appeals to you? Anybody want to begin? Otherwise, I’m just going to start alphabetically. Andre!
Andre Ferreira: I’m used to that. So I’ve started organizing my research. I think it was either September or October of last year. I just went through all of the organization of the structure that I had, and then I came across it around November last year. So that’s when I actually found it for the first time, when I started reading about it.
I’ve got my lens picked out already. It will be the compliance lens. I think the technical background that I have might be of assistance there. Of course, the ethical and the legal one, I’ll need assistance. I’ll follow and learn from other people. But on the technical side, I think I can assist. I did read the paper already on CSSPS. I’ve downloaded the zip and went through some of the data there.
I’m not 100% sure that I can actually achieve my goal of marrying the two projects together at this stage, but mainly because I need to offer mappings that don’t exist yet. So I think there will be a lot of creation from this group or all of the groups that will allow more knowledge to be shared and placed somewhere. And the goal then would be to connect those, either bring that knowledge into OWASP or connect to that knowledge, to better pass it on to other developers. So that’s pretty much where I’m at.
Christopher Allen: Great, thank you. So Ian and I have been involved with Digital Identity for 30 plus years. I know you heard about it early on and had your own thoughts about it. You want to share a little bit about your early experience with self-sovereignty and pros and cons?
Ian Grigg: Sure. To be fair, I did hear about it fairly early on. But I felt that it was more on the con side. My principled objection was a logical one in that SSI had such far-reaching goals in terms of managing the data of users that there was no way that a user could understand what was going on under the hood. Therefore, it would go the same way as the browser in that once it was up and going, I expected companies to move in, take control of the SSI clients, and then start to manipulate those clients to their own benefits. That is, the clients would be working for the companies that were shipping the clients, not for the individuals.
So I felt that SSI would have this logical inconsistency at the heart of it. And I couldn’t see a way around that. I understand that’s a bit of a dilemma. And it’s also worse for me in the sense that I’m building software, which also has the same contradiction in that I’ll be providing software one day to users who have to trust what’s under the hood and therefore trust me, the supplier.
My saving grace is that we live, my software lives in the world in the context of groups which all meet physically. So there’s lots of things that happen in the physical room, and all the software has to do is to catch up with what’s going on in the room. It doesn’t direct what’s going on. At no point is it in control, and at no point can it isolate the user into their bubble and take them off on a different path. The user will be fundamentally working with the group, and the group will be tracking everything that’s going on in the apps, and the apps will be coordinating amongst themselves and delivering the obvious result to the users.
So there’s a control there. We can do some things as a rogue supplier, but we can’t isolate the user and start to pervert them in the way that modern social networking does. That’s my logical critique of SSI. And I’m happy to have that debate and hear what other people say.
Christopher Allen: I’m curious, there are two lenses that felt particularly appropriate for you. One was the generative lens and the other one was on kind of the group lens. Have you looked at either of those, does either appeal to you more than the other?
Ian Grigg: To be frank, I’ve been snowed under with other stuff for the last two months. I wasn’t even able to get to the first meetings. I probably would err on the side of the group lens because it naturally appeals to the work that I’m doing. The generative lens, I’m still struggling to get a nice viewpoint of what generative means. So I would lurk on the outsides of that if I had time. But certainly I’d be interested in the group lens.
Christopher Allen: Yeah. Well, by the way, this tension goes a long way back, even, I would say, more than 30 years. I remember Peter and Trudy Johnson-Lenz; they basically invented the term groupware and defined it as intentional group processes and software to support them. But it was only two or three years later that Lotus basically said, “Well, we’re groupware; it’s software for groups.” And then eventually it became, “We don’t really care. It’s just multi-user software.” So it’s that whole tension of subversion of some of these concepts. I love the idea of intentional group processes and software to support them, but that got lost quick. Christoph.
Christoph Dorn: Yeah, hi. I’ve been building complex software systems in JavaScript before Node.js. And by complex, I mean trying to model all aspects of a full stack application in components and then finding a language to interlink those components. To eventually arrive at a point where you can construct web applications from models.
As I went down that path, it all comes down to data, identifying data and privacy of data. So over the years, I got into this idea of private data spaces. The idea is that data is private first, and then you slowly disclose it in various ways. If you look at SSI being a way for an individual to control things, first everything is private, then you gradually start controlling things, start to gradually interact with the world to make a payment which has to link back to your identity somehow to the bank account and so on.
So applications as a whole become about binding data spaces and allowing access between different data spaces. My interest is there are a lot of ways to arrange this. In the current paradigm, platforms arrange it in a certain way. SSI is going to arrange it in a different way. And my interest is to figure out if there is an arrangement that allows you to extend these private data spaces.
It really comes down to: you don’t want to disclose it to the world, you want to disclose it to a small group. And then you want to also be able to revoke it and everything that comes with that. So is this even possible? If it’s possible, what are the technical primitives and patterns required and the boundaries that must be enforced and how can we model that in components that we can interlink in an interactive JavaScript work model that I have running?
So I need a language to program the model. SSI is a way to create, in my view, systems that are capable of keeping your sovereignty in terms of data and connections. So how would that work? How can SSI help define whether a system is capable of safeguarding your data or your identity? That’s what it comes down to. And that’s not realistic in every context. So where does that break down, and what types of applications can even be built SSI or not? Defining that space really well will open up a lot of insight into what challenges we’re facing, where the technical boundaries are, and where the human boundaries start.
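As an illustrative sketch only (not Christoph’s actual engine), a private-first data space with grant and revoke might look like the following; the class and method names are hypothetical, and the sketch also shows the boundary he points at: revocation can gate future reads through the software, but it cannot claw back copies already taken.

```python
class DataSpace:
    """Private-first data store: every record is invisible to other
    parties until the owner explicitly grants access; grants can be
    revoked, which only gates future reads."""

    def __init__(self, owner):
        self.owner = owner
        self._records = {}  # record id -> value, private by default
        self._grants = {}   # record id -> set of parties granted access

    def put(self, record_id, value):
        self._records[record_id] = value
        self._grants.setdefault(record_id, set())

    def grant(self, record_id, party):
        self._grants[record_id].add(party)

    def revoke(self, record_id, party):
        # Cannot undo disclosure that already happened; it only
        # stops this software from serving the record again.
        self._grants[record_id].discard(party)

    def read(self, record_id, party):
        if party != self.owner and party not in self._grants.get(record_id, set()):
            raise PermissionError(f"{party} has no grant for {record_id}")
        return self._records[record_id]
```

For example, an owner can put a bank-account record, grant it to a bank for a payment, and revoke it afterward; any later read by the bank through this data space fails.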
Christopher Allen: Of the different lenses, was there any one that kind of rose to the top that you’d like to concentrate on?
Christoph Dorn: I’m not sure about the specific lenses. That’s been my struggle to figure out an approach here. It’s really about defining what does it mean for systems to be capable of SSI, whatever that means. And that’s what I’m interested in exploring. That’ll touch on a lot of topics.
Christopher Allen: Okay. Anybody ready to speak next?
Christian Saucier: We can keep going in alphabetical order; I’ll pick up next. Hey guys, Christian here. I was an early adopter and promoter of Bitcoin back in the early 2010s. That really opened up my mind as to what’s possible with decentralized technology. So decentralization has really been my focus for the last 15 years.
My passion with blockchains has always been more on the data side than the financial side, which is also why I’m still working today. Identity comes in at the end here in my career. The last couple of years was more of a focus towards that and I realized how fundamental it is. Decentralized money was definitely fundamental in a way. Decentralized identity is a harder problem because it’s more complex than just adding and subtracting numbers.
The protocol that David was the architect for, more on the solution side, the protocol we built over the last few years, really implements that multidimensionality of relationships between identities of things. So it’s not just a person, right? A group of persons has an identity too. And the thing we’re working on has an identity, and each document, each piece of data could have an identity. So that is the big problem.
I believe we have tech now that can actually be used to do this stuff for real, in a truly peer-to-peer fashion. I came from the DIF, where there was almost a bit of a reluctance towards blockchain. I’m not saying it has to be blockchain, but it’s got to be decentralized. And DNS is not decentralized enough for my taste.
So I’m very excited to be here. When I look at the lenses, based on what I said, I think the relational and contextual identity lens - identity is really a relationship, essentially. Now it becomes hard to model, indeed. So I think that’s where I’m attracted. But man, it’s a big space. So I want to have success.
Christopher Allen: Thank you. David, do you want to go next?
David McFadzean: Yeah. So I think I first heard of self-sovereign identity around 25 years ago. I was involved in a startup called Javion. One of our competitors at the time was a company called One Name, which was one of Drummond Reed’s early ventures. We were both in the space of secure internet identities. We were trying to integrate that with being able to publish and rate and review content and send and receive micropayments based on that content. But that didn’t last too long. This was when the dot-com bubble burst. But I’ve been kind of in and out of the space ever since.
Most recently with Christian at Self-ID, that’s where I became aware of the SSI principles for the first time. And they very much informed the design and architecture of the system that Christian mentioned, our DID system.
I think the biggest obstacle I’ve seen is in working with the DIF. I was kind of shocked and dismayed that the DID systems that they were talking about and recommending were actually not decentralized. They were decentralized in name only. And that just seemed to be a step backwards. So that’s the main issue that I’d like to address is actual decentralized identity, not just in name only.
And I’m open to whatever suggestions you have for lenses where I could add the most value.
Christopher Allen: Okay, well, we’ll be talking more about that shortly. Frederic, going alphabetically.
Frederic de Vaulx: Yes, hello. So I’m not directly connected with all the decentralized identity work, but ever since I started joining the W3C mailing lists, it’s something that has been important to me. With the work that I do as a contractor at NIST, though, it’s not always directly at the forefront.
I feel that I need to be able to see where things are going, to see if there is anything I can sometimes nudge in this direction. The work with the federal government is not necessarily directly looking at decentralizing everything. But again, I feel it’s important, so I’ve been trying to follow the great work that you all are doing in this space and to see how, little by little, we can start making sure that, by design, when identity is needed or when information needs to be shared, we start seeing some of these principles showing up. So that in the long run, in terms of education, people will know and understand what all that means and say, “Oh, yeah, sure, it makes sense,” even though maybe they were not directly brought up with it.
What’s inspiring? It’s that this community is really driving, looking at the different technologies and seeing how that can be used, either decentralized blockchain-based or something, or also outside of that and seeing how that can be leveraged.
I guess at least in the blockchain world, some of the blockers we can see are still a little bit the connection between everyday use and everyday folks, and what that means when it relates to the technology. And also the responsibility of maybe holding all the keys. It felt maybe safe before to know that we have trusted institutions that do things on our behalf, at least coming from a European country. That’s not exactly what we’re taught, but it’s something that seemed okay.
And now having the responsibility that, well, if we lose our things, then maybe we can’t get back to them, or there are things we can’t prove anymore. Maybe not necessarily a deterrent, but something that needs to be intuitive when people are using it. So right now I’m trying to follow a little bit what you guys are doing, learn, and see how I can help if there is anything.
Christopher Allen: And was there any of the lenses that draw you yet or you’re still puzzling?
Frederic de Vaulx: Lenses? Sorry, can you… I’m sorry, there were the 15 lenses.
Christopher Allen: That’s fine. We discussed them in the last meeting. So let’s go on to Georgy.
Georgy Ishmaev: All right, so I think I can also relate to what some said before me. I got interested in SSI primarily coming from research in blockchain. To me it was a very interesting approach: in the same way that blockchain protocols try to minimize reliance on identity, I think SSI, philosophically speaking, also tries to minimize reliance on identity and to provide only the minimal information needed.
How did they inspire me? They inspired me as an alternative vision because right now I think identity is seen as a solution to everything and we really try to fix everything with identity. Just consider recent proposals on age verification and identity checks for online users. I think it’s a very problematic line of thinking. Sometimes we need to think that less identity is better. And in my opinion, at least in the European landscape, there is a broader appreciation that putting identity everywhere has its risks.
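Georgy’s point about providing only the minimal information needed is often implemented with salted hash commitments, the basic mechanism behind SD-JWT-style selective disclosure. A deliberately simplified sketch follows; the function names are illustrative, and the issuer’s signature over the digests (which is what makes the commitments trustworthy in practice) is omitted.

```python
import hashlib
import secrets

def commit(attributes):
    """Issuer side: commit to each attribute with a fresh random salt.
    The digests would be signed and shared; the salts stay with the holder."""
    salts = {k: secrets.token_hex(16) for k in attributes}
    digests = {
        k: hashlib.sha256(f"{salts[k]}|{v}".encode()).hexdigest()
        for k, v in attributes.items()
    }
    return digests, salts

def disclose(key, value, salts):
    """Holder side: reveal exactly one attribute and its salt, nothing else."""
    return {"key": key, "value": value, "salt": salts[key]}

def verify(digests, disclosure):
    """Verifier side: check the revealed value against the committed digest."""
    expected = hashlib.sha256(
        f"{disclosure['salt']}|{disclosure['value']}".encode()
    ).hexdigest()
    return digests.get(disclosure["key"]) == expected
```

A holder with commitments over name, birthdate, and an over-18 flag can reveal only the over-18 flag; the verifier learns nothing about the other attributes beyond their salted digests.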
What is the biggest obstacle? I don’t know; it’s an open-ended question, because what is the purpose of this work? Trying to shift the balance in these systems to better serve end users rather than verifiers. There is no single concrete obstacle. There are many of them, so maybe we’ll try to identify them in this process.
Christopher Allen: And have you had a chance to look over the lenses? Any particular one feel like…
Georgy Ishmaev: I’m a bit hesitant to commit due to my current job commitments, to be honest. If at some point my current job aligns well with lenses, I would be very happy to do this dual-use purpose research.
Christopher Allen: Great. Okay. So I think we just have Paul and Rich left. So Paul, do you want to go first?
Paul Fuxjaeger: Yeah, sure. Thank you. So for me, it’s 10 years now. I think what inspired me back then when Markus explained it to me was when he said there is a way to get rid of the identity provider. So we don’t need to have this anymore. We can do it ourselves in a sense.
I haven’t read the lenses in depth, but I just skimmed a bit. I think what speaks to me is this principal authority lens. To me, SSI is one component that may serve to leave this era of digital feudalism behind, I hope.
If I can speak to what the biggest obstacle in my understanding was so far, it’s that the terms themselves are, in many discussions, very loaded. Whenever I say identity or sovereignty, people understand very different things.
Christopher Allen: Yeah. That’s been my experience as well.
Christopher Allen: Great. And we’ll close with Rich. Unless I miss somebody. Did I miss anybody? I think Rich is last.
Rich Streeter: Very seldom has having an R and an S in your name been an advantage. So one and two are about the same for me. My first email to you was on May 3rd, 2017. So I think we’ll narrow it down to that date.
What inspired me was the dream of decentralization. That just pings to me because I don’t know how you can have a centralized system and be decentralized at the same time. How you can be centralized and sovereign at the same time. It just doesn’t seem to be possible to me. So that’s what inspired me.
The biggest obstacle that I think is out there, and please excuse me, I’m going to get lots of hate for this: I think there’s too much focus on blockchain. I realize blockchain has its good points and its bad points. But I don’t know if you can have a decentralized system where you’ve got a blockchain out there. Again, kind of like the other one, the two just don’t seem to work together; I don’t see how you can have a decentralized blockchain.
The lens that appeals to me is the cryptographic paradigms lens, because what it says in the write-up is that PKI brings limitations with it. I think PKI is part of the problem. For what it is and what it was, it gave us things we didn’t have before it was around; it’s a miracle. But the further down the road we go, I think we have to come to the realization that, much like Moses, it can get us close, but I don’t know if it can get us over into the promised land.
Project Goals and Timeline
Christopher Allen: Good metaphor. So thank you, everybody. Just to talk about what we’re trying to do over the next couple of months. Each of the lens briefs was largely written by me, based on my experience and on issues that either I have had or others have brought up, as different categories. What I’m really seeking from the community is peer review. I don’t want to put a lot of effort into one of them if it just isn’t resonating with people, but with some review, making sure that they’re reasonable, that I’ve got the necessary references, and so on, we can identify a subset of the lens briefs that are worthy of additional work. I’d really like to do that this month and in February.
Out of that I hope will emerge some specific areas where more work needs to be done. I think it’s pretty clear that the operational one has had multiple people interested in that particular lens. The goal of that lens is to say that there are a lot of aspects of the aspirations of self-sovereign identity that are subjective, but others are objective. So how can businesses, regulators, etc., basically say, “Hey, wait a second, this actually is in the right space. We’re going in the right direction” in an objective way. And it feels like there’s a number of people who want to work on that.
But I think that out of finishing a lens brief that simply covers the broadness of the territory, questions are going to emerge: “Wait a second, what does less identity mean? What are the specifics of this?” And I think this will lead to some more microcosms of, “Okay, so how do we express portability as a principle that is objective, and portable for whom?”
So the idea is that out of the lenses come some specific, meaty things. We’re not necessarily talking about academic papers at this particular point, but at least a draft that basically says, “Hey, the three of us have dived deep into this one thing, and here is something that is solid and useful for people.”
My goal is, by April 26th, which is exactly 10 years from the original principles, to publish a first draft of the new revised principles, based on the lenses and any of these drafts, and get it out in a variety of places, but also talk about it as an ongoing work. Maybe there’s something we missed in that portability principle that we updated, and the supporting material for it.
And for a number of people who have blogs, or who are academics for whom papers are part of what they deliver in their work, we would do papers and articles over the next year based on the work that we’re doing together.
The 15 Lenses Overview
Christopher Allen: So we’ve talked briefly about the 15 lenses. I don’t think we have critical mass to attack all of them. But the idea is they all reveal something that others miss. I’m hoping that we can go deep on some. But there are four broad categories of the principles.
There are a number that are really just on the foundations: What is decentralization? What are rights? I think there is an emerging sense that coercion is another way to talk about things. I’ve been saying lately to a variety of privacy advocates that maybe we’re making a mistake by emphasizing privacy when we should be talking about coercion resistance. And so there are some aspects around that.
We’ve had a little bit of discussion about relational and contextual identity in the sense that, as somebody said, in some ways there is a centralization because it’s what others say about you as well. How do we handle that? How do we handle that in group situations? How do we make it so that it’s not abusable and is as fair as possible?
And then there’s this compliance, governance, and technical side of things. We talked about compliance briefly. But I also think there are some real opportunities. Somebody brought up blockchain. Part of the problem is: what is the definition of a blockchain? I think where it has been most problematic is when blockchain has been strongly associated with money; the monetary incentives and the games and things of that nature are what a lot of the people who object to, or have concerns about, blockchain and digital identity point at. It’s that intersection that we need to puzzle out. And I do think there are some interesting answers emerging in that space, everything from various kinds of witnesses to different kinds of chains. And then there are new opportunities in key management that keep us from being at the mercy of that single key on a single chip someplace, which we all could lose or somebody could take from you.
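The key-management point, not being at the mercy of a single key, is commonly addressed with threshold schemes such as Shamir secret sharing, where a secret is split into shares and any threshold-sized subset reconstructs it. A minimal, illustrative sketch over a prime field (not production key management, which would encode real key bytes and handle share distribution):

```python
import secrets

PRIME = 2**127 - 1  # Mersenne prime; all share arithmetic is mod this field

def split(secret, threshold, num_shares):
    """Split a secret (an integer < PRIME) into num_shares points on a
    random polynomial of degree threshold-1 whose constant term is the secret."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]

    def eval_poly(x):
        acc = 0
        for c in reversed(coeffs):  # Horner's method
            acc = (acc * x + c) % PRIME
        return acc

    return [(x, eval_poly(x)) for x in range(1, num_shares + 1)]

def recover(points):
    """Recover the secret from any `threshold` points via Lagrange
    interpolation evaluated at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

With a 2-of-3 split, losing any one share (or chip) is survivable, and no single share reveals anything about the secret.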
Discussion: Defining SSI
Rich Streeter: Hey, this is Rich. Before we move forward, and if I’m sabotaging your agenda, please excuse me: I was just going back and looking at what you have as a definition of self-sovereign identity. There’s a lot of talk about what it’s got to contain and its attributes, and to some degree it says how it differentiates from other things that are out there right now. But I’m not sure that there’s a Webster’s definition in there.
Christopher Allen: No, and that has been the obstacle from the beginning. When I first proposed the umbrella term for the technology, I couldn’t really come up with something that met my needs. Anything that could be done in a sentence, much less a paragraph, just wasn’t rich enough to cover it. And that’s why we ended up with the 10 principles, was basically saying, “Hey, if you’re following these 10 principles, you’re probably self-sovereign.”
And what we’ve learned in the years since is that it’s quite possible to have a lot of the principles and not truly be whatever this larger thing is. And I’m not absolutely sure that, I mean, I think this is a 21st-century problem; it isn’t just our discipline. The desire to fit everything into a soundbite sentence, or whatever can fit in a tweet, has a cost. And do we want to bear that cost with something that is so important? But yeah, it’s absolutely a tension.
For me, it’s been interesting because self-sovereignty had this association with the libertarian side of the movement, which wasn’t entirely my intent. I came from a perspective of living systems theory, which basically says that various kinds of living systems have to have membranes, and those membranes need to be semi-permeable. Without them, you lose coherence, and that causes problems with our systems. And I felt like the membrane here wasn’t working.
I would add to that the historical precedent, and I’ve been saying this in a couple of different talks, that the nature of sovereignty has been renegotiated periodically over the last thousand years. In our latest version, corporations are basically saying, “Well, we’re a new sovereign,” in the same way that city-states went to the kings and said, “Wait a second, we have our own sovereignty and our own layer there,” which ended up changing the nature of sovereignty in our world today. Well, now corporations are doing that. They’re basically saying, “We’re not subject to national control. We’re going to choose where we want to be. We’re the new sovereigns.” And this ends up leaving people out of the equation.
So that’s just two of a number of different kind of historical things that I don’t know how to capture in a definition of a technology. Christoph?
Christoph Dorn: Yeah, what came to me listening to you: I think one theme that keeps coming out is that systems are being implemented, and they are not what we consider SSI systems. It’s this idea of defining what SSI is. Coming into this workshop and reading some of the material and where things are at, from my perspective, if the only thing that comes out of this workshop is a clear definition of what makes a system SSI and what doesn’t, I think that’s highly valuable.
Because SSI is a great umbrella to potentially realize this vision. And if it gets diluted and there’s no strong boundary to point at, that term is going to get lost and the whole movement is going to get lost. So I think a very strong line needs to be drawn. What is an SSI system? What are SSI capabilities? What is not an SSI system? Then we can start that conversation: this is not SSI, right? And why not?
The CSSPS Framework
Christopher Allen: So I’ve seen that Grok and a number of the various LLMs out there are well-versed in talking about and trying to summarize self-sovereign identity. I’ve actually tried it on at least four of them so far, and it’s surprising how much detail they have on it. But I think that the challenge is that all of these end up asking: what is ownership? What is control? What kind of autonomy is absent? And that just causes some problems. But I do agree that we ought to be able to define it better.
I like what two Japanese scholars did; I hope they will be on tonight. The principles-to-compliance question was the one that they were really focused on. When they did their analysis of my self-sovereign paper, plus two others that were written by academics in the years since, they ended up expanding the 10 principles into 46. They basically said, “Well, these ones over here are not really measurable. They’re subjective. So we’re going to slide those aside. But those that remain we’re going to nail down.” And they ended up with 46 of them, which I thought was kind of fascinating.
And I’d like to take that further. I have maybe some quibbles about some of the choices that they made. But I think there’s also an element that they missed, which is that a number of the things that they slid off to the side can be addressed on the regulatory side even though they can’t be addressed on the technical side.
So when I was doing some advising on the Swiss e-ID, which is a centralized digital identity system, there were a number of things they were attempting to do that were kind of self-sovereign-ish, but not decentralized, by basically saying, “Here are some regulatory things that we’re going to demand and establish in courts and whatever,” things that can’t be done with technology, or at least I don’t have any obvious ideas on how to do that. Which sort of says there’s an element here of: this isn’t all about tech.
And then there’s that smaller set, and I don’t really know what that smaller set is, which is that not everything important can be measured. So what are the ethical commitments here? And this is maybe where some of you who have more of a philosophy background can help: how can we better frame the ethical commitments, the coercion resistance, anti-violence, whatever? I’m not sure what the right philosophical framework is versus improving our technology tests, where a lot of people go, “Wait a second, did:web, that’s decentralized? How can that be a decentralized identifier when it’s using DNS?”
And then my thing at the bottom of this is I feel like if we treat all the principles as measurable, it might destroy the ones that matter the most. So that’s kind of my thing there.
When I’m talking with both Taiwan and the Swiss government about regulatory frameworks, I talk briefly about some of their aspirations to address the potential harms of digital identity systems through regulation. Unfortunately, some of them I think are easily abusable, in particular those where something that is supposed to be a voluntary system ends up becoming mandatory. They’re basically saying, “Oh, it’s okay because it’s voluntary. You don’t have to use it.” But the reality is that you’re going to be forced to use it by market pressures, by cultural pressures, or whatever. So again, how do we address that?
I think there’s some interesting things with the Utah model where they’re basically saying, “Hey, we’re going to recognize some identity systems, but we’re not going to monopolize them. We’re going to basically say what the minimum standards are, but we’re not going to create a Utah identifier.”
But I think there are other things here. Somebody expressed interest in the principal authority and the anti-digital feudalism side of things. My particular twist on this is that if we’re going to give these new sovereigns, Google, Apple, et cetera, a lot of ourselves in various ways, there actually is a whole body of law around what are the duties that they owe us when that happens, when they have more power than we do, but has never been applied to platforms. What are the duties of care, of loyalty, of disclosure of their incentives? How do we restore accountability? I think that’s another very interesting area.
These are all in the big category of how do our principles survive contact with law, standards and power? And it feels like we at least have maybe six people that are interested in this general high-level category that will emerge as a lens, as something that will be useful that we can take forward and revise some of the principles.
On Defining Requirements
Rich Streeter: So this is Rich again. Can I humbly suggest, going back to my systems analysis days and speaking as one of the guys who thinks he’s got a tech solution to this, that the people who are putting out the requirements haven’t figured out what the requirements are yet.
Just take it for what it’s worth. It’s free advice. Concentrate first on the philosophical side, figure out what it is that the philosophers think we need to do. Then after that, let the techies figure out if they can hammer something into that, because my fear would be as a techie is I’m not quite sure exactly what I would be building to at this stage of the game.
Because right now, you basically came up with your 10. I think our company came up with something completely independent of it that actually meets a lot of what it is that you’re looking for. It was just by pure kismet. Kismet happening a second time is very unlikely. So I think I’d prefer to know what the philosophers want before the people who do the building take a shot at it. Over.
Philosophy and the Nature of Identity
Christopher Allen: Ian, go ahead.
Ian Grigg: Yes. I like the domain of philosophy. Not that I know much about philosophy, but it’s a very interesting alternative philosophy to what most of us think. I will point to the West because the West has what we call an individualistic philosophy. It starts from the center of the human mind. And Descartes said, “I think, therefore I am.”
But there’s a completely alternative viewpoint which casts the world of identity into a different vein, which is the world of groups. And that basically says, for example, that a person is a person through other people. I am because they think. I am because they view me.
The interesting thing about this, I’ll just post a very short article in the chat. It’s not that long, but it’s really worth reading, because it casts the philosophy as worldviews at war, in the sense that we’ve got the Western individualistic concept, which we’ve all been taught in the West, and then there’s the Eastern group-centric viewpoint, which takes a completely different perspective.
Now, I would plant the seed that this is going to be germane to the answer of what is the I in SSI? What is identity inside self-sovereign identity? And the other claim I should make is that their view, the group-centric view of what other people think of you, is actually much more aligned to the concept of SSI, or not so much the concept, but the dream. What you’re reaching for is implementable from that starting position. What you’re reaching for is not implementable from the starting position of individualistic identity.
Christopher Allen: Right. So there’s been lots of interesting discussions in this category. There’s everything from the progressive trust side of things. Somebody mentioned earlier that there were these sort of different levels of disclosure. There’s the relational autonomy aspects of “I am because we are.” This is also some of the discussions about generative identity.
A lot of people desire to keep context boundaries distinct, because when contexts cross, there can be misinterpretations, because people don’t understand the contexts. I think, Ian, in some of your work, you’ve been thinking about some of the multiscalar aspects of things. There’s a lot of discussion about when self-binding is appropriate. Can you coerce yourself? But when does that cross over into territory where it becomes abusive? Can we better understand where those lines are?
It’s particularly appropriate as I’m seeing a lot of interest in a couple of different places around stewardship, whether that’s the age of consent, the “are you an adult” side of things, but it also applies to people as they get older, to children, and so on. And I think there are some particular risks there, which is the dark side of everything being a group identity.
Can we address some of those? So I would love to see some more stuff here. I do know that you and Matthew Schutte are interested in various aspects of this. And I hope that we can get some more there on the more pure philosophy side.
There is this sort of irreducible-person theme that keeps on coming up. Part of it is just that your existence has nothing to do with the digital side of things, and how we keep on forgetting that. We keep on falling back into, “Oh, well, it’s your biometrics or whatever.” But no, there is a fundamental dignity that you have that is completely distinct from any way that you can be measured, that you’re six feet tall or any of those types of things. Even if you could measure all those things, it’s still immeasurable.
And I know that in the first principle of self-sovereign identity, it has consistently been, in my opinion, misinterpreted as that it’s digital, it’s only about digital. But no, the first principle was you are always more than the digital. And that “more than digital” comes first.
Coercion and Discipline
Christopher Allen: Which then comes to this preventing-coercion side of things. I think that’s the big one. The philosophies of who is a person, what is identity, and all that kind of stuff, “I think, therefore I am” versus the Ubuntu “I am because we are,” all have these coercion risks in this modern world. How do we address these coercion risks? This is where I feel maybe the most scared: there’s just so much monetary and political benefit to coercion, to making people less aware of how they are being coerced.
Ian Grigg: Yes, yes, and just to play the devil’s advocate here, I’ve noticed that yes, this coercion is a big issue for us. But what I’ve noticed with the chamas in Africa, there are many names for them, is that people join them for the explicit purpose of being coerced into a group-saving context. In this sense, I would say that a better synonym for coercion is discipline. They join these groups so that they can be disciplined within the group. They can self-discipline themselves and discipline each other to turn up and save money.
Now, it helps that they have a really refined concept here. It’s a very particular, specific concept, which I guess is one of the advantages that they have. They are there to save money, and they can’t do it any other way. All the other ways are blocked for them. But if they come together as a group and discipline themselves to follow this process, indeed coerce themselves to follow this process, that is a happy result for them. They are extremely, extremely loyal and happy with their groups. It is amazing to see how happy they are and how excited they are to go to their group meetings, to be coerced, to be disciplined.
So I think in the bigger perspective here, coercion can be a tool for good, obscure or particular or extraordinary as that might be.
Christopher Allen: Yes. This is why I really hope that you’ll take this on. I love this word discipline; I haven’t heard it used here before. I’d love to have this lens modified to talk about this aspect of things, the good side of this.
I mean, to a certain extent, some of it is due to the history of the real world. You could say you’re coerced by being born into a certain set of rules associated with your country, and it’s supposed to be for the benefit of society as a whole. We’ve seen that work very effectively at various kinds of scales, and we’ve also seen it be abused at other scales.
So how do we allow the good things, whether that’s the group support for addiction or other types of things where there’s a social self-coercion in the real world when you join one of these support organizations, or whether it’s chamas with financial discipline? And then when does it move into the category of, “Hey, where is it challenging?” Where can we both support these kinds of things, but also where are the harms?
I think I’ve done maybe too much in this particular lens on the potential harms of self-coercion, when what I was hoping to do was talk more about the balance there. But that’s exactly why I need people like you and Matthew and others who are interested in this to help make it better.
Anybody else have a comment on this? Do we work on the philosophy side of things, try to get the irreducible person really clear, or get the various forms of coercion really clear? Or is it the compliance, governments, and technical side, since we have 10 years of development of systems? Government identity systems are being deployed in Europe. Switzerland has their own little twist on it because they’re not part of the EU. But then we also have countries like Taiwan who are basically saying, “Well, at some point our identity system may be captured by a government that doesn’t have the principles that we have. So we have to be even more careful about how we do our digital identity systems, because they can become future coercion points.”
Maslow’s Hierarchy and Requirements
Andre Ferreira: The model that came to my mind when I was listening is Maslow’s hierarchy of needs. From a philosophy standpoint, the way that I saw it, or the way I thought about it, is that once your safety needs, the individual ones, are resolved, you move on to the next level of the pyramid, which would be social connection.
So from a requirements perspective, viewing it through that model, I think I have enough philosophy, but it’s very narrow. Philosophy is not my area; my subject matter is not there at all. That would be the lens through which I would actually look at it. Accepting that, going up the pyramid to self-actualization, and realizing that you’re very lonely at the top, allows me to then define some requirements towards the needs that I have at a social level. And for me, it means that my safety needs, which are more important to me and basically about survival as well, are where I would actually define the requirements.
Now, I don’t remember the 46 principles off the top of my head anyway, but those are a very good starting point so that we can actually move forward to building something, and that will raise more questions. So I found the CSSPS paper quite interesting in that regard. But it does need a lot more input from a philosophy standpoint.
Christopher Allen: Yeah, I wanted to show that spreadsheet. I thought I had it. Yeah, so this is the supplemental spreadsheet from the CSSPS paper. Although I think it’s missing something; maybe I don’t have the whole spreadsheet in here. I don’t think it imported the whole thing, but you get the idea. They’ve really separated things out into a large set of “if you can’t do this, then you’re not doing it right” criteria.
I was fascinated by the things that they chose not to do. I would love to refine what they are doing, and I would love to see some more effort by Andre and others. Andre, I don’t know if you’re up for being around 11 hours from now, at 7 a.m. European time. I’m not sure which time zone you’re in.
Andre Ferreira: I’ll try to be there. If they’re around, I would like to meet them as well.
Christopher Allen: Yeah, I hope they’ll be here tomorrow morning. They had expressed interest in joining. In any case, I’m as interested in advancing that forward as I am picking out all the things they said, “Well, we don’t know how to do that” and go, “Hey, maybe there is a way to do some of that and to identify it.” Cool.
Closing Discussion: Lens Selection
Christopher Allen: So in our last minutes here, for those of you who’ve said, “Yeah, I really am interested in a lens,” do you want to say a little bit more, now after this discussion, about what you would like to see added to your lens, or what you’d like someone to join you to help you with?
Andre Ferreira: I can tell you from the one that I’ve been looking at. I’ve been through the spreadsheets, but now it’s actually thinking about how to create a system that goes through all of that so that we can actually validate them. So you have the controls there, but what does that actually mean? And without having a system, the question that I posed and I’m still reflecting on is pretty much, and now my train of thought just went away. I had this question for you.
If I have sovereignty over my identity, what stops me from destroying that identity and creating a new one, or is that the ideal situation? Let me rephrase that. I can’t cease my own existence, well, I can, but I shouldn’t destroy my own physical existence as an identity, but I could do that with a digital one. For anyone that is actually connecting to that identity, or other systems which I as an individual have control of, what happens when I cease that identity, when it ceases to exist and I create a new one?
Christopher Allen: Yeah, so this is kind of associated with this context-boundary lens. The idea here is that this is actually some of the harm side of it.
Contextual Privacy
Christopher Allen: I wrote this article twice. I wrote it once in the early 2000s, and then I did a major update. The basic thing here was that when I went to these privacy conferences, people kept saying the word privacy and meaning entirely different things. And I basically said, “Hey, there is this thing called contextual privacy,” which didn’t fall into libertarian privacy, or defensive privacy, or human-rights privacy.
But then we kind of hit this third kind, where some of it is the ickiness factor. In particular, I’ve noticed women talk about this: they really often want to keep their family very separate from their business life, much more than I think at least Western men do.
So I had made this thesis that the lack of personal privacy causes you not to be yourself, but the loss of contextual privacy allows others to see you not as yourself. In other words, people have different contexts, different language, different terms for all these different types of things, different styles of speaking. My ex-wife would, with some people, speak with a deep Southern accent, and she was totally unconscious of it. And then with other people, she would sound straight-up Midwest American. It’s because she was in these two different contexts and could use the language of those two different contexts.
You hear a lot about this in American Black culture, where they’re having to constantly switch between contexts. So without contextual privacy, you basically have others misinterpreting you, or making incomplete interpretations about you.
That really increases the importance of being able to have multiple identities, as some people call it, and the ability to potentially say, “I don’t want that identity anymore.” danah boyd, who’s one of my favorite sociological researchers, says that modern children today will experiment with 10, 15, 20 different kinds of modalities and identities in their online life. They will try being the goth girl, that kind of thing. They may not realize it’s an experimentation, but they do it. And what she’s found in her research is that teenagers do a lot of this experimentation before they settle on something that is them.
So that means we need to be able to forget those types of things. We need to be able to say, “Those are not me.” But it’s a little bit different from the right to be forgotten; it’s a different rationale. And of course, there’ll be others who will basically say, “Well, this is how trolls happen on the internet, and everybody has to use real names.” So clearly there’s a balancing act that has to be done.
I think there’s a real interesting opportunity to talk about contextual privacy in the future. Ian, I’d be curious what you think about contextual privacy, given what I was just sharing. Go ahead.
Ian Grigg: Yeah. Privacy is a funny thing. Some people cast it as a right. And I think that might be an unfortunate restriction on understanding what it is. To me, privacy is a defense against the unknown. If the unknown person out there knows nothing about me, then they can’t attack me, and that reduces my threat surface, my attack surface.
But when it comes to actual real life, what we do with privacy is we go to the bar and we share beers and we talk about the stuff we’ve done and we boast about our secrets. We share stories and anecdotes and we happily tell people, “Yeah, I managed to do this thing.” And actually, it’s a piece of private information. But over beers, it’s not private anymore.
So privacy tends to be heavily modulated by the context. And therefore, it’s not a universal. It’s something that you hope random people you haven’t admitted into your circle can’t see anything of. But you really do want to share that stuff with the people you meet. And if you think about it, at the extreme end of that spectrum, you go on a date with someone you find attractive, and you actually are sharing private information because you’re trying to attract that partner.
Can we as technologists add to that debate, when technology itself is such a blunt instrument that it can’t possibly cope with the nuances of when we want privacy versus when we don’t? That’s such a fluid human spectrum that I just doubt that technology can provide the answer there. All it can really do is say things like, “Ah, yes, in SSI, self-sovereign identity, we protect our privacy by having complete control.” But it doesn’t take much thinking to realize that an individual having complete control over all their data is a non-starter. There’s too much data; they can’t cope with that. So you get into trouble there. Anyway, it’s a very complicated debate, and I don’t think the answer is simple. I don’t think there are any binaries in privacy.
Progressive Trust
Christopher Allen: Yeah, this is where I have a number of articles on this particular topic, in particular this progressive-trust one, where I at least try to articulate a model. It starts off with: what is the context? Is it just two of us in a bar where nobody can overhear? What are my risks in this particular situation? But this is just the internal side. Am I willing?
Only then might I go and start making some assertions about some basic information. The goal is, do we have some mutual interests in things? They’re not proven. They’re just loosely asserted. And then if parties want to move deeper, they may want to look at the basic integrity of them. Do they make sense? Are they whole? Only then might you go to actually verify proofs.
In the real world, I remember when I was first starting my career, it was, “Oh, that business card was embossed, so they actually put some money into making that card.” Thus that card has a little bit more legitimacy, because it was more expensive than a cheap card from a quick-print place.
But then as you move into these higher levels, we’re basically aggregating references from a variety of different places. And now we’re getting more into the group types of things. Within our community, whether our community is local contractors in my neighborhood, do they comply with the group practices?
And only then do we make a decision: do we want to take the risk? What can we do to mitigate those risks? Maybe have some kind of threshold, like wanting my wife to join me in approving this. And only then do we actually do the interaction, whether it’s just that we’re going to meet again in two weeks and we actually succeed in doing so, or whether we’re going to enter into a contract with each other for you to work on my kitchen, or we’re going to get married or go into business together.
There are these aspects of things, but then ultimately things can go wrong. So how do escalation and disputes work? I’ve at least tried a first pass at what that might look like.
Identity Revocation
Andre Ferreira: The specific thing that I had in my head would be something like: when you join a group and they get access to your identity attributes, and you find out that it’s not something you want to be associated with for whatever reason, how do you do that without destroying the identity? Because in real life, if you’re not happy with that group, you move on, you stop going to that bar, you go drink somewhere else. But while keeping the sovereignty of that identity, the specific attributes that you shared, how do you stop sharing those? No, I actually just answered my own question.
Christopher Allen: Somebody mentioned some work by Drummond Reed and Markus Sabadello. One of their big things, before blockchain, before a lot of the decentralized stuff, was a system where you had these link contracts, I think is what they called them. They weren’t really smart contracts, but they were this thing of: this is the information I’m sharing, along with my ability to pull it back. There were some interesting ideas there that I’ve not really seen much of since then.
Of course, that doesn’t stop people from saving this data anyway. A link contract relies on a legal contract that basically says, “Yeah, when you say you want to pull your address out of my database, I will do so.” And of course, these days, when we have LLMs that can make amazing correlations without actually technically having the data, it becomes even more difficult. But there’s some space in there that’s still viable and interesting.
The Nature of Trust
Ian Grigg: Yeah, coming back to the concept of trust and so forth. In researching what these chamas meant, I went through this process of defining identity, and I came back with a definition that relied on trust. And then I realized that the definition was meaningless because I hadn’t defined trust.
I sat there and thought about trust quite a lot. We discussed it and debated it. And the metaphor I came up with for trust was this. Basically, you start with the tabula rasa, or blank slate, theory. A baby born today has nothing in their mind. They’re an empty vessel, a blank slate, a tabula rasa. But then you look at a person who’s 20 years old, and they know how to do this thing called trust.
And the more I thought about it, what I realized was that you could view the process of growing up and the process of bringing up, taking a child to adulthood, or being a child and reaching adulthood as the process of learning how to do trust.
Now, if you think about all the little things that happen during childhood, from zero to 20 years old, they can all be viewed as, “Oh, yeah, you’re doing things whereby you have to make decisions. You have to learn what your limits are. You have to learn what the limits of the other person are. You have to figure out whether you trust that person, and how you do trust, and so forth and so on, all the way down the line.”
So consequently, the insight I would bring to this conversation is not that we should spend our time growing up again, starting at zero and going through the 20 years. It’s that when we start talking about the word trust, there might be 20 years of learning, experience, pain, and reflection in that word, in that one word. And here we are. We’ve locked it into five little letters, one word: trust. And we think we can put it into a 20-page document. We think we can come up with 10 requirements. We think we can program this, when it took you 20 years to figure out how to do it.
I think there’s a knowledge gap here. There’s a wisdom gap. There’s a process gap here. I don’t think this is amenable to simple technology.
Christopher Allen: Yeah, I definitely, I mean, this has been an ongoing problem in the sense that with the original 10 principles I said, “Hey, let’s meet at Rebooting Web of Trust in a month and let’s continue to articulate and make it bigger.” But it was hard to even get people to think about the 10 things.
The same thing goes with this progressive trust model. I have no confidence that this even comes close to the richness that is needed. I believe it’s better than what we have right now with our digital systems, but I also don’t believe that it’s complete. And as you can see, it’s complex. You don’t learn how to do this graph easily. And as you said, it’s actually in reality much harder. And it’s also completely different in different contexts.
What is the nature of trust in my neighborhood where people on my street know that my partner and I walk dogs for them when they’re out of town? We’re not paid dog watchers. We just do it as friendly neighbors. They’ll give us a key to the house when they’re gone. Which is very different context than another dog walker, which is somebody is paying 25 bucks for them to come and walk their dogs. Just the context of one being a neighbor and the other one being paid makes for very different trust models.
Closing and Next Steps
Christopher Allen: Okay. It’s 11:38. I hope that people have at least a little bit more of a concept of where they might want to dig in. If one person in this group said something that resonated with you, and the two of you would like to start a conversation either on GitHub or in Signal, that would be a great way to kick off, to see if we have the two or three people and a topic that is worthy of further collaboration. That’s really what I’m hoping will happen.
For those of you in the EU, we’re having a 7 a.m. meeting 11 hours from now with the Asia-Pacific group. So there may be some good crossover there if you’re up for two meetings. Otherwise, these will both be recorded and shared, probably on Wednesday or Thursday.
Any last comments before we close for the day? Any other thoughts? I’m not seeing any raised hands.
So thank you very much for joining this conversation. I’m planning another call like this in two weeks, and I’m hoping by that point we’ll break up into some breakout rooms with two or three people, have conversations, and then bring out the thoughts for everybody. I’ll be sending notes on that meeting in two weeks, and we’ll move forward from there. And of course, please use Signal or GitHub. Thank you very much.
Paul Fuxjaeger: Thanks for organizing this.
Christopher Allen: You’re welcome. Absolutely.
Ian Grigg: Thank you. Thank you.