Participants: Christopher Allen (Blockchain Commons), Andre Ferreira (OWASP), Denken Chen (Taiwan Ministry of Digital Affairs), Shigeya Suzuki (Keio University), etc.
Duration: ~90 minutes
AI Transcription: Automated transcription (App: Zoom, Model: Zoom auto-transcript)
AI Processing: Medium cleanup (App: Claude Code, Model: Claude Opus 4.5)
Note: This transcript has been moderately edited for readability - filler words removed, transcription errors fixed, fragmented sentences cleaned up, paragraph breaks added, technical terms standardized. Original content and speaker voice preserved throughout.
Opening and Introductions
Christopher Allen: Welcome, everybody, to our Revisiting SSI session today for Asia, Pacific, EU, and Africa. As always, I want to thank the sponsors, in particular Stream 44 Studio, who is supporting this strategic inquiry to shape the next decade of digital identity. And of course, my monthly GitHub patrons. It is contributions like these that allow me to continue to do this type of work. Thank you very much.
Today’s agenda: we’ll do some more introductions and a little bit of context storytelling. Talk about working circles. Not quite sure that we have enough people for a breakout, but we’ll do the breakouts in the main session and then talk about next steps. But the big goal here is to find a lens that calls out to you and find someone to join you.
Quick introductions: your name and affiliation, and just one sentence of what brings you to this work. As an example, I’m Christopher Allen of Blockchain Commons. I wrote the original 10 principles in 2016. I’m here because I believe we’ve lost our way, and this community can help us find it again. Andre, you just did this about 12 hours ago. You want to repeat?
Andre Ferreira: My name is Andre Ferreira. I came to SSI through research I was conducting for OWASP focused on the Cornucopia IAM suites. The goal is to offer a new companion deck as part of their 25th anniversary commemoration. What I’ve been exploring is how SSI can be aligned with Cornucopia, or at least how I could apply the methodology to help players understand and work with SSI.
I got inspired because I didn’t know much about it. I came across it in November last year. Lots of new knowledge. I got really interested in being able to work with peers who know a lot and have a lot of experience in the identity space. As for the lens: if it’s the compliance one, as you mentioned before, I’d be interested in collaborating there and taking the work a bit further.
Christopher Allen: Okay. So, Denken, why don’t you jump in with an intro, and then I have some other questions if you just want to help with those as well.
Denken Chen: Sure. I’m Denken Chen. I work with the Taiwanese government, Ministry of Digital Affairs, to launch our own national identity wallet. We face lots of challenges along the way during the building process, so those lenses resonate a lot with me.
I personally would like to start from the business entry point of view: how to form the fuel of the whole SSI system. It can be a broader discussion, but I think it will eventually converge into questions like: does a legal framework from the government need to be established first, or is there a business opportunity that can drive it? Those are the kinds of things I would like to explore.
Christopher Allen: Thank you. Shigeya?
Shigeya Suzuki: My apologies for not showing my face. My camera is broken at this moment. I’m Shigeya Suzuki, researcher and professor at Keio University Graduate School. I’m teaching 25 students in my group at this moment, and some of them are really enthusiastic about digital identity.
I came to this digital identity area for a couple of reasons. One is that I was involved in some data security projects. The other is that when we started working on blockchain technology about 10 years ago, what we wanted to do was not use it as a digital currency but for application use, especially in the context of data security. I found that we needed digital identity to make that happen, because we need to know about identity when assigning data objects.
About five years ago, I finally found this area of activity at the W3C and started getting involved with the DID working group there. I helped with some of the discussion in the later stage of the DID working group, and I’m continuing on in the Verifiable Credentials working group.
I first heard about self-sovereign identity about five or six years ago, and at the same time I learned about the principles. I was part of the Trusted Web discussions that happened in Japan over the last five or six years; some of the SSI discussions influenced the Trusted Web work, which really focuses on how to communicate or transfer data in reliable and verifiable ways. I think some of the articles Christopher wrote and presented as reading materials are really aligned with what we discussed in the Trusted Web. I think we can potentially learn from, and of course contribute to, the discussions in this area.
Origins of Self-Sovereign Identity
Christopher Allen: Great, thank you. So when did I first hear about self-sovereign identity? I coined the term in 2016 somewhat as a reaction to a number of years of user-centric identity and how it just was not working. Like a number of people, I was always a little bit uncomfortable with the “user.” It still felt hierarchical in some ways.
Then I did some really interesting readings on the history of sovereignty over the last thousand years and realized, “Wait a second, where is the individual’s place in this? Otherwise, we risk feudalism.” So that’s how I chose the term.
I really did try to identify and define it in some kind of concise way and failed over and over again in the spring of 2016. This actually came up in our call yesterday with the EU-US, where there was some desire: “Oh, well, what we really need is a concise definition.” I’m like, “I’ve been trying.” The closest I really got was that initial set of 10 SSI principles, which I first introduced at the United Nations. And it really kind of took off.
I had said, “Hey, we’re going to be having a Rebooting Web of Trust. Let’s talk about it more. Let’s improve on them.” And that never really happened. That was the baseline. I feel like the biggest obstacle is that there’s some depth behind them. I compressed it down to the 10 SSI principles, but I didn’t talk about some of the things that I left out. I do have some articles on the origins of self-sovereign identity that talk about that. And these things keep coming back to haunt me.
Project Goals: Lens Review and Revised Principles
Christopher Allen: So what is the goal here? I’m hoping that we can review some of these lens briefs. Right now I did the first draft on all of them, and many of them I consider to be weak or kind of one-sided. For instance, one of the critiques of self-sovereign identity is that it is too libertarian or too Western or too individual because of the term “self” and “self-sovereign.” So I tried to capture some of the alternatives there in generative identity and some of these other different ideas. But I really don’t want to publish those documents without somebody who is much more deeply involved with group identity and generative identity and some of these other kinds of aspects.
What I’m seeking is some peer review, and up to some level of co-authorship, to at least polish these to the next level during the rest of this month and in February. And then from that, identify some key areas where it’s worth diving deeper. That would be us working on drafts of papers on specific things that would ultimately help influence what I hope to have on April 26th: a new set of revised principles, or at least a first draft of it for broader discussion, where all the principles have been touched in some fashion based on the last 10 years of learning about them.
These will influence the next generation of decentralized identity, and there’ll be more papers and articles based on that, depending on the various co-authors and needs that are there over the course of the year, because it is the 10th anniversary.
One of the things that I tried to do, which maybe was a bit overwhelming, was to capture a lot of my own concerns: things that I felt I didn’t quite capture, or that I feel have been misinterpreted, or things that have come out of looking at over 100 papers so far - and I think there are about 500 on my list. I’m still working on this, but basically these are some of the most important ones that I’ve identified. They all have a little bit of an abstract, why I think it matters to some of the different lenses, and various kinds of tags. I’ve got about 100 at this point in this document, and about 500 on my list that still need to be read and grounded.
The Four Lens Categories
Christopher Allen: The idea behind the lenses was that each reveals something the others miss, but none of them is complete. The idea was that we would get two or three people to work on at least fleshing them out a little bit, so that I feel it’s not just my scholarship that is there.
There are four big categories in the lenses:
1. Foundational Principles - Starting with principle number one of the old self-sovereign identity: What is the nature of the irreducible person? What are some of the flaws in the way that’s been interpreted in different places? There’s a lot of concern by various people about property paradigms in our discussions causing problems, because property can be owned and it can be sold, but identity really can’t. There are issues around the right to transact: digital participation isn’t like the real world, where you can just show up. You have to pay to get there, whether it’s paying for an internet service or to participate on a platform. That means freedom of association and freedom of speech depend on being able to pay for them. That’s been increasingly a challenge.
That being said, there are a lot of critiques of a number of self-sovereign identity projects as being too financially oriented, or using financial blockchains and having disadvantages from that. The cryptographic paradigms - I personally feel like there are a lot of new things emerging that really will transform what the technical possibilities are.
2. Preventing Coercion - I’ve been saying for the last couple of years that maybe focusing on privacy and a right to privacy is burying the lead. What we’re really trying to do is prevent coercion. There are all kinds of ways that systems compel behaviors. So what is the nature of coercion resistance? What about self-coercion, which can be good and bad? We had a good discussion earlier today about how discipline is encouraged in various kinds of systems that allow you to self-coerce to save money or change your behaviors. There are some good sides of self-coercion - the term literally used yesterday was discipline. I kind of like that: self-discipline, using these tools for self-discipline.
But there are also risks: through various kinds of mechanisms, it’s possible for the control to feel like it’s really your idea when in fact people are taking advantage of cognitive bias. This leads to choice architectures. And what is the nature of binding commitments? People say, “Well, self-sovereignty means I can walk away at any time.” Well, sometimes no. But where is that line?
3. Relational and Contextual Identity - Identity exists in the context of relationships, both with other people, but also to your context - your work context or your play context or your family context - or to different scales: your street, your neighborhood, your county, your country, the world. There are a lot of tough issues here. I think we’ll probably have a lens on one of these or maybe some combination. There are at least three people who have said they’d like to dive into this a little bit more. But so far, nothing on stewardship. I think there are some really interesting things going on in, for instance, Utah, around what the nature of caring is for those who can’t consent, whether it’s old age or children, but also where that can become abuse.
4. Compliance, Governance, and Technical Lenses - I think that’s where the people today are more in alignment, around some intersection of these. The first one, principles to compliance, is really about how we make things measurable. If the Taiwanese government or the Swiss government or some business says we’re asking for systems that protect people and we like the idea of self-sovereign identity, what are the checkboxes? That being said, not everything can be measured. And in fact, there are risks in trying to measure everything.
I am particularly inspired by the CSSPS work, not only from the vantage point of making more of the principles into objective, measurable checkboxes in technology, but also I was fascinated by their list in one of their Excel spreadsheets about the things that they left out that they didn’t try to do subjective measures for. When I looked at them, a lot of them I felt like, “Oh, those are the things that the regulatory framework should be involved in.”
Discussion: Cornucopia and Threat Modeling
Christopher Allen: Andre, you’ve now heard this twice. And I think you were also in this territory. Your work in the Cornucopia project - where does that fall on the regulatory versus technical requirements side of things?
Andre Ferreira: This lens is more something I got interested in from a professional perspective. It doesn’t really map onto the project itself. It does allow me to explore more, but it’s my own interest in exploring. The reason why is GRC - trying to move my career more towards that side and leaving the hands-on coding behind.
I did enjoy it. The paper that I’ve read is easy to follow. The artifacts themselves tell a bit of a story, and I would have questions for the authors if we got the opportunity to speak to them. Regarding exactly what I’m doing: because it’s a new field, it requires a lot of mappings - it’s for threat modeling, so the threats themselves, plus sources where people can find more resources to understand or discuss them. That will position the project towards the future, to encounter and facilitate that.
Christopher Allen: You spoke about threat modeling. One of the things that I talked about in one of the other areas here was coercion resistance rather than privacy. Trying to pop things up: we want less violence in our society, less coercion, less dark patterns. Do you feel like the threat analysis is just focused on monetary losses and things being stolen or denial of service? Or is there any urgency to pop it up a level to say there are other kinds of systemic harms and threats that also need to be considered?
Andre Ferreira: I also worry about the power that corporations have - sharing my government ID with social networks when they only need to know if I’m above 18 years of age. I think it’s abusive and leads to abuse.
I got very interested when I started making requests for information under GDPR. I wanted to understand all the data that they had, to try and figure out how it was organized and what could be extrapolated from my activity. From your data alone, you won’t learn what is done with that data or what is learned about you. Looking towards the future, I don’t think that’s a really good thing.
The other bit is context: we tend to be different ways for different people depending on the context that we are in. I was interested in the best way to actually separate that without having to destroy the identity. Like the example we had earlier of going and drinking in a bar because you like a group, and suddenly there’s somebody, or an event, that you don’t identify with, and you want to move on to a different location.
Having knowledge about a particular individual and being able to influence their political views, or whatever aspect of life will profit someone else - and the level of understanding that individuals have that exactly that happens, and how easily it happens - is also interesting. How do we empower individuals to protect themselves from all of those events, and how do we simplify it into simple sentences - the challenge that you mentioned earlier today of trying to make it fit into a tweet? How do we actually achieve that?
Discussion: Trusted Web and Progressive Trust
Christopher Allen: Shigeya, where have you kind of fallen? Have you had a chance to look over the lenses? Any particular ones feel like they don’t survive your test and need significant work?
Shigeya Suzuki: I couldn’t read the lenses yet, but I understand the topics you were trying to cover. As always, for my way of thinking, I want to understand what the gap is between what these lenses discuss and my current ongoing projects like Originator Profile. I always want to start with that kind of gap, thinking about the gaps between the discussions and where I can start working.
I had a really interesting discussion about the similarity between some of the discussion you did on the membrane and the recursive structure - we use the term “community” for a group of identities in the Trusted Web discussions. I’m particularly interested in the structure of these boundaries and how to work with them. But I still need to digest it. At least, I need to read these lenses well enough to understand them first.
Christopher Allen: Have you had a chance to read the progressive trust articles that I’ve written on building trust ingredients?
Shigeya Suzuki: I read it a while ago, but I’m not yet connecting the progressive trust idea to what I’ve read recently with regard to this session. At the time, I thought it was really interesting. There is some connection to our currently ongoing work - I have two projects tackling misinformation and disinformation. One is trying to tackle it from the origin of the information. The other technique looks at the contents of the information without relying on other information.
I think there is something in common. I want to try to digest that into the framework of the Trusted Web discussion, and also into what you discussed here, but that hasn’t materialized yet.
Christopher Allen: Maybe one of your new students might be interested in this particular category. In my own things, when I start dividing things up into these categories and try to be very deliberate about the words I use - for instance, in what I call the “assertions declared” phase, I use the word “declare” and “assertions” and then I “accept” those assertions. But I avoid things like “assess” in there and actually deprecate “validate” entirely because it’s an overloaded term. I try to have “you can verify secrets, but the validity is a different thing.”
At even higher levels, we’re basically endorsing and affirming different references. Just simply having a language that pulls it apart and says trust is not black and white, trusted or non-trusted - it has these graduations. Trying to create at least a consistent language for how I use it has been very helpful in my own architecture design.
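A minimal sketch of what such a graduated vocabulary might look like in code. The phase names below only paraphrase the terms mentioned here (declare, accept, verify, endorse/affirm); they are illustrative, not Christopher’s published progressive-trust taxonomy.

```python
# A sketch of a "consistent vocabulary" for graduated trust, loosely inspired
# by the terms mentioned above (declare, accept, verify, endorse/affirm).
# Phase names and ordering are illustrative only.
from enum import IntEnum


class TrustPhase(IntEnum):
    DECLARED = 1   # an assertion has been declared by some party
    ACCEPTED = 2   # the relying party has accepted the assertion as input
    VERIFIED = 3   # cryptographic material (signatures, secrets) checks out
    ENDORSED = 4   # a third party has endorsed or affirmed the assertion
    # "validated" is deliberately absent: verifying a signature says nothing
    # about the validity of the claim it covers.


def advance(current: "TrustPhase", evidence_ok: bool) -> "TrustPhase":
    """Move one graduation up the scale when new evidence is accepted."""
    if not evidence_ok or current is TrustPhase.ENDORSED:
        return current
    return TrustPhase(current + 1)


if __name__ == "__main__":
    phase = TrustPhase.DECLARED
    for step in (True, True, False):
        phase = advance(phase, step)
    print(phase.name)  # VERIFIED: trust is graduated, not binary
```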
Taiwan Digital Identity Wallet
Christopher Allen: Denken, I don’t know where you guys are with the Taiwanese digital identity as compared to, say, the Swiss EID work. But it’s coming on fast. What are the opportunities to influence things? And where are things already locked in stone?
Denken Chen: Before talking about that, I would like to add a little bit more about what Professor Shigeya has been working on. He’s been putting effort into a system called Originator Profile that’s based on W3C VCs. I think it’s super interesting that it’s building up tracing of the author - the authenticity of content from news organizations there in Japan.
That resonates with me regarding one of the lenses, cryptographic paradigms, which compares VCs with PKI systems and what has been constrained by PKI. I know they get questioned a lot about the difference between Originator Profile and C2PA, which has been promoted for years. During this era of AI-generated content, I know there are several social platforms that have adopted C2PA for image metadata. The government might even require them to add that metadata to distinguish what’s been made by humans versus AI.
Those kinds of identity things have already been implemented in C2PA, on the PKI system. So that’s one thing I’m very curious about: Will they face limitations based on those PKI assumptions, or the challenges we already faced in the web PKI systems? What else should the whole SSI system improve?
Christopher Allen: My biggest concern regarding these types of things is kind of the experience from Hong Kong. If everybody’s device that is able to document content and document different kinds of activities is now traceable back to the person who has that device and can’t be safely elided in some fashion to protect the sources, I think that it can lead to very authoritarian type of practices.
I’ve talked with them about doing better elision and data minimization practices and having different kinds of proofs that can act as stand-ins for a lot of their transparency goals. Things like: yes, there is an accredited press person behind this, but also very carefully hide as much as possible the data of who that particular press person was, where they were at a particular time. That’s been my concern about the C2PA stuff, especially since a lot of the identification they’ve kind of punted on and basically said, “We can’t do people identity, but we can do hardware identity.” So they’re going to get Sony to put it in their camera. But cameras are owned by people and in the possession of people.
Originator Profile vs C2PA
Denken Chen: Do you have any other things to share about the Originator Profile versus the C2PA, the PKI system? I believe you guys have done some research on how to differentiate the two. Do you have anything to share about this?
Shigeya Suzuki: In our context, C2PA is currently relying on PKI. Relying on PKI means everything - operational requirements or anything related to PKI - needs to be bound to the current web PKI. The second thing is that Originator Profile is independent from PKI, but it can work alongside PKI.
The third thing is that C2PA is focusing on the medium, and also on the workflow within a publishing company like a newspaper. They are focusing mainly on use inside the internal workflow, not on end users. Actually, end users can check the metadata of content, including C2PA-signed, attested image files, but whether it is useful for end users, or only for the media, is questionable.
By contrast, Originator Profile focuses on how to deliver content to the end user. We are focusing on text on websites, which is the most influential in terms of disinformation and misinformation. So that is the biggest difference.
I’m currently feeling that it might be possible to discuss OP in the context of the SSI. OP is actually really flexible. I want to discuss OP within the context of SSI in the future. That is clearly an interesting topic to discuss.
Christopher Allen: I don’t think that there is a lens around content identification or content control in and of itself. The closest that comes to it is some of these around choice architectures and context boundaries. But neither is really precisely about that.
I personally ran a fairly popular niche community website for about 20 years. I was always frustrated that we had no ability to control what was advertised on our site. Google controlled it. We had no ability to say, “Wait a second, our site is not about buying cars, or about scantily clad women’s apparel ads that are secretly trying to draw you into something.” Total inability - our choice was either don’t monetize or give up all of our ability to do things.
So I certainly have sympathy to the media being able to choose what they wish to display to their customers. But I think C2PA is going beyond that. They’re trying to say a customer can do it.
Verifiable Claims vs Verifiable Credentials History
Christopher Allen: So it was interesting. At the Boston Rebooting Web of Trust - at that particular point the draft of the standard was called Verifiable Claims - the early pre-C2PA people, Adobe and a couple of news organizations, basically came by and said, “Hey, we’re really excited about this verifiable claims thing, because we want to verify claims that something is factual, that something is by an accredited journalist.”
It basically caused a bunch of soul-searching during the event, where we were basically going, “I’m not sure we can actually help these people, because these are not actually verifiable claims. These are verifiers of who the authors are.” So it’s really verifiable credentials. But there had been a lot of political resistance, when the Verifiable Claims Working Group was founded, to using the term credentials at all. Eventually people were persuaded to rename it the Verifiable Credentials standard - which is simply making statements about the authorship, not the content of what the authors are doing.
That’s been an ongoing tension ever since. We still see people going, “Oh, well, it’s signed by such and such. That means it must be owned by such and such.” Well, no, that’s another dimension. That’s something entirely different than either authorship or the originator.
Shigeya Suzuki: One of the things I want to discuss within OP, as part of the Originator Profile project, is that the kind of intent the signer is stating needs first to be clearly stated, and maybe we need to distinguish differences in signing policy. That may let signing be used as a signaling mechanism towards the receiver of the information.
I think that part is not clear enough in the current use of digital signing in general. For example, when signing is used within a protocol, like some IT protocols, then it is clear that this is signed for reasons within the protocol definition, so the intention is clear. But if you start using the signing key in various contexts, the intention is not clear from the receiver’s point of view. Sometimes the intent is that the information has been verified or validated by the signer, but sometimes it’s not.
Maybe one of the things I need to be working on is how to clearly signal such kind of information to the end user.
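A hypothetical envelope format illustrating that signaling idea: the signature covers an explicit purpose field, so the receiver knows why something was signed. Neither OP nor C2PA defines this exact field; the identifiers are placeholders, and the sketch assumes the Python cryptography package is available.

```python
# A hypothetical "signing envelope" that binds an explicit purpose to the
# signature, so the receiver can tell *why* something was signed (e.g.
# authorship attestation vs. mere transport). Field names are illustrative.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key = Ed25519PrivateKey.generate()

envelope = {
    "payload": "https://example.news/article/123",  # hypothetical content reference
    "purpose": "author-attestation",                 # the signer's stated intent
    "signer": "did:example:newsroom",                # hypothetical identifier
}
message = json.dumps(envelope, sort_keys=True).encode()
signature = key.sign(message)  # the purpose field is covered by the signature

# Receiver side: verify first, then read the purpose to interpret the signature.
try:
    key.public_key().verify(signature, message)
    print("signed for purpose:", envelope["purpose"])
except InvalidSignature:
    print("signature invalid; ignore the stated purpose")
```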
Trusted Signing Interfaces
Christopher Allen: There’s definitely a lot more that needs to be done here. My microcosm for this lately has been cryptocurrency wallets. We have over 13 Bitcoin wallets that use our technology. We’re increasingly having interest from Zcash and Polkadot and Tezos and a variety of other wallets beginning to use some of our specifications.
One of the biggest issues is that it is incredibly important to understand not only what the intent of the signing is, but also to convey, in some kind of trusted fashion before something is signed, what it actually is that you are signing - what the nature of your intent is, what it really is, and whether it’s accurate.
This is particularly important in all of the WalletConnect family of tokens and platforms, because they use the same key for signing expensive transactions that transfer your assets to someone else as they do for logging into a website, which I think is ridiculous. So that means that it’s even more important that there is a trusted UI that somebody can rely on that basically says, “Oh, you’re not secretly signing away your asset to Shigeya’s retirement fund. All you’re doing is saying that you will respect the work that Shigeya has done, or some requirement that you have for access to reprints, or whatever it might happen to be.”
We’re just not really good at this. That’s one of the big projects for Rebooting this year: to work out what a trusted signing interface is that accurately describes what the signer is really intending to do. It has nothing to do with content. I don’t think we’re talking about signing content for authorship. But it is very similar in the sense that there’s this whole challenge of what the nature of a trusted UI is. How do you have segregation of interests?
We increasingly work in ways where a single key is insufficient. There’s a key that’s on a hardware device that is separate from the key on your phone. And it’s only when they work together that things can proceed - that is kind of the new paradigm with FROST and some of these other new technologies. I’m not seeing any of that in C2PA or anything yet. I keep getting told, “Oh, when this decides that it’s a standard, we’ll consider it.” And I’m sort of saying, I don’t know if they ever will.
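A simplified sketch of that “no single key is sufficient” idea: a request proceeds only when both a phone key and a separate hardware key have signed the same bytes. This is not FROST itself - FROST aggregates key shares into a single signature - and it assumes the Python cryptography package; names and the sample request are made up.

```python
# A simplified 2-of-2 approval check: the transaction proceeds only when both
# the phone key and a separate hardware key have signed the same request.
# Not FROST (which would produce one aggregate signature from key shares);
# this only illustrates the "no single key is sufficient" pattern.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

phone_key = Ed25519PrivateKey.generate()
hardware_key = Ed25519PrivateKey.generate()

# A trusted UI would render exactly this request before either key signs it.
request = b"send 0.1 BTC to bc1q...example"

signatures = {
    "phone": phone_key.sign(request),
    "hardware": hardware_key.sign(request),
}
verifiers = {
    "phone": phone_key.public_key(),
    "hardware": hardware_key.public_key(),
}


def approved(request: bytes, signatures: dict, verifiers: dict) -> bool:
    """Require every registered key to have signed the same bytes."""
    for name, pub in verifiers.items():
        try:
            pub.verify(signatures[name], request)
        except (KeyError, InvalidSignature):
            return False
    return True


print(approved(request, signatures, verifiers))  # True only when both keys signed
```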
Low-Hanging Fruit: Taiwan Convenience Store Pilot
Christopher Allen: So what is low-hanging fruit at this point? Denken, what’s low-hanging fruit that we might be able to tackle? Either a specific self-sovereign principle, or a particular lens, or - as you had said something about the commercial side of this - how do we get non-governments involved in this stuff?
Denken Chen: Firstly, I would like to share how we are doing here in Taiwan recently. I posted a press release that we have launched our pilot case. The most important one, the first product for us, is used for convenience store package pickup. And it’s the only scenario for now.
It’s important here in Taiwan because almost 80% of e-commerce delivery uses convenience stores for pickup. And the number is quite huge. In Taiwan, we have around 200 million packages every year. So it’s an important life service in Taiwan.
The scenario really resonates with the binding commitments lens. It has been carefully constrained and well designed across different stakeholders. For example, we are using a verifiable credential issued by the telecom company, but it does not use the full phone number - only the last three or five digits. That’s good enough for the package pickup, along with the full name. Because of that, we’re not impacting the existing business of the telecom company, like SMS or their other identity services.
On the convenience store side, they accept it because pickup can be done more quickly than with physical cards. So I think we designed it one application scenario at a time. It really resonates with the binding commitment idea - making everyone involved in the scenario feel safe, from the issuer side, from the verifier side, and also from the user side.
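A small illustration of the data-minimization pattern just described - reducing a telecom record to only the full name and the last few digits of the phone number. The field names and sample values are made up; this is not the Taiwan wallet’s actual data model.

```python
# Illustrative reduction of a telecom-issued record to the two claims the
# convenience-store pickup actually needs: full name and a phone-number suffix.
# Field names and values are hypothetical.
def pickup_claims(full_record: dict, digits: int = 3) -> dict:
    return {
        "name": full_record["name"],
        "phone_suffix": full_record["phone"][-digits:],
    }


record = {"name": "Lin Mei-hua", "phone": "0912345678", "address": "...", "id_number": "..."}
print(pickup_claims(record))
# {'name': 'Lin Mei-hua', 'phone_suffix': '678'} -- address and ID number never leave the issuer
```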
The only problem here is that if we are going to build up the scenarios one by one, that is not very scalable from the business point of view. But if the government would like to do that and successfully roll out one by one, that could still work. Because for a public government, at least from our side, they need to be more conservative.
For example, during our building process, we touched almost every single topic, at least lightly. We also have a credential revocation list. Once your credential is revoked, you cannot use it anymore in the wallet. That doesn’t sound quite right, actually, but the official wallet will go this way. Because it’s an open source system, though, I think it will allow third-party wallets along the way.
My point of view is that there is room for a self-sovereign identity principle-based wallet to be developed and to be showcased to the world. It’s also interesting that, because Shigeya has been focusing on Originator Profile, it’s already a real use case. It’s also very similar to a binding commitment, because it’s only focused on news organizations for now. And there is a browser extension for it. The use case is pretty clear. The problem itself is very clear.
So those different use cases could be a good starting point for discussing how this kind of digital identity could be developed across different use cases, while also considering the other lenses.
Cross-Border Use Case Learning
Denken Chen: Based on those discussions - starting from describing how we build a system, and based on what kinds of principles and constraints - we can then communicate with each other what could be missing from those lenses, from those SSI principles, and what could be improved in any of the use cases. That could be a good starting point and also really low-hanging fruit, because we’re already deploying those scenarios.
Christopher Allen: I know that Digital Bazaar has been doing some interesting things with the National Association of Convenience Stores. The TruAge stuff, right?
Well, TruAge is part of it. But one of the things when I was very briefly involved in creating some adversary modeling for them - which is a particular, unique form of threat modeling that I practice - it turns out one of the biggest things they’re concerned about is things like you going from one convenience store, picking up your limit on cigarettes, then going to the next convenience store and picking up your limit of cigarettes, filling the back of your truck with cigarettes, and then driving across the state border and selling your cigarettes without paying the taxes.
So they don’t mind if an individual picks up three packs of cigarettes or some booze or whatever. What they’re concerned about is that it’s done repeatedly in a very small period of time. So we were talking about various kinds of zero-knowledge proofs where we could basically make it such that you wouldn’t necessarily be revoked when you tried to do the sixth convenience store pickup of cigarettes. Instead it would basically say, “No, you can’t for another day or week,” or whatever the particular limit was.
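A rough sketch of that rate-limiting idea with the zero-knowledge machinery left out: the wallet derives a pseudonymous tag from a secret and the current time window, so a verifier can count purchases per window without identifying the buyer or linking them across windows. A real system would prove the tag was derived correctly with a ZKP or an anonymous-credential scheme; the parameters here are purely illustrative.

```python
# Sketch of window-scoped rate limiting without revealing identity.
# The wallet derives a per-window tag from a private secret; the verifier
# counts tags per window. A real deployment would add a ZKP that the tag
# was derived correctly. All parameters are illustrative.
import hashlib
import secrets
import time
from collections import Counter

WINDOW_SECONDS = 24 * 3600   # one-day purchase window
LIMIT_PER_WINDOW = 3


def window_tag(wallet_secret: bytes, now: float) -> str:
    window = int(now // WINDOW_SECONDS)
    return hashlib.sha256(wallet_secret + window.to_bytes(8, "big")).hexdigest()


wallet_secret = secrets.token_bytes(32)   # known only to the holder's wallet
seen = Counter()                          # kept by the verifier network

for attempt in range(5):
    tag = window_tag(wallet_secret, time.time())
    seen[tag] += 1
    allowed = seen[tag] <= LIMIT_PER_WINDOW
    print(f"purchase {attempt + 1}: {'allowed' if allowed else 'try again next window'}")
```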
I thought that was kind of interesting. There may be some things if you talk with the Digital Bazaar people that might be able to inform your own convenience stores as to maybe some other pilots that might be useful. Combine your cases. And he may not be aware of this digital delivery thing.
I’ve noticed that now that Amazon has bought Whole Foods, there are delivery boxes both inside the Whole Foods store and outside. I can have my Christmas gift for my partner delivered there rather than here, and that way she doesn’t know that I bought her a Christmas gift. I think there are other retailers, like Target, that are looking at this type of thing.
Some simple cross-fertilization of use cases around convenience stores and small retailers could be an interesting synergy. It’s not going to drive the self-sovereign principles forward, but at least it’s an interesting connection.
Denken Chen: I met someone from TruAge at the last W3C TPAC in Kobe, Japan. I think he’ll be interested in joining our discussion about this. We can learn from the existing use case, exchange our experience, and show how we build our system based on those SSI principles for this use case.
We can learn from each other, from the trade-offs and from the Taiwan experience. Our convenience store experience could also be helpful for convenience stores in Japan, because they also have a very strong convenience store infrastructure. In Taiwan, we have news organizations that could learn from Shigeya’s Originator Profile work, and vice versa.
SD-JWT and Zero-Knowledge Solutions
Christopher Allen: Changing the subject a little bit: you had said that you guys have begun work on doing SD-JWT, but you’re also investigating the Ethereum Foundation’s wrapper for SD-JWT. Is that because in some of your prototypes or pilots, the batch issuance has been a problem and managing lots of certificates? Or what is it that has been driving your adoption of SD-JWT and consideration of things like the Ethereum Foundation’s wrapper?
Denken Chen: So firstly, about how we designed which privacy technique to implement early on: privacy is really a strong selling point for our project. Even the minister of MoDA has learned a lot about selective disclosure and uses the term to promote to citizens that we have this kind of privacy-preserving technology, so people will feel safe joining this wallet project.
What’s the technique behind the scenes? Initially, we considered both W3C BBS and SD-JWT. At that time, we were not fully aware of the cryptographers in the EU objecting to SD-JWT. So at the early stage, we just picked SD-JWT for being easier to understand and implement, and it easily suited our need for selective disclosure.
As we were building, we found in the process that there are issues with it. And there is some ZK research on this - there are multiple solutions out there. Even Google has something in its own Longfellow solution. And we met Ethereum Foundation people who are focusing on this project; they would like to build a ZK solution as well.
So I work with them - though it’s actually not an official working partnership between the Ethereum Foundation and the government. It’s simply based on the fact that we are an open source project with an openly available sandbox environment, so everyone can do some research on it or fork the open source project to build something on it.
If there’s any zero-knowledge proof system that can solve the privacy issues along the way, I think the government will really seriously consider it. I know the officials within the MoDA are aware of those developments. So that’s one way we push it: we are using the IETF SD-JWT, however, we are still seeking other solutions along the way.
One way is just to generate the zero-knowledge proof purely from the wallet, so the issuer side wouldn’t need to change anything. Because once something is deployed on the government side, it’s usually very hard to change unless there are strong reasons. So that’s the approach we are trying, and it’s still in progress.
I think we can have some discussion or share experience with the Swiss people about this.
Data Minimization Standards Gap
Christopher Allen: Yeah, I’m hoping that can happen. So mDL and SD-JWT are basically doing what we call hash data elision. The particular technique they use by default allows you to leave things out. And that’s how the batch issuance works in the Swiss wallet.
So basically when you’re issued a credential in the Swiss wallet, you’re not actually issued a single credential. You’re actually issued a bunch of credentials that are all slightly different, but they all share the same signature machinery, because a certain amount of data has been elided.
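A simplified sketch of hash data elision in the SD-JWT style: salted digests of each claim are what gets signed, and the holder hands over only the disclosures they choose. This shows the core mechanism only; it is not a spec-conformant SD-JWT and not the Swiss wallet’s actual format, and the claim values are made up.

```python
# Simplified salted-hash elision: the signed credential carries only digests;
# the holder reveals individual (salt, name, value) disclosures per presentation.
# Not a spec-conformant SD-JWT; illustrative only.
import base64
import hashlib
import json
import secrets


def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def make_disclosure(name: str, value: str) -> tuple[str, str]:
    disclosure = b64url(json.dumps([b64url(secrets.token_bytes(16)), name, value]).encode())
    digest = b64url(hashlib.sha256(disclosure.encode()).digest())
    return disclosure, digest


claims = {"given_name": "Mei-hua", "birth_date": "1990-01-01", "address": "..."}
disclosures, digests = {}, []
for name, value in claims.items():
    disclosure, digest = make_disclosure(name, value)
    disclosures[name] = disclosure
    digests.append(digest)

signed_credential = {"_sd": sorted(digests)}   # only the digests would be signed
presentation = [disclosures["given_name"]]     # holder chooses to reveal one claim
print(json.dumps(signed_credential, indent=2))
print(presentation)
```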
So we basically said, “Hey, if you look at IETF’s work, if you look at international requirements, if you look at things of that nature, there ought to be better standards and better best practices just for that.” This was before BBS, Longfellow, and the OpenAC-type designs - just: what is data minimization?
It turns out there are a lot of things that people aren’t doing which they ought to be considering: things like inclusion proofs, facilitating herd privacy, and dealing with encryption and compression, especially when you have paper credentials or are putting something on the back of an ID card.
Their response was there are no customers for this. So they’re not willing to have a birds-of-a-feather session to try to figure out what kinds of specifications might be possible.
So when this came up with the Swiss EID, I was really hoping that we could explore these questions. What is data minimization? There are also issues once you’ve issued a credential: it’s one thing while it’s in someone’s wallet, but what are the rules when it’s at a verifier? What are the rules and requirements for data minimization when the verifier collects this data? And what can we do about those problems? I feel like there’s a lot we can do there.
And then we can also look at the various signature anti-correlation things: Longfellow, BBS, OpenAC, etc. That would allow us to migrate to those when they’re needed, or choose among them as they evolve. Because at this point, I don’t know that any of them are very mature. I think the BBS proofs have probably got the most academic attention. Longfellow, yes, it was written by Google, but I don’t think it has a lot of academic review at this point. And neither does OpenAC.
So how do we get that? How do we do that? And then there are other kinds of zero-knowledge proofs. So I was really hoping that there could be a series of workshops, initially endorsed by the Taiwanese government and the Swiss government and a few other parties, so that we can begin to capture some of these requirements: what the issues and challenges are, what kinds of specs we need, and what works now and what doesn’t.
Then I can go back, and we can do that session over again and basically say this does deserve a real standardization effort - unlike last time, when they basically said, “Well, you should send it to the cryptographic research group.”
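For reference, a minimal sketch of one of the data-minimization building blocks mentioned above - an inclusion proof - showing how a verifier can confirm that a single claim belongs to a committed set without seeing the others. The hashing and encoding choices here are illustrative, not taken from any credential format.

```python
# Minimal Merkle inclusion proof: prove one leaf belongs to a committed set
# without revealing the other leaves. Illustrative only.
import hashlib


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def build_levels(leaves: list[bytes]) -> list[list[bytes]]:
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        if len(prev) % 2:
            prev = prev + [prev[-1]]          # duplicate last node on odd levels
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels


def prove(levels: list[list[bytes]], index: int) -> list[tuple[bytes, bool]]:
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))  # (hash, sibling-is-left?)
        index //= 2
    return proof


def verify(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root


leaves = [b"name=Mei-hua", b"over_18=true", b"address=...", b"phone=..."]
levels = build_levels(leaves)
root = levels[-1][0]
print(verify(leaves[1], prove(levels, 1), root))  # True: proves over_18 without the rest
```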
Closing and Next Steps
Christopher Allen: Okay, well, what I’m going to suggest for now is we’ll wrap up on our hour and a half. I’m going to have another meeting in two weeks to see if some of the working circle things have begun to gel a little bit. And if it doesn’t look like it is, I’m still willing to do this as kind of open office hours where people can ask me questions.
But maybe, Shigeya and Denken, if you can reach out to the two CSSPS people and say, “Hey, this is real. We do want to work on this. Come present and we’ll be there.” Maybe we can get that side of things rolling. Because I know there are multiple people that are interested in it.
Denken Chen: Yeah, sure. I’m looking forward to that progress - both inviting more people from Asia, and also the possibility of the Taiwanese government and Swiss government focusing on the SD-JWT issues we have all faced.
Christopher Allen: Shigeya, anything to close our evening today? Afternoon, I should say.
Shigeya Suzuki: I think I have enough motivation to continue working on this, while I have a limited amount of time to work on this. I think aligning our discussion at the Originator Profile project towards SSI activities like this seems to be interesting. And potentially writing up some of the lens discussions in the context of the OP might be interesting.
Christopher Allen: Yeah, love to see that. Andre, any closing comments for the evening?
Andre Ferreira: No - it’s all very interesting. A lot of information to go and learn more about. I appreciate it.
Christopher Allen: Okay, well, thank you much. I will send out information - it’ll be basically the same time in two weeks, same Signal link, and we’ll figure out what the agenda is between now and then.
Denken Chen: Great. Thank you.
(multiple voices) Thank you, everybody. See you very soon. See you soon. Bye. Bye.