This page is the static snapshot of the first community draft of a 2026 revision of the original SSI principles, published on the ten-year anniversary of “The Path to Self-Sovereign Identity”. The 2016 language is preserved verbatim wherever it survives; changes are shown inline so that continuity and revision are both legible.
The live, commentable version is maintained in Google Docs:
Principles of Self-Sovereign Identity — 2026 Revised, First Community Draft
For context on why this revision exists, see the project page and the lens briefs. The original 2016 principles remain available for comparison.
Note on visual styling: this static snapshot preserves textual deletions (strikethrough) and, where tracked-change metadata survived export, insertions (<ins> tags). Color-only visual markup from the Google Doc is not fully reproduced here — refer to the Google Doc for the complete visual redline.
By Christopher Allen <ChristopherA@LifeWithAlacrity.com>
Sponsors: Blockchain Commons, Stream44 Studio (Seeking more sponsors. To fully participate in GDC in Geneva this fall will be expensive!)
Additional Background & Rationale: RevisitingSSI.com/Lenses
Private Signal Discussion Group: (apply here to join)
Preface
Ten years ago this week — on April 26, 2016 — I published “The Path to Self-Sovereign Identity” and its 10 principles. I ended it with a request: “I seek your assistance in taking these principles to the next level.”
I could not have predicted that, despite my request, the 10 principles I introduced as a first draft would become the conceptual foundation of an industry, anchoring work on decentralized identifiers, verifiable credentials, and the broader architecture of digital sovereignty and human and civil rights. They have since accumulated more than a thousand academic citations and been quoted, adapted, debated, critiqued, and deployed in contexts I never imagined.
If those principles needed additional thought and reconsideration when I originally offered them to the community, that need has only redoubled in the decade since. Principles written in 2016 could not have anticipated the commodification of behavioral data, the normalization of surveillance capitalism, and the rise of mandatory digital ID regimes, much less generative and agentic AI, consumer eye-tracking and neural interfaces, or the specific ways “self-sovereignty” would be co-opted at the protocol layer by entities with different interests than the persons the principles were meant to protect.
The tenth anniversary has finally given me the opportunity to talk with others deeply about the principles I laid out to define self-sovereign identity (SSI). What follows is a revised version of the original ten principles that grew out of conversations over the last several months, including at RevisitingSSI.com and in exchanges with many practitioners and researchers. Their thinking has shaped the redlines you see here.
This revision also introduces six new principles — Inalienability, Cognitive Liberty, Relational Autonomy, Stewardship, Equity, and Anti-Coercive Design — and organizes them alongside the original ten into four layers: foundational, relational, technical, and political. The 2016 language is preserved verbatim wherever it survives, so that continuity remains visible alongside revision.
This is my first draft, for community review. I am publishing in this redline form, on the anniversary, precisely because it is unfinished. My hope is that many of you will help me iterate these revised principles at the upcoming Internet Identity Workshop in Mountain View next week, and at other venues over the coming months — help me sharpen what is imprecise, challenge what is wrong, and add what is missing — so that we can create a more mature version to present at Global Digital Collaboration (GDC) in Geneva this September.
My goal is not to settle all the questions of self-sovereign identity. It is to make these principles worthy of the next ten years.
(2016 to 2026 Redline)
Legend — Baseline is the 2016 original: the ten principles from “The Path to Self-Sovereign Identity”. Unmarked text is preserved verbatim from the 2016 version.
Strikethrough = removed since 2016. Green = added since 2016. Principles entirely new since 2016 are presented fully in green.
A note on ordering
This version begins with Existence (#1) not out of deference to prior versions but because every other SSI principle presupposes it. As a result, it cannot itself be made dependent on system judgment or derivation: an identity system that treats Existence as a derivable property rather than a foundational given will, sooner or later, attempt to derive it — first deciding whether you exist, then setting the conditions under which you are permitted to do so — and in doing so, invert the principle entirely. Existence is placed first here because any principle placed ahead of it becomes the mechanism through which Existence itself is made optional.
The remaining principles are grouped into four layers, each doing a different kind of ethical and structural work. The ordering within each layer reflects logical ordering rather than importance. Every principle is necessary; none is sufficient on its own.
Structural summary
| 2026 # | Principle | Layer | 2016 # | Change since 2016 |
|---|---|---|---|---|
| 1 | Existence | Foundational | 1 | Expanded; operationalization-resistance clause; AI-agent refusal |
| 2 | Inalienability | Foundational | — | New |
| 3 | Cognitive Liberty | Foundational | — | New |
| 4 | Control | Relational | 2 | Expanded; technical mechanisms; AI-agent strategic scope |
| 5 | Consent | Relational | 8 | Substantially rewritten; calibrated revocability; attention clause |
| 6 | Access | Relational | 3 | Extended to inferred data |
| 7 | Relational Autonomy | Relational | — | New |
| 8 | Stewardship | Relational | — | New |
| 9 | Persistence | Technical | 5 | Narrative continuity; paradigm succession |
| 10 | Portability | Technical | 6 | Open standards; narrative; statelessness |
| 11 | Interoperability | Technical | 7 | Pluralism; community issuers |
| 12 | Minimalization | Technical | 9 | Inferred data; contextual integrity |
| 13 | Transparency | Technical | 4 | Extended to governance and incentives |
| 14 | Protection | Political | 10 | Coercion spectrum; self-coercion; legal alignment |
| 15 | Equity | Political | — | New |
| 16 | Anti-Coercive Design | Political | — | New |
I. Foundational Principles
These are the principles that cannot be derived from the others and from which the others partially derive. Strip any of them from the top and the foundation beneath the rest becomes implicit and therefore contestable — which is exactly how protective constraints get inverted. Foundational principles resist operationalization by design: attempts to measure them tend to redefine them into their opposites. Existence declares that the subject exists. Inalienability declares that the subject cannot be converted into property. Cognitive Liberty declares that the subject’s mind is not an extractable resource. Each protects the one before it from the most common attack on it.
1. Existence (formerly 2016 #1)
Users must have an independent existence. Every person has an identity that exists before and beyond any digital system. Systems witness existence. They do not grant it. Any self-sovereign identity is ultimately based on the ineffable “I” that’s at the heart of identity. It can never exist wholly in digital form. This must be the kernel of self that is upheld and supported. This “I” is not only an existential kernel but also a cognitive and narrative one that incorporates the user’s capacity for self-definition, the coherence of their inner life, and the story they author about themselves over time. A self-sovereign identity simply makes public and accessible some limited aspects of this “I” that already exists. The limited translation of Existence into digital form is a purposeful part of the design: SSI systems must affirm dignity, not demand disclosure, honoring the real person’s right to be pseudonymous, private, plural, and even offline. This principle explicitly resists operationalization: existence is not a legible attribute, and any attempt to measure it inverts it into its opposite. Existence is a property of persons. Machines, agents, and artificial systems may act in the world and may be governed, but they do not possess Existence in the sense this principle protects; claims to the contrary, whether from commercial platforms seeking personhood for their AI agents or from jurisdictions seeking to naturalize surveillance infrastructure as “identity”, must be refused at the level of foundation. [1]
2. Inalienability (new since 2016)
Core identity must not become property. What Existence declares, Inalienability protects: the subject of an identity and its core identifiers must not be converted into units of commerce. Every principle that follows presupposes the subject remains a subject. Biometric data, medical histories, sexual orientation, political beliefs, religious convictions, and family relationships are constitutive of personhood and must not be commodified. SSI systems shall prohibit credential marketplaces, ban identity-token trading, and make credential sale technically impossible through non-transferable cryptographic binding. “Data ownership” language that enables self-commodification must be replaced with agency-delegation frameworks in which revocable authority creates enforceable duties rather than alienable property creating exploitative markets. Identity is personhood expressed through relations, not capital to be accumulated. [2]
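To make “non-transferable cryptographic binding” concrete, here is a minimal sketch of one way the property could be realized, using Ed25519 holder binding. Everything in it is an illustrative assumption rather than a specification: the issuer signs the claims together with the holder’s public key, and every presentation demands a fresh proof of possession of the matching private key, so a sold credential is inert without the key that never leaves the subject’s control.

```python
# Illustrative sketch of non-transferable holder binding (not a spec).
# Requires the 'cryptography' package (pip install cryptography).
import json
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

issuer_key = Ed25519PrivateKey.generate()   # issuer's signing key
holder_key = Ed25519PrivateKey.generate()   # never leaves the holder's device

def issue_credential(claims: dict) -> dict:
    """Issuer signs the claims bound to the holder's public key."""
    body = {
        "claims": claims,
        "holder_pub": holder_key.public_key().public_bytes_raw().hex(),
    }
    payload = json.dumps(body, sort_keys=True).encode()
    return {"body": body, "issuer_sig": issuer_key.sign(payload).hex()}

def present(credential: dict, nonce: bytes) -> dict:
    """Holder answers a verifier challenge with a fresh proof of possession."""
    return {
        "credential": credential,
        "nonce": nonce.hex(),
        "pop_sig": holder_key.sign(nonce).hex(),
    }

def verify_presentation(presentation: dict, issuer_pub: Ed25519PublicKey) -> bool:
    """Check the issuer signature AND the proof of key possession.
    Without the bound private key, a purchased credential verifies as nothing."""
    cred = presentation["credential"]
    payload = json.dumps(cred["body"], sort_keys=True).encode()
    holder_pub = Ed25519PublicKey.from_public_bytes(
        bytes.fromhex(cred["body"]["holder_pub"])
    )
    try:
        issuer_pub.verify(bytes.fromhex(cred["issuer_sig"]), payload)
        holder_pub.verify(bytes.fromhex(presentation["pop_sig"]),
                          bytes.fromhex(presentation["nonce"]))
        return True
    except InvalidSignature:
        return False

cred = issue_credential({"over_18": True})
assert verify_presentation(present(cred, os.urandom(32)), issuer_key.public_key())
```

The sketch names the property, not the mechanism: whatever primitive eventually carries it, verification must fail without the living holder’s participation.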
3. Cognitive Liberty (new since 2016)
Sovereignty of data must reflect sovereignty of mind. Identity systems must uphold four adjacent rights: mental self-determination (the right to author one’s own identity narrative without algorithmic imposition or schema lock-in); mental privacy (the right to protection from inference about mood, attention, belief, or cognitive state — not merely what one shares, but what can be read from what one does); mental integrity (the right to freedom from manipulation, destabilization, gamified pressure, or induced dependency, especially in immersive and AI-mediated environments); and psychological continuity (the right to a coherent self across time, context, device, and life transition). As consumer neural interfaces make mental privacy a literal rather than metaphorical category, this principle requires hard technical commitments: thought-adjacent data must be processed at the edge, not exfiltrated; no credential, service, convenience, or court order may revoke the skull as a privacy boundary. Mental integrity likewise extends to resistance against AI companions and reputation systems that shape self-conception through continuous feedback, whether or not the user asked to be shaped. [3]
II. Relational Principles
These principles govern how persons interact with systems and with each other. They presuppose the foundational principles and make operational the ethical claims those principles protect. Each is vulnerable to being hollowed out: control without mechanism, consent without reflection, access without completeness, relationship without exit, stewardship without accountability. The structure of a relational principle is therefore always twofold — what must be true, and what manipulation must be resisted.
4. Control (formerly 2016 #2)
Users must control their identities. Sovereignty is political capacity inherent in individuals and communities; it is not a product that platforms provision downward on their terms. As a result, subject to well-understood and secure algorithms that ensure the continued validity of an identity and its claims, the user is the ultimate authority on their identity. They should always be able to refer to it, update it, or even hide it. They must be able to choose celebrity or privacy as they prefer. This doesn’t mean that a user controls all of the claims on their identity: other users may make claims about a user, but they should not be central to the identity itself. This control must be meaningful, not merely nominal: a choice made under manipulation, manufactured urgency, exploitation of cognitive bias, or structural necessity is not control in the relevant sense. This control must include technical mechanisms such as revocable cryptographic capabilities, time-bounded delegation, threshold authorization, auditable logs, and least-privilege attenuation; otherwise it is security theater. This control must also be a continuous, revisable relationship with one’s identity, not a single act of authorization. When AI agents act on behalf of a user at speeds no human can review per-transaction, this control must shift from the transactional to the strategic: the user authorizes scope, not every act within scope, and the system enforces that scope technically. An agent that cannot be reviewed cannot be trusted; an agent that can act without being reviewed must be bounded in what it can do. [4]
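As one reading of what mechanisms like revocable capabilities, time-bounded delegation, and least-privilege attenuation could mean in practice, here is a macaroon-style sketch. The design is borrowed from the capability-security literature; the function names and caveat syntax are my assumptions. Each caveat re-keys an HMAC chain, so a delegate (including an AI agent) can only ever narrow the authority it was given, never widen it.

```python
# Macaroon-style capability sketch (assumed design, standard library only).
import hashlib
import hmac
import time

def _chain(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def mint(root_key: bytes, subject: str) -> dict:
    # Rotating root_key revokes every capability minted from it.
    return {"subject": subject, "caveats": [],
            "sig": _chain(root_key, subject.encode())}

def attenuate(cap: dict, caveat: str) -> dict:
    """Any holder can add a restriction; none can remove one."""
    return {
        "subject": cap["subject"],
        "caveats": cap["caveats"] + [caveat],
        "sig": _chain(cap["sig"], caveat.encode()),
    }

def verify(root_key: bytes, cap: dict, request: dict) -> bool:
    sig = _chain(root_key, cap["subject"].encode())
    for caveat in cap["caveats"]:
        sig = _chain(sig, caveat.encode())
        kind, _, value = caveat.partition("=")
        if kind == "expires" and time.time() > float(value):
            return False                       # time-bounded delegation
        if kind == "scope" and request.get("scope") != value:
            return False                       # least-privilege attenuation
    return hmac.compare_digest(sig, cap["sig"])

# The user authorizes a scope, not every act within it; the system
# enforces the scope technically, exactly as the principle demands.
root = b"user-held secret"
agent_cap = attenuate(attenuate(mint(root, "alice"), "scope=calendar"),
                      f"expires={time.time() + 3600}")
assert verify(root, agent_cap, {"scope": "calendar"})
assert not verify(root, agent_cap, {"scope": "banking"})
```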
5. Consent (formerly 2016 #8)
Users must ~~agree to~~ give meaningful, ongoing, and revocable consent for the use of their identity. Any identity system is built around sharing that identity and its claims, and an interoperable system increases the amount of sharing that occurs. However, sharing of data must only occur with the consent of the user. Though other users such as an employer, a credit bureau, or a friend might present claims, the user must still offer consent for them to become valid. Note that this consent might not be interactive, but it must still be deliberate and well-understood. Meaningful consent is deliberate (not produced by fatigue, manufactured urgency, or choice overload), reflective (the user can reasonably understand what is being agreed to), durable against manipulation (see Anti-Coercive Design for the interface-level disciplines this entails), and revocable on proportionate terms. Consent that is merely clicked through is not consent for the purposes of this principle. Consent must allow the user to bind themselves with credible commitments, but without exploitative lock-in: the freedom to bind yourself is essential for autonomy; the freedom to exit binding is essential for preventing exploitation. The governance framework in which consent is given must itself be procedurally fair, revocable, and auditable: a Rawlsian floor beneath each individual act of agreement. Attention is the only non-renewable input to consent. Systems that demand it continuously are not consent-respecting, regardless of what the user clicks; this is why strategic-level authorization with periodic reflection is preferable to transactional consent at machine cadence. [5]
6. Access (formerly 2016 #3)
Users must have access to their own data. A user must always be able to easily retrieve all the claims and other data within ~~his~~ their identity. There must be no hidden data and no gatekeepers. This does not mean that a user can necessarily modify all the claims associated with ~~his~~ their identity, but it does mean they should be aware of them. It also does not mean that users have equal access to others’ data, only to their own. Access extends beyond declared attributes to inferred ones: reputation scores, behavioral profiles, psychographic classifications, and algorithmic predictions derived from the user’s data must be equally legible, contestable, and subject to dispute. Access to only what one has explicitly entered, while hidden models quietly accumulate shadow-identities, is hollow access. [6]
7. Relational Autonomy (new since 2016)
Self-sovereignty must include sovereignty over relationships. Independent existence does not preclude interdependence; it guarantees the freedom to choose interdependence, supporting Ubuntu’s “I am because we are” paired with individual agency over which relationships constitute identity. This relational identity must also support voluntary association, mutual vulnerability, and guaranteed exit. Systems must support bilateral consent for relationship credentials (issuance requires cryptographic approval from all parties via pairwise edge identifiers), unilateral exit rights (either party can dissolve without the other’s approval, which is essential to protect against domestic abuse, cults, and coercive organizations), contextual boundaries (work does not see family; medical does not see political), community credentials with contextual standing (mutual aid networks and grassroots associations carry weight where institutional certification would be inappropriate), and relationship-graph privacy. Because social connection patterns can reveal identity, beliefs, and sensitive characteristics through inference and guilt-by-association, de-anonymization through social-graph analysis must be as cryptographically hard as de-anonymization through attribute linkage. [7]
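A minimal sketch of how bilateral issuance and unilateral exit could be enforced mechanically. The helper names, the set-based revocation registry, and the choice of Ed25519 are illustrative assumptions, not a standard: the credential verifies only with both parties’ signatures over a pairwise edge identifier, and either party’s revocation dissolves it without the other’s approval.

```python
# Relationship credential sketch: bilateral consent, unilateral exit.
# Requires the 'cryptography' package.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

alice, bob = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()

def pairwise_id(pub_a: bytes, pub_b: bytes) -> str:
    # Pairwise edge identifier: unique to this relationship and not
    # reused across contexts, so the graph is not trivially linkable.
    return hashlib.sha256(b"".join(sorted([pub_a, pub_b]))).hexdigest()

edge = pairwise_id(alice.public_key().public_bytes_raw(),
                   bob.public_key().public_bytes_raw())
claim = json.dumps({"edge": edge, "relation": "colleague"}).encode()

# Issuance requires cryptographic approval from all parties.
credential = {"claim": claim, "sigs": [alice.sign(claim), bob.sign(claim)]}

revocations: set[str] = set()   # stands in for a shared revocation registry

def revoke(party: Ed25519PrivateKey, edge_id: str) -> bytes:
    """Unilateral exit: one party's decision suffices, no counter-signature."""
    revocations.add(edge_id)
    return party.sign(edge_id.encode())   # proof a real registry would record

def is_valid(cred: dict, pubs: list) -> bool:
    if json.loads(cred["claim"])["edge"] in revocations:
        return False                      # either party's exit dissolves it
    if len(cred["sigs"]) != len(pubs):
        return False                      # bilateral: every party must sign
    try:
        for pub, sig in zip(pubs, cred["sigs"]):
            pub.verify(sig, cred["claim"])
        return True
    except InvalidSignature:
        return False

pubs = [alice.public_key(), bob.public_key()]
assert is_valid(credential, pubs)
revoke(bob, edge)                         # Bob leaves; Alice cannot veto it
assert not is_valid(credential, pubs)
```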
8. Stewardship (new since 2016)
Caring for another’s digital identity must not be construed as ownership of it. Where a user cannot self-advocate — children before adulthood, those with impaired capacity, the elderly with dementia, populations coerced by benefit access — stewardship obligations arise in proportion to the power imbalance. Stewards (parents, guardians, platforms) owe the strongest security, a prohibition on exploitation of dependency, guaranteed exit at adulthood or on capacity recovery (with credential reissuance assistance and zero penalty for leaving platforms the steward chose), mandatory non-digital alternatives, community voice in governance, independent oversight, and active resistance to weaponization, even when the weaponization is framed as “protection” or “national security.” Vulnerability creates duty, not opportunity for exploitation. [8]
III. Technical Principles
These principles describe testable properties of identity systems. They are the most operationalizable of the four layers and therefore the most susceptible to compliance theater: satisfying the letter of a principle while defeating its spirit. Each technical principle must therefore specify what the property must do, not just what it must be. An identity system that claims persistence but fragments the person, portability that doesn’t survive vendor lock-in, interoperability that collapses into monoculture, minimalization that excludes inferred data, or transparency of code without transparency of incentive is failing these principles at the level of function. Technical principles are the rung on which regulators and auditors will press hardest; the language here must be specific enough to be testable and general enough to survive technology shifts.
9. Persistence (formerly 2016 #5)
Identities must be long-lived. Preferably, identities should last forever, or at least for as long as the user wishes. Though private keys might need to be rotated and data might need to be changed, the identity remains. In the fast-moving world of the Internet, this goal may not be entirely reasonable, so at the least identities should last until they’ve been outdated by newer identity systems. Persistence is narrative as well as technical: the system must preserve the coherence of a user’s self-presentation across device loss, key rotation, jurisdictional migration, trauma, and life transition. A cryptographic anchor that survives while the usable self shatters is a failure of this principle. Persistence must also survive paradigm rotation, not just key rotation: an identity that cannot outlive the cryptographic assumptions it was issued under was never persistent in the sense this principle protects. Post-quantum migration, succession across zero-knowledge regimes, and future epochal shifts must preserve the person beneath the mechanism. This must not contradict a “right to be forgotten”; a user should be able to dispose of an identity if ~~he~~ they wish and claims should be modified or removed as appropriate over time. To do this requires a firm separation between an identity and its claims: they can’t be tied forever. [9]
10. Portability (formerly 2016 #6)
Information and services about identity must be transportable. Identities must not be held by a singular third-party entity, even if it’s a trusted entity that is expected to work in the best interest of the user. The problem is that entities can disappear — and on the Internet, most eventually do. Regimes may change, users may move to different jurisdictions. Transportable identities ensure that the user remains in control of ~~his~~ their identity no matter what, and can also improve an identity’s persistence over time. To support this portability requires open standards, standard-format data export, credential migration tools, and universal verifier acceptance. Proprietary formats and vendor lock-in transform “voluntary” platform choice into coercive dependency. Portability must also apply to the surrounding narrative: credentials, reputational context, recovery paths, and relationship graphs should move with the user, so that no single vendor can hold a user’s social existence hostage through lock-in. Portability must also survive the loss of issuing jurisdictions. Where a state ceases to exist — through dissolution, occupation, or literal territorial loss to sea-level rise — its credentials must remain recoverable and re-attestable through open standards and community-issuer fallbacks. Statelessness must not become digital statelessness; where the issuing polity ends, the person does not. [10]
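A sketch of what standard-format data export might look like at its simplest. The bundle format, field names, and version tag are invented here for illustration (real work on this belongs to open-standards bodies): the point is that everything needed to reconstitute the identity elsewhere, including credentials, relationship context, and recovery hints, travels in one integrity-checked, vendor-neutral artifact that needs no cooperation from the original provider.

```python
# Hypothetical vendor-neutral export bundle (all field names assumed).
import hashlib
import json
import time

def export_bundle(credentials: list, edges: list, recovery_hints: list) -> str:
    bundle = {
        "format": "ssi-export/0.1",        # assumed version tag, not a standard
        "exported_at": int(time.time()),
        "credentials": credentials,        # issuer-signed, so they verify anywhere
        "relationship_edges": edges,       # the surrounding narrative travels too
        "recovery_hints": recovery_hints,  # e.g. social-recovery shard locations
    }
    blob = json.dumps(bundle, sort_keys=True)
    digest = hashlib.sha256(blob.encode()).hexdigest()
    return json.dumps({"bundle": bundle, "integrity": digest})

def import_bundle(blob: str) -> dict:
    """Any conforming wallet can rebuild the identity from the bundle alone."""
    wrapped = json.loads(blob)
    body = json.dumps(wrapped["bundle"], sort_keys=True)
    assert hashlib.sha256(body.encode()).hexdigest() == wrapped["integrity"]
    return wrapped["bundle"]

moved = import_bundle(export_bundle([{"claim": "over_18"}], [], []))
assert moved["format"] == "ssi-export/0.1"
```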
11. Interoperability (formerly 2016 #7)
Identities should be as widely usable as possible. Identities are of little value if they only work in limited niches. The goal of a 21st-century digital identity system is to make identity information widely available, crossing international boundaries to create global identities, without losing user control. Thanks to persistence and autonomy these widely available identities can then become continually available. Interoperability, however, must not collapse into monoculture. A single dominant schema, credential format, or trust root would reconstitute at the protocol layer the very gatekeeping this list was designed to dissolve. The goal is pluralistic interoperability: many coexisting vocabularies, trust networks, and modes of identity expression, including the right to remain outside any given schema, bridged by open protocols rather than unified by a single standard. Community issuers (chamas, mutual aid networks, grassroots associations) must have equal technical standing with institutional issuers; peer-to-peer credential exchange must not require any platform’s or government’s permission. [11]
12. Minimalization (formerly 2016 #9)
Disclosure of claims must be minimized. When data is disclosed, that disclosure should involve the minimum amount of data necessary to accomplish the task at hand. For example, if only a minimum age is called for, then the exact age should not be disclosed, and if only an age is requested, then the more precise date of birth should not be disclosed. This principle can be supported with selective disclosure, range proofs, and other zero-knowledge techniques, but non-correlatability is still a very hard (perhaps impossible) task; the best we can do is to use minimalization to support privacy as well as possible. Technical architectures must also provide correlation resistance (such as context-tagged credentials, unlinkable pseudonyms, and anti-aggregation constraints) so that many small voluntary disclosures do not aggregate into involuntary total transparency. Equally, credentials shared in one context must not automatically enable surveillance or control in other contexts: employment credentials should not reach housing, health credentials should not reach politics, and commercial reputation should not determine civic rights. Graph-layer inference must be governed by Relational Autonomy, which sets the cryptographic-hardness floor for social-graph de-anonymization; this principle governs attribute-layer disclosure. This minimalization must also apply to inferred data: attention, affect, mood, biometric response, behavioral patterns, and psychographic profiles should not be silently extracted from interaction, nor retained or shared beyond what the user has explicitly authorized. [12]
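As a concrete reduction of selective disclosure, here is a salted-commitment sketch in the spirit of SD-JWT. It is a deliberate simplification (the real specification signs the digest set inside a JWT and adds decoy digests, which this omits): because the issuer vouches only for digests, the holder can reveal the over-18 claim while the date of birth behind it stays opaque.

```python
# Salted-commitment selective disclosure (simplified, standard library only).
import hashlib
import json
import os

def commit(name: str, value, salt: bytes) -> str:
    return hashlib.sha256(salt + json.dumps([name, value]).encode()).hexdigest()

claims = {"date_of_birth": "1990-04-26", "over_18": True, "name": "Alice"}
salts = {k: os.urandom(16) for k in claims}

# The set the issuer would sign: digests only, no plaintext claims.
signed_digests = {commit(k, v, salts[k]) for k, v in claims.items()}

def present(disclose: list) -> list:
    """Holder reveals only the requested claims, with their salts."""
    return [(k, claims[k], salts[k]) for k in disclose]

def verify(disclosures: list) -> dict:
    """Verifier recomputes each digest against the issuer-signed set."""
    out = {}
    for name, value, salt in disclosures:
        assert commit(name, value, salt) in signed_digests, "forged claim"
        out[name] = value
    return out

# The age check discloses one boolean; the birthdate digest stays sealed.
assert verify(present(["over_18"])) == {"over_18": True}
```

Note what the sketch does not solve: the salted digests themselves are stable values, so correlation resistance still requires batch-issued, single-use credentials or true zero-knowledge proofs, exactly the residual hardness the principle concedes.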
13. Transparency (formerly 2016 #4)
~~Systems and~~ Systems, algorithms, governance, and incentives must be transparent. The systems used to administer and operate a network of identities must be open, both in how they function and in how they are managed and updated. The algorithms should be free, open-source, well-known, and as independent as possible of any particular architecture; anyone should be able to examine how they work. Transparency must extend with equal force to the economic and governance substrate: default issuers, pre-selected validators, vendor relationships, compensation flows, and any interface nudges or choice-architecture decisions must be disclosed and auditable. An open codebase operated by an opaque consortium with hidden default-issuer arrangements passes the 2016 test and fails this one. [13]
IV. Political Principles
These principles address how an identity system holds up against the political pressures of its deployment environment — capture by incumbents, structural reproduction of inequality, and the design-level coercions that subvert all of the above. This is the youngest of the four groups, its principles the most dependent on the surrounding society. Unlike foundational principles, they must be practiced continuously rather than asserted once; unlike technical principles, they resist full operationalization because the coercions they resist evolve faster than any static audit can track. Political principles are where the ethical and the political meet the implementable, and where an identity system either earns or forfeits its claim to self-sovereignty in practice.
14. Protection (formerly 2016 #10)
The rights of users must be protected. When there is a conflict between the needs of the identity network and the rights of individual users, then the network should err on the side of preserving the freedoms and rights of the individuals over the needs of the network. To ensure this, identity authentication must occur through independent algorithms that are censorship-resistant and force-resilient and that are run in a decentralized manner. Though independence of algorithms is necessary, it is not sufficient; protection also requires ongoing practice such as interface audits, inference monitoring, and governance accountability, not just a one-time design claim. This protection must also extend beyond network-versus-user conflicts to the full spectrum of coercion that can arise within a decentralized system, including: structural coercion (marginalized users facing only nominal “choices”), economic coercion (incentive structures that privilege issuers over subjects), normative coercion (schemas that enforce conformity by making non-standard selves illegible), and cognitive coercion (surveillance that produces self-censorship before any prohibition is issued). Knowledge of monitoring must not cause behavior modification, privacy-protective choices must not flag users as suspicious, and pseudonymity and unlinkability must preserve freedom before surveillance creates internalized compliance. Where the system is legally regulated, protection must align with (and where deficient, push beyond) recognized legal standards of duress, undue influence, and freely given consent. Behavioral coercion, which lies at the interface and choice-architecture level, is addressed by Anti-Coercive Design. [14]
15. Equity (new since 2016)
Identity systems must not reproduce structural inequality. Because identity infrastructure shapes who can participate in civic, economic, and social life, the burden of poor design falls disproportionately on those with the least latitude to refuse it. Schemas, verification flows, and recovery mechanisms must be co-designed with the communities they will govern and must not encode dominant categories of gender, nationality, family structure, name form, or documentary status as the only legible forms of self. Non-digital alternatives must be preserved; digital-only infrastructure excludes those without access and converts “participation” into coercion by necessity. [15]
16. Anti-Coercive Design (new since 2016)
Design is never neutral: it shapes behavior and distributes power. SSI tools must refuse dark patterns in all their known forms, including hidden opt-outs, confusing consent flows, urgent prompts, the exploitation of default bias, scarcity and urgency framing, choice overload, loss aversion, and anchoring. Beyond interface design, such dark patterns must also be addressed in anything that shapes what the person affected by the system sees and can do, including, for example, algorithms for ranking and recommendation. A design must make any behavioral influence visible, understandable, and contestable while also providing alternative paths. The easiest path through an SSI tool must be the path most aligned with the user’s sovereignty, not the one most profitable for the system surrounding them. This discipline is distinct from, and auditable separately from, the Consent and Protection principles: interface is infrastructure, and it must be reviewed as such, including the algorithmic mechanisms that shape interactions. Consent references this principle for the named interface-level disciplines; Protection references it for the behavioral dimension of coercion. Anti-Coercive Design is the single locus where those concerns are specified, so that compliance is testable in one place rather than scattered, with criteria that make anti-patterns of influence and asymmetry visible to review. [16]
Footnotes — Rationale for Each Change Since 2016
[1] Existence (formerly 2016 #1) — I’ve added a refusal clause saying AI agents don’t possess Existence in the sense this principle protects. I think this is right, but I can see an argument that foreclosing it at the foundation makes the document read as reactive against AI rather than for natural persons. Is there a framing that does the same protective work without sounding like a declaration of war on a category of entities that might, in some form, need protection of their own someday?
[2] Inalienability (new since 2016) — The line I’m least sure about is “make credential sale technically impossible through non-transferable cryptographic binding.” It’s the most concrete commitment in the principle, which makes it the most testable — and the most likely to age badly if the cryptographic landscape shifts. Should the principle name the property (non-transferability) and leave the mechanism open, or is naming the mechanism part of what makes it bite?
[3] Cognitive Liberty (new since 2016) — The four-rights structure (self-determination, privacy, integrity, continuity) came out of the legal and bioethics literature, but I’m not sure it’s the right decomposition for an SSI context. In particular, “psychological continuity” overlaps heavily with what Persistence now does. Is Cognitive Liberty actually three rights plus a shared border with Persistence, and if so, should the overlap be named explicitly rather than duplicated?
[4] Control (formerly 2016 #2) — The strategic-versus-transactional distinction is the newest move here and the one I’m most worried about. It’s trying to do real work — acknowledging that per-transaction review breaks when agents act at machine cadence — but “strategic authorization” is vague enough that it could be invoked to justify almost anything. What’s the minimum specificity needed to stop this clause from being weaponized by the platforms it’s meant to constrain?
[5] Consent (formerly 2016 #8) — Calibrated revocability (“not prohibitive, not always free”) is the hardest balance in the whole document. Too much friction and it’s a trap; too little and people can’t credibly commit to anything. I’ve tried to thread this with “emergency exit always preserved against abuse,” but emergency-exit definitions are notoriously gameable. Does anyone have language from credible-commitment theory, or from domestic-abuse policy work, that handles this better than I have?
[6] Access (formerly 2016 #3) — Access to inferred data sounds clean as a principle but becomes strange in practice: a reputation score that’s recomputed on every query isn’t really a stored attribute you can access, it’s a function output. Does the principle need to distinguish between stored inferences (which can be disclosed) and computed ones (which can only be characterized)? Or does insisting on access force systems to materialize their inferences, which might itself be a win?
[7] Relational Autonomy (new since 2016) — I put the graph-privacy / cryptographic-hardness requirement here rather than in Minimalization because it’s about relationships rather than disclosure. But I keep going back and forth on whether that’s a real distinction or a taxonomy I’m imposing for neatness. Would the principle be stronger if graph privacy lived in Minimalization and Relational Autonomy focused purely on the consent-and-exit mechanics?
[8] Stewardship (new since 2016) — The hardest edge here is adulthood transition: a child whose identity was stewarded from age three now has to receive a ten-year shadow-identity and decide what to do with it. The principle says stewards must provide “credential reissuance assistance,” but that doesn’t begin to capture what the psychological and practical handoff actually involves. Is there a maturity-rights literature this should be borrowing from more directly, or is the handoff problem genuinely new?
[9] Persistence (formerly 2016 #5) — “Must survive paradigm rotation, not just key rotation” is a strong claim and I’m not sure it’s achievable. Post-quantum migration is already proving that some identities won’t survive the transition. Should the principle be reframed as “when paradigm rotation fails, persistence must fail in favor of the person rather than the system” — i.e., a loss of cryptographic continuity must not be allowed to erase the social continuity of the person beneath it?
[10] Portability (formerly 2016 #6) — The statelessness extension — “where the issuing polity ends, the person does not” — is the line I’m proudest of and also the one with the least operational content. Community-issuer fallbacks are a real mechanism, but they presuppose communities with the capacity to issue, which many stateless populations lack. Is there a version of this that doesn’t quietly assume the existence of the civil society it’s trying to protect?
[11] Interoperability (formerly 2016 #7) — The tension between “interoperability” and “pluralism” is real and I’ve only papered over it. A system with genuinely plural vocabularies bridged by open protocols still has to agree on something at the bridging layer, and that something becomes a point of capture. Is there a worked example of pluralistic interoperability that’s actually survived scaling, or is this aspirational in a way the principle should acknowledge?
[12] Minimalization (formerly 2016 #9) — I extended this to inferred data but I don’t have a good answer for the composition problem: a series of individually minimal disclosures can aggregate into a maximally revealing profile. “Anti-aggregation constraints” is the phrase I used, but it’s a gesture at a research direction rather than a principle. Should this principle be split into disclosure (minimal at each step) and aggregation (bounded across steps), or is that a distinction that won’t survive contact with real systems?
[13] Transparency (formerly 2016 #4) — “Incentives must be transparent” is easy to write and hard to operationalize. Default-issuer arrangements are knowable; compensation flows inside a consortium often aren’t, and demanding disclosure of them can push systems toward jurisdictions that don’t require it. Is there a transparency-of-incentives regime from another domain (finance, academic publishing, political advertising) that actually works well enough to borrow from?
[14] Protection (formerly 2016 #10) — The coercion taxonomy (structural, economic, normative, cognitive, with behavioral delegated to #16) is probably the right decomposition but I’m aware I’m making up a framework rather than citing one. Is there an existing coercion literature — legal, political-theoretic, or sociological — that already names these dimensions better? If so, borrowing its vocabulary would make this section much stronger.
[15] Equity (new since 2016) — The non-digital-alternatives requirement is the operational heart of this principle, but it’s increasingly in tension with the deployment reality of mandatory digital ID regimes. Writing “non-digital alternatives must be preserved” into a principles document is easy; getting it honored in a jurisdiction that has already digitized benefit access is another matter entirely. Does this principle need a companion clause about what SSI systems should refuse to do when deployed into coercive environments?
[16] Anti-Coercive Design (new since 2016) — I extended this principle late to cover ranking and recommendation algorithms, not just interface elements, and I’m not sure the extension holds together. “Interface is infrastructure” was a clean claim. “Interface plus algorithmic shaping plus recommendation is infrastructure” is true but loses the compression. Should Anti-Coercive Design be split into an interface-level principle and an algorithmic-shaping principle, or is the whole point that they can’t be meaningfully separated in modern systems?
Notes on what was not changed, and why
The 2016 language is preserved wherever it survives.
Unmarked text in this document is the 2016 original, verbatim, with the single exception of updated pronouns. A reader who wants only the 2016 experience can read the unmarked portions of principles 1, 4, 5, 6, 9, 10, 11, 12, 13, and 14 and reconstruct the original ten intact.
The original ten are still here, just reorganized.
The original 2016 list’s structure — ten items in a single sequence — has been traded for sixteen items across four layers, because the layering carries structural information the flat list could not. But the original ten remain fully recoverable as a subset, and can be presented that way in contexts where the 2016 framing is what the audience needs.
Some frontiers are deliberately deferred.
Proof-of-personhood in adversarial environments. AI correlation and the identity consequences of large language models. Intergenerational and post-mortem identity. Non-individual subjects — DAOs, communities, rights-of-nature claims. Biometric drift through bodily change and aging. Ambient identity recognition. Provenance of identity artifacts. The ecological cost of persistent infrastructure. Each of these is a live concern and each would justify either a new principle or a companion document. They are held out of this revision because the goal here is to update the principles that have existed in some form across the 2016-to-2026 arc, not to colonize the space of everything SSI might eventually need to address. They are candidates for a future revision (2031?).