

Canonical serialization is the mechanism that makes every hash, signature, and fingerprint in the governance pipeline reproducible across languages, platforms, operating systems, and time. Without it, the same data produces different bytes on different systems - and different bytes produce different signatures, different fingerprints, and broken verification. Every guarantee in Parmana - deterministic decisions, portable verification, replay protection, independent auditability - depends on canonical serialization being correct.

Why ordinary JSON serialization fails

Standard JSON.stringify() in JavaScript does not specify key ordering. Different engines, different insertion orders, and different serialization libraries may produce different byte sequences for logically identical objects:
// These are logically identical - same data, different bytes
JSON.stringify({ b: 2, a: 1 })  // '{"b":2,"a":1}'
JSON.stringify({ a: 1, b: 2 })  // '{"a":1,"b":2}'
If signing uses one order and verification uses another, the signature fails - not because the data was tampered with, but because two correct implementations produced different bytes. For signals fingerprinting:
// Two requests with the same signals - different fingerprints if order varies
sha256(JSON.stringify({ risk_score: 87, amount: 500 }))  // fp-abc
sha256(JSON.stringify({ amount: 500, risk_score: 87 }))  // fp-def  ← different!
The second request would not be detected as a replay of the first, even though the governance inputs are identical. Replay protection breaks.
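The divergence can be reproduced directly with Node's built-in hashing. This is an illustrative sketch, not Parmana code; sha256hex is a local helper defined here, not part of any Parmana package:

```typescript
import { createHash } from "node:crypto";

// Local helper: SHA-256 over UTF-8 bytes, hex-encoded.
const sha256hex = (s: string): string =>
  createHash("sha256").update(s, "utf8").digest("hex");

// Logically identical signals, serialized in two insertion orders.
const a = JSON.stringify({ risk_score: 87, amount: 500 }); // '{"risk_score":87,"amount":500}'
const b = JSON.stringify({ amount: 500, risk_score: 87 }); // '{"amount":500,"risk_score":87}'

// Different bytes in, different digests out - the fingerprints diverge.
console.log(sha256hex(a) === sha256hex(b)); // false
```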

What canonical serialization requires

Parmana’s canonicalize() function resolves this by producing a stable, deterministic byte sequence from any input.

Sorted keys - all object keys are sorted recursively at every nesting level:
canonicalize({ z: 3, a: 1, m: { b: 2, a: 1 } })
// '{"a":1,"m":{"a":1,"b":2},"z":3}'
Compact form - no optional whitespace that varies by formatting choice or implementation.

Unicode NFC normalization - string values are normalized to NFC before serialization. The string é can be encoded as a single code point U+00E9 or as e + combining acute accent U+0301. After NFC normalization, both become the single code point, producing identical bytes.

CRLF normalization - Windows-style \r\n line endings are converted to Unix-style \n before serialization. A policy file edited on Windows and a policy file edited on Linux produce the same canonical form.

Preserved array order - array element order is semantically meaningful and must not be sorted. Policy rules, for example, are evaluated in declaration order - reordering them would change governance outcomes.
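A minimal sketch of a function with these properties might look like the following. This is illustrative only, not the @parmanasystems/bundle implementation, and it ignores edge cases such as undefined values and number formatting:

```typescript
// Sketch: sorted keys, compact form, NFC normalization, CRLF -> LF,
// array order preserved. Not the real @parmanasystems/bundle code.
function canonicalize(value: unknown): string {
  if (typeof value === "string") {
    // NFC-normalize and convert CRLF to LF before serializing.
    return JSON.stringify(value.normalize("NFC").replace(/\r\n/g, "\n"));
  }
  if (Array.isArray(value)) {
    // Array order is semantically meaningful: preserve it.
    return "[" + value.map(canonicalize).join(",") + "]";
  }
  if (value !== null && typeof value === "object") {
    // Sort keys recursively at every nesting level; emit compact form.
    const obj = value as Record<string, unknown>;
    const entries = Object.keys(obj)
      .sort()
      .map((k) => JSON.stringify(k) + ":" + canonicalize(obj[k]));
    return "{" + entries.join(",") + "}";
  }
  return JSON.stringify(value); // numbers, booleans, null
}

console.log(canonicalize({ z: 3, a: 1, m: { b: 2, a: 1 } }));
// '{"a":1,"m":{"a":1,"b":2},"z":3}'
```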

Where canonical serialization is used

Every cryptographic operation in the governance pipeline uses canonical serialization:
Operation | What is canonicalized
execution_fingerprint derivation | Input signals
Token signing | The full ExecutionToken object
Attestation signing | The attestation payload
Bundle manifest hashing | Policy files
Runtime manifest hashing | The runtime manifest definition
Release manifest signing | The release manifest
The same canonicalize() function is used throughout. There is no variant implementation with different behavior - any divergence would produce different bytes and break verification.

The fingerprint depends on canonical form

The execution_fingerprint is the replay protection key:
const execution_fingerprint = sha256(canonicalize(signals));
Because canonicalize() sorts keys, any two requests with logically identical signals produce identical fingerprints - regardless of key insertion order, JavaScript engine, or platform:
// Both produce the same fingerprint
canonicalize({ risk_score: 87, amount: 500 })  // '{"amount":500,"risk_score":87}'
canonicalize({ amount: 500, risk_score: 87 })  // '{"amount":500,"risk_score":87}'

sha256('{"amount":500,"risk_score":87}') === sha256('{"amount":500,"risk_score":87}')
// → identical fingerprint → replay detected on second submission ✓
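A hedged sketch of how replay detection can build on such a fingerprint. Assumptions: sorted-key JSON stands in for canonicalize(), and a Set stands in for whatever replay store the runtime actually uses:

```typescript
import { createHash } from "node:crypto";

// Stand-in for canonicalize(): sorted-key, compact JSON (flat objects only).
const canonicalize = (o: Record<string, number>): string =>
  JSON.stringify(
    Object.fromEntries(
      Object.entries(o).sort(([a], [b]) => (a < b ? -1 : a > b ? 1 : 0))
    )
  );

// execution_fingerprint = sha256(canonicalize(signals))
const fingerprint = (signals: Record<string, number>): string =>
  createHash("sha256").update(canonicalize(signals), "utf8").digest("hex");

// Hypothetical replay store: a Set of previously seen fingerprints.
const seen = new Set<string>();
const submit = (signals: Record<string, number>): "accepted" | "replay" => {
  const fp = fingerprint(signals);
  if (seen.has(fp)) return "replay";
  seen.add(fp);
  return "accepted";
};

console.log(submit({ risk_score: 87, amount: 500 })); // "accepted"
console.log(submit({ amount: 500, risk_score: 87 })); // "replay" - same canonical bytes
```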

The signature depends on canonical form

The Ed25519 signature is computed over the canonical byte representation of the payload:
const canonical = canonicalize(token);       // sorted keys, compact JSON
const signature = signer.sign(canonical);    // Ed25519 over UTF-8 bytes
At verification time, the verifier re-canonicalizes the payload and checks the signature:
const canonical = canonicalize(attestation.payload);
const valid = verifier.verify(canonical, attestation.signature);
If the payload were re-canonicalized with a different implementation - different key order, different Unicode normalization - valid would be false even for a legitimate attestation. One canonical implementation is the invariant.
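The sign-then-verify round trip can be sketched with Node's built-in Ed25519 support. Assumptions: canonicalize() is stubbed as sorted-key JSON over a flat object, and the payload shape is invented for illustration:

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Stand-in for canonicalize(): an array replacer makes JSON.stringify
// emit keys in the given (sorted) order, in compact form.
const canonicalize = (o: Record<string, unknown>): string =>
  JSON.stringify(o, Object.keys(o).sort());

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const payload = { decision: "allow", risk_score: 87 }; // hypothetical payload
const canonical = Buffer.from(canonicalize(payload), "utf8");

// Ed25519 has no separate digest step: pass null as the algorithm.
const signature = sign(null, canonical, privateKey);

// The verifier re-canonicalizes the payload and checks the signature
// over the exact same bytes.
const valid = verify(null, Buffer.from(canonicalize(payload), "utf8"), publicKey, signature);
console.log(valid); // true
```

Any byte-level divergence at verification time, such as a different key order, makes the same check return false.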

What breaks without canonical serialization

Failure | Consequence
Multiple canonicalize() implementations with different behavior | Signing and verification use different bytes - all signatures fail
No Unicode NFC normalization | é as U+00E9 and é as U+0065 + U+0301 produce different hashes - same content, different fingerprints
Order-dependent key serialization | Two requests with the same signals produce different fingerprints - replay not detected
CRLF not normalized | A policy edited on Windows has a different bundle hash than the same policy on Linux - bundle verification fails
Pretty-printing vs compact | Signing with spaces, verifying without - signature fails
Each of these is a trust portability failure. Governance proof that cannot be independently reproduced is not governance proof.

Canonical serialization across the ecosystem

Because canonical serialization uses standard sorted JSON with NFC-normalized strings, any implementation - in any language - can produce the same bytes given the same input. A verifier written in Python, Go, or Java can verify a Parmana attestation produced by the TypeScript runtime, provided it implements the same canonicalization rules. This is what makes portable verification possible. The canonical form is the shared language between the signer and any verifier, across all implementations.

The @parmanasystems/bundle package

The canonical serialization implementation lives in @parmanasystems/bundle:
import { canonicalize, sha256 } from "@parmanasystems/bundle";

const canonical = canonicalize({ z: 3, a: 1 });
// '{"a":1,"z":3}'

const hash = sha256(canonical);
// deterministic SHA-256 hex digest
This package is used internally by @parmanasystems/governance, @parmanasystems/verifier, and @parmanasystems/execution. Consuming it directly is rarely necessary for application code.

See also