Governance decisions have real-world consequences. Payments are approved, patients are escalated, infrastructure is modified, AI agents are permitted to act. These decisions must be reproducible - the same inputs must always produce the same outcome, regardless of when or where the decision was made.
AI systems are probabilistic. Governance cannot be.
| AI Systems | Governance Systems |
|---|---|
| Prediction | Admissibility |
| Classification | Enforcement |
| Recommendation | Escalation |
| Generation | Replay protection |
| Optimization | Independent verification |
## What non-deterministic governance breaks
If governance depends on probabilistic systems, the same request can produce different governance outcomes at different times. A fraud model that returns 0.87 probability today may return 0.91 after a model update. The same transaction gets approved in one context and rejected in another. This creates:

- Inconsistent decisions - identical cases are handled differently
- Unverifiable authority - you cannot reconstruct why a decision was made
- Audit breakdown - historical decisions cannot be independently verified
- Operational ambiguity - teams cannot predict or explain governance behavior
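The drift described above can be made concrete with a small sketch. The model functions, threshold, and transaction fields below are hypothetical stand-ins, not anything from a real system:

```python
# Hypothetical illustration: an identical transaction crosses a fixed
# approval threshold under one model version but not the next.
def model_v1(tx: dict) -> float:
    return 0.87  # stand-in for a deployed fraud model's score

def model_v2(tx: dict) -> float:
    return 0.91  # the "same" model after a retraining update

THRESHOLD = 0.90
tx = {"amount": 5000, "currency": "EUR"}

decision_before = "reject" if model_v1(tx) >= THRESHOLD else "approve"
decision_after = "reject" if model_v2(tx) >= THRESHOLD else "approve"
# Identical input, different governance outcome - the failure mode above.
assert decision_before != decision_after
```

Nothing about the transaction changed; only the model did. That is exactly the variance deterministic governance rules out.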
## What determinism requires
Reproducibility - the same policy and signals always produce the same decision. No runtime variance, no timestamp dependence, no environmental drift.

The `execution_fingerprint` is derived from the inputs themselves. If you submit the same policy and signals twice, you get the same fingerprint - and the second submission is rejected by replay protection. There is no way to produce two attestations for the same inputs.
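A minimal sketch of that fingerprint-plus-replay behavior, using sorted-key JSON as a stand-in canonical form (the function names and the in-memory `seen` set are illustrative, not Parmana's implementation):

```python
import hashlib
import json

def execution_fingerprint(signals: dict) -> str:
    # Stand-in canonical form: sorted-key, compact JSON. The real
    # canonicalization also normalizes strings (see the invariants table).
    canonical = json.dumps(signals, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

seen: set = set()

def submit(signals: dict) -> str:
    fp = execution_fingerprint(signals)
    if fp in seen:
        # Same inputs → same fingerprint → the second attempt is refused.
        raise RuntimeError("replay rejected: fingerprint already attested")
    seen.add(fp)
    return fp

fp1 = submit({"score": 0.87, "region": "EU"})
```

Because the fingerprint is a pure function of the inputs, key order and submission time are irrelevant: resubmitting `{"region": "EU", "score": 0.87}` yields the same fingerprint and is rejected.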
Independent verification - any party with the public key can verify a historical decision. The verification produces the same result every time. It does not require access to the system that produced the decision.
Fail-closed - when governance guarantees cannot be established, execution is denied. There is no “try anyway” mode, no fallback to a less-strict check, no silent degradation. Uncertainty is always an explicit rejection with a structured error code.
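The fail-closed rule can be sketched as follows. The error codes, policy fields, and `Decision` shape here are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    allowed: bool
    code: str  # structured error code; names are illustrative

def evaluate(policy, signals: dict) -> Decision:
    try:
        if policy is None:
            return Decision(False, "POLICY_UNAVAILABLE")
        if not all(k in signals for k in policy["required_signals"]):
            # A missing signal is an explicit rejection, not a skipped check.
            return Decision(False, "SIGNAL_MISSING")
        allowed = signals["amount"] <= policy["max_amount"]
        return Decision(allowed, "OK" if allowed else "LIMIT_EXCEEDED")
    except Exception:
        # Any failure to establish the guarantee → deny.
        # There is no "try anyway" branch and no less-strict fallback.
        return Decision(False, "VERIFICATION_FAILED")

policy = {"required_signals": ["amount"], "max_amount": 1000}
```

Every path out of `evaluate` is either an admissible decision or a structured rejection; uncertainty never degrades into a silent pass.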
## Confidence scores are not governance
AI confidence scores measure probability, not admissibility. Even at 99% confidence, a governance decision must be:

- Reproducible from the same inputs
- Traceable to a specific policy rule
- Verifiable by an independent party
- Immune to model updates between the decision and its verification
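The last two properties can be sketched together: an attestation records the rule, the policy version, and a fingerprint of the inputs, so any party can re-verify it later by recomputation alone. Field names here are assumptions, not Parmana's actual record schema:

```python
import hashlib
import json

def fingerprint(signals: dict) -> str:
    canonical = json.dumps(signals, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Attestation captured at decision time (illustrative fields).
record = {
    "rule_id": "payments.max_amount",
    "policy_version": "1.4.2",
    "execution_fingerprint": fingerprint({"confidence": 0.99, "amount": 120}),
    "outcome": "approve",
}

def verify(record: dict, signals: dict) -> bool:
    # Verification depends only on the recorded inputs. A model update
    # after the decision cannot change the result of this check.
    return record["execution_fingerprint"] == fingerprint(signals)
```

The 0.99 confidence value is just another signal in the fingerprinted inputs; the outcome is traceable to `rule_id`, not to the score alone.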
## The role of AI
Deterministic governance does not replace AI. AI systems produce signals - typed, schema-declared inputs that policy rules evaluate against. The AI may have complex internal reasoning; the governance system consumes stable, explicit representations of that reasoning.

## How Parmana enforces determinism
| Invariant | Enforcement |
|---|---|
| Canonical input form | `canonicalize()` - sorted keys, NFC-normalized strings, CRLF normalized |
| No wall-clock in signed payload | Timestamps exist as metadata only, outside the signature |
| No environmental state | Policies are loaded from versioned, content-addressed bundles |
| No randomness in fingerprint | `execution_fingerprint = sha256(canonicalize(signals))` |
| Schema-version-pinned evaluation | Attestation records exact schema version used |
| Fail-closed on uncertainty | Any verification failure → explicit rejection |
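The canonical-input-form invariants in the table can be sketched with the standard library. The traversal and serialization details below are assumptions; only the stated invariants (sorted keys, NFC strings, CRLF collapsed) come from the table:

```python
import hashlib
import json
import unicodedata

def canonicalize(value):
    # Sorted keys, NFC-normalized strings, CRLF collapsed to LF.
    if isinstance(value, str):
        return unicodedata.normalize("NFC", value.replace("\r\n", "\n"))
    if isinstance(value, dict):
        return {k: canonicalize(value[k]) for k in sorted(value)}
    if isinstance(value, list):
        return [canonicalize(v) for v in value]
    return value

def fingerprint(signals: dict) -> str:
    payload = json.dumps(canonicalize(signals), separators=(",", ":"))
    return hashlib.sha256(payload.encode()).hexdigest()

# "é" composed vs decomposed, CRLF vs LF: one canonical form, one fingerprint.
a = fingerprint({"note": "caf\u00e9\r\nok"})
b = fingerprint({"note": "cafe\u0301\nok"})
assert a == b
```

Two byte-different but semantically identical inputs reduce to the same canonical form, which is what makes the fingerprint, replay protection, and independent verification deterministic.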
## See also
- Governed Signals - how signals bridge AI and governance
- Trust Portability - how determinism enables independent verification
- Portable Verification - what verification looks like in practice
- Replay Protection - how fingerprints prevent duplicate execution