
Governance decisions have real-world consequences. Payments are approved, patients are escalated, infrastructure is modified, AI agents are permitted to act. These decisions must be reproducible - the same inputs must always produce the same outcome, regardless of when or where the decision was made.

AI systems are probabilistic. Governance cannot be.

| AI Systems | Governance Systems |
| --- | --- |
| Prediction | Admissibility |
| Classification | Enforcement |
| Recommendation | Escalation |
| Generation | Replay protection |
| Optimization | Independent verification |
AI systems estimate possibilities. Governance systems determine whether execution is permitted. These are different system responsibilities - and conflating them produces systems that cannot be independently verified.

What non-deterministic governance breaks

If governance depends on probabilistic systems, the same request can produce different governance outcomes at different times. A fraud model that returns 0.87 probability today may return 0.91 after a model update. The same transaction gets approved in one context and rejected in another. This creates:
  • Inconsistent decisions - identical cases are handled differently
  • Unverifiable authority - you cannot reconstruct why a decision was made
  • Audit breakdown - historical decisions cannot be independently verified
  • Operational ambiguity - teams cannot predict or explain governance behavior

What determinism requires

Reproducibility - the same policy and signals always produce the same decision. No runtime variance, no timestamp dependence, no environmental drift.
same signals + same policy version → same decision
                                    always
                                    everywhere
                                    reproducibly
Replay safety - the execution_fingerprint is derived from the inputs themselves. If you submit the same policy and signals twice, you get the same fingerprint - and the second submission is rejected by replay protection. There is no way to produce two attestations for the same inputs.

Independent verification - any party with the public key can verify a historical decision. The verification produces the same result every time. It does not require access to the system that produced the decision.

Fail-closed - when governance guarantees cannot be established, execution is denied. There is no “try anyway” mode, no fallback to a less-strict check, no silent degradation. Uncertainty is always an explicit rejection with a structured error code.

Confidence scores are not governance

AI confidence scores measure probability, not admissibility. Even at 99% confidence, a governance decision must be:
  • Reproducible from the same inputs
  • Traceable to a specific policy rule
  • Verifiable by an independent party
  • Immune to model updates between the decision and its verification
Probability does not meet these requirements. Deterministic rule evaluation does.
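The contrast can be made concrete with a minimal sketch. The rule names, fields, and thresholds below are invented for illustration; the point is that a pinned policy evaluates signals as a pure function, so the decision is reproducible and every denial is traceable to a specific rule.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class Rule:
    rule_id: str     # traceability: which rule produced the outcome
    field: str       # which declared signal it evaluates
    max_value: float

def evaluate(policy: list, signals: dict) -> Tuple[str, Optional[str]]:
    """Deterministic evaluation: outcome depends only on policy + signals."""
    for rule in policy:
        if signals[rule.field] > rule.max_value:
            return ("deny", rule.rule_id)
    return ("allow", None)

# Hypothetical pinned policy version.
policy_v3 = [
    Rule("fraud-threshold", "fraud_score", 0.90),
    Rule("amount-limit", "amount_cents", 500_00),
]
signals = {"fraud_score": 0.87, "amount_cents": 125_00}

# The model's 0.87 is just a signal value. The decision comes from the
# pinned threshold, so replaying the same inputs can never drift - even
# if a newer model would now score the same transaction 0.91.
assert evaluate(policy_v3, signals) == ("allow", None)
assert evaluate(policy_v3, {**signals, "fraud_score": 0.91}) == ("deny", "fraud-threshold")
```

A model update changes future signal values; it cannot change what a historical policy version decided about historical signals.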

The role of AI

Deterministic governance does not replace AI. AI systems produce signals - typed, schema-declared inputs that policy rules evaluate against. The AI may have complex internal reasoning. The governance system consumes stable, explicit representations of that reasoning.
AI system produces signals → governance evaluates signals deterministically
The separation is what makes governance trustworthy. Signals are the interface between probabilistic intelligence and deterministic authority.
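A sketch of that interface, with an invented schema (the field names and types below are illustrative, not Parmana's signal API): the model's output is projected onto declared, typed fields, and anything outside the schema never reaches governance.

```python
from typing import Any, Dict

# Hypothetical signal schema - the declared contract between the AI system
# and the governance system.
SIGNAL_SCHEMA: Dict[str, type] = {
    "fraud_score": float,
    "amount_cents": int,
    "customer_tier": str,
}

def to_signals(model_output: Dict[str, Any]) -> Dict[str, Any]:
    """Project probabilistic model output onto the declared schema.
    Undeclared fields are dropped; missing or mistyped fields are rejected."""
    signals = {}
    for name, expected in SIGNAL_SCHEMA.items():
        if name not in model_output:
            raise ValueError(f"missing signal: {name}")
        value = model_output[name]
        if not isinstance(value, expected):
            raise TypeError(f"signal {name} must be {expected.__name__}")
        signals[name] = value
    return signals

raw = {
    "fraud_score": 0.87,
    "amount_cents": 125_00,
    "customer_tier": "gold",
    "reasoning_trace": "complex internal reasoning the policy never sees",
}
signals = to_signals(raw)
assert "reasoning_trace" not in signals
```

Whatever happens inside the model, governance only ever evaluates the stable, typed projection.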

How Parmana enforces determinism

| Invariant | Enforcement |
| --- | --- |
| Canonical input form | canonicalize() - sorted keys, NFC-normalized strings, CRLF normalized |
| No wall-clock in signed payload | Timestamps exist as metadata only, outside the signature |
| No environmental state | Policies are loaded from versioned, content-addressed bundles |
| No randomness in fingerprint | execution_fingerprint = sha256(canonicalize(signals)) |
| Schema-version-pinned evaluation | Attestation records exact schema version used |
| Fail-closed on uncertainty | Any verification failure → explicit rejection |
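The canonical-input and no-randomness invariants can be sketched together. This is an illustrative approximation, not Parmana's actual canonicalize(); it applies the normalizations named in the table (sorted keys, NFC-normalized strings, CRLF normalized to LF) before hashing, so superficially different encodings of the same signals collapse to one fingerprint.

```python
import hashlib
import json
import unicodedata

def canonicalize(value):
    """Recursively normalize: CRLF -> LF, strings to Unicode NFC,
    dict keys sorted. Encoding details here are assumptions."""
    if isinstance(value, str):
        return unicodedata.normalize("NFC", value.replace("\r\n", "\n"))
    if isinstance(value, dict):
        return {canonicalize(k): canonicalize(v) for k, v in sorted(value.items())}
    if isinstance(value, list):
        return [canonicalize(v) for v in value]
    return value

def fingerprint(signals: dict) -> str:
    canonical = json.dumps(canonicalize(signals), sort_keys=True,
                           separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Composed vs decomposed "é", CRLF vs LF, and key order all collapse
# to the same canonical form, hence the same fingerprint.
a = {"note": "caf\u00e9\r\nline2", "amount": 100}
b = {"amount": 100, "note": "cafe\u0301\nline2"}
assert fingerprint(a) == fingerprint(b)
```

Without this step, two byte-wise different but semantically identical inputs would produce different fingerprints, breaking both reproducibility and replay protection.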

See also