Regulators are no longer satisfied with assertions of compliance. The question is no longer “do you have a governance policy?” It is “can you prove you followed it?” - and that proof must be independent of the infrastructure that made the decision. This document describes how Parmana’s deterministic attestation model addresses the specific technical requirements of key regulatory frameworks: the EU AI Act, NIST AI RMF, SR 11-7 (Federal Reserve model risk management), HIPAA, SOC 2 Type II, and ISO 27001. For each framework, we identify the specific requirements and show how Parmana’s architecture produces the evidence those requirements demand.
What Regulators Actually Need
Across all major regulatory frameworks, five technical requirements emerge consistently:
- Decision audit trail. A complete record of what decision was made, when, by what rules, on what inputs, and with what outcome. The record must be structured and queryable, not a free-form log.
- Tamper evidence. Proof that the audit record was not altered retroactively. A log entry that can be modified by an administrator is not tamper-evident. The tamper-evidence mechanism must be cryptographic, not procedural.
- Independent verification. An auditor - whether internal, external, or regulatory - must be able to verify the accuracy of the audit record without access to the operator’s infrastructure. If verification requires a database query against the operator’s systems, the auditor is dependent on the operator’s cooperation and system availability.
- Long-term retention. Decisions must remain verifiable years - sometimes decades - after they were made. The verification mechanism cannot depend on infrastructure that may not exist in the future.
- Explainability. The audit record must contain not just the outcome (approved/denied) but the reason: which rule matched, why that rule applied, what threshold was crossed. This is the “explainability” requirement that appears across AI governance frameworks.

Parmana’s ExecutionAttestation is designed to satisfy all five requirements simultaneously, in a format that is self-contained, cryptographically verifiable, and infrastructure-independent.
How Parmana Addresses Each Requirement
Decision Audit Trail
Every call to executeFromSignals produces an ExecutionAttestation - a structured JSON record of the decision: its identifier, the policy reference and version, the evaluated signals, the decision outcome and reason, and the runtime hash.
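The authoritative schema is defined by the runtime’s type declarations; the sketch below is illustrative only, assembled from the fields referenced throughout this document (executionId, policyVersion, runtimeHash, decision.reason, decision.ruleId, requires_override), with naming and exact types assumed.

```typescript
// Illustrative sketch of an ExecutionAttestation - field names are drawn from
// this document; the runtime's published type definitions are authoritative.
interface ExecutionAttestation {
  executionId: string;               // unique identifier for this governed execution
  executionFingerprint: string;      // fingerprint referenced in communications with affected persons
  policyId: string;                  // policy that governed the decision
  policyVersion: string;             // exact policy version in force at decision time
  runtimeHash: string;               // exact runtime version that produced the result
  signalsHash: string;               // hash of the evaluated input signals
  decision: {
    outcome: "approved" | "denied";  // decision outcome
    ruleId: string;                  // specific rule within the policy that matched
    reason: string;                  // human-readable explanation of the decision
    requiresOverride: boolean;       // true when human review is required before execution
  };
  timestamp: string;                 // when the decision was made (ISO 8601, assumed)
  signature: string;                 // Ed25519 signature over the canonical governance payload
}
```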
The @parmanasystems/audit-db package provides a PostgreSQL-backed structured store with query support for retaining and searching attestation records.
Tamper Evidence
The Ed25519 signature covers the canonical serialization of the governance payload: the execution fingerprint, policy reference, decision outcome, and runtime hash. The canonical form uses sorted JSON keys, NFC-normalized strings, CRLF-to-LF normalization, and compact encoding. Any modification to any field in the signed payload - the decision, the policy version, the signals hash, the reason - invalidates the signature. Verification requires only the attestation and the corresponding public key. An auditor can verify that the record is unmodified without any access to the operator’s infrastructure. The tamper-evidence mechanism is cryptographic, not procedural. It does not depend on access controls, append-only storage, or organizational policy. It depends on the mathematical impossibility of forging an Ed25519 signature without the private key.
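A minimal sketch of those canonicalization rules, for intuition only - the runtime’s own canonicalizer is authoritative and may handle edge cases differently:

```typescript
// Canonical JSON sketch: sorted keys, NFC-normalized strings,
// CRLF-to-LF normalization, compact (whitespace-free) encoding.
function canonicalize(value: unknown): string {
  if (typeof value === "string") {
    return JSON.stringify(value.normalize("NFC").replace(/\r\n/g, "\n"));
  }
  if (value === null || typeof value === "number" || typeof value === "boolean") {
    return JSON.stringify(value);
  }
  if (Array.isArray(value)) {
    return `[${value.map(canonicalize).join(",")}]`;
  }
  // Objects: sort keys, then recurse over the values.
  const entries = Object.entries(value as Record<string, unknown>)
    .sort(([a], [b]) => (a < b ? -1 : a > b ? 1 : 0))
    .map(([k, v]) => `${JSON.stringify(k)}:${canonicalize(v)}`);
  return `{${entries.join(",")}}`;
}
```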
Independent Verification
The verification operation is a pure cryptographic function: given the governance payload, the signature, and the public key, it returns valid or invalid - no network calls, no database access, no dependency on the operator’s systems.
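As a sketch of what that looks like in practice - assuming the attestation carries its Ed25519 signature alongside the signed governance payload, and using Node’s built-in crypto module together with the canonicalize helper sketched above:

```typescript
import { createPublicKey, verify } from "node:crypto";

// Independent verification: only the attestation contents and the operator's
// public key are needed - no network calls, no access to the operator's systems.
function verifyAttestation(
  payload: Record<string, unknown>, // the signed governance payload from the attestation
  signatureBase64: string,          // Ed25519 signature carried in the attestation
  publicKeyPem: string              // operator's Ed25519 public key (PEM/SPKI)
): boolean {
  const canonical = Buffer.from(canonicalize(payload), "utf8");
  const key = createPublicKey(publicKeyPem);
  // For Ed25519, Node's crypto.verify is called with a null digest algorithm.
  return verify(null, canonical, key, Buffer.from(signatureBase64, "base64"));
}
```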
Long-Term Retention
Attestations remain verifiable as long as the public key is preserved. There is no dependency on the governance runtime, the operator’s infrastructure, or Parmana Systems as a company. The verification algorithm - Ed25519 over canonical JSON - is a stable, standardized specification. Ed25519 is specified in RFC 8032 and is supported by every major cryptographic library. The canonical JSON format is deterministic by construction. A decision signed today will be verifiable by any standards-compliant Ed25519 implementation in fifty years. Operators should preserve their public key alongside their attestation archives. The key is the only infrastructure dependency for long-term verification.
Explainability
The decision.reason field contains a human-readable explanation of why the decision was made: which threshold was crossed, which rule matched, what the relevant signal values were. The decision.ruleId field identifies the specific rule within the policy. The policyVersion field identifies the exact policy that was in force.
Together, these fields satisfy the “right to explanation” requirement: an affected party can be told not only the outcome but the reasoning, and that reasoning is cryptographically bound to the decision record.
EU AI Act Alignment
The EU AI Act (Regulation (EU) 2024/1689) imposes obligations on high-risk AI systems, including automated decision-making in credit, employment, education, access to services, and law enforcement. The relevant articles for technical compliance are:

Article 9 - Risk Management System. High-risk AI systems must implement a risk management system that identifies and manages risks throughout the lifecycle. Parmana’s policy-based governance provides the rule framework for risk management; attestations provide the audit evidence that risk management controls were applied to each decision.

Article 12 - Record-Keeping. High-risk AI systems must automatically log events to a level that enables verification of compliance. Parmana’s ExecutionAttestation is a structured event log that satisfies this requirement. The executionId, policyVersion, signals, and decision fields provide the complete record of each governed decision. Logs are tamper-evident and independently verifiable.
Article 13 - Transparency and Provision of Information. Deployers of high-risk AI systems must provide information to natural persons subject to AI decisions. Parmana’s decision.reason and decision.ruleId fields provide the structured explanation data that satisfies this requirement. The execution_fingerprint provides a unique identifier that can be referenced in communications with affected persons.
Article 14 - Human Oversight. High-risk AI systems must enable human oversight. Parmana’s decision.requires_override flag surfaces the governance layer’s determination that a decision requires human review before execution. This flag is part of the signed attestation - it cannot be suppressed without invalidating the signature.
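As an illustration of how a deployer might act on that flag, the sketch below routes flagged decisions to human review before anything executes. executeAction and queueForHumanReview are hypothetical placeholders, not part of Parmana’s API.

```typescript
// Hypothetical oversight hook - the two declared functions are placeholders,
// and ExecutionAttestation refers to the illustrative schema sketched earlier.
declare function queueForHumanReview(a: ExecutionAttestation): Promise<void>;
declare function executeAction(a: ExecutionAttestation): Promise<void>;

async function handleGovernedDecision(attestation: ExecutionAttestation): Promise<void> {
  if (attestation.decision.requiresOverride) {
    // The flag lives inside the signed payload, so it cannot be stripped
    // without invalidating the attestation's signature.
    await queueForHumanReview(attestation);
    return;
  }
  if (attestation.decision.outcome === "approved") {
    await executeAction(attestation);
  }
}
```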
Article 17 - Quality Management System. Providers must establish a quality management system including policies, procedures, and documentation. Parmana’s policy versioning, content-addressed bundle compilation, and signed bundle artifacts provide the documented policy management infrastructure required.
SOC 2 Type II
SOC 2 Type II examinations assess whether security controls operate effectively over a period of time (typically 6-12 months). The Trust Services Criteria most relevant to AI governance attestations are:

CC6 - Logical and Physical Access Controls. Parmana’s BYOI model ensures that signing keys remain within the operator’s key management infrastructure (AWS KMS, HashiCorp Vault, Azure Key Vault, or HSM). Keys are never transmitted to or stored by Parmana Systems. Access to keys is governed entirely by the operator’s key management controls, which are within the SOC 2 examination scope.

CC7 - System Operations. Parmana’s runtimeHash binds each attestation to the exact runtime version that produced it. Runtime upgrades are explicit version boundary events, detectable in the attestation record. Anomalous runtime behavior - unexpected changes in decision patterns - is detectable by auditing the policyVersion and runtimeHash fields across attestations.
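A sketch of that kind of boundary detection over an attestation archive, using the illustrative field names from the schema sketch above:

```typescript
// Walk attestations in time order and report every runtime or policy
// version boundary - unexpected boundaries warrant investigation.
function findVersionBoundaries(attestations: ExecutionAttestation[]): string[] {
  const findings: string[] = [];
  const ordered = [...attestations].sort((a, b) => a.timestamp.localeCompare(b.timestamp));
  for (let i = 1; i < ordered.length; i++) {
    const prev = ordered[i - 1];
    const curr = ordered[i];
    if (curr.runtimeHash !== prev.runtimeHash) {
      findings.push(`runtime changed at ${curr.timestamp}: ${prev.runtimeHash} -> ${curr.runtimeHash}`);
    }
    if (curr.policyVersion !== prev.policyVersion) {
      findings.push(`policy changed at ${curr.timestamp}: ${prev.policyVersion} -> ${curr.policyVersion}`);
    }
  }
  return findings;
}
```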
CC9 - Risk Mitigation. The fail-closed design of the replay protection mechanism is a direct risk mitigation control: if the replay store is unavailable, execution is blocked rather than proceeding ungoverned. This control is documented in the architecture and verifiable through the error codes and behavior specification.
SOC 2 auditors examining AI governance controls can use Parmana attestations as direct evidence that each decision was governed by a specific policy version, with tamper-evident audit records, independent of operator infrastructure access.
Financial Regulation: SR 11-7 and MAS TRM
SR 11-7 (Federal Reserve - Model Risk Management, 2011) establishes expectations for model validation and ongoing monitoring. For AI models making credit, trading, or risk decisions:
- Model inventory and documentation - Parmana’s policy versioning provides a versioned, content-addressed record of the exact rules governing each model’s decisions. The policy bundle hash is a cryptographic commitment to the exact model logic in force at any decision time.
- Validation evidence - The runtimeHash in each attestation identifies the exact runtime version. Validation tests can be keyed to specific runtime versions and the results stored alongside attestations, creating a traceable validation chain.
- Ongoing monitoring - Attestation records provide a complete, tamper-evident dataset for monitoring model performance over time. Decision pattern analysis, rate monitoring, and override frequency can be computed from attestation records without access to the underlying model (see the sketch after this list).
- Audit and review - Independent auditors and examiners can verify any attestation without access to the bank’s infrastructure. The examination process does not require system access - only the attestation records and public key.
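A sketch of ongoing-monitoring metrics computed purely from attestation records, again using the illustrative field names from the schema sketch above:

```typescript
// Approval and override rates derived from attestation records alone -
// no access to the underlying model is required.
function monitoringSummary(attestations: ExecutionAttestation[]) {
  const total = attestations.length;
  const approved = attestations.filter((a) => a.decision.outcome === "approved").length;
  const overrides = attestations.filter((a) => a.decision.requiresOverride).length;
  return {
    total,
    approvalRate: total > 0 ? approved / total : 0,
    overrideRate: total > 0 ? overrides / total : 0,
  };
}
```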
HIPAA Considerations
For healthcare AI systems making clinical decision support recommendations, HIPAA’s Security Rule (45 CFR Part 164) requires:

Audit controls (§164.312(b)). Systems must record and examine activity in systems containing PHI. Parmana attestations, when the signals include PHI-adjacent identifiers, constitute the audit record of decision activity. The tamper-evident signature provides the integrity assurance required.

Integrity controls (§164.312(c)). Electronic PHI must be protected against improper alteration or destruction. Parmana’s Ed25519 signatures over canonical payloads provide the integrity mechanism: any modification to the attestation record is detectable without infrastructure access.

Right of access (§164.524). Individuals have the right to access their PHI. When attestation records reference patient identifiers, they must be included in access request responses. The structured format of ExecutionAttestation supports this access.
Minimum necessary (§164.514(d)). Only the PHI necessary for the purpose should be included in signals. Parmana takes no position on signal contents - what goes into signals is a design decision for the operator. Best practice is to include only the signals required for the governance decision, not full patient records.
GDPR Considerations
The EU General Data Protection Regulation applies when signals include personal data of EU residents.

Article 22 - Automated Individual Decision-Making. Decisions based solely on automated processing that significantly affect individuals require specific safeguards. Parmana’s decision.reason field provides the structured explanation required for Article 22 compliance. The requires_override flag provides the mechanism for ensuring human involvement when required.
Article 17 - Right to Erasure. The “right to be forgotten” does not extend to audit records required for legal compliance. Governance attestations may be retained for the duration required by applicable regulation (SR 11-7, EU AI Act, etc.) even when the underlying personal data is erased.
Article 25 - Data Protection by Design. Parmana’s BYOI model is consistent with data protection by design principles: signals are processed within the operator’s infrastructure, no data is transmitted to Parmana Systems, and the operator controls data retention policies.
Data minimization. Include in signals only the data necessary for the governance decision. The policy schema enforces which signals are required - signals not declared in the policy schema are not evaluated, supporting data minimization practices.
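A sketch of data minimization at the call site, assuming the list of declared signal names can be read from the compiled policy bundle (the exact API for doing so is not covered in this document):

```typescript
// Pass only the signals the policy schema declares; everything else stays
// inside the caller's systems and never reaches the governance layer.
function minimizeSignals(
  allSignals: Record<string, unknown>,
  declaredSignals: string[] // assumed to come from the compiled policy bundle
): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(allSignals).filter(([name]) => declaredSignals.includes(name))
  );
}
```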
ISO 27001
ISO 27001 Annex A controls relevant to AI governance:

A.8.15 - Logging. Information systems should produce audit logs. Parmana’s attestation records are tamper-evident audit logs for AI decision systems, satisfying this control.

A.8.16 - Monitoring Activities. Anomalous behavior should be monitored and investigated. The runtimeHash field enables detection of unexpected runtime changes; decision rate monitoring enables detection of anomalous decision patterns.
A.5.14 - Information Transfer. Information transferred to external parties must be subject to agreements and security measures. Parmana’s BYOI model means governance data does not leave the operator’s boundary - there is no external transfer to govern.
A.5.33 - Protection of Records. Records must be protected against loss, destruction, falsification, unauthorized access, and unauthorized release. Parmana’s cryptographic signatures protect against falsification. Storage and access controls are the operator’s responsibility, within the scope of the ISO 27001 assessment.
Producing Compliance Evidence
For a regulatory examination, the evidence package for any Parmana-governed decision consists of:
- The attestation - the ExecutionAttestation JSON record for the specific decision
- The public key - the Ed25519 public key corresponding to the signing key in use at decision time
- The policy bundle - the compiled policy bundle (content-addressed), identified by the policyId and policyVersion in the attestation
- The runtime manifest - the runtime version information corresponding to the runtimeHash in the attestation

With this package, an examiner can:
- Verify that the attestation signature is valid (proving the record is unmodified)
- Identify the exact policy version that governed the decision
- Confirm the exact inputs that were evaluated
- Confirm the exact outcome and reason
- Confirm the exact runtime version that produced the result
Conclusion
Regulatory requirements for AI governance share a common structure: a complete audit trail that is tamper-evident, independently verifiable, durable over the long term, and explainable. Parmana’s ExecutionAttestation satisfies all five requirements simultaneously.
The model is not compliance theater. It is cryptographic evidence - the same class of evidence that courts accept for digital signatures, that financial regulators accept for electronic records, and that security frameworks require for integrity controls.
The shift from “we have a policy” to “we can prove we followed it” requires infrastructure-grade governance. Parmana is that infrastructure.
See Also
- Governance as Infrastructure - the architectural case
- Trust Portability - independent verification in detail
- Portable Verification - the verification model
- Bring Your Own Infrastructure - data sovereignty and infrastructure control
- Production Checklist - compliance-ready deployment requirements