Compliance Mapping: EU AI Act & ISO 42001

This document maps Venturalitica SDK capabilities to the EU AI Act articles and ISO/IEC 42001 controls relevant to high-risk AI systems.


Article 9: Risk Management System

Requirement: Establish a risk management system throughout the AI system lifecycle.

| SDK Capability | How It Fulfills Art 9 |
| --- | --- |
| OSCAL policy files | Risk controls codified as machine-readable rules |
| `enforce()` | Automated risk evaluation against defined controls |
| Dashboard Phase 1 | System identity and risk context documentation |
| Dashboard Phase 2 | Visual policy editor for risk control definition |

Example: Define a risk control that checks age disparity:

```yaml
- control-id: credit-age-disparate
  description: "Age disparate impact ratio > 0.5"
  props:
    - name: metric_key
      value: disparate_impact
    - name: threshold
      value: "0.50"
    - name: operator
      value: gt
    - name: "input:dimension"
      value: age
```
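The control above is just data; evaluating it reduces to a threshold comparison. Here is a minimal sketch of that evaluation logic, assuming the prop names shown (`metric_key`, `threshold`, `operator`) and keeping only the props the comparison needs — the real `enforce()` implementation is not shown in this document:

```python
# Hypothetical sketch of evaluating a threshold control like the one above.
# The real enforce() logic may differ; only the OSCAL prop shape is mirrored.
import operator

OPS = {"gt": operator.gt, "ge": operator.ge, "lt": operator.lt, "le": operator.le}

def evaluate_control(control: dict, metrics: dict) -> bool:
    """Return True if the measured metric satisfies the control's threshold."""
    props = {p["name"]: p["value"] for p in control["props"]}
    measured = metrics[props["metric_key"]]
    return OPS[props["operator"]](measured, float(props["threshold"]))

control = {
    "control-id": "credit-age-disparate",
    "props": [
        {"name": "metric_key", "value": "disparate_impact"},
        {"name": "threshold", "value": "0.50"},
        {"name": "operator", "value": "gt"},
    ],
}

print(evaluate_control(control, {"disparate_impact": 0.62}))  # True: 0.62 > 0.50
```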

Article 10: Data and Data Governance

Requirement: Training, validation, and testing data sets shall be relevant, representative, free of errors, and complete.

| SDK Capability | How It Fulfills Art 10 |
| --- | --- |
| `class_imbalance` metric | Checks minority class representation |
| `disparate_impact` metric | Checks group-level selection rates |
| `data_completeness` metric | Measures missing values |
| `k_anonymity`, `l_diversity`, `t_closeness` | Privacy-preserving data quality |
| Data policy pattern | Separate `data_policy.oscal.yaml` for pre-training checks |

Key metrics for Art 10:

| Metric | Art 10 Clause | Purpose |
| --- | --- | --- |
| `class_imbalance` | 10.3 (representative) | Ensure minority classes are not erased |
| `disparate_impact` | 10.2(f) (bias examination) | Four-fifths rule across groups |
| `data_completeness` | 10.3 (free of errors) | Detect missing data |
| `group_min_positive_rate` | 10.3 (representative) | Minimum positive rate per group |
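The first two metrics in the table can be computed by hand for intuition. A sketch under common conventions — the SDK's exact formulas may differ:

```python
# Hand-rolled versions of two Art 10 data checks; toy numbers, common
# formula conventions (not necessarily the SDK's documented definitions).

def disparate_impact(selected_by_group: dict) -> float:
    """Ratio of lowest to highest group selection rate (four-fifths rule)."""
    rates = [sel / total for sel, total in selected_by_group.values()]
    return min(rates) / max(rates)

def class_imbalance(labels: list) -> float:
    """Share of the minority class in a binary label column."""
    positives = sum(labels)
    return min(positives, len(labels) - positives) / len(labels)

# (selected, total) per age group -- illustrative numbers only
groups = {"under_40": (30, 100), "over_40": (18, 100)}
print(disparate_impact(groups))             # 0.18 / 0.30 = 0.6, fails a 0.8 threshold
print(class_imbalance([1, 0, 0, 0, 1, 0]))  # 2 / 6, about 0.33
```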

Article 11: Technical Documentation (Annex IV)

Requirement: Technical documentation shall be drawn up before the AI system is placed on the market.

| SDK Capability | How It Fulfills Art 11 |
| --- | --- |
| `monitor()` trace files | Automatic evidence collection (code, data, hardware) |
| Evidence hash (SHA-256) | Cryptographic proof of execution integrity |
| Dashboard Phase 4 | Annex IV document generation via LLM |
| BOM probe | Software bill of materials for reproducibility |

Evidence files produced:

```
.venturalitica/
├── trace_<session>.json   # Execution trace with AST analysis
├── results.json           # Compliance results per control
└── Annex_IV.md            # Generated documentation (Phase 4)
```
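The SHA-256 evidence hash amounts to recording a digest of each evidence file and re-checking it at audit time. A self-contained sketch — the SDK's actual hashing scheme and file fields are assumptions, only the SHA-256 mechanism is taken from the table above:

```python
# Sketch of the evidence integrity check: hash a trace file when it is
# produced, then verify the digest later. Any tampering breaks the match.
import hashlib
import json
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

workdir = Path(tempfile.mkdtemp())
trace = workdir / "trace_demo.json"               # toy stand-in for trace_<session>.json
trace.write_text(json.dumps({"session": "demo", "controls": []}))

recorded = sha256_of(trace)            # stored alongside the evidence
assert sha256_of(trace) == recorded    # untouched file verifies

trace.write_text(trace.read_text() + " ")  # any modification...
assert sha256_of(trace) != recorded        # ...breaks the verification
print("evidence hash:", recorded)
```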

Article 13: Transparency

Requirement: High-risk AI systems shall be designed to ensure their operation is sufficiently transparent.

| SDK Capability | How It Fulfills Art 13 |
| --- | --- |
| Glass Box method | Full execution trace, not just results |
| AST code analysis | Records which functions were called |
| Data fingerprinting | SHA-256 of input data at runtime |
| Artifact probe | Hash of policy files used |
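The "records which functions were called" row can be illustrated with Python's standard `ast` module: parse a code snippet and collect every call site. This is a conceptual sketch, not the SDK's trace format:

```python
# Conceptual sketch of AST-based call recording using the stdlib ast module.
import ast

SOURCE = """
model.fit(X_train, y_train)
preds = model.predict(X_test)
enforce(prediction=preds)
"""

def called_functions(source: str) -> list:
    """List the names of all functions/methods invoked in a code snippet."""
    calls = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            f = node.func
            calls.append(f.attr if isinstance(f, ast.Attribute) else f.id)
    return calls

print(called_functions(SOURCE))  # ['fit', 'predict', 'enforce']
```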

Article 15: Accuracy, Robustness, and Cybersecurity

Requirement: High-risk AI systems shall achieve an appropriate level of accuracy, robustness, and cybersecurity.

| SDK Capability | How It Fulfills Art 15 |
| --- | --- |
| `accuracy_score`, `precision_score`, `recall_score`, `f1_score` | Performance metrics |
| `demographic_parity_diff`, `equal_opportunity_diff` | Fairness metrics on model predictions |
| Model policy pattern | Separate `model_policy.oscal.yaml` for post-training checks |
| Hardware probe | CPU, RAM, GPU monitoring for robustness evidence |
| Carbon probe | Energy consumption tracking |

Key metrics for Art 15:

| Metric | Art 15 Clause | Purpose |
| --- | --- | --- |
| `accuracy_score` | 15.1 (accuracy) | Model achieves minimum accuracy |
| `demographic_parity_diff` | 15.3 (non-discrimination) | Prediction rates are fair |
| `equalized_odds_ratio` | 15.3 (non-discrimination) | Error rates are equitable |
| `counterfactual_fairness` | 15.3 (non-discrimination) | Causal fairness analysis |
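Two of the fairness metrics above can be hand-rolled for intuition. The formulas here (absolute gaps over binary predictions) are common conventions, not necessarily the SDK's documented definitions:

```python
# Hand-rolled sketches of demographic parity and equal opportunity gaps
# between two groups; toy data, assumed (conventional) formulas.

def positive_rate(preds: list) -> float:
    return sum(preds) / len(preds)

def demographic_parity_diff(preds_a: list, preds_b: list) -> float:
    """Gap in positive-prediction rates between two groups."""
    return abs(positive_rate(preds_a) - positive_rate(preds_b))

def equal_opportunity_diff(preds_a, labels_a, preds_b, labels_b) -> float:
    """Gap in true-positive rates (recall) between two groups."""
    def tpr(preds, labels):
        hits = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
        return hits / sum(labels)
    return abs(tpr(preds_a, labels_a) - tpr(preds_b, labels_b))

preds_a, labels_a = [1, 1, 0, 1], [1, 1, 1, 0]   # group A
preds_b, labels_b = [1, 0, 0, 0], [1, 1, 0, 0]   # group B

print(demographic_parity_diff(preds_a, preds_b))                      # 0.75 - 0.25 = 0.5
print(equal_opportunity_diff(preds_a, labels_a, preds_b, labels_b))   # |2/3 - 1/2|
```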

ISO/IEC 42001 Mapping

ISO 42001 defines an AI Management System (AIMS) framework. Venturalitica maps to the following control areas:

| ISO 42001 Control | Description | SDK Mapping |
| --- | --- | --- |
| A.2 AI Policy | Organization-level AI policy | OSCAL policy files define machine-readable policies |
| A.4 AI Risk Assessment | Identify and assess AI risks | `enforce()` evaluates controls; Dashboard Phase 2 visualizes risks |
| A.5 AI Risk Treatment | Implement controls to mitigate risks | OSCAL controls with thresholds implement risk treatment |
| A.6 AI System Impact Assessment | Assess impact on individuals/groups | Fairness metrics (`disparate_impact`, `demographic_parity_diff`) |
| A.7 Data for AI Systems | Data quality management | Data policy pattern + data quality metrics |
| A.8 AI System Documentation | Document AI system lifecycle | `monitor()` traces + Dashboard Phase 4 (Annex IV generation) |
| A.9 AI System Performance | Monitor system performance | Performance metrics + `monitor()` evidence collection |
| A.10 Third-party and Customer Relations | Supply chain transparency | BOM probe captures all dependencies |

Clause 6: Planning

| ISO 42001 Clause | Description | SDK Mapping |
| --- | --- | --- |
| 6.1 Risk assessment | Determine risks and opportunities | OSCAL policy defines measurable risk thresholds |
| 6.2 AI objectives | Set measurable objectives | Each OSCAL control is a measurable objective with pass/fail |

Clause 9: Performance Evaluation

| ISO 42001 Clause | Description | SDK Mapping |
| --- | --- | --- |
| 9.1 Monitoring | Monitor AI system performance | `enforce()` + `monitor()` provide continuous evaluation |
| 9.2 Internal audit | Audit the AIMS | Evidence traces provide audit trail |
| 9.3 Management review | Review AIMS effectiveness | Dashboard provides visual review interface |

Clause 10: Improvement

| ISO 42001 Clause | Description | SDK Mapping |
| --- | --- | --- |
| 10.1 Nonconformity | Handle control failures | `enforce()` flags failures; `strict=True` raises exceptions |
| 10.2 Continual improvement | Improve the AIMS | Version policies, re-run audits, track improvement over time |
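Clause 10.1's two behaviors — flag failed controls, or raise in strict mode — can be sketched with a toy stand-in. The exception name `ComplianceError` is hypothetical; the SDK's actual exception type is not documented here:

```python
# Toy illustration of flag-vs-raise handling of failed controls.

class ComplianceError(Exception):
    """Hypothetical exception name; stands in for the SDK's strict-mode error."""

def flag_failures(results: dict, strict: bool = False) -> list:
    """Return the ids of failed controls; raise instead when strict=True."""
    failed = [cid for cid, passed in results.items() if not passed]
    if failed and strict:
        raise ComplianceError(f"Controls failed: {failed}")
    return failed

print(flag_failures({"di-check": False, "acc-check": True}))  # ['di-check']
try:
    flag_failures({"di-check": False}, strict=True)
except ComplianceError as err:
    print(err)  # Controls failed: ['di-check']
```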

The Two-Policy Pattern and Regulatory Mapping

Venturalitica’s two-policy pattern maps directly to the regulatory structure:

| Regulation | Policy File | SDK Function | Phase |
| --- | --- | --- | --- |
| Art 10 (Data) | `data_policy.oscal.yaml` | `enforce(target=...)` | Pre-training |
| Art 15 (Model) | `model_policy.oscal.yaml` | `enforce(prediction=...)` | Post-training |
| Art 11 (Docs) | (generated) | Dashboard Phase 4 | Reporting |
| Art 9 (Risk) | (both policies) | All of the above | Continuous |
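The pre-/post-training split above can be sketched with a toy policy evaluator. `evaluate_policy()` here is a stand-in, not the SDK's `enforce()`, and the policy dictionaries are simplified stand-ins for the OSCAL files:

```python
# Toy stand-in for the two-policy flow: one policy checked on data metrics
# before training, another on model metrics after training.

def evaluate_policy(policy: dict, metrics: dict) -> dict:
    """Return pass/fail per control id for whichever metrics the phase produced."""
    return {
        control["id"]: metrics[control["metric_key"]] >= control["threshold"]
        for control in policy["controls"]
    }

data_policy = {"controls": [{"id": "di-check", "metric_key": "disparate_impact", "threshold": 0.8}]}
model_policy = {"controls": [{"id": "acc-check", "metric_key": "accuracy_score", "threshold": 0.85}]}

pre = evaluate_policy(data_policy, {"disparate_impact": 0.91})    # pre-training
post = evaluate_policy(model_policy, {"accuracy_score": 0.88})    # post-training
print(pre, post)  # {'di-check': True} {'acc-check': True}
```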

Complete Evidence Chain

A complete compliance audit produces the following evidence chain:

| Evidence | EU AI Act | ISO 42001 | File |
| --- | --- | --- | --- |
| Policy definition | Art 9 | A.2, A.5 | `*.oscal.yaml` |
| Data quality results | Art 10 | A.7 | `results.json` |
| Model fairness results | Art 15 | A.6, A.9 | `results.json` |
| Execution trace | Art 13 | A.8 | `trace_*.json` |
| Software BOM | Art 15 | A.10 | `trace_*.json` (BOM section) |
| Hardware/carbon metrics | Art 15 | A.9 | `trace_*.json` (probes) |
| Technical documentation | Art 11 / Annex IV | A.8 | `Annex_IV.md` |