60-Second Quickstart
Goal: Your first bias audit in under 60 seconds.
The Fundamentals: From Risk to Code
Building High-Risk AI requires a fundamental shift in how we approach testing. It is no longer enough to check for technical accuracy (e.g., an F1 score); we must also demonstrate, mathematically, that the system meets requirements such as non-discrimination and data quality, as mandated by the EU AI Act.
Venturalitica automates this by treating “Assurance” as a dependency. Instead of vague legal requirements, you define strict policies (OSCAL) that your model must pass before it can be deployed. This turns compliance into a deterministic engineering problem.
The Translation Layer:

- Fundamental Risk: “The model must not discriminate against protected groups” (Art 9).
- Policy Control: “Disparate Impact Ratio must be > 0.8”.
- Code Assertion: assert calculated_metric > 0.8.
When you run quickstart(), you are technically running a Unit Test for Ethics.
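To make that translation concrete, here is a minimal, self-contained sketch of the last step, using toy data and plain pandas rather than the Venturalitica API (the column names and sample values are invented for illustration):

```python
import pandas as pd

# Toy data (invented for illustration): loan decisions by gender.
df = pd.DataFrame({
    "approved": [1, 1, 1, 0, 1, 1, 0, 1],
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
})

# Disparate Impact Ratio (the "80% rule"): the favorable-outcome rate of
# the least-favored group divided by that of the most-favored group.
rates = df.groupby("gender")["approved"].mean()
disparate_impact = rates.min() / rates.max()

# The policy control "Disparate Impact Ratio must be > 0.8" becomes a
# unit-test-style assertion (this toy data yields 1.0, so it passes).
assert disparate_impact > 0.8, f"80% rule violated: {disparate_impact:.3f}"
```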
Step 1: Install
```bash
pip install venturalitica
```

Step 2: Run Your First Audit
```python
import venturalitica as vl

vl.quickstart('loan')
```

Output:
```text
[Venturalítica] Scenario: Fairness Audit loan_scoring_v2
[Venturalítica] Loaded: UCI Dataset #144 (1000 samples)

CONTROL                DESCRIPTION        ACTUAL   LIMIT   RESULT
──────────────────────────────────────────────────────────────────
credit-data-imbalance  Data Quality       0.429    > 0.2   PASS
credit-data-bias       Disparate impact   0.818    > 0.8   PASS
credit-age-disparate   Age disparity      0.286    > 0.5   FAIL
──────────────────────────────────────────────────────────────────
Audit Summary: VIOLATION | 2/3 controls passed
```

Step 3: What’s Happening Under the Hood
The quickstart() function is a wrapper that performs the full compliance lifecycle in one go:
- Downloads Data: Fetches the UCI German Credit dataset.
- Loads Policy: Uses a built-in OSCAL policy that defines fairness rules (thresholds, metrics, protected attributes).
- Enforces: Runs the audit (vl.enforce).
- Records: Captures the evidence (trace.json) for the dashboard.
Here’s what the equivalent “manual” flow looks like. In the Full Lifecycle guide you will write your own OSCAL policies and run this yourself:
```python
from ucimlrepo import fetch_ucirepo
import venturalitica as vl

# 1. Load Data (The "Risk Source")
dataset = fetch_ucirepo(id=144)
df = dataset.data.features
df['class'] = dataset.data.targets

# 2. Define the Policy (The "Law")
# quickstart() uses a built-in policy dict.
# In real projects, you write your own OSCAL YAML file.
# See the Full Lifecycle guide for a copy-paste example.

# 3. Run the Audit (The "Test")
# This automatically generates the Evidence Bill of Materials (BOM)
with vl.monitor("manual_audit"):
    vl.enforce(
        data=df,
        target="class",        # The outcome (True/False)
        gender="Attribute9",   # Protected Group A
        age="Attribute13",     # Protected Group B
        policy="data_policy.oscal.yaml",  # Your OSCAL policy file
    )
```

The Policy Logic
The OSCAL policy is the bridge between law and code. It tells the SDK what to check so you don’t have to hardcode it.
```yaml
# ... inside your OSCAL policy YAML ...
- control-id: credit-data-bias
  description: "Disparate impact ratio must be > 0.8 (80% rule)"
  props:
    - name: metric_key
      value: disparate_impact   # <--- The Python function to call
    - name: threshold
      value: "0.8"              # <--- The limit to enforce
    - name: operator
      value: ">"                # <--- The logic (> 0.8)
    - name: "input:dimension"
      value: gender             # <--- Maps to "Attribute9"
```

This design decouples Assurance (the policy file) from Engineering (the Python code).
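To see why that decoupling works, here is a hypothetical sketch of the dispatch pattern such a policy enables. None of these names (METRICS, OPERATORS, evaluate_control) come from the Venturalitica SDK; they only illustrate how a declarative control can resolve to a Python call at runtime:

```python
import operator
import pandas as pd

# Hypothetical metric registry: metric_key -> callable returning a float.
METRICS = {
    "disparate_impact": lambda df, dim, target: (
        df.groupby(dim)[target].mean().min()
        / df.groupby(dim)[target].mean().max()
    ),
}

# Map the policy's operator strings onto Python comparisons.
OPERATORS = {">": operator.gt, ">=": operator.ge, "<": operator.lt}

def evaluate_control(df: pd.DataFrame, control: dict) -> tuple[float, bool]:
    """Resolve one declarative control into an (actual, passed) result."""
    metric_fn = METRICS[control["metric_key"]]
    actual = metric_fn(df, control["dimension"], control["target"])
    passed = OPERATORS[control["operator"]](actual, float(control["threshold"]))
    return actual, passed

# The control above, parsed from YAML, with "input:dimension" already
# mapped to its concrete column ("Attribute9" in the German Credit data):
control = {
    "metric_key": "disparate_impact",
    "operator": ">",
    "threshold": "0.8",
    "dimension": "Attribute9",
    "target": "class",
}
```

Under this pattern, tightening a threshold or swapping a metric is a policy edit, not a code change.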
Why This Matters
Without this mechanism, your AI model is a legal “Black Box”:
- Liability: You cannot prove you checked for bias before deployment (Art 9).
- Fragility: Compliance is a manual checklist, easily forgotten or skipped.
- Opacity: Auditors cannot see the link between your code and the law.
By running quickstart(), you have just generated an immutable Compliance Artifact. Even if the laws change, your evidence remains.
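If you want to inspect the artifact directly, a quick sanity check is to pretty-print it. This assumes only that trace.json (the filename mentioned above) is plain JSON in your working directory; the exact schema is not documented here, so the snippet makes no further assumptions:

```python
import json
from pathlib import Path

# Load the evidence artifact produced by the audit run and pretty-print it.
trace = json.loads(Path("trace.json").read_text())
print(json.dumps(trace, indent=2))
```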
Step 4: The “Glass Box” Dashboard
Now that we have the evidence (the “Black Box” recording), let’s inspect it in the Regulatory Map.
```bash
venturalitica ui
```

Navigate through the Compliance Map tabs:
- Article 9 (Risk): See the failed credit-age-disparate control. This is your technical evidence of “Risk Monitoring”.
- Article 10 (Data): See the data distribution and quality checks.
- Article 13 (Transparency): Review the “Transparency Feed” to see your Python dependencies (BOM).
Step 5: Generate Documentation (Annex IV)
The final step is to turn this evidence into a legal document.
- In the Dashboard, go to the “Generation” tab.
- Select “English” (or Spanish/Catalan/Euskera).
- Click “Generate Annex IV”.
Venturalitica will draft a technical document that references your specific run:
“As evidenced in trace_quickstart_loan.json, the system was audited against [OSCAL Policy: Credit Scoring Fairness]. A deviation was detected in Age Disparity (0.286), identifying a potential risk of bias…”
References
- Policy Used: loan/risks.oscal.yaml
- Legal Basis: EU AI Act (Articles 9, 10, and 13; Annex IV)
What’s Next?
- API Reference — Full function signatures and parameters
- Policy Authoring Guide — Write your own OSCAL policies from scratch
- Metrics Reference — All 35+ available metrics
- Venturalitica Academy — Guided learning path from Engineer to Architect