Level 1: The Engineer (Policy & Configuration)

Goal: Learn how to implement Controls that mitigate Risks.

Prerequisite: Zero to Pro (Academy Index)


In a formal Management System (ISO 42001), assurance follows a top-down flow:

  1. Risk Assessment: The Compliance Officer (CO) identifies a business risk (e.g., “Our lending AI might discriminate against elderly applicants, causing legal and reputational damage”).
  2. Control Definition: To mitigate this risk, the CO sets a Control (e.g., “The Age Disparity Ratio must always be > 0.5”).
  3. Technical Implementation: That’s your job. You take the CO’s requirement and turn it into the technical “Law” (Article 10: Data Assurance).

In the Zero to Pro quickstart, vl.quickstart('loan') FAILED:

credit-age-disparate Age disparity 0.286 > 0.5 FAIL

The Control successfully detected a Compliance Gap. The “Reality” of the data (0.286) violated the requirement set to mitigate the “Age Bias” risk.

If you lower the threshold to 0.3 just to make the test “pass,” you aren’t fixing the code — you are bypassing a security control and exposing the company to the original risk.

Your job is to translate the CO’s requirement into Code. Create a file named data_policy.oscal.yaml (or download it from GitHub).

The canonical format is assessment-plan. Here is the full policy with 3 controls — copy-paste this into your project:

assessment-plan:
  metadata:
    title: Credit Risk Assessment Policy (German Credit)
    version: "1.1"
  control-implementations:
    - description: Credit Scoring Fairness Controls
      implemented-requirements:
        # Control 1: Class Imbalance
        # "Rejected loans must be >= 20% of the dataset"
        - control-id: credit-data-imbalance
          description: >
            Data Quality: Minority class (rejected loans) should represent
            at least 20% of the dataset to avoid biased training.
          props:
            - name: metric_key
              value: class_imbalance
            - name: threshold
              value: "0.2"
            - name: operator
              value: gt
            - name: "input:target"
              value: target

        # Control 2: Gender Fairness (Four-Fifths Rule)
        # "Loan approvals must not favor one gender > 80%"
        - control-id: credit-data-bias
          description: >
            Pre-training Fairness: Disparate impact ratio should follow
            the standard '80% Rule' (Four-Fifths Rule).
          props:
            - name: metric_key
              value: disparate_impact
            - name: threshold
              value: "0.8"
            - name: operator
              value: gt
            - name: "input:target"
              value: target
            - name: "input:dimension"
              value: gender

        # Control 3: Age Fairness
        # "Loan approvals must not discriminate by age > 50%"
        - control-id: credit-age-disparate
          description: "Disparate impact ratio for raw age"
          props:
            - name: metric_key
              value: disparate_impact
            - name: threshold
              value: "0.50"
            - name: operator
              value: gt
            - name: "input:target"
              value: target
            - name: "input:dimension"
              value: age
Property          Purpose                                                Example
metric_key        Which metric to compute (from the Metrics Reference)   disparate_impact, class_imbalance, accuracy_score
threshold         The numeric boundary                                   "0.8"
operator          Comparison operator: gt, gte, lt, lte, eq              gt = greater than
input:target      Column containing ground truth labels                  target (resolved via Column Binding)
input:dimension   Protected attribute to slice by                        gender, age (resolved via Column Binding)
input:prediction  Column containing model predictions (model audits)     prediction
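
Under the hood, each control reduces to a single comparison: the measured metric against the threshold, using the declared operator. Here is a minimal sketch of those semantics (an illustration of the operator table above, not the library's actual implementation):

import operator

# Assumed semantics of the five policy operators (illustrative only).
OPS = {
    "gt": operator.gt,   # strictly greater than
    "gte": operator.ge,
    "lt": operator.lt,
    "lte": operator.le,
    "eq": operator.eq,
}

def control_passes(actual: float, op: str, threshold: float) -> bool:
    """A control passes when `actual <op> threshold` holds."""
    return OPS[op](actual, threshold)

print(control_passes(0.286, "gt", 0.5))  # False -> the quickstart FAIL
print(control_passes(0.818, "gt", 0.8))  # True  -> satisfies the 80% Rule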

Now, let’s run the audit with your policy file. Copy-paste this code block:

import venturalitica as vl
from venturalitica.quickstart import load_sample

# 1. Load the German Credit Dataset (built-in sample)
data = load_sample("loan")
print(f"Dataset: {data.shape[0]} rows, {data.shape[1]} columns")

# 2. Run the audit against your policy
results = vl.enforce(
    data=data,
    target="class",       # Ground truth column
    gender="Attribute9",  # "Personal status and sex" -> gender
    age="Attribute13",    # "Age in years" -> age
    policy="data_policy.oscal.yaml",
)

# 3. Print results
for r in results:
    status = "PASS" if r.passed else "FAIL"
    print(f"  {r.control_id:<25} {r.actual_value:.3f} {r.operator} {r.threshold} {status}")
Expected output:

Dataset: 1000 rows, 21 columns
  credit-data-imbalance     0.429 gt 0.2 PASS
  credit-data-bias          0.818 gt 0.8 PASS
  credit-age-disparate      0.286 gt 0.5 FAIL

Two controls pass, one fails. The age disparity ratio (0.286) is below the 0.5 threshold.
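
What does that ratio measure? A minimal sketch, assuming the standard disparate impact definition (the lowest group approval rate divided by the highest; the library's exact formula is documented in the Metrics Reference):

import pandas as pd

# Disparate impact under the assumed standard definition: the approval
# rate of the least-favored group divided by that of the most-favored.
def disparate_impact(df: pd.DataFrame, outcome: str, dimension: str) -> float:
    rates = df.groupby(dimension)[outcome].mean()
    return rates.min() / rates.max()

# Toy data: "young" applicants approved at 0.2 vs. 0.8 for "old".
toy = pd.DataFrame({
    "approved":  [1, 1, 1, 1, 0, 1, 0, 0, 0, 0],
    "age_group": ["old"] * 5 + ["young"] * 5,
})
print(disparate_impact(toy, "approved", "age_group"))  # 0.25 -> fails a 0.5 threshold

A ratio of 1.0 means perfectly equal approval rates across groups; 0.286 means the least-favored age group is approved at less than a third of the rate of the most-favored one.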

Notice what just happened:

  • Legal: “Be fair (> 0.5).” — Defined in your YAML policy by the Compliance Officer.
  • Dev: “Column Attribute13 means age.” — Defined in your Python call by the Engineer.

This mapping is the Handshake. You bridge the gap between messy DataFrames and rigid legal requirements. This is how you implement ISO 42001 without losing your mind in spreadsheets.

OSCAL Policy           Python Code                      DataFrame
+------------+         +----------------------+         +-------------+
| age        | ------> | age="Attribute13"    | ------> | Attribute13 |
| gender     | ------> | gender="Attribute9"  | ------> | Attribute9  |
| target     | ------> | target="class"       | ------> | class       |
+------------+         +----------------------+         +-------------+

See Column Binding for the full resolution algorithm.
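
As a rough mental model of that resolution (a hypothetical helper, not the library's API): the policy speaks in logical names, and the keyword arguments of vl.enforce() supply the physical column for each.

# Hypothetical sketch of binding resolution (illustrative only; see
# Column Binding for the real algorithm).
def resolve(logical_name: str, bindings: dict[str, str]) -> str:
    # Unbound names fall back to being treated as literal column names.
    return bindings.get(logical_name, logical_name)

bindings = {"target": "class", "gender": "Attribute9", "age": "Attribute13"}
assert resolve("age", bindings) == "Attribute13"          # policy name -> column
assert resolve("Attribute13", bindings) == "Attribute13"  # already physical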

The terminal output is evidence, but compliance needs professional reporting. Launch the local dashboard to visualize results:

venturalitica ui

Navigate to the Phase 3 (Verify & Evaluate) tab. You will see:

  • Green checks for the two passing controls
  • A red flag for credit-age-disparate with the measured value (0.286) vs. threshold (0.5)
  • A trace JSON file, saved automatically as local evidence

You have successfully prevented a non-compliant AI from reaching production by measuring risk against a verifiable standard.

  1. Policy as Code: Assurance is a .yaml file. It defines the Controls your system must pass.
  2. The Handshake: You define the Mapping (age=Attribute13). The Officer defines the Requirement (> 0.5). Neither can act alone.
  3. Treatment starts with Detection: The local failure is the signal necessary to start a formal ISO 42001 risk treatment plan. Don’t lower the threshold — fix the data.
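
A practical first step in that diagnosis is to slice the data yourself and see which groups drive the failure. A hedged sketch, continuing from the audit script above (the bracket edges are illustrative choices, and it assumes the German Credit convention that class == 1 encodes a good/approved loan):

import pandas as pd

# Hypothetical diagnostic: approval rate per age bracket, to locate the
# gap behind the failing disparate-impact ratio.
brackets = pd.cut(
    data["Attribute13"],                  # age in years (bound to `age`)
    bins=[18, 25, 40, 60, 100],           # illustrative bracket edges
    labels=["19-25", "26-40", "41-60", "60+"],
)
approved = data["class"] == 1             # assumption: 1 = good/approved
print(approved.groupby(brackets).mean())  # approval rate by bracket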

Next Step: The audit failed locally. How do we integrate this into an ML pipeline?

Go to Level 2: The Integrator (MLOps)