API Reference


Venturalitica exposes five public symbols. This page documents their exact signatures and behavior as of v0.5.0.

quickstart(scenario, verbose=True)

Run a pre-configured bias audit demo on a standard dataset. This is the fastest way to see the SDK in action.

import venturalitica as vl
results = vl.quickstart("loan")
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| scenario | str | (required) | Predefined scenario name: "loan", "hiring", "health". |
| verbose | bool | True | Print the structured compliance table to the console. |

Returns: List[ComplianceResult]

enforce(data=None, metrics=None, policy="risks.oscal.yaml", target="target", prediction="prediction", strict=False, **attributes)

The main entry point for auditing datasets and models against OSCAL policies.

def enforce(
    data=None,
    metrics=None,
    policy="risks.oscal.yaml",
    target="target",
    prediction="prediction",
    strict=False,
    **attributes,
) -> List[ComplianceResult]
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| data | DataFrame or None | None | Pandas DataFrame containing features, targets, and optionally predictions. |
| metrics | Dict[str, float] or None | None | Pre-computed metrics dict. Use this when you have already calculated your metrics externally. |
| policy | str, Path, or List | "risks.oscal.yaml" | Path to one or more OSCAL policy files. Pass a list to enforce multiple policies in a single call. |
| target | str | "target" | Name of the column containing ground truth labels. |
| prediction | str | "prediction" | Name of the column containing model predictions. |
| strict | bool | False | If True, missing metrics, unbound variables, and calculation errors raise exceptions instead of being skipped. Auto-enabled when CI=true or VENTURALITICA_STRICT=true. |
| **attributes | keyword args | — | Mappings for protected variables and dimensions. For example: gender="Attribute9", age="Attribute13". |

Returns: List[ComplianceResult]
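The strict-mode auto-enable rule described in the table can be approximated like this (a minimal sketch of the documented behavior; the SDK's actual internal check, including which truthy spellings it accepts, may differ):

```python
import os

def resolve_strict(strict: bool = False) -> bool:
    """Return True when strict mode is requested explicitly or via environment.

    Mirrors the documented rule: strict mode is auto-enabled when
    CI=true or VENTURALITICA_STRICT=true is set in the environment.
    """
    if os.environ.get("CI", "").lower() == "true":
        return True
    if os.environ.get("VENTURALITICA_STRICT", "").lower() == "true":
        return True
    return strict
```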

Mode 1: DataFrame-based (most common). Pass a DataFrame and let the SDK compute metrics automatically:

results = vl.enforce(
    data=df,
    target="class",
    prediction="prediction",
    gender="Attribute9",   # maps abstract 'gender' -> column 'Attribute9'
    age="Attribute13",     # maps abstract 'age' -> column 'Attribute13'
    policy="data_policy.oscal.yaml",
)

Mode 2: Pre-computed metrics. Pass a dict of already-calculated values:

results = vl.enforce(
    metrics={"accuracy_score": 0.92, "demographic_parity_diff": 0.07},
    policy="model_policy.oscal.yaml",
)
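For illustration, a demographic parity difference can be computed by hand before being passed in via metrics (a pure-Python sketch; in practice you would likely compute this with a fairness library or pandas):

```python
def demographic_parity_diff(predictions, groups):
    """Absolute gap in positive-prediction rates across groups."""
    rates = {}
    for g in set(groups):
        selected = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    return max(rates.values()) - min(rates.values())

preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
# group "a" rate = 0.75, group "b" rate = 0.25, so the gap is 0.5
metrics = {"demographic_parity_diff": demographic_parity_diff(preds, groups)}
```

The resulting dict can then be handed directly to vl.enforce(metrics=metrics, policy=...).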

When using DataFrame mode, the SDK resolves column names through a synonym system (see Column Binding):

  • target and prediction are resolved first via explicit parameters, then via synonym discovery.
  • **attributes (e.g., gender="Attribute9") are passed directly to metric functions as the dimension parameter.
  • If a column is not found, the SDK falls back to lowercase matching.
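The resolution order above can be sketched roughly as follows (a hypothetical helper for illustration; the SDK's real synonym discovery is more involved, see Column Binding):

```python
def resolve_column(name, columns, synonyms=()):
    """Resolve a logical column name against actual DataFrame columns.

    Order: exact match, then known synonyms, then case-insensitive fallback.
    Returns None when nothing matches.
    """
    if name in columns:
        return name
    for syn in synonyms:
        if syn in columns:
            return syn
    lowered = {c.lower(): c for c in columns}
    return lowered.get(name.lower())

cols = ["Class", "Prediction", "Attribute9"]
resolve_column("class", cols)                         # lowercase fallback -> "Class"
resolve_column("target", cols, synonyms=("Class",))   # synonym hit -> "Class"
```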

enforce() automatically caches results to .venturalitica/results.json and, if inside a monitor() session, to the session-specific evidence directory. Run venturalitica ui to visualize cached results.
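If you want to post-process the cache outside the UI, it can be read back as plain JSON. The exact schema of results.json is not specified on this page, so the field names below are an assumption based on the ComplianceResult fields:

```python
import json
from pathlib import Path

def load_cached_results(path=".venturalitica/results.json"):
    """Load cached enforce() results; return [] when no cache exists yet.

    Field names are assumed to mirror ComplianceResult (control_id, passed, ...).
    """
    cache = Path(path)
    if not cache.exists():
        return []
    return json.loads(cache.read_text())

failed = [r for r in load_cached_results() if not r.get("passed", True)]
```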

Pass a list to enforce several policies in one call:

results = vl.enforce(
    data=df,
    target="class",
    policy=["data_policy.oscal.yaml", "model_policy.oscal.yaml"],
    gender="Attribute9",
)

monitor(name, label=None, inputs=None, outputs=None)


A context manager that records multimodal telemetry during training or evaluation. Captures hardware, carbon, security, and audit evidence automatically.

@contextmanager
def monitor(
    name="Training Task",
    label=None,
    inputs=None,
    outputs=None,
)
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| name | str | "Training Task" | Human-readable name for this monitoring session. Used in trace filenames. |
| label | str or None | None | Optional label for categorization (e.g., "pre-training", "validation"). |
| inputs | List[str] or None | None | Paths to input artifacts (datasets, configs) for data lineage tracking. |
| outputs | List[str] or None | None | Paths to output artifacts (models, plots) for lineage tracking. |
with vl.monitor("credit_model_v1"):
    model.fit(X_train, y_train)
    vl.enforce(data=df, policy="policy.oscal.yaml", target="class")

monitor() initializes 7 probes automatically. See Probes Reference for details.

| Probe | What It Captures | EU AI Act Article |
| --- | --- | --- |
| IntegrityProbe | SHA-256 environment fingerprint, drift detection | Art. 15 |
| HardwareProbe | Peak RAM, CPU count | Art. 15 |
| CarbonProbe | CO2 emissions via CodeCarbon | Art. 15 |
| BOMProbe | Software Bill of Materials (SBOM) | Art. 13 |
| ArtifactProbe | Input/output data lineage | Art. 10 |
| HandshakeProbe | Whether enforce() was called inside the session | Art. 9 |
| TraceProbe | AST code analysis, timestamps, call context | Art. 11 |

Evidence is saved to .venturalitica/ or a session-specific directory.
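To illustrate the kind of fingerprint the IntegrityProbe row describes, an SHA-256 environment digest can be built from interpreter and platform details (a hypothetical sketch, not the probe's actual implementation):

```python
import hashlib
import platform
import sys

def environment_fingerprint(extra=()):
    """SHA-256 digest over interpreter and platform details.

    Comparing digests across runs gives a crude drift signal: any change
    in the Python version, OS, or the extra strings changes the digest.
    """
    parts = [sys.version, platform.platform(), *extra]
    return hashlib.sha256("\n".join(parts).encode()).hexdigest()

fp = environment_fingerprint(extra=("numpy==1.26.4",))
```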

wrap(model, policy)

Transparently audit your model during standard Scikit-learn workflows by hooking into .fit() and .predict().

| Parameter | Type | Description |
| --- | --- | --- |
| model | object | Any Scikit-learn compatible classifier or regressor. |
| policy | str | Path to the OSCAL policy for evaluation. |

Returns: AssuranceWrapper (preserves the original model API: .fit(), .predict(), etc.)

wrapped = vl.wrap(LogisticRegression(), policy="model_policy.oscal.yaml")
wrapped.fit(X_train, y_train)
preds = wrapped.predict(X_test) # Audit runs automatically
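The delegation pattern behind this can be sketched as a thin proxy that forwards the estimator API and runs an audit hook after each prediction batch (hypothetical; AssuranceWrapper's real internals are not shown on this page):

```python
class AuditingWrapper:
    """Minimal proxy: forwards .fit()/.predict() and audits each prediction batch."""

    def __init__(self, model, on_predict):
        self._model = model
        self._on_predict = on_predict  # audit callback, e.g. a call into enforce()

    def fit(self, X, y, **kwargs):
        self._model.fit(X, y, **kwargs)
        return self  # keep the sklearn-style chaining API

    def predict(self, X):
        preds = self._model.predict(X)
        self._on_predict(X, preds)  # audit runs automatically after predict
        return preds

    def __getattr__(self, name):
        # Preserve the rest of the wrapped model's API transparently.
        return getattr(self._model, name)
```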

PolicyManager

Programmatic access to OSCAL policy loading and manipulation.

from venturalitica import PolicyManager
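For orientation, a policy control conceptually carries the fields that later surface in ComplianceResult. The in-memory shape below is illustrative only; real policies are OSCAL YAML files following the OSCAL schema, and the helper function is a hypothetical sketch, not PolicyManager's API:

```python
import operator as op

# Illustrative in-memory shape of one policy control (field names assumed
# from ComplianceResult; real OSCAL files follow the OSCAL catalog schema).
policy = {
    "controls": [
        {
            "control_id": "credit-data-bias",
            "description": "Disparate impact must stay within the 4/5 rule.",
            "metric_key": "disparate_impact",
            "operator": ">=",
            "threshold": 0.8,
        }
    ]
}

def failing_controls(policy, metrics):
    """Return ids of controls whose metric is missing or violates its threshold."""
    ops = {">": op.gt, "<": op.lt, ">=": op.ge, "<=": op.le,
           "==": op.eq, "gt": op.gt, "lt": op.lt}
    failed = []
    for c in policy["controls"]:
        value = metrics.get(c["metric_key"])
        if value is None or not ops[c["operator"]](value, c["threshold"]):
            failed.append(c["control_id"])
    return failed
```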

ComplianceResult

Every call to enforce() returns a list of ComplianceResult dataclass instances:

| Field | Type | Description |
| --- | --- | --- |
| control_id | str | The control identifier from the policy (e.g., "credit-data-bias"). |
| description | str | Human-readable description of the control. |
| metric_key | str | The metric function used (e.g., "disparate_impact"). |
| actual | float | The computed metric value. |
| threshold | float | The policy-defined threshold. |
| operator | str | Comparison operator (">", "<", ">=", "<=", "==", "gt", "lt"). |
| passed | bool | Whether the control passed. |
for r in results:
    print(f"{r.control_id}: {r.actual:.3f} {'PASS' if r.passed else 'FAIL'}")
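For downstream tooling that only needs the shape of these records, the field table corresponds to a dataclass along these lines (a mirror for illustration; in real code, work with the instances returned by venturalitica itself):

```python
from dataclasses import dataclass

@dataclass
class ComplianceResultMirror:
    """Field-for-field mirror of the documented ComplianceResult shape."""
    control_id: str
    description: str
    metric_key: str
    actual: float
    threshold: float
    operator: str
    passed: bool

r = ComplianceResultMirror(
    control_id="credit-data-bias",
    description="Disparate impact within the 4/5 rule",
    metric_key="disparate_impact",
    actual=0.82,
    threshold=0.8,
    operator=">=",
    passed=True,
)
```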