Debug Logging (v0.4 SCM)

ZeroProofML v0.4 keeps logging lightweight: the core trainer exposes a per-step log_hook callback, and most examples rely on standard Python logging / printing.

Python logging basics

import logging
logging.basicConfig(level=logging.INFO)
logging.getLogger("zeroproof").setLevel(logging.INFO)

Capturing per-step metrics from SCMTrainer

zeroproof.training.trainer.TrainingConfig accepts a log_hook(metrics) callable. The trainer calls it with a small dict containing at least:

  • loss (float)
  • coverage (float; fraction of non-⊥ predictions)

and may also include:

  • tau_train (float; sampled threshold for the current step)
  • bottom_frac (float; 1 - coverage)
  • denom_abs_min, denom_abs_mean (floats; available when the model outputs a projective (P, Q) tuple)
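As a concrete illustration, a hook can be a plain function over that dict. The sketch below uses a hand-built metrics dict with the key names listed above; the summarize helper is hypothetical, not part of the library:

```python
def summarize(metrics):
    # loss and coverage are guaranteed; bottom_frac may be absent,
    # in which case it is derived as 1 - coverage.
    bottom_frac = metrics.get("bottom_frac", 1.0 - metrics["coverage"])
    return (
        f"loss={metrics['loss']:.4f} "
        f"coverage={metrics['coverage']:.2%} "
        f"bottom={bottom_frac:.2%}"
    )

def log_hook(metrics):
    print(summarize(metrics))

# Hypothetical metrics dict shaped like the one SCMTrainer passes per step.
log_hook({"loss": 0.42, "coverage": 0.95})
```

Any callable with this signature can be passed as TrainingConfig(log_hook=...).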

JSONL logger

from zeroproof.training.trainer import TrainingConfig
from zeroproof.utils.logging import JsonlLogger

cfg = TrainingConfig(log_hook=JsonlLogger("runs/scm_train_metrics.jsonl"))

TensorBoard logger

from zeroproof.training.trainer import TrainingConfig
from zeroproof.utils.logging import TensorBoardLogger

cfg = TrainingConfig(log_hook=TensorBoardLogger("runs/tensorboard"))

Loading JSONL logs

Read the JSONL file back into memory:

from zeroproof.utils.logging import read_jsonl

records = read_jsonl("runs/scm_train_metrics.jsonl")

If you have pandas installed, load it as a DataFrame:

from zeroproof.utils.logging import jsonl_to_dataframe

df = jsonl_to_dataframe("runs/scm_train_metrics.jsonl")
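For a quick sanity check without pandas, the records can be summarized in plain Python. This sketch assumes records is a list of per-step dicts (as read_jsonl would return) containing the loss and coverage keys described earlier; the values here are made up for illustration:

```python
# Hypothetical per-step records, one dict per training step.
records = [
    {"loss": 1.2, "coverage": 0.80},
    {"loss": 0.7, "coverage": 0.92},
    {"loss": 0.5, "coverage": 0.97},
]

final = records[-1]
mean_cov = sum(r["coverage"] for r in records) / len(records)
print(f"final loss={final['loss']}, mean coverage={mean_cov:.3f}")
```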

Minimal manual example (JSONL)

import json
from pathlib import Path

from zeroproof.training.trainer import TrainingConfig, SCMTrainer

log_path = Path("runs/scm_train_metrics.jsonl")
log_path.parent.mkdir(parents=True, exist_ok=True)

def log_hook(metrics):
    # Coerce tensor/scalar values to plain floats so they are JSON-serialisable.
    record = {k: (float(v) if hasattr(v, "__float__") else v) for k, v in metrics.items()}
    with log_path.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

cfg = TrainingConfig(log_hook=log_hook)
# trainer = SCMTrainer(..., config=cfg)
# trainer.fit()

Debugging propagation

  • For Torch layers, inspect bottom_mask directly (e.g., from SCMRationalLayer.forward).
  • For vectorised backends (zeroproof.scm.ops), log both (payload, mask); the payload is intentionally zeroed at mask=True positions to keep tensor math stable.
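A minimal sketch of what logging a (payload, mask) pair looks like, using plain Python lists as stand-ins for backend tensors (the shapes and variable names are illustrative, not the library's API):

```python
# Illustrative (payload, mask) pair: mask=True marks ⊥ (bottom) positions,
# where the payload has been zeroed to keep tensor math stable.
payload = [0.5, 0.0, -1.3, 0.0]
mask = [False, True, False, True]

bottom_frac = sum(mask) / len(mask)
# Sanity check: payload should be exactly zero wherever the mask is set.
zeroed_ok = all(p == 0.0 for p, m in zip(payload, mask) if m)
print(f"bottom_frac={bottom_frac:.2f} zeroed_ok={zeroed_ok}")
```

Logging both halves of the pair (rather than the payload alone) is what makes ⊥ propagation visible: a zero payload is ambiguous on its own, but mask=True disambiguates it.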