# ZeroProofML v0.4 API Reference (Phase 9)
This page collects the public Python API for the signed common meadow (SCM) implementation described in this documentation set. The emphasis is on the functions and classes that are stable for the v0.4 release; experimental helpers remain internal.
## Stability map (stable vs experimental)
The table below is the stability contract for this documentation set. Anything not listed here should be treated as experimental/internal and may change without notice.
| Module | Symbols | Stability | Notes |
|---|---|---|---|
| `zeroproof.scm.value` | `SCMValue`, `scm_real`, `scm_complex`, `scm_bottom` | Stable | Core SCM value contract |
| `zeroproof.scm.ops` | `scm_add`, `scm_sub`, `scm_mul`, `scm_div`, `scm_inv`, `scm_neg`, `scm_pow`, `scm_log`, `scm_exp`, `scm_sqrt`, `scm_sin`, `scm_cos`, `scm_tan` | Stable | Scalar helpers (backend-specific variants may exist) |
| `zeroproof.autodiff.policies` | `GradientPolicy`, `gradient_policy`, `register_policy`, `apply_policy`, `apply_policy_vector` | Stable | Policy routing for gradients near ⊥ |
| `zeroproof.autodiff.graph` | `SCMNode`, `add`, `sub`, `mul`, `div`, `stop_gradient` | Experimental | Low-level graph scaffolding |
| `zeroproof.autodiff.projective` | `ProjectiveSample`, `encode`, `decode`, `renormalize`, `projectively_equal` | Stable | Projective tuple utilities |
| `zeroproof.training` | `lift_targets`, `AdaptiveSampler`, `TrainingConfig`, `SCMTrainer`, `LinearRamp`, `LossWeightsCurriculum` | Stable | Training entry points + curricula |
| `zeroproof.layers` | `AngularProjectiveHead`, `SCMRationalLayer`, `SCMNorm`, `SCMSoftmax` | Stable | Core layers used throughout examples/benchmarks |
| `zeroproof.inference` | `InferenceConfig`, `strict_inference`, `SCMInferenceWrapper`, `validate_bundle`, `generate_validation_report`, `decode_strict_censored_3way`, `StrictInferenceMonitor`, `strict_inference_rates`, `reject_on_bottom`, `reject_on_gap`, `safe_sentinel`, `route_to_analytic_solver`, `script_module`, `export_onnx_model`, `export_bundle` | Stable | Deployment/bundle/monitoring surfaces |
| `zeroproof.losses` | `implicit_loss`, `margin_loss`, `sign_consistency_loss`, `rejection_loss`, `SCMTrainingLoss` | Stable | SCM loss stack |
| `zeroproof.utils` | `from_ieee`, `to_ieee` | Stable | IEEE-754 bridge helpers |
## SCM Core (`zeroproof.scm`)

### `zeroproof.scm.value`

- `SCMValue`: immutable value container carrying either a numeric payload or the absorptive bottom `⊥`.
- `scm_real(x: float) -> SCMValue`: construct a finite SCM value from a real number.
- `scm_complex(z: complex) -> SCMValue`: construct a finite SCM value from a complex number.
- `scm_bottom() -> SCMValue`: return the absorptive bottom element.
### `zeroproof.scm.ops`

- Arithmetic helpers mirror common meadow semantics: `scm_add`, `scm_sub`, `scm_mul`, `scm_div`, `scm_inv`, `scm_neg`, `scm_pow`.
- Transcendental helpers respect the bottom element: `scm_log`, `scm_exp`, `scm_sqrt`, `scm_sin`, `scm_cos`, `scm_tan`.
- All functions accept `SCMValue` or plain Python scalars for ergonomic use in notebooks and examples.
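The two rules that matter in practice are that division by zero yields `⊥` and that `⊥` absorbs through every subsequent operation. A minimal pure-Python sketch of that contract (a hypothetical stand-in, not the `zeroproof.scm` implementation; `BOTTOM` is an assumed sentinel):

```python
# Sketch of SCM scalar semantics. BOTTOM is a hypothetical sentinel for
# the bottom element ⊥; the real library wraps payloads in SCMValue.
BOTTOM = object()

def scm_div(a, b):
    """Common-meadow division: x / 0 = ⊥, and ⊥ absorbs."""
    if a is BOTTOM or b is BOTTOM or b == 0:
        return BOTTOM
    return a / b

def scm_add(a, b):
    """Addition: ⊥ absorbs; otherwise ordinary addition."""
    if a is BOTTOM or b is BOTTOM:
        return BOTTOM
    return a + b

print(scm_div(1.0, 2.0))                          # 0.5
print(scm_div(1.0, 0.0) is BOTTOM)                # True: x / 0 yields ⊥
print(scm_add(3.0, scm_div(1.0, 0.0)) is BOTTOM)  # True: ⊥ is absorptive
```

Because `⊥` propagates unconditionally, a single division by zero poisons the whole downstream expression rather than raising an exception mid-computation.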
## Autodiff (`zeroproof.autodiff`)

### Gradient policies (`zeroproof.autodiff.policies`)

- `GradientPolicy`: enum of gradient handling strategies (`CLAMP`, `PROJECT`, `REJECT`, `PASSTHROUGH`).
- `gradient_policy(policy: GradientPolicy)`: context manager to override the active policy.
- `register_policy(layer: str, policy: GradientPolicy)`: register layer-specific defaults.
- `apply_policy(gradient: float, is_bottom: bool, policy: GradientPolicy | None = None) -> float`: transform a gradient according to the active policy.
- `apply_policy_vector(...)`: vectorised version of `apply_policy`.
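A hedged sketch of how policy routing behaves for samples flagged as `⊥` (the enum names follow the documented surface; the exact numerics, the `clamp_value` parameter, and the omission of `PROJECT` are assumptions for illustration):

```python
# Illustrative policy routing, not the zeroproof.autodiff.policies code.
# PROJECT is omitted here because its numerics are layer-specific.
from enum import Enum

class GradientPolicy(Enum):
    CLAMP = "clamp"              # bound the gradient magnitude near ⊥
    REJECT = "reject"            # zero the gradient for ⊥ samples
    PASSTHROUGH = "passthrough"  # leave the gradient untouched

def apply_policy(gradient, is_bottom, policy, clamp_value=1.0):
    # Gradients from finite samples pass through unchanged.
    if not is_bottom:
        return gradient
    if policy is GradientPolicy.REJECT:
        return 0.0
    if policy is GradientPolicy.CLAMP:
        return max(-clamp_value, min(clamp_value, gradient))
    return gradient  # PASSTHROUGH

print(apply_policy(5.0, True, GradientPolicy.CLAMP))    # 1.0
print(apply_policy(5.0, True, GradientPolicy.REJECT))   # 0.0
print(apply_policy(5.0, False, GradientPolicy.REJECT))  # 5.0
```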
### Computation graph (`zeroproof.autodiff.graph`)

- `SCMNode`: lightweight computation node carrying a forward `SCMValue`, autodiff metadata, and an `is_bottom` flag for policy routing.
- Constructors: `SCMNode.constant(value)`, `SCMNode.stop_gradient(node)`.
- Primitives: `add`, `sub`, `mul`, `div`, each propagating bottom semantics.
- Utilities: `backward(upstream=1.0, policy=None)`, `trace(depth=0)`.
- Functional helpers mirror the methods: `add`, `sub`, `mul`, `div`, `stop_gradient`.
### Projective tuples (`zeroproof.autodiff.projective`)

- `ProjectiveSample`: dataclass representing a homogeneous tuple `(N, D)`.
- `encode(value: SCMValue) -> ProjectiveSample`: lift an SCM value to projective coordinates (finite → `(x, 1)`, bottom → `(1, 0)`).
- `decode(sample: ProjectiveSample) -> SCMValue`: map a projective tuple back to SCM, emitting `⊥` when `D = 0`.
- `renormalize(numerator, denominator, gamma=1e-9, stop_gradient=None)`: detached renormalisation used in projective training (auto-detects Torch/JAX tensors and applies `detach`/`stop_gradient` to the norm).
- `projectively_equal(a, b, atol=1e-8) -> bool`: compare projective tuples up to scaling.
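The encode/decode contract round-trips finite values through `(x, 1)` and routes `⊥` through `(1, 0)`; equality up to scaling reduces to a vanishing cross product. A self-contained sketch (`BOTTOM` is an assumed sentinel; the real utilities work on `SCMValue` and tensors):

```python
# Illustrative projective tuple utilities, not the library code.
import math
from dataclasses import dataclass

BOTTOM = object()  # hypothetical sentinel for ⊥

@dataclass(frozen=True)
class ProjectiveSample:
    N: float
    D: float

def encode(value):
    if value is BOTTOM:
        return ProjectiveSample(1.0, 0.0)  # bottom lifts to (1, 0)
    return ProjectiveSample(float(value), 1.0)  # finite lifts to (x, 1)

def decode(sample):
    if sample.D == 0.0:
        return BOTTOM  # D = 0 decodes to ⊥
    return sample.N / sample.D

def projectively_equal(a, b, atol=1e-8):
    # (N1, D1) ~ (N2, D2) iff the cross product N1*D2 - N2*D1 vanishes.
    return math.isclose(a.N * b.D - b.N * a.D, 0.0, abs_tol=atol)

print(decode(encode(2.5)))               # 2.5
print(decode(encode(BOTTOM)) is BOTTOM)  # True
print(projectively_equal(ProjectiveSample(2.0, 4.0),
                         ProjectiveSample(1.0, 2.0)))  # True: same ray
```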
## Training helpers (`zeroproof.training`)

- `zeroproof.training.targets.lift_targets`: lift scalar targets to projective tuples for loss computation.
- `zeroproof.training.sampler.AdaptiveSampler`: wraps PyTorch dataloaders with singularity-aware sampling weights.
- `zeroproof.training.trainer.TrainingConfig`: config container for `SCMTrainer` (early-stop thresholds, AMP toggle, optional validation loader, logging hook, and gradient-policy override).
- `zeroproof.training.trainer.SCMTrainer`: v0.4 training loop integrating projective decoding, coverage tracking, and gradient policies.
- Optional curricula: `LinearRamp` and `LossWeightsCurriculum` for per-epoch loss-weight schedules (passed into loss functions that accept `loss_weights`).
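A per-epoch linear ramp is the simplest loss-weight schedule in this family. A hedged sketch in the spirit of `LinearRamp` (the class shape, parameter names, and clamping behaviour here are assumptions; only "per-epoch loss-weight schedule" comes from this page):

```python
# Illustrative linear ramp for a loss weight, not the library class.
class LinearRamp:
    def __init__(self, start, end, num_epochs):
        self.start, self.end, self.num_epochs = start, end, num_epochs

    def weight(self, epoch):
        """Interpolate from start to end over num_epochs, then hold."""
        t = min(max(epoch / max(self.num_epochs, 1), 0.0), 1.0)
        return self.start + t * (self.end - self.start)

ramp = LinearRamp(start=0.0, end=1.0, num_epochs=10)
print(ramp.weight(0))   # 0.0
print(ramp.weight(5))   # 0.5
print(ramp.weight(20))  # 1.0 (clamped past the end of the ramp)
```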
## Layers (`zeroproof.layers`)

- `AngularProjectiveHead`: unit-circle projective output head (`P = cos θ`, `Q = sin θ`) for stable strict gating.
- `SCMRationalLayer`, `SCMNorm`, `SCMSoftmax`: SCM-aware PyTorch layers.
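The angular parametrisation keeps the projective tuple on the unit circle, so it can never degenerate to `(0, 0)` and the strict gate can read `Q` directly. In plain math (the real layer is a PyTorch module; the function name here is illustrative):

```python
# Sketch of the angular head's output parametrisation: the network
# predicts an angle theta and emits P = cos(theta), Q = sin(theta).
import math

def angular_projective_head(theta):
    return math.cos(theta), math.sin(theta)

P, Q = angular_projective_head(math.pi / 4)
print(round(P**2 + Q**2, 12))  # 1.0: the tuple stays on the unit circle
# Q -> 0 as theta -> 0, which is how the head expresses a near-pole output.
```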
## Inference (`zeroproof.inference`)

- `InferenceConfig`: inference thresholds (`tau_infer`, optional `tau_train` for gap diagnostics).
- `strict_inference(...)`: strict SCM decode for Torch / NumPy / JAX projective tuples.
- `SCMInferenceWrapper`: wraps a `(P, Q)`-outputting model and applies strict decoding in eval mode.
- Bundle utilities: `validate_bundle(bundle_dir)` checks `model.onnx` + `metadata.json` consistency; `generate_validation_report(bundle_dir, benchmark_dir=...)` writes a lightweight Markdown report.
- Patterns and operations: `decode_strict_censored_3way(...)` implements a strict-gate + optional direction-head decode for 3-way censoring tasks; `StrictInferenceMonitor` and `strict_inference_rates(...)` help monitor bottom/gap/acceptance rates.
- Fallback helpers: `reject_on_bottom`, `reject_on_gap`, `safe_sentinel`, `route_to_analytic_solver`.
- Export helpers: `script_module(model)` for TorchScript; `export_onnx_model(model, example_inputs, output_path, ...)` for ONNX; `export_bundle(model, example_inputs, out_dir, ...)` for a `model.onnx` + `metadata.json` bundle.
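The strict decode pattern is a threshold gate on the denominator followed by a fallback policy. A hedged scalar sketch (the actual gate inside `strict_inference` may differ; `BOTTOM` and the `strict_decode` helper are assumed names):

```python
# Illustrative strict decode with a tau_infer gate, not the library code.
BOTTOM = object()  # hypothetical sentinel for ⊥

def strict_decode(N, D, tau_infer=1e-6):
    """Accept N / D only when |D| clears the threshold; else emit ⊥."""
    if abs(D) <= tau_infer:
        return BOTTOM
    return N / D

def safe_sentinel(value, sentinel=0.0):
    """Fallback: replace ⊥ with a caller-chosen sentinel value."""
    return sentinel if value is BOTTOM else value

print(strict_decode(1.0, 0.5))                 # 2.0: well inside the gate
print(strict_decode(1.0, 1e-9) is BOTTOM)      # True: gated out near the pole
print(safe_sentinel(strict_decode(1.0, 0.0)))  # 0.0: sentinel fallback fires
```

In production the same `⊥` outcome could instead be routed to `reject_on_bottom` or `route_to_analytic_solver`, depending on the deployment's tolerance for abstention.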
## Losses (`zeroproof.losses`)

- `implicit_loss`, `margin_loss`, `sign_consistency_loss`, and `rejection_loss` implement the objectives described in `04_loss_functions.md`.
- `SCMTrainingLoss` combines the individual terms with configurable weights (`λ_margin`, `λ_sign`, `λ_rej`).
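A plausible reading of the combination is a weighted sum of the four documented terms. A sketch under that assumption (the weight defaults and the function shape are illustrative; the individual loss values below are placeholders, not real outputs):

```python
# Illustrative weighted combination in the spirit of SCMTrainingLoss.
def scm_training_loss(l_implicit, l_margin, l_sign, l_rej,
                      lambda_margin=0.1, lambda_sign=0.1, lambda_rej=0.01):
    # Implicit loss anchors the objective; the other terms are scaled in.
    return (l_implicit
            + lambda_margin * l_margin
            + lambda_sign * l_sign
            + lambda_rej * l_rej)

total = scm_training_loss(1.0, 0.5, 0.2, 0.1)
print(round(total, 6))  # 1.071 = 1.0 + 0.05 + 0.02 + 0.001
```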
## How to read this page
The functions above intentionally mirror the conceptual contract: train on smooth, infer on strict. Use the docstrings in each module for parameter details and examples; this page is a signpost for the available surfaces.
## Utilities (`zeroproof.utils`)

- IEEE bridge: `from_ieee`, `to_ieee`.
- Plotting (optional): `zeroproof.utils.viz`.
- Log hooks (optional): `zeroproof.utils.logging`.
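NaN and ±inf have no finite SCM counterpart, so a natural bridge maps them to `⊥` on the way in and maps `⊥` back to NaN on the way out. A sketch under that assumption (the actual `from_ieee`/`to_ieee` helpers may choose different conventions; `BOTTOM` is an assumed sentinel):

```python
# Illustrative IEEE-754 bridge, not the zeroproof.utils implementation.
import math

BOTTOM = object()  # hypothetical sentinel for ⊥

def from_ieee(x):
    """Map non-finite IEEE floats (NaN, ±inf) to ⊥; pass finite through."""
    if math.isnan(x) or math.isinf(x):
        return BOTTOM
    return x

def to_ieee(v):
    """Map ⊥ back to NaN; pass finite values through."""
    return math.nan if v is BOTTOM else v

print(from_ieee(1.5))                     # 1.5
print(from_ieee(float("inf")) is BOTTOM)  # True
print(math.isnan(to_ieee(BOTTOM)))        # True
```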