All classifiers, including state-of-the-art vision models, contain invariants induced by the geometry of their linear mappings. These invariants reside in the null space of the classifier head: inputs whose features differ only along null directions are equivalent, producing identical outputs.
SING (Semantic Interpretation of the Null-space Geometry) translates this hidden geometric behavior into human-interpretable semantics by mapping classifier features into a multimodal space and quantifying semantic drift. The framework supports single-image analysis and model-level comparisons.
SING consists of four steps:

1. Decompose the classifier-head weights with SVD to obtain the principal and null subspaces.

2. Map classifier features into a shared vision-language embedding space.

3. Construct equivalent features by null-space projection/manipulation while preserving the logits.

4. Quantify semantic drift and visualize the induced changes across attributes and classes.
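The four steps above can be sketched end-to-end with NumPy. Everything here is an illustrative placeholder, not the paper's actual pipeline: the head weights `W`, the feature `x`, and especially the random projection `M` standing in for a real multimodal encoder are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy classifier head: C classes, d-dimensional features (placeholder sizes).
C, d = 10, 64
W = rng.normal(size=(C, d))          # classifier-head weight matrix
x = rng.normal(size=d)               # feature vector for one input

# Step 1: SVD of the head. Rows of Vt beyond rank(W) span the null space of W.
U, S, Vt = np.linalg.svd(W, full_matrices=True)
r = int(np.sum(S > 1e-10))           # numerical rank (here r == C, since C < d)
V_null = Vt[r:]                      # (d - r) orthonormal null directions

# Step 2 (placeholder): map features into a shared vision-language space.
# A real run would use a multimodal encoder; a random projection stands in.
M = rng.normal(size=(32, d))
def embed(v):
    e = M @ v
    return e / np.linalg.norm(e)

# Step 3: build an equivalent feature by adding a null-space component.
# W @ x_eq == W @ x, so the classifier output is unchanged by construction.
delta = V_null.T @ rng.normal(size=d - r)
x_eq = x + delta
assert np.allclose(W @ x, W @ x_eq)

# Step 4: quantify semantic drift between the equivalent pair in the shared space.
drift = 1.0 - float(embed(x) @ embed(x_eq))
print(f"logit gap: {np.max(np.abs(W @ x - W @ x_eq)):.2e}, semantic drift: {drift:.3f}")
```

Because the perturbation lives entirely in the null space, the logits match to numerical precision while the embedding-space similarity can change freely; that gap is what the drift score measures.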

SING links classifier null-space geometry and the invariants it induces to human-readable semantic explanations through equivalent-pair analysis.

SING compares architectures by quantifying semantic leakage into null directions, revealing how well class semantics are preserved across the invariant space.
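One simple way to make "leakage into null directions" concrete is the fraction of feature energy that falls inside the head's null space. The function below is a hypothetical score on synthetic data, not the paper's metric; `W` and `X` are random placeholders.

```python
import numpy as np

def null_leakage(W, X, tol=1e-10):
    """Mean fraction of feature energy lying in the null space of head W.

    W: (C, d) classifier-head weights; X: (n, d) feature matrix. Higher values
    mean more of the representation is invisible to the classifier.
    """
    _, S, Vt = np.linalg.svd(W, full_matrices=True)
    V_null = Vt[int(np.sum(S > tol)):]              # orthonormal null directions
    null_energy = np.sum((X @ V_null.T) ** 2, axis=1)
    return float(np.mean(null_energy / np.sum(X ** 2, axis=1)))

rng = np.random.default_rng(1)
W = rng.normal(size=(10, 64))
X = rng.normal(size=(100, 64))
print(f"mean null-space leakage: {null_leakage(W, X):.3f}")
```

For isotropic random features the score concentrates near (d - C) / d; a trained model whose features align with the class directions would score much lower, which is what makes the number useful for cross-architecture comparison.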

The framework supports systematic class-level probing to reveal sensitivity to concepts and inspect semantic invariants.
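Class-level probing of this kind can be sketched as scanning the null directions and asking how far each one moves the embedding toward a given concept. The concept vector, projection `M`, and sizes below are all synthetic stand-ins for a real text-encoded concept such as "striped".

```python
import numpy as np

rng = np.random.default_rng(2)
C, d, k = 10, 64, 32
W = rng.normal(size=(C, d))
M = rng.normal(size=(k, d))                        # stand-in multimodal projection
concept = rng.normal(size=k)
concept /= np.linalg.norm(concept)                 # stand-in concept embedding

_, S, Vt = np.linalg.svd(W, full_matrices=True)
V_null = Vt[int(np.sum(S > 1e-10)):]

def concept_sensitivity(x, direction, eps=1.0):
    """Change in cosine similarity to the concept when moving eps along a null direction."""
    def sim(v):
        e = M @ v
        return float(e @ concept / np.linalg.norm(e))
    return sim(x + eps * direction) - sim(x)

x = rng.normal(size=d)
scores = [concept_sensitivity(x, u) for u in V_null]
print(f"most concept-sensitive null direction: {int(np.argmax(np.abs(scores)))}")
```

Ranking null directions this way singles out the invariants along which a class is most semantically fragile, even though none of them changes the prediction.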

Model-level and single-image workflows provide reproducible semantic diagnostics with both quantitative scores and qualitative visual evidence.
@misc{yadid2026sing,
  title={Make it SING: Analyzing Semantic Invariants in Classifiers},
  author={Harel Yadid and Meir Yossef Levi and Roy Betser and Guy Gilboa},
  year={2026},
  eprint={2603.14610},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2603.14610}
}