Towards Generalized Representations for Low-Light Understanding: When Signal Constancy Meets Semantic Enrichment

Wangxuan Institute of Computer Technology, Peking University    

Figure 1. (a) Existing methods are constrained by biased light conditions and scenarios. By comparison, our UniPrior integrates unified signal/feature-level priors, enabling broader generalization. Building upon this, our TTA Enhancer further improves generalizability to test-time hard cases through sample-adaptive optimization. (b) Our method achieves state-of-the-art performance on multiple low-light understanding tasks by a large margin.


Abstract

Low-light degradation hampers machine understanding at night. Existing methods either overfit labeled data (paired supervision) or specific distributions (unpaired supervision), resulting in poor generalization under unseen degradations. In this paper, we propose UniPrior, a unified prior-based low-light adaptation framework that integrates the general semantic prior embedded in vision foundation models (VFMs) with illumination-invariant priors, capturing both stable and changing semantics under varied low-light degradation without any real low-light training data. Specifically, the illumination-invariant prior serves as an auxiliary input, and a parallel decoder reconstructs it as a regularization target, enforcing representation consistency and reducing feature drift. This signal constancy enables us to build a VFM-aligned semantic space via a contrastive training strategy guided by VFM self-correlation maps, enriching features with high-level cues and thereby improving adaptation to diverse low-light conditions. Beyond high-level features, we also jointly consider this unified prior and the low-level signal space through our machine-oriented enhancement scheme: we extend the signal prior to handle overexposure and inject VFM-guided semantic cues into the enhancement process via a CLIP-based loss. This coupling of semantic alignment and pixel correction enables sample-adaptive optimization that further improves performance. Extensive experiments on multiple low-light tasks demonstrate our method's superiority and practical utility.
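The signal-constancy idea above can be illustrated with a minimal sketch. Here we assume a simple per-pixel chromaticity map as the illumination-invariant prior (the paper's actual prior may differ); chromaticity is invariant to per-pixel intensity scaling, so a well-lit and a darkened view of the same scene yield (nearly) the same signal, and a reconstruction loss against it acts as the representation-consistency regularizer described above. The function names are illustrative, not the paper's API.

```python
import numpy as np

def illumination_invariant_prior(img, eps=1e-6):
    """Per-pixel chromaticity: each channel divided by the channel sum.
    Invariant to any per-pixel scaling of intensity, so it serves as a
    simple stand-in for an illumination-invariant signal prior."""
    s = img.sum(axis=-1, keepdims=True)
    return img / (s + eps)

def consistency_loss(decoded_prior, target_prior):
    """L2 regularization pulling a (parallel) decoder's reconstruction
    toward the invariant prior, discouraging feature drift."""
    return float(np.mean((decoded_prior - target_prior) ** 2))

rng = np.random.default_rng(0)
normal = rng.uniform(0.2, 1.0, size=(8, 8, 3))  # well-lit scene
dim = 0.1 * normal                              # same scene, darkened

p_normal = illumination_invariant_prior(normal)
p_dim = illumination_invariant_prior(dim)
# The prior is (almost) identical under the illumination change,
# so the consistency loss between the two views is near zero.
```

In the full framework, `target_prior` would be the prior computed from the input signal and `decoded_prior` the output of the parallel decoder; this sketch only demonstrates the invariance property that makes such a target stable across lighting conditions.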

More Visual Results

BibTeX

Please consider citing UniPrior if it helps your research.
@inproceedings{li2026uniprior,
  title={Towards Generalized Representations for Low-Light Understanding: When Signal Constancy Meets Semantic Enrichment},
  author={Li, Yifan and Huang, Haofeng and Yang, Wenhan and Liu, Jiaying},
  booktitle={CVPR},
  year={2026}
}