Teaching Large Language Models to
Regress Accurate Image Quality Scores
Using Score Distribution

1Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
2The Chinese University of Hong Kong
3University of Sydney
Corresponding Author
Abstract

With the rapid advancement of Multi-modal Large Language Models (MLLMs), MLLM-based Image Quality Assessment (IQA) methods have shown promising performance in linguistic quality description. However, current methods still fall short in accurately scoring image quality. In this work, we aim to leverage MLLMs to regress accurate quality scores. A key challenge is that the quality score is inherently continuous, typically modeled as a Gaussian distribution, whereas MLLMs generate discrete token outputs. This mismatch necessitates score discretization. Previous approaches discretize the mean score into a one-hot label, resulting in information loss and failing to capture inter-image relationships. We propose a distribution-based approach that discretizes the score distribution into a soft label. This method preserves the characteristics of the score distribution, achieving high accuracy and maintaining inter-image relationships. Moreover, to address dataset variation, where different IQA datasets exhibit different score distributions, we introduce a fidelity loss based on Thurstone's model. This loss captures intra-dataset relationships, facilitating co-training across multiple IQA datasets. With these designs, we develop the Depicted image Quality Assessment for Score regression (DeQA-Score). Experiments across multiple benchmarks show that DeQA-Score consistently outperforms baselines in score regression. Moreover, DeQA-Score can predict score distributions that closely align with human annotations.
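To make the Thurstone-based fidelity loss concrete, here is a minimal sketch of one common formulation. Under Thurstone's Case V model, the probability that image i is rated higher than image j follows from treating their quality scores as independent Gaussians; the fidelity loss then compares the annotated preference probability against the predicted one. The function names are illustrative, and the paper's exact variant may differ in detail.

```python
import math

def normal_cdf(x):
    """CDF of the standard normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def thurstone_pref(mu_i, sigma_i, mu_j, sigma_j):
    """P(image i is rated higher than image j) under Thurstone's Case V
    model, treating the two quality scores as independent Gaussians
    N(mu_i, sigma_i^2) and N(mu_j, sigma_j^2)."""
    return normal_cdf((mu_i - mu_j) / math.sqrt(sigma_i ** 2 + sigma_j ** 2))

def fidelity_loss(p_true, p_pred):
    """Fidelity loss between annotated and predicted preference
    probabilities; it is zero iff the two probabilities coincide."""
    return (1.0 - math.sqrt(p_true * p_pred)
                - math.sqrt((1.0 - p_true) * (1.0 - p_pred)))
```

Because the loss depends only on pairwise preference probabilities, not on the absolute score scale, it is insensitive to the per-dataset shifts and rescalings that make raw scores incomparable across IQA datasets, which is what enables co-training.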

Motivation

To train an MLLM as a quality scorer, continuous scores need to be discretized into discrete level tokens that serve as the training label. Previous methods discretize the human-labeled mean score into a one-hot label, causing information loss. Instead, we discretize the full score distribution, approximated as a Gaussian, to obtain a more accurate soft label.
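The discretization step can be sketched as follows: the Gaussian N(mean, std^2) is integrated over bins around each discrete quality level, yielding a soft label whose entries sum to one. The five level centers and the midpoint bin edges here are assumptions for illustration, not the paper's exact binning.

```python
import math

def gaussian_cdf(x, mean, std):
    """CDF of N(mean, std^2), evaluated via the error function."""
    return 0.5 * (1.0 + math.erf((x - mean) / (std * math.sqrt(2.0))))

def soft_label(mean, std, level_centers=(1.0, 2.0, 3.0, 4.0, 5.0)):
    """Discretize the score distribution N(mean, std^2) into a soft
    label over discrete quality levels.

    Bin edges sit midway between adjacent level centers; the two outer
    bins extend to +/- infinity, so the probability masses sum to 1.
    """
    edges = [(a + b) / 2.0 for a, b in zip(level_centers, level_centers[1:])]
    cdfs = [0.0] + [gaussian_cdf(e, mean, std) for e in edges] + [1.0]
    return [hi - lo for lo, hi in zip(cdfs, cdfs[1:])]

# e.g. a mean opinion score of 3.4 with std 0.6 spreads its mass
# mostly over the "fair" and "good" levels, rather than one level.
probs = soft_label(3.4, 0.6)
```

Unlike a one-hot label, this soft label keeps both the location and the spread of the annotated score distribution, so two images with close mean scores receive similar labels.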

Model Architecture

Framework of our DeQA-Score trained with soft labels. For the "<LEVEL>" token, a KL-divergence loss is computed between the predicted probabilities and our soft label. For all other tokens, the standard cross-entropy loss for next-token prediction is used.
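A minimal sketch of the two per-token losses described above, written over plain probability lists for clarity (in practice these would be computed from logits with a deep-learning framework's built-in ops):

```python
import math

def kl_divergence(target, predicted, eps=1e-12):
    """KL(target || predicted) between two discrete distributions,
    e.g. the soft label and the model's probabilities over the
    quality-level tokens at the "<LEVEL>" position."""
    return sum(t * math.log((t + eps) / (q + eps))
               for t, q in zip(target, predicted) if t > 0.0)

def cross_entropy(true_index, predicted, eps=1e-12):
    """Standard next-token cross-entropy: the negative log-probability
    of the ground-truth token, used for all non-level tokens."""
    return -math.log(predicted[true_index] + eps)
```

Note that cross-entropy against a one-hot target is the degenerate case of the KL loss, which is why the soft-label objective slots cleanly into ordinary next-token training.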

Results

Results of single-dataset training

Results of co-training on multiple IQA datasets

Qualitative results. DeQA-Score generates score distributions that align better with human annotations. In contrast, Q-Align, which is trained with one-hot labels, tends to concentrate its prediction on a single level.

BibTeX
              
    @article{deqa_score,
      title={Teaching Large Language Models to Regress Accurate Image Quality Scores using Score Distribution},
      author={You, Zhiyuan and Cai, Xin and Gu, Jinjin and Xue, Tianfan and Dong, Chao},
      journal={arXiv preprint arXiv:2501.11561},
      year={2025},
    }