Modern single-image super-resolution (SISR) models deliver photo-realistic results at the scale factors on which they are trained, but collapse when asked to magnify far beyond that regime. We address this scalability bottleneck with Chain-of-Zoom (CoZ), a model-agnostic framework that factorizes SISR into an autoregressive chain of intermediate scale-states with multi-scale-aware prompts. CoZ repeatedly re-uses a backbone SR model, decomposing the conditional probability into tractable sub-problems to achieve extreme resolutions without additional training. Because visual cues diminish at high magnifications, we augment each zoom step with multi-scale-aware text prompts generated by a vision-language model (VLM). The prompt extractor itself is fine-tuned using Group Relative Policy Optimization (GRPO) with a critic VLM, aligning text guidance towards human preference. Experiments show that a standard 4x diffusion SR model wrapped in CoZ attains enlargements beyond 256x with high perceptual quality and fidelity.
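As a minimal sketch of that factorization (notation assumed here, not taken from the abstract: x_0 is the LR input, x_1, ..., x_N are the intermediate scale-states, and c_n is the multi-scale-aware prompt at step n), the chain can be written under a Markov assumption on scale-states as
\[
p(x_1, \dots, x_N \mid x_0) \;=\; \prod_{n=1}^{N} p\left(x_n \mid x_{n-1}, c_n\right),
\]
so each factor is a prompt-conditioned SR step at the backbone's native scale factor, and the same backbone can be applied to every factor in turn.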
(a) Conventional SR. When an SR backbone trained for a fixed up-scale factor (e.g., 4x) is pushed to magnifications far beyond its training regime, it produces blur and artifacts.
(b) Chain-of-Zoom (ours). Starting from an LR input, a pretrained VLM generates a descriptive prompt, which—together with the image—is fed to the same SR backbone to yield the next HR scale-state. This prompt-and-upscale cycle is repeated, allowing a single off-the-shelf model to climb to extreme resolutions (16x-256x) while preserving sharp detail and semantic fidelity.
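A minimal sketch of this prompt-and-upscale cycle, with hypothetical interfaces vlm.describe and sr_model.upscale standing in for the prompt-extraction VLM and the SR backbone (the exact conditioning inputs are an assumption here):

```python
def chain_of_zoom(lr_image, vlm, sr_model, num_steps=4):
    """Repeat the prompt-and-upscale cycle; each step yields the next scale-state.

    With a 4x backbone, num_steps=4 corresponds to a nominal 256x total magnification.
    """
    x = lr_image
    states = [x]
    for _ in range(num_steps):
        # Multi-scale-aware prompt: here the extractor sees the current state and
        # the original LR input (one plausible form of multi-scale conditioning).
        prompt = vlm.describe(x, lr_image)
        # Same off-the-shelf SR backbone, conditioned on the generated prompt.
        x = sr_model.upscale(x, prompt)
        states.append(x)
    return states  # LR input plus every intermediate and final HR scale-state
```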
To obtain text prompts that better align with human preference, we fine-tune the prompt-extraction VLM under a novel RLHF pipeline leveraging GRPO.
A critic VLM is used to score the prompt for semantic quality, while phrase-exclusion and repetition penalties enforce conciseness and relevance.
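As an illustration only (the actual reward weighting and penalty definitions are not specified here), the per-prompt reward used during GRPO fine-tuning could combine the three signals roughly as follows:

```python
import re

def prompt_reward(prompt, critic_score, excluded_phrases,
                  w_critic=1.0, w_excl=1.0, w_rep=1.0):
    """Illustrative reward for GRPO fine-tuning of the prompt-extraction VLM.

    critic_score: semantic-quality score from the critic VLM, assumed in [0, 1].
    excluded_phrases: boilerplate phrases the prompt should avoid.
    The weights w_* are hypothetical.
    """
    # Phrase-exclusion reward: 1.0 if no banned phrase appears, else 0.0.
    exclusion = 0.0 if any(p.lower() in prompt.lower() for p in excluded_phrases) else 1.0

    # Repetition penalty: fraction of duplicated words (0.0 means no repetition).
    words = re.findall(r"\w+", prompt.lower())
    repetition = 1.0 - len(set(words)) / len(words) if words else 0.0

    return w_critic * critic_score + w_excl * exclusion - w_rep * repetition
```

Under this reading, the exclusion reward saturating at 1.00 and the repetition penalty at 0.00 correspond to prompts that are concise and free of boilerplate, while the critic term keeps improving semantic quality.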
Super-resolution with various methods: (a) Nearest neighbor interpolation; (b) One-step direct SR with the backbone SR model; (c-e) Variants of CoZ with different text prompts. Nearest neighbor interpolation and one-step direct SR degrade at higher scales, while the CoZ variants produce images of better quality. Incorporating VLM prompts helps overcome the sparsity of the original input signal, enabling the generation of more realistic images.
Phrase exclusion reward and repetition penalty converge to 1.00 and 0.00, respectively, in the early stages of training, while the critic reward increases gradually throughout the training process.
RLHF training with GRPO assists the prompt-extraction VLM in creating meaningful prompts for accurate guidance. (Top) Base VLM: generating prompts only from the LR input causes unwanted hallucinations as shown by the incorrect prompts; (Middle) Multi-scale image prompts are helpful at low scales (e.g., accurate prompt of "dog, stick, water, ...") but fail at high scales; (Bottom) VLM aligned with human preference guides samples with improved text guidance.
Mean-opinion-score (MOS) tests for human-preferred image generation and human-preferred text generation confirm that GRPO fine-tuning of the VLM improves alignment with human preference.