CoSeR: Bridging Image and Language for Cognitive Super-Resolution

arXiv:2311.16512 · CVPR 2024 (#358 of 2716 papers)
75 citations · 8 top authors · 4 data points

Abstract

Existing super-resolution (SR) models primarily focus on restoring local texture details, often neglecting the global semantic information within the scene. This oversight can lead to the omission of crucial semantic details or the introduction of inaccurate textures during the recovery process. In our work, we introduce the Cognitive Super-Resolution (CoSeR) framework, empowering SR models with the capacity to comprehend low-resolution images. We achieve this by marrying image appearance and language understanding to generate a cognitive embedding, which not only activates prior information from large text-to-image diffusion models but also facilitates the generation of high-quality reference images to optimize the SR process. To further improve image fidelity, we propose a novel condition injection scheme called "All-in-Attention", consolidating all conditional information into a single module. Consequently, our method successfully restores semantically correct and photorealistic details, demonstrating state-of-the-art performance across multiple benchmarks. Code: https://github.com/VINHYU/CoSeR
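The "All-in-Attention" scheme described above can be sketched as a single cross-attention block whose key/value sequence is the concatenation of all conditional inputs (e.g. the cognitive embedding, reference-image features, and LR features). This is a minimal illustrative sketch, not CoSeR's actual implementation; the class name, dimensions, and condition list are assumptions.

```python
import torch
import torch.nn as nn

class AllInAttentionSketch(nn.Module):
    """Hypothetical sketch of an 'All-in-Attention' style block:
    every conditional input is consolidated into one key/value
    sequence for a single cross-attention over the image tokens."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor, conditions: list) -> torch.Tensor:
        # x: (B, N, dim) image tokens; conditions: list of (B, M_i, dim) tensors
        ctx = torch.cat(conditions, dim=1)  # merge all conditions into one sequence
        out, _ = self.attn(query=x, key=ctx, value=ctx)
        return x + out  # residual connection back to the image tokens

# Illustrative usage: a 1-token cognitive embedding plus 8 reference tokens
x = torch.randn(2, 16, 64)
conds = [torch.randn(2, 1, 64), torch.randn(2, 8, 64)]
y = AllInAttentionSketch(64)(x, conds)
```

The design point this illustrates is the consolidation: instead of injecting each condition through a separate module, one attention call attends over all of them jointly.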

Citation History

Date          Citations
Jan 27, 2026  71
Feb 13, 2026  74 (+3)
Feb 13, 2026  75 (+1)
Feb 13, 2026  75