Dongyoon Han
Affiliations (2): KAIST, NAVER AI Lab
21 papers · 1,704 total citations

Papers (21)
- Rethinking Spatial Dimensions of Vision Transformers (ICCV 2021, arXiv): 700 citations
- OCR-Free Document Understanding Transformer (ECCV 2022, arXiv): 397 citations
- Re-Labeling ImageNet: From Single to Multi-Labels, From Global to Localized Labels (CVPR 2021, arXiv): 164 citations
- Rethinking Channel Dimensions for Efficient Model Design (CVPR 2021, arXiv): 107 citations
- Model Stock: All We Need Is Just a Few Fine-Tuned Models (ECCV 2024, arXiv): 77 citations
- Switching Temporary Teachers for Semi-Supervised Semantic Segmentation (NeurIPS 2023, arXiv): 52 citations
- DenseNets Reloaded: Paradigm Shift Beyond ResNets and ViTs (ECCV 2024, arXiv): 44 citations
- Scratching Visual Transformer's Back with Uniform Attention (ICCV 2023, arXiv): 37 citations
- The Devil Is in the Points: Weakly Semi-Supervised Instance Segmentation via Point-Guided Mask Representation (CVPR 2023, arXiv): 32 citations
- Demystifying the Neural Tangent Kernel From a Practical Perspective: Can It Be Trusted for Neural Architecture Search Without Training? (CVPR 2022, arXiv): 29 citations
- Contrastive Vicinal Space for Unsupervised Domain Adaptation (ECCV 2022, arXiv): 26 citations
- DaWin: Training-free Dynamic Weight Interpolation for Robust Adaptation (ICLR 2025, arXiv): 16 citations
- NegMerge: Sign-Consensual Weight Merging for Machine Unlearning (ICML 2025, arXiv): 4 citations
- Neglected Free Lunch: Learning Image Classifiers Using Annotation Byproducts (ICCV 2023, arXiv): 4 citations
- Gramian Attention Heads Are Strong yet Efficient Vision Learners (ICCV 2023, arXiv): 3 citations
- Masking Meets Supervision: A Strong Learning Alliance (CVPR 2025, arXiv): 3 citations
- Morphing Tokens Draw Strong Masked Image Models (ICLR 2025, arXiv): 3 citations
- Learning with Unmasked Tokens Drives Stronger Vision Learners (ECCV 2024, arXiv): 3 citations
- SeiT++: Masked Token Modeling Improves Storage-efficient Training (ECCV 2024, arXiv): 2 citations
- Token Bottleneck: One Token to Remember Dynamics (NeurIPS 2025, arXiv): 1 citation
- Generating Instance-level Prompts for Rehearsal-free Continual Learning (ICCV 2023): 0 citations