Seunghoon Hong
19 papers · 900 total citations

Papers (19)
- Pure Transformers are Powerful Graph Learners (NEURIPS 2022, arXiv) · 258 citations
- Part-Based Pseudo Label Refinement for Unsupervised Person Re-Identification (CVPR 2022, arXiv) · 236 citations
- SetVAE: Learning Hierarchical Composition for Generative Modeling of Set-Structured Data (CVPR 2021, arXiv) · 104 citations
- Improving Unsupervised Image Clustering With Robust Learning (CVPR 2021, arXiv) · 94 citations
- High-Fidelity Synthesis with Disentangled Representation (ECCV 2020, arXiv) · 69 citations
- Variational Interaction Information Maximization for Cross-domain Disentanglement (NEURIPS 2020, arXiv) · 57 citations
- Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance (NEURIPS 2023, arXiv) · 29 citations
- Equivariant Hypergraph Neural Networks (ECCV 2022, arXiv) · 16 citations
- Learning to Compose: Improving Object Centric Learning by Injecting Compositionality (ICLR 2024, arXiv) · 10 citations
- Revisiting Random Walks for Learning on Graphs (ICLR 2025, arXiv) · 8 citations
- Towards End-to-End Generative Modeling of Long Videos With Memory-Efficient Bidirectional Transformers (CVPR 2023, arXiv) · 6 citations
- Chameleon: A Data-Efficient Generalist for Dense Visual Prediction in the Wild (ECCV 2024, arXiv) · 5 citations
- Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost (NEURIPS 2022, arXiv) · 4 citations
- MetaWeather: Few-Shot Weather-Degraded Image Restoration (ECCV 2024, arXiv) · 3 citations
- 3D Denoisers Are Good 2D Teachers: Molecular Pretraining via Denoising and Cross-Modal Distillation (AAAI 2025, arXiv) · 1 citation
- Multi-View Representation Learning via Total Correlation Objective (NEURIPS 2021) · 0 citations
- Bridging the gap to real-world language-grounded visual concept learning (NEURIPS 2025, arXiv) · 0 citations
- Transformers Generalize DeepSets and Can be Extended to Graphs & Hypergraphs (NEURIPS 2021) · 0 citations
- Disentangled Representation Learning via Modular Compositional Bias (NEURIPS 2025, arXiv) · 0 citations