Shiyu Chang
20 papers · 1,945 total citations

Papers (20)
1. TransGAN: Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up | NEURIPS 2021 | arXiv | 472 citations
2. The Lottery Ticket Hypothesis for Pre-trained BERT Networks | NEURIPS 2020 | arXiv | 404 citations
3. Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning | CVPR 2020 | arXiv | 278 citations
4. The Lottery Tickets Hypothesis for Supervised and Self-Supervised Pre-Training in Computer Vision Models | CVPR 2021 | arXiv | 136 citations
5. Uncovering the Disentanglement Capability in Text-to-Image Diffusion Models | CVPR 2023 | arXiv | 122 citations
6. Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling | ICML 2024 | arXiv | 101 citations
7. PARP: Prune, Adjust and Re-Prune for Self-Supervised Speech Recognition | NEURIPS 2021 | arXiv | 86 citations
8. Training Stronger Baselines for Learning to Optimize | NEURIPS 2020 | arXiv | 58 citations
9. Harnessing the Spatial-Temporal Attention of Diffusion Models for High-Fidelity Text-to-Image Synthesis | ICCV 2023 | arXiv | 53 citations
10. Understanding Interlocking Dynamics of Cooperative Rationalization | NEURIPS 2021 | arXiv | 49 citations
11. Fairness Reprogramming | NEURIPS 2022 | arXiv | 42 citations
12. Robust Mixture-of-Expert Training for Convolutional Neural Networks | ICCV 2023 | arXiv | 38 citations
13. KVLink: Accelerating Large Language Models via Efficient KV Cache Reuse | NEURIPS 2025 | arXiv | 29 citations
14. Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free | CVPR 2022 | arXiv | 28 citations
15. VSP: Diagnosing the Dual Challenges of Perception and Reasoning in Spatial Planning Tasks for MLLMs | ICCV 2025 | 18 citations
16. Selectivity Drives Productivity: Efficient Dataset Pruning for Enhanced Transfer Learning | NEURIPS 2023 | arXiv | 14 citations
17. Correcting Diffusion Generation through Resampling | CVPR 2024 | arXiv | 12 citations
18. Fictitious Synthetic Data Can Improve LLM Factuality via Prerequisite Learning | ICLR 2025 | arXiv | 5 citations
19. Speech Self-Supervised Learning Using Diffusion Model Synthetic Data | ICML 2024 | 0 citations
20. Sparse Cocktail: Every Sparse Pattern Every Sparse Ratio All At Once | ICML 2024 | 0 citations