Min Lin
26 papers · 1,874 total citations

Papers (26)
Understanding R1-Zero-Like Training: A Critical Perspective · COLM 2025 · arXiv · 714 citations
On Evaluating Adversarial Robustness of Large Vision-Language Models · NeurIPS 2023 · arXiv · 280 citations
Scaling up Masked Diffusion Models on Text · ICLR 2025 · arXiv · 124 citations
Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast · ICML 2024 · arXiv · 103 citations
When Attention Sink Emerges in Language Models: An Empirical View · ICLR 2025 · arXiv · 98 citations
Finetuning Text-to-Image Diffusion Models for Fairness · ICLR 2024 · arXiv · 87 citations
Improved Techniques for Optimization-Based Jailbreaking on Large Language Models · ICLR 2025 · arXiv · 85 citations
How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness? · NeurIPS 2021 · arXiv · 75 citations
EnvPool: A Highly Parallel Reinforcement Learning Environment Execution Engine · NeurIPS 2022 · arXiv · 72 citations
A Closer Look at Machine Unlearning for Large Language Models · ICLR 2025 · arXiv · 35 citations
A₀: An Affordance-Aware Hierarchical Model for General Robotic Manipulation · ICCV 2025 · arXiv · 34 citations
Exploring Incompatible Knowledge Transfer in Few-Shot Image Generation · CVPR 2023 · arXiv · 27 citations
Zero Bubble (Almost) Pipeline Parallelism · ICLR 2024 · 26 citations
Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates · ICLR 2025 · arXiv · 25 citations
Meta-Unlearning on Diffusion Models: Preventing Relearning Unlearned Concepts · ICCV 2025 · arXiv · 18 citations
Locality Sensitive Sparse Encoding for Learning World Models Online · ICLR 2024 · arXiv · 18 citations
Improving Your Model Ranking on Chatbot Arena by Vote Rigging · ICML 2025 · arXiv · 11 citations
Mutual Information Regularized Offline Reinforcement Learning · NeurIPS 2023 · arXiv · 9 citations
NU-MCC: Multiview Compressive Coding with Neighborhood Decoder and Repulsive UDF · NeurIPS 2023 · arXiv · 9 citations
Lifelong Safety Alignment for Language Models · NeurIPS 2025 · arXiv · 7 citations
ZeroStereo: Zero-shot Stereo Matching from Single Images · ICCV 2025 · arXiv · 6 citations
On Calibrating Diffusion Probabilistic Models · NeurIPS 2023 · arXiv · 4 citations
FlowMamba: Learning Point Cloud Scene Flow with Global Motion Propagation · AAAI 2025 · arXiv · 4 citations
Continual Reinforcement Learning by Planning with Online World Models · ICML 2025 · arXiv · 3 citations
Online Fast Adaptation and Knowledge Accumulation (OSAKA): a New Approach to Continual Learning · NeurIPS 2020 · 0 citations
IHNet: Iterative Hierarchical Network Guided by High-Resolution Estimated Information for Scene Flow Estimation · ICCV 2023 · 0 citations