Peng Ye
19 papers · 249 total citations

Papers (19)
MADTP: Multimodal Alignment-Guided Dynamic Token Pruning for Accelerating Vision-Language Transformer · CVPR 2024 · arXiv · 47 citations
CasCast: Skillful High-resolution Precipitation Nowcasting via Cascaded Modelling · ICML 2024 · arXiv · 46 citations
HiSplat: Hierarchical 3D Gaussian Splatting for Generalizable Sparse-View Reconstruction · ICLR 2025 · arXiv · 35 citations
Critic-V: VLM Critics Help Catch VLM Errors in Multimodal Reasoning · CVPR 2025 · arXiv · 33 citations
Neural-Symbolic Entangled Framework for Complex Query Answering · NeurIPS 2022 · arXiv · 29 citations
Stimulative Training of Residual Networks: A Social Psychology Perspective of Loafing · NeurIPS 2022 · arXiv · 15 citations
Once for Both: Single Stage of Importance and Sparsity Search for Vision Transformer Compression · CVPR 2024 · arXiv · 10 citations
All-in-One: Transferring Vision Foundation Models into Stereo Matching · AAAI 2025 · arXiv · 9 citations
Scaling Physical Reasoning with the PHYSICS Dataset · NeurIPS 2025 · arXiv · 6 citations
Boosting Residual Networks with Group Knowledge · AAAI 2024 · arXiv · 6 citations
DeRS: Towards Extremely Efficient Upcycled Mixture-of-Experts Models · CVPR 2025 · arXiv · 6 citations
Consistency-aware Self-Training for Iterative-based Stereo Matching · CVPR 2025 · arXiv · 2 citations
scMRDR: A scalable and flexible framework for unpaired single-cell multi-omics data integration · NeurIPS 2025 · arXiv · 2 citations
Enhanced Sparsification via Stimulative Training · ECCV 2024 · arXiv · 2 citations
Improved Bounds for Pure Private Agnostic Learning: Item-Level and User-Level Privacy · ICML 2024 · arXiv · 1 citation
Less is More: Efficient Model Merging with Binary Task Switch · CVPR 2025 · arXiv · 0 citations
β-DARTS: Beta-Decay Regularization for Differentiable Architecture Search · CVPR 2022 · 0 citations
RFD-ECNet: Extreme Underwater Image Compression with Reference to Feature Dictionary · ICCV 2023 · 0 citations
PaceLLM: Brain-Inspired Large Language Models for Long-Context Understanding · NeurIPS 2025 · arXiv · 0 citations