Ge Zhang
Affiliations (3):
- 01.ai
- The Hong Kong University of Science and Technology
- University of Waterloo

18 papers, 2,657 total citations

Papers (18):
- MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI (CVPR 2024, arXiv): 1,715 citations
- Omni-MATH: A Universal Olympiad Level Mathematic Benchmark for Large Language Models (ICLR 2025, arXiv): 149 citations
- UniIR: Training and Benchmarking Universal Multimodal Information Retrievers (ECCV 2024, arXiv): 139 citations
- TableBench: A Comprehensive and Complex Benchmark for Table Question Answering (AAAI 2025, arXiv): 105 citations
- Training Socially Aligned Language Models on Simulated Social Interactions (ICLR 2024, arXiv): 91 citations
- General-Reasoner: Advancing LLM Reasoning Across All Domains (NeurIPS 2025, arXiv): 86 citations
- LRRU: Long-short Range Recurrent Updating Networks for Depth Completion (ICCV 2023, arXiv): 80 citations
- Massive Editing for Large Language Models via Meta Learning (ICLR 2024, arXiv): 59 citations
- OmniBench: Towards The Future of Universal Omni-Language Models (NeurIPS 2025, arXiv): 53 citations
- MARBLE: Music Audio Representation Benchmark for Universal Evaluation (NeurIPS 2023, arXiv): 49 citations
- Second Thoughts are Best: Learning to Re-Align With Human Values from Text Edits (NeurIPS 2022, arXiv): 40 citations
- McEval: Massively Multilingual Code Evaluation (ICLR 2025, arXiv): 31 citations
- SimpleVQA: Multimodal Factuality Evaluation for Multimodal Large Language Models (ICCV 2025, arXiv): 24 citations
- Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers (ICCV 2025, arXiv): 22 citations
- Beyond Bradley-Terry Models: A General Preference Model for Language Model Alignment (ICML 2025, arXiv): 10 citations
- KORGym: A Dynamic Game Platform for LLM Reasoning Evaluation (NeurIPS 2025, arXiv): 4 citations
- Improving Depth Completion via Depth Feature Upsampling (CVPR 2024): 0 citations
- Toward Modality Gap: Vision Prototype Learning for Weakly-supervised Semantic Segmentation with CLIP (AAAI 2025, arXiv): 0 citations