Ke Chen
17 papers, 758 total citations

Papers (17)
Unsupervised Domain Adaptation via Structurally Regularized Deep Clustering. CVPR 2020, arXiv. 318 citations.
Feature Importance Ranking for Deep Learning. NeurIPS 2020, arXiv. 147 citations.
3D AffordanceNet: A Benchmark for Visual Object Affordance Understanding. CVPR 2021, arXiv. 138 citations.
Fine-Grained Object Classification via Self-Supervised Pose Alignment. CVPR 2022, arXiv. 68 citations.
CARAT: Contrastive Feature Reconstruction and Aggregation for Multi-Modal Multi-Label Emotion Recognition. AAAI 2024, arXiv. 23 citations.
Sparse Steerable Convolutions: An Efficient Learning of SE(3)-Equivariant Features for Estimation and Tracking of Object Poses in 3D Space. NeurIPS 2021, arXiv. 22 citations.
Weakly Supervised Segmentation With Point Annotations for Histopathology Images via Contrast-Based Variational Model. CVPR 2023, arXiv. 20 citations.
Quasi-Balanced Self-Training on Noise-Aware Synthesis of Object Point Clouds for Closing Domain Gap. ECCV 2022, arXiv. 11 citations.
FloE: On-the-Fly MoE Inference on Memory-constrained GPU. ICML 2025, arXiv. 5 citations.
PMA: Towards Parameter-Efficient Point Cloud Understanding via Point Mamba Adapter. CVPR 2025, arXiv. 3 citations.
In-Dataset Trajectory Return Regularization for Offline Preference-based Reinforcement Learning. AAAI 2025, arXiv. 3 citations.
Variational Hybrid-Attention Framework for Multi-Label Few-Shot Aspect Category Detection. AAAI 2024. 0 citations.
CogSQL: A Cognitive Framework for Enhancing Large Language Models in Text-to-SQL Translation. AAAI 2025. 0 citations.
Real-Time Vanishing Point Detector Integrating Under-Parameterized RANSAC and Hough Transform. ICCV 2021. 0 citations.
Geometry-Aware Self-Training for Unsupervised Domain Adaptation on Object Point Clouds. ICCV 2021. 0 citations.
AllGCD: Leveraging All Unlabeled Data for Generalized Category Discovery. ICCV 2025. 0 citations.
Adapting Pre-trained 3D Models for Point Cloud Video Understanding via Cross-frame Spatio-temporal Perception. CVPR 2025. 0 citations.