Poster "robotic manipulation" Papers

51 papers found • Page 1 of 2

Hyper-GoalNet: Goal-Conditioned Manipulation Policy Learning with HyperNetworks

Pei Zhou, Wanting Yao, Qian Luo et al.

NeurIPS 2025
1 citation

Act to See, See to Act: Diffusion-Driven Perception-Action Interplay for Adaptive Policies

Jing Wang, Weiting Peng, Jing Tang et al.

NeurIPS 2025 • arXiv:2509.25822

AffordDP: Generalizable Diffusion Policy with Transferable Affordance

Shijie Wu, Yihang Zhu, Yunao Huang et al.

CVPR 2025 • arXiv:2412.03142
26 citations

AHA: A Vision-Language-Model for Detecting and Reasoning Over Failures in Robotic Manipulation

Jiafei Duan, Wilbert Pumacay, Nishanth Kumar et al.

ICLR 2025 • arXiv:2410.00371
85 citations

Articulate-Anything: Automatic Modeling of Articulated Objects via a Vision-Language Foundation Model

Long Le, Jason Xie, William Liang et al.

ICLR 2025 • arXiv:2410.13882
44 citations

BadRobot: Jailbreaking Embodied LLM Agents in the Physical World

Hangtao Zhang, Chenyu Zhu, Xianlong Wang et al.

ICLR 2025
6 citations

Contractive Dynamical Imitation Policies for Efficient Out-of-Sample Recovery

Amin Soleimani Abyaneh, Mahrokh Boroujeni, Hsiu-Chin Lin et al.

ICLR 2025 • arXiv:2412.07544
4 citations

Data Scaling Laws in Imitation Learning for Robotic Manipulation

Fanqi Lin, Yingdong Hu, Pingyue Sheng et al.

ICLR 2025 • arXiv:2410.18647
123 citations

Dynamic Test-Time Compute Scaling in Control Policy: Difficulty-Aware Stochastic Interpolant Policy

Inkook Chun, Seungjae Lee, Michael Albergo et al.

NeurIPS 2025 • arXiv:2511.20906

DynaRend: Learning 3D Dynamics via Masked Future Rendering for Robotic Manipulation

Jingyi Tian, Le Wang, Sanping Zhou et al.

NeurIPS 2025 • arXiv:2510.24261

EC-Flow: Enabling Versatile Robotic Manipulation from Action-Unlabeled Videos via Embodiment-Centric Flow

Yixiang Chen, Peiyan Li, Yan Huang et al.

ICCV 2025 • arXiv:2507.06224
3 citations

EgoBridge: Domain Adaptation for Generalizable Imitation from Egocentric Human Data

Ryan Punamiya, Dhruv Patel, Patcharapong Aphiwetsa et al.

NeurIPS 2025 • arXiv:2509.19626
6 citations

ET-SEED: Efficient Trajectory-Level SE(3) Equivariant Diffusion Policy

Chenrui Tie, Yue Chen, Ruihai Wu et al.

ICLR 2025 • arXiv:2411.03990
16 citations

Exploring the Limits of Vision-Language-Action Manipulation in Cross-task Generalization

Jiaming Zhou, Ke Ye, Jiayi Liu et al.

NeurIPS 2025 • arXiv:2505.15660
18 citations

Fast-in-Slow: A Dual-System VLA Model Unifying Fast Manipulation within Slow Reasoning

Hao Chen, Jiaming Liu, Chenyang Gu et al.

NeurIPS 2025
27 citations

FedVLA: Federated Vision-Language-Action Learning with Dual Gating Mixture-of-Experts for Robotic Manipulation

Cui Miao, Tao Chang, Meihan Wu et al.

ICCV 2025 • arXiv:2508.02190
5 citations

FlowRAM: Grounding Flow Matching Policy with Region-Aware Mamba Framework for Robotic Manipulation

Sen Wang, Le Wang, Sanping Zhou et al.

CVPR 2025 • arXiv:2506.16201
8 citations

FreqPolicy: Frequency Autoregressive Visuomotor Policy with Continuous Tokens

Yiming Zhong, Yumeng Liu, Chuyang Xiao et al.

NeurIPS 2025 • arXiv:2506.01583
1 citation

GoalLadder: Incremental Goal Discovery with Vision-Language Models

Alexey Zakharov, Shimon Whiteson

NeurIPS 2025 • arXiv:2506.16396
1 citation

GOPlan: Goal-conditioned Offline Reinforcement Learning by Planning with Learned Models

Mianchu Wang, Rui Yang, Xi Chen et al.

ICLR 2025 • arXiv:2310.20025
16 citations

GraspCoT: Integrating Physical Property Reasoning for 6-DoF Grasping under Flexible Language Instructions

Xiaomeng Chu, Jiajun Deng, Guoliang You et al.

ICCV 2025 • arXiv:2503.16013
2 citations

iManip: Skill-Incremental Learning for Robotic Manipulation

Zexin Zheng, Jia-Feng Cai, Xiao-Ming Wu et al.

ICCV 2025 • arXiv:2503.07087
4 citations

Learning Precise Affordances from Egocentric Videos for Robotic Manipulation

Gen Li, Nikolaos Tsagkas, Jifei Song et al.

ICCV 2025 • arXiv:2408.10123
17 citations

Learning View-invariant World Models for Visual Robotic Manipulation

Jing-Cheng Pang, Nan Tang, Kaiyuan Li et al.

ICLR 2025

METASCENES: Towards Automated Replica Creation for Real-world 3D Scans

Huangyue Yu, Baoxiong Jia, Yixin Chen et al.

CVPR 2025 • arXiv:2505.02388
13 citations

Mitigating the Human-Robot Domain Discrepancy in Visual Pre-training for Robotic Manipulation

Jiaming Zhou, Teli Ma, Kun-Yu Lin et al.

CVPR 2025 • arXiv:2406.14235
19 citations

Object-Centric Prompt-Driven Vision-Language-Action Model for Robotic Manipulation

Xiaoqi Li, Lingyun Xu, Mingxu Zhang et al.

CVPR 2025 • arXiv:2505.02166
5 citations

PASG: A Closed-Loop Framework for Automated Geometric Primitive Extraction and Semantic Anchoring in Robotic Manipulation

Zhihao Zhu, Yifan Zheng, Siyu Pan et al.

ICCV 2025 • arXiv:2508.05976

Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation

Yang Tian, Sizhe Yang, Jia Zeng et al.

ICLR 2025 • arXiv:2412.15109
93 citations

Q-SFT: Q-Learning for Language Models via Supervised Fine-Tuning

Joey Hong, Anca Dragan, Sergey Levine

ICLR 2025 • arXiv:2411.05193
8 citations

RoboGround: Robotic Manipulation with Grounded Vision-Language Priors

Haifeng Huang, Xinyi Chen, Yilun Chen et al.

CVPR 2025 • arXiv:2504.21530
17 citations

Subtask-Aware Visual Reward Learning from Segmented Demonstrations

Changyeon Kim, Minho Heo, Doohyun Lee et al.

ICLR 2025 • arXiv:2502.20630
3 citations

TACO: Taming Diffusion for in-the-wild Video Amodal Completion

Ruijie Lu, Yixin Chen, Yu Liu et al.

ICCV 2025 • arXiv:2503.12049
10 citations

TASTE-Rob: Advancing Video Generation of Task-Oriented Hand-Object Interaction for Generalizable Robotic Manipulation

Hongxiang Zhao, Xingchen Liu, Mutian Xu et al.

CVPR 2025 • arXiv:2503.11423
22 citations

Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper

Xinyue Zhu, Binghao Huang, Yunzhu Li

NeurIPS 2025 • arXiv:2507.15062
14 citations

Training-Free Generation of Temporally Consistent Rewards from VLMs

Yinuo Zhao, Jiale Yuan, Zhiyuan Xu et al.

ICCV 2025 • arXiv:2507.04789
2 citations

Two-Steps Diffusion Policy for Robotic Manipulation via Genetic Denoising

Mateo Clémente, Leo Brunswic, Yang et al.

NeurIPS 2025 • arXiv:2510.21991
1 citation

VTDexManip: A Dataset and Benchmark for Visual-tactile Pretraining and Dexterous Manipulation with Reinforcement Learning

Qingtao Liu, Yu Cui, Zhengnan Sun et al.

ICLR 2025
11 citations

Wavelet Policy: Lifting Scheme for Policy Learning in Long-Horizon Tasks

Hao Huang, Shuaihang Yuan, Geeta Chandra Raju Bethala et al.

ICCV 2025 • arXiv:2507.04331

An Embodied Generalist Agent in 3D World

Jiangyong Huang, Silong Yong, Xiaojian Ma et al.

ICML 2024 • arXiv:2311.12871
305 citations

CyberDemo: Augmenting Simulated Human Demonstration for Real-World Dexterous Manipulation

Jun Wang, Yuzhe Qin, Kaiming Kuang et al.

CVPR 2024 • arXiv:2402.14795
28 citations

Diffusion Reward: Learning Rewards via Conditional Video Diffusion

Tao Huang, Guangqi Jiang, Yanjie Ze et al.

ECCV 2024 • arXiv:2312.14134
44 citations

Expert Proximity as Surrogate Rewards for Single Demonstration Imitation Learning

Chia-Cheng Chiang, Li-Cheng Lan, Wei-Fang Sun et al.

ICML 2024 • arXiv:2402.01057

Language-Driven 6-DoF Grasp Detection Using Negative Prompt Guidance

Tien Toan Nguyen, Minh Nhat Vu, Baoru Huang et al.

ECCV 2024 • arXiv:2407.13842
17 citations

ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation

Xiaoqi Li, Mingxu Zhang, Yiran Geng et al.

CVPR 2024 • arXiv:2312.16217
182 citations

PEARL: Zero-shot Cross-task Preference Alignment and Robust Reward Learning for Robotic Manipulation

Runze Liu, Yali Du, Fengshuo Bai et al.

ICML 2024 • arXiv:2306.03615
9 citations

Retrieval-Augmented Embodied Agents

Yichen Zhu, Zhicai Ou, Xiaofeng Mou et al.

CVPR 2024 • arXiv:2404.11699
28 citations

RoboMP²: A Robotic Multimodal Perception-Planning Framework with Multimodal Large Language Models

Qi Lv, Hao Li, Xiang Deng et al.

ICML 2024 • arXiv:2404.04929
4 citations

Stop Regressing: Training Value Functions via Classification for Scalable Deep RL

Jesse Farebrother, Jordi Orbay, Quan Vuong et al.

ICML 2024 • arXiv:2403.03950
107 citations

VinT-6D: A Large-Scale Object-in-hand Dataset from Vision, Touch and Proprioception

Zhaoliang Wan, Yonggen Ling, Senlin Yi et al.

ICML 2024 • arXiv:2501.00510
9 citations