Papers by Joshua B. Tenenbaum (Poster)
11 papers found
Can Large Language Models Understand Symbolic Graphics Programs?
Zeju Qiu, Weiyang Liu, Haiwen Feng et al.
ICLR 2025 · arXiv:2408.08313 · 29 citations
Multiagent Finetuning: Self Improvement with Diverse Reasoning Chains
Vighnesh Subramaniam, Yilun Du, Joshua B Tenenbaum et al.
ICLR 2025 · arXiv:2501.05707 · 63 citations
Vision CNNs trained to estimate spatial latents learned similar ventral-stream-aligned representations
Yudi Xie, Weichen Huang, Esther Alter et al.
ICLR 2025 · arXiv:2412.09115 · 3 citations
VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning
Yichao Liang, Nishanth Kumar, Hao Tang et al.
ICLR 2025 · arXiv:2410.23156 · 34 citations
What Makes a Maze Look Like a Maze?
Joy Hsu, Jiayuan Mao, Joshua B Tenenbaum et al.
ICLR 2025 · arXiv:2409.08202 · 13 citations
Building Cooperative Embodied Agents Modularly with Large Language Models
Hongxin Zhang, Weihua Du, Jiaming Shan et al.
ICLR 2024 · arXiv:2307.02485 · 273 citations
HAZARD Challenge: Embodied Decision Making in Dynamically Changing Environments
Qinhong Zhou, Sunli Chen, Yisong Wang et al.
ICLR 2024 · arXiv:2401.12975 · 27 citations
Learning to Jointly Understand Visual and Tactile Signals
Yichen Li, Yilun Du, Chao Liu et al.
ICLR 2024
LILO: Learning Interpretable Libraries by Compressing and Documenting Code
Gabriel Grand, Lio Wong, Maddy Bowers et al.
ICLR 2024 · arXiv:2310.19791 · 31 citations
Probabilistic Adaptation of Black-Box Text-to-Video Models
Sherry Yang, Yilun Du, Bo Dai et al.
ICLR 2024
Video Language Planning
Yilun Du, Sherry Yang, Pete Florence et al.
ICLR 2024 · arXiv:2310.10625 · 147 citations