Daniela Rus
23 papers · 1,154 total citations

Papers (23)
Deep Evidential Regression · NeurIPS 2020 · arXiv · 567 citations
Efficient Dataset Distillation using Random Feature Approximation · NeurIPS 2022 · arXiv · 132 citations
LLM and Simulation as Bilevel Optimizers: A New Paradigm to Advance Physical Scientific Discovery · ICML 2024 · arXiv · 67 citations
SafeDiffuser: Safe Planning with Diffusion Probabilistic Models · ICLR 2025 · arXiv · 61 citations
Causal Navigation by Continuous-time Neural Networks · NeurIPS 2021 · arXiv · 56 citations
Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition · NeurIPS 2021 · arXiv · 54 citations
Is Bang-Bang Control All You Need? Solving Continuous Control with Bernoulli Policies · NeurIPS 2021 · arXiv · 53 citations
ReasonIR: Training Retrievers for Reasoning Tasks · COLM 2025 · arXiv · 44 citations
DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models · NeurIPS 2023 · arXiv · 40 citations
Oscillatory State-Space Models · ICLR 2025 · arXiv · 23 citations
Sparse Flows: Pruning Continuous-depth Models · NeurIPS 2021 · arXiv · 19 citations
Evolution of Neural Tangent Kernels under Benign and Adversarial Training · NeurIPS 2022 · arXiv · 15 citations
Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation · ICLR 2024 · arXiv · 13 citations
ReGen: Generative Robot Simulation via Inverse Design · ICLR 2025 · arXiv · 4 citations
The Master Key Filters Hypothesis: Deep Filters Are General · AAAI 2025 · arXiv · 2 citations
Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust Closed-Loop Control · ICLR 2024 · arXiv · 2 citations
Visual Graph Arena: Evaluating Visual Conceptualization of Vision and Multimodal Large Language Models · ICML 2025 · arXiv · 1 citation
The Quest for Universal Master Key Filters in DS-CNNs · NeurIPS 2025 · arXiv · 1 citation
ActionSense: A Multimodal Dataset and Recording Framework for Human Activities Using Wearable Sensors in a Kitchen Environment · NeurIPS 2022 · 0 citations
Large Scale Dataset Distillation with Domain Shift · ICML 2024 · 0 citations
Compress to Impress: Efficient LLM Adaptation Using a Single Gradient Step on 100 Samples · NeurIPS 2025 · arXiv · 0 citations
On the Size and Approximation Error of Distilled Datasets · NeurIPS 2023 · 0 citations
Gigastep - One Billion Steps per Second Multi-agent Reinforcement Learning · NeurIPS 2023 · 0 citations