Our mission is to create intelligent mobile-manipulation robots that
efficiently perform manipulation tasks in diverse and unstructured
environments. We are motivated by the enormous difficulties that arise in
dealing with large numbers of potentially unknown objects and by a robot's
limited knowledge and sensing capabilities. To realize this mission, we
conduct interdisciplinary research at the intersection of task and motion
planning, computer vision, and machine learning.
Preference learning for guiding the tree search in continuous POMDPs. Jiyong Ahn, Sanghyeon Son, Dongryung Lee, Jisu Han, Dongwon Son, and Beomjoon Kim.
Pre- and post-contact policy decomposition for non-prehensile
manipulation with zero-shot sim-to-real transfer. Minchan Kim,
Junhyek Han, Jaehyung Kim, and Beomjoon Kim.
Local object crop collision network for efficient simulation of
non-convex objects in GPU-based simulators. Dongwon Son and Beomjoon Kim.
Ohm^2: Optimal hierarchical planner for object search in large
environments via mobile manipulation. Yoonyoung Cho*, Donghoon
Shin*, and Beomjoon Kim.
MIT Embodied Intelligence Seminar: Making Robots See and Manipulate
Learning to reason for robot task and motion planning problems