Robotic Collaboration through Scalable Reactive Synthesis

As human-robot collaboration is scaled up to increasingly complex tasks, there is a growing need to formally model the system formed by the human and robotic agents. Such modeling enables reasoning about the reliability, safety, correctness, and scalability of the system, but it presents a daunting task. This research aspires to formally model scenarios where the robot and the human can take on varying roles. The intent is to develop scalable methodologies that endow the robot with the ability to adapt to human actions and preferences without changes to its underlying software or hardware. An assembly scenario will be used to mimic manufacturing settings where a robot and a human work together and where the actions of the robot can improve the quality and safety of the human's work. The project is a critical step towards making robots collaborative with and responsive to humans while allowing the human to remain in control.

This research will develop a framework for human-robot collaboration that integrates reactive synthesis from formal methods with robotic planning methods. By tightly coupling the development of synthesis methods with robotics, it will pursue a framework that is both intuitive and scalable. The focus is on task-level collaboration as opposed to physical interaction with a human. The framework takes as input a task specification defined in a novel formal language interpreted over finite traces: a language well suited to robotics problems. It produces a policy for a robotic agent to assist a human agent regardless of which subtask the human chooses and in which order the human executes it. The policy includes both high-level actions for the robotic agent and corresponding low-level motions that can be directly executed by the actual robot. One key novel component of the approach is the automated construction of abstractions for robotic manipulation that can be used by synthesis methods.
The scalability of the proposed work will be investigated along several dimensions: the extent to which symbolic reasoning can be applied, the development of new synthesis algorithms, and the proper use of abstractions, including their automatic refinement and the construction of factored abstractions. The trade-offs of combining partial policies with replanning will be investigated, as well as how to account for incomplete information due to partial observations. The theoretical contributions will be implemented on real robot hardware and demonstrated in experiments analogous to real-world assembly tasks.
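At its core, reactive synthesis computes a policy that wins against every possible environment choice. The sketch below is a minimal illustration only: the states, actions, and toy assembly domain are hypothetical and far simpler than the project's actual LTLf-based pipeline. It solves a two-player reachability game on a finite graph, where the robot picks actions and the human picks subtasks adversarially, so the resulting policy guarantees the goal regardless of which subtask the human chooses.

```python
def solve_reachability_game(states, robot_moves, human_moves, goal):
    """Compute the states from which the robot can force reaching `goal`
    no matter what the human does, plus a winning robot policy.

    robot_moves[s] -> list of (robot_action, successor) pairs (robot chooses)
    human_moves[s] -> list of successor states (human chooses)
    """
    winning = set(goal)
    policy = {}
    changed = True
    while changed:  # least fixed point of the controllable-predecessor operator
        changed = False
        for s in states:
            if s in winning:
                continue
            if s in robot_moves:
                # Robot state: one action leading into the winning region suffices.
                for action, nxt in robot_moves[s]:
                    if nxt in winning:
                        winning.add(s)
                        policy[s] = action
                        changed = True
                        break
            else:
                # Human state: every human choice must land in the winning region.
                succs = human_moves.get(s, [])
                if succs and all(nxt in winning for nxt in succs):
                    winning.add(s)
                    changed = True
    return winning, policy

# Hypothetical toy domain: the human first picks subtask A or B;
# if B leaves the part misoriented, the robot adapts by reorienting it.
states = {"start", "didA", "didB", "fastened", "stuck"}
human_moves = {"start": ["didA", "didB"]}
robot_moves = {
    "didA": [("fasten_bolt", "fastened")],
    "didB": [("reorient_part", "didA")],  # adapt, then fasten via didA
}
winning, policy = solve_reachability_game(
    states, robot_moves, human_moves, goal={"fastened"})
```

Here the fixed point certifies that `start` is winning: whichever subtask the human selects, the policy (`fasten_bolt` after A, `reorient_part` after B) reaches the goal. The project's actual synthesis operates on automata compiled from finite-trace specifications rather than hand-built graphs, but the fixed-point computation of a winning region is the same underlying idea.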

This work has been supported by grant NSF NRI 1830549.

Related Publications

  1. K. Elimelech, Z. Kingston, W. Thomason, M. Y. Vardi, and L. E. Kavraki, “Accelerating long-horizon planning with affordance-directed dynamic grounding of abstract skills,” in IEEE International Conference on Robotics and Automation, 2024. To Appear.
  2. K. Elimelech, L. E. Kavraki, and M. Y. Vardi, “Extracting generalizable skills from a single plan execution using abstraction-critical state detection,” in 2023 International Conference on Robotics and Automation (ICRA), 2023, pp. 5772–5778.
  3. K. Elimelech, L. E. Kavraki, and M. Y. Vardi, “Automatic Cross-domain Task Plan Transfer by Caching Abstract Skills,” in Algorithmic Foundations of Robotics XV, Cham, Switzerland, 2023, vol. 25, pp. 470–487.
  4. K. Elimelech, L. E. Kavraki, and M. Y. Vardi, “Efficient task planning using abstract skills and dynamic road map matching,” in Robotics Research, Cham, Switzerland, 2023, vol. 27, pp. 487–503.
  5. S. Bansal, L. E. Kavraki, M. Y. Vardi, and A. Wells, “Synthesis from Satisficing and Temporal Goals,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2022, vol. 36, no. 9, pp. 9679–9686.
  6. T. Pan, A. M. Wells, R. Shome, and L. E. Kavraki, “Failure is an option: Task and Motion Planning with Failing Executions,” in 2022 International Conference on Robotics and Automation (ICRA), 2022, pp. 1947–1953.
  7. A. M. Wells, Z. Kingston, M. Lahijanian, L. E. Kavraki, and M. Y. Vardi, “Finite-Horizon Synthesis for Probabilistic Manipulation Domains,” in Proceedings of the IEEE International Conference on Robotics and Automation, 2021, pp. 6336–6342.
  8. T. Pan, A. M. Wells, R. Shome, and L. E. Kavraki, “A General Task and Motion Planning Framework For Multiple Manipulators,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2021, pp. 3168–3174.
  9. A. M. Wells, M. Lahijanian, L. E. Kavraki, and M. Y. Vardi, “LTLf Synthesis on Probabilistic Systems,” Electronic Proceedings in Theoretical Computer Science, vol. 326, pp. 166–181, Sep. 2020.
  10. J. D. Hernández, S. Sobti, A. Sciola, M. Moll, and L. E. Kavraki, “Increasing Robot Autonomy via Motion Planning and an Augmented Reality Interface,” IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 1017–1023, Apr. 2020.
  11. T. Pan, C. K. Verginis, A. M. Wells, L. E. Kavraki, and D. V. Dimarogonas, “Augmenting Control Policies with Motion Planning for Robust and Safe Multi-robot Navigation,” 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 6975–6981, 2020.
  12. J. D. Hernández, M. Moll, and L. E. Kavraki, “Lazy Evaluation of Goal Specifications Guided by Motion Planning,” in Proceedings of the IEEE International Conference on Robotics and Automation, 2019, pp. 944–950.
  13. K. He, A. M. Wells, L. E. Kavraki, and M. Y. Vardi, “Efficient Symbolic Reactive Synthesis for Finite-Horizon Tasks,” in Proceedings of the IEEE International Conference on Robotics and Automation, 2019, pp. 8993–8999. (Best paper award in Cognitive Robotics)
  14. K. He, M. Lahijanian, L. E. Kavraki, and M. Y. Vardi, “Automated Abstraction of Manipulation Domains for Cost-Based Reactive Synthesis,” IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 285–292, Apr. 2019.