Research Projects
Adaptive Multi-Human Multi-Robot Systems
Multi-human multi-robot (MH-MR) teams are emerging as promising assets for tackling complex and expansive missions, such as environmental monitoring, disaster recovery, and search and rescue. The collaboration of multiple humans and robots with diverse capabilities, expertise, and characteristics offers great potential to enhance team complementarity, productivity, and versatility. However, this inherent heterogeneity within the team also introduces coordination challenges. Moreover, placing human operators at the core of the decision-making process can significantly improve the team's situational awareness and flexibility, but it also introduces additional uncertainty and complexity: human affective states, such as cognitive load and emotion, as well as human performance, fluctuate over time and are susceptible to a range of internal and external factors.
To unlock the full potential of MH-MR teams, this project focuses on developing adaptive systems that initialize mission-specific MH-MR teams with their inherent heterogeneity in mind, proactively monitor and analyze operators' cognitive and emotional states, and enable mutual adaptation, with human operators adapting to changes in the robot system and robots adapting to operators' cognitive and emotional states. Specifically, we aim to:
- Develop sophisticated initial task allocation (ITA) strategies that adapt to team heterogeneity. These strategies optimally initialize task distribution by allocating and scheduling a variety of tasks, each with unique specifications, across a team of multiple humans, each influenced by varied factors, and multiple robots, each with different characteristics.
- Build multimodal human state recognition models that reason in real time about the cognitive load and emotional states of human operators using various physiological and behavioral signals (a minimal fusion sketch follows this list).
- Develop affective controllers that enable adaptive task re-allocation and team adjustments based on the perceived human states.
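As a rough illustration of the multimodal fusion idea behind these recognition models, the Python sketch below projects a few physiological and behavioral feature streams into a shared space, fuses them with a small transformer encoder, and predicts a discrete operator state. The modality names, feature dimensions, and two-class output are illustrative assumptions; this is not the published Husformer architecture.

```python
# Minimal sketch of multimodal human state fusion with attention.
# Modality names, feature sizes, and the two-class cognitive-load output
# are illustrative assumptions, not the published Husformer architecture.
import torch
import torch.nn as nn

class MultimodalStateClassifier(nn.Module):
    def __init__(self, modality_dims, d_model=64, num_classes=2):
        super().__init__()
        # Project each physiological/behavioral signal into a shared space.
        self.proj = nn.ModuleList([nn.Linear(d, d_model) for d in modality_dims])
        encoder_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, modalities):
        # modalities: list of (batch, dim) tensors, one per signal source.
        tokens = torch.stack([p(x) for p, x in zip(self.proj, modalities)], dim=1)
        fused = self.encoder(tokens)          # attention across modalities
        return self.head(fused.mean(dim=1))   # pooled state prediction

# Example: fuse ECG, EDA, and eye-tracking features for a batch of 8 operators.
model = MultimodalStateClassifier(modality_dims=[16, 8, 4])
logits = model([torch.randn(8, 16), torch.randn(8, 8), torch.randn(8, 4)])
print(logits.shape)  # torch.Size([8, 2])
```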
This project is supported by the National Science Foundation under Grant No. IIS-1846221.
Relevant papers:
- Initial Task Allocation in Multi-Human Multi-Robot Teams: An Attention-enhanced Hierarchical Reinforcement Learning Approach, RA-L 2024.
- MOCAS: A Multimodal Dataset for Objective Cognitive Workload Assessment on Simultaneous Tasks, IEEE TAFFC 2024.
- Husformer: A Multi-Modal Transformer for Multi-Modal Human State Recognition, IEEE TCDS 2024.
- Affective Workload Allocation for Multi-human Multi-robot Teams, ArXiv Pre-print.
- Initial Task Allocation for Multi-Human Multi-Robot Teams with Attention-based Deep Reinforcement Learning, IROS 2023.
Human-in-the-loop Robot Learning for Personalized Human-Robot Interactions
Human preferences for robot interaction behaviors are inherently diverse and individual. Adapting and personalizing robot behaviors to these individual preferences is crucial, as it can significantly enhance user satisfaction, engagement, and overall interaction quality. This project aims to develop innovative human-in-the-loop robot learning frameworks and algorithms that enable seamless robot adaptation in human-robot interaction by efficiently understanding and learning from human feedback and preferences.
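To make the learning-from-feedback loop concrete, the sketch below fits a reward model to pairwise human preferences over trajectory segments using a Bradley-Terry style objective, a common building block in preference-based robot learning. The feature sizes, segment lengths, and optimizer settings are illustrative assumptions, not values taken from the papers below.

```python
# Minimal sketch of learning a reward model from pairwise human preferences
# (a Bradley-Terry style objective). Feature sizes and hyperparameters are
# illustrative assumptions.
import torch
import torch.nn as nn

reward_net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(reward_net.parameters(), lr=1e-3)

def preference_loss(seg_a, seg_b, prefer_a):
    # seg_*: (batch, steps, features) trajectory segments shown to the human.
    # prefer_a: (batch,) float labels, 1.0 if the human preferred segment A.
    r_a = reward_net(seg_a).sum(dim=(1, 2))   # total predicted return of A
    r_b = reward_net(seg_b).sum(dim=(1, 2))   # total predicted return of B
    logits = r_a - r_b                        # Bradley-Terry preference logit
    return nn.functional.binary_cross_entropy_with_logits(logits, prefer_a)

# One update from a small batch of queried preferences.
seg_a, seg_b = torch.randn(4, 20, 10), torch.randn(4, 20, 10)
labels = torch.tensor([1.0, 0.0, 1.0, 1.0])
loss = preference_loss(seg_a, seg_b, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The learned reward model can then stand in for hand-crafted rewards when training the robot's policy, so the robot's behavior is shaped by the user's own preferences rather than by fixed heuristics.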
Relevant papers:
- Feedback-efficient Active Preference Learning for Socially Aware Robot Navigation, IROS 2022.
- PrefCLM: Enhancing Preference-based Reinforcement Learning with Crowdsourced Large Language Models, ArXiv Pre-print.
Socially-aware Robot Navigation
Socially-aware robot navigation (SAN), in which a robot must optimize its trajectory to maintain comfortable and compliant spatial interactions with humans while also reaching its goal without collisions, is a fundamental but challenging task in the context of human-robot interaction. In this project, our work focuses on two main areas:
- Encoding complex social interactions: We are developing algorithms to better encode and interpret the intricate social dynamics within varied environments. This involves utilizing advanced deep learning techniques to understand human behaviors in different settings, enabling robots to navigate with a deeper awareness of social nuances.
- Innovative teaching methods for robots: We are exploring new methods to teach robots that move beyond traditional reinforcement and inverse reinforcement learning. This includes devising intuitive and effective reward systems that more accurately reflect social compliance (a toy reward sketch follows this list) and exploring alternatives to reduce reliance on human demonstrations.
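As a toy illustration of the second direction, the sketch below defines a navigation reward that trades off progress toward the goal against intrusion into people's personal space. The comfort radius and weights are illustrative assumptions rather than the reward formulations used in the papers below.

```python
# Minimal sketch of a navigation reward that trades off goal progress against
# intrusion into humans' personal space; the weights and the 0.45 m comfort
# radius are illustrative assumptions, not values from the papers below.
import numpy as np

def social_navigation_reward(robot_pos, prev_pos, goal, human_positions,
                             comfort_radius=0.45, w_progress=1.0, w_social=2.0):
    # Reward forward progress toward the goal...
    progress = np.linalg.norm(prev_pos - goal) - np.linalg.norm(robot_pos - goal)
    # ...and penalize how deeply the robot intrudes into any person's comfort zone.
    dists = np.linalg.norm(human_positions - robot_pos, axis=1)
    intrusion = np.clip(comfort_radius - dists, 0.0, None).sum()
    return w_progress * progress - w_social * intrusion

r = social_navigation_reward(
    robot_pos=np.array([1.0, 1.0]), prev_pos=np.array([0.8, 1.0]),
    goal=np.array([5.0, 1.0]),
    human_positions=np.array([[1.3, 1.1], [4.0, 3.0]]),
)
print(round(r, 3))
```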
Relevant papers:
- Multi-Robot Cooperative Socially-Aware Navigation using Multi-Agent Reinforcement Learning, ICRA 2024.
- NaviSTAR: Socially Aware Robot Navigation with Hybrid Spatio-Temporal Graph Transformer and Preference Learning, IROS 2023.
- Feedback-efficient Active Preference Learning for Socially Aware Robot Navigation, IROS 2022.