Research Projects
Adaptive Multi-Human Multi-Robot Systems
Multi-human multi-robot (MH-MR) teams are emerging as promising assets for tackling high-stakes and large-scale missions, such as environmental monitoring, disaster recovery, and search and rescue. The simultaneous collaboration of multiple humans and robots with diverse capabilities, expertise, and characteristics offers tremendous potential to enhance team complementarity, productivity, and versatility. However, this inherent heterogeneity also introduces significant coordination challenges.
Furthermore, while integrating human operators at the core of the decision-making process can greatly improve the team’s situational awareness and flexibility, it also introduces additional uncertainty and complexity. Human states, such as cognitive load and emotion, as well as human performance, are inherently variable and are influenced by a range of internal and external factors.
To address these challenges and unlock the full potential of MH-MR teams, this project focuses on three core objectives:
Adaptive Teaming Strategies
Develop advanced Initial Task Allocation (ITA) strategies that account for team heterogeneity during the teaming stage. This involves dynamically initializing task distribution, assigning roles, and defining collaboration patterns by considering the diverse capabilities of both humans and robots under varying task requirements. The objective is to harness this heterogeneity constructively, forming complementary human-robot pairings or chains that optimize overall team performance.
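Heterogeneity-aware initial task allocation can be illustrated with a much simpler baseline than the attention-enhanced RL methods in the papers below: score every agent-task pair from capability and requirement feature vectors (both hypothetical here) and solve the resulting assignment problem optimally. A minimal sketch:

```python
# Illustrative ITA baseline, NOT the papers' learning-based method:
# score agent-task fit and solve the assignment with the Hungarian
# algorithm. Capability/requirement features are hypothetical.
import numpy as np
from scipy.optimize import linear_sum_assignment

def allocate(capabilities: np.ndarray, requirements: np.ndarray):
    """capabilities: (n_agents, d); requirements: (n_tasks, d).
    Returns (agent, task) pairs maximizing total capability-requirement fit."""
    fit = capabilities @ requirements.T        # (n_agents, n_tasks) fit scores
    rows, cols = linear_sum_assignment(-fit)   # negate to maximize total fit
    return list(zip(rows.tolist(), cols.tolist()))

# Two humans and one robot with complementary strengths, three tasks.
caps = np.array([[0.9, 0.2],    # human 0: strong at supervision
                 [0.3, 0.8],    # human 1: strong at teleoperation
                 [0.1, 0.9]])   # robot 0: strongest at teleoperation
reqs = np.array([[1.0, 0.0],    # task 0 needs supervision
                 [0.0, 1.0],    # task 1 needs teleoperation
                 [0.5, 0.5]])   # task 2 needs a mix
print(allocate(caps, reqs))     # [(0, 0), (1, 2), (2, 1)]
```

Note how the optimal solution pairs the robot, not human 1, with the pure teleoperation task, leaving human 1 free for the mixed task — exactly the kind of complementary pairing the paragraph above describes.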

Related Papers:
- Initial Task Allocation in Multi-Human Multi-Robot Teams: An Attention-enhanced Hierarchical Reinforcement Learning Approach, IEEE RA-L 2024.
- Initial Task Allocation for Multi-Human Multi-Robot Teams with Attention-based Deep Reinforcement Learning, IROS 2023.
- REBEL: Rule-based and Experience-enhanced Learning with LLMs for Initial Task Allocation in Multi-Human Multi-Robot Teams, arXiv preprint, 2025.
Multimodal Human State Reasoning
Investigate the dynamics of human states, including cognitive load and emotion, and develop models that provide real-time assessments using multimodal physiological and behavioral signals.
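As a simple stand-in for transformer-based multimodal fusion, feature-level ("early") fusion concatenates normalized per-window features from each signal stream into one vector for a downstream classifier. The modality names and dimensions below are hypothetical:

```python
# Minimal early-fusion sketch for multimodal human state assessment,
# a toy stand-in for learned fusion models such as Husformer.
import numpy as np

def zscore(x: np.ndarray) -> np.ndarray:
    """Normalize each feature column so no modality dominates by scale."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

def fuse(eeg: np.ndarray, ecg: np.ndarray, gaze: np.ndarray) -> np.ndarray:
    """Concatenate per-window features from each modality into one
    fused vector per time window."""
    return np.concatenate([zscore(eeg), zscore(ecg), zscore(gaze)], axis=1)

rng = np.random.default_rng(0)
windows = 50  # e.g., 50 sliding windows over the recording
fused = fuse(rng.normal(size=(windows, 8)),   # 8 EEG band-power features
             rng.normal(size=(windows, 4)),   # 4 heart-rate features
             rng.normal(size=(windows, 6)))   # 6 gaze features
print(fused.shape)  # (50, 18)
```

Cross-modal attention, as used in the papers below, replaces the plain concatenation with learned interactions between modalities, but the input/output shape contract is the same.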

Related Papers:
- MOCAS: A Multimodal Dataset for Objective Cognitive Workload Assessment on Simultaneous Tasks, IEEE TAFFC 2024.
- Husformer: A Multi-Modal Transformer for Multi-Modal Human State Recognition, IEEE TCDS 2024.
Dynamic Adaptation Mechanisms During Operation
Develop adaptive mechanisms to re-adjust team collaboration patterns and re-allocate tasks within the team according to perceived changes in human states, robot conditions, and evolving task progress.
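The core loop can be illustrated with a toy event-triggered policy (not the papers' methods): when an operator's estimated cognitive load crosses a threshold, one of their tasks moves to the least-loaded teammate. The agents, tasks, load values, and threshold below are all hypothetical:

```python
# Toy event-triggered task reallocation sketch for an MH-MR team.
def rebalance(assignments, load, threshold=0.8):
    """Move one task from each over-threshold agent to the currently
    least-loaded agent. Returns a new assignment dict (input unchanged)."""
    result = {agent: tasks.copy() for agent, tasks in assignments.items()}
    for agent, agent_load in load.items():
        if agent_load > threshold and result[agent]:
            receiver = min(load, key=load.get)
            if receiver != agent:
                result[receiver].append(result[agent].pop())
    return result

team = {"human_0": ["triage", "mapping"], "human_1": [], "robot_0": ["scouting"]}
load = {"human_0": 0.92, "human_1": 0.30, "robot_0": 0.50}
print(rebalance(team, load))
# {'human_0': ['triage'], 'human_1': ['mapping'], 'robot_0': ['scouting']}
```

In practice the load estimates would come from the multimodal human state models above, and the reallocation decision itself is learned rather than rule-based.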

Related Papers:
- Adaptive Task Allocation in Multi-Human Multi-Robot Teams under Team Heterogeneity and Dynamic Information Uncertainty, ICRA 2025.
- Cognitive Load-based Affective Workload Allocation for Multi-Human Multi-Robot Teams, IEEE THMS.
Human-in-the-Loop Robot Learning for Personalized Human-Robot Interaction
While human-robot systems can be optimized for objective factors, such as team heterogeneity and operational states, individual preferences often transcend these measurable aspects. Individuals with similar capabilities or operational conditions may still prefer different interaction patterns. Personalizing robot behaviors to align with these unique preferences is critical, as it enhances user satisfaction, engagement, and overall interaction quality.
This project aims to develop efficient human-in-the-loop, preference-based robot learning algorithms to facilitate this personalization process. We specifically investigate how to minimize the amount of human feedback required while maximizing learning outcomes; how to accurately model human preferences toward robot behaviors; and how to enable rapid and effective adaptation of robot policies based on preference data.
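A common starting point for preference-based learning is the Bradley-Terry model: fit a reward on trajectory features so that the preferred trajectory in each pairwise comparison scores higher. The sketch below is this generic baseline on synthetic data, not the PrefCLM/PrefMMT methods:

```python
# Bradley-Terry reward learning from pairwise preferences (generic
# baseline; features, data, and hyperparameters are synthetic).
import numpy as np

def fit_reward(pref_pairs, dim, lr=0.5, steps=200):
    """pref_pairs: list of (preferred_features, rejected_features).
    Gradient ascent on sum log sigmoid(w . (phi_pos - phi_neg))."""
    w = np.zeros(dim)
    for _ in range(steps):
        grad = np.zeros(dim)
        for phi_pos, phi_neg in pref_pairs:
            diff = phi_pos - phi_neg
            p = 1.0 / (1.0 + np.exp(-w @ diff))  # P(preferred wins)
            grad += (1.0 - p) * diff             # gradient of log sigmoid
        w += lr * grad / len(pref_pairs)
    return w

rng = np.random.default_rng(1)
true_w = np.array([1.0, -2.0])          # hidden user preference
feats = rng.normal(size=(40, 2))        # 40 trajectory feature vectors
pairs = []
for i in range(0, 40, 2):               # 20 noiseless comparisons
    a, b = feats[i], feats[i + 1]
    pairs.append((a, b) if true_w @ a > true_w @ b else (b, a))

w = fit_reward(pairs, dim=2)
agree = sum(bool(w @ a > w @ b) for a, b in pairs)
print(agree, "of", len(pairs), "pairs ranked consistently")
```

The feedback-efficiency question above amounts to choosing *which* pairs to query (active learning) so that far fewer than 20 comparisons suffice.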

Related Papers:
- PrefCLM: Enhancing Preference-based Reinforcement Learning with Crowdsourced Large Language Models, IEEE RA-L 2025.
- Personalization in Human-Robot Interaction through Preference-based Action Representation Learning, ICRA 2025.
- PrefMMT: Modeling Human Preferences in Preference-based Reinforcement Learning with Multimodal Transformers, arXiv preprint.
- Feedback-efficient Active Preference Learning for Socially Aware Robot Navigation, IROS 2022.
Socially-Aware Robot Navigation
Socially-aware robot navigation (SAN) involves optimizing a robot’s trajectory to maintain comfortable and compliant spatial interactions with humans while efficiently reaching its goal without collisions. This task is fundamental yet challenging within human-robot interaction contexts, as it requires balancing safety, efficiency, and social etiquette.
Our work focuses on modeling complex social interactions by developing algorithms that better encode and interpret the intricate social dynamics across humans and robots within varied environments. This involves leveraging advanced deep learning techniques to understand human behaviors in diverse settings, enabling robots to navigate with a deeper awareness of social nuances.
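The safety-efficiency-etiquette trade-off is often encoded as a shaped reward during learning. The sketch below shows one generic formulation (not any specific paper's reward): progress toward the goal, a hard collision penalty, and a smooth discomfort penalty for intruding on personal space; all coefficients and radii are hypothetical:

```python
# Generic reward-shaping sketch for socially aware navigation.
import numpy as np

def san_reward(robot, goal, prev_robot, humans,
               comfort_radius=1.2, collision_radius=0.3):
    robot, goal, prev_robot = map(np.asarray, (robot, goal, prev_robot))
    # Efficiency: reward distance-to-goal reduction over this step.
    progress = np.linalg.norm(prev_robot - goal) - np.linalg.norm(robot - goal)
    reward = 2.0 * progress
    for h in humans:
        d = np.linalg.norm(robot - np.asarray(h))
        if d < collision_radius:
            return -25.0                          # safety: hard penalty
        if d < comfort_radius:
            reward -= 0.5 * (comfort_radius - d)  # etiquette: discomfort
    return reward

# Moving 0.5 m toward the goal while passing 1.0 m from a pedestrian.
r = san_reward(robot=(0.5, 0.0), goal=(5.0, 0.0),
               prev_robot=(0.0, 0.0), humans=[(0.5, 1.0)])
print(round(r, 3))  # prints 0.9
```

Our learning-based approaches go beyond such hand-tuned terms by inferring social compliance from interaction structure and, as in NaviSTAR, from human preference feedback.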
Related Papers:
- Multi-Robot Cooperative Socially-Aware Navigation using Multi-Agent Reinforcement Learning, ICRA 2024.
- NaviSTAR: Socially Aware Robot Navigation with Hybrid Spatio-Temporal Graph Transformer and Preference Learning, IROS 2023.
- Feedback-efficient Active Preference Learning for Socially Aware Robot Navigation, IROS 2022.