Optimal Interface Design for Human-Robot Collaboration
For robot interfaces to be practical, they must be simple, seamless, and impose minimal cognitive load on the user. This is particularly critical for users who are unfamiliar with robotic systems, and in settings with complex collaborative systems (e.g., robot swarms) or dynamically changing environments, where managing the system can be overwhelming. In my research, I design user interfaces for human-robot collaboration that communicate information to the human effectively while requiring minimal cognitive load and training.
Ergodic Specifications for Flexible Swarm Control and Dynamic Task Adaptation
People often have to perform a variety of tasks in unfamiliar environments, including search and rescue, target localization, and exploration and terrain mapping. Drones deployed in the field alongside them can greatly improve their situational and perceptual awareness, providing feedback that aids task performance and safety. However, swarm deployment can be complicated: drones are difficult for a person to control and operate without a substantial increase in cognitive load, which makes task performance harder and less efficient. With this in mind, I ask the question: how can we design a human-swarm system to best accomplish a task under pressure?
I led the Northwestern team on the DARPA OFFSET FX-3 Urban Swarm Challenge project, exploring this question. I developed a formulation for swarm control and high-level task planning that is dynamically responsive to user commands and adaptable to environmental information. I designed an end-to-end pipeline from a tactile tablet interface for user commands to onboard control of the robotic agents based on decentralized ergodic coverage. I conducted experiments with a robotic swarm at the DARPA OFFSET FX-3 field tests, combining user inputs and task specifications to generate swarm behavior that adapts to changing conditions and objectives in real time. I also developed an experimental VR test bed to validate our approach in simulation and to conduct human-subject studies of human-robot collaboration under pressure.
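As a rough illustration of the ergodic coverage idea behind the onboard control, the sketch below (a minimal Python example; the workspace size, basis count, and function names are assumptions, and basis normalization constants are omitted for brevity) computes the ergodic metric: a weighted distance between the Fourier coefficients of a trajectory's time-averaged statistics and those of a target spatial distribution. The controller steers each agent to drive this metric toward zero, so the swarm spends time in regions in proportion to their priority.

```python
import numpy as np

# Minimal sketch of the ergodic metric underlying ergodic coverage control.
# Workspace size L, basis count K, and function names are illustrative
# assumptions; basis normalization constants are omitted for brevity.

L = 1.0   # square workspace [0, L] x [0, L]
K = 5     # Fourier basis functions per dimension

def basis(k, x):
    """Cosine Fourier basis function indexed by k = (k1, k2), evaluated at points x."""
    return np.cos(k[0] * np.pi * x[..., 0] / L) * np.cos(k[1] * np.pi * x[..., 1] / L)

def target_coefficients(phi, grid):
    """Project a discretized target distribution phi onto the Fourier basis."""
    dA = (L / phi.shape[0]) ** 2
    return {k: np.sum(phi * basis(k, grid)) * dA for k in np.ndindex(K, K)}

def trajectory_coefficients(traj):
    """Fourier coefficients of the time-averaged statistics of a trajectory (T x 2)."""
    return {k: np.mean(basis(k, traj)) for k in np.ndindex(K, K)}

def ergodic_metric(phi_k, c_k):
    """Sobolev-weighted distance between trajectory and target coefficients."""
    return sum(((1.0 + k[0] ** 2 + k[1] ** 2) ** -1.5) * (c_k[k] - phi_k[k]) ** 2
               for k in phi_k)
```

In the decentralized setting, each agent maintains its own trajectory coefficients and exchanges them with neighbors so the swarm jointly reduces the metric.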
User inputs are incorporated through a tablet interface we developed for communicating regions of interest to the swarm in real time using the TanvasTouch monitor. The user specifies regions of exploratory interest simply by shading them on the TanvasTouch screen. The interface transmits the set of desired workspace points for the swarm to prioritize, and a spatial distribution is generated by assigning the highest priority value to each of those points in a discretized workspace. This user-specified distribution is combined with the task-based distribution to generate swarm control that is dynamically responsive to both task updates and user inputs.
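A sketch of this step is below (grid size, blending weight, and function names are illustrative assumptions, not the interface's actual implementation): the shaded cells are marked with the highest priority value, normalized into a distribution, and blended with the task-based distribution before being sent to the swarm.

```python
import numpy as np

# Illustrative sketch (assumed names and parameters) of turning shaded tablet
# cells into a target distribution over a discretized workspace and blending
# it with a task-based distribution.

GRID = 50   # workspace discretized as GRID x GRID cells

def user_distribution(shaded_cells):
    """Assign the highest priority to every cell the user shaded, then normalize."""
    phi_user = np.zeros((GRID, GRID))
    for (i, j) in shaded_cells:             # cell indices reported by the tablet
        phi_user[i, j] = 1.0                # highest priority value
    if phi_user.sum() == 0:
        return np.full((GRID, GRID), 1.0 / GRID**2)   # no input: uniform prior
    return phi_user / phi_user.sum()

def combined_distribution(phi_user, phi_task, alpha=0.5):
    """Blend user-specified and task-based distributions (alpha is a tuning weight)."""
    phi = alpha * phi_user + (1.0 - alpha) * phi_task
    return phi / phi.sum()
```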
For more information, check out the following papers (paper1, paper2) on this work.
Virtual Reality System for Human-Subject Studies
We developed an experimental urban environment testbed using the Unity game engine, with an HTC Vive controlling operator movement inside the virtual reality environment. As the operator moved through the VR environment, a swarm of simulated quadrotors running the ergodic swarm algorithm provided assistance to the operator in real time. The swarm's behavior was governed by the ergodic task specifications described above, along with commands for desired areas of exploration sent to the swarm through the TanvasTouch haptic interface.
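A highly simplified version of the per-frame update in such a testbed might look like the following sketch; the names and the "move toward the highest-priority cell" rule are placeholders for the actual decentralized ergodic controller and Unity integration.

```python
import numpy as np
from dataclasses import dataclass

# Hypothetical per-frame update for a simulated swarm in the VR testbed.
# The greedy "head to the highest-priority cell" policy is a stand-in for
# the decentralized ergodic controller used in the real system.

GRID, L = 50, 1.0   # workspace discretization and size

@dataclass
class SimAgent:
    pos: np.ndarray      # agent position in the workspace
    speed: float = 0.2

    def step(self, phi, dt):
        """Placeholder policy: move toward the current highest-priority cell."""
        i, j = np.unravel_index(np.argmax(phi), phi.shape)
        target = np.array([(i + 0.5) * L / GRID, (j + 0.5) * L / GRID])
        direction = target - self.pos
        norm = np.linalg.norm(direction)
        if norm > 1e-6:
            self.pos = self.pos + self.speed * dt * direction / norm

def simulation_step(agents, phi_user, phi_task, dt=0.05, alpha=0.5):
    """One real-time update: blend the distributions, then advance every agent."""
    phi = alpha * phi_user + (1.0 - alpha) * phi_task
    phi = phi / phi.sum()
    for agent in agents:
        agent.step(phi, dt)
    return agents
```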
The VR environment was used to validate the full system architecture of the shared ergodic formulation with multiple two-way communication channels. In addition, we have conducted user studies with this testbed to analyze the impact of the shared ergodic swarm control approach compared to standard methods, and to analyze the effects of different ergodic specifications within our framework on task performance under dynamic, time-sensitive constraints.
For more information, check out the following papers (paper1, paper2) on this work.