Optimal Interface Design for Human-Robot Collaboration

For robot interfaces to be practical, they must be simple and seamless, and they must minimize the user's cognitive load. This is particularly critical for users with little familiarity with robotic systems, and in settings involving complex collaborative systems (e.g., robot swarms) or dynamically changing environments, where managing the system can be overwhelming. In my research, I design user interfaces for human-robot collaboration that communicate information effectively to the human while requiring minimal cognitive load and training.

Ergodic Specifications for Flexible Swarm Control and Dynamic Task Adaptation

People often have to perform a variety of tasks, such as search-and-rescue, target localization, or exploration and terrain mapping, in unfamiliar environments. Drones deployed in the field alongside them can greatly improve their situational and perceptual awareness, providing feedback that aids both task performance and safety. However, swarm deployment is complicated: drones are difficult for a person to control and operate without a large increase in cognitive load, which can make task performance difficult and inefficient. With this in mind, I ask: how can we design a human-swarm system to best accomplish a task under pressure?

Simulation of a swarm dynamically adapting to the environment while also responding to user commands. (a) When the swarm discovers a DD, the agents cover the rest of the workspace while avoiding that location. (b) When a user inputs a bimodal distribution (shown as the dark region on the map), the swarm responds to the user's commands while continuing to avoid the DD location. (c) When the swarm finds an EE, it simultaneously converges on the EE, covers the user inputs, and avoids the DD location. (d) The resulting target distribution for the combined tasks.

I led the Northwestern team on the DARPA OFFSET FX3 Urban Swarm Challenge project, exploring this idea. I developed a formulation for swarm control and high-level task planning that is dynamically responsive to user commands and adaptable to environmental information. I designed an end-to-end pipeline, from a tactile tablet interface for user commands to onboard control of the robotic agents based on decentralized ergodic coverage. I conducted experiments with a robotic swarm at the DARPA OFFSET FX3 field tests, combining user inputs and task specifications to generate swarm behavior flexible to changing conditions and objectives in real time. I also developed an experimental VR testbed to validate our approach in simulation and to conduct human-subject studies investigating human-robot collaboration under pressure.
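The underlying coverage objective follows the standard spectral formulation of ergodic control, which scores how closely the time-averaged statistics of the agents' trajectories match a target spatial distribution by comparing their Fourier coefficients. Below is a minimal NumPy sketch of that metric on the unit square; it illustrates the general formulation (basis normalization constants omitted for brevity), not the project's actual onboard code, and all names are hypothetical.

```python
import numpy as np

def fourier_coeffs(dist, n_modes):
    """Cosine-basis coefficients phi_k of a target distribution,
    given as a probability mass grid over [0, 1]^2 that sums to 1."""
    nx, ny = dist.shape
    xs = (np.arange(nx) + 0.5) / nx
    ys = (np.arange(ny) + 0.5) / ny
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    phi = np.zeros((n_modes, n_modes))
    for kx in range(n_modes):
        for ky in range(n_modes):
            phi[kx, ky] = np.sum(dist * np.cos(np.pi * kx * X) * np.cos(np.pi * ky * Y))
    return phi

def trajectory_coeffs(traj, n_modes):
    """Time-averaged coefficients c_k of a trajectory, an array of
    (x, y) points in [0, 1]^2: c_k = mean over time of F_k(x_t)."""
    c = np.zeros((n_modes, n_modes))
    for kx in range(n_modes):
        for ky in range(n_modes):
            c[kx, ky] = np.mean(np.cos(np.pi * kx * traj[:, 0]) *
                                np.cos(np.pi * ky * traj[:, 1]))
    return c

def ergodic_metric(traj, dist, n_modes=10):
    """Sobolev-weighted distance sum_k Lambda_k * (c_k - phi_k)^2,
    with Lambda_k = (1 + |k|^2)^(-3/2) for a 2D workspace."""
    kx, ky = np.meshgrid(np.arange(n_modes), np.arange(n_modes), indexing="ij")
    lam = (1.0 + kx**2 + ky**2) ** -1.5
    return np.sum(lam * (trajectory_coeffs(traj, n_modes) -
                         fourier_coeffs(dist, n_modes)) ** 2)
```

Driving this metric toward zero pushes the agents to spend time in each region in proportion to the target density, which is what makes a single target distribution a convenient interface for specifying avoidance, coverage, and convergence behaviors at once.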

The Tanvas tactile tablet allows the operator to communicate their preferences for coverage to the swarm.

User inputs are incorporated through a tablet interface we developed for communicating regions of interest to the swarm in real time using the TanvasTouch monitor. The user specifies regions of exploratory interest by simply shading them on the tablet. The interface transmits the set of shaded points on the workspace for the swarm to prioritize, and a spatial distribution is generated by assigning the highest priority value to each of those points in a discretized workspace. This user-specified distribution is combined with the task-based distribution to generate swarm control that is dynamically responsive to both task updates and user inputs, as sketched below.
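As a concrete sketch of that step (all names here are hypothetical, not the interface's actual code), the shaded points can be rasterized onto the discretized workspace and blended with the task-based distribution before renormalizing:

```python
import numpy as np

def distribution_from_shading(points, grid_shape, peak=1.0):
    """Rasterize user-shaded (x, y) points in [0, 1]^2 onto the
    discretized workspace, assigning the highest priority value
    to every shaded cell."""
    dist = np.zeros(grid_shape)
    for x, y in points:
        i = min(int(x * grid_shape[0]), grid_shape[0] - 1)
        j = min(int(y * grid_shape[1]), grid_shape[1] - 1)
        dist[i, j] = peak
    return dist

def combine_distributions(user_dist, task_dist, w_user=0.5):
    """Blend the user-specified and task-based distributions and
    renormalize so the result is a valid target distribution."""
    combined = w_user * user_dist + (1.0 - w_user) * task_dist
    total = combined.sum()
    if total == 0:
        return np.full_like(combined, 1.0 / combined.size)  # fall back to uniform
    return combined / total
```

In the same spirit, regions to avoid (like the DD location in the figure above) can enter the task-based distribution as near-zero density, and regions to converge on (like the EE) as sharp peaks; the weighting between the user and task terms is a design choice.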

For more information, check out the following papers (paper1, paper2) on this work.

Virtual Reality System for Human-Subject Studies

We developed an experimental urban-environment testbed using the Unity game engine, with an HTC Vive controlling operator movement inside the virtual reality environment. As the operator moved through the VR environment, a swarm of simulated quadrotors running the ergodic swarm algorithm provided assistance in real time. The swarm's behavior was governed by the ergodic task specifications described above, together with commands for desired areas of exploration sent through the TanvasTouch haptic interface.
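Conceptually, the testbed closes the loop between operator, tablet, and simulated swarm roughly as in the following Python sketch. The actual system runs inside Unity; every object and method here is a placeholder standing in for those modules, reusing the distribution helpers sketched earlier.

```python
def run_testbed_loop(swarm, tablet, vr_rig, task_spec, dt=0.05):
    """Placeholder real-time loop: track the operator, read tablet
    commands, update the swarm's target distribution, and render."""
    while vr_rig.session_active():
        operator_pose = vr_rig.get_operator_pose()    # HTC Vive tracking
        shaded = tablet.read_shaded_regions()         # TanvasTouch input
        user_dist = distribution_from_shading(shaded, task_spec.grid_shape)
        target = combine_distributions(user_dist, task_spec.distribution())
        swarm.step_ergodic_control(target, dt)        # decentralized ergodic update
        vr_rig.render_swarm(swarm.agent_states(), operator_pose)
```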

The VR environment was used to validate the full system architecture of the shared ergodic formulation with multiple two-way communication channels. In addition, we have conducted user studies in this testbed to compare the shared ergodic swarm control approach against standard methods and to analyze how different ergodic specifications within our framework affect task performance under dynamic, time-sensitive constraints.

For more information, check out the following papers (paper1, paper2) on this work.

Ahalya Prabhakar
Lecturer and Associate Research Scientist

My research interests include robot active learning from high-dimensional sensory signals and human-robot interaction using information-theoretic algorithms.

Publications

Collaborative robots can augment human cognition in regret-sensitive tasks

Despite the theoretical benefits of collaborative robots, disappointing outcomes are well documented in clinical studies, spanning …

Measuring Human-Robot Team Benefits Under Time Pressure in a Virtual Reality Testbed

During a natural disaster such as a hurricane, earthquake, or fire, robots have the potential to explore vast areas and provide valuable …

Scale-Invariant Specifications for Human-Swarm Systems

We present a method for controlling a swarm using its spectral decomposition—that is, by describing the set of trajectories of a swarm …