Immersive Commodity Telepresence with the AVATRINA Robot Avatar
Published in International Journal of Social Robotics, 2024
Recommended citation: Marques, J.M.C.*, Naughton, P.*, Peng, J.-C.*, Zhu, Y.*, Nam, J. S., Kong, Q., Zhang, X., Penmetcha, A., Ji, R., Fu, N., Ravibaskar, V., Yan, R., Malhotra, N., and Hauser, K. (2024). "Immersive Commodity Telepresence with the AVATRINA Robot Avatar". International Journal of Social Robotics. https://link.springer.com/article/10.1007/s12369-023-01090-1
Immersive robotic avatars have the potential to aid and replace humans in a variety of applications, such as telemedicine and search-and-rescue operations, reducing the need for travel and the risk to people working in dangerous environments. Many challenges, such as kinematic differences between people and robots, reduced perceptual feedback, and communication latency, currently limit how well robot avatars can achieve full immersion. This paper presents AVATRINA, a teleoperated robot designed to address some of these concerns and maximize the operator's capabilities while using a commodity lightweight human–machine interface. Team AVATRINA took 4th place at the recent $10 million ANA Avatar XPRIZE competition, which required contestants to design avatar systems that could be controlled by novice operators to complete various manipulation, navigation, and social interaction tasks. This paper details the components of AVATRINA and the design process that contributed to our success at the competition. We highlight a novel study on one of these components, namely the effects of baseline–interpupillary distance matching and head mobility on immersive stereo vision and hand-eye coordination.