Research Post
Children with physical impairments may face challenges during play because of limitations in reaching for and handling objects. Telerobotic systems that provide guidance towards toys may help give them access to play, but intuitive methods of controlling the guidance are required. As a first step, adults without physical impairments tested two eye gaze interfaces. One was an attentive user interface that predicts the target toy a user wants to reach using a neural network trained to recognize the movements performed on the user-side robot together with the user's point of gaze. The other was an explicit eye input interface that detects the toy a user fixates on for at least 500 ms. This study compared the performance and advantages of each interface in a whack-a-mole game. The purpose was to test the feasibility of activating haptic guidance towards toys with an attentive interface and to ensure the safety of the system before children use it. The prediction accuracy of the attentive interface was 86.4% on average, compared to 100% with the explicit interface; as a result, seven participants preferred the explicit interface over the attentive one. However, the attentive user interface was significantly faster to use and less tiring on the eyes. Ways to improve the accuracy of the attentive eye gaze interface are suggested.
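The explicit interface described above relies on dwell-time selection: a toy is chosen once the point of gaze remains on it for at least 500 ms. The sketch below illustrates that idea under illustrative assumptions; the class and region names are hypothetical and not taken from the study's implementation.

```python
from dataclasses import dataclass
from typing import Optional, Tuple, List

DWELL_THRESHOLD_MS = 500  # fixation time required to select a toy (from the abstract)

@dataclass
class ToyRegion:
    """Rectangular screen/workspace region associated with one toy (illustrative)."""
    name: str
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

class DwellSelector:
    """Tracks consecutive gaze samples and reports a toy after a 500 ms dwell."""

    def __init__(self, regions: List[ToyRegion]):
        self.regions = regions
        self.current: Optional[ToyRegion] = None
        self.dwell_start_ms: float = 0.0

    def update(self, gaze: Tuple[float, float], timestamp_ms: float) -> Optional[str]:
        region = next((r for r in self.regions if r.contains(*gaze)), None)
        if region is not self.current:
            # Gaze moved to a different region (or left all regions): restart the timer.
            self.current = region
            self.dwell_start_ms = timestamp_ms
            return None
        if region is not None and timestamp_ms - self.dwell_start_ms >= DWELL_THRESHOLD_MS:
            # Dwell threshold reached: this toy would trigger the haptic guidance.
            return region.name
        return None
```

In a real system the selection would then activate guidance towards the chosen toy; the attentive interface replaces this dwell rule with a neural network prediction based on gaze and robot movements.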
Mar 3rd 2023