October 2017 - Researchers at Binghamton University, led by Professor of Computer Science Lijun Yin, are developing software that interprets facial recognition data in real time to enhance the virtual reality (VR) experience. The team has successfully paired several facial expressions and movements with actions in a simple VR game, so that users can navigate the virtual environment using only movements of the head and mouth. New facial recognition features, such as the technology created by Professor Yin and his team, point to ways in which VR experiences can be made inclusive of people with disabilities. Simple entertainment experiences like the researchers' game, which allowed for virtual looking around, walking, and eating, can now be controlled with movements of the mouth rather than relying on the coordinated movements of the arms and hands that have excluded some individuals from taking part. As VR communication applications such as interviews and training programs expand, this technology demonstrates the potential of an accessible interface option for greater inclusivity in the VR experience.
While this user interface is still in its infancy, there are many potential applications for more advanced programs and devices that could build on this technology. The most direct use case appears to be virtual interviews and communication in which live facial expressions are conveyed realistically. Professor Yin believes technologies like this will make a virtual reality experience seem more authentic, which could in turn spur development of VR applications. Professor Yin and his team intend to make the technology work with more than one person at a time; they believe its most effective use will be in making collaborative and communicative programs more realistic. [Source: Global Accessibility News]