NYU Today

ITP Professor Partners with Lighthouse International to Help the Visually Impaired

By Richard Pierce

When Kent Higgins and Lei Liu saw a demonstration of digital artist Jean-Marc Gauthier’s work, they immediately realized the technology’s potential to help visually impaired people. Gauthier, an assistant art professor in TSOA’s Interactive Telecommunications Program (ITP), was showing off “Nighthawks2,” an interactive urban installation inspired by a 1940s New York street intersection and Edward Hopper’s painting “Nighthawks.”

Higgins and Liu, of Lighthouse International in New York, recognized that Gauthier’s immersive-display technology, which creates realistic simulations of street intersections, could also be used to train people with very poor vision to negotiate real intersections.

Gauthier's program will be used to create a virtual environment within a room, with screens on four walls simulating an intersection or other street scene, giving a visually impaired person the opportunity to practice before negotiating it for real.

“This ingenious design uses off-the-shelf components at a tenth of the cost of average virtual reality simulators,” explains Higgins, vice president for vision research at Lighthouse.

“Nighthawks2,” an installation commissioned by Le Cube for the Festival Premier Contact, was part of a street festival of interactive art organized by Florent Aziosmanoff in spring 2005 in Issy-les-Moulineaux, France. The work ran non-stop outdoors for several weeks, reacting to prevailing light and weather conditions. Some visitors described the installation as “an interactive 3-D comics album” and enjoyed playing with the characters in the scenes, using their cellular phones as remote controls.

“The process of recreating a virtual street intersection from the 1940s required a mix of artificial intelligence, a multiple-screen display, and the design of a ‘moody’ database that could influence the look and feel of a scene according to the time of day,” said Gauthier. “I conceived this project in 2003, before the present generation of 3-D hardware was available. When I finished programming it, there was no hardware that could play a scene with more than 20 animated 3-D characters with artificial intelligence, plus cars moving across three screens, at 60 frames per second. So I waited another six months for the technology to catch up with me.”

Lei Liu, a vision research investigator at the Lighthouse, explained that if visual experiences inside a virtual simulator can be transferred to the real world, the affordability, efficiency, and scope of low-vision training can be greatly improved. “Twenty million Americans suffer from various degrees of visual impairment, and many have difficulty traveling due to vision problems,” he said.

The Lighthouse researchers point out that learning to interact with vehicle and pedestrian traffic on the street is a key issue in low-vision training. At present, training relies on the instructor’s subjective observations and on the evaluation of events occurring naturally by the roadside. “Recent developments in 3-D games and virtual reality technology make it possible to build a desktop, computer-based virtual system that can provide sufficient visual information to simulate a complex and dynamic environment such as a street intersection,” added Liu.

The next step for the ITP/Lighthouse team is to design a simulator in which the spatial and temporal combination of visual and auditory stimuli can be manipulated to make the pedestrian environment more challenging. Without additional hardware investment, the virtual reality system can be used for a wide spectrum of low-vision training, such as boarding buses and subways, gathering information in transit terminals or airports, and navigating parking lots and sidewalks.

When eye- and head-movement monitoring devices are integrated into the system, the viewer can receive instant feedback, facilitating the development of safer and more efficient eye and head movement patterns. Such sensors, together with positioning and pointing devices, can generate data about clients’ behaviors before, during, and after training; a database of these behaviors, combined with artificial intelligence, can be used to shape training curricula.

“The flexible and quantitative nature of a virtual reality system makes it an ideal bench for conducting low-vision research,” says Liu.

NYU Today, Vol. 19, Issue 98