Backpack AI system makes navigation safer for people with vision disabilities

A young woman with a backpack in front of a tram on a city street

Engineers have created a voice-activated wearable system that detects obstacles in real time and describes the user's surroundings, giving people with visual disabilities more confidence to navigate the outside world safely.

This new visual assistance device is made up of several components that can be worn without much bulk or drawing attention: a vest or fanny pack, a backpack, and a pair of earphones. The creators said that concealing the electronics was an important part of the design, so users don’t look like robots walking down the street.

The vest or fanny pack encases a set of cameras – a high-resolution camera that provides color information and a pair of stereo cameras that map depth. This visual data is then sent to the “brains” of the system in the backpack.
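The article doesn't detail how the camera pair produces depth, but stereo systems like this one typically triangulate it from the horizontal offset (disparity) between the two views. A minimal sketch of that relationship, with illustrative numbers rather than the device's actual specs:

```python
# Stereo depth by similar triangles: two cameras a known baseline apart
# see the same point at slightly different horizontal pixel positions.
# The larger that disparity, the closer the point.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return depth in meters for a matched pixel pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative values: 800 px focal length, 7.5 cm baseline, 40 px disparity
print(depth_from_disparity(800, 0.075, 40))  # → 1.5 (meters)
```

Running this per pixel over the whole image is what yields the depth map the rest of the system reasons about.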

A computing unit, such as a laptop or Raspberry Pi, runs an artificial intelligence toolkit called OpenCV’s Artificial Intelligence Kit with Depth, which uses neural networks to analyze the visual data. The backpack also contains a portable battery that provides up to eight hours of use, and a USB-connected GPS unit.

The analyzed visual data is then relayed, via Bluetooth, to a pair of earphones, informing the user of their surroundings. The system warns of obstacles of different shapes, sizes, and types, and describes where they are relative to the user, using descriptors like front, top, bottom, left, right, and center.

For example, the system can tell someone walking down the street that they are approaching a trash can on their “bottom, left” or a low-hanging branch at “top, center.” Tripping hazards like curbs or stairs are detected as changes in elevation, and the system can even recognize key features like stop signs or crosswalks as the user approaches a corner.
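One simple way to produce descriptors like these is to divide the camera frame into a 3×3 grid and name the cell that a detection's center falls into. This is a sketch of that idea under my own assumptions, not the team's actual logic:

```python
def describe_position(cx: float, cy: float, frame_w: int, frame_h: int) -> str:
    """Map a detection's center point to a 'vertical, horizontal' descriptor
    by splitting the frame into thirds along each axis."""
    horiz = "left" if cx < frame_w / 3 else "right" if cx > 2 * frame_w / 3 else "center"
    vert = "top" if cy < frame_h / 3 else "bottom" if cy > 2 * frame_h / 3 else "center"
    return "center" if vert == "center" and horiz == "center" else f"{vert}, {horiz}"

# A trash can low and to the left of a 640x720 frame:
print(describe_position(100, 650, 640, 720))  # → bottom, left
# A branch hanging high across the middle of the frame:
print(describe_position(320, 50, 640, 720))   # → top, center
```

Combined with the depth map, the same grid cell can be announced only once the object comes within a warning distance.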

Users can also issue voice commands to ask for additional information. Saying “describe” will make the system reply with a list of what’s around them and where, such as “car, 10 o’clock,” “person, 12 o’clock,” and “traffic light, 1 o’clock.”
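Clock-face directions like these fall out of the angle between the user's heading and the object. A minimal sketch, assuming a coordinate convention (x to the user's right, z straight ahead, in meters) and function name of my own choosing:

```python
import math

def clock_position(x: float, z: float) -> str:
    """Convert an object's position (x = meters to the right, z = meters
    ahead) into a clock-face direction, with 12 o'clock straight ahead."""
    angle = math.degrees(math.atan2(x, z))  # -180..180, 0 = dead ahead
    hour = round(angle / 30) % 12           # each clock hour spans 30 degrees
    return f"{12 if hour == 0 else hour} o'clock"

print(clock_position(0, 5))    # straight ahead → 12 o'clock
print(clock_position(-5, 3))   # ahead and well to the left → 10 o'clock
print(clock_position(2.7, 5))  # slightly right → 1 o'clock
```

Rounding to the nearest 30° keeps the spoken output short while still telling the user roughly where to attend.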

Specific places can be saved for future reference with commands like “save location, coffee shop.” When the user wants to return to that place, they will simply say “locate coffee shop” and the system will give directions and the distance to that destination.
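With the USB GPS unit, "save location" only needs to store a named coordinate pair, and "locate" can report the great-circle distance back to it. A sketch under assumed names and an in-memory store (the real system would persist these and add turn-by-turn directions):

```python
import math

saved: dict[str, tuple[float, float]] = {}  # hypothetical store of named GPS fixes

def save_location(name: str, lat: float, lon: float) -> None:
    """Remember the current GPS fix under a spoken name."""
    saved[name] = (lat, lon)

def distance_to(name: str, here_lat: float, here_lon: float) -> float:
    """Great-circle distance in meters to a saved location (haversine formula)."""
    lat1, lon1 = math.radians(here_lat), math.radians(here_lon)
    lat2, lon2 = (math.radians(v) for v in saved[name])
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371000 * math.asin(math.sqrt(a))  # mean Earth radius in meters

save_location("coffee shop", 40.7130, -74.0060)
print(round(distance_to("coffee shop", 40.7128, -74.0060)))  # → 22 (meters)
```

The haversine step matters because raw latitude/longitude differences are in degrees, not meters; projecting them onto the sphere gives a walkable distance the system can speak aloud.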

Other high-tech tools for people with vision disabilities are on the horizon, like laser-ranging canes that sense objects and terrain changes, and Toyota’s guide collar that buzzes on the left or right side to let people know which way to turn.
However, this AI system appears more detail-oriented, allowing people with limited vision to navigate the world independently.

The development team hopes to fast-track the system by keeping it non-commercial and making it freely available to the public.
