Back in April 2016, Facebook introduced a new feature in its iOS app to help users who are blind or have low vision ‘see’ photos.
By leveraging AI, the app automatically generates a description of every photo a user comes across.
Using a screen reader on iOS, users can then hear a list of items featured in the photo, such as ‘Image may contain six people sitting on a bench and laughing.’
This is possible because of Facebook’s object recognition technology, which is based on a neural network with billions of parameters, trained on millions of examples. Each advance in object recognition means the Accessibility team can make the technology accessible to even more people.
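The final step of such a pipeline is turning the model’s detected concepts into a sentence a screen reader can speak. The sketch below is purely illustrative, not Facebook’s actual implementation: it assumes a hypothetical object-recognition model that outputs (concept, confidence) pairs, and only announces concepts above a confidence threshold.

```python
# Hypothetical sketch: compose alt text from object-recognition output.
# The detections, threshold, and phrasing are illustrative assumptions,
# not Facebook's real system.

CONFIDENCE_THRESHOLD = 0.8  # only announce concepts the model is confident about


def compose_alt_text(detections):
    """Build a screen-reader-friendly description from (concept, confidence) pairs."""
    confident = [concept for concept, score in detections if score >= CONFIDENCE_THRESHOLD]
    if not confident:
        return "Image"  # generic fallback when nothing is recognized with confidence
    return "Image may contain: " + ", ".join(confident)


# Example output of a hypothetical recognition model
detections = [
    ("6 people", 0.97),
    ("people sitting", 0.91),
    ("bench", 0.88),
    ("smiling", 0.85),
    ("dog", 0.40),  # low confidence, filtered out
]

print(compose_alt_text(detections))
# → Image may contain: 6 people, people sitting, bench, smiling
```

Filtering by confidence reflects the accessibility trade-off described above: a wrong description read aloud is worse than a shorter, more cautious one.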