Our project is based on Google Tango, Google Cloud, and the Android vision and speech APIs.
The functionality of our app, as described in phase one, includes selecting a start point and a destination point, providing directions along the route, and giving voice instructions from start to finish.
At this stage of development, the app first displays a menu screen for the user to pick the start and destination floors, as sketched below.
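A minimal sketch of what that floor-selection screen could look like in Java; the class, layout, and resource names (MenuActivity, R.id.startFloorSpinner, NavigationActivity, and so on) are assumptions for illustration rather than the project's actual identifiers:

```java
import android.content.Intent;
import android.os.Bundle;
import android.widget.ArrayAdapter;
import android.widget.Spinner;
import androidx.appcompat.app.AppCompatActivity;

public class MenuActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_menu);

        Spinner startFloor = findViewById(R.id.startFloorSpinner);
        Spinner destFloor = findViewById(R.id.destFloorSpinner);

        // Both spinners list the floors of the building (assumed string array resource).
        ArrayAdapter<CharSequence> floors = ArrayAdapter.createFromResource(
                this, R.array.floors, android.R.layout.simple_spinner_item);
        startFloor.setAdapter(floors);
        destFloor.setAdapter(floors);

        // Hand the chosen floors to the navigation screen.
        findViewById(R.id.startButton).setOnClickListener(v -> {
            Intent intent = new Intent(this, NavigationActivity.class);
            intent.putExtra("startFloor", startFloor.getSelectedItem().toString());
            intent.putExtra("destFloor", destFloor.getSelectedItem().toString());
            startActivity(intent);
        });
    }
}
```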
Afterwards, it transitions to the next screen, where the user taps the Clear button to initialize the Tango service and obtain information about the 3D space around them.
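A sketch of how that initialization could be wired up with the Tango Java client library; the button id and activity name are assumptions, and depth sensing is enabled here because the app needs the surrounding 3D geometry:

```java
import android.os.Bundle;
import androidx.appcompat.app.AppCompatActivity;
import com.google.atap.tangoservice.Tango;
import com.google.atap.tangoservice.TangoConfig;

public class NavigationActivity extends AppCompatActivity {

    private Tango mTango;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_navigation);

        // The Clear button (assumed id) starts the Tango service.
        findViewById(R.id.clearButton).setOnClickListener(v -> {
            // Tango binds to its service asynchronously; the configuration must
            // happen inside the ready callback.
            mTango = new Tango(NavigationActivity.this, () -> {
                TangoConfig config = mTango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
                // Depth gives the point cloud of the surrounding 3D space,
                // which is later used to locate the stairs.
                config.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);
                config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
                mTango.connect(config);
            });
        });
    }
}
```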
An image of the current direction is then shown, together with audio that speaks that direction; for example, if the route goes up, an up arrow is displayed and a voice saying "Up!" is heard.
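The spoken cue could be driven by Android's built-in TextToSpeech engine; the helper class, drawable, and method names below are illustrative assumptions:

```java
import android.content.Context;
import android.speech.tts.TextToSpeech;
import android.widget.ImageView;
import java.util.Locale;

public class DirectionCue {

    private final ImageView arrowView;
    private TextToSpeech tts;

    public DirectionCue(Context context, ImageView arrowView) {
        this.arrowView = arrowView;
        // Initialize the speech engine; the language is set once it is ready.
        tts = new TextToSpeech(context, status -> {
            if (status == TextToSpeech.SUCCESS) {
                tts.setLanguage(Locale.US);
            }
        });
    }

    /** Show an up arrow and say "Up!" when the route goes upstairs. */
    public void showUp() {
        arrowView.setImageResource(R.drawable.arrow_up);  // assumed drawable
        tts.speak("Up!", TextToSpeech.QUEUE_FLUSH, null, "direction_up");
    }
}
```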
Finally, when the destination is reached, a stop sign is displayed and the app returns to the initial UI screen.
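Continuing the NavigationActivity sketch above, the arrival step might look like the following; the stop-sign drawable, the three-second pause, and the helper method are assumptions:

```java
// Inside the NavigationActivity sketch above:
private void onDestinationReached(ImageView arrowView, TextToSpeech tts) {
    arrowView.setImageResource(R.drawable.stop_sign);  // assumed drawable
    tts.speak("You have arrived.", TextToSpeech.QUEUE_FLUSH, null, "arrived");

    // After a short pause, go back to the floor-selection screen.
    arrowView.postDelayed(() -> {
        startActivity(new Intent(this, MenuActivity.class));
        finish();
    }, 3000);
}
```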
With the app, visually impaired people will be able to navigate up and down stairs without difficulty. They will know exactly where the next flight of stairs is (through edge detection), and when to start walking, when to turn, and when to stop.
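The report does not specify how the edge detection is implemented; one plausible sketch, assuming OpenCV for Android, is to run a Canny edge detector followed by a probabilistic Hough transform and treat near-horizontal line segments as candidate step edges (all thresholds below are illustrative):

```java
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public final class StairEdgeDetector {

    /** Returns the number of roughly horizontal line segments found in the frame. */
    public static int countStepEdges(Mat rgbaFrame) {
        Mat gray = new Mat();
        Mat edges = new Mat();
        Mat lines = new Mat();

        Imgproc.cvtColor(rgbaFrame, gray, Imgproc.COLOR_RGBA2GRAY);
        Imgproc.GaussianBlur(gray, gray, new Size(5, 5), 0);   // suppress noise
        Imgproc.Canny(gray, edges, 50.0, 150.0);               // edge map
        Imgproc.HoughLinesP(edges, lines, 1, Math.PI / 180, 80, 100, 10);

        int horizontal = 0;
        for (int i = 0; i < lines.rows(); i++) {
            double[] l = lines.get(i, 0);                      // x1, y1, x2, y2
            double angle = Math.abs(Math.toDegrees(
                    Math.atan2(l[3] - l[1], l[2] - l[0])));
            if (angle < 15 || angle > 165) {                   // near-horizontal segment
                horizontal++;
            }
        }
        return horizontal;
    }
}
```

A count of several stacked near-horizontal edges in the lower half of the frame would then be one possible signal that a flight of stairs is ahead.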