Aira is a service that helps visually impaired individuals live independently. Through the latest personalized assistive technology and approaches, we serve as visual interpreters and navigators for blind and low-vision people.
Here is a detailed description of how the Aira platform works:
Smart-glass technology: The user puts on smart glasses equipped with a camera, network connectivity, and a rich set of sensors (such as Google Glass) and taps a button. An "Uber"-style routing system then connects the user to an available trained Agent, who helps with his or her request.
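The "Uber"-style routing described above could be sketched as a simple availability queue. This is an illustrative toy, not Aira's actual matching logic; all names here are hypothetical:

```python
from collections import deque

class AgentRouter:
    """Toy sketch of "Uber"-style routing: connect a user's request
    to the next available trained Agent."""

    def __init__(self):
        self.available = deque()  # Agents currently ready to take a session

    def register(self, agent_id):
        """An Agent comes online and joins the availability pool."""
        self.available.append(agent_id)

    def request_session(self, user_id):
        """Pair the requesting user with the next free Agent,
        or return None if no Agent is currently available."""
        if not self.available:
            return None  # the real system would queue or retry
        agent_id = self.available.popleft()
        return (user_id, agent_id)

router = AgentRouter()
router.register("agent-7")
session = router.request_session("user-42")
```

A first-come, first-served queue keeps the example minimal; a production matcher would also weigh factors the article mentions, such as whether the user has worked with the Agent before.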
Aira Agents are certified and trained: Agents are specially selected and trained through a four-step process (described below). Agents come either from the user’s circle of friends and family or from our Aira Agent Team. Users always interact with Agents they feel comfortable with and have trained or worked with in the past.
Agent selection and training is a four-step process:
— -Agents are selected by a pre-screening process similar to the tests used for 911 operators.
— -Every Agent is trained by experienced Aira team members on the use of the Agent Dashboard and goes
through various emergency protocols.
— -The Agent is also introduced to “Aira Phraseology,” which helps him or her deliver timely, clear,
and concise information while taking the user’s context into consideration.
— -In addition, Agents go through a certification process after undergoing a thorough
background check and signing confidentiality agreements.
The Agent has a sophisticated Agent Dashboard which has the following data feeds:
— -The current view from the user’s Google Glass.
— -The user’s current location.
— -Current traffic conditions, points of interest, government warnings, etc.
— -Personal data gathered from the user, such as medical history, emergency contacts, things they
enjoy doing, places they like visiting, and names and contact information of family members and personal assistants.
— -Facial recognition that will enable the user to identify individuals around him or her.
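The feeds listed above could be grouped into a single structure on the Agent's side. The following is an illustrative sketch; the field names are hypothetical, not Aira's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class DashboardFeed:
    """Illustrative container for the Agent Dashboard's data feeds
    (hypothetical field names)."""
    camera_frame: bytes = b""                        # current view from the user's glasses
    location: tuple = (0.0, 0.0)                     # user's current GPS position
    environment: dict = field(default_factory=dict)  # traffic, points of interest, warnings
    profile: dict = field(default_factory=dict)      # medical history, contacts, preferences
    recognized_faces: list = field(default_factory=list)  # facial-recognition matches

# Example: a feed with the user's position and one recognized face
feed = DashboardFeed(location=(32.7157, -117.1611))
feed.recognized_faces.append("Jane Doe")
```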
Agent-User interaction: Agents are first paired with users to get acquainted with users’ needs by working with them on simpler tasks under supervision. This provides an opportunity for both the Agent and blind user to get to know each other and develop a compatible relationship. Once the user builds trust and becomes comfortable with the guidance received, the Agent will transition to guiding them independently with their tasks.
User safety is paramount:
— -Our platform, electronic equipment, system protocols, Agent training, and user acclimatization are
designed with user safety as our top priority. All transactions via the Aira platform are secured.
— -The system runs automatic algorithms to assess potential dangers. For example, it constantly
monitors the audio, video, and sensor feeds. As new safety systems and wearable devices become
available, Aira’s platform is built to incorporate them and remain at the leading edge of safety.
— -Aira has mechanisms to handle contingencies such as a device or technical failure. Agents will
be able to make calls on behalf of the user, request taxi service from reliable service providers to
take the blind user home safely, or follow a specific user’s contingency plan.
— -Even when network coverage is lost, Aira continues to store the user’s location locally on the wearable
device. Once the network connection is regained, their location history is updated on the server,
enabling the Agent to pinpoint the user’s current location.
— -Aira’s platform can also mitigate latency issues (a slow network) when network bandwidth is
limited, such as in crowded or remote areas.
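The store-and-forward location behavior described above can be sketched as follows. This is a minimal illustration of the idea, assuming hypothetical names, not Aira's implementation:

```python
class LocationTracker:
    """Sketch of store-and-forward location tracking: buffer GPS fixes
    locally while offline, flush them to the server on reconnect."""

    def __init__(self):
        self.local_buffer = []    # fixes recorded while the device is offline
        self.server_history = []  # stand-in for the server-side location log
        self.online = True

    def record(self, fix):
        """Record a GPS fix, locally or remotely depending on connectivity."""
        if self.online:
            self.server_history.append(fix)
        else:
            self.local_buffer.append(fix)  # keep history on the wearable

    def reconnect(self):
        """Network regained: upload buffered fixes in order, then clear them."""
        self.online = True
        self.server_history.extend(self.local_buffer)
        self.local_buffer.clear()
```

Because fixes are appended in order, the server-side history stays chronological after a flush, which is what lets the Agent pinpoint the user's current location once coverage returns.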
Environmental adaptation: To provide precise and accurate guidance in noisy areas, the majority of communication between the Aira Agent and the user can be carried out via pre-set tones and clicks.
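A pre-set tone vocabulary like the one described could be as simple as a lookup table. The cues and names below are hypothetical examples, not Aira's actual signal set:

```python
# Hypothetical mapping of Agent instructions to short non-verbal cues,
# usable where spoken guidance would be drowned out by noise.
TONE_CUES = {
    "turn_left": "low-high",    # ascending two-tone
    "turn_right": "high-low",   # descending two-tone
    "stop": "triple-click",
    "proceed": "single-click",
}

def cue_for(instruction):
    """Return the pre-set tone for an instruction, falling back to
    spoken delivery when no tone is defined for it."""
    return TONE_CUES.get(instruction, "speak")
```

The fallback matters: only routine instructions map to tones, while anything unusual is still delivered verbally.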
Latency adaptation (i.e., a slow 4G LTE network): The Aira team has worked diligently to reduce the effects of network delay. New algorithms under development will optimize the quality of video streaming and reduce network delay to 1–2 seconds (depending on network reception in the area).
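One common way to trade video quality against delay on a slow link is adaptive bitrate selection. The numbers and function below are illustrative only; the article does not describe Aira's actual algorithm:

```python
def choose_bitrate(bandwidth_kbps, ladder=(250, 500, 1000, 2000)):
    """Pick the highest stream bitrate the measured bandwidth supports,
    leaving ~20% headroom so queued frames don't build up delay.
    (Illustrative numbers, not Aira's algorithm.)"""
    usable = bandwidth_kbps * 0.8   # headroom against bandwidth jitter
    chosen = ladder[0]              # floor: never go below the lowest rung
    for rate in ladder:
        if rate <= usable:
            chosen = rate
    return chosen
```

Dropping to a lower rung when bandwidth shrinks is what keeps end-to-end delay bounded: sending less data per second prevents the send buffer from growing into multi-second lag.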
— -Aira will be able to leverage indoor location systems, such as iBeacon and Google Indoor
Maps, once those technologies mature.
— -Aira is built on its own API platform, which will be opened to external developers, allowing them to
extend it to most of the industry’s wearable technologies. This means Aira, as a system, can
incorporate information from other wearables, cameras, proximity sensors, infrared canes, and other
sensors to provide a complete solution for users.
— -Aira has access to next-generation artificial intelligence algorithms and advanced object- and pattern-recognition technology.
Market research and experiments:
— -Aira has conducted experiments at FFB Dining in the Dark 2014, Vision Walk 2014, LA Tech talk, CSUN 2015
and Braille Institute.
— -There have been 50+ blind-user experiments, 200 in-depth interviews with blind users, and 5 Agent
training sessions.
— -There are 100+ beta users signed up for trials in summer 2015.
— -There have been 8 formal focus groups with Mobility and Orientation specialists and assistive tech experts.