As technology gets faster, cheaper, and smaller at exponential rates, an all-in-one wearable AR solution is within reach. I believe designers like myself, as a recent graduate of design school, will have the responsibility to design for mass-market wearable AR devices. As a result, we have a moral obligation to put accessible, human-centered design at the forefront of the design process for these new technologies.
This project is my first attempt at using augmented reality to deliver its benefits to visually impaired users through auditory assistance.
What is Augmented Reality?
Augmented Reality is a “technology that allows users to view and interact in real time with virtual images seamlessly superimposed over the real world”(5). The most popular AR device, Google Glass, uses a small liquid crystal on silicon (LCoS) display projected onto a prism, displaying a digital overlay atop the real world.
What differentiates AR headsets of the past, such as Google Glass, from newer solutions such as Microsoft's HoloLens or Magic Leap is the ability to accurately create “a 3D model of its environment while also tracking the camera pose”(6) using a method called Simultaneous Localization and Mapping (SLAM).
AR utilizing SLAM for object recognition allows for a better understanding of a physical space by "providing a human-scale understanding of space and motion", creating the foundation for new experiences to be built and designed on(7).
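To make SLAM concrete, here is a minimal toy sketch in Python of the core loop: the device updates its own pose estimate (localization) while adding the landmarks it observes to a map (mapping). The odometry and observation values are invented for illustration; real systems like ARKit and HoloLens fuse camera features and inertial data with far more sophisticated filtering.

```python
# Toy SLAM loop: track the device pose and a landmark map together.
import numpy as np

class ToySLAM:
    def __init__(self):
        self.pose = np.zeros(2)   # estimated (x, y) position of the device
        self.landmarks = {}       # landmark id -> estimated world position

    def step(self, odometry, observations):
        """odometry: estimated motion since the last step, as (dx, dy).
        observations: {landmark_id: position relative to the device}."""
        self.pose = self.pose + np.asarray(odometry)      # localization
        for lid, rel in observations.items():             # mapping
            world = self.pose + np.asarray(rel)
            if lid in self.landmarks:
                # average with the previous estimate to smooth sensor noise
                self.landmarks[lid] = (self.landmarks[lid] + world) / 2
            else:
                self.landmarks[lid] = world

slam = ToySLAM()
slam.step(odometry=(1.0, 0.0), observations={"door": (2.0, 1.0)})
slam.step(odometry=(1.0, 0.0), observations={"door": (1.1, 0.9)})
print(slam.pose, slam.landmarks["door"])  # the pose moves; the door stays put
```

Real SLAM replaces this naive averaging with probabilistic filters or bundle adjustment, but the simultaneous "where am I / what is around me" structure is the same.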
Who is currently using AR?
Many of the world's biggest tech companies, including Apple, Google, and Microsoft, have seen the potential of AR and have begun creating AR SDKs and platforms for developers to create and design AR content. Some of the most popular platforms for upcoming AR development are Apple's ARKit, Google's ARCore, Facebook's Camera Effects, and Microsoft's HoloLens SDK, bringing AR capabilities to the masses(12). There are also other companies working on wearable AR devices, as seen below.
What is Apple doing with AR?
With the race for AR market domination beginning, Apple acquired Metaio(13), a German augmented reality software maker whose work helped create ARKit(14). ARKit is “the largest AR platform in the world”; with ARKit and iOS 11, the company is investing in both software and hardware to dominate the future AR market(15). With a DxOMark score of 97, the latest iPhone X boasts powerful cameras, showing that Apple's years of camera innovation are laying the foundation for smartphone AR(16). The iPhone X's depth-sensing front-facing camera will also be beneficial in the future for recognizing and identifying physical objects(17).
What is Microsoft doing with AR?
Both the iPhone X and Microsoft's Kinect use “a depth-sensing camera to see the world in three-dimensions”. Microsoft has been developing and designing for AR since the Xbox Kinect, and Alex Kipman, the primary inventor on more than 100 patents, has led that project as well as HoloLens. Kipman states in an interview with National Public Radio that AR is a "monumental shift where we move the entire computer industry from this old world, where we have to understand technology, into this new world, where technology disappears and it starts more fundamentally understanding us."
This is critical for the wearable AR of the future, which should be so unintrusive that it feels like it's not even there. The HoloLens is “the first fully self contained, holographic computer, enabling you to interact with high definition programs in your world.” Microsoft has also been buying many AR patents, and its “large number of pending applications indicates that this dominant position will only get stronger."
What is Google doing with AR?
Google has been investing in AR hardware and software since the unsuccessful launch of Google Glass. Glass does “not use cutting-edge technologies, but rather combines standard technologies in a cutting-edge manner”(22). This is because Glass projects a 2D screen in front of the user's eyes, without spatial awareness of the physical world. Glass is still being developed, however, under a new name: Glass Enterprise Edition. The company hasn't stopped there, as Google now has “212 issued and 438 pending US patents directed towards augmented reality”(23), proving its commitment to the AR space(24). Google's “$500-plus million investment in Magic Leap, which recently announced an additional funding round of $1 billion”(25) shows it also wants control in the AR market.
Google is also making advancements on smartphones by pairing the Google Assistant AI with Google Lens. Lens is a visual search engine capable of looking at something and using object recognition to gather more information about it. This use of computer vision and machine learning will be fundamental to the development of AR, and Google's investment in this platform is a step in the right direction.
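As a rough illustration of the object-recognition step behind a Lens-style visual search, the sketch below runs a photo through an off-the-shelf pretrained classifier from torchvision. This is a stand-in for Google's actual models, and the image path is hypothetical.

```python
# Identify what the camera is looking at with a pretrained classifier.
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()   # ImageNet-pretrained classifier
preprocess = weights.transforms()          # matching resize/normalize pipeline

def identify(image_path: str, top_k: int = 3):
    """Return the model's best guesses for the object in the photo."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    scores, idxs = probs.topk(top_k)
    return [(weights.meta["categories"][int(i)], float(s))
            for i, s in zip(idxs, scores)]

# e.g. identify("photo_of_a_mug.jpg") -> [("coffee mug", 0.87), ...]
```

A product like Lens then attaches search results to the recognized object; for an auditory assistant, the recognition step itself is what matters.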
How AR is advancing
AR is evolving every day, and its growing popularity is evident through smartphone applications such as Snapchat, Pokémon GO, and many others. Smartphones have become faster and more powerful over the years, offering phone calls, text messages, internet access over Wi-Fi and cellular, video calls, GPS, advanced cameras, and many other features that are now considered essential. Ultimately, it is what developers and designers do with this technology that will reveal the opportunities and potential AR has.
Smartphone Apps
Pokémon GO is a great example of the potential of augmented reality on smartphones, and of the impact this technology has when brought to the masses in an enjoyable way. Pokémon GO is an app where people all around the world come together to play a video game that uses augmented reality to catch and train Pokémon. It became the most popular mobile game ever: it had 65 million users within the first week, and within 90 days of launch it had generated $600 million in revenue(29).
There are many other smartphone apps using AR in unique and creative ways that help raise awareness of the technology. Snapchat offers lenses and filters that fit to the shape of the user's face using ARKit, exposing many of Snapchat's daily users to the technology. More examples range from measuring real-world objects with nothing but an AR smartphone app to placing a virtual couch in your living room with IKEA Place. In today's digital age of using the newest and most innovative technology in fun and engaging ways, businesses will begin to adopt augmented reality, as it has proven useful and effective at engaging customers through digital content.
Human-Centered Artificial Intelligence
Mark Riedl, an associate professor in the College of Computing's School of Interactive Computing, states that human-centred artificial intelligence "is the recognition that the way AI systems solve problems — especially using machine learning" must be designed around the people those systems serve(30). As AI focuses on helping the user with personal content such as calendars and events, photos, videos, friends, messages, and reservations, we use personal assistants and their AI to solve human-centred problems as they constantly learn from us(31). If an action results in something functional or helpful, it is a good example of compelling content with a human-centred design. An example of this type of human-centred AI is Word Lens, an app acquired by Google that allows the user to "point their smartphone at printed text in a foreign language and translate it to a language of your choice"(32), providing intuitive solutions through AI and machine learning. This technology proves especially useful for people with visual impairments. I have created an example of ways AI & AR can work together to solve everyday tasks and problems.
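A loose sketch of the Word Lens idea, under simplifying assumptions: Tesseract performs the OCR, and a tiny hardcoded dictionary stands in for a real translation model or service.

```python
# OCR the text the camera sees, then translate it word by word.
from PIL import Image
import pytesseract

TOY_DICTIONARY = {"ausgang": "exit", "achtung": "caution"}  # illustrative only

def translate_sign(image_path: str) -> str:
    # Read German text out of the photo, then swap in English words.
    text = pytesseract.image_to_string(Image.open(image_path), lang="deu")
    words = [w.strip(".,!").lower() for w in text.split()]
    return " ".join(TOY_DICTIONARY.get(w, w) for w in words)

# e.g. a photo of a German "Ausgang" sign -> "exit"
```

Spoken aloud instead of rendered on screen, the same pipeline becomes an accessibility feature rather than a visual overlay.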
The intersection of AI & AR
Genevieve Bell, an anthropologist and Senior Fellow at Intel who focuses on the intersection of culture and technology, also discusses AR and AI. Bell believes that artificial intelligence will be key to AR-powered devices, as "they are going to be more intuitive about who we are, they're going to have a memory of us, and as a result not be so much of an interaction, but a relationship…where they might anticipate what we are doing, where they might deliberately do things on our behalves"(36).
This would benefit users with visual impairments, as they would not have to remember things like where they placed an object; an AI with contextual memory alleviates the stress of having limited vision. Having AI that uses machine learning to mold itself to an individual's personality will be critical to creating an all-in-one wearable AR device that can help the user before they even know they need help.
An auditory design system would be primarily used by people with low or no vision, although there is no one specific demographic the device and system would be limited to. It would work by contextually understanding the user's environment through the device's cameras and sensors and relaying that information to the user through an auditory personal assistant (over the device speakers). Over time, with machine learning, the device would come to understand what the user wants to accomplish, anticipating their needs and presenting only relevant information, limiting auditory bombardment.
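As a sketch of how that pipeline might behave, the example below speaks only the detections whose learned relevance crosses a threshold, using the pyttsx3 text-to-speech library. The labels, directions, and relevance weights are hypothetical placeholders for what machine learning would tune per user over time.

```python
# Filter camera detections by learned relevance, then speak what matters.
import pyttsx3

relevance = {"stairs": 0.9, "door": 0.7, "poster": 0.1}  # hypothetical learned weights

def announce(detections, threshold=0.5):
    """detections: list of (label, direction) pairs from the vision system."""
    engine = pyttsx3.init()
    for label, direction in detections:
        if relevance.get(label, 0.0) >= threshold:  # limit auditory bombardment
            engine.say(f"{label}, {direction}")
    engine.runAndWait()

announce([("stairs", "two steps ahead"), ("poster", "on your left")])
# speaks "stairs, two steps ahead" and stays silent about the poster
```

Raising or lowering the threshold is one simple way the assistant could adapt to how much narration an individual user wants.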
What I ultimately learned from my research is that the potential of wearable AR is limitless, but the technology is not yet available for an all-in-one mass-market wearable device. Such a device would require four key components:
- Object Recognition
- Simultaneous Localization and Mapping
- Machine Learning
- An Advanced Personal Assistant
Augmented reality pioneer Ronald Azuma stated in his 1997 paper A Survey of Augmented Reality that augmented reality “allows the user to see the real world with virtual objects superimposed or composited with the real world. Therefore, AR supplements reality, rather than completely replacing it” (Azuma 1997).
As Azuma states, augmented reality should supplement reality, and through my research I found this technology can be used to assist visually impaired users in everyday life, as opposed to adding visual digital overlays. This is because augmented reality can be anything that augments an aspect of one's life, such as an auditory system that relays visual information to the user.
To see my bibliography for this project, click here.
After researching the technology, I wanted to focus on the intersection of accessibility and augmented reality. As designers, we have a moral obligation to design for everyone, and with new and emerging technologies such as augmented reality, accessibility needs must be met.
I began creating ARSight by brainstorming different instances where having a camera relay visual information audibly could help visually impaired users. The first area I researched was communication, as this would be a critical component of ARSight. I created a website analyzing the history of the telephone to better understand the technology, available to view below.
This research led me to make ARSight exclusively auditory, requiring a personal assistant to communicate the user's world to them.
Next, I had to understand what information the average user with good vision would use wearable AR for, and began to translate how these experiences could work audibly. I created a visual design system so I could work backwards to recreate these visual experiences through sound.
I began creating instances of interactions that would benefit the user by helping them navigate their physical world safely.
This augmented hearing would provide contextual information about whatever the user needs to know about their physical or digital worlds. This can range from knowing when to stop pouring a drink to everyday tasks such as setting alarms and reminders while staying aware of the user's location.