Projects

Augmented Reality based Human Memory Augmentation using Artificial Intelligence

The goal of this project is to develop an assistive expert system. We intend to create assistive tools capable of human memory augmentation by combining computer vision techniques, such as deep learning based object recognition, with camera technology capable of recording the data. In this study, we also intend to create an assistive path-showing tool based on the image data collected from the participants of the experiments.
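As an illustration of the object recognition component, the sketch below runs a pretrained detector on a single image. The specific model (torchvision's Faster R-CNN pretrained on COCO), the score threshold, and the helper name are assumptions for illustration only; the project's own detector and label set would replace them.

import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Detector pretrained on COCO; the project's own model and label set
# would be substituted here.
model = fasterrcnn_resnet50_fpn(pretrained=True).eval()

def recognize_objects(image_path, score_threshold=0.7):
    """Return (label_id, score, box) tuples for objects found in one frame."""
    image = Image.open(image_path).convert("RGB")
    tensor = transforms.ToTensor()(image)
    with torch.no_grad():
        prediction = model([tensor])[0]
    detections = []
    for label, score, box in zip(prediction["labels"],
                                 prediction["scores"],
                                 prediction["boxes"]):
        if score >= score_threshold:
            detections.append((int(label), float(score), box.tolist()))
    return detections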

Such an application has the potential to significantly ease the cognitive load on humans by reducing the need to memorize many objects and locations in daily life or in the work environment. In addition, these assistive tools might benefit individuals with impaired memory, such as people with brain damage due to stroke or Alzheimer's dementia. During our study, we would also like to evaluate the effectiveness of an automatic assistive path-showing tool built from the collected visual-spatial data. We will compare the navigation performance of participants following a memorized path with their performance in the AR-augmented case. We plan to collect the visual data from diverse groups of healthy people within the Nazarbayev University community.

Our Aim:
Our data collection setup consists of two components: 1) Microsoft HoloLens augmented reality goggles with a built-in color camera for collecting images representing the view of the human user wearing these digital glasses, and 2) a server computer where the automatic object recognition and localization processes run in real time. For the automatic localization of the participants wearing the digital glasses, we will use fiducial markers placed on floors 4-6 of the C4 building. During the walking stage of the experiments, we will ask participants to memorize the labels and locations of around 10 objects on the test floor of the C4 building. After the walking stage, we will ask participants to fill in a form listing the objects they have seen and their locations. For the evaluation of the automatic path-finding, video-based assistive tool, we will ask participants to find an object in the 4th, 5th or 6th floor corridors of the C4 building based on the information they memorized during the walking stage of the data recording. Then we will ask them to perform the same procedure with the help of the AR assistant, which shows short instructive videos on how to reach the objects of interest placed along the corridors of the C4 building.
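For concreteness, the following sketch shows how the marker-based localization step could work on a single camera frame, assuming the fiducial markers are ArUco markers detected with the classic cv2.aruco API from opencv-contrib-python; the marker size and camera intrinsics are placeholder values rather than the actual HoloLens calibration.

import cv2
import numpy as np

ARUCO_DICT = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
PARAMS = cv2.aruco.DetectorParameters_create()
MARKER_SIZE_M = 0.15  # printed marker side length in metres (assumed)

# Placeholder intrinsics; a real setup would use the device calibration.
CAMERA_MATRIX = np.array([[1400.0, 0.0, 640.0],
                          [0.0, 1400.0, 360.0],
                          [0.0, 0.0, 1.0]])
DIST_COEFFS = np.zeros(5)

def locate_user(frame_bgr):
    """Detect fiducial markers in one frame and return (marker_id, tvec)
    pairs, i.e. which known wall marker is visible and where it lies
    relative to the camera."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, ARUCO_DICT, parameters=PARAMS)
    if ids is None:
        return []
    _, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_SIZE_M, CAMERA_MATRIX, DIST_COEFFS)
    return [(int(i), t.ravel().tolist()) for i, t in zip(ids.ravel(), tvecs)]

Since each marker ID is tied to a known position on floors 4-6, detecting a marker and its relative pose is enough to place the wearer in the building's map.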

The data will be collected from human users wearing the Microsoft HoloLens digital glasses for augmented reality. The data consists of RGB images collected from the built-in color camera of the HoloLens. In addition to the images of the user's view in the HoloLens, we will also collect spatial data representing the position of the user in the C4 building during the experiments. As a result, the experimental data will contain information about what the human users have seen and where. The collected visual-spatial data will be sent to a server running the Ubuntu 16.04 operating system, where the automatic object recognition and localization are performed. The results of the object recognition and localization tools will then be sent back to the user wearing the HoloLens in real time.
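A minimal sketch of the server side of this pipeline is given below, assuming the HoloLens client posts JPEG frames over HTTP and the Ubuntu server uses Flask; the endpoint name, port, and the two helper functions (standing in for the detector and localizer sketched above) are illustrative, not the project's actual interface.

import cv2
import numpy as np
from flask import Flask, request, jsonify

app = Flask(__name__)

def recognize_objects(frame_bgr):
    """Placeholder for the deep-learning object detector sketched earlier."""
    return []

def locate_user(frame_bgr):
    """Placeholder for the fiducial-marker localizer sketched earlier."""
    return []

@app.route("/frame", methods=["POST"])
def process_frame():
    # Decode the JPEG frame posted by the HoloLens client.
    buffer = np.frombuffer(request.data, dtype=np.uint8)
    frame = cv2.imdecode(buffer, cv2.IMREAD_COLOR)
    if frame is None:
        return jsonify({"error": "could not decode frame"}), 400

    # Run recognition and localization, then return the result so the
    # HoloLens app can overlay object labels and navigation hints.
    return jsonify({
        "objects": recognize_objects(frame),
        "markers": locate_user(frame),
    })

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

In a real deployment, a persistent streaming connection would likely replace per-frame HTTP requests to keep the round-trip latency low enough for real-time overlay on the HoloLens.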