Real-time hardware vision processing for a bionic eye

2017-03-01T23:47:50Z (GMT) by Josh, Horace Edmund
A recent objective in medical bionics research is to develop visual prostheses: devices that could potentially restore sight to blind individuals. The Monash Vision Group is currently working towards a fully autonomous direct-to-brain vision implant called the Gennaris. Although research in this field is progressing quickly, initial implementations of these devices will be quite basic, offering only rudimentary vision. The vision is anticipated to be binary - that is, composed of black and white pixels only - and of low resolution, on the order of several hundred pixels. Improving on this dramatically is currently improbable, as it would require significant advances in electrode stimulation technology and substantial research into the complexities of the visual cortex. This PhD project aims to contribute to the development of the Gennaris and other bionic vision devices, in the hope of improving the quality of life of future patients. More specifically, the key goals of this work have been: to develop a portable, real-time visual prosthesis simulator that is suitably representative of the vision anticipated from the Gennaris; to investigate the potential capabilities of future patients under this limited vision, and the image processing techniques that could be used to improve their performance; and to investigate the feasibility of integrating 3D depth sensing and advanced functionality that could aid navigation. An emphasis throughout this work has been high frame rate, low latency, and real-time operation. An immersive real-time simulator system called the Hatpack Simulator, based on a Field Programmable Gate Array architecture, has been developed. This system addresses limitations of the platforms used in existing research: the Hatpack is portable, weighing only 3 kilograms; operates at 60 frames per second with a constant low latency of 17 ms; and is low in power consumption, lasting up to 4 hours on a full charge.
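The binary, several-hundred-pixel vision described above can be illustrated with a minimal software sketch. This is not the thesis's FPGA implementation; it is a hypothetical illustration of the general idea, with an assumed phosphene grid size and threshold, reducing a grayscale image to a low-resolution binary rendering by block averaging and thresholding.

```python
import numpy as np

def simulate_prosthetic_vision(image, grid=(20, 30), threshold=0.5):
    """Reduce a grayscale image (2D array, values in [0, 1]) to a binary,
    low-resolution rendering, roughly approximating the several-hundred-pixel
    black-and-white vision anticipated for early implants.

    grid and threshold are illustrative assumptions, not values from the thesis.
    """
    h, w = image.shape
    gh, gw = grid
    # Trim so the image divides evenly into gh x gw blocks.
    trimmed = image[: (h // gh) * gh, : (w // gw) * gw]
    # Block-average: each output pixel is the mean of its source region.
    blocks = trimmed.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    # Binarize: each "phosphene" is simply on or off.
    return (blocks >= threshold).astype(np.uint8)
```

A 20x30 grid gives 600 output pixels, in line with the "several hundred pixels" the abstract anticipates.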
Five psychophysics trials were carried out to evaluate the effectiveness of the various 2D image processing functions implemented, and the ability of users to complete simple tasks resembling everyday activities. The results show that the binary, low-resolution degradation of vision significantly reduces user capabilities, which motivates the use of 3D sensing to improve image representations for bionic vision. The integration of a second-generation Microsoft Kinect depth sensor has been investigated, and a new hardware plane-fitting algorithm based on least squares has been developed. This has been applied to the detection and highlighting of tables and free floor space in a real-time end-to-end system running at 60 frames per second.
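The mathematical core of least-squares plane fitting, as used for detecting flat surfaces such as tables and floors in depth data, can be sketched as follows. This is a minimal software illustration of the underlying method, not the thesis's hardware implementation; the function name and the model form z = ax + by + c are assumptions for the example.

```python
import numpy as np

def fit_plane_least_squares(points):
    """Fit the plane z = a*x + b*y + c to an (N, 3) array of 3D points
    by ordinary linear least squares.

    Illustrative sketch only: the thesis implements plane fitting in
    hardware; this shows the equivalent math in software.
    """
    # Design matrix: one row [x, y, 1] per point.
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    # Solve A @ [a, b, c] ~= z in the least-squares sense.
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)
```

A surface-detection pipeline would then compare each depth point's residual against the fitted plane to decide whether it belongs to, say, a floor or a tabletop.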