In this thesis, a guidance system based on face tracking is proposed for an autonomous wheelchair in outdoor environments, using computer vision to aid the mobility of disabled people. In this closed-loop system, the user provides high-level commands through head, eye and mouth movements, allowing the wheelchair to be moved at will. The system is non-intrusive, easy to handle, adaptive to lighting conditions and applicable to any person, regardless of sex or race, provided no head mobility problems are exhibited.
To achieve these objectives, a micro colour video camera has been placed in a fixed position on the chair to continuously capture images containing the user's head. A new real-time skin clustering process, based on an Unsupervised Adaptive Stochastic Gaussian Model, is applied to these images. This process, denoted UASGM, constitutes the main contribution of this work.
Different colour spaces have been studied, concluding that the normalized RG space is the optimal one for skin clustering purposes. Likewise, the use of a Gaussian model has proved appropriate for this problem. A competitive learning method, Vector Quantization (VQ), has been used to initialize the model, achieving a particular model adjustment for each user. A modification of the generalized Fisher ratio was used to determine the optimal number of classes for the clustering algorithm. The model is adapted through a linear combination of the already known parameters, following the Maximum Likelihood criterion.
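As an illustration of the per-pixel decision implied by a Gaussian skin model in the normalized RG space, the following is a minimal sketch; the mean, covariance and chi-square threshold shown are hypothetical placeholders, not the values fitted in this thesis:

```python
import numpy as np

def to_normalized_rg(rgb):
    """Project RGB pixels (..., 3) onto the normalized RG plane,
    discarding intensity: r = R/(R+G+B), g = G/(R+G+B)."""
    rgb = np.asarray(rgb, dtype=float)
    s = rgb.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0  # avoid division by zero on pure black pixels
    return rgb[..., :2] / s

def skin_mask(rgb_image, mean, cov, threshold=9.21):
    """Label a pixel as skin when its squared Mahalanobis distance to
    the Gaussian skin model (mean, cov) in rg-space falls below a
    chi-square threshold (9.21 ~ 99% confidence for 2 d.o.f.)."""
    rg = to_normalized_rg(rgb_image)
    diff = rg - mean
    inv_cov = np.linalg.inv(cov)
    d2 = np.einsum('...i,ij,...j->...', diff, inv_cov, diff)
    return d2 < threshold

# Hypothetical model parameters for illustration only:
mean = np.array([0.45, 0.30])
cov = np.diag([0.001, 0.001])
img = np.array([[[115, 76, 64],    # skin-like pixel
                 [0, 0, 255]]])    # blue (non-skin) pixel
mask = skin_mask(img, mean, cov)
```

In the actual system the mean and covariance would be initialized per user by the VQ step and then adapted over time, rather than fixed as above.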
It has been shown that the clustering method developed in this thesis is a simplification of the mixture of multiple Gaussians used in histogram modelling together with Expectation-Maximization (EM). Experimental results achieved in this work are equal to or better than those yielded by other methods based on multiple Gaussian functions. Moreover, this method provides better results than GLVQ-F (Fuzzy Generalized Learning Vector Quantization), which uses the popular FCM (Fuzzy C-Means) clustering algorithm.
Once the clustering process finishes, a Kalman filter developed for this purpose tracks the skin blob on the image plane. The system state vector is fed into a finite state machine, previously tuned for each particular user, to provide high-level commands for the wheelchair. These commands are sent to another finite state machine that issues the linear and angular velocities of the chair. Using the kinematic model, these velocities are translated into angular velocities for each wheel and sent to the low-level controllers via a LonWorks bus (by ECHELON), where a traditional PI control is performed. Additional vision-based modules give the system the ability to locate the eyes and mouth on the image plane, activating special commands such as On/Off and Forward/Backward that work in conjunction with the commands generated by head movements.
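The translation from chair velocities to wheel velocities follows the standard differential-drive kinematic model; a minimal sketch is given below, where the wheel radius and track width are illustrative values, not the actual chair's dimensions:

```python
def wheel_speeds(v, omega, wheel_radius=0.30, track_width=0.56):
    """Translate the chair's linear velocity v (m/s) and angular
    velocity omega (rad/s) into angular velocities (rad/s) for the
    left and right wheels of a differential-drive platform.
    wheel_radius and track_width are hypothetical example values."""
    v_left = v - omega * track_width / 2.0   # left wheel rim speed (m/s)
    v_right = v + omega * track_width / 2.0  # right wheel rim speed (m/s)
    return v_left / wheel_radius, v_right / wheel_radius

# Straight motion: both wheels spin at the same rate.
wl, wr = wheel_speeds(v=0.3, omega=0.0)
# Turning in place: wheels spin in opposite directions.
wl_turn, wr_turn = wheel_speeds(v=0.0, omega=1.0)
```

These per-wheel setpoints would then be the references for the low-level PI controllers mentioned above.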
User adaptation to the proposed guidance system is aided by a 3D simulator that emulates the first floor of the former Polytechnic School of the University of Alcalá. Wheelchair movements can be simulated in this 3D environment as a form of training, without putting the user at risk.
To demonstrate the capabilities of the proposed system, several tests have been performed on a prototype wheelchair (specially adapted for this kind of guidance) within the framework of the SIAMO project. Finally, the results of a questionnaire completed by different users about the system's controllability and performance are presented.