INTELLIGENT WIDGET RECONFIGURATION FOR MOBILE PHONES: GAZE-X

Gaze-X is an agent-based intelligent system that supports multimodal human-computer interaction. The system comprises two main parts: context modeling and context sensing. It models the user's actions and emotions and then adapts the interface to support the user in his activities. The context information, known as W5+ (who, where, what, when, why, how), is obtained through a number of human communication modalities, such as speech, eye gaze direction, face and facial expression, together with standard interface modalities such as mouse moves, keystrokes, and identification of the active software. Various commercially available components, such as speech recognition and image processing software, are required to process these multimodal inputs.
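As an illustration of how such W5+ context information might be represented, the sketch below fuses the separate modality outputs into a single context frame. The class, field, and function names are illustrative assumptions, not the actual Gaze-X data structures.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative W5+ context frame; names are assumptions, not Gaze-X internals.
@dataclass
class W5PlusContext:
    who: str        # user identity (e.g. from face recognition)
    what: str       # current activity (e.g. the active software)
    where: str      # focus of attention (e.g. from eye-gaze direction)
    when: datetime  # time of observation
    why: str        # inferred affective state (e.g. from facial expression)
    how: str        # interaction channel (e.g. keyboard, mouse, speech)

def sense_context(face_id: str, gaze_target: str, expression: str,
                  active_app: str, input_channel: str) -> W5PlusContext:
    """Fuse multimodal observations into a single W5+ context frame."""
    return W5PlusContext(
        who=face_id,
        what=f"using {active_app}",
        where=gaze_target,
        when=datetime.now(),
        why=expression,
        how=input_channel,
    )

if __name__ == "__main__":
    ctx = sense_context("user_1", "editor window", "frustrated",
                        "text editor", "keyboard")
    print(ctx)
```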

The inference engine is based on case-based reasoning, a type of lazy learning. Lazy learning methods store incoming data and postpone generalization until an explicit request is made. The case base is a dynamic, incrementally self-organizing, event-content-addressable memory that allows retrieval of facts and evaluation of events based on user preferences and on generalizations formed from prior inputs. Based on this evaluation, Gaze-X executes the most appropriate user-supportive action. The case-based reasoner can also unlearn actions on the user's instruction, thereby increasing its expertise in user-profiled, user-supportive, intelligent interaction.
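The sketch below shows, under simplifying assumptions, the case-based reasoning cycle described above: cases are stored lazily, the most similar case is retrieved only when an action must be chosen, and cases can be reinforced or unlearned from user feedback. The similarity measure and scoring are illustrative, not the actual event-content-addressable memory used by Gaze-X.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Case:
    context: dict            # W5+ features observed when the case was stored
    action: str              # user-supportive action that was executed
    preference: float = 0.0  # cumulative user (dis)satisfaction with the action

class CaseBase:
    """Minimal lazy-learning case base (illustrative only)."""

    def __init__(self):
        self.cases: list = []

    def store(self, context: dict, action: str) -> None:
        # Lazy learning: keep raw cases, generalize only at query time.
        self.cases.append(Case(context, action))

    def _similarity(self, a: dict, b: dict) -> float:
        # Fraction of matching context features (a simplistic stand-in).
        keys = set(a) | set(b)
        return sum(a.get(k) == b.get(k) for k in keys) / len(keys) if keys else 0.0

    def best_action(self, context: dict) -> Optional[str]:
        # Retrieve the stored case most similar to the current context,
        # weighted by accumulated user preference.
        if not self.cases:
            return None
        best = max(self.cases,
                   key=lambda c: self._similarity(c.context, context) + c.preference)
        return best.action

    def reinforce(self, action: str, reward: float) -> None:
        # Adjust preference for all cases that led to this action.
        for c in self.cases:
            if c.action == action:
                c.preference += reward

    def unlearn(self, action: str) -> None:
        # Drop cases the user has explicitly rejected.
        self.cases = [c for c in self.cases if c.action != action]
```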

Gaze-X runs in either unsupervised or supervised mode. In the unsupervised mode, the user's affective state is used to decide how satisfied he is with the executed action, and adaptive, user-supportive actions are executed one at a time. In the supervised mode, the user explicitly confirms that a preferred action has been executed and may provide feedback to the system. Gaze-X has to be set up initially in the supervised mode to build up a profile of the user. Once the system has captured enough cases for that user, it can operate correctly in the unsupervised mode. Gaze-X currently runs only on desktop platforms based on Linux, Windows, or Mac OS X.
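A minimal sketch of the two operating modes, reusing the CaseBase sketch above, might look as follows. The mode-switch threshold, the affective-state check, and the helper functions are assumptions for illustration, not Gaze-X internals.

```python
MIN_CASES_FOR_UNSUPERVISED = 20  # assumed threshold for leaving supervised mode

def execute(action: str) -> None:
    # Hypothetical stand-in for performing an interface adaptation.
    print(f"executing: {action}")

def ask_user(prompt: str) -> bool:
    # Explicit confirmation, only used in supervised mode.
    return input(prompt + " [y/n] ").strip().lower() == "y"

def run_step(case_base, context: dict, supervised: bool) -> None:
    action = case_base.best_action(context) or "do_nothing"
    execute(action)

    if supervised:
        # User explicitly confirms or rejects the executed action.
        if ask_user(f"Was '{action}' helpful?"):
            case_base.store(context, action)
        else:
            case_base.unlearn(action)
    else:
        # Infer satisfaction from the user's affective state instead.
        reward = 1.0 if context.get("why") != "frustrated" else -1.0
        case_base.reinforce(action, reward)

def supervised_mode_needed(case_base) -> bool:
    """Start supervised; switch to unsupervised once enough cases exist."""
    return len(case_base.cases) < MIN_CASES_FOR_UNSUPERVISED
```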