INTELLIGENT WIDGET RECONFIGURATION FOR MOBILE PHONES: MINIMAL INTELLIGENCE ALGORITHM

The approach is to try an increasing sequence of n to obtain different numbers of hidden nodes, train the neural network for each n, and observe which n generates the smallest root mean squared error. Haykin stated that the optimal number of hidden neurons is one that yields performance close to that of the Bayesian classifier; his tests showed that an MLP neural network using two hidden neurons is already reasonably close to Bayesian performance (for his test problem). There are also rule-of-thumb methods in the literature for determining the number of hidden neurons.
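As an illustration of this sweep, the sketch below trains a small MLP for an increasing sequence of hidden-layer sizes and keeps the size that yields the smallest root mean squared error. The paper does not name an implementation, so scikit-learn and synthetic stand-in data are assumed here purely for illustration.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.random((200, 4))      # hypothetical usage-context features
    y = np.sin(X.sum(axis=1))     # hypothetical target (widget usage score)

    best_n, best_rmse = None, float("inf")
    for n in (2, 3, 5, 10):       # increasing sequence of hidden-node counts
        mlp = MLPRegressor(hidden_layer_sizes=(n,), max_iter=2000, random_state=0)
        mlp.fit(X, y)
        rmse = np.sqrt(mean_squared_error(y, mlp.predict(X)))
        if rmse < best_rmse:
            best_n, best_rmse = n, rmse

    print(f"smallest RMSE {best_rmse:.4f} at n = {best_n} hidden neurons")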

In our research, we also performed tests to see whether performance improves with different numbers of hidden neurons. Table 3 shows the results of using different numbers of hidden neurons and their respective prediction accuracies. No significant performance improvement was observed when using more hidden neurons, while each hidden neuron added to the hidden layer introduced more lag into the system, as more time is required to calculate the output.

Table 3: Summary of Prediction Accuracy

                                        Two hidden   Three hidden   Five hidden   Ten hidden
                                        neurons      neurons        neurons       neurons
Prediction accuracy for weekly
repeating usage pattern                 62%          62.5%          62.2%         62.1%

Across these tests, the processing time required increased by an average of 5% with each new hidden neuron added. This is an especially important consideration because the processing power available on the mobile platform is very limited. If using larger numbers of neurons and layers brings no substantial performance gain, it is better to use the minimum required.
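As a back-of-the-envelope illustration of this cost, the sketch below projects per-prediction latency under the reported 5% increment. The 10 ms baseline and the assumption that the increments compound are both hypothetical choices made only to show the trend.

    base_ms = 10.0                          # hypothetical latency with two hidden neurons
    for extra in range(9):                  # from two up to ten hidden neurons
        neurons = 2 + extra
        est_ms = base_ms * (1.05 ** extra)  # assume the ~5% increments compound
        print(f"{neurons:2d} hidden neurons: ~{est_ms:.1f} ms per prediction")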

For the daily repeating usage pattern, the MLP's performance is similar to that of the SLP algorithm: it achieves over 90% accuracy owing to the consistency of the input data patterns (Figure 13). This consistency in the usage data also enables better training of the neural network. However, the MLP algorithm is observed to introduce a considerable amount of lag into the application because of this training.
Figure 13: MLP Prediction Accuracy for daily repeating usage pattern

The error correction for the MLP is based on mean-squared error reduction; the measure of interest is the number of iterations required to reach an acceptable output. To achieve good mean-squared error reduction, about 10,000 iterations are needed. During testing with the 15 widgets, an average lag of about 200 ms was incurred for every learning period. This lag may become significant as more widgets and contexts are involved, since the learning duration is proportional to the product of the number of widgets and the number of contexts.
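To make this scaling concrete, the sketch below extrapolates the learning-period lag from the measured 15-widget case under the stated proportionality to widgets times contexts. The single-context baseline is an assumption introduced only to anchor the ratio.

    BASE_WIDGETS, BASE_CONTEXTS, BASE_LAG_MS = 15, 1, 200   # measured 15-widget case; one context assumed

    def learning_lag_ms(widgets: int, contexts: int) -> float:
        # Lag estimate, assuming lag scales with widgets * contexts.
        scale = (widgets * contexts) / (BASE_WIDGETS * BASE_CONTEXTS)
        return BASE_LAG_MS * scale

    for w, c in [(15, 1), (30, 2), (50, 4)]:
        print(f"{w} widgets x {c} contexts -> ~{learning_lag_ms(w, c):.0f} ms per learning period")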