To reduce processing overhead, the MI algorithm uses only the most recent user activity for all widgets on the phone to decide whether to update rules, as shown in Figure 4.
Figure 4: Minimal Intelligence Algorithm Process Flow

Each widget has an indicator in the rule, and the widget is displayed for the context if the indicator is 1. The algorithm first checks whether the current user activity for each widget is present or absent (1 or 0). If user activity is present and the rule indicator is 0 (the user accessed the widget but the widget is not displayed), the widget is included in the rule. Conversely, if user activity is absent and the rule indicator is 1 (the user did not access the widget but the widget is displayed), the widget is removed from the rule. MI does not track user activity patterns over time; it uses only the most recent user activity data on all widgets to set the rules.
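This rule update can be sketched in a few lines. In this minimal sketch, widgets are identified by name, and both the rule and the activity snapshot are assumed to be dictionaries mapping widget name to 0/1; the widget names and data structures are illustrative, not taken from the paper.

```python
def mi_update(rule, activity):
    """Update the rule using only the most recent activity snapshot."""
    updated = dict(rule)
    for widget, active in activity.items():
        shown = updated.get(widget, 0)
        if active == 1 and shown == 0:
            updated[widget] = 1   # accessed but not displayed: include it
        elif active == 0 and shown == 1:
            updated[widget] = 0   # displayed but not accessed: remove it
    return updated

# Illustrative example: names and values are assumptions
rule = {"weather": 1, "music": 0, "news": 1}
activity = {"weather": 1, "music": 1, "news": 0}
print(mi_update(rule, activity))  # {'weather': 1, 'music': 1, 'news': 0}
```

Because only the latest snapshot is consulted, a single access (or a single idle period) is enough to flip a widget's indicator, which keeps the update cheap but makes it sensitive to one-off behavior.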

Figure 5: Single Layer Perceptron with Error Correction Process Flow


The second algorithm is a modified Single Layer Perceptron (SLP) with error correction. The SLP neural network is a simple feed-forward neural network consisting of a single processing unit (cell). Each input to the cell is associated with a weight that can be positive or negative to indicate reinforcement or inhibition of the cell. The cell sums the products of the weighted inputs, adds a bias, and applies a sigmoid function to the result. The bias adjustment (error correction) is based on the previous prediction. The SLP process flow is shown in Figure 5.
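The single-cell computation and its error correction can be sketched as follows. The learning rate, the two-input layout, and the delta-rule-style update are illustrative assumptions; the paper's exact correction scheme may differ.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class SLP:
    """A single cell: weighted sum of inputs plus bias, through a sigmoid."""

    def __init__(self, n_inputs, rate=0.1):   # rate is an assumed learning rate
        self.weights = [0.0] * n_inputs
        self.bias = 0.0
        self.rate = rate

    def predict(self, inputs):
        s = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        return sigmoid(s)

    def correct(self, inputs, target):
        """Adjust weights and bias based on the error of the prediction."""
        error = target - self.predict(inputs)
        for i, x in enumerate(inputs):
            self.weights[i] += self.rate * error * x
        self.bias += self.rate * error

cell = SLP(2)
for _ in range(1000):
    cell.correct([1.0, 0.0], 1.0)   # reinforce: widget was accessed
    cell.correct([0.0, 1.0], 0.0)   # inhibit: widget was not accessed
print(cell.predict([1.0, 0.0]) > 0.5, cell.predict([0.0, 1.0]) < 0.5)  # True True
```

After repeated corrections, the first weight grows positive (reinforcement) and the second negative (inhibition), so the cell's output separates the two input patterns.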


The third algorithm is a Multi Layer Perceptron (MLP) with Back Propagation. The MLP consists of multiple layers of cells and permits more complex, nonlinear relationships between input data and output results. There is an input layer, a hidden layer and an output layer. The input layer provides the weighted inputs to the hidden layer. The hidden layer results are computed and used as weighted inputs to the output layer, which uses them to compute the final output. With Back Propagation, the output error is corrected by propagating it backward through the network and adjusting the weights in each layer. Convergence can take some time depending on the allowable error in the output. The steps of back propagation (for learning data E and expected output C) are:

1. Compute the forward propagation of E through the network (compute the weighted sum, S, and the output, u, of every cell).
2. From the output layer, make a backward pass through the hidden layer, computing the error values:
a. For each output cell o: error_o = (C_o − u_o) · u_o · (1 − u_o)
b. For each hidden cell i: error_i = (Σ_m w_im · error_m) · u_i · (1 − u_i), where m ranges over all output cells connected to hidden cell i
c. Here w denotes a weight and u a cell output.
3. Lastly, update the weights within the network as follows:
a. For a weight connecting hidden cell i to output cell o: w_io = w_io + p · error_o · u_i
b. For a weight connecting input cell j to hidden cell i: w_ji = w_ji + p · error_i · u_j
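The steps above can be sketched for a network with one hidden layer and a single output cell. The XOR-style training data, the layer sizes, and the learning rate p = 0.5 are illustrative assumptions, not values from the paper.

```python
import math
import random

random.seed(1)

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

n_in, n_hid = 2, 3      # assumed layer sizes
p = 0.5                 # assumed learning rate
# Each row holds the weights into one hidden cell; the extra entry is the bias.
w_hid = [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
w_out = [random.uniform(-1, 1) for _ in range(n_hid + 1)]

def forward(E):
    """Step 1: compute the output u of every cell."""
    u_hid = [sigmoid(sum(w * x for w, x in zip(row, E + [1.0]))) for row in w_hid]
    u_out = sigmoid(sum(w * x for w, x in zip(w_out, u_hid + [1.0])))
    return u_hid, u_out

def train(E, C):
    u_hid, u_out = forward(E)
    err_out = (C - u_out) * u_out * (1 - u_out)            # step 2a
    err_hid = [w_out[i] * err_out * u_hid[i] * (1 - u_hid[i])
               for i in range(n_hid)]                      # step 2b
    for i in range(n_hid):                                 # step 3a
        w_out[i] += p * err_out * u_hid[i]
    w_out[n_hid] += p * err_out                            # bias update
    for i in range(n_hid):                                 # step 3b
        for j in range(n_in):
            w_hid[i][j] += p * err_hid[i] * E[j]
        w_hid[i][n_in] += p * err_hid[i]                   # bias update

data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

def total_error():
    return sum((C - forward(E)[1]) ** 2 for E, C in data)

before = total_error()
for _ in range(2000):
    for E, C in data:
        train(E, C)
print(f"squared error before: {before:.3f}  after: {total_error():.3f}")
```

Each training pass performs exactly one forward pass and one backward pass; over many epochs the total squared error shrinks as the weight updates accumulate.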

The forward pass through the network computes the cell outputs, including the final output. The backward pass computes the gradient, and the weights are then updated so that the error is reduced. The learning rate, p, limits how much the weights can change in a single update. Although a smaller learning rate may take longer to converge, it reduces the chance of overshooting the target; if the learning rate is set too high, the network may not converge at all. The process flow of the MLP is shown in Figure 6.
Figure 6: Multi Layer Perceptron with Back Propagation Process Flow