Figure 11 shows the score of each result set for the different numbers of images. The points made in the general observation can be easily seen in this graph: the total scores for each result set with different numbers of images (i.e., 10, 15, and 20) are more or less similar.

Figure 11: Overall evaluation

Another evaluation is based on the number of images inside the summary. Here we would like to know the evaluators' perception of summaries with different numbers of images. In our experiments, for each image set we initially showed 10 images, then 15 images, and then 20 images of the same representative set.

The evaluation shows that, after seeing the 10-image summaries, 89% of the evaluators were interested in rating the 15-image summary. Figure 12 shows the evaluators' interest ratio for the 15-image set. That is, they were willing to change their rating for the new 15-image summary of the same representative set, either positively or negatively. Four result sets secured a total score higher than or equal to that of the 10-image summaries, while two sets, namely S5-75 and S5-85, scored lower than with 10 images.

Similarly, for the next set of 20 images, 69% of the evaluators were willing to rate the 20-image summaries. Figure 13 shows the evaluators' interest ratio for the 20-image set. The evaluation shows that the 20-image summaries secured almost the same result as the 15-image summaries, so there is little difference in total score between the two.

The interest in moving from 10 to 15 images, and from 15 to 20 images, shows that the number of images inside the summary affects the result.
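The aggregation described above — the fraction of evaluators willing to re-rate a larger summary, and the change in total score between summary sizes — can be sketched as follows. The data structure and names here are illustrative assumptions, not the actual evaluation tooling used in this work.

```python
# Sketch of the evaluation aggregation (illustrative data, not the real study).
# Each response holds an evaluator's score for a summary and whether they were
# interested in rating the next, larger summary of the same representative set.

def interest_ratio(responses):
    """Fraction of evaluators willing to rate the next summary size."""
    interested = sum(1 for r in responses if r["interested"])
    return interested / len(responses)

def total_score(responses):
    """Total score a summary received across all evaluators."""
    return sum(r["score"] for r in responses)

# Hypothetical responses for one representative set at two summary sizes.
ratings_10 = [{"score": 4, "interested": True},
              {"score": 3, "interested": True},
              {"score": 5, "interested": False}]
ratings_15 = [{"score": 4, "interested": True},
              {"score": 5, "interested": True},
              {"score": 4, "interested": True}]

# Share of evaluators moving from the 10-image to the 15-image summary,
# and the resulting change in total score for this set.
print(interest_ratio(ratings_10))
print(total_score(ratings_15) - total_score(ratings_10))
```

A positive score difference corresponds to the sets that "secured a higher total score" at 15 images, and a negative one to sets such as S5-75 and S5-85.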

Figure 12: Distribution of participants' interest in viewing the 15-image summary

Figure 13: Distribution of participants' interest in viewing the 20-image summary


The focus of this work is to generate the best representative set and summary of a large dataset by cropping images randomly and sequentially with different coverage values. Since the algorithm takes considerable time for the overall computation, we obtained good human-based evaluations for sequential datasets rather than random datasets. We also observed that higher coverage gives the best results, regardless of sequential or random windows, and that the number of images inside a summary affects the results.

For future work, we can further analyze more values of the number of windows N and the coverage C. We are also interested in extending this work to find a faster algorithm for dealing with large datasets. We would also like to study the effect of the number of images inside the summary in more depth, in order to reach a more comprehensive conclusion.