Automatic Happiness Strength Analysis of a Group of People using Facial Expressions

 
International Journal of Computer Trends and Technology (IJCTT)          
 
© 2016 by IJCTT Journal
Volume-34 Number-3
Year of Publication : 2016
Authors : Sagiri Prasanthi, Maddali M.V.M. Kumar
  10.14445/22312803/IJCTT-V34P127

MLA

Sagiri Prasanthi, Maddali M.V.M. Kumar "Automatic Happiness Strength Analysis of a Group of People using Facial Expressions". International Journal of Computer Trends and Technology (IJCTT) V34(3):150-155, April 2016. ISSN:2231-2803. www.ijcttjournal.org. Published by Seventh Sense Research Group.

Abstract -
The recent growth of social media has given users a platform to engage and interact socially with a large population. Hundreds of thousands of videos, photos and group images from events and social gatherings are uploaded to the web by users every day. There is a growing interest in designing systems capable of understanding human expressions of emotion and affective displays. Since images and videos from social events generally contain multiple subjects, analysing these groups of people is an important step. In this paper, we study the problem of happiness strength analysis of a group of people in an image using facial expression analysis. A user perception study is conducted to understand the attributes that affect a person’s perception of the happiness strength of a group. We identify the difficulties in developing an automatic mood analysis system and propose models built on the attributes from the study. An “in the wild” image database is collected. To evaluate the methods, both quantitative and qualitative experiments are performed and applied to the problems of shot selection, event summarization and album creation. The experiments show that the attributes defined in the paper provide useful information for group expression analysis, with results close to human perception.
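The group-level score described above is built by combining per-face happiness estimates. As a minimal illustration of this idea (a sketch, not the authors' exact model), one simple aggregation weights each detected face's estimated happiness intensity by its relative size in the image, on the assumption that larger, closer faces contribute more to the perceived group mood:

```python
def group_happiness(face_scores, face_areas):
    """Size-weighted mean of per-face happiness intensities.

    face_scores: estimated happiness intensity per detected face
                 (e.g. 0 = neutral up to 5 = thrilled, a common
                 annotation scale for group mood).
    face_areas:  pixel area of each face's bounding box, used as a
                 relevance weight.  Weighting by face size is an
                 illustrative assumption, not the paper's model.
    """
    if not face_scores:
        return 0.0  # no faces detected: neutral group score
    total_area = sum(face_areas)
    if total_area == 0:
        # fall back to an unweighted mean if areas are unavailable
        return sum(face_scores) / len(face_scores)
    weighted = sum(s * a for s, a in zip(face_scores, face_areas))
    return weighted / total_area

# Example: three faces; the large central face is smiling broadly,
# so it dominates the group score.
print(group_happiness([4.0, 1.0, 2.0], [9000, 2500, 2500]))
```

In a full pipeline, the per-face scores would come from a smile or expression classifier applied to faces found by a detector such as Viola–Jones [23], and richer attributes (pose, occlusion, position in the frame) could replace the simple area weight.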

References
[1] M. Caroll, “How tumblr and pinterest are fueling the image intelligence problem,” Forbes, January 2012.
[2] W. Ge, R. T. Collins, and B. Ruback, “Vision-based analysis of small groups in pedestrian crowds,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 5, pp. 1003–1016, 2012.
[3] J. Hernandez, M. E. Hoque, W. Drevo, and R. W. Picard, “Mood meter: counting smiles in the wild,” in Proceedings of the 2012 ACM Conference on Ubiquitous Computing, 2012, pp. 301–310.
[4] A. Dhall, R. Goecke, S. Lucey, and T. Gedeon, “Collecting large, richly annotated facial-expression databases from movies,” IEEE Multimedia, vol. 19, no. 3, pp. 34–41, 2012.
[5] A. C. Gallagher and T. Chen, “Understanding Images of Groups of People,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009, pp. 256–263.
[6] G. Wang, A. C. Gallagher, J. Luo, and D. A. Forsyth, “Seeing people in social context: Recognizing people and social relationships,” in Proceedings of the European Conference on Computer Vision, 2010, pp. 169–182.
[7] C. Küblbeck and A. Ernst, “Face detection and tracking in video sequences using the modified census transformation,” Image and Vision Computing, vol. 24, no. 6, pp. 564–572, 2006.
[8] S. G. Barsade and D. E. Gibson, “Group emotion: A view from top and bottom,” in Deborah Gruenfeld, Margaret Neale, and Elizabeth Mannix (Eds.), Research on Managing Groups and Teams, vol. 1, pp. 81–102, 1998.
[9] J. R. Kelly and S. G. Barsade, “Mood and emotions in small groups and work teams,” Organizational Behavior and Human Decision Processes, vol. 86, no. 1, pp. 99–130, 2001.
[10] A. C. Murillo, I. S. Kwak, L. Bourdev, D. J. Kriegman, and S. Belongie, “Urban tribes: Analyzing group photos from a social perspective,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition and Workshops, 2012, pp. 28–35.
[11] M. H. Kiapour, K. Yamaguchi, A. C. Berg, and T. L. Berg, “Hipster wars: Discovering elements of fashion styles,” in Computer Vision– ECCV 2014. Springer, 2014, pp. 472– 488.
[12] Y. J. Lee and K. Grauman, “Face discovery with social context,” in Proceedings of the British Machine Vision Conference (BMVC), 2011, pp. 1–11.
[13] A. Torralba and P. Sinha, “Statistical context priming for object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2001, pp. 763–770.
[14] D. Parikh, C. L. Zitnick, and T. Chen, “From appearance to context-based recognition: Dense labeling in small images,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2008, pp. 1–8.
[15] Z. Stone, T. Zickler, and T. Darrell, “Autotagging facebook: Social network context improves photo annotation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008, pp. 1–8.
[16] O. K. Manyam, N. Kumar, P. N. Belhumeur, and D. J. Kriegman, “Two faces are better than one: Face recognition in group photographs,” in Proceedings of the International Joint Conference on Biometrics (IJCB), 2011, pp. 1–8.
[17] J. Fiss, A. Agarwala, and B. Curless, “Candid portrait selection from video,” ACM Transactions on Graphics, p. 128, 2011.
[18] S. Zhang, Q. Tian, Q. Huang, W. Gao, and S. Li, “Utilizing affective analysis for efficient movie browsing,” in Proceedings of the IEEE International Conference on Image Processing (ICIP), 2009, pp. 1853–1856.
[19] J. Whitehill, G. Littlewort, I. R. Fasel, M. S. Bartlett, and J. R. Movellan, “Toward Practical Smile Detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 11, pp. 2106–2111, 2009.
[20] M. Everingham, J. Sivic, and A. Zisserman, “‘Hello! My name is... Buffy’ – Automatic Naming of Characters in TV Video,” in Proceedings of the British Machine Vision Conference, 2006, pp. 899–908.
[21] B. C. Russell, A. Torralba, K. P. Murphy, and W. T. Freeman, “Labelme: A database and web-based tool for image annotation,” International Journal of Computer Vision, vol. 77, no. 1-3, pp. 157–173, 2008.
[22] F. Korc and D. Schneider, “Annotation tool,” University of Bonn, Department of Photogrammetry, Tech. Rep. TR-IGG-P-2007-01, 2007.
[23] P. A. Viola and M. J. Jones, “Rapid object detection using a boosted cascade of simple features,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2001, pp. I–511.
[24] C.-C. Chang and C.-J. Lin, “LIBSVM: a library for support vector machines,” 2001.
[25] A. Dhall, J. Joshi, K. Sikka, R. Goecke, and N. Sebe, “The More the Merrier: Analysing the Effect of a Group of People In Images,” in Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition (FG), 2015.
[26] A. Kleinsmith and N. Bianchi-Berthouze, “Affective body expression perception and recognition: a survey,” IEEE Transactions on Affective Computing, vol. 4, no. 1, pp. 15–33, 2013.
[27] N. Berthouze and L. Berthouze, “Exploring kansei in multimedia information,” Kansei Engineering International, vol. 2, no. 2, pp. 1–10, 2001.

Keywords
Facial expression recognition, group mood, unconstrained conditions.