Skeleton Based Human Action Recognition Using Doubly Linked List

© 2022 by IJCTT Journal
Volume-70 Issue-2
Year of Publication : 2022
Authors : Muhammad Sajid Khan, Andrew Ware, Usman Habib, Muhammad Junaid Khalid, Nisar Bahoo
DOI : 10.14445/22312803/IJCTT-V70I2P103

How to Cite?

Muhammad Sajid Khan, Andrew Ware, Usman Habib, Muhammad Junaid Khalid, Nisar Bahoo, "Skeleton Based Human Action Recognition Using Doubly Linked List," International Journal of Computer Trends and Technology, vol. 70, no. 2, pp. 18-21, 2022. Crossref, https://doi.org/10.14445/22312803/IJCTT-V70I2P103

Abstract
Human action recognition is a significant research focus because of its many applications in robotics and automation. This paper demonstrates how doubly linked lists can be used to sequence the 3D human actions recorded as video clips in the NTU RGB+D 60 dataset. The nodes and edges in the list represent the joints and bone structure of the human skeleton. Each node holds information about the joint's position within the skeleton and pointers to its parent and child nodes. The doubly linked list is constructed by first inserting the nodes representing the torso joints and then adding the nodes for the limbs' joints; this ordering preserves the structural shape of the skeleton. The linked lists for many known activities are used as the training set for a classifier capable of identifying subsequent human actions. The classifier is based on the displacement between consecutive nodes in the action sequence. This approach minimises the complexity of the tree structure and improves the accuracy of 3D action recognition.
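The idea sketched in the abstract can be illustrated in a few lines of code. The sketch below is not the authors' implementation: the joint names, insertion order, and displacement feature are assumptions chosen only to show the mechanics of a doubly linked skeleton list built torso-first, with per-joint displacement computed between two consecutive frames.

```python
# Illustrative sketch only (not the paper's implementation): a doubly
# linked list of skeleton joints, built torso joints first, plus a
# simple displacement feature between consecutive frames.

class JointNode:
    """One skeleton joint; prev/next act as parent/child pointers."""
    def __init__(self, name, position):
        self.name = name
        self.position = position   # (x, y, z) joint coordinates
        self.prev = None           # parent joint in the sequence
        self.next = None           # child joint in the sequence

def build_skeleton(joints):
    """Link joints in the given order (torso first, then limbs)."""
    head = tail = None
    for name, pos in joints:
        node = JointNode(name, pos)
        if head is None:
            head = tail = node
        else:
            tail.next = node       # child pointer
            node.prev = tail       # parent pointer
            tail = node
    return head

def displacement_features(frame_a, frame_b):
    """Per-joint Euclidean displacement between two consecutive frames."""
    feats = []
    a, b = frame_a, frame_b
    while a is not None and b is not None:
        d = sum((pa - pb) ** 2 for pa, pb in zip(a.position, b.position)) ** 0.5
        feats.append(d)
        a, b = a.next, b.next
    return feats

# Hypothetical joint names and coordinates: torso joints inserted first,
# then a limb joint, mimicking the torso-then-limbs construction order.
frame1 = build_skeleton([("spine_base", (0.0, 0.0, 0.0)),
                         ("spine_mid", (0.0, 0.3, 0.0)),
                         ("shoulder_left", (-0.2, 0.5, 0.0))])
frame2 = build_skeleton([("spine_base", (0.0, 0.0, 0.0)),
                         ("spine_mid", (0.0, 0.3, 0.0)),
                         ("shoulder_left", (-0.2, 0.5, 0.1))])
print(displacement_features(frame1, frame2))
```

In a full pipeline, the displacement vectors for every frame pair of a clip would form the feature sequence fed to the classifier; here only two three-joint frames are shown.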

Keywords
Skeleton-based action recognition, Human action recognition, Video processing, Doubly linked list.
