Abstract: Human activity recognition based on wearable sensors is widely used in many fields, but recognizing complex activities from multiple wearable sensors remains difficult: signals from different sensors are often incompatible, and classification accuracy for complex activities is low. This paper presents a multi-sensor decision-level data fusion model based on multi-task deep learning for complex activity recognition. The model uses deep learning to extract features automatically from the raw sensor data. In addition, a multi-task learning method divides a concurrent complex activity into multiple sub-tasks; the sub-tasks share a common network structure and promote one another's learning, which improves the generalization performance of the model. Experiments show that the model achieves recognition accuracies of 94.6% for cyclical activities, 93.4% for non-cyclical activities, and 92.8% for concurrent complex activities, on average 8% higher than those of three baseline models.
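To make the architecture described above concrete, the following is a minimal PyTorch sketch (not the authors' implementation) of a multi-task model for wearable-sensor data: a shared feature extractor learns features from raw multi-sensor windows, and one classifier head per sub-task of the concurrent complex activity sits on top of it. The layer sizes, the number of sensor channels, and the two-task split are illustrative assumptions.

```python
# Minimal sketch of a shared-backbone multi-task HAR model.
# All sizes and the task split are illustrative assumptions,
# not the configuration used in the paper.
import torch
import torch.nn as nn

class MultiTaskHAR(nn.Module):
    def __init__(self, n_channels=9, n_classes_per_task=(5, 4)):
        super().__init__()
        # Shared backbone: 1-D convolutions extract features directly
        # from raw sensor windows, so no hand-crafted features are needed.
        self.backbone = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # One classifier head per sub-task; the heads share the backbone,
        # which is the multi-task coupling described in the abstract.
        self.heads = nn.ModuleList(
            [nn.Linear(64, n) for n in n_classes_per_task]
        )

    def forward(self, x):  # x: (batch, channels, time)
        z = self.backbone(x)
        return [head(z) for head in self.heads]

# Training combines the per-task losses; an unweighted sum is one common choice.
model = MultiTaskHAR()
x = torch.randn(8, 9, 128)  # 8 windows, 9 sensor channels, 128 time steps
y = [torch.randint(0, 5, (8,)), torch.randint(0, 4, (8,))]  # one label set per sub-task
logits = model(x)
loss = sum(nn.functional.cross_entropy(l, t) for l, t in zip(logits, y))
loss.backward()
```

Because the sub-task heads backpropagate through the same backbone, each task regularizes the features learned for the others, which is the mechanism the abstract credits for the improved generalization.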