Human activity recognition based on wearable sensors is widely used in many fields, but recognizing complex daily activities with multiple heterogeneous sensors remains difficult. Fusing data from multiple heterogeneous sensors raises compatibility problems, which lowers the recognition accuracy for concurrent complex activities. This paper presents a multi-task deep learning model based on multi-sensor decision-level data fusion. The model uses deep learning to automatically extract features from each sensor's raw data. In addition, a multi-task joint training method divides concurrent complex activity recognition into multiple sub-tasks; the sub-tasks share a network structure and promote each other's learning, which improves the generalization performance of the model. Tests show that the model achieves a recognition accuracy of 94.6% for cyclical activities, 93.4% for non-cyclical activities, and 92.8% for concurrent complex activities, on average 8% higher than three baseline models.
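The data flow described above — per-sensor feature extraction, per-task classification heads sharing a representation, and decision-level fusion across sensors — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the sensor count, feature dimension, sub-task label counts, and the random linear "extractors" (standing in for the learned deep networks) are all hypothetical, and averaging the per-sensor decisions is just one simple decision-level fusion rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical setup: 3 wearable sensors, each yielding a 16-dim feature
# vector (a stand-in for features a deep network would extract from raw data).
n_sensors, feat_dim = 3, 16
n_cyclic, n_noncyclic = 5, 4  # label counts for the two sub-tasks (illustrative)

# Per-sensor "feature extractors" and the two sub-task heads. The heads are
# shared across sensors, mirroring the shared multi-task network structure.
W_feat = [rng.standard_normal((feat_dim, feat_dim)) for _ in range(n_sensors)]
W_cyc = rng.standard_normal((feat_dim, n_cyclic))
W_non = rng.standard_normal((feat_dim, n_noncyclic))

def predict(sensor_inputs):
    """Each sensor produces its own per-task probabilities (decisions);
    fusion then averages the decisions across sensors (decision level)."""
    cyc_decisions, non_decisions = [], []
    for x, W in zip(sensor_inputs, W_feat):
        h = np.tanh(x @ W)                        # per-sensor representation
        cyc_decisions.append(softmax(h @ W_cyc))  # sub-task 1: cyclical
        non_decisions.append(softmax(h @ W_non))  # sub-task 2: non-cyclical
    return np.mean(cyc_decisions, axis=0), np.mean(non_decisions, axis=0)

inputs = [rng.standard_normal(feat_dim) for _ in range(n_sensors)]
p_cyc, p_non = predict(inputs)  # one probability vector per sub-task
```

Because fusion happens after each sensor's classifier has produced a decision, a sensor with an incompatible sampling rate or modality only needs its own extractor; the fused output is still a pair of well-formed probability distributions, one per sub-task.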