Mechanical Engineering

Multi-locomotion mode human-robot interaction technology for self-paced treadmills

  • QIAN Yuyang ,
  • LU Sen ,
  • YANG Kaiming ,
  • ZHU Yu
  • Beijing Key Laboratory of Precision/Ultra-Precision Manufacturing Equipments and Control, Department of Mechanical Engineering, Tsinghua University, Beijing 100084, China
QIAN Yuyang (1995—), male, Ph.D. candidate.

Received date: 2022-10-12

  Online published: 2023-11-06


Abstract

A self-paced treadmill is an important device for human-robot interaction in virtual reality (VR) environments, and enabling multi-locomotion-mode interaction on such a device in VR is key to broadening its application scenarios. This paper constructs a multi-locomotion-mode interaction control architecture for self-paced treadmills, comprising a recognition layer and a control layer. The recognition layer performs human locomotion mode recognition with high generalization capability based on plantar pressure insoles; the control layer switches control strategies for the different locomotion modes (standing, walking, backward walking, running, and jumping) according to human stability conditions. The architecture automatically combines the recognition results with treadmill motion control, realizing multi-locomotion-mode human-robot interaction. Experimental results show that the layered control method achieves stable and smooth multi-locomotion-mode interaction, ensures natural gait and stable posture during interaction, and meets the human-robot interaction requirements of self-paced treadmills.

Cite this article

QIAN Yuyang, LU Sen, YANG Kaiming, ZHU Yu. Multi-locomotion mode human-robot interaction technology for self-paced treadmills[J]. Journal of Tsinghua University (Science and Technology), 2023, 63(12): 1961-1973. DOI: 10.16511/j.cnki.qhdxxb.2022.25.042

Abstract

[Objective] A self-paced treadmill (SPT) is key human-robot interaction equipment for virtual reality: it enables a user to walk at the intended speed through a re-positioning technology. Realizing multimode interaction on SPTs is crucial for enriching their applications, yet existing studies realize only a few interaction modes. To realize multimode interaction on self-paced treadmills, this paper proposes a novel multilayer control framework.

[Methods] The control system is divided into two layers: the recognition layer and the control layer. First, in the recognition layer, a novel hybrid spatial-temporal graph convolutional neural network (HSTGCN) is proposed to realize user-independent human locomotion mode recognition based on plantar pressure insoles. The proposed network separates the pressure and acceleration signals and processes them individually. Graph convolutional layers extract the natural spatial topology among the pressure nodes. Long short-term memory layers individually extract temporally dependent features of the pressure and acceleration signals, and the multimodal features are fused for final recognition. A multilayer perceptron maps the fused features to the locomotion modes. By extracting the natural spatial-temporal features of multimodal data during human locomotion, high generalization capability of the recognition results can be expected. Second, in the control layer, control strategies for the different locomotion modes are designed according to the stability conditions of human locomotion. A walking speed feedforward control strategy re-positions the user and ensures natural gait in the walking mode; variable-gain control strategies manage acceleration in the running and backward walking modes; and a buffer control strategy improves stability during landing in the jumping mode. Then, a finite state machine automatically switches the control strategies, with state transitions driven by the recognition results.

[Results] 1) The proposed locomotion mode recognition method was evaluated on a dataset comprising eight subjects and five locomotion modes through leave-one-subject-out cross-validation, and it was compared with a convolutional neural network (CNN) and a domain-adversarial neural network (DANN). Experimental results indicated that the mean and standard deviation of the classification accuracies of the CNN, DANN, and HSTGCN were (90.26±8.54)%, (97.71±3.60)%, and (97.37±1.40)%, respectively. These results validated that the proposed method achieves high generalization capability without any dependence on data from target subjects, reducing the burden of repeated data collection and network training. 2) Based on the recognition results, multi-locomotion mode human-robot interaction experiments were conducted using the finite state machine. Experimental results indicated that a user can freely change locomotion modes on the treadmill and that balance was not significantly affected by the treadmill acceleration.

[Conclusions] The proposed framework automatically combines the recognition results with treadmill control and realizes multi-locomotion mode human-robot interaction. Experimental results validate that the proposed multilayer control strategy achieves stable and smooth multi-locomotion mode human-robot interaction, ensures natural gait and posture stability of the user, and meets the requirements of multi-locomotion mode human-robot interaction for self-paced treadmills.
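The spatial side of the recognition layer rests on graph convolution over the pressure nodes of the insole. A minimal NumPy sketch of one graph-convolution step is shown below; the chain topology, node count, feature sizes, and ReLU activation are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def graph_conv(x, adj, weight):
    """One graph convolution step, A_hat @ X @ W, where A_hat is the
    symmetrically normalized adjacency matrix with self-loops added."""
    a = adj + np.eye(adj.shape[0])               # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    a_hat = d_inv_sqrt @ a @ d_inv_sqrt          # symmetric normalization
    return np.maximum(a_hat @ x @ weight, 0.0)   # ReLU activation

# Illustrative example: 4 pressure nodes in a chain topology,
# 2 input features per node, 3 output features per node.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = rng.standard_normal((4, 2))
w = rng.standard_normal((2, 3))
h = graph_conv(x, adj, w)
print(h.shape)  # (4, 3)
```

Each output row mixes a node's features with those of its graph neighbors, which is how the spatial topology between pressure nodes enters the learned representation.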
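The finite state machine that switches control strategies can be sketched as follows. The mode names follow the five modes in the abstract; the transition rule (switch only after the recognizer reports a new mode for several consecutive frames, to suppress spurious recognitions) is an illustrative assumption:

```python
MODES = {"standing", "walking", "back_walking", "running", "jumping"}

class ModeSwitchFSM:
    """Switch the active control strategy when the recognizer reports
    a new locomotion mode for `hold` consecutive frames (debouncing)."""
    def __init__(self, hold=3):
        self.state = "standing"
        self.hold = hold
        self._candidate = None
        self._count = 0

    def update(self, recognized_mode):
        if recognized_mode not in MODES:
            raise ValueError(f"unknown mode: {recognized_mode}")
        if recognized_mode == self.state:
            self._candidate, self._count = None, 0      # no change requested
        elif recognized_mode == self._candidate:
            self._count += 1
            if self._count >= self.hold:                # confirmed: transition
                self.state = recognized_mode
                self._candidate, self._count = None, 0
        else:
            self._candidate, self._count = recognized_mode, 1
        return self.state

fsm = ModeSwitchFSM(hold=3)
stream = ["standing", "walking", "walking", "walking", "running"]
states = [fsm.update(m) for m in stream]
print(states)  # ['standing', 'standing', 'standing', 'walking', 'walking']
```

A single spurious "running" frame does not trigger a switch, which mirrors the role of the state machine in keeping the interaction stable while the recognizer streams per-frame results.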
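Leave-one-subject-out cross-validation, used above to measure generalization to unseen users, can be sketched in a few lines; the toy dataset stands in for the insole recordings:

```python
def leave_one_subject_out(samples):
    """Yield (held_out, train, test) splits where each subject in turn
    is excluded from training. `samples` is a list of
    (subject_id, features, label) tuples."""
    subjects = sorted({s for s, _, _ in samples})
    for held_out in subjects:
        train = [x for x in samples if x[0] != held_out]
        test = [x for x in samples if x[0] == held_out]
        yield held_out, train, test

# Toy dataset: 3 subjects, 2 samples each.
data = [(s, [float(s)], "walk") for s in (1, 2, 3) for _ in range(2)]
for held_out, train, test in leave_one_subject_out(data):
    # No test sample's subject ever appears in the training split.
    assert all(s != held_out for s, _, _ in train)
    print(held_out, len(train), len(test))
```

Because the test subject contributes no training data, the reported accuracy reflects user-independent performance, which is why the (97.37±1.40)% HSTGCN result supports the generalization claim.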
