Table of Contents, Volume 64, Issue 10
SPECIAL SECTION: ROBOTICS
Segmentation and location algorithm for oblong holes in robotic automatic assembly
JIANG Xiao, WANG Song, WU Dan
Journal of Tsinghua University (Science and Technology). 2024, 64(10): 1677-1685. DOI: 10.16511/j.cnki.qhdxxb.2024.27.023
[Objective] Oblong holes are commonly used across various industries to improve fault tolerance and adjustment capabilities. However, their complex geometric characteristics pose significant challenges for vision detection and location algorithms in industrial applications, impacting their utilization in automatic assembly processes. [Methods] This research investigates a high-precision and robust vision segmentation and location algorithm tailored for oblong holes. First, the geometric features of oblong holes, which are symmetric but lack a simple analytical description, are analyzed. This complexity renders traditional imaging methods ineffective for accurate localization. The detection and segmentation of oblong hole features are conducted using a novel vision location algorithm that integrates deep learning with conventional image processing techniques. Specifically, the algorithm employs a sequential connection framework of YOLO and fully convolutional networks to achieve accurate localization. This framework first identifies the region of interest and then performs semantic segmentation. YOLO networks rapidly detect the region of interest, prioritizing areas where the oblong hole is prominently featured. Semantic segmentation is subsequently performed using fully convolutional networks. Afterward, a skeleton feature extraction method based on medial axis transformation is applied to precisely locate the oblong hole. This method effectively reduces the impact of shape errors from semantic segmentation, achieving subpixel accuracy. However, medial axis transformation may produce redundant lines owing to the presence of image artifacts, potentially leading to inaccuracies. To address this issue, principal component analysis is employed to approximate the center of the oblong hole, thereby minimizing errors. For further precision, a Hough transformation ellipse detection method is utilized to identify the central skeleton of the oblong hole, which is interpreted both as a line segment and a special ellipse. The center of this skeleton represents the center of the oblong hole. [Results] Experimental validation conducted in a specific robotics automatic assembly system confirms the effectiveness of the proposed algorithm. The robustness of the algorithm is further demonstrated through image sampling using camera hardware distinct from that used in the training dataset. Additionally, the impact of surface features and oblong hole shapes on the detection performance is analyzed. The experimental outcomes indicate the optimal performance of the algorithm on objects with nonreflective surfaces, with minimal effect from the shape of the oblong hole on accuracy. Despite potential deformations in segmentation output due to hardware variations, the medial-axis-based location algorithm, which degenerates the segmented oblong hole region into its skeleton, accurately locates the center. The final location error is recorded at 1.05 pixels, which surpasses the accuracy achieved through the direct calculation of the center of gravity of the segmented region. These results underscore the substantial benefits of the algorithm in scenarios with varying hardware and object conditions, demonstrating its high accuracy and exceptional robustness. [Conclusions] By merging deep learning techniques with traditional image processing methods, the location tasks for diverse objects are effectively resolved.
The extraction of highly nonlinear features through deep learning, followed by processing with traditional image methods incorporating prior geometric knowledge, enhances the robustness and accuracy of the algorithm, making it suitable for practical production applications.
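The skeleton-based location step can be illustrated with a short sketch. The Python snippet below (not from the paper) applies a medial-axis transform to a binary oblong-hole mask and approximates the hole centre with a PCA-style analysis of the skeleton points, as the abstract describes; the synthetic mask, the 90th-percentile spur trimming, and all names are illustrative assumptions.

```python
import numpy as np
from skimage.morphology import medial_axis

def locate_oblong_hole(mask: np.ndarray) -> tuple[float, float]:
    """Estimate the centre (row, col) of an oblong hole from a binary mask."""
    skeleton = medial_axis(mask)               # central skeleton; may carry spurious branches
    pts = np.argwhere(skeleton).astype(float)  # (N, 2) skeleton pixel coordinates
    centre = pts.mean(axis=0)                  # first approximation of the hole centre
    cov = np.cov((pts - centre).T)             # PCA: principal axis gives the hole orientation
    axis = np.linalg.eigh(cov)[1][:, -1]
    proj = (pts - centre) @ axis
    keep = np.abs(proj) <= np.percentile(np.abs(proj), 90)   # trim spur-dominated extremes
    refined = pts[keep].mean(axis=0)
    return float(refined[0]), float(refined[1])

# Toy usage: a slot-shaped mask whose true centre is (20, 40).
mask = np.zeros((40, 80), dtype=bool)
mask[15:26, 20:61] = True
print(locate_oblong_hole(mask))
```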
Structural design and workspace analysis of a winch-integrated underwater cable-driven robot based on variable thrust
WU Hao, LI Guotong, LI Dongxing, TANG Xiaoqiang
Journal of Tsinghua University (Science and Technology). 2024, 64(10): 1686-1695. DOI: 10.16511/j.cnki.qhdxxb.2024.21.022
[Objective] Underwater multiple-degree-of-freedom robots possess a broad range of application potentials in diverse fields such as marine resource exploration, scientific investigation, and engineering construction and maintenance. However, during the execution of large-scale, long-distance underwater tasks, the conventional rigid serial and parallel manipulators frequently face the challenge of inadequate working range. Cable-driven parallel robots offer advantages such as large workspace, small inertia, and strong load capacity. However, prevalent cable-driven parallel robots for underwater applications are typically passively tensioned by gravity or buoyancy with their drive units (winches) mounted on a static platform, which constrains their motion ability and reconfigurability. Hence, a winch-integrated underwater cable-driven parallel robot based on a variable thrust mechanism is presented. The proposed robot adopts a hybrid drive form of six cables and a propeller. The thrust generated by the propeller, equivalent to cable tension, is adjustable in terms of magnitude and direction. [Methods] First, the overall mechanical structure of the robot is examined, and its kinematic and static models are established. On the basis of analyzing the judgment criterion of the wrench-feasible workspace (WFW), the wrench-feasibility testing problem under variable thrust is transformed into a constrained quadratic programming problem through the linear approximation method, and a new WFW calculation method is obtained. Then, a set of structural and force parameters of the robot are provided to evaluate and compare the WFWs of the robot with varied moving platform orientations and external forces under constant- and variable-direction thrusts. In addition, a large-span spiral trajectory is selected, a two-norm force index is implemented to optimize the thrust and cable tensions, and then the changes of all driving forces during the quasi-static motion of the robot on the trajectory are assessed under the two different thrust strategies. [Results] Calculation and analysis reveal that under constant-direction thrust, the WFW of the robot appears columnar. Although the moving platform can extend over 10 meters in the Z direction, its motion ranges in the X and Y directions are small, and the WFW is influenced by the orientation of the moving platform and the external forces, which suggests that the robot is susceptible to loss of control. By contrast, under variable-direction thrust, the WFW becomes a cone-shaped space; compared with the condition of constant-direction thrust, the X- and Y-direction motion ranges of the moving platform increase, and the volume of the robot workspace remarkably improves. The simulation for spiral trajectory motion also reveals that under constant-direction thrust, the cable tensions vary substantially, which makes it easy to exceed the tension limits and causes problems such as slack. The change of thrust direction can considerably alleviate the variability of the tensions and guarantee that they remain within feasible limits, hence expanding the robot's range of motion. [Conclusions] Results reveal that the variable thrust mechanism remarkably improves the WFW of the robot, which solves the problem of inadequate working range of the existing underwater multiple-degree-of-freedom robots. This paper can provide a reference for further studies on the design and analysis of underwater cable-driven parallel robots.
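To make the wrench-feasibility idea concrete, the sketch below checks whether bounded cable tensions can balance an external wrench for a toy structure matrix. The paper formulates the variable-thrust case as a constrained quadratic program; this simplified stand-in uses a linear-programming feasibility test, and the matrix, bounds, and external wrench are invented numbers.

```python
import numpy as np
from scipy.optimize import linprog

def wrench_feasible(A, w_ext, t_min, t_max):
    """True if some tension vector t with t_min <= t <= t_max satisfies A @ t + w_ext = 0."""
    n = A.shape[1]
    res = linprog(c=np.zeros(n),                 # pure feasibility problem, no objective
                  A_eq=A, b_eq=-w_ext,
                  bounds=[(t_min, t_max)] * n,
                  method="highs")
    return res.success

# Toy planar example: 3 cables acting on a point mass in 2-D.
A = np.array([[1.0, -0.5, -0.5],
              [0.0,  0.8, -0.8]])
print(wrench_feasible(A, w_ext=np.array([0.0, -5.0]), t_min=1.0, t_max=50.0))
```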
Optimization-based parallel learning of quadruped robot locomotion skills
ZHANG Siyuan, ZHU Xiaoqing, CHEN Jiangtao, LIU Xinyuan, WANG Tao
Journal of Tsinghua University (Science and Technology). 2024, 64(10): 1696-1705. DOI: 10.16511/j.cnki.qhdxxb.2024.27.018
[Objective] Inspired by the skill learning of quadruped animals in nature, deep reinforcement learning has been widely applied to learn the quadruped robot locomotion skill. Through interaction with the environment, robots can autonomously learn complete motion strategies. However, traditional reinforcement learning has several drawbacks, such as large computational requirements, slow convergence of algorithms, and rigid learning strategies, which substantially reduce training efficiency and generate unnecessary time costs. To address these shortcomings, this paper introduces evolutionary strategies into the soft actor-critic (SAC) algorithm, proposing an optimized parallel SAC (OP-SAC) algorithm for the parallel training of quadruped robots using evolutionary strategies and reinforcement learning. [Methods] The algorithm first uses a variant temperature coefficient SAC algorithm to reduce the impact of hyperparameter temperature coefficients on the training process and then introduces evolutionary strategies using the reference trajectory trained by the evolutionary strategy as a sample input to guide the training direction of the SAC algorithm. Additionally, the state information and reward values obtained from SAC algorithm training are used as inputs and offspring selection thresholds for the evolutionary strategy, achieving the decoupling of training data. Furthermore, the algorithm adopts an alternating training approach, introducing a knowledge-sharing strategy where the training results of the evolutionary strategy and reinforcement learning are stored in a common experience pool. Furthermore, a knowledge inheritance mechanism is introduced, allowing the training results of both strategies to be passed on to the next stage of the algorithm. With these two training strategies, the evolutionary strategy and reinforcement learning can guide each other in terms of the training direction and pass useful information between different generations, thereby accelerating the learning process and enhancing the robustness of the algorithm. [Results] The simulation experiment results were as follows: 1) Using the OP-SAC algorithm to train quadruped robots achieves a reward value convergence of approximately 3 000, with stable posture and high speed after training completion. The algorithm can effectively complete the bionic gait learning of quadruped robots. 2) Compared with other algorithms combining SAC and evolutionary strategies, the OP-SAC algorithm has a faster convergence rate and higher reward value after convergence, demonstrating higher robustness in the learned strategies. 3) Although the convergence speed of the OP-SAC algorithm is slower than that of other reinforcement learning algorithms combined with central pattern generator, it ultimately achieves a higher reward value and more stable training results. 4) The ablation experiments confirm the importance of knowledge inheritance and sharing strategies for improving training effectiveness. [Conclusions] The above analysis shows that the proposed OP-SAC algorithm accomplishes the learning of quadruped robot locomotion skill, improves the convergence speed of reinforcement learning to a certain extent, optimizes learning strategies, and significantly enhances training efficiency.
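A toy sketch of the evolutionary-strategy half of such an alternating scheme: Gaussian offspring are kept only if their return beats a threshold handed over from the reinforcement-learning stage, and accepted rollouts are pushed into a shared experience pool. The placeholder reward function, thresholds, and pool format are assumptions, not the OP-SAC implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def episode_return(theta: np.ndarray) -> float:
    """Placeholder 'environment': the return peaks when the policy parameters hit a target."""
    target = np.linspace(-1.0, 1.0, theta.size)
    return -float(np.sum((theta - target) ** 2))

def es_generation(theta, sigma, pop, threshold, shared_pool):
    """One ES generation; offspring above the RL-provided threshold survive and are shared."""
    offspring = theta + sigma * rng.standard_normal((pop, theta.size))
    returns = np.array([episode_return(c) for c in offspring])
    keep = returns >= threshold
    for c, r in zip(offspring[keep], returns[keep]):
        shared_pool.append((c.copy(), r))        # knowledge sharing with the RL stage
    if keep.any():
        theta = offspring[keep].mean(axis=0)     # inherit the mean of the surviving offspring
    return theta, float(returns.max())

theta, pool, best = np.zeros(8), [], -np.inf
for _ in range(50):
    theta, best = es_generation(theta, sigma=0.1, pop=16, threshold=best, shared_pool=pool)
print(round(best, 3), len(pool))
```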
Simulation of three-dimensional deformation of skin in human-robot interaction tasks based on the mass-spring-damper model
ZHAI Jingmei, ZHANG Hao
Journal of Tsinghua University (Science and Technology). 2024, 64(10): 1706-1716. DOI: 10.16511/j.cnki.qhdxxb.2024.26.046
[Objective] In human-robot interaction tasks, where the robot moves tangentially along the skin surface with a specified normal force, stacking and stretching deformations are displayed by the skin ahead of and behind the movement of the end effector. Discomfort, including sensations of compression and pulling on the human body, can be attributed to these deformations. In addition, the anticipated operational trajectory can deviate because of such deformations. Therefore, under stick-slip friction, this paper introduces a three-dimensional skin deformation simulation model. [Methods] First, a three-layered mass-spring-damper (MSD) model, representing the mechanical properties of the skin, muscle fat, and bone layers, is established. Considering the tensile, shear, and bending forces, this model describes the skin deformations. Vision processing methods, including filtering, cropping, uniform sampling, and hand-eye calibration, are employed on the point cloud data obtained from the operation area to establish the particle position of the model. Spring-damper elements, comprising springs and dampers arranged in parallel, are used to connect adjacent particles in the MSD model. Combining the modulus of elasticity of various tissue layers helps determine the elastic coefficient of the spring. For the damping properties, a damping algorithm that simulates the viscosity of tissues by reducing the velocity of particles is included in the model. After establishing the simulation model, the stick-slip friction mechanisms between a rigid end effector and a flexible skin surface during tangential sliding in real human-robot interaction tasks are investigated from macroscopic and microscopic perspectives. A particle dynamics equation is established based on the positional dynamics constraints and an improved Kelvin-Voigt dynamic model to facilitate dynamic model simulation under the stick-slip friction. The semi-implicit Euler method is finally employed to solve for the particle position information. The particles of the model are fitted using a cubic spline interpolation surface to obtain three-dimensional deformation information of the skin under various operational environments. [Results] Based on the skin-stretch measurement experimental data, the model displayed vertical and horizontal displacement errors of 0.157 and 0.562 mm, respectively, during a reciprocating linear sliding process with a 17.6 mm travel distance. A robotic arm massage experiment platform and a measurement vision system for skin surface deformation, measuring the stretch deformation of the forearm and the stacking deformation of the upper arm, were established to further verify the accuracy of the model. The model simulation produced stretch deformation with average errors of 0.295 and 0.360 mm on the X-axis of tangential movement and the Z-axis of normal loading, respectively, revealing standard deviations of 0.164 and 0.085 mm. The stacking deformation exhibited average errors of 0.317, 0.248, and 0.471 mm in the X, Y, and Z directions, respectively, with standard deviations of 0.090, 0, and 0.232 mm, respectively. [Conclusions] The proposed simulation model demonstrates minimal error and high stability, enabling an accurate simulation of three-dimensional skin deformations due to various working environments in human-robot interaction tasks. Important references for parameter selection and online trajectory planning and control can be obtained using this model to enhance the comfortable operation experience.
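The integration step can be sketched in a few lines: particles joined by parallel spring-damper (Kelvin-Voigt) elements, advanced with the semi-implicit Euler method (velocity first, then position). The 1-D chain, stiffness, damping, and masses below are illustrative values rather than the paper's tissue parameters.

```python
import numpy as np

n, k, c, m, dt = 20, 400.0, 2.0, 0.01, 1e-3    # particles, N/m, N*s/m, kg, s
x = np.linspace(0.0, 0.1, n)                   # rest positions along a line (m)
rest = np.diff(x).copy()
v = np.zeros(n)

def step(x, v):
    stretch = np.diff(x) - rest                # spring elongations
    rel_v = np.diff(v)                         # damper relative velocities
    f_elem = k * stretch + c * rel_v           # Kelvin-Voigt element forces
    f = np.zeros(n)
    f[:-1] += f_elem                           # element pulls its left particle forward...
    f[1:] -= f_elem                            # ...and its right particle backward
    v_new = v + dt * f / m                     # semi-implicit Euler: velocity first
    x_new = x + dt * v_new                     # ...then position with the new velocity
    v_new[0] = v_new[-1] = 0.0                 # clamp both ends (boundary condition)
    x_new[0], x_new[-1] = x[0], x[-1]
    return x_new, v_new

x[10] += 0.005                                 # displace one particle, like an end effector
for _ in range(2000):
    x, v = step(x, v)
print(np.round(x[8:13], 4))                    # particles relax back toward rest
```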
SPECIAL SECTION: BIG DATA ANALYTICS
Modeling and evolution characteristics of urban rail transit network resistance under the impact of unbalanced large passenger flows
MA Fei, JIANG Jinfeng, AO Yuyun, MA Zhuanglin, LIU Qing
Journal of Tsinghua University (Science and Technology). 2024, 64(10): 1717-1733. DOI: 10.16511/j.cnki.qhdxxb.2024.26.045
[Objective] Under the impact of unbalanced large passenger flows, an urban rail transit network (URTN) frequently encounters the dual pressures of structural and functional resistance. This can result in cascade failure, potentially leading to partial or even total collapse of the URTN. To ensure the normal operation of these networks and understand the characteristics of their disaster resistance evolution, this study explores how an unbalanced large passenger flow affects the disaster resistance of the URTN. [Methods] This study initially examines the effect of unbalanced large passenger flows on the URTN cascade failure from two perspectives: transport efficiency and passenger service. Subsequently, a passenger-flow weighting network is constructed to calculate the passenger-flow intensity. Herein, the weights of different nodes represent the proportion of the passenger flow per unit of time at different track stations during periods of unbalanced large passenger flows. This allows the measurement of a track station's importance level based on node number, node betweenness, and passenger flow intensity. Moreover, this study adapts the coupled map lattice (CML) model, building upon cascade failure theory and chaos dynamics, to obtain more accurate values for the failure node ratio and network strength entropy. In the modified CML model, the sudden disturbance level is defined according to the breakdown degree and influence range. The initial state value is determined by the saturation degree of the passenger flow at the track station, thus addressing the sensitive dependence of the spatiotemporal chaotic system on the initial state value. Subsequently, the dynamic evolution characteristics of the URTN structural and functional resilience levels are explored under different conditions of fault propagation and passenger flow strength. These analyses were based on failure node ratio and network strength entropy metrics. Finally, a case study was conducted using the Xi'an subway as an example. [Results] The results showed the following: (1) During a URTN cascade failure, the disaster resistance evolution trends of structure and function aligned, changed, and failed simultaneously. (2) Critical values existed for the inter-station coupling coefficient ε and the sudden event disturbance R, which were ε=0.3 and R=1, respectively. Below these thresholds, the URTN cascade failure effect did not occur. However, when ε>0.3 and R>1, the failure time of the URTN's structural and functional disaster resistance decreased as ε and R increased. (3) The intensity of the passenger flow negatively affected the structural and functional resilience of the URTN. When disturbed, stations with high passenger flow intensity were more likely to trigger a URTN cascade failure. (4) Stations with large interconnectors and high passenger flow intensity exhibited lower structural and functional vulnerability after a sudden disturbance than stations with larger degrees. [Conclusions] This study has important theoretical and practical implications. Theoretically, it helps uncover the factors affecting cascade failure and the evolution characteristics of the URTN's disaster resistance under the impact of unbalanced large passenger flows. In practice, this study provides a crucial foundation for decision-making regarding the enhancement of safety management in rail transit when faced with challenges posed by unbalanced large passenger flows.
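A small numerical sketch of a coupled-map-lattice cascade, in the spirit of the modified CML model described above: each station's state follows a logistic map coupled to its neighbours with coefficient ε, a disturbance R is injected at one station, and any state reaching 1 marks a failed node. The toy ring topology, initial saturation value, and propagation rule are assumptions for illustration only, not the paper's calibrated model.

```python
import numpy as np

def f(x):                                        # chaotic local map (logistic map, mu = 4)
    return 4.0 * x * (1.0 - x)

def cml_cascade(adj, x0, eps, R, hit, steps=30):
    """Failed-node ratio after disturbance R hits node `hit` in a CML with coupling eps."""
    x, alive = x0.astype(float).copy(), np.ones(len(x0), dtype=bool)
    x[hit] += R                                  # sudden disturbance pushes the state past 1
    fresh = x >= 1.0                             # failed nodes whose overload has not yet propagated
    for _ in range(steps):
        contrib = np.where(alive | fresh, f(x), 0.0)   # freshly failed nodes couple one last time
        deg = np.maximum(adj.sum(axis=1), 1.0)
        x_new = np.abs((1.0 - eps) * f(x) + eps * (adj @ contrib) / deg)
        alive &= ~fresh                          # remove nodes that have propagated their failure
        x = np.where(alive, x_new, x)
        fresh = alive & (x >= 1.0)               # nodes newly overloaded this step
    return 1.0 - (alive & ~fresh).mean()

n = 20
adj = np.zeros((n, n)); i = np.arange(n)
adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0  # toy ring of 20 stations
x0 = np.full(n, 0.7)                             # passenger-flow saturation as the initial state
for eps, R in [(0.2, 1.0), (0.5, 2.0)]:
    print(f"eps={eps}, R={R}: failed ratio = {cml_cascade(adj, x0, eps, R, hit=0):.2f}")
```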
A comparative study of machine learning algorithm models for predicting carbon emissions of residential buildings in cold zones
LIU Yiming, YANG Junhan, ZHANG Zhongli, XU Peiqi, LIU Nianxiong
Journal of Tsinghua University (Science and Technology). 2024, 64(10): 1734-1745. DOI: 10.16511/j.cnki.qhdxxb.2024.22.031
[Objective] Machine learning algorithms provide valuable data support for designing and optimizing low-carbon residential buildings. However, when used directly for carbon emission prediction and analysis, these models often lack proper parameter tuning and optimization. The different impacts of various independent variable datasets on predictive performance also remain to be clarified. In China's cold zones, where residential buildings share similar architectural structures, energy-saving designs, and spatial layouts, carbon emissions primarily come from the operational phase and the production stages of building materials, with heating emissions being a significant component. This study aims to elucidate the effectiveness of different machine learning algorithm models in guiding low-carbon residential design in these cold zones, offering architects criteria for selecting proper algorithms. This study focuses on automatic parameter tuning and optimization for several commonly used algorithms in the context of low-carbon design of buildings, including multiple linear regression, classification and regression tree, random forest, adaptive boosting, gradient boosting regression tree, and multilayer perceptron. The study compares and analyzes the performance limits and applicability of these algorithms and independent variable datasets in predicting carbon emissions during building material production and heating stages. [Methods] This paper elaborates on the target boundaries, parameter ranges, optimization processes, and validation methods for optimizing machine learning algorithm models. Through comprehensive research and simulation analysis of 37 reinforced concrete shear wall residential buildings and their derivative schemes in cold zones, multiple independent variable datasets suitable for establishing predictive models are identified. Cross-validation and grid search techniques are employed to optimize the predictive performance limits of different machine learning algorithms and independent variable datasets. Subsequently, 120 models for predicting carbon emissions from building materials and 60 models for transforming steady-state heating consumption into dynamic heating consumption using the six mentioned algorithms are established. [Results] A horizontal comparison of the models reveals that algorithms such as multiple linear regression, random forest, and gradient boosting regression trees exhibit relatively good performance (R² over 0.900) in carbon emission prediction after hyperparameter tuning across different independent variable datasets. Random forest and gradient boosting regression tree models excel in error control and offer similar predictive accuracy to multiple linear regression but lack interpretability. In contrast, multiple linear regression models provide clearer equations and stronger guidance for low-carbon design and optimization, focusing on carbon emission reduction during building material production or winter heating stages. Models based on the total residential building area exhibit optimal performance in predicting building material carbon emissions. Predictive models built on parameters such as the number of above-ground and underground floors, building width and depth, total household numbers, number of bedrooms per standard floor, and total number of residential bathrooms in the residence also demonstrate strong predictive capabilities for building material carbon emissions. For predicting the conversion coefficient during the heating stage, including the number of households and bedrooms per standard floor as independent variables significantly enhances predictive performance. [Conclusions] Although various machine learning models are useful for predicting residential building carbon emissions, the multiple linear regression model stands out owing to its excellent predictive performance and its intuitive representation of how design parameters affect carbon emissions. By utilizing different and appropriate independent variable datasets, such as the total number of floors, floor height, building dimensions, number of households and bedrooms on a floor, and corrected coefficients for urban meteorological parameters (including outdoor average temperature during the heating season, actual heating days, and roof and wall heat transfer coefficients), or by adopting the finally determined total building area, the multiple linear regression algorithm can deliver timely and multi-faceted guidance. These results are crucial for low-carbon design and optimization during the primary stages of the residential lifecycle in China's cold zones.
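The tuning procedure the abstract relies on (grid search with k-fold cross-validation, scored by R²) looks roughly like the sketch below; the synthetic features and target merely stand in for the residential-building dataset, which is not reproduced here.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(42)
X = pd.DataFrame({
    "total_floor_area_m2": rng.uniform(5e3, 5e4, 300),
    "floors_above_ground": rng.integers(6, 34, 300),
    "households_per_floor": rng.integers(2, 9, 300),
})
# Placeholder target: embodied carbon roughly proportional to floor area, plus noise.
y = 0.45 * X["total_floor_area_m2"] + 900 * X["floors_above_ground"] + rng.normal(0, 2e3, 300)

search = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [2, 3], "learning_rate": [0.05, 0.1]},
    scoring="r2", cv=5,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```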
Quantitative study on the key driving factors of grain production in China from 1949 to 2020
QIN Changhai, WANG Ming, ZHAO Yong, HE Guohua, QU Junlin, YOU Mengyuan
Journal of Tsinghua University (Science and Technology). 2024, 64(10): 1746-1758. DOI: 10.16511/j.cnki.qhdxxb.2024.27.001
[Objective] In China, food is a fundamental necessity for the people and represents a key national interest. Food security is vital for economic development, social stability, and national security. However, current research often features relatively short time series data, and the vital role of irrigation as a key factor in grain production has been largely overlooked. This oversight has hindered the effective elucidation of the patterns of contribution from diverse production factors across regions and stages. With regard to the food security strategy, data from 31 province-level regions in China were analyzed using grain production as a metric. During this analysis, we quantified the impact and variations of different factors on regional grain production from 1949 to 2020. The analysis was conducted at different geographical scales. This study aims to identify the primary driving factors and provide insights to support the stable growth of grain production in China. [Methods] To achieve this goal, a model was constructed using the Cobb-Douglas function, with grain production as the dependent variable. The explanatory variables introduced into the model comprised practitioners in the primary industry, agricultural machinery power, effective irrigation area, net fertilizer quantity, affected area, and cropping index. Additionally, a random error term was incorporated into the model. To address issues of multicollinearity among the data, ridge regression was used to fit the model. The values required for machinery, effective irrigation area, net fertilizer quantity, and affected area in the model were calculated by multiplying the total power of agricultural machinery, effective irrigation area of farmland, net quantity of fertilizer for agricultural production, and total affected area for agriculture by the proportion of grain-sowing area to the total sowing area of crops. [Results] The research results indicated a continuous increase in the contribution of effective irrigation area and net fertilizer quantity to grain production in China, while the elasticity coefficients of the practitioners in the primary industry and cropping index on grain output have significantly decreased. Additionally, the contribution of mechanical power to grain production first increased and then decreased, and the impact of the affected area on grain yield reduction strengthened before weakening. Notably, the elasticity coefficient of the effective irrigation area on grain production in China has increased from 0.155 (1949-1959) to 0.424 (2000-2020), making it the primary driving factor for the increase in grain production. Moreover, the impact of various production factors on grain yield tended toward equilibrium, and the significant contribution of individual factors considerably decreased over time. In the future, ensuring food security will require a coordinated approach involving multiple factors, with the effective irrigation area serving as the foundational component. Additionally, the impact of random error terms such as prices, seeds, and policies in grain production has gradually increased, requiring increased attention in the future. Furthermore, the grain transportation pattern of China is determined by the alignment of population, land, and water. In recent years, the northward expansion of effective irrigation areas and the southward shift of the population center have jointly facilitated the transformation of the grain transportation pattern of China from south-to-north to north-to-south.
In the near future, as the population gradually moves southward and faces limitations on southern farmland, the north-to-south grain transportation pattern can persist and may even intensify. [Conclusions] The research findings indicated that effective irrigation area plays a crucial role in coordinating the configuration of agricultural production factors. In the future, maximizing the water resource allocation function of the national water network is vital. Therefore, the construction and expansion of the national water network will enhance water resource security and support the expansion of irrigation scale, thereby facilitating a synergistic combination of multiple factors to promote grain production.
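The modeling route can be illustrated compactly: the Cobb-Douglas function is log-linearized so that the fitted coefficients become output elasticities, and ridge regression damps the multicollinearity noted above. The synthetic panel and elasticity values below are assumptions used only to show the mechanics, not the study's data.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n = 200
factors = {                                               # strictly positive inputs
    "labour": rng.uniform(50, 500, n),
    "machinery_power": rng.uniform(10, 800, n),
    "effective_irrigation_area": rng.uniform(100, 2000, n),
    "net_fertilizer": rng.uniform(5, 300, n),
}
X_log = np.log(np.column_stack(list(factors.values())))
true_elasticities = np.array([0.10, 0.20, 0.40, 0.25])    # assumed values for the toy data
y_log = 1.5 + X_log @ true_elasticities + rng.normal(0, 0.05, n)

model = Ridge(alpha=1.0).fit(X_log, y_log)                # ln Y = ln A + sum(beta_i * ln X_i)
for name, beta in zip(factors, model.coef_):
    print(f"{name:28s} elasticity ~ {beta:.3f}")
```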
Advances in invention patents related to flue gas desulfurization gypsum amendments for ameliorating saline-alkali soils
LI Mingzhu, TIAN Rongrong, LI Ran, ZHANG Jing, WANG Shujuan, LIU Jia, XU Lizhen, LI Yan, ZHAO Yonggan
Journal of Tsinghua University (Science and Technology). 2024, 64(10): 1759-1770. DOI: 10.16511/j.cnki.qhdxxb.2024.21.020
[Objective] Saline-alkali soil is an important reserve resource of cultivated land and potential granary in China, and its management and utilization are related to national food security. Therefore, innovative techniques and amendments should be developed to address these challenges in saline-alkali regions. Among these, calcium supplementation is recognized as one of the most effective methods for ameliorating saline-alkali soil. In the past two decades, gypsum from the desulfurization of flue gas (FGDG) in coal-fired power plants has become a preferred calcium source for ameliorating saline-alkali soil because of its high calcium content and economic feasibility. Given that FGDG has developed into a soil amendment and has been widely used, a profound understanding of the progress of its patents can provide technical guidance for the large-scale amelioration of saline-alkali soil. [Methods] Based on the incoPat global patent database, a bibliometric analysis was conducted on 520 invention patents in the field of using FGDG to ameliorate saline-alkali soil from 2003 to 2022. The application and authorization trends, high-yield mechanisms, operational status, substance composition, and their correlation with patents in this field were systematically analyzed. In addition, a comparative analysis was conducted on the effectiveness of 52 patents with application cases. [Results] The results showed that the annual number of patent applications for using FGDG amendments to ameliorate saline-alkali soil has a trend of first increasing and then decreasing, with a peak period of 115 patents in 2016. Most patents take 20-30 months from publication to authorization. However, the overall proportion of authorization has shown a decreasing trend. The number of patents granted by universities and research institutes is higher than that granted by enterprises, whereas the number of patents jointly granted by universities and enterprises accounts for 15.6% of the total. A total of 37 patents were converted, 7 of which were pledged, accounting for 33.3% of the total number of grants, all of which were transferred by universities to enterprises and pledged by enterprises for financing. More than 70% of patents comprised three or more substances, primarily including organic and inorganic minerals, microbial agents, and nutrient supplements. Organic materials can directly provide nutrients for the soil to make up for the shortage of FGDG in terms of nutrients, with the frequency of application as high as 95.7%, followed by inorganic minerals, which account for 44.5%; microbial agents, which account for 41.3%; and nutrient supplements, which account for 21.3%. Compared with soils with or without other types of amendments, the application of FGDG amendments significantly decreased soil pH, exchangeable sodium percentage, and salt ions that are toxic to crop growth and increased soil Ca²⁺, SO₄²⁻, and total/available nitrogen and phosphorus contents, which provided a better soil environment, thereby increasing crop yield. [Conclusions] Generally, research and development on FGDG amendments for saline-alkali soil amelioration have matured, and some innovative achievements have been transformed into real productivity; thus, the value of related patents has been increasingly highlighted. However, problems such as the relatively simple composition of current patents, unclear technical requirements for the amount of application and method, and serious homogeneity of patents have been encountered. In the future, we should strengthen the cooperation among schools, enterprises, universities, and research institutes, intensify research on the FGDG formula used in saline-alkali soil, and enhance the application benefits of FGDG amendments.
Research on early warning of negative public opinion based on sentiment topic modeling
CUI Su, HAN Yiliang, ZHU Shuaishuai, LI Yu, WU Xuguang
Journal of Tsinghua University (Science and Technology). 2024, 64(10): 1771-1784. DOI: 10.16511/j.cnki.qhdxxb.2024.27.005
[Objective] The effect of negative public opinion events on social networks is underestimated. To address the inability of sentiment-based methods to directly achieve early warning of negative online public opinion, this study proposes a sentiment classification and topic extraction-based approach to public opinion topic modeling. Using negative emotional topics as an entry point, this study shifts from investigating negative public opinion events to examining negative public opinion topics, thus facilitating statistical and quantifiable analysis of such events. Additionally, to address the persistent shortcomings of methods for negative public opinion early warning, we construct a novel early warning evaluation metric, which is known as the public opinion topic arithmetic index (POI). This index comprehensively assesses the developmental trends of public opinion topics across three dimensions: explosion index (EI), sentiment index (SI), and dissemination index (DI). [Methods] This study employs the ERNIE 3.0 large-scale language model for sentiment classification. The annotated sentiment dataset is further trained and fine-tuned to obtain the required sentiment classifier. It performs sentiment classification on a COVID-19 Weibo emotional dataset, computing various post sentiments. The topic extraction module uses the TF-IDF algorithm to extract topics. Each noun tag is considered a potential topic, whereas each Weibo post is treated as a document. The TF-IDF method captures frequently occurring words by calculating their frequencies and avoiding less important terms that appear in each document. The TF-IDF topic extraction algorithm extracts topics from negative emotional Weibo posts and identifies relevant topics associated with negative public opinion events. Finally, POI is employed for further analysis based on the extracted public opinion topics. Consequently, early warning is achieved by analyzing negative public opinion topics instead of events. Furthermore, POI comprehensively calculates the effect of negative public opinion topics by combining EI, SI, and DI. EI reflects the growth rate of the current number of textual instances related to negative emotional topics compared to the average number in a previous period; SI mainly reflects the public's emotions and sentiments triggered by public opinion topics; and DI mainly represents the scope and speed of dissemination of public opinion topics. Finally, a comprehensive negative emotional topic public opinion index is derived by calculating the EI, SI, and DI of emotional topics and post data information, and warnings are issued for the topics that exceed the warning threshold. [Results] The experimental results reveal that the proposed early warning model effectively predicts social media public opinion events. Among the top ten negatively perceived topics ranked based on weight, the earliest warning time exceeds the average outbreak day by 161.01 hours, with an average of 2.1 early warnings. Additionally, the earliest warning time exceeds the average peak day by 261.81 hours, with an average of 5.8 early warnings. [Conclusions] We establish a threshold for triggering the arithmetic index of public opinion topics by modeling and calculating the arithmetic index of negative public opinion topics in this study. This enables us to single out the negative topics and corresponding public opinion events that surpass the threshold, thereby achieving early warning for topic-related negative public opinion events.
The proposed negative public opinion warning model accomplishes its intended objective by employing sentiment analysis methods for the early detection of online public opinions.
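The abstract names the three components of POI but not their exact formulas or weights, so the schematic below wires together assumed definitions of EI, SI, and DI and an assumed 0.6 warning threshold purely to show how a per-topic warning could be computed.

```python
from dataclasses import dataclass

@dataclass
class TopicWindow:
    posts_now: int            # posts on the topic in the current window
    posts_avg_prev: float     # average posts per window over the preceding period
    neg_ratio: float          # share of posts classified as negative (0..1)
    reposts: int              # reposts/shares in the current window
    reach_cap: int = 10_000   # normalisation cap for dissemination

def poi(t: TopicWindow, w=(0.4, 0.3, 0.3)) -> float:
    ei = min(max(t.posts_now - t.posts_avg_prev, 0.0) / max(t.posts_avg_prev, 1.0), 1.0)
    si = t.neg_ratio
    di = min(t.reposts / t.reach_cap, 1.0)
    return w[0] * ei + w[1] * si + w[2] * di     # assumed linear combination of EI, SI, DI

topic = TopicWindow(posts_now=480, posts_avg_prev=120, neg_ratio=0.72, reposts=6500)
score = poi(topic)
print(f"POI = {score:.2f} -> {'WARN' if score > 0.6 else 'ok'}")
```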
A Chinese keyphrase extraction method for multimodal information enhancement representation
ZHOU Xuanyu, LIU Lin, LU Xiao, LI Xuan, ZHANG Siming
Journal of Tsinghua University (Science and Technology). 2024, 64(10): 1785-1796. DOI: 10.16511/j.cnki.qhdxxb.2024.27.015
[Objective] At present, China is undergoing a critical digital transformation in education. This shift has led to an explosive growth of educational content online, presenting a challenge for researchers who find it increasingly difficult to sift through massive amounts of text data. The necessity to quickly grasp important information has made keyphrase extraction an invaluable tool. Keyphrase extraction automates the process of identifying words or phrases that encapsulate the main themes of a text, proving critical for text retrieval, text summary, and other tasks. Despite its importance, the current keyphrase extraction tasks mainly rely on pretrained language models to obtain text representation. These models are often trained based on a generic text corpus and struggle to adapt to specific domains according to the characteristics of downstream tasks owing to their limited ability to capture the subtle semantic representation of single-mode information. Therefore, developing methods for accurate and efficient keyphrase extraction from massive texts remains a pressing research challenge. [Methods] This paper presents a novel approach for Chinese keyphrase extraction, dubbed multimodal information enhancement representation for keyphrase extraction (MIEnhance-KPE). Our method first deconstructs characters into radicals using a character splitting dictionary and extracts radical features through a convolutional neural network. At the same time, we integrate a trainable adapter layer between the transformer layers of a pretrained language model. Through the above operations, the bottom level semantic features of the pretrained language model and radical features are fully integrated to obtain a domain adaptive text representation. Characters are then transformed into glyph images representing different periods in history and writing styles. Subsequently, we employ group convolution to extract the glyphic features of these characters. Meanwhile, a cross-attention mechanism is used to fuse the glyphic and text features, yielding richer and more comprehensive semantic representations. The final step involves using a conditional random field model to learn the relationship between the fused features and labels. Through sequence labeling, we identify candidate keyphrases, ranking them based on position and word frequency weight to determine the most relevant keyphrases. [Results] MIEnhance-KPE's performance was tested using two datasets: the published Chinese Scientific Literature (CSL) and the self-constructed Chinese Education Keyphrase Extraction Dataset (CEKED). Our method demonstrated a substantial improvement compared to the most advanced keyphrase extraction methods, with F values increasing by 15.71% and 3.40% on the CSL and CEKED datasets, respectively. Ablation experiments further confirmed the effectiveness of both the domain adaptive module and the visual semantic enhancement module in enhancing keyphrase extraction accuracy. In addition, this paper explored various methods for fusing glyphic and semantic features, concluding that the cross-attention mechanism excels in adaptively merging different features to improve task accuracy. [Conclusions] The MIEnhance-KPE proposed in this paper can considerably improve the accuracy of keyphrase extraction tasks. This aids educational researchers in quickly locating relevant literature and understanding the cutting-edge trends of educational development. Additionally, MIEnhance-KPE introduces a novel approach to literature analysis in the educational sector. It provides a solid data foundation for examining the motivation of educational reform and innovation, thereby accelerating the digital transformation process in education.
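The cross-attention fusion step can be sketched with a small PyTorch module in which text features attend to glyph-image features and the result is merged back through a residual connection; the dimensions, shapes, and residual design are illustrative assumptions rather than the MIEnhance-KPE architecture.

```python
import torch
import torch.nn as nn

class GlyphTextFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_feat: torch.Tensor, glyph_feat: torch.Tensor) -> torch.Tensor:
        # text_feat, glyph_feat: (batch, seq_len, dim), one vector per character
        attended, _ = self.cross_attn(query=text_feat, key=glyph_feat, value=glyph_feat)
        return self.norm(text_feat + attended)   # residual fusion of the two modalities

fusion = GlyphTextFusion()
text = torch.randn(2, 32, 256)     # e.g. pretrained-LM character embeddings
glyph = torch.randn(2, 32, 256)    # e.g. CNN features of the characters' glyph images
print(fusion(text, glyph).shape)   # torch.Size([2, 32, 256])
```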
ELECTRONIC ENGINEERING
Angle estimation algorithm of high-speed multiple targets in the delay-Doppler domain for passive radars
LIU Dapeng, FENG Xinxing, LIU Chengcheng, REN Yong, DU Jun
Journal of Tsinghua University (Science and Technology). 2024, 64(10): 1797-1808. DOI: 10.16511/j.cnki.qhdxxb.2024.22.028
[Objective] The passive radar systems for urban aerial target surveillance highlight the importance of accurately determining the angle of arrival (AOA) of weak target echoes. The AOA information is crucial for locating targets using passive radars, considerably impacting the detection capabilities of the system. Traditionally, research on AOA estimation has focused on algorithms utilizing two-dimensional correlation processing in the delay-Doppler domain. These methods enhance the signal-to-noise ratio of the echo signal, leveraging the accumulated gain from the mutual ambiguity function between the reference signal and monitoring signals and subsequently facilitating angle estimation. However, existing algorithms face notable challenges. For instance, they are particularly prone to the distance migration effect when tracking weak targets moving at high speeds, adversely affecting the accumulation gain and the accuracy of parameter estimations. In addition, the computational requirements of the mutual ambiguity function are high, complicating real-time implementation. Although certain rapid implementation methods for the mutual ambiguity function can reduce the computational requirements, they are unsuitable for platforms with limited processing power. Additionally, current algorithms struggle to differentiate between multiple targets within the same range-Doppler unit owing to their inability to refine target distinction along the angle dimension. Consequently, this paper proposes a more efficient algorithm for delay-Doppler angle estimation tailored to high-speed, multitarget scenarios. [Methods] The proposed algorithm is divided into three steps. (1) The reference and monitoring signals undergo segmented processing; this division is based on the target movement and the signal parameters of the external radiation source, distinguishing between the fast time within each segment and the slow time across segments. (2) The second step addresses distance migration, which can occur owing to the high-speed movement of the target. Thus, the Keystone transform is used to adjust the time axis of each frequency, effectively correcting the distance migration for high-speed targets. Next, the energy of the target echo signal is aggregated into a singular delay-Doppler unit. The process continues with the detection and extraction of the slow time-sampling signal from the delay unit containing the target echo. This extracted signal forms the basis for converting the problem into one of angle measurement, focusing on the multifast beat signal within the slow time dimension. (3) The target azimuth and pitch angles are estimated by employing axial virtual shift coherence within a uniform circular array. The multiple signal classification (MUSIC) algorithm is applied to these coherent signals for efficient processing in scenarios involving multiple targets. [Results] The algorithm can distinguish multiple targets in the same delay-Doppler cell. This differentiation is facilitated by the array axial virtual translation method, which improves the capability of the algorithm to process multiple-target signals. [Conclusions] Simulation results have demonstrated the effectiveness of the proposed method for the delay-Doppler processing, particularly its segmented processing combined with the Keystone transform, which corrects the distance migration of the target and greatly reduces the computational complexity. Consequently, the stability and the real-time performance of the algorithm are markedly improved.
The algorithm exhibits obvious performance advantages, especially in scenarios characterized by high-speed movements and the presence of multiple targets.
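The final angle-estimation step can be illustrated with a self-contained MUSIC example on a uniform circular array; the geometry, SNR, and source angles are made-up test values, and the paper's axial-virtual-shift decorrelation step is not reproduced here.

```python
import numpy as np

M, radius_wl, snapshots = 8, 0.5, 400            # elements, radius in wavelengths, snapshots
phi_m = 2 * np.pi * np.arange(M) / M             # element positions on the circle

def steer(az):                                   # UCA steering vector, in-plane source at azimuth az (rad)
    return np.exp(1j * 2 * np.pi * radius_wl * np.cos(az - phi_m))

rng = np.random.default_rng(3)
true_az = np.deg2rad([40.0, 95.0])
A = np.column_stack([steer(a) for a in true_az])
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
noise = 0.1 * (rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots)))
X = A @ S + noise

R = X @ X.conj().T / snapshots                   # sample covariance matrix
En = np.linalg.eigh(R)[1][:, :-2]                # noise subspace (source count assumed known)
grid = np.deg2rad(np.arange(0.0, 180.0, 0.2))
spec = np.array([1.0 / np.linalg.norm(En.conj().T @ steer(a)) ** 2 for a in grid])
peaks = np.where((spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:]))[0] + 1
top2 = peaks[np.argsort(spec[peaks])[-2:]]
print(np.sort(np.rad2deg(grid[top2])).round(1))  # expected to land near [40.0, 95.0]
```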
Design of a two-step time interpolation-based field-programmable gate array-time-to-digital converter
LU Jiangrong, LI Wenchang, LIU Jian, ZHANG Tianyi, WANG Yanhu
Journal of Tsinghua University (Science and Technology). 2024, 64(10): 1809-1817. DOI: 10.16511/j.cnki.qhdxxb.2023.27.009
[Objective] Time-to-digital converters (TDCs), vital components in time measurement, have been widely used in various scientific research fields. The demand for enhanced performance in TDC resolution and improved linearity within its system has increased owing to increasingly stringent requirements across various fields. In recent years, TDCs based on field-programmable gate arrays (FPGAs) have received significant attention owing to their short development period, low cost, and improvements in FPGA fabrication processes and technology. Reducing the processing time of delay units in TDC improves TDC resolution. However, extending the length of a tapped-delay line (TDL) results in an increased nonlinear accumulation of delay units, leading to a reduction in system linearity. To address the challenge of balancing enhanced TDC resolution and preserved system linearity based on an architecture that combines coarse and fine counting, this study introduces a two-step time interpolation method designed specifically for the fine counting stage within the time signal quantization process. [Methods] In this method, the two-step time interpolation for the system clock involves the following steps. First, a set of clock signals with different phases is used to interpolate the system clock. Second, the time intervals between the adjacent phased clock signals are encapsulated using TDL. In accordance with the interpolation operation, during the time measurement process, when a start signal, triggered by a time signal, activates TDC, the first interpolation result is encoded from a one-cold code. This code is obtained using a set of synchronizers, where each synchronizer consists of two serial D flip-flops to identify the phase that corresponds to the start signal. The second interpolation result is obtained using the thermometer code encoder to process the output from TDL, which finely quantifies the time interval between the start signal and the matched phased clock signal. Finally, the quantified result of the time signal is generated by subtracting both the first and second interpolation results from the coarse counting result obtained from the period counter. The time interval between any pair of time signals can be determined by calculating the difference between their quantified results. Compared with the generalized method of directly interpolating the system clock, the proposed two-step time interpolation method can effectively maintain a desirable resolution and improve the system linearity of TDC. This improvement can be achieved by shortening the length of the delay chain, which reduces the accumulation of nonlinearity of delay units in TDL and prevents severe nonlinear changes caused by the TDL crossing device boundaries associated with the clock region. Moreover, the reduced length of TDL contributes to the downsizing of modules, such as the thermometer code encoder, that must be integrated into TDC to maintain the low consumption of FPGA logic resources during circuit implementation. [Results] The two-step time interpolation-based FPGA-TDC method is implemented using a Xilinx Virtex UltraScale + FPGA. To assess the effectiveness of improving system linearity, an additional FPGA-TDC is implemented using the direct interpolation of the system clock method. The experimental results reveal that with the implementation of the two-step time interpolation method, the differential nonlinearity (DNL) and integral nonlinearity (INL) improved by 23.64% and 40.15%, respectively. 
The two-step time interpolation-based FPGA-TDC achieved a resolution of 1.72 ps, with DNL and INL variation ranges of 4.49 and 26.55 LSB, respectively. Additionally, a comparison with FPGA-TDCs constructed using other methods is demonstrated. [Conclusions] Consequently, the proposed two-step time interpolation-based FPGA-TDC method achieves better system linearity and requires fewer FPGA logic resources.
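A behavioural (software) sketch of how the two-step interpolation recovers a timestamp: a coarse period count, a phase index from the multi-phase clock set, and a fine code from the tapped-delay line are combined by subtraction, as described above. The clock period, phase count, and bin width below are illustrative numbers, not the implemented FPGA values.

```python
T_CLK = 4000.0           # system clock period, ps
N_PHASE = 8              # phased copies of the clock -> first interpolation step
TAU = 1.72               # effective delay-line bin, ps -> second interpolation step
T_PHASE = T_CLK / N_PHASE

def quantize(t_ps: float) -> tuple[int, int, int]:
    """Split an absolute event time into (coarse count, phase index, fine code)."""
    coarse = int(t_ps // T_CLK) + 1                   # next system-clock edge after the event
    remainder = coarse * T_CLK - t_ps                 # time from the event to that edge
    phase = int(remainder // T_PHASE)                 # which phased clock catches it first
    fine = int((remainder - phase * T_PHASE) // TAU)  # TDL quantizes the residue
    return coarse, phase, fine

def reconstruct(code: tuple[int, int, int]) -> float:
    coarse, phase, fine = code
    return coarse * T_CLK - phase * T_PHASE - fine * TAU   # subtract both interpolation results

for t in (1234.5, 98765.4):
    est = reconstruct(quantize(t))
    print(f"{t:>9.1f} ps -> {est:>9.1f} ps (error {est - t:+.2f} ps)")
```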
NUCLEAR AND NEW ENERGY TECHNOLOGY
Application and research progress of high-temperature heat pipe technology in space nuclear power systems for thermal energy conversion
LI Xin, YUAN Dazhong, CHEN Min, DU Baorui
Journal of Tsinghua University (Science and Technology). 2024, 64(10): 1818-1838. DOI: 10.16511/j.cnki.qhdxxb.2024.22.040
[Significance] The increasing demand for diversified space missions necessitates addressing the extreme heat transfer challenges in space nuclear power systems, such as spatial constraints, long-distance unpowered cycles, and zero-gravity environments. Thus, the stable transfer of the substantial heat generated by the reactor core to the energy conversion device is crucial. High-temperature heat pipes, characterized by high heat flux, high operating temperatures, minimal temperature differences in heat transfer, and strong adaptability, are ideal for the key components of nuclear thermal energy conversion in space power systems. Traditional nuclear reactor power systems using coolants are less competitive in space because of their complexity, leakage risk, and stringent material strength requirements. Conversely, space heat-pipe-cooled reactors do not require auxiliary equipment such as high-temperature pumps. The phase change of the working fluid and the diffusive transport of vapor in a high-temperature heat pipe form a natural cycle that transfers heat from the core to the thermoelectric converter, thereby reducing system complexity, enhancing safety and reliability, and thus providing an effective solution for efficient heat transfer and energy conversion in space nuclear power systems. [Progress] The research and development of high-temperature heat pipe technology are pivotal for improving energy conversion efficiency and heat transfer performance in space nuclear power applications. Research on high-temperature heat pipes encompasses working fluid flow and heat transfer mechanisms, model establishment, experimental verification of frozen startup and heat transfer limits, steady-state heat transfer experimental analysis, and failure mechanisms, yielding promising results. As this technology advances, its applications and research in space nuclear thermal energy conversion systems have become more extensive and in-depth. Studies have focused on the startup and safety performance of space nuclear power systems, the coupling performance of high-temperature heat pipes in nuclear thermal energy conversion systems, and the research and design of high-temperature heat pipes in the radiators of space reactors. However, high-temperature heat pipes face challenges in achieving efficient thermal energy transfer and management, structural design of heat pipes and wicks, and adaptability to extreme space environments when applied to space nuclear power systems for thermal energy conversion. In response to these challenges, new research and attempts have been conducted. Studies on heat pipe performance under microgravity conditions have demonstrated their feasibility. In addition, research on shaped high-temperature heat pipes designs more flexible and efficient structures to meet complex heat transfer requirements. Furthermore, studies on additive manufacturing, aerogel insulation, and advanced testing techniques provide theoretical support and a technical foundation for the space application of high-temperature heat pipes. The current research on the space applications of high-temperature heat pipes still has some limitations. Ground-based tests of high-temperature heat pipes have advanced but cannot match the demands of real space scenarios. This mismatch prevents us from knowing their true performance in space reactors. Moreover, studies on the performance of high-temperature heat pipes coupled with nuclear reactors and thermoelectric converters are scarce. 
Thus, the overall coupling performance is largely unknown. [Conclusions and Prospects] High-temperature heat pipe technology shows promise, but still faces challenges in space applications. Currently, most space heat-pipe-cooled reactors are in the design and feasibility exploration stages. Future research will focus on optimizing high-temperature heat pipe models and their applicability, exploring the heat and mass transfer mechanisms of working fluids in space environments, conducting theoretical research and complex wick design and manufacturing studies for shaped heat pipes, and ensuring reliable coupling of high-temperature heat pipes with other components in space nuclear power systems.
Transient flow characteristic analysis of the step-down process of the control rod hydraulic drive system
YANG Linqing, QIN Benke, BO Hanliang
Journal of Tsinghua University (Science and Technology). 2024, 64(10): 1839-1848. DOI: 10.16511/j.cnki.qhdxxb.2023.27.010
[Objective] Based on 5-MW hydraulic drive technology and commercial pressurized water reactor magnetic drive technology, Tsinghua University has developed a control rod hydraulic drive system (CRHDS). CRHDS is a new type of built-in control rod drive technology primarily utilized in integrated reactors, such as the 200-MW nuclear heating reactor (NHR200). CRHDS uses three hydraulic cylinders to drive two sets of latch assemblies to move in a predefined sequence and achieve the control rod step-up, step-down, and scram functions. Water hammer occurs during the operation of CRHDS. It can trigger a large fluctuation of fluid pressure, causing vibration in the driving system and equipment, interfering with the instruments, and endangering the safety of the system. Therefore, the transient flow process needs to be analyzed through experimental and theoretical studies. [Methods] Theoretical modeling and experiments were performed at the system level. The transient flow mechanism of CRHDS was illustrated, and the key characteristic parameters were analyzed with driving pressure at high temperature. Most importantly, the composition and principle of CRHDS are described. The structure of the hydraulic cylinder was described in detail because it was a key component of the theoretical model. Combined with the structure of CRHDS, a full-scale transient flow performance test rig was built, and the experiments were completed. Based on the displacement and hydraulic cylinder pressure test results, the transient flow process of CRHDS was analyzed at different stages. Based on the mechanism analysis, a step-down transient flow model incorporating the trend and water hammer models was innovatively established. The trend model comprised the fluid continuity equation, fluid momentum equation, leakage flow relationship, and dynamic and kinematic equations. The trend model was solved using the finite difference method. The water hammer model was solved using the method of characteristics. The boundary conditions included a hydraulic cylinder, straight junction, integrated valve, and test vessel. The step-down transient flow model results were verified using experimental data. In the NHR200, CRHDS operates under high temperature and pressure conditions. Finally, the step-down transient flow model of CRHDS was applied to the high-temperature condition, and the fluid physical properties were changed accordingly. The driving pressure was set at 800, 850, and 950 kPa. Variations in key parameters with driving pressure are explained. [Results] The following research results are presented: (1) The step-down transient flow model of CRHDS comprises the trend and water hammer models. The trend model represents the overall change in hydraulic cylinder pressure, while the water hammer model illustrates the water hammer phenomenon in the system. The step-down transient flow process is analyzed by superimposing the solution results of these two models. (2) Combined with the motion of the inner cylinder, the step-down process can be divided into pre-step, step-down, and post-step stages. Pressure decreases rapidly in the pre-step and post-step stages but slowly in the step-down stage. (3) The rapid movement of the hydraulic cylinder and the sudden change in the leakage flowrate cause the water hammer phenomenon, and the water hammer pressure decays rapidly. The hydraulic cylinder acts as a water hammer source and fluid energy damper during the entire transient flow process.
(4) The variation rules of the key parameters of CRHDS at high temperature are obtained. As the driving pressure increases, the pre-step stage duration increases, the step-down time decreases, and the average step-down velocity and water hammer pressure amplitude increase. [Conclusions] The research results illustrate the transient flow mechanism of the step-down process of CRHDS, guide the design of vibration reduction, and provide a basis for the operation monitoring of CRHDS.
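The water-hammer half of the model is solved with the method of characteristics; the textbook single-pipe sketch below (reservoir upstream, valve closed instantly downstream) shows the kind of solver involved, using generic pipe and fluid values rather than CRHDS parameters.

```python
import numpy as np

L, a, D, fric, g = 20.0, 1200.0, 0.05, 0.02, 9.81    # length m, wave speed m/s, diameter m, friction factor, gravity
N = 20                                                # computational reaches
dx = L / N
dt = dx / a                                           # Courant number = 1
A_pipe = np.pi * D ** 2 / 4
H0, Q0 = 50.0, 2.0e-3                                 # initial head (m) and flow (m^3/s)
B = a / (g * A_pipe)                                  # characteristic impedance term
Rf = fric * dx / (2 * g * D * A_pipe ** 2)            # friction term per reach

H = np.full(N + 1, H0)
Q = np.full(N + 1, Q0)
peak = H0
for _ in range(400):                                  # about five wave periods (period = 4L/a)
    Cp = H[:-2] + B * Q[:-2] - Rf * Q[:-2] * np.abs(Q[:-2])   # C+ characteristics from the left
    Cm = H[2:] - B * Q[2:] + Rf * Q[2:] * np.abs(Q[2:])       # C- characteristics from the right
    Hn, Qn = H.copy(), Q.copy()
    Hn[1:-1] = 0.5 * (Cp + Cm)
    Qn[1:-1] = (Cp - Cm) / (2 * B)
    Hn[0] = H0                                        # upstream reservoir holds the head
    Qn[0] = (H0 - (H[1] - B * Q[1] + Rf * Q[1] * np.abs(Q[1]))) / B
    Qn[-1] = 0.0                                      # downstream valve shut instantly at t = 0
    Hn[-1] = H[-2] + B * Q[-2] - Rf * Q[-2] * np.abs(Q[-2])
    H, Q = Hn, Qn
    peak = max(peak, H[-1])

print(f"Joukowsky rise a*V/g = {a * (Q0 / A_pipe) / g:.1f} m, simulated peak rise at valve = {peak - H0:.1f} m")
```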