
Top access

  • COMPUTER SCIENCE AND TECHNOLOGY
    XIE Tian, YU Lingyun, LUO Changwei, XIE Hongtao, ZHANG Yongdong
    Journal of Tsinghua University(Science and Technology). 2023, 63(9): 1350-1365. https://doi.org/10.16511/j.cnki.qhdxxb.2023.21.002
    Abstract (3022)   PDF (1292)   CSCD (3)
    [Significance] Deep face manipulation technology involves the generation and manipulation of human imagery by different strategies, such as identity swapping or face reenactment between the source face and the target face. On the one hand, the rise of deep face manipulation has inspired a series of applications, including video production and advertising marketing. On the other hand, because face manipulation technology is usually open source or packaged as apps for free distribution, the barrier to tampering is lowered, resulting in the proliferation of fake videos. Moreover, when face manipulation technology is maliciously used by criminals to produce fake news, especially fake news involving important military and political officials, it can steer and interfere with public opinion, posing a great threat to national security and social stability. Therefore, research on deep face forgery detection technology is particularly important, and it is necessary to summarize the existing research to rationally guide deep face manipulation and detection technology. [Progress] Nowadays, deep face manipulation technology can be roughly divided into four types, namely, identity swapping, face reenactment, face editing, and face synthesis. Deepfakes bring real-world identity swapping to a new level of fidelity. The region-aware face-swapping network provides the identity information of source characters from local and global perspectives, making the generated faces more natural. In the field of face reenactment, Wav2Lip uses a pretrained lip-synchronization model as an expert model, encouraging the model to generate natural and accurate lip movements. In the field of face editing, FENeRF, a three-dimensional-aware generator based on a neural radiance field, aligns semantic, geometric, and texture information in the spatial domain and improves the consistency of the generated image across different viewpoints while ensuring that the face remains editable. In the field of face synthesis, AnyFace proposes a cross-modal distillation module for aligning language and visual representations, enabling the use of text information to generate more diversified face images. Deep face forgery detection technology can be roughly divided into image-level and video-level forgery detection methods. Among the image-level methods, SBI proposes a self-blending technique to generate realistic fake face images through data augmentation, effectively improving the generalization ability of the model. M2TR proposes a multimodal and multi-scale Transformer model to detect local artifacts at different levels of the image in the spatial domain; frequency-domain features are also added as auxiliary information to preserve the forgery detection ability of the model on highly compressed images. Among the video-level methods, RealForensics learns the natural correspondence between the face and audio in real videos in a self-supervised way, enhancing the generalization and robustness of the model. [Conclusions and Prospects] Presently, deep face manipulation and detection technologies are developing rapidly, and the corresponding techniques are being continuously updated and iterated. First, this survey reviews deep face manipulation and detection methods and discusses their strengths and weaknesses. Second, the common datasets and the evaluation results of different manipulation and detection methods are summarized. 
Finally, the main challenges of face manipulation and fake detection are discussed, and the possible research direction in the future is pointed out.
  • PUBLIC SAFETY
    DENG Lizheng, YUAN Hongyong, ZHANG Mingzhi, CHEN Jianguo
    Journal of Tsinghua University(Science and Technology). 2023, 63(6): 849-864. https://doi.org/10.16511/j.cnki.qhdxxb.2023.22.002
    Abstract (2892)   PDF (1189)   CSCD (26)
    [Significance] Landslide hazards are widely distributed in China and are severely harmful. The registered landslide hazards have achieved remarkable benefits in disaster reduction through a comprehensive prevention and control system. However, approximately 80% of all geo-disasters in China still occur outside the scope of identified hazards yearly. Therefore, monitoring and early warning are important means to actively prevent landslide disasters and achieve great success in disaster mitigation owing to promptness, effectiveness, and relatively low-cost advantages. Deformation is the most significant monitoring parameter for landslides and has become a focus and general trend. Landslide deformation monitoring engineering has strict requirements for controlled cost and high reliability to achieve widespread application and accurate early warning. Therefore, the commonly used monitoring instruments focus on surface deformation and rainfall to meet the requirements for easy equipment installation and low implementation cost. However, surface deformation and rainfall are not sufficient conditions to determine the occurrence of landslides. Various challenges exist in the existing monitoring technologies and early warning methods regarding engineering feasibility and performance improvement. Thus, it is important and urgent to summarize the existing research to rationally guide future development.[Progress] The deformation monitoring methods are divided into surface and subsurface monitoring. Most surface deformation monitoring technologies are vulnerable to the interference of terrain, environment, and other factors; therefore, their timeliness and reliability are not easily guaranteed. Additionally, slope subsurface deformation monitoring technologies can directly obtain the development and damage information of the sliding surface; thus, they can recognize the disaster precursor. Subsurface monitoring has advanced early warning ability; however, the existing instruments have problems, such as high cost, small measuring range, or difficult operation. Acoustic emission technology has the advantages of low cost, high sensitivity, and continuous real-time monitoring of large deformation, and has gradually developed into an optional method for landslide subsurface deformation monitoring. Thus, efficient landslide monitoring should comprehensively use multiple technologies to overcome the limitations of a single technology, and an integrated monitoring system becomes the state-of-the-art trend. The purpose of landslide monitoring is to provide a basis for decision-making of disaster early warning, thus, avoiding casualties and property losses through effective early warning efforts. In the field of early warning, regional meteorological and individual landslide early warning methods are gradually developed and improved. Deformation monitoring data are the main basis for landslide early warning, and experts analyze the deformation trend and sudden change characteristics. Different early warning levels could be triggered by the threshold values of velocity, acceleration, or other criteria. However, a landslide has complex dynamic mechanisms and individual differences; thus, the generic early warning model needs further exploration. 
The intelligent early warning model integrates machine learning technology with geological engineering analysis to improve the accuracy and automation level of landslide early warning.[Conclusions and Prospects] Deformation monitoring is essential in landslide prevention, and deformation data are the main basis for landslide early warning. Moreover, surface monitoring technologies have been widely used in the perception and decision-making process of landslides. Subsurface monitoring technologies can detect early precursors of landslide evolution to continuously improve early warning accuracy. Analyses show that early warning methods can be improved in the future by integrating machine learning models and geotechnical engineering.
  • PUBLIC SAFETY
    DAI Xin, HUANG Hong, JI Xinyu, WANG Wei
    Journal of Tsinghua University(Science and Technology). 2023, 63(6): 865-873. https://doi.org/10.16511/j.cnki.qhdxxb.2023.22.013
    Abstract (2392)   PDF (984)   CSCD (6)
    [Objective] Rapid prediction of rainstorm waterlogging is crucial for disaster prevention and reduction. However, traditional numerical models for simulating and predicting waterlogging over large areas with complex underlying surface conditions are complicated and time-consuming; thus, they struggle to meet the time-efficiency requirement of rainstorm waterlogging prediction. To address these shortcomings of numerical models, this study constructs a spatiotemporal prediction model of urban rainstorm waterlogging based on machine learning methods to rapidly predict waterlogging extent and water depth changes. [Methods] This study constructs a rapid prediction model of urban rainstorm waterlogging based on a hydrodynamic model and machine learning algorithms. First, a hydrodynamic model of rainstorm waterlogging in the study area is constructed based on InfoWorks integrated catchment management (InfoWorks ICM), with parameter calibration and model validation, to realize high-precision simulation of urban rainstorm waterlogging. On this basis, the hydrodynamic model is driven by designed rainfall scenarios to obtain further rainstorm waterlogging simulation results. These results are used as the base dataset for machine learning. Second, the spatial characteristic data of rainstorm waterlogging are obtained from three aspects: rainfall conditions, underlying surface information, and the drainage capacity of the pipe network, which, together with the grid simulation results, comprise the dataset. The spatial prediction models are based on random forest, extreme gradient boosting (XGBoost), and K-nearest neighbor algorithms. Finally, the simulation results of waterlogging points are used to generate rainstorm waterlogging time series data. The rainfall, cumulative rainfall, and water depth at the previous four time steps (at 5 min intervals) are used as the input of a long short-term memory (LSTM) neural network to predict the current water depth at each flooding point. The two models collaborate to achieve rapid spatial and temporal predictions of urban rainstorm waterlogging. [Results] For spatial predictions, the random forest model has the best fitting performance regarding evaluation indexes such as the mean square error, the mean absolute error, and the coefficient of determination (R2). When a rainstorm scenario with an 80-year return period and a 2.5 h rainfall duration is used for prediction, the prediction results concur with the risk map of urban waterlogging in Beijing. Compared with the simulation results of InfoWorks ICM, the prediction accuracy of the inundation extent reaches 99.51%, and the average prediction error of the waterlogging depth from the random forest model does not exceed 5.00%. For temporal predictions, the water depth trends predicted by the LSTM neural network model are consistent with the simulation results of InfoWorks ICM: the R2 values of four typical inundation points are above 0.900, the average absolute error of the water depth prediction at the peak moment is 1.9 cm, and the average relative error is 4.0%. [Conclusions] When addressing sudden rainstorms, the rapid prediction model based on machine learning algorithms built in this study can generate accurate prediction results of flooding extent and water depth in seconds by simply updating the forecast rainfall data in the model input. The model's computational speed is greatly improved compared with that of the hydrodynamics-based numerical model, which can help plan waterlogging mitigation and relief measures.
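    As a companion illustration, the following minimal PyTorch sketch shows the kind of temporal branch described above: an LSTM whose input window holds the rainfall, cumulative rainfall, and water depth of the previous four 5 min steps and whose output is the current water depth. Layer sizes, variable names, and the toy batch are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DepthLSTM(nn.Module):
    """Maps a 4-step window of (rainfall, cumulative rainfall, depth) to the current depth."""
    def __init__(self, n_features=3, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)     # predicted water depth

    def forward(self, x):                          # x: (batch, 4 time steps, 3 features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])            # use the last hidden state

model = DepthLSTM()
window = torch.rand(32, 4, 3)                      # toy batch of 32 input windows
predicted_depth = model(window)                    # shape (32, 1)
```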
  • COMPUTER SCIENCE AND TECHNOLOGY
    WANG Yun, HU Min, TA Na, SUN Haitao, GUO Yifeng, ZHOU Wuai, GUO Yu, ZHANG Wanzhe, FENG Jianhua
    Journal of Tsinghua University(Science and Technology). 2024, 64(4): 649-658. https://doi.org/10.16511/j.cnki.qhdxxb.2023.26.042
    Abstract (2020)   PDF (844)   HTML (15)   CSCD (6)
    [Significance] Since the turn of the 21st century, artificial intelligence (AI) has advanced considerably in many domains, including government affairs. Furthermore, the emergence of deep learning has taken the development of many AI fields, including natural language processing (NLP), to a new level. Language models (LMs) are key research directions of NLP. Referred to as statistical models, LMs were initially used to calculate the probability of a sentence; however, in recent years, there have been substantial developments in large language models (LLMs). Notably, LLM products, such as the generative pretrained transformer (GPT) series, have driven the rapid revolution of large language research. Domestic enterprises have also researched LLMs, for example, Huawei’s Pangu and Baidu's enhanced language representation with informative entities (ERNIE) bot. These models have been widely used in language translation, abstract construction, named-entity recognition, text classification, and relationship extraction, among other applications, and in government affairs, finance, biomedicine, and other domains. [Progress] In this study, we observe that improving the efficiency of governance has become one of the core tasks of the government in the era of big data. With the continuous accumulation of government data, traditional statistical models relying on expert experience and local features gradually suffer limitations during application. However, LLMs, which offer the advantages of high flexibility, strong representation ability, and effective results, can rapidly enhance the intelligence level of government services. First, we review the research progress on early LMs, such as statistical LMs and neural network LMs. Subsequently, we focus on the research progress on LLMs, namely the Transformers series, GPT series, and bidirectional encoder representations from transformers (BERT) series. Finally, we introduce the application of LLMs in government affairs, including government text classification, relationship extraction, public opinion risk identification, named-entity recognition, and government question answering. Moreover, we propose that research on LLMs for government affairs must focus on multimodality, correctly benefit from the trend of “model as a service,” focus on high data security, and clarify government responsibility boundaries. Additionally, a technical path for studying LLMs for government affairs has been proposed. [Conclusions and Prospects] The application of LLMs in government affairs mainly focuses on small-scale models, lacking examples of application in large-scale models. Compared with smaller models, large models offer many advantages, including high efficiency, broader application scenarios, and more convenience. These advantages can be understood as follows. In terms of efficiency, large models are usually trained on a large amount of heterogeneous data, thus delivering better performance. In terms of application scenarios, large models gradually support multimodal data, resulting in more diverse application scenarios. In terms of convenience, we emphasize the “pretraining + fine-tuning” mode and the invocation method of interfaces, making LLMs more convenient for research and practical applications. This study also analyzes the issues suffered by LLMs, specifically from the technological and ethical perspectives, which have resulted in a panic to a certain extent. 
For example, ChatGPT has generated many controversies, including whether the generated content is original, whether using ChatGPT amounts to plagiarism, and who owns the intellectual property rights to the generated content. Overall, LLMs are in a stage of vigorous development. As the country promotes research on AI and its application in government affairs, LLMs will play an increasingly crucial role in this field.
  • LOW-CARBON TRANSPORTATION & GREEN DEVELOPMENT
    WANG Yan, OU Guoli
    Journal of Tsinghua University(Science and Technology). 2023, 63(11): 1693-1706. https://doi.org/10.16511/j.cnki.qhdxxb.2023.26.033
    Abstract (1647)   PDF (643)   HTML (8)
    [Significance] The issue of climate change is extremely complex and encompasses multiple factors such as the environment, economy, society, and related aspects. With the ongoing maturation of complex system modeling technology, low-carbon transportation research using the computable general equilibrium (CGE) model presents a new approach to policy evaluation. The CGE model has three primary advantages for analyzing the economic challenges of transitioning to low-carbon transportation. First, the approach has a solid microeconomic foundation that can directly reflect the mechanism and influence of economic subjects' behavior under the assumption of a rational economic player. Second, CGE models are capable of fully simulating the connections of different economic sectors, which can uncover the transmission effect of transportation policy impact among various sectors, as well as the response of various sectors to the policy impact. Third, the model has two major types, static and dynamic CGE models, which can analyze the short- and long-term impact of different policies, respectively. As an essential prediction tool for policy impact and trend analysis, CGE models can comprehensively reveal the interaction characteristics between the transportation industry and the whole national economy, enabling the prediction of the economic and social impact of low-carbon transportation policies. [Progress] This study investigates contemporary research on transportation policies based on the CGE model. A total of 78 relevant empirical studies are collected from the Web of Science, Science Direct, and China National Knowledge Infrastructure, of which more than 50% focus on predicting the impact of low-carbon transportation policies, indicating that the investigation of traffic-related carbon emissions has gradually become a popular topic of empirical analysis using CGE models. The research topics include: (1) The influence of low-carbon transportation economic incentives, such as carbon tax, emission trading scheme, and transportation subsidies. (2) The application effect of low-carbon technologies, such as electric vehicles and carbon capture and storage. (3) The effect of low-carbon transportation urban planning, including land use, vehicle speed limits, walking-oriented urban design, and bicycle-oriented urban space development. (4) Predicting the economic and social impact of the implementation of nationally determined contributions and fuel economy standards. Previous research establishes a solid foundation for prediction and policy analysis in low-carbon transportation research; however, in the context of China's 2030 carbon peak and 2060 carbon neutrality goals, some issues remain that require further exploration and investigation. [Conclusions and Prospects] First, regarding emissions reduction policies, differing transportation needs, transportation structure, energy structure, technical level, and macropolicies will affect transportation carbon emissions. The carbon emissions reduction potential of various policies requires further study, and it is essential to propose structured solutions referencing the prediction and design of composite system transportation emissions reduction policies. 
Based on China's 1+N policy system for advancing the dual carbon goals, this study constructs a low-carbon transportation policy matrix based on the “avoid/shift/improve-planning/regulatory/economic/information/technological (ASI-PREIT)” structure, producing a proposed “policy basket” for low-carbon transportation CGE modeling. This policy matrix will comprehensively reveal the correlation between policy tools for low-carbon transportation CGE modeling and help put forward structured low-carbon solutions. Second, in terms of model construction, accessibility is the most intuitive factor for transportation. As with other sectors, treating the transportation sector simply as a product production sector risks neglecting network and external benefits; therefore, this study proposes the inclusion of transportation accessibility factors in low-carbon transportation CGE models as spatial computable general equilibrium model to identify regional economic correlations and regional product flow. Third, in terms of synergies, carbon emissions reduction in transportation is crucial to achieving China's dual carbon goals and can advance innovation and economic growth, leveraging a wide range of synergies, including sustainable development, improving public health, and enhancing the overall quality of life. Currently, increasingly severe ecological and environmental challenges are forcing global economies to reassess the GDP-centered development model, seeking balanced and sustainable development strategies that include environment, economy, and society. This study proposes the development of a comprehensive low-carbon transportation CGE model to compare and analyze the optimal solutions for balancing the co-benefits of environment-economy-society from a global perspective and design low-carbon transportation policy combinations to advance sustainable development. In summary, this study endeavors to systematically review the empirical research applying CGE models in the field of low-carbon transportation, provide a reference for expanding the research on low-carbon transportation, and help policymakers and the transportation sector achieve China's dual carbon goals.
  • BIG DATA
    ZHAO Xingwang, HOU Zhedong, YAO Kaixuan, LIANG Jiye
    Journal of Tsinghua University(Science and Technology). 2024, 64(1): 1-12. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.001
    Abstract (1593)   PDF (631)   HTML (10)
    [Objective] Multiview graph clustering aims to investigate the inherent cluster structures in multiview graph data and has received extensive research attention over recent years. However, the quality of different views differs, whereas existing methods treat all views equally during the fusion process without assigning weights according to view quality. This may result in the loss of complementary information from multiple views and ultimately affect the clustering quality. Additionally, the topological structure and attribute information of nodes in multiview graph data differ significantly in content and form, making it challenging to integrate these two types of information effectively. To solve these problems, this paper proposes two-stage fusion multiview graph clustering based on an attention mechanism. [Methods] The algorithm can be divided into three stages: feature filtering based on graph filtering, feature fusion based on the attention mechanism, and topological fusion based on the attention mechanism. In the first stage, graph filters are applied to combine the attribute information with the topological structure of each view. In this process, a smoother embedding representation is achieved by filtering out high-frequency noise. In the second stage, the smooth representations of individual views are fused using attention mechanisms to obtain the consensus smooth representation, which incorporates information from all views. Additionally, a consensus Laplacian matrix is obtained by combining the Laplacian matrices of multiple views using learnable weights. To obtain the final embedded representation, the consensus Laplacian matrix and consensus smooth representation are input into an encoder. Subsequently, the similarity matrix of the final embedded representation is computed. Training samples are selected from the similarity matrix, and the embedded representation and the learnable weights of the Laplacian matrix are optimized iteratively to obtain a more compact embedded representation. Finally, performing spectral clustering on the embedded representation yields the clustering results. The performance of the algorithm is evaluated using widely used clustering evaluation metrics, including accuracy, normalized mutual information, adjusted Rand index, and F1-score, on three datasets: Association for Computing Machinery (ACM), Digital Bibliography & Library Project (DBLP), and Internet Movie Database (IMDB). [Results] 1) The experimental results show that the proposed algorithm is more effective in handling multiview graph data than existing methods, particularly on the ACM and DBLP datasets, although it does not perform as well as LMEGC and MCGC on the IMDB dataset. 2) By exploring view quality with the proposed method, the algorithm can learn weights specific to each view based on its quality. 3) Compared with the best-performing single view on each dataset (ACM, DBLP, and IMDB), the proposed algorithm achieves average performance improvements of 2.4%, 2.9%, and 2.1%, respectively, after fusing all views. 4) Exploring the effect of the number of graph filter layers and the ratio of positive to negative node pairs on the performance of the algorithm shows that the best performance is achieved with a small number of graph filter layers, and the optimal ratios of positive and negative node pairs are around 0.01 and 0.5, respectively. [Conclusions] The algorithm combines attribute information with topological information through graph filtering to obtain smoother representations that are more suitable for clustering. The attention mechanisms can learn weights from both the topological and attribute perspectives based on view quality. In this way, the representation incorporates information from each view while avoiding the influence of poor-quality views. The proposed method achieves the expected results, greatly enhancing the clustering performance.
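    The first two stages described above can be illustrated with the following hedged sketch: per-view low-pass graph filtering with the normalized Laplacian, followed by a softmax-weighted fusion of the smoothed view representations. The filter order, fusion scores, and all variable names are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def graph_filter(adj, feats, k=2):
    """Apply (I - 0.5*L_sym)^k to node features to suppress high-frequency noise."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt   # symmetric normalized Laplacian
    smoother = np.eye(adj.shape[0]) - 0.5 * lap
    out = feats
    for _ in range(k):
        out = smoother @ out
    return out

def fuse_views(view_reprs, scores):
    """Weight each view's smoothed representation by softmax-normalized scores."""
    w = np.exp(scores - scores.max())
    w = w / w.sum()
    return sum(wi * hi for wi, hi in zip(w, view_reprs))

# toy example: two views over 5 nodes with 4-dimensional attributes
rng = np.random.default_rng(0)
adjs = [rng.integers(0, 2, (5, 5)) for _ in range(2)]
adjs = [np.triu(a, 1) + np.triu(a, 1).T for a in adjs]            # symmetrize, no self-loops
x = rng.random((5, 4))
smoothed = [graph_filter(a, x) for a in adjs]
consensus = fuse_views(smoothed, scores=np.array([1.0, 0.5]))     # scores are learned in practice
```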
  • AEROSPACE ENGINEERING
    LIN Weiquan, XU Hangrui, LAN Xudong
    Journal of Tsinghua University(Science and Technology). 2024, 64(9): 1521-1535. https://doi.org/10.16511/j.cnki.qhdxxb.2024.27.024
    Abstract (1424)   PDF (970)   HTML (12)   CSCD (1)
    [Significance] With the rapid advancements in aerospace engineering technology, the performance requirements for aircraft are continually escalating. At present, there are numerous goals for aircraft utilization in the military, transportation, and other sectors. Hypersonic aircraft, which operate under multiple operating conditions and across a wide velocity range, have become a hot research topic in many countries. A critical component of such high-performance aircraft is the power system: a high-performance aircraft must be matched with a high-performance power system. Existing mature power systems have limitations in terms of working conditions and performance; therefore, the development trend has shifted toward combined power systems. Currently, prominent combined engines include the rocket-based combined cycle (RBCC), turbine-based combined cycle (TBCC), air turborocket, and precooled engines. When considering factors such as cost, performance, and safety, TBCC engines emerge as the most promising power system for hypersonic aircraft within the near-space range of 20-100 km because of advantages such as a wide flight envelope, reusability, and high specific thrust. Therefore, summarizing the key TBCC technologies and exploring their development path is crucial. [Progress] The United States, Japan, and the United Kingdom are pioneers in combined power research. These countries have achieved significant technical achievements, possess mature technologies, and have completed the entire research and development cycle for combined engine products; they are at the forefront of this field. In its future research and development strategy, the United States focuses on system-wide research of TBCC and RBCC technologies. Following the completion of the HYPR90 program, Japan has conducted an in-depth study of the precooled engine ATREX. Meanwhile, the UK continues its extensive research on SABRE, aiming to deploy it in future single-stage spacecraft. Other countries, such as Germany, Russia, and China, are also engaged in large-scale TBCC research, accumulating technologies to achieve breakthroughs from theory to engineering application in the future. In terms of key TBCC technologies, this paper analyzes and summarizes advancements in propulsion system technology and subsystem technology. For subsystems, current TBCC inlet forms are reviewed, with advanced mixed rectangular divergent and integrated multidimensional cross-sectional configurations being analyzed; the future direction points toward the development of 3D internal contraction inlets. The advantages and disadvantages of series and parallel exhaust systems are analyzed alongside the basic theory of the exhaust process, emphasizing the need for more theoretical support for exhaust systems. Numerous achievements in modal conversion control technology are listed, highlighting that future research should focus on integrating strongly coupled flight control with modal control technology. Regarding propulsion system technology, a comprehensive theoretical model for aircraft-engine integration is presented, pointing out the defects of the traditional separate design approach for aircraft and engines. This paper reviews the development of performance simulation and testing technologies domestically and internationally, suggesting that future work should involve developing sophisticated simulation software and building new test benches. [Conclusions and Prospects] The combined engine essentially integrates four types of engines: turbine, rocket, ramjet, and precooled engines. This paper summarizes the key technologies of TBCC and explores their development routes while also providing three prospects for the future form of combined engines: combining new basic power forms, adopting new energy sources, and incorporating external drive platforms.
  • CONSTRUCTION MANAGEMENT
    LI Enyuan, LIU Hongyu, ZHU Enwei
    Journal of Tsinghua University(Science and Technology). 2024, 64(2): 173-180. https://doi.org/10.16511/j.cnki.qhdxxb.2023.22.045
    Abstract (1406)   PDF (540)   HTML (13)
    [Objective] The land market plays a key role in achieving a stable and healthy market for real estate development and has a significant impact on macroeconomic conditions, government finances, and overall financial stability. However, many participants in the Chinese land market engage in blind expansion and irrational land acquisition, which interferes with achieving policy objectives such as price stability for land and housing, as well as the healthy development of the land market. This study analyzes market feedback of land auction participants and the factors influencing them. We use auction theory to examine the existence of the winner's curse phenomenon in the Beijing land market and provide policy recommendations for the government to regulate the land market. [Methods] This study is based on micro-level auction data from land sales in Beijing between 2013 and 2018 and from the Wind enterprise database. We first construct models to calculate cumulative abnormal returns for the auction participants' stock prices in the periods following land auctions. We then use an event study to explore the effects of participant, land, and auction characteristics on stock price changes. Particular attention is given to market feedback on the auction winners. The uniqueness of this study lies in the vast amount of data used. We consider factors that are crucial elements of market feedback but have been relatively unexplored in previous studies, such as participants' bidding premiums, past experiences in land auctions, and joint bidding. [Results] We find that:(1) The higher the final bidding premium of land auction participants, the more negative the market's reaction. Previous experience with repeated bidding and joint bidding enables participants to access more market information, helping mitigate irrational bidding. Variables such as land value, bidding intensity, and the frequency of winning bids on a single day that reflect a bidder's economic strength lead to more positive market evaluations. (2) Evidence of the winner's curse phenomenon is observed in the Beijing land market. Although cumulative abnormal returns do not show significant inter-group differences between winners and losers, results of controlling for the final bidding premium reveal that higher bidding premiums result in more negative market evaluations for the winners. Joint bidding helps winners to make rational bids, but the effect of repeated participation in the short term is not significant. (3) The market holds a significantly negative view of bidders who are active over an extended period, and this effect is more pronounced for the winners, providing additional evidence for the existence of the winner's curse phenomenon. [Conclusions] Based on these findings, we recommend the government to enact policies to encourage market participants to make rational bids. This could be achieved by promoting complementary advantages and sharing market information through joint bidding to some extent. The government should also enhance information disclosure through various means to alleviate information asymmetry in the market and strengthen supervision of active market participants' funds and the development and construction processes to reduce irrational bidding behavior.
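    For readers unfamiliar with the event-study machinery mentioned above, the following sketch shows one common way to compute a cumulative abnormal return: fit a market model over an estimation window, then cumulate the differences between actual and expected returns over the event window. The window lengths and the synthetic return series are illustrative assumptions; the paper's exact specification may differ.

```python
import numpy as np

def cumulative_abnormal_return(stock_ret, market_ret, est_len=120, event_len=5):
    """CAR = sum of (actual - expected) returns over the event window."""
    est_s, est_m = stock_ret[:est_len], market_ret[:est_len]
    beta, alpha = np.polyfit(est_m, est_s, 1)          # market model: r_s = alpha + beta * r_m
    ev_s = stock_ret[est_len:est_len + event_len]
    ev_m = market_ret[est_len:est_len + event_len]
    abnormal = ev_s - (alpha + beta * ev_m)            # abnormal returns around the auction
    return abnormal.sum()

# toy data: 120 estimation days plus a 5-day event window
rng = np.random.default_rng(1)
market = rng.normal(0.0, 0.01, 125)
stock = 0.0002 + 1.1 * market + rng.normal(0.0, 0.005, 125)
print(cumulative_abnormal_return(stock, market))
```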
  • Research Article
    FU Wen, WEN Hao, HUANG Junhui, SUN Binxuan, CHEN Jiajie, CHEN Wu, FENG Yue, DUAN Xingguang
    Journal of Tsinghua University(Science and Technology). 2023, 63(7): 1068-1077. https://doi.org/10.16511/j.cnki.qhdxxb.2023.26.025
    Abstract (1342)   HTML (15)   CSCD (3)
    [Objective] The South-to-North Water Diversion Project is a strategic project in China. Since its construction, it has become the main water source for more than 280 cities. The diversion tunnel is the key structure supporting the project. Owing to its long route, large diameter, high water pressure, and complex surrounding rock geology, as well as years of hydraulic erosion, biochemical erosion, geological action, and other influences, typical defects such as cracks, collapse, and exposed steel bars are prone to occur. Manual inspection of defects in the tunnel not only consumes time and energy but also suffers from low accuracy and poor timeliness. Therefore, underwater robot inspection technology has become a hotspot of current research. Among these technologies, the underwater manipulator can be installed not only on an underwater vehicle but also on other required platforms to complete tasks such as cleaning the water surface, laying and repairing cables, salvaging sunken objects, and cutting ropes. However, controlling the underwater manipulator is complicated and difficult because of its time-varying dynamics, nonlinear properties, external disturbances, and hydrodynamic effects. The main purpose of this paper is to establish a dynamics model of the underwater manipulator and improve its trajectory tracking accuracy. [Methods] In this paper, a modeling method combining the Newton-Euler equations and the Morison equation is proposed, and the dynamic parameters are then identified. To improve the precise control ability of the manipulator in a complex, transient underwater environment, an adaptive sliding mode control method is designed that compensates the nonlinear dynamics model and uses a radial basis function (RBF) neural network to compensate for the unmodeled dynamics and modeling errors of the system. Through the dynamic modeling in Section 4, a detailed dynamic simulation environment of the underwater manipulator is obtained. Gaussian noise errors with amplitudes of 5, 20, 15, 10, 8, and 5 N·m are set for the respective joints. On this basis, in Experiment 1 (P1), a double-loop proportional-integral-derivative (PID) controller is designed for the control simulation. In Experiment 2 (P2), an RBF neural network is used to fit and compensate for the system modeling errors and unmodeled terms. In Experiment 3 (P3), dynamic model compensation is added on the basis of P2. [Results] The trajectory tracking performance of P2 and P3 was clearly better than that of P1, and after the dynamic model was compensated, the tracking performance of P3 was also better than that of P2. [Conclusions] Through simulation, this paper verifies the effectiveness of the proposed hydrodynamic modeling of the manipulator. On the basis of compensating the nonlinear dynamic model, the adaptive sliding mode control method that uses an RBF neural network to compensate for the unmodeled dynamics and modeling errors of the system achieves higher trajectory tracking accuracy than traditional PID control and general RBF network adaptive sliding mode control.
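    The control idea described above can be sketched, in highly simplified form, for a single joint: a sliding surface drives the tracking error, an RBF network adaptively compensates the unmodeled dynamics, and a smoothed switching term adds robustness. The gains, basis centers, and toy plant below are illustrative assumptions, not the identified model or the paper's parameters.

```python
import numpy as np

centers = np.linspace(-2.0, 2.0, 9)           # RBF centers over the joint-angle range
width = 0.5
weights = np.zeros_like(centers)              # adaptive RBF weights
lam, k_s, gamma = 5.0, 2.0, 20.0              # surface slope, switching gain, adaptation rate
dt = 0.001

def rbf(x):
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

q = dq = 0.0                                  # joint angle and velocity
for step in range(5000):
    t = step * dt
    qd, dqd, ddqd = np.sin(t), np.cos(t), -np.sin(t)      # desired trajectory
    e, de = qd - q, dqd - dq
    s = de + lam * e                                       # sliding surface
    phi = rbf(q)
    tau = ddqd + lam * de + phi @ weights + k_s * np.tanh(s / 0.05)
    weights += gamma * phi * s * dt                        # adaptive law for the RBF weights
    ddq = tau - 0.8 * dq - 0.3 * np.sin(q)                 # toy plant with unknown drag/friction
    dq += ddq * dt
    q += dq * dt
print("final tracking error:", qd - q)
```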
  • Review
    CHEN Yongcan, CHEN Jiajie, WANG Haoran, GONG Yu, FENG Yue, LIU Zhaowei, QI Ningchun, LIU Mei, LI Yonglong, XIE Hui
    Journal of Tsinghua University(Science and Technology). 2023, 63(7): 1015-1031. https://doi.org/10.16511/j.cnki.qhdxxb.2023.26.015
    Abstract (1308)   PDF (509)   HTML (8)   CSCD (4)
    [Significance] Headrace tunnels are key structures of major projects characterized by long tunnel lines, large tunnel diameters, high water pressure, and complex surrounding rock geology. Typical defects, such as cracks, landslides, and exposed reinforcement, will occur during long years of operation. If they are not prevented, the safe operation of the project will be seriously affected. Long cycles, high safety risk, high leak rate, and insufficient information are all issues with traditional manual inspection. Given the urgent need for regular inspection of large-diameter and super-long headrace tunnels in super-large water conservancy and hydropower projects, this study solved key scientific issues, such as the adaptability of robot underwater environment tasks, the active detection of super-long headrace tunnel apparent defects, and the safety risk assessment of tunnel structures based on robot inspection data. The key technology breakthroughs include the sub-parent cooperation of complex underwater environments, the fine operation of load manipulator, ultra-long distance underwater high-voltage power supply, umbilical cable safe release and recovery, ultra-long distance human-machine cooperative control, special environment adaptation of underwater robots, active defect detection and identification based on multi-sensor fusion. Structural safety classification, risk analysis and evaluation, and virtual drills were also carried out. The developed underwater robot inspection system was successfully applied to large-diameter and long headrace tunnels for comprehensive verification. [Progress] The application performance of underwater robots in special environments has improved due to breakthroughs in key technologies such as remote power supply, cooperative operation, intelligent patrol inspection, defect identification, and safety assessment of robots in complex underwater environments including water turbidity, high water pressure, adhesion and siltation, and local accessibility difficulties. The safety classification and risk assessment of the headrace tunnel structure are completed through the research and development of the multi-function “sub-parent” underwater robot system, and the whole process integration of “inspection, inspection, control, diagnosis, and use” of the underwater robot is realized, which has been demonstrated and verified in the eastern route of the South-to-North Water Transfer Project, Jinping Ⅱ Hydropower Station, and other major national projects, to improve the intelligent degree of the inspection of the headrace tunnel of large water conservancy and hydropower projects and support the safe operation of large projects. 
[Conclusions and Prospects] The research findings can significantly improve the accuracy of headrace tunnel inspection, reduce its cost, and increase the assurance of safe operation of large water conservancy and hydropower projects; promote the interdisciplinary integration of artificial intelligence and water conservancy to form interdisciplinary advantages; and promote the application of robots in special environments, especially in headrace tunnel inspection, thereby guiding the development of such robots. Promoting the application of artificial intelligence and the intelligent management of water conservancy projects, improving the level of technology and equipment in relevant fields in China, and cultivating versatile talents will have significant social, economic, and scientific value.
  • LOW-CARBON TRANSPORTATION & GREEN DEVELOPMENT
    SONG Yuanyuan, YAO Enjian, XU Honglei, HUANG Quansheng, WU Rui, WANG Renjie
    Journal of Tsinghua University(Science and Technology). 2023, 63(11): 1707-1718. https://doi.org/10.16511/j.cnki.qhdxxb.2023.26.021
    Abstract (1292)   PDF (505)   HTML (8)   CSCD (3)
    [Significance] Climate change is the primary challenge that intensely affects sustainable human development. The transport sector has been one of the major sources of carbon emissions and is considerably affected by climate change. Because of the growth of China's economy and total transport demand, transport-related carbon emissions are also gradually increasing. Moreover, frequent complex and extreme climate events with clear regional differences have negatively affected the construction, maintenance, and operation of the transport infrastructure. Therefore, China's transport sector needs to reduce carbon emissions for green and low-carbon developments and improve its adaptability and resistance to various adverse climatic conditions. However, China's transport sector still faces many challenges in mitigating and adapting to climate change, and its policy tools, measures, and basic capacity to cope with climate change need to be enhanced. Therefore, transport sector-related strategies and routes to adapt to climate change need to be explored. [Progress] First, the policies and measures implemented in different countries to address climate change were introduced from the perspectives of mitigation and adaptation. Second, the advancements made by China's transport sector in mitigating climate change were summarized from the perspectives of the construction of green and low-carbon transport infrastructure, optimization of the transport structures, and promotions and applications of new and clean energy. The measures implemented to adapt to climate change in China's transport sector were summarized from the perspectives of improving the adaptability of the transport infrastructure, strengthening the monitoring and warning systems of climate change, and managing risk. Third, the interactions between each subfield and sublink of the transport system and climate change, as well as the main measures implemented to mitigate and adapt to climate change in the transport sector, were analyzed. Finally, key areas, strategies, and methods to mitigate and adapt to climate change were proposed. [Conclusions and Prospects] Analysis results are provided and discussed. First, the current plan for China's transport response to climate change needs improvement. The capacity to respond to climate change has not been planned at the subfield and sublink level of the transport system. For mitigating climate change, carbon emissions reduction measures, such as the promotion of new energy vehicles and ships, as well as the optimization of the transport structure, are inadequate. Furthermore, the assessment of the effects of the transport infrastructure on climate change is still in its infancy. Second, the direction of the transport system's development should be combined with the strategic requirements of mitigation and adaptation to climate change. Third, in the transport field, the infrastructure, equipment, and transport structure should be improved; moreover, the infrastructure should be adapted to climate change, and emergency support of transport equipment and transportation organization in extreme weather should be optimized to enhance the capability to adapt to climate change. Finally, the following measures are proposed: Mitigation and adaptation to climate change should be jointly and appropriately implemented to comprehensively address climate change in the transport sector. Greenhouse gases and air pollutants should be jointly controlled to realize the goal of “double carbon”. 
Adaptation to climate change should be applied in conjunction with ecological protection and restoration to strengthen the capacity of the transport sector to adapt to climate change.
  • ECONOMIC AND PUBLIC MANAGEMENT
    ZHU Wuxiang, LIAO Jingqiu, ZHAN Ziliang, TAN Zhijia
    Journal of Tsinghua University(Science and Technology). 2023, 63(9): 1467-1482. https://doi.org/10.16511/j.cnki.qhdxxb.2023.21.008
    Abstract (1281)   PDF (457)
    [Significance] Because of multiple factors, such as deleveraging policy, slowing economic growth, trade friction, and the COVID-19 pandemic, debt defaults are occurring with increasing frequency, which could trigger risk contagion and even lead to systemic financial risks. However, some facts indicate that the existing financial distress prediction model is not sufficiently effective; for example, the nonperforming loan ratio of commercial banks shows a rising trend, and the downgrade of ratings usually lags considerably. Thus, government departments and market entities have a strong demand for improving and optimizing the financial distress prediction model, which is necessary to realize risk identification and early warning. An effective prediction model can provide early warnings of investment risks and help financial institutions and investors reduce losses, assist regulators in establishing a multichannel default disposal mechanism, and improve the credit environment of the capital market.[Progress] Based on an extensive literature search in top journals and conferences from 1932 to 2020, this paper reviews four topics, including the financial distress definition, statistical model, variable selection, and model efficiency evaluation method, then further summarizes three research anomalies: 1) Existing financial distress prediction models often focus on the prediction of deep crises, such as insolvency and bankruptcy, which may lead to a delayed warning and market panic. 2) The innovation of financial distress prediction research focuses on applying new computer algorithms and statistical models as well as considering nonfinancial information. One confusing fact is that the judgment of financial distress depends on the selected model, indicators, and sample set rather than the fundamental factors of the enterprise; thus, different prediction models may produce contradictory results on the judgment of the same enterprise. 3) The identification of financial distress relies on comparing an enterprise's future capital cash flow and rigid payment. However, most of the existing financial distress prediction models apply a multivariate weighting method according to common historical financial indicators.[Conclusions and Prospects] This paper proposes a cross-model evaluation framework to compare their financial distress prediction effectiveness and provides improvement suggestions including “one principle, three directions.” The one principle indicates that to accurately assess and manage the absolute risk of financial distress, the study of financial distress prediction should return to the financial principle and pay attention to future capital cash flow. The three directions that need to pay attention include: 1) early financial distress warnings, such as liquidity crisis warnings; 2) steady repayment sources, including operating cash inflows, reliable asset disposal earnings, and refinancing, rather than relying on the total assets of the balance sheet, current assets, and other indicators; 3) financing contracts and full scenario analyses of future capital cash outflows rather than just current ratio, quick ratio, asset-liability ratio, and other liability indicators. In the future, with the development of big data and the improvement in information transmission efficiency, corporate information disclosure will be considerably enhanced, allowing more accurate cash flow and repayment prediction. A prediction model assessing absolute financial distress risk has greater potential.
  • LOW-CARBON TRANSPORTATION & GREEN DEVELOPMENT
    HUANG Ailing, WANG Zijian, ZHANG Zhe, LI Mingjie, SONG Yue
    Journal of Tsinghua University(Science and Technology). 2023, 63(11): 1729-1740. https://doi.org/10.16511/j.cnki.qhdxxb.2023.26.034
    Abstract (1280)   PDF (474)   HTML (5)
    [Objective] In the airport ground-transport system, it is operationally important to match the evacuation demand of passengers with the capacity of multimodal transport vehicles. Numerous studies have investigated single-mode transport capacity allocation; however, research on multimode allocation is scarce. [Methods] To mitigate the difficulty of realizing an exact match between the evacuation demand of passengers and the capacity of multimodal transport, a bi-level programming model for multimodal transport resource allocation is proposed based on an analysis of the interaction between capacity allocation and passenger travel choice. A utility function of multiple travel modes, including airport buses, metro, taxis, and private cars, is formulated with the following four features: travel time, travel cost, punctuality, and comfort. The upper-level objective is to minimize the total of the enterprise operation cost, passenger waiting cost, and carbon emission cost by optimizing the headways of public transit and the taxi arrival rate, subject to the capacity of each transit mode, the range of each public-transit headway determined by fixed equipment, and the range of the taxi arrival rate determined by the capacity of the boarding location. Given the transport capacity allocation scheme output by the upper level, the lower-level objective is to assign the passenger flow to the travel modes according to a logit-based stochastic user equilibrium model with this utility function. Furthermore, an improved genetic algorithm combined with the method of successive averages (MSA) is designed to solve the proposed bi-level programming model. To improve the solving efficiency of the algorithm, a pre-search mechanism is proposed, in which infeasible solutions are filtered out using a low-precision MSA to reduce the computational cost of repeatedly calling the lower-level model. [Results] Beijing Daxing International Airport was considered as a case study to illustrate the efficiency and effectiveness of the proposed bi-level programming model in optimizing transport capacity allocation in airport ground-transport centers. The transport capacity allocation scheme obtained via the proposed model reduced the average passenger waiting time and the total carbon emissions of the system by 14.08% and 6.21%, respectively, while increasing the operation cost by only 1.32%. Moreover, the optimized capacity allocation scheme shifted 6.7% of passengers from taxis and private cars to the more environmentally friendly buses and metro. The proposed solution algorithm could efficiently solve the bi-level model: under the pre-search mechanism, the generation time of the scheme was 217.6 s, which meets practical requirements within an acceptable time. [Conclusions] Results show that the optimized scheme obtained from the bi-level model and algorithm is considerably better than the original scheme. The proposed scheme reduces passenger waiting time and the carbon emissions of the multimodal transport system at a negligible cost. Using the optimized scheme, the organizers of airport ground-transport centers can coordinate the capacities of multiple landside transport modes and guide passengers reasonably. This will reduce operation costs, improve the airport landside traffic structure, and encourage green and low-carbon travel.
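    The lower-level choice model described above can be illustrated with a hedged sketch of a logit split of passenger demand across the four modes, based on a linear utility of travel time, cost, punctuality, and comfort. The coefficients, attribute values, and demand figure are illustrative placeholders, not calibrated parameters from the paper.

```python
import numpy as np

modes = ["airport bus", "metro", "taxi", "private car"]
# columns: travel time (min), cost (CNY), punctuality score, comfort score
attrs = np.array([
    [55.0,  30.0, 0.7, 0.6],
    [60.0,  10.0, 0.9, 0.5],
    [40.0, 120.0, 0.6, 0.8],
    [45.0,  90.0, 0.5, 0.9],
])
coef = np.array([-0.05, -0.01, 1.5, 1.0])      # assumed marginal utilities

def logit_split(attrs, coef, theta=1.0):
    """Logit shares with dispersion parameter theta (larger theta = more deterministic)."""
    util = attrs @ coef
    expu = np.exp(theta * (util - util.max()))
    return expu / expu.sum()

demand = 3000                                   # passengers to evacuate in the period
flows = demand * logit_split(attrs, coef)
for m, f in zip(modes, flows):
    print(f"{m:>12s}: {f:7.1f} passengers")
```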
  • Editorial
    Journal of Tsinghua University(Science and Technology). 2023, 63(7): 1013-1013.
    Abstract (1195)   PDF (497)   HTML (20)
  • COMPUTER SCIENCE AND TECHNOLOGY
    ZHANG Yang, JIANG Minghu
    Journal of Tsinghua University(Science and Technology). 2023, 63(9): 1390-1398. https://doi.org/10.16511/j.cnki.qhdxxb.2023.21.013
    Abstract (1170)   PDF (525)
    [Objective] Authorship identification is a study for inferring authorship of an unknown text by analyzing its stylometry or writing style. The traditional research on authorship identification is generally based on the empirical knowledge of literature or linguistics, whereas modern research mostly relies on mathematical methods to quantify the author's writing style. Currently, researchers have proposed various feature combinations and neural network models. Some feature combinations can achieve better results with traditional machine learning classifiers, while some neural network models can autonomously learn the relationship between the input text and corresponding author to extract text features implicitly. However, the current research mostly focuses on character and lexicon features. Furthermore, the exploration of syntactic features is limited. How to use the dependency relationship between different words in a sentence and combine syntactic features with neural networks still remains unclear. This paper proposes an authorship identification method based on the syntax tree node embedding, which introduces syntactic features into a deep learning model. [Methods] We believe that an author's writing style is mainly reflected in the way he chooses words and constructs sentences. Therefore, this paper mainly develops the authorship identification model from the perspectives of words and sentences. The attention mechanism is used to construct sentence-level features. First, an embedding representation of the syntax tree node is proposed, and the syntax tree node is expressed as a sum of embeddings corresponding to all its dependency arcs. Thus, the information on sentence structure and the association between words are introduced into the neural network model. Then, a syntactic attention network using different embedding methods to vectorize text features, such as dependencies, part-of-speech tags, and words, is constructed, and a syntax-aware vector is obtained through this network. Furthermore, the sentence attention network is used to extract the features from the syntax-aware vector to distinguish between different authors, thereby generating the sentence representation. Finally, the result is obtained by the classifier and the correct rate is used to evaluate the result. [Results] Experiments on CCAT10, CCAT50, IMDb62, and the Chinese novel data sets show that an increase in the number of authors causes a downward trend in the accuracy rate of the model proposed in the paper. In some data points, an increase in the number of authors resulted in an increase instead of a decrease in the correct rate. This shows that the ability of the model proposed in this study to capture the writing style of different authors is considerably different. Furthermore, when we change the number of authors on the IMDb dataset, the correct rate of the model in the paper is found to be slightly lower than the BertAA model in the case of 5 authors; however, the rate is higher than the BertAA model in the case of 10, 25, and 50 authors. Additionally, when the experimental results of the model are compared to other models on the CCAT10, CCAT50, and IMDb62 data sets, the performance of this model is observed to be ranked as second or third. [Conclusions] The attention mechanism demonstrated its efficiency in text feature mining, which can fully capture an author's style that is reflected in different parts of the document. 
The integration of lexical and syntactic features based on the attention mechanism enhances the overall performance of the model, which performs well on both Chinese and English datasets. Notably, the introduction of dependency syntax makes the model more interpretable, as it can explain the styles of different authors at the levels of word choice and sentence construction.
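As a minimal, illustrative sketch (not the authors' code) of the idea described above, the snippet below represents a syntax-tree node as the sum of the embeddings of its dependency arcs and pools the node vectors with a simple attention layer; the dependency labels, dimensions, and toy parse are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Assumed vocabulary of dependency relations and its embedding table.
dep_vocab = {"nsubj": 0, "dobj": 1, "det": 2, "amod": 3}
dep_emb = rng.normal(size=(len(dep_vocab), dim))

def node_embedding(arc_labels):
    """A syntax tree node is expressed as the sum of the embeddings of its dependency arcs."""
    return sum(dep_emb[dep_vocab[a]] for a in arc_labels)

def attention_pool(node_vecs):
    """Dot-product attention over node vectors to build a sentence-level feature."""
    H = np.stack(node_vecs)                     # (n_nodes, dim)
    query = rng.normal(size=dim)                # learnable query vector in a real model
    scores = H @ query / np.sqrt(dim)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ H                          # (dim,) sentence representation

# Toy sentence: each word listed with the labels of the arcs attached to it.
arcs_per_word = [["nsubj"], ["nsubj", "dobj"], ["det", "dobj"], ["amod"]]
sentence_vec = attention_pool([node_embedding(a) for a in arcs_per_word])
print(sentence_vec.shape)                       # (16,)
```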
  • BIG DATA
HU Minghao, WANG Fang, XU Xiantao, LUO Wei, LIU Xiaopeng, LUO Zhunchen, TAN Yushan
    Journal of Tsinghua University(Science and Technology). 2023, 63(9): 1309-1316. https://doi.org/10.16511/j.cnki.qhdxxb.2023.21.010
    Abstract (1167) PDF (447)   Knowledge map   Save CSCD(1)
[Objective] The abundant information resources available on the internet about defense technology are vital data sources for obtaining high-value military intelligence. The aim of open information extraction in the defense technology field is to extract structured triplets containing the subject, predicate, object, and other arguments from the massive amount of information available on the internet. This technology has important implications for ontology induction and the construction of knowledge graphs in the defense technology domain. However, while information extraction in the general domain yields good results, open information extraction in the defense technology domain faces several challenges, such as a lack of domain-annotated data, the inability to handle overlapping arguments, and difficulty recognizing long entities.[Methods] In this paper, an annotation strategy based on entity boundaries is proposed, and an annotated dataset in the defense technology field is constructed with the help of domain experts. Furthermore, a two-stage open information extraction method for the defense technology field is proposed, which uses a sequence labeling algorithm based on a pretrained language model to extract predicates and a multihead attention mechanism to learn to predict argument boundaries. In the first stage, the input sentence is converted into the sequence <[CLS], input sentence, [SEP]>, which is encoded by a pretrained language model to obtain its hidden-state representation. Based on this representation, a conditional random field (CRF) layer predicts the positions of the predicates, i.e., the BIO labels of the words. In the second stage, the predicates predicted in the first stage are concatenated with the original sentence to form the sequence <[CLS], predicate, [SEP], input sentence, [SEP]>, which is again encoded by a pretrained language model to obtain its hidden-state representation. This representation is then fed to a multihead pointer network to predict the positions of the arguments, and the predicted positions are compared with the gold positions to compute the cross-entropy loss. Finally, the predicates and arguments predicted by the two models are combined to obtain the complete triplet.[Results] Extensive experiments conducted on the self-built annotated dataset in the defense technology field reveal the following. (1) In predicate extraction, our method achieves a 3.92% improvement in F1 over LSTM-based methods and more than a 10% improvement over syntactic analysis methods. (2) In argument extraction, our method achieves a considerable improvement of more than 16% in F1 over LSTM-based methods and about 11% over the BERT+CRF method.[Conclusions] The proposed two-stage open information extraction method can handle overlapping arguments and long-span entities, addressing the shortcomings of existing open information extraction methods. Extensive experimental analysis on the self-built annotated dataset proves the effectiveness of the proposed method.
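A hedged sketch of the second-stage idea described above: a multihead-attention pointer head that scores every token as a start/end boundary for a few argument slots, operating on encoder hidden states. The class name, dimensions, and number of argument slots are assumptions, not the paper's architecture; in practice the encoder would be a pretrained language model over "[CLS] predicate [SEP] sentence [SEP]".

```python
import torch
import torch.nn as nn

class ArgumentPointer(nn.Module):
    """Illustrative pointer head: multi-head attention produces one context vector
    per argument slot, and each token is scored as a start/end boundary."""
    def __init__(self, hidden=768, n_args=3, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.arg_queries = nn.Parameter(torch.randn(n_args, hidden))  # one query per slot
        self.start_proj = nn.Linear(hidden, hidden)
        self.end_proj = nn.Linear(hidden, hidden)

    def forward(self, enc):                       # enc: (batch, seq_len, hidden)
        q = self.arg_queries.unsqueeze(0).expand(enc.size(0), -1, -1)
        ctx, _ = self.attn(q, enc, enc)           # (batch, n_args, hidden)
        start_logits = torch.einsum("bah,bth->bat", self.start_proj(ctx), enc)
        end_logits = torch.einsum("bah,bth->bat", self.end_proj(ctx), enc)
        return start_logits, end_logits           # cross-entropy vs. gold positions

enc = torch.randn(2, 32, 768)                     # stand-in for the PLM hidden states
starts, ends = ArgumentPointer()(enc)
print(starts.shape)                               # torch.Size([2, 3, 32])
```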
  • BIG DATA
    YANG Bo, QIU Lei, WU Shu
    Journal of Tsinghua University(Science and Technology). 2023, 63(9): 1339-1349. https://doi.org/10.16511/j.cnki.qhdxxb.2023.22.030
    Abstract (1106) PDF (434)   Knowledge map   Save CSCD(1)
[Objective] Collaborative filtering algorithms are widely used in recommendation systems to recommend items of interest to a user based on the historical behavior of similar users. Recently, collaborative filtering algorithms based on graph neural networks have become a hot research topic. A graph-based collaborative filtering model usually encodes the interactions between users and items as a bipartite graph, and high-order connectivity modeling of the bipartite graph can capture the hidden relationships between users and items. However, this bipartite graph does not explicitly capture the similarity relationships between users or between items. Additionally, the sparsity of the bipartite graph causes the model to depend heavily on high-order connectivity. [Methods] Herein, a collaborative filtering model based on a heterogeneous graph convolutional neural network is proposed that explicitly encodes the similarities between users and between items into the graph structure, so that the interactions among users and items are modeled as a heterogeneous graph. The heterogeneous graph structure allows the similarities between users and between items to be captured directly, reducing the need for high-order connectivity and alleviating the sparsity problem of the bipartite graph. [Results] We conducted experiments on four typical datasets and compared the results with four typical methods. The results show that our model outperforms traditional collaborative filtering models and existing graph neural network models. Moreover, the model obtains good results under different types of edges, different similarity measures, and different similarity thresholds. [Conclusions] Our model explicitly encodes the similarities between users and between items into the heterogeneous graph structure as edges, so that the model can directly learn these similarities during training to obtain the embeddings of users and items. The proposed model alleviates the sparsity and high-order connectivity modeling problems of bipartite graphs, and it can fully capture the interactions between users and items through low-order connectivity in the graph.
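The snippet below is a minimal sketch of how such a heterogeneous graph can be assembled from a user-item interaction matrix: the bipartite interaction edges are kept, and user-user and item-item similarity edges above a threshold are added explicitly. The cosine measure and the thresholds are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def cosine_sim(M):
    """Row-wise cosine similarity matrix."""
    M = M / (np.linalg.norm(M, axis=1, keepdims=True) + 1e-12)
    return M @ M.T

def build_hetero_adjacency(R, user_thr=0.3, item_thr=0.3):
    """Adjacency of a heterogeneous graph from a user-item interaction matrix R
    (n_users x n_items): bipartite edges plus explicit similarity edges."""
    uu, ii = cosine_sim(R), cosine_sim(R.T)
    uu[uu < user_thr] = 0.0
    ii[ii < item_thr] = 0.0
    np.fill_diagonal(uu, 0.0)
    np.fill_diagonal(ii, 0.0)

    n_u, n_i = R.shape
    A = np.zeros((n_u + n_i, n_u + n_i))
    A[:n_u, :n_u] = uu                  # user-user similarity edges
    A[n_u:, n_u:] = ii                  # item-item similarity edges
    A[:n_u, n_u:] = R                   # user-item interaction edges
    A[n_u:, :n_u] = R.T
    return A

R = (np.random.default_rng(1).random((5, 8)) > 0.7).astype(float)
print(build_hetero_adjacency(R).shape)  # (13, 13)
```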
  • LOW-CARBON TRANSPORTATION & GREEN DEVELOPMENT
    WANG Yue, YAO Enjian, HAO He
    Journal of Tsinghua University(Science and Technology). 2023, 63(11): 1741-1749. https://doi.org/10.16511/j.cnki.qhdxxb.2023.26.024
    Abstract (1103) PDF (420) HTML (8)   Knowledge map   Save CSCD(2)
[Objective] Optimising travel structure, improving travel efficiency, and reducing transport carbon emissions are essential paths to green and low-carbon transport development. Research into fine-grained carbon management has received much attention in recent years. However, its implementation is complex, and putting a price on estimated carbon tends to elicit negative feelings from travellers. [Methods] Under the concept of mobility as a service (MaaS), a provider can offer end-to-end travel by combining multiple transport modes, including roads and public transport as well as many new forms of transportation. The service provider can therefore make flexible price adjustments across modes and sections within a single trip. Consequently, this paper proposes a low-carbon-oriented pricing strategy for the service provider. From the perspectives of the MaaS provider, travellers, and the environment, we propose a multi-objective optimisation model whose objectives are maximising the service provider's revenue and minimising network travel time and transportation network carbon emissions. The model is a bi-level programming model: the upper level searches for the decision variables used to evaluate the objective functions, and the lower level is the joint mode and route choice process together with traffic equilibrium assignment in a multi-modal transportation network. In this model, travellers' joint choice of mode and route depends on the upper-level decision variables. To solve the optimisation problem, the reference-point-based non-dominated sorting genetic algorithm (NSGA-Ⅲ) and the method of successive averages (MSA) are introduced. [Results] The case study was conducted on an example network with 1 origin-destination pair, 16 sections across 3 traffic modes (car, bus, and metro), and 6 nodes. Three representative strategies from the Pareto solutions were selected: optimising service provider benefits (OP-S), optimising network travel time (OP-T), and optimising transportation carbon emissions (OP-C). The original (OR) state was also presented as the baseline. The results showed that travel prices increased significantly under OP-S, which was unfriendly to travellers. In contrast, OP-T and OP-C were metro-friendly and public-transport-friendly strategies, respectively. Compared with the OR state, service benefits and carbon emissions were both optimised, meaning that the service provider could achieve emission reductions in multi-modal transport networks while ensuring its own profitability through rationalised service pricing. The traffic volume analysis also showed that the service provider could optimise the network travel mode structure, thereby reducing road congestion and increasing the share of public transport. By comparing the optimisation strategies under different demands, we found that as travel demand increased, the service provider's benefits continued to grow (especially under OP-S). Although traffic carbon emissions increased, the optimisations could always reduce the traffic carbon emissions of the system. [Conclusions] This paper validates the feasibility of travel service pricing strategies in multi-modal network traffic optimisation and low-carbon transport development. 
Service providers should not only seek to maximise their own revenue but also take into account the cost of travel and its impact on the transport environment and take responsibility for the coordination and reduction of transport system emissions. This paper identifies the profitability and responsibilities of travel service providers in the green and low-carbon development of transport and provides a basis for service pricing strategies.
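For readers unfamiliar with the lower-level solver mentioned above, the following is a toy sketch of the method of successive averages applied to a logit-based loading over a few travel options; the cost function, demand, and parameters are assumptions rather than the paper's network.

```python
import numpy as np

def msa_assignment(demand, cost_fn, n_options, iters=50):
    """Method of successive averages: repeatedly load demand onto options via a
    logit split of current costs and average the resulting flows."""
    flows = np.full(n_options, demand / n_options)
    for k in range(1, iters + 1):
        costs = cost_fn(flows)
        p = np.exp(-costs)               # logit-style auxiliary loading
        p /= p.sum()
        aux = demand * p
        flows += (aux - flows) / k       # successive-averages step
    return flows

# Toy example: 3 options whose costs rise with congestion (BPR-style cost).
free_flow = np.array([10.0, 12.0, 15.0])
capacity = np.array([300.0, 500.0, 800.0])
cost = lambda f: free_flow * (1 + 0.15 * (f / capacity) ** 4)
print(msa_assignment(1000.0, cost, 3))
```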
  • Review
    XU Pengfei, CHEN Meiya, KAI Yan, WANG Zipeng, LI Xinyu, WAN Gang, WANG Yanjie
    Journal of Tsinghua University(Science and Technology). 2023, 63(7): 1032-1040. https://doi.org/10.16511/j.cnki.qhdxxb.2023.26.018
    Abstract (1098) PDF (443) HTML (7)   Knowledge map   Save CSCD(4)
    [Significance] China uses a large amount of hydropower, and the safety of hydropower dams is related to the safety of people's lives, properties, and the national economy. Therefore, regular inspection of dam defects in large hydropower plants is vital to ensure their safe operation. Most of the common dam defects, such as cracks and leakage, originate from the surface of the structure and can affect the service life of the dams. In recent years, remotely operated vehicles (ROVs) have been used for the underwater inspection of dam defects in hydropower plants, as they can mitigate many disadvantages associated with manual inspections while improving detection accuracy and efficiency. [Progress] Thus, we explore the environmental conditions of dams and the main content of dam defect inspection in hydropower plants and review the research on ROV application for underwater inspection in large hydropower dams. We find that different sensors can be combined with ROVs to inspect large hydropower dams underwater according to detection and operation needs. The method can achieve intelligent mobile inspection and remote control of dam operation safety, automatically identify dam defect characteristics, and store shore-station interactive information. At present, ROVs are less used for inspecting dam defects in large hydropower plants but are widely used in fields such as deep-sea exploration, undersea operations, and rescue assistance. The use of ROVs for crack and leakage inspection in hydropower plants has tremendous advantages. The research on using ROVs for the intelligent inspection of other structures has certain implications for developing ROVs for the intelligent underwater inspection of large hydropower dams. We analyze the progress of ROV technology in domestic and international research on hydropower engineering in terms of the overall technology, underwater absorber, power system, inspection technology, underwater positioning, and control system. Moreover, we explore the modular design and overall scale optimization of ROVs for underwater inspection in large hydropower dams, with the design objectives of lightweight, high stability, and high anti-current and anti-disturbance capability. Thrusters with high propulsion ratios have been developed to ensure high ROV power. Adsorbers have been added to the ROV systems to control the hovering of ROVs, which can also improve their underwater anti-disturbance ability to ensure stable detection and operation. Acoustic-optical inspection technology has been proposed to improve detection accuracy, and intelligent algorithms have been used for defect identification and image post-processing. Regarding underwater positioning and control systems, a complementary approach combining information from multiple sensors has been adopted, and the dam defect inspection is validated to improve the operational capability of the ROV movement and inspection. [Conclusions and Prospects] The use of ROVs for underwater inspection in large hydropower dams has major advantages in targeting cracks and other dam defects, and the research on the intelligent inspection of hydropower dams opens up a wide range of prospects.
  • AEROSPACE ENGINEERING
    CHEN Zhongcan, ZHANG Kai, LI Feng, ZHAO Yue, WU Jianhui, HE Qilian, CHEN Min
    Journal of Tsinghua University(Science and Technology). 2024, 64(2): 318-336. https://doi.org/10.16511/j.cnki.qhdxxb.2023.26.047
    Abstract (1066) PDF (384) HTML (15)   Knowledge map   Save CSCD(4)
[Significance] Aerospace vehicles have undergone significant changes in aerodynamic shape, flight speed, flight environment, and flight duration compared with conventional flight vehicles. They must withstand harsh aerodynamic thermal environments for long durations and maintain a sharp leading-edge shape with a high lift-to-drag ratio, imposing extremely stringent requirements on the temperature resistance, durability, structural efficiency, and reliability of the thermal protection system. Traditional thermal protection depends largely on passive methods such as heat insulation, heat sinks, and radiation heat dissipation. Although the thermal protection performance of these technologies has improved, they are restricted by several constraints, such as ensuring safety under extremely high heat flux and ultrahigh temperature while also meeting requirements for structural stability, long-term operation, light weight, and reusability. Thus, a new active thermal protection technology is necessary. In this context, transpiration cooling offers high thermal efficiency without requiring any change to the vehicle's profile and has been widely considered a promising active thermal protection technology. However, when transpiration cooling is used for thermal protection of a flight vehicle, challenges arise concerning system complexity, the mismatch between coolant supply and demand, unstable operating control, and the development of high-precision prediction models. [Progress] Research on transpiration cooling has primarily focused on quick performance evaluation, numerical simulation of flow and heat transfer, evaluation of cooling mechanisms and performance, development of optimal control algorithms, and optimization of structural forms, and it has yielded beneficial results. However, several fundamental scientific issues need to be urgently addressed before the technology can be fully applied in aerospace vehicles. In numerical simulation, the accuracy and adaptability of the heat and mass transfer model should be improved. Most existing studies mathematically described and solved the physical process of heat and mass transfer in porous media at the macroscale, but some parameters governing phase-change heat and mass transfer (such as the evaporation/condensation coefficient and the fluid-solid convection heat transfer coefficient) that affect the model's accuracy had to be calibrated through experiments, and such calibration was only partially successful. Most existing models assumed that the temperatures of the porous medium, liquid phase, and gas phase were equal. Although a few models explored the nonequilibrium effect between the porous medium and the fluid, they did not consider the nonequilibrium effect between the gas and liquid phases. There were few flight experiments, and a large gap existed between ground experimental tests and practical flight conditions. Furthermore, extreme effects related to high-temperature, real, and rarefied gases and shock wave/boundary layer interference during high-speed flight could not be effectively reproduced on the ground. Moreover, there was a lack of experimental data that could be used to verify the accuracy of the heat and mass transfer model. The experimental test methods were relatively simple, and the flow and heat transfer of the liquid inside the porous medium could not be observed. 
It was challenging to effectively obtain the boundary layer flow law of the liquid when it entered the high-speed mainstream flow from the porous medium. In terms of control strategy, the present research on transpiration cooling control systems lacked a transient simplified mathematical model that could be quickly established, particularly for liquid phase change transpiration cooling with the multiphase flow and phase change process. Simultaneously, there were few transpiration cooling control systems with practical engineering values based on modern control theory, which made it difficult to achieve optimal performance in practical engineering applications. Some adaptive and self-driven transpiration cooling systems had been proposed as new forms of transpiration cooling structures; however, they were still at the mechanism verification stage, and the engineering application effect needed to be verified. [Conclusions and Prospects] Follow-up research will focus on the micro/mesoscale fine numerical calculation model, advanced visual experimental testing methods, rapid response-precise control strategies, self-driven and adaptive structural engineering systems, and combined active and passive thermal protection.
  • COMPUTER SCIENCE AND TECHNOLOGY
    ZHAO Chuanjun, WU Meiling, SHEN Lihua, SHANGGUAN Xuekui, WANG Yanjie, LI Jie, WANG Suge, LI Deyu
    Journal of Tsinghua University(Science and Technology). 2023, 63(9): 1380-1389. https://doi.org/10.16511/j.cnki.qhdxxb.2023.21.012
    Abstract (1065) PDF (392)   Knowledge map   Save
[Objective] Deep learning models for text sentiment analysis, such as recurrent neural networks, often have many parameters and require a large amount of high-quality labeled training data to be trained and optimized effectively. However, obtaining domain-specific, high-quality sentiment-labeled data is challenging in practical applications. This study proposes a cross-domain text sentiment classification method based on syntactic structure transfer and domain fusion (SSTDF) to address domain-invariant feature learning and the measurement of distribution distance. The method effectively alleviates the dependence on domain-specific annotated data caused by differences in data distribution among domains. [Methods] A method combining SSTDF is proposed to solve the cross-domain sentiment classification problem. Dependency syntactic features are introduced into the recurrent neural network for syntactic structure transfer, yielding a transferable dependency-syntactic recurrent neural network model. A parameter transfer strategy is employed to transfer syntactic structure information across domains efficiently in support of sentiment transfer. The conditional maximum mean discrepancy distance metric is used in domain fusion to quantify the distribution differences between the source and target domains and to refine the cross-domain same-category distance information. By constraining the distributions of the source and target domains, domain-invariant features are effectively extracted to maximize the sharing of sentiment information between the domains. We use a joint optimization and training approach for cross-domain sentiment classification: the sentiment classification losses of the source and target domains are minimized, and the fusion loss is fully considered in the joint optimization. Hence, the generalization performance of the model and the classification accuracy of the cross-domain sentiment classification task are considerably improved. [Results] The dataset used in this study is the Amazon English online review sentiment classification dataset, which has been widely used in cross-domain sentiment classification studies; it contains four domains—B (Books), D (DVD), E (Electronics), and K (Kitchen)—each with 1 000 positive and negative reviews. The experimental results show that the accuracy of the SSTDF method is higher than that of the baseline methods, achieving 0.844, 0.830, and 0.837 for average accuracy, recall, and F1, respectively. Fine-tuning allows fast convergence of the network, thereby improving its transfer efficiency. [Conclusions] We used deep transfer learning to solve cross-domain text sentiment classification from the perspective of cross-domain syntactic structure consistency learning. A recurrent neural network model that integrates syntactic structure information is used, and a minimum-domain-distance constraint is added to the syntactic structure transfer process so that the source and target domains stay as close as possible during learning. The effectiveness of the proposed method is verified experimentally. The next step is to increase the number of experimental and neutral samples to validate the proposed method on a larger dataset. 
Furthermore, a more fine-grained aspect-level cross-domain sentiment analysis will be attempted in the future.
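As a point of reference for the distance metric mentioned above, here is a small sketch of a class-conditional maximum mean discrepancy with an RBF kernel; the estimator details, kernel bandwidth, and toy data are assumptions and may differ from the paper's exact formulation.

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Squared MMD between samples X and Y with an RBF kernel (biased estimate)."""
    def k(A, B):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

def conditional_mmd2(Xs, ys, Xt, yt, gamma=1.0):
    """Class-conditional variant: average the MMD computed per shared class."""
    classes = np.intersect1d(np.unique(ys), np.unique(yt))
    vals = [rbf_mmd2(Xs[ys == c], Xt[yt == c], gamma) for c in classes]
    return float(np.mean(vals))

rng = np.random.default_rng(0)
Xs, ys = rng.normal(0.0, 1, (40, 8)), rng.integers(0, 2, 40)   # source features/labels
Xt, yt = rng.normal(0.5, 1, (40, 8)), rng.integers(0, 2, 40)   # target features/labels
print(conditional_mmd2(Xs, ys, Xt, yt))
```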
  • HYDRAULIC ENGINEERING
    GUO Shiyuan, MA Weizhi, LU Ruilin, LIU Jinlong, YANG Zhigang, WANG Zhongjing, ZHANG Min
    Journal of Tsinghua University(Science and Technology). 2023, 63(12): 1924-1934. https://doi.org/10.16511/j.cnki.qhdxxb.2023.21.005
    Abstract (1058) PDF (388) HTML (17)   Knowledge map   Save CSCD(1)
[Objective] Water discharge prediction in canals under complex conditions is a fundamental problem with prominent practical significance for improving farmland irrigation efficiency, conserving water resources, and reducing costs. The state-of-the-art solution is to establish nonlinear partial differential equations solved by numerical methods, whose time cost grows exponentially with the fineness of the spatiotemporal discretization. Moreover, each time step depends on the result of the previous one, so the calculation cannot be parallelized, which forces a tradeoff between accuracy and efficiency. In actual irrigation areas, the control of gate openings in canals relies primarily on human experience, which involves an extremely long feedback process. Therefore, it is challenging to rely on human experience and numerical methods when multiple gate changes are required. The rapid development of artificial intelligence technologies has created new opportunities for modernizing conventional industries. In this study, the input and output of the water discharge prediction task are well defined, which corresponds to the "regression" problem, one of the two fundamental problem types that neural networks handle well. This study therefore presents new insights into leveraging a neural network to solve the water discharge prediction problem end to end. The neural network only needs to be trained once, after which multiple predictions can be obtained with high efficiency, overcoming the extremely high time cost of conventional methods.[Methods] Based on the Internet-of-Water concept of "real-time perception, water-information interconnection, process tracking, and intelligent processing", this study introduces a novel approach for water discharge prediction. First, we investigate the sequence features of upstream and downstream gate control in the canal and introduce static features of the gates and the canal. Second, we propose a canal discharge prediction method based on a long short-term memory (LSTM) neural network, whose gating mechanism allows better modeling and prediction of problems with sequential information. Feature discretization and normalization are applied to the static features to improve the generalization ability of the model on unseen data. Layer normalization is performed on the output of the LSTM network to shift the output distribution into the unsaturated region of the activation function, making the network more sensitive to inputs and outputs and accelerating its convergence.[Results] The comparative experiments show the following: 1) The proposed model completes the prediction task with an accuracy rate exceeding 97% in every canal segment, significantly better than all baselines, confirming the value of the hidden sequence features inside the canal and the gating mechanism of the LSTM network. 2) Under normal circumstances, introducing static features as part of the model's input improves prediction performance. 3) The proposed model is robust: it learns and predicts well without requiring large amounts of training data, which is extremely useful when data are scarce and when the model must be migrated to other canals. 
4) Compared to the conventional numerical calculation method, the proposed model demonstrates 308 times higher prediction efficiency, reducing the prediction time from 950 h to about 3 h on 100,000 pieces of data.[Conclusions] This study verifies the feasibility of artificial intelligence-based methods in improving the conventional canal discharge prediction problem, achieves a win-win situation between accuracy and efficiency through a reasonably designed deep learning model, and provides a new idea for applying artificial intelligence-based methods in solving hydraulic problems.
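A minimal sketch of the kind of model described above, assuming hypothetical feature dimensions: an LSTM over gate/flow sequences, layer normalization of its last hidden state, and concatenation with normalized static canal features before a regression head. This is illustrative only, not the study's architecture or hyperparameters.

```python
import torch
import torch.nn as nn

class CanalDischargeLSTM(nn.Module):
    """Illustrative LSTM regressor for canal discharge prediction."""
    def __init__(self, seq_dim=4, static_dim=6, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(seq_dim, hidden, batch_first=True)
        self.norm = nn.LayerNorm(hidden)                 # layer norm on the LSTM output
        self.head = nn.Sequential(nn.Linear(hidden + static_dim, 32),
                                  nn.ReLU(), nn.Linear(32, 1))

    def forward(self, seq, static):
        out, _ = self.lstm(seq)                          # (batch, time, hidden)
        last = self.norm(out[:, -1, :])                  # last time step, normalized
        x = torch.cat([last, static], dim=-1)
        return self.head(x).squeeze(-1)                  # predicted discharge

model = CanalDischargeLSTM()
seq = torch.randn(8, 24, 4)        # 24 time steps of gate/flow observations (toy)
static = torch.randn(8, 6)         # discretized + normalized static features (toy)
print(model(seq, static).shape)    # torch.Size([8])
```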
  • COMPUTER SCIENCE AND TECHNOLOGY
    JIA Fan, KANG Shuya, JIANG Weiqiang, WANG Guangtao
    Journal of Tsinghua University(Science and Technology). 2023, 63(9): 1399-1407. https://doi.org/10.16511/j.cnki.qhdxxb.2023.21.007
    Abstract (1047) PDF (402)   Knowledge map   Save CSCD(1)
[Objective] In recent years, the number of publicly disclosed vulnerabilities has increased, and software security personnel and vulnerability enthusiasts find it increasingly difficult to locate the vulnerability information they are interested in. A recommendation algorithm can provide personalized vulnerability suggestions to help users obtain valuable vulnerability information efficiently. However, existing vulnerability-related recommendation systems generally suffer from problems such as one-sided analysis, complex implementation, high expertise requirements, and data privacy concerns, and research that directly treats vulnerabilities as recommendation items is scarce.[Methods] This paper selects the vulnerability itself as the recommendation item, collects data from public datasets, and adopts a simple and efficient recommendation algorithm for personalized vulnerability recommendation. As a classical recommendation model, collaborative filtering is widely used and computationally efficient. However, the user–vulnerability interaction matrix is sparser than the interaction matrices analyzed by classical recommendation models, which seriously degrades the performance of collaborative filtering. To solve this problem, this paper introduces a vulnerability similarity calculation that comprehensively considers 13 features, such as vulnerability type, severity, and description text, and integrates it into a content-based recommendation algorithm, emphasizing the general connections between vulnerabilities. By finding the vulnerabilities most similar to each vulnerability the target user has interacted with, the algorithm assembles the list of vulnerabilities with the highest recommendation value and recommends it to the user. The algorithm also accounts for the characteristics of personal users and product users and combines a labeling mechanism, finally forming a similarity-based multi-user vulnerability recommendation algorithm that effectively mitigates the sparsity and cold-start problems of recommendation algorithms.[Results] Experiments on public datasets show that 1) the similarity-based content recommendation algorithm achieves better accuracy than the traditional collaborative filtering algorithm for all types of users. In particular, the precision, recall, and F1 score of the recommendations for product users increase by 58.86%, 58.53%, and 0.586 1, respectively. 2) The recommendation list of the similarity-based content recommendation algorithm is more effective and more consistent with users' vulnerability preferences. For product users, the normalized discounted cumulative gain of the recommendation list increases by 0.596 5. 3) The result coverage of the similarity-based content recommendation algorithm is much higher than that of the collaborative filtering algorithm. Among human users, its result coverage is 7.6 times that of the original interest data, which shows that the recommendation algorithm successfully surfaces many vulnerabilities that users have not previously interacted with.[Conclusions] This paper treats vulnerabilities as recommendation items for multiple types of users and proposes a similarity-based multi-user vulnerability recommendation algorithm. 
The algorithm introduces a vulnerability similarity calculation method and integrates it into a content-based recommendation algorithm. It addresses the high sparsity of the user–vulnerability interaction matrix and the cold-start problem of user-based collaborative filtering, effectively improving the accuracy and effectiveness of the recommendations.
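The core recommendation step described above can be sketched as follows; the feature encoding of the 13 attributes, the cosine measure, and the toy data are assumptions rather than the paper's implementation.

```python
import numpy as np

def recommend(vuln_feats, interacted_ids, top_n=5):
    """Content-based sketch: for each vulnerability the user interacted with, find
    its most similar vulnerabilities (cosine over combined feature vectors),
    accumulate the scores, and return the top-N unseen ones."""
    F = vuln_feats / (np.linalg.norm(vuln_feats, axis=1, keepdims=True) + 1e-12)
    sim = F @ F.T                                   # pairwise cosine similarity
    scores = sim[interacted_ids].sum(axis=0)        # aggregate over the user's history
    scores[interacted_ids] = -np.inf                # do not re-recommend seen items
    return np.argsort(scores)[::-1][:top_n]

rng = np.random.default_rng(2)
features = rng.random((100, 13))    # e.g., type, severity, description embedding, ...
print(recommend(features, interacted_ids=[3, 17, 42]))
```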
  • MECHANICAL ENGINEERING
    YAO Ming, DUAN Jinhao, SHAO Zhufeng, YUAN Shaolun, SU Yunzhou
    Journal of Tsinghua University(Science and Technology). 2024, 64(1): 117-129. https://doi.org/10.16511/j.cnki.qhdxxb.2023.21.022
    Abstract (1043) PDF (312) HTML (6)   Knowledge map   Save CSCD(1)
[Objective] The automated guided vehicle (AGV) forklift is an important material transportation device in industry. Its positioning and path-tracking accuracy is an important basis for improving material transportation efficiency, factory automation, and intelligence. This paper therefore takes a single-steering-wheel AGV forklift in an indoor structured environment of the pharmaceutical industry as its object, realizes reflector-based lidar positioning using the density-based spatial clustering of applications with noise (DBSCAN) and fast iterative closest point (FICP) algorithms, and designs a proportional-integral (PI) controller to address the path-tracking problem of the AGV forklift.[Methods] First, the kinematic characteristics of the single-steering-wheel AGV forklift are analyzed, and its kinematic equations and state-space equations are established. The DBSCAN and FICP algorithms are then used to implement a reflector-based lidar positioning method for accurate positioning. Moreover, a distance-based outlier elimination rule is proposed to address the interference of outliers in the positioning process, ensuring the stability of the positioning results and the robustness of the algorithm. The Kalman filter is used to fuse the measurements of the inertial measurement unit (IMU) and the angle sensor to improve the accuracy of the lidar positioning of the AGV forklift. For the path-tracking problem, the position error and attitude error are established from geometric relationships for the two core path types, straight lines and arcs, and a PI controller is designed to realize path tracking. Because curvature is discontinuous where a constant-curvature arc joins a straight-line path, an arc path based on a third-order Bézier curve is designed. Furthermore, according to the limitations of the AGV forklift during arc motion, the parameters of the Bézier curve are analyzed and optimized to avoid the loss of path-tracking accuracy caused by abrupt changes in path curvature.[Results] Experimental verification showed that the lidar positioning algorithm based on DBSCAN and FICP could achieve ±3 mm positioning accuracy, and stable AGV forklift positioning could be achieved when combined with the outlier elimination rules. Furthermore, the Kalman filter-based fusion of IMU and angle sensor data yielded accurate AGV forklift positioning. The improved Bézier-curve arc path reduced the arc path-tracking error by about 72% compared with the constant-curvature arc path. The AGV's position and attitude errors were regulated by the PI controller, which kept the dynamic tracking accuracy within 25 mm. Furthermore, the repeated positioning accuracy at the work site reached ±12 mm, meeting the expected design requirements.[Conclusions] This paper studies the lidar positioning and path-tracking technology of a single-steering-wheel AGV forklift in an indoor structured environment. An accurate and stable lidar positioning algorithm based on DBSCAN and FICP is realized by introducing outlier elimination rules and the Kalman filter. Path tracking of the AGV forklift is realized with the PI controller, and the tracking accuracy of the arc path is improved using the Bézier curve. 
Finally, the positioning accuracy, path-tracking accuracy, and repeated positioning accuracy of the work site all met the expected design requirements.
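To make the Bézier-based smoothing idea concrete, the following small sketch evaluates a cubic Bézier curve and its curvature numerically, which is how one can check that a blend between a straight segment and an arc avoids abrupt curvature jumps; the control points are assumed toy values, not the paper's optimized parameters.

```python
import numpy as np

def bezier(P, t):
    """Points on a cubic Bézier curve with control points P (4 x 2)."""
    t = np.asarray(t)[:, None]
    return ((1 - t) ** 3 * P[0] + 3 * (1 - t) ** 2 * t * P[1]
            + 3 * (1 - t) * t ** 2 * P[2] + t ** 3 * P[3])

def curvature(P, t, h=1e-4):
    """Numerical curvature along the curve via central differences."""
    p0, p1, p2 = bezier(P, t - h), bezier(P, t), bezier(P, t + h)
    d1 = (p2 - p0) / (2 * h)
    d2 = (p2 - 2 * p1 + p0) / h ** 2
    num = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])
    return num / (np.linalg.norm(d1, axis=1) ** 3 + 1e-12)

# Assumed control points joining a straight approach to a turning arc.
P = np.array([[0.0, 0.0], [1.0, 0.0], [1.8, 0.4], [2.2, 1.2]])
t = np.linspace(0.01, 0.99, 50)
print(curvature(P, t).round(3))
```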
  • Review
    LI Yanzhi, DU Jiayu, WU Xinxin, SUN Libin, MIN Qi
    Journal of Tsinghua University(Science and Technology). 2023, 63(8): 1173-1183. https://doi.org/10.16511/j.cnki.qhdxxb.2023.25.004
    Abstract (1028) PDF (408) HTML (8)   Knowledge map   Save CSCD(3)
[Significance] Aiming at carbon neutrality, energy structure transformation and upgrading has become a trend for global energy system progress. Nuclear energy can effectively fill the power and heat supply gap during coal substitution. It has the advantages of a flexible layout, wide application, and insensitivity to climate change and the global market, which helps ensure national energy security. A heat pipe (HP) is a passive and efficient heat exchange element with a wide temperature range, stable and reliable performance, and high safety. It is widely applied in the aerospace, energy, and chemical industries, in solar collectors, in electronic cooling, and in other fields. HPs are irreplaceable in advanced nuclear energy, with multi-domain, multi-scale, and multi-section applications. Therefore, existing studies on HPs must be summarized for advanced nuclear technology.[Progress] According to operating temperature, HP applications in nuclear technology are classified into three parts: nuclear power/propulsion systems, nuclear safety facilities, and nuclear urban services. First, heat pipe-cooled reactors (HPRs) use alkali-metal high-temperature HPs to passively export the core heat, which offers inherent safety and ease of storage and transportation. Because of the long phase-transition process during startup and the incompletely understood alkali-metal flow and heat transfer behavior in the steady and transient states, the startup characteristics and heat transfer performance of alkali-metal high-temperature HPs have been the difficult part of HPR development. To meet different energy needs, HPR designs ranging from kilowatts to megawatts and the corresponding thermoelectric conversion schemes have been proposed. HPRs will have broad prospects in aerospace, ship power, deep-sea exploration, land-based power supplies, and other fields. Second, being passive, an HP is a good technical choice for safety facilities. In nuclear power plants, separated HPs have been applied to passive heat removal systems, passive emergency core cooling systems, passive containment cooling systems, and passive spent fuel pool cooling systems. In nuclear spacecraft cooling, an HP space radiator composed of an HP and a heat sink is a promising option, having good thermal properties, temperature conversion characteristics, environmental adaptability, anti-debris impact performance, and resistance to single-point failure. In thermonuclear reactors, HPs are also used in first-wall cooling. Third, HPs are mainly used in waste-heat recovery and low-temperature heat transfer to improve energy efficiency and safety in nuclear industry applications and urban services. Researchers have developed several desalination systems based on HPs and the waste heat from steam power plants and generators. District heating and nuclear power generation, hydrogen production, and heating triple-production systems are being promoted and have become popular in China. Finally, challenges in HP performance, adaptive design in HPRs, and HP operation and maintenance are discussed.[Conclusions and Prospects] The HP is well aligned with the advanced nuclear safety design concept. Currently, although HPs are widely used in nuclear power/propulsion systems and reactor safety facilities, their practical applications in the nuclear industry and urban services remain relatively scarce, and there is almost no participation in the intermediate temperature segment. 
Finally, we outline the prospects for advanced HP technology.
  • Research Article
    YU Hesheng, QI Haiying, TAN Zhongchao
    Journal of Tsinghua University(Science and Technology). 2023, 63(8): 1226-1235. https://doi.org/10.16511/j.cnki.qhdxxb.2023.25.017
    Abstract (1023) PDF (354) HTML (7)   Knowledge map   Save CSCD(2)
    [Significance] China's strategic goals of "carbon peaks" and "carbon neutrality" will have a significant impact on the country's economic and social development despite the challenges along the way. The energy and power industry is an important player in carbon dioxide emissions in China and the main battlefield for constructing new energy systems and initiating relevant industrial revolutions. Despite the increasing maturity of carbon capture, utilization, and storage(CCUS) technologies, their deployment faces strong resistance from the industry because of the high cost and energy consumption. For example, the cost of carbon capture alone ranges between 260 and 280 RMB/t, corresponding to an increase in utility cost of 57.51% to 93.38%, depending on the region. More importantly, the planet earth has a physical limit for carbon storage, and an alternative technical route is needed to achieve cost-effective zero-carbon emissions. Nonetheless, despite the importance of constructing new energy systems, China's energy resources determine that we will continue to rely on traditional fossil fuels for decades to come.[Progress] Therefore, this study analyzes the feasibility of state-of-the-art technologies, such as catalytic conversion, carbon material and hydrogen utilization, and hydrogen-fired power generation. This study proposes the use of coal, gasoline, natural gas, and biomass as chemicals rather than fuels. The "fuels" are first converted into hydrogen-carbon chemicals and then decomposed into elemental carbon and hydrogen by catalytic conversion. The resultant elemental carbon is upgraded into high-value carbon materials, such as carbon nanotubes, graphene, and carbon fibers, which can be used for battery production. Meanwhile, hydrogen is used for energy production through combustion and fuel cells. The batteries produced using carbon materials can also support decentralized energy and energy storage from power plants. Regarding hydrogen-based energy production, developed countries, such as the USA, and Japan, have developed hydrogen-fired power generation aimed at commercialization in 2030 or earlier. We also conduct a feasibility study by pilot testing and techno-economic analysis. State-of-the-art experimental studies show that the key technical elements include (1) the production of carbon-hydrogen feedstock from coal, which is ready for deployment to the market; (2) the catalytic decomposition of hydrogen-carbon, e.g., CH4 and C3H6, into carbon nanotube and hydrogen, which is proven feasible at the pilot scale but requires further research and development in catalysis and fluidized bed reactor system for upscaled production; (3) the separation and purification of downstream products for high-purity carbon materials and hydrogen, where catalytic removal or recycling is essential to the pure carbon product, and membrane separation needs to be developed for pure hydrogen production; and (4) the most challenging, but essential, technology is the hydrogen-based gas turbine for power generation, with pilot plants built in the USA, Australia, and China for testing with 5% to 10% of hydrogen. Nonetheless, only catalytic conversion of CH4 can provide the amount of hydrogen needed in a power plant in real time. Thus, we conducted a techno-economic analysis by retrofitting a natural gas-fired power plant, where part of the natural gas is converted into hydrogen and the hydrogen is mixed with the incoming natural gas for power generation. 
The proposed pathway has been proven to be economically feasible, provided all of the technologies are ready.[Conclusions and Prospects] In conclusion, we propose a novel pathway to efficient and clean utilization of fossil fuels as resources to produce high-efficiency, low-carbon, and low-cost hydrogen and high-value-added carbon materials, as well as zero-emission power generation. Admittedly, it takes decades to reach the final goal, but this pathway is expected to tackle the economic challenges to achieving the "carbon peaks" and "carbon neutrality" goals (or "double carbon" goals) of the energy and power industry.
  • SPECIAL SECTION: BIG DATA
    WU Houyue, LI Xianwei, ZHANG Shunxiang, ZHU Honghao, WANG Ting
    Journal of Tsinghua University(Science and Technology). 2024, 64(12): 1997-2006. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.027
    Abstract (1022) PDF (415) HTML (8)   Knowledge map   Save
    [Objective] The generation of adversarial samples in text represents a significant area of research in natural language processing. The process is employed to test the robustness of machine learning models and has gained widespread attention from scholars. Owing to the complex nature of Chinese semantics, generating Chinese adversarial samples remains a major challenge. Traditional methods for generating Chinese adversarial samples mainly involve word replacement, deletion/insertion, and word order adjustment. These methods often produce samples that are easily detectable and have low attack success rates, and thus, the methods struggle to balance attack effectiveness and semantic coherence. To address these limitations, this study introduces DiffuAdv, a novel method for generating Chinese adversarial samples. This approach enhances the generation process by simulating the data distribution during the adversarial attack phase. The gradient changes between adversarial and original samples are used as guiding conditions during the model's reverse diffusion phase in pre-training, resulting in the generation of more natural and effective adversarial samples. [Methods] DiffuAdv entails the introduction of diffusion models into the generation of adversarial samples to improve attack success rates while ensuring the naturalness of the generated text. This method utilizes a gradient-guided diffusion process, leveraging gradient information between original and adversarial samples as guiding conditions. It consists of two stages: forward diffusion and reverse diffusion. In the forward diffusion stage, noise is progressively added to the original data until a noise-dominated state is achieved. The reverse diffusion stage involves the reconstruction of samples, in which the gradient changes between adversarial and original samples are leveraged to maximize the adversarial objective. During the pre-training phase, data capture and feature learning occur under gradient guidance, with the aim of learning the data distribution of original samples and analyzing the deviations from adversarial samples. In the reverse diffusion generation phase, adversarial perturbations are constructed using gradients and integrated into the reverse diffusion process, ensuring that at each step of reverse diffusion, samples evolve toward greater adversarial effectiveness. To validate the effectiveness of the proposed method, extensive experiments are conducted across multiple datasets and various natural language processing tasks, and the performance of the method is compared with those of seven existing state-of-the-art methods. [Results] Compared with existing methods for generating Chinese adversarial samples, DiffuAdv demonstrates higher attack success rates across three tasks: text sentiment classification, causal relation extraction, and sentiment cause extraction. Ablation experiments confirm the effectiveness of using gradient changes between original and adversarial samples to guide the generation of adversarial samples and improve their quality. Perplexity (PPL) measurements indicate that the adversarial samples generated by DiffuAdv have an average PPL value of only 0.518, demonstrating that these samples are superior in rationality and readability compared with the samples generated by other methods. [Conclusions] DiffuAdv effectively generates high-quality adversarial samples that closely resemble real text in terms of fluency and naturalness. 
The adversarial samples produced by this method not only achieve high attack success rates but also exhibit strong robustness. The introduction of DiffuAdv enhances the research perspective on generating adversarial text samples and broadens the approaches for tasks such as text sentiment classification, causal relationship extraction, and emotion-cause pair extraction.
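Purely as a conceptual illustration of the gradient-guided reverse step described above, the snippet below nudges an ordinary denoising estimate along the gradient of an adversarial objective, in the spirit of classifier guidance. The function names, guidance scale, toy denoiser, and continuous-embedding representation are assumptions; DiffuAdv's actual update rule, schedules, and text handling are not reproduced here.

```python
import torch

def guided_reverse_step(x_t, t, denoiser, adv_loss, scale=0.5):
    """One conceptual reverse-diffusion step with adversarial gradient guidance."""
    x_t = x_t.detach().requires_grad_(True)
    loss = adv_loss(x_t, t)                      # e.g., victim-model loss to maximize
    grad = torch.autograd.grad(loss, x_t)[0]
    with torch.no_grad():
        x_prev = denoiser(x_t, t)                # standard denoised estimate
        return x_prev + scale * grad             # push toward adversarial directions

# Toy stand-ins operating on continuous token embeddings.
denoiser = lambda x, t: 0.9 * x
adv_loss = lambda x, t: (x ** 2).sum()
x = torch.randn(2, 16, 32)
print(guided_reverse_step(x, t=10, denoiser=denoiser, adv_loss=adv_loss).shape)
```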
  • SPECIAL SECTION: ROBOTICS
    JIANG Xiao, WANG Song, WU Dan
    Journal of Tsinghua University(Science and Technology). 2024, 64(10): 1677-1685. https://doi.org/10.16511/j.cnki.qhdxxb.2024.27.023
    Abstract (1015) PDF (397) HTML (8)   Knowledge map   Save CSCD(2)
    [Objective] Oblong holes are commonly used across various industries to improve fault tolerance and adjustment capabilities. However, their complex geometric characteristics pose significant challenges for vision detection and location algorithms in industrial applications, impacting their utilization in automatic assembly processes. [Methods] This research investigates a high-precision and robust vision segmentation and location algorithm tailored for oblong holes. First, the geometric features of oblong holes, which are symmetric but lack a simple analytical description, are analyzed. This complexity renders traditional imaging methods ineffective for accurate localization. The detection and segmentation of oblong hole features are conducted using a novel vision location algorithm that integrates deep learning with conventional image processing techniques. Specifically, the algorithm employs a sequential connection framework of YOLO and fully convolutional networks to achieve accurate localization. This framework first identifies the region of interest and then performs semantic segmentation. YOLO networks rapidly detect the region of interest, prioritizing areas where the oblong hole is prominently featured. Semantic segmentation is subsequently performed using fully convolutional networks. Afterward, a skeleton feature extraction method based on medial axis transformation is applied to precisely locate the oblong hole. This method effectively reduces the impact of shape errors from semantic segmentation, achieving subpixel accuracy. However, medial axis transformation may produce redundant lines owing to the presence of image artifacts, potentially leading to inaccuracies. To address this issue, principal component analysis is employed to approximate the center of the oblong hole, thereby minimizing errors. For further precision, a Hough transformation ellipse detection method is utilized to identify the central skeleton of the oblong hole, which is interpreted both as a line segment and a special ellipse. The center of this skeleton represents the center of the oblong hole. [Results] Experimental validation conducted in a specific robotics automatic assembly system confirms the effectiveness of the proposed algorithm. The robustness of the algorithm is further demonstrated through image sampling using camera hardware distinct from that used in the training dataset. Additionally, the impact of surface features and oblong hole shapes on the detection performance is analyzed. The experimental outcomes indicate the optimal performance of the algorithm on objects with nonreflective surfaces, with minimal effect from the shape of the oblong hole on accuracy. Despite potential deformations in segmentation output due to hardware variations, the oblong hole region degenerating location algorithm, based on medial axis transformation, accurately locates the center. The final location error is recorded at 1.05 pixels, which surpasses the accuracy achieved through the direct calculation of the center of gravity of the segmented region. These results underscore the substantial benefits of the algorithm in scenarios with varying hardware and object conditions, demonstrating its high accuracy and exceptional robustness. [Conclusions] By merging deep learning techniques with traditional image processing methods, the location tasks for diverse objects are effectively resolved. 
The extraction of highly nonlinear features through deep learning, followed by processing with traditional image methods incorporating prior geometric knowledge, enhances the robustness and accuracy of the algorithm, making it suitable for practical production applications.
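A simplified sketch of the skeleton-based post-processing step mentioned above: the binary segmentation mask is reduced to its medial-axis skeleton, and the hole center and long-axis direction are estimated by PCA over the skeleton pixels. This is a stand-in for the paper's skeleton extraction plus Hough refinement; the toy mask and all parameters are assumptions.

```python
import numpy as np
from skimage.morphology import medial_axis

def locate_oblong_center(mask):
    """Estimate the center and major-axis direction of an oblong hole region."""
    skel = medial_axis(mask.astype(bool))           # medial-axis skeleton of the mask
    ys, xs = np.nonzero(skel)
    pts = np.stack([xs, ys], axis=1).astype(float)
    center = pts.mean(axis=0)                       # PCA mean approximates the center
    cov = np.cov((pts - center).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major_axis = eigvecs[:, np.argmax(eigvals)]     # direction of the long axis
    return center, major_axis

# Toy mask with an elongated (oblong) region.
mask = np.zeros((60, 60), dtype=np.uint8)
mask[25:35, 10:50] = 1
center, axis = locate_oblong_center(mask)
print(center.round(2), axis.round(2))
```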
  • PUBLIC SAFETY
    CHEN Feiyu, SHEN Liangchang, FU Ming, SHEN Shifei, LI Yayun
    Journal of Tsinghua University(Science and Technology). 2023, 63(6): 900-909. https://doi.org/10.16511/j.cnki.qhdxxb.2023.22.020
    Abstract (1003) PDF (414)   Knowledge map   Save CSCD(2)
    [Objective] Numerical heat and moisture transfer model of clothing is a crucial tool for evaluating clothing protective performance, calculating body-environment heat and moisture transfer, and assessing human safety during cold exposure. Existing models primarily concentrate on conventional passive protective clothing (PPC). However, actively-heated clothing (AHC) remains poorly understood, with fiber research being the primary focus in previous studies, which cannot simulate dressing conditions of the human body. In this study, we developed a multilayer heat and moisture transfer model of AHC, which can be coupled with a human thermal response model.[Methods] First, based on a published model of PPC, the heat production and transfer mechanism of active heating technologies, including electrical heating, phase change material (PCM), and moisture-absorption heating, were considered. Accordingly, we developed a general model for AHC. Particularly, the heat production of electrical heating was calculated using system voltage, current, and efficiency, and that of PCM was calculated using the phase change speed ratio and enthalpy. For moisture-absorption heating, the heat production was obtained using the moisture-absorption and heat-generation curves of the fabric, calculated by applying the specific heat and temperature change ratio. Second, we specifically considered electrically-heated clothing (EHC), which is the most widely used in practical applications. Further, the model was improved for EHC considering the clothing's detailed layer structure and radiative and horizontal heat transfer. The clothing layer containing the heating pad was further divided into interlining, pad, and fabric layers to establish more realistic heat-transfer equations. The radiative heat transfer between two clothing layers was derived using the Stefan-Boltzmann law, as heat radiation is significant in EHC systems. The body segment containing the heat area was further divided into heated and nonheated zones, in which horizontal heat transfer was modeled to accurately calculate the local skin temperature.[Results] The model coupled with a published human thermal response model was validated with existing experiments with air temperatures ranging from -20℃ to 8℃. Moreover, the general model was validated with data from an EHC experiment at 8℃ and a PCM clothing experiment at 5℃. The errors of mean skin, core, and microclimate temperatures did not exceed 0.58℃, 0.16℃ and 1.59℃, respectively. The improved EHC model was validated with data from a series of experiments with air temperatures ranging from -20℃ to 0℃ and air velocities from 0 to 5 m/s. Considering the thermal response prediction, the errors of mean skin, local skin, and core temperatures did not surpass 0.20℃, 0.47℃, and 0.14℃, respectively. Moreover, considering clothing evaluation, the error of effective heating power was ~0.10 W.[Conclusions] The proposed model can be used to assess human thermal safety and clothing protective performance in cold exposure cases with AHC and serve as a reference for personal protection, emergency management, and protective equipment research in the field of public safety and environmental ergonomics.
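Two of the source terms described above can be written compactly; the sketch below uses the standard parallel-plate gray-surface form of the Stefan-Boltzmann exchange and the voltage-current-efficiency product for electrical heating. The emissivities, temperatures, and electrical values are assumed illustration numbers, not the paper's parameters.

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2*K^4)

def radiative_flux(T1, T2, eps1=0.9, eps2=0.9):
    """Net radiative heat flux (W/m^2) between two parallel gray clothing layers
    at absolute temperatures T1 and T2 (K)."""
    return SIGMA * (T1 ** 4 - T2 ** 4) / (1.0 / eps1 + 1.0 / eps2 - 1.0)

def electrical_heating(voltage, current, efficiency=0.85):
    """Heat production of an electrically heated pad from voltage, current, and efficiency."""
    return voltage * current * efficiency

print(round(radiative_flux(305.0, 278.0), 1), "W/m^2")
print(electrical_heating(7.4, 1.2), "W")
```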
  • Research Article
    CHEN Pu, TONG Jiejuan, LIU Tao, ZHANG Qinzhao, WANG Hong
    Journal of Tsinghua University(Science and Technology). 2023, 63(8): 1219-1225. https://doi.org/10.16511/j.cnki.qhdxxb.2022.25.017
    Abstract (988) PDF (397) HTML (8)   Knowledge map   Save
[Objective] The helium circulator of the high-temperature gas-cooled reactor (HTGR) is advanced core equipment independently developed by Tsinghua University and is highly important for normal reactor operation. A shutdown of the helium circulator leads to an emergency shutdown of the reactor, directly affecting the operation of the nuclear power plant and possibly causing safety problems. Therefore, it is necessary to evaluate the reliability of the helium circulator and study its preventive maintenance strategy to ensure high-quality operation of the HTGR demonstration project (HTR-PM).[Methods] First, we used the failure mode, effects and criticality analysis (FMECA) method to analyze the failure modes, causes, effects, and severity of the components of the helium circulator and to propose usage assurance recommendations. Since FMECA had not previously been performed on the HTGR, we referred to the national military standard to specify the severity of the helium circulator's failure consequences. Through FMECA, we also identified its key components, the parts that must be emphasized during the design and maintenance of the circulator. Then, we used generic component data to determine the failure rate of the circulator and the proportion of the failure rate attributable to each component. Finally, we used the reliability-centered maintenance analysis (RCMA) method to plan the preventive maintenance strategy of the circulator and put forward suggestions for the preventive maintenance plan. Preventive maintenance is mainly performed through condition monitoring, function tests, and similar measures that do not affect the normal operation of nuclear power plants. According to the RCMA, the preventive maintenance measures for the helium circulator mainly include condition-based maintenance (CBM), usage inspection, and function tests. CBM can be performed online, and the other preventive maintenance measures can be completed during overhauls; thus, these measures can effectively improve system availability and reduce financial losses. In addition, the maintenance interval is based mainly on the severity and failure-rate proportion of each component and the corresponding maintenance measures; more accurate maintenance intervals must be updated after monitoring data feedback is received.[Results] The calculated failure rate of the helium circulator was 0.18 times/year, which meets the design criterion that failure-induced shutdowns of the helium circulator should occur less than once a year. A more accurate failure-rate evaluation will require further updates after actual operating data are accumulated. The calculation results showed that the drive motor accounted for the largest proportion of the helium circulator's failure rate (88.57%), and the frequency converter accounted for 60.82% of the drive motor's failure rate. Therefore, the reliability of these components should be increased to improve that of the helium circulator. The reliability prediction results can provide a reference for improving the design, and hence the operational reliability, of the helium circulator.[Conclusions] The research process of this paper serves as a reference for conducting reliability analysis, improving design quality, and planning maintenance strategies for newly developed nuclear power equipment, and it can also provide insights for similar analyses of equipment in other nuclear power plants.
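The bookkeeping behind the failure-rate figures above amounts to summing component rates and reporting each component's share; a minimal sketch follows. The component names and split are hypothetical, chosen only so that the total matches the 0.18 failures/year figure quoted in the abstract.

```python
# Illustrative bookkeeping only: the component split is assumed.
component_rates = {          # failures per year
    "drive motor": 0.1594,   # ~88.6% of the total
    "bearings": 0.0110,
    "other": 0.0096,
}
total = sum(component_rates.values())
proportions = {name: rate / total for name, rate in component_rates.items()}

print(f"system failure rate = {total:.2f} /year")        # ~0.18, below the 1/year criterion
for name, p in proportions.items():
    print(f"{name}: {p:.1%} of failures")
```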
  • PUBLIC SAFETY
    XIAO Xingyu, MEI Shiyu, LIU Ruiqi, WANG Kuo, DENG Qing, HUANG Lida, YU Feng
    Journal of Tsinghua University(Science and Technology). 2023, 63(6): 994-1002. https://doi.org/10.16511/j.cnki.qhdxxb.2023.22.003
    Abstract (974) PDF (337)   Knowledge map   Save CSCD(1)
[Objective] The negative impact of the COVID-19 pandemic has hindered the development of urban agglomerations. Because of close geographical, transportation, and economic ties, COVID-19 is more likely to be transmitted repeatedly within urban agglomerations. This paper provides references for coordinating the development and security of urban agglomerations and building “resilient cities” in the context of the pandemic. By constructing a system of indicators for the urban development level and using an autoregressive integrated moving average (ARIMA) model to predict the urban development level in the absence of the pandemic, this study quantitatively assesses the impact of the pandemic on individual cities in the Beijing-Tianjin-Hebei urban agglomeration.[Methods] The study applied an ARIMA model to investigate the urban development mechanism of urban agglomerations in different stages of the pandemic. First, indicators were selected from multiple sources based on the collection and analysis of the literature, and their reliability was tested using Cronbach’s coefficients. Second, the indicators were assigned weights using a method integrating the analytic hierarchy process (AHP) and the entropy weight method. Third, the stages of the COVID-19 pandemic were divided based on monthly data collected from Weibo and other websites. Fourth, based on historical data and urban development trends before the pandemic, the ARIMA model was used to predict the urban development level without the effect of the pandemic. Finally, a comparative analysis was conducted between the predicted and actual values to quantitatively assess the impact of the pandemic on individual cities in the urban agglomeration.[Results] (1) In the context of the pandemic, the urban development level indicators of the three cities reached peak and trough values in the same month. (2) The degree of influence was less than 0 during the outbreak period and gradually decreased to a stable trough value. (3) The degree of influence was greater than 0 in the early stage of the recovery period and gradually decreased to less than 0 in the later stage until it reached the trough point.[Conclusions] This study shows that: (1) the COVID-19 pandemic in the central city of an urban agglomeration affects the formulation and implementation of the overall urban agglomeration development strategy; (2) the development patterns within the urban agglomeration converge because of the pandemic; and (3) cities are resilient and have a certain disaster-bearing capacity. To strengthen the construction of the Beijing-Tianjin-Hebei urban agglomeration, the paper suggests that the government should start from the economy, transportation, people’s livelihood, and disaster resilience to improve the urban development level.
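As a rough illustration of the counterfactual forecasting step, the sketch below fits an ARIMA model to pre-pandemic values of a composite development-level index and compares the forecast with observed values. The file name, column names, ARIMA order, and date ranges are assumptions for illustration, not the study's actual settings.

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical monthly composite index built from the weighted indicators (AHP + entropy weights).
series = pd.read_csv("development_index.csv", index_col="month", parse_dates=True)["index"]

pre_pandemic = series[:"2019-12"]                    # fit only on data before the outbreak
model = ARIMA(pre_pandemic, order=(1, 1, 1)).fit()   # the order would be selected by AIC/BIC in practice
counterfactual = model.forecast(steps=12)            # predicted level had the pandemic not occurred

# Degree of influence: observed value minus counterfactual (< 0 indicates a negative impact).
impact = series["2020-01":"2020-12"].values - counterfactual.values
```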
  • Special Section: Construction Management
    Hong ZHANG, Zhijun BI
    Journal of Tsinghua University(Science and Technology). 2025, 65(1): 1-11. https://doi.org/10.16511/j.cnki.qhdxxb.2024.22.038
    Abstract (956) PDF (844) HTML (211)   Knowledge map   Save

[Objective] The β coefficient is a critical indicator for stock sector investment, and its stability is essential for making informed future investment decisions based on historical data. The real estate sector, known for its high investment risks and stock price fluctuations, plays a crucial role in many investors' portfolios. Although there is a growing body of literature on the β coefficient of the real estate sector, research on its systematic calculation and stability remains limited. This paper analyzes the changes and stability of the β coefficient of the real estate sector, providing valuable insights for investors.[Methods] Through method screening, this paper uses the single index model to calculate the monthly and annual β coefficients of the Chinese A-share real estate sector from 2013 to 2022. After confirming data stationarity, daily data are processed through least squares regression to obtain accurate and reliable monthly and annual β coefficients. The stability of the β coefficient is assessed using the Chow test for adjacent calendar months and years, and statistical analysis is conducted on the results. Finally, a comparative analysis among the real estate, financial, and construction sectors is included to provide a comprehensive understanding of the characteristics of the β coefficient.[Results] The results reveal the following: (1) The monthly and annual mean β coefficients of the real estate sector are close to but less than 1. The monthly β coefficients show significant variability, while the annual β coefficient first increases and then decreases. (2) The monthly β coefficient is more stable than the annual β coefficient. (3) The trajectories of the β coefficient in the real estate and construction sectors are highly similar, with the stability of the β coefficient of the real estate sector being lower than that of the construction sector but higher than that of the financial sector.[Conclusions] There are clear differences in the stability characteristics of the monthly and annual β coefficients of the real estate sector, and these differences vary across sectors. This paper offers the following suggestions: (1) Short-term investors should monitor changes in the monthly β coefficients to anticipate market volatility. (2) Long-term investment decisions based on the real estate sector's β coefficients should be adjusted in a timely manner according to macroeconomic factors and other variables. (3) When investing across stock sectors, investors should pay attention to the volatility relationships among the construction, financial, and real estate sectors and adopt appropriate risk hedging strategies to reasonably diversify investment risks.
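A minimal sketch of the calculation pipeline is shown below: the β coefficient is estimated with an ordinary-least-squares fit of the single index model, and stability across two adjacent periods is checked with a Chow test. Inputs are assumed to be NumPy arrays of daily sector and market returns; this is not the authors' code.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

def fit_single_index(sector_ret, market_ret):
    """OLS fit of r_sector = alpha + beta * r_market; returns (beta, SSR, n)."""
    res = sm.OLS(sector_ret, sm.add_constant(market_ret)).fit()
    return res.params[1], res.ssr, len(sector_ret)

def chow_test(y1, x1, y2, x2):
    """Chow test for equality of (alpha, beta) across two adjacent periods."""
    _, ssr1, n1 = fit_single_index(y1, x1)
    _, ssr2, n2 = fit_single_index(y2, x2)
    _, ssr_pooled, _ = fit_single_index(np.concatenate([y1, y2]), np.concatenate([x1, x2]))
    k = 2  # parameters (alpha, beta) per regression
    f = ((ssr_pooled - (ssr1 + ssr2)) / k) / ((ssr1 + ssr2) / (n1 + n2 - 2 * k))
    return f, stats.f.sf(f, k, n1 + n2 - 2 * k)  # F statistic and p value
```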

  • Research Article
    SU Yang, LI Xiaowei, WU Xinxin, ZHANG Zuoyi
    Journal of Tsinghua University(Science and Technology). 2023, 63(8): 1184-1203. https://doi.org/10.16511/j.cnki.qhdxxb.2023.25.042
    Abstract (942) PDF (360) HTML (6)   Knowledge map   Save CSCD(1)
    [Significance] Two-phase flow instability is a classic problem in the field of steam generators and other two-phase flows. Therefore, it has been studied extensively. In nuclear reactor steam generators, two-phase flow instability may occur on the secondary side and interfere with the control system, causing fatigue-induced damage to the equipment. While two-phase flow instability can have different complex mechanisms and many influencing factors, there are various methods to research and analyze this phenomenon.[Progress] The phenomenon of two-phase flow instability can be classified into two types. The mechanisms of flow excursion (LE), density wave oscillation (DWO) and pressure drop oscillation (PDO) are introduced. LE and PDO can occur in conditions corresponding to the region of negative slope in the hydrodynamic characteristic curve (mass flow rate vs. pressure drop curve) in the heated tube and can be avoided by eliminating the negative slope region. However, DWO can also occur in the positive slope region due to the phase difference between two transient processes. One of these is the mass flow rate variation caused by variation in the driving pressure difference, which is controlled by the rate of momentum transfer. The other is the transient variation of the subcooled water region length and the density of saturated two-phase region fluid, which is caused by heat transfer. Changes caused by heat transfer are slower than changes in flow and pressure. Various research methods of two-phase flow instability are systematically summarized, including the theoretical time-domain method (nonlinear and linear methods), theoretical frequency-domain method, and discrete numerical method, starting from the conservation equations. The mathematical criterion obtained from the theoretical time-domain model can analyze the parameters' influence exactly over a wide range. The spatial distribution of density, enthalpy, and other physical parameters in the frequency domain can be obtained using the theoretical frequency-domain method, and the stability boundary it predicts is more accurate than that predicted by the theoretically simplified linear time-domain method. In addition, the research status of LE, DWO, and PDO is systematically summarized, with a particular focus on the work of our research group. New dimensionless numbers (two-phase number, superheated number, dimensionless pump number, and dimensionless bypass number) are proposed to describe the stability of the complex, superheated, two-phase flow boiling systems. A law unifying the influence of the Froude number, friction number, and geometric parameters (tube length, tube diameter, etc.) on DWO was developed. Previous contradictory conclusions are explained. A rigorous theoretical derivation and proof of the effects of model simplification and boundary conditions are presented. The requirements for conservatively modeling a real nuclear power plant steam generator and secondary loop system using a test section consisting of a single or multiple parallel small-scale heated tubes and a simplified engineering verification test loop in the laboratory are clarified. Finally, methods to avoid LE and DWO in the steam generator of the high-temperature gas-cooled reactor are introduced based on reactor design. 
To predict the stability of the high-temperature gas-cooled reactor-pebble bed module (HTR-PM) engineering test facility-steam generator (ETF-SG), a theoretical time-domain method, a theoretical frequency-domain method, a RELAP5 model, and a one-dimensional transient program are developed, all of which agree well with experiments.[Conclusions and Prospects] The results from the ETF-SG can conservatively predict the stability boundary of the steam generator and secondary loop of the HTR-PM nuclear power plant. The conditions for the occurrence of in-phase and out-of-phase DWO in the ETF-SG are revealed, and methods for eliminating them are recommended. These achievements are applied in the design, commissioning, and operation of the HTR-PM steam generator.
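For orientation only, the sketch below evaluates the classical subcooling number and phase change (Zuber) number that are commonly used to map DWO stability. These are the textbook dimensionless groups, not the new numbers proposed by the authors, and the property values are placeholders.

```python
def subcooling_number(dh_sub, h_fg, rho_f, rho_g):
    """N_sub = (dh_sub / h_fg) * (rho_f - rho_g) / rho_g; dh_sub is the inlet subcooling enthalpy (J/kg)."""
    return (dh_sub / h_fg) * (rho_f - rho_g) / rho_g

def phase_change_number(q_heat, w_flow, h_fg, rho_f, rho_g):
    """N_pch = (Q / (W * h_fg)) * (rho_f - rho_g) / rho_g; Q is the heating power (W), W the mass flow (kg/s)."""
    return (q_heat / (w_flow * h_fg)) * (rho_f - rho_g) / rho_g

# Placeholder operating point; stability maps are usually drawn in the (N_sub, N_pch) plane.
n_sub = subcooling_number(dh_sub=2.0e5, h_fg=1.0e6, rho_f=740.0, rho_g=50.0)
n_pch = phase_change_number(q_heat=5.0e6, w_flow=4.0, h_fg=1.0e6, rho_f=740.0, rho_g=50.0)
print(n_sub, n_pch)
```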
  • BIG DATA
    FAN Xiaoliang, PENG Zhaopeng, ZHENG Chuanpan, WANG Cheng
    Journal of Tsinghua University(Science and Technology). 2023, 63(9): 1317-1325. https://doi.org/10.16511/j.cnki.qhdxxb.2023.22.029
    Abstract (907) PDF (327)   Knowledge map   Save CSCD(2)
[Objective] Spatio-temporal correlation mining is a key technology in intelligent transportation systems and is usually applied to spatio-temporal data prediction problems such as traffic flow prediction. Accurately predicting traffic flows is extremely important in urban management for alleviating urban traffic congestion, improving traffic efficiency, and reducing traffic accidents. However, accurately predicting traffic flows in large-scale traffic networks is extremely challenging because of the high nonlinearity and complexity of the massive traffic flow data. Most existing methods use two separate components to capture the spatial and temporal correlations: a static spatial graph is constructed for each time step in the spatial dimension, and the same nodes at different time steps are connected to build a spatio-temporal graph in the temporal dimension. However, the potential correlations between the traffic flow data of different nodes at different time steps are ignored, so the complex spatio-temporal correlations in the traffic flow data cannot be effectively modeled. [Methods] In this paper, we proposed a spatio-temporal combinational graph convolutional network (STCGCN) to address the traffic flow prediction problem. STCGCN consisted of three modules: the spatio-temporal combinational graph (STCG) construction module, the spatio-temporal combinational graph convolution (STCGC) module, and the prediction module. The STCG construction module constructed an adaptive STCG adjacency matrix across temporal slices based on spatio-temporal embedding vectors, whose parameters could be learned automatically during training; this allowed the module to accommodate complex spatio-temporal correlations between nodes and overcome the difficulty existing prediction methods have in capturing the potential spatio-temporal correlations between nodes. The STCGC module designed adaptive STCGC operators and adaptive STCGC layers to extract spatio-temporal features from the historical traffic data of the nodes and the constructed adaptive STCG. Finally, the prediction module aggregated the hidden representations of all historical time steps obtained by the STCGC module and output the prediction result via a fully connected layer. We evaluated STCGCN on PeMSD4 and PeMSD8, two public datasets from the Caltrans performance measurement system (PeMS), by comparing it with 11 baseline methods: vector autoregression (VAR), support vector regression (SVR), the fully connected long short-term memory (FC-LSTM) neural network, the diffusion convolutional recurrent neural network (DCRNN), spatio-temporal graph convolutional networks (STGCN), attention-based spatial-temporal graph convolutional networks (ASTGCN), Graph WaveNet, spatial-temporal synchronous graph convolutional networks (STSGCN), the adaptive graph convolutional recurrent network (AGCRN), the graph multi-attention network (GMAN), and time zigzags at graph convolutional networks (Z-GCNETs). We adopted two widely used evaluation metrics: mean absolute error and root mean squared error. [Results] The experimental results revealed that, using a unified component, the proposed STCGCN model effectively modeled the dynamic temporal correlation, spatial correlation, and cross-spatio-temporal correlation in the traffic flow data. Furthermore, the model achieved the best prediction results at each time step, and its error grew more slowly than that of the other baseline methods as the prediction horizon increased. We also explored the effect of three hyperparameter settings on model performance, and the experiments showed that performance differed under different settings. The numbers of parameters and the training times of all models, including STCGCN and the 11 baseline methods, were compared at the end of the experiments. The results showed that STCGCN achieved the best model performance with the fewest model parameters and the shortest training time, and its algorithmic efficiency was close to the best. [Conclusions] Experiments on the public datasets show that the STCGCN model outperforms the 11 baseline methods in prediction accuracy.
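The adaptive-adjacency idea can be sketched in PyTorch as follows. This follows the common practice of learning node embeddings and forming a normalized adjacency from them (as in Graph WaveNet or AGCRN); the actual STCG spans nodes across adjacent time steps, which this simplified sketch does not reproduce.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveGraphConv(nn.Module):
    """One graph-convolution layer over a learned (adaptive) adjacency matrix."""
    def __init__(self, num_nodes: int, emb_dim: int, in_dim: int, out_dim: int):
        super().__init__()
        self.e1 = nn.Parameter(torch.randn(num_nodes, emb_dim))  # learnable source embeddings
        self.e2 = nn.Parameter(torch.randn(num_nodes, emb_dim))  # learnable target embeddings
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_nodes, in_dim)
        adj = F.softmax(F.relu(self.e1 @ self.e2.t()), dim=1)    # adaptive adjacency, learned end to end
        return F.relu(self.proj(torch.einsum("nm,bmf->bnf", adj, x)))

# Example: 307 sensors (as in PeMSD4), 12 input features per node.
layer = AdaptiveGraphConv(num_nodes=307, emb_dim=10, in_dim=12, out_dim=64)
out = layer(torch.randn(8, 307, 12))   # -> (8, 307, 64)
```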
  • BUILDING SCIENCE
    JIE Yuxin, FU Zhibin, WANG Yangqiang, YIN Changyun
    Journal of Tsinghua University(Science and Technology). 2023, 63(11): 1897-1908. https://doi.org/10.16511/j.cnki.qhdxxb.2023.25.033
    Abstract (903) PDF (396) HTML (3)   Knowledge map   Save CSCD(1)
[Objective] The determination of the bearing capacity of foundation soil is one of the classical research topics in soil mechanics. For large area artificial fill projects, the current specifications of China do not consider the width correction factor in the calculation of bearing capacity. This paper discusses this problem and investigates the definition and measurement methods of foundation bearing capacity, the related theoretical principles, and the method of determining the allowable bearing capacity. We also discuss the possible basis for selecting the correction factors for the bearing capacity of foundation soil composed of artificial fill and a soft underlying stratum. This may provide theoretical guidance for the width and depth correction of the bearing capacity of foundation soil in large area artificial fill under current technical conditions. [Methods] In this paper, two main approaches for calculating the bearing capacity of foundation soil are studied. One is to determine the bearing capacity according to the extent of the plastic zone; the other is to take the ultimate load as the ultimate bearing capacity and divide it by a safety factor to obtain the bearing capacity. The width and depth corrections of bearing capacity are also based on these theories. We review the sources of the calculation method of bearing capacity together with the width and depth correction factors in the current specifications of China. On the basis of deriving the calculation formula and analyzing the principle of the bearing capacity of foundation soil, two cases are investigated: one is a homogeneous foundation soil, and the other is a case in which the shear strength of the soil outside the foundation boundaries is lower than that of the foundation soil. The possible basis for the correction factors of the bearing capacity is then discussed in order to guide the determination of the width and depth correction factors for large area artificial fill (see the sketch after this abstract). [Results] For large area artificial fill projects, the width correction factor is not considered in the current specifications of China. The main reasons may be as follows: 1) The compaction quality of the foundation soil cannot easily be guaranteed. 2) The quality control standard for the soil under the foundation is stricter than that for the soil outside the foundation boundaries. 3) Post-construction settlement may occur in artificial fill. Since existing construction technology and quality control have made great progress compared with the past, it is theoretically feasible to increase the width and depth correction factors for large area artificial fill, such as on islands and reefs. However, the following conditions must be met: 1) The degree of compaction should meet the requirements of the fill site. 2) The engineering quality of the fill outside the foundation boundaries should be guaranteed to be no lower than that of the foundation soil. 3) Settlement calculations of the buildings are necessary. [Conclusions] Based on the theories for determining the bearing capacity of foundation soil, this paper investigates the method of selecting the values of the width and depth correction factors. It is considered that the width and depth correction factors can be appropriately increased under certain conditions for large area artificial fill. This will bring good economic benefits, especially for offshore islands and reefs.
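For reference, the width and depth correction discussed above commonly takes the form f_a = f_ak + η_b·γ·(b−3) + η_d·γ_m·(d−0.5) in the Chinese foundation design code, and the sketch below encodes that form. The symbol names and the 3-6 m width cap are quoted from memory and should be checked against the current specification; for large area artificial fill the current specifications effectively set η_b to zero.

```python
def corrected_bearing_capacity(f_ak, eta_b, eta_d, gamma, gamma_m, b, d):
    """Characteristic bearing capacity after width/depth correction (kPa).

    f_ak    : characteristic value from in-situ tests (kPa)
    eta_b   : width correction factor (0 for large area artificial fill under current specs)
    eta_d   : depth correction factor
    gamma   : unit weight of soil below the base (kN/m^3)
    gamma_m : weighted average unit weight of soil above the base (kN/m^3)
    b, d    : foundation width and embedment depth (m)
    """
    b_eff = min(max(b, 3.0), 6.0)   # width taken as 3 m when smaller, capped at 6 m
    d_eff = max(d, 0.5)
    return f_ak + eta_b * gamma * (b_eff - 3.0) + eta_d * gamma_m * (d_eff - 0.5)

# Example: the width term vanishes when eta_b = 0, as for large area fill.
print(corrected_bearing_capacity(f_ak=180, eta_b=0.0, eta_d=1.0, gamma=18, gamma_m=17, b=8, d=2))
```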
  • PUBLIC SAFETY
    WANG Hongping, HU Yanzhu, ZHANG Yufeng, WANG Song
    Journal of Tsinghua University(Science and Technology). 2023, 63(10): 1584-1597. https://doi.org/10.16511/j.cnki.qhdxxb.2023.22.036
    Abstract (902) PDF (330) HTML (5)   Knowledge map   Save CSCD(1)
[Objective] The rapid proliferation of electric vehicles (EVs) and the large-scale deployment of charging facilities have considerably increased the electrification of transportation road networks. However, road networks exhibit vulnerability to failure at several critical sections, which in turn may trigger a cascade of failures, ultimately leading to widespread road network disruptions. In the context of mixed electric and nonelectric vehicular flows, such adverse impacts may spread and cascade further because of EV-specific characteristics, such as limited EV range and required charging time. Protective measures for vulnerable road sections of electrified road networks against hazards could mitigate the risk of cascading failures and the further spread of disruptive events. Therefore, assessing the vulnerability of electrified transportation road networks and identifying critical road sections have become paramount. Given that the vulnerability of electrified transportation road networks has scarcely been explored in the existing literature, this paper proposes a two-layered attacker-defender model to study this vulnerability. [Methods] The outer layer model acts as the attacker and aims to minimize system performance by selecting roads in the system to disrupt, i.e., maximizing the total system travel time. The inner layer model serves as the defender, minimizing the total system travel time by dynamically and optimally distributing traffic flows containing both electric and nonelectric vehicles. The inner layer model is formulated based on an enhanced link transmission model that takes into consideration the critical characteristics of electrified transportation road networks. This two-layered model can describe the temporal and spatial evolution of the mixed electric and nonelectric vehicular flows. Additionally, this paper provides a detailed solution method and theoretical analysis of the model. A mixed-integer quadratic programming problem is obtained by taking the dual of the inner problem and combining it with the outer problem. This problem is subsequently converted into a mixed-integer linear programming problem using the big M method. [Results] The proposed model is applied to a segment of the highway network in North Carolina, U.S. The experimental results reveal that (1) Critical road sections determined with and without EVs differ considerably; therefore, it is necessary to incorporate EVs when analyzing the vulnerability of an electrified transportation road network. (2) The set of critical road sections varies with the level of attack resources. In particular, the set of critical road sections in low attack resource scenarios is not necessarily a subset of that in high attack resource scenarios. (3) The experimental results confirm the existence of a critical point in the attack resource level. When this critical point is reached, the system performance exhibits a phase change, marked by a notable decline. [Conclusions] The results verify that the proposed model can identify the set of critical road sections in the system and provide theoretical support for reducing the vulnerability of electrified transportation road networks.
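The big-M step mentioned above replaces a product of a binary attack variable and a bounded continuous variable with linear constraints. The toy check below (not the authors' full formulation) enumerates a few points to confirm that the linearized constraint set admits exactly the value z = y·x when 0 ≤ x ≤ M.

```python
M = 10.0  # assumed upper bound on the continuous variable x

def satisfies_linearization(z, x, y, eps=1e-9):
    """Standard big-M linearization of z = y * x with y binary and 0 <= x <= M."""
    return (z <= M * y + eps) and (z <= x + eps) and (z >= x - M * (1 - y) - eps) and (z >= -eps)

for y in (0, 1):
    for x in (0.0, 2.5, 10.0):
        assert satisfies_linearization(y * x, x, y)   # the true product is always feasible
print("big-M linearization admits z = y * x at all sampled points")
```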
  • MECHANICAL ENGINEERING
    FENG Xiaobing, WANG Jianjun, WANG Yongke, CHEN Suyun, LIU Aiping
    Journal of Tsinghua University(Science and Technology). 2023, 63(10): 1608-1625. https://doi.org/10.16511/j.cnki.qhdxxb.2022.26.057
    Abstract (890) PDF (290) HTML (4)   Knowledge map   Save CSCD(2)
[Significance] Welding plays a very important role in the manufacture of large structural workpieces and covers many fields, including high-end manufacturing areas such as shipbuilding, the oil and gas chemical industry, nuclear power engineering, energy and power, building steel structures, and rail transit. Traditional manual welding suffers from instability and low efficiency, and even semi-automatic welding, such as gantry welding and rail-guided robot welding, has not yet met the requirements of a more efficient and high-quality automated industry. The development trend for welding large structural parts is toward more intelligent and automatic welding. Against this background, trackless welding robots capable of crawling in all positions have become representative intelligent welding equipment and are gradually being promoted to realize the automatic and intelligent welding of large structural parts. [Progress] This paper systematically introduces the research status of intelligent robots for welding large structural parts at home and abroad and classifies welding robots into those with and without rails. The application scenarios, advantages, and disadvantages of these two types of welding robots are compared, and the comparison shows that trackless welding robots are more suitable for welding large structural parts because of their better flexibility, adaptability, and convenience. Many problems remain in the welding of large structural parts, especially in multi-layer, multi-pass welding of thick plates. Low machining accuracy, inaccurate assembly, thermal deformation of the metal, the large number of weld passes, and the stacking of weld beads all lead to low welding process accuracy and low welding quality, so many difficulties still need to be solved urgently. Automatic backing (root-pass) welding and automatic multi-layer, multi-pass path planning are two critical and difficult problems that hinder the progress of the welding industry. Until now, the backing welding of large structural parts has mainly depended on manual work, because the backing weld is the first weld connecting the two workpieces and is critical to the whole welding process. Automatic backing welding imposes strict requirements on the assembly clearance, misalignment, blunt edge, curling, and welding heat input; if these factors are not well controlled, a large number of welding defects, such as missed welds and incomplete penetration, will occur. Multi-layer, multi-pass welding has also depended on manual pass arrangement, which leads to unstable welding quality. Therefore, automatically planning the number of welding passes and layers, arranging the welding sequence, determining the position coordinates of each weld pass, and adjusting the welding parameters are becoming increasingly important. [Conclusions and Prospects] Based on this situation, this paper summarizes the exploration of related technologies in terms of automatic backing welding, seam tracking, and automatic multi-layer, multi-pass path planning. Multi-modal deep learning and multi-sensor fusion technologies are widely used in many fields and have attracted attention in military, industrial, and high-tech development. Thus, these two technologies will be key to developing welding robots and will provide guidance for the intelligent welding of large structural parts. In the future, artificial intelligence technology will lead welding robots to achieve better welding quality.
  • SPECIAL SECTION: BIG DATA ANALYTICS
    LI Mingzhu, TIAN Rongrong, LI Ran, ZHANG Jing, WANG Shujuan, LIU Jia, XU Lizhen, LI Yan, ZHAO Yonggan
    Journal of Tsinghua University(Science and Technology). 2024, 64(10): 1759-1770. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.020
    Abstract (890) PDF (662) HTML (3)   Knowledge map   Save CSCD(5)
[Objective] Saline-alkali soil is an important reserve resource of cultivated land and a potential granary in China, and its management and utilization are related to national food security. Therefore, innovative techniques and amendments should be developed to address the challenges of saline-alkali regions. Among these, calcium supplementation is recognized as one of the most effective methods for ameliorating saline-alkali soil. In the past two decades, gypsum from the desulfurization of flue gas (FGDG) in coal-fired power plants has become a preferred calcium source for ameliorating saline-alkali soil because of its high calcium content and economic feasibility. Given that FGDG has developed into a soil amendment and has been widely used, a profound understanding of the progress of its patents can provide technical guidance for the large-scale amelioration of saline-alkali soil. [Methods] Based on the incoPat global patent database, a bibliometric analysis was conducted on 520 invention patents in the field of using FGDG to ameliorate saline-alkali soil from 2003 to 2022. The application and authorization trends, high-yield mechanisms, operational status, substance composition, and their correlation with the patents in this field were systematically analyzed. In addition, a comparative analysis was conducted on the effectiveness of 52 patents with application cases. [Results] The results showed that the annual number of patent applications for using FGDG amendments to ameliorate saline-alkali soil first increased and then decreased, peaking at 115 patents in 2016. Most patents take 20-30 months from publication to authorization; however, the overall proportion of granted patents has shown a decreasing trend. The number of patents granted to universities and research institutes is higher than that granted to enterprises, whereas patents jointly held by universities and enterprises account for 15.6% of the total. A total of 37 patents were converted, and 7 of these were pledged, accounting for 33.3% of the total number of granted patents; all were transferred by universities to enterprises and pledged by the enterprises for financing. More than 70% of the patents comprised three or more substances, primarily including organic materials, inorganic minerals, microbial agents, and nutrient supplements. Organic materials can directly provide nutrients for the soil to make up for the nutrient shortage of FGDG, with an application frequency as high as 95.7%, followed by inorganic minerals, which account for 44.5%; microbial agents, which account for 41.3%; and nutrient supplements, which account for 21.3%. Compared with soils treated with other types of amendments or left untreated, soils receiving FGDG amendments showed significantly lower pH, exchangeable sodium percentage, and concentrations of salt ions toxic to crop growth, and higher Ca2+, SO42-, and total/available nitrogen and phosphorus contents, which provided a better soil environment and thereby increased crop yield. [Conclusions] Generally, research and development on FGDG amendments for saline-alkali soil amelioration have matured, and some innovative achievements have been transformed into real productivity; thus, the value of related patents has been increasingly highlighted. However, problems remain, such as the relatively simple composition of current patented formulas, unclear technical specifications for application rates and methods, and serious homogeneity among patents. In the future, we should strengthen cooperation among enterprises, universities, and research institutes, intensify research on FGDG formulas for saline-alkali soil, and enhance the application benefits of FGDG amendments.
  • ECONOMIC AND PUBLIC MANAGEMENT
    YANG Jialun, WANG Yintian
    Journal of Tsinghua University(Science and Technology). 2023, 63(9): 1452-1466. https://doi.org/10.16511/j.cnki.qhdxxb.2022.21.045
    Abstract (889) PDF (339)   Knowledge map   Save
[Objective] Traditional social attitudes still hold certain negative views and stereotypes about bankruptcy. After the implementation of the Enterprise Bankruptcy Law in China in 2007, the positive role of the bankruptcy system and the impact of creditor protection have still not been properly understood. Until now, no literature has systematically analyzed the effect of the Enterprise Bankruptcy Law on the innovative behavior of enterprises in financial distress; in fact, only a few quantitative analyses of the economic effects of implementing the law exist. This study examines the impact of the implementation of the Enterprise Bankruptcy Law on firm innovation and finds an increase in patent activity among firms in financial distress following the implementation of the law, thereby filling a gap in the literature on law and finance. [Methods] The 2007 Enterprise Bankruptcy Law is used as a quasi-natural experiment to construct a difference-in-difference model, which is intended to net out the common trend between the treatment and control groups of firms. This study takes Merton's option pricing theory as the theoretical foundation of the credit monitor model and gauges the degree of financial distress faced by enterprises by calculating their distance to default. Based on the level of financial distress faced by the sample firms in 2006, the year before the implementation of the Bankruptcy Law, the firms are grouped by the annual median level of financial distress. However, firms with high financial distress typically differ from those with low financial distress in other company characteristics, which could raise concerns about omitted variables. This paper therefore combines the difference-in-difference model with a propensity score matching method and selects, for each treatment-group firm, four control-group firms from the same industry with the closest propensity scores, effectively alleviating this concern. The propensity scores are estimated using logistic regression, and the matched sample passes the balance test. Furthermore, multistage dynamic regressions are employed to address potential concerns about causal identification. In addition, a volatility indicator based on the three-factor model is used to assess a firm's risk bearing and thus study the channel through which the Enterprise Bankruptcy Law influences firm innovation. [Results] After the implementation of the Enterprise Bankruptcy Law, the results reveal the following: 1) The number of patent applications of the treatment group increased by 18.77%, and the number of invention patent applications increased by 25.86%, compared with the control group. 2) The implementation of the Enterprise Bankruptcy Law strengthened the protection of creditors and thus improved the level of firm risk bearing, which significantly increased the innovation output and innovation quality of financially distressed firms. 3) The above phenomena are more pronounced in firms with good corporate governance and in regions with a strong rule of law and strong intellectual property protection. [Conclusions] This study confirms the effect of creditor protection on business innovation. The introduction of the Enterprise Bankruptcy Law has enhanced the legitimate rights and interests of creditors in China, provided a strong guarantee for creditors to obtain potential repayment, enhanced the risk tolerance of firms, and encouraged firms to innovate.
This study reveals the far-reaching effects of the legal system on the real economy and expands our knowledge of how the legal system environment influences enterprise innovation.
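A minimal sketch of the distance-to-default measure used to gauge financial distress is given below. Asset value and asset volatility are taken as given here, whereas in the credit monitor model they are backed out from equity value and equity volatility by solving the Merton option-pricing equations; the numbers are placeholders.

```python
import numpy as np

def distance_to_default(asset_value, debt_face_value, mu, sigma_v, horizon=1.0):
    """Merton-style distance to default over `horizon` years, in standard deviations."""
    drift = (mu - 0.5 * sigma_v ** 2) * horizon
    return (np.log(asset_value / debt_face_value) + drift) / (sigma_v * np.sqrt(horizon))

# A larger distance to default indicates a lower level of financial distress.
dd = distance_to_default(asset_value=120.0, debt_face_value=100.0, mu=0.08, sigma_v=0.30)
print(f"distance to default ≈ {dd:.2f}")
```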
  • PUBLIC SAFETY
    REN Jianqiang, CUI Yapeng, NI Shunjiang
    Journal of Tsinghua University(Science and Technology). 2023, 63(6): 1003-1011. https://doi.org/10.16511/j.cnki.qhdxxb.2023.22.006
    Abstract (883) PDF (338)   Knowledge map   Save
    [Objective] To estimate and predict the actual infection scale of COVID-19 in a population, a COVID-19 pandemic trend prediction method based on machine learning is proposed. This method uses detection data to predict the development trend of the pandemic and can implicitly consider the impact of prevention and control measures. Additionally, this method can predict the number of confirmed cases in the future and estimate the actual infection scale of COVID-19.[Methods] In this paper, a three-step prediction model based on machine learning (TSPM-ML) is proposed. Machine learning algorithms, such as neural networks, random forest, long short-term memory (LSTM), and sequence to sequence (seq2seq), are introduced into the prediction of the COVID-19 development situation, and the detection data are used to predict the number of people diagnosed and the actual scale of the infection in the future. The TSPM-ML includes three steps: (1) predicting the actual infection scale of COVID-19 based on the detection data, (2) predicting the future development trend of the actual infection scale based on the predicted results of the first step, and (3) predicting the number of people diagnosed in the future based on the actual infection scale obtained in the second step. The TSPM-ML is used to predict the actual pandemic situation in Germany, France, South Korea, the United States, Russia, and Finland.[Results] The largest prediction error is in the United States, with a forecast error of 23.71 per million people, while South Korea has the smallest prediction error of 0.63 per million people. Overall, the prediction results of the TSPM-ML are consistent with the simulation and actual data, and the reliability of the model is verified.[Conclusions] The predicted results are consistent with the actual data, and the TSPM-ML is highly reliable. The prediction results can enable government management departments to more accurately understand the actual development trend of COVID-19 and allocate medical resources more effectively, and provide decision support for COVID-19 prevention and control.
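The three-step structure can be sketched as a chain of regressors, as below. The estimator choice (random forests), the synthetic arrays, and their shapes are illustrative assumptions; the paper combines neural networks, random forests, LSTM, and seq2seq models at the different steps.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
detection = rng.random((200, 5))       # step-1 inputs: daily testing statistics
infection_now = rng.random(200)        # step-1 target: current actual infection scale (e.g., from simulation)
infection_future = rng.random(200)     # step-2 target: future infection scale
confirmed_future = rng.random(200)     # step-3 target: future number of confirmed cases

# Step 1: detection data -> current infection scale.
step1 = RandomForestRegressor(random_state=0).fit(detection, infection_now)
scale_now = step1.predict(detection).reshape(-1, 1)

# Step 2: current infection scale -> future infection scale.
step2 = RandomForestRegressor(random_state=0).fit(scale_now, infection_future)
scale_future = step2.predict(scale_now).reshape(-1, 1)

# Step 3: future infection scale -> future number of confirmed cases.
step3 = RandomForestRegressor(random_state=0).fit(scale_future, confirmed_future)
predicted_confirmed = step3.predict(scale_future)
```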