
Top access

  • COMPUTER SCIENCE AND TECHNOLOGY
    XIE Tian, YU Lingyun, LUO Changwei, XIE Hongtao, ZHANG Yongdong
    Journal of Tsinghua University(Science and Technology). 2023, 63(9): 1350-1365. https://doi.org/10.16511/j.cnki.qhdxxb.2023.21.002
    Abstract (2744) PDF (1158) CSCD(2)
    [Significance] Deep face manipulation technology involves the generation and manipulation of human imagery by different strategies, such as identity swapping or face reenactment between the source face and the target face. On the one hand, the rise of deep face manipulation has inspired a series of applications, including video making and advertising marketing. On the other hand, because face manipulation technology is usually open source or packaged as free apps, it lowers the technical barrier to tampering, resulting in the proliferation of fake videos. Moreover, when face manipulation technology is maliciously used by criminals to produce fake news, especially about important military and political officials, it can steer and interfere with public opinion, posing a great threat to national security and social stability. Therefore, research on deep face forgery detection technology is particularly important, and it is necessary to summarize the existing research to rationally guide deep face manipulation and detection technology. [Progress] Deep face manipulation technology can be roughly divided into four types: identity swapping, face reenactment, face editing, and face synthesis. Deepfakes bring real-world identity swapping to a new level of fidelity. The region-aware face-swapping network provides the identity information of source characters from local and global perspectives, making the generated faces more natural. In the field of face reenactment, Wav2Lip uses a pretrained lip-sync model as an expert model, encouraging the model to generate natural and accurate lip movements. In the field of face editing, FENeRF, a three-dimensional-aware generator based on a neural radiance field, aligns semantic, geometric, and texture information in the spatial domain, improving the consistency of the generated image across different perspectives while keeping the face editable. In the field of face synthesis, AnyFace proposes a cross-modal distillation module to align language and visual representations, using text information to generate more diversified face images. Deep face forgery detection technology can be roughly divided into image-level and video-level methods. Among image-level methods, SBI proposes a self-blending technique to generate realistic fake face images through data augmentation, effectively improving the generalization ability of the model. M2TR proposes a multimodal, multi-scale Transformer to detect local artifacts at different levels of the image in the spatial domain; frequency-domain features are also added as auxiliary information to preserve forgery detection ability on highly compressed images. Among video-level methods, RealForensics learns the natural correspondence between the face and audio in real videos in a self-supervised way, enhancing the generalization and robustness of the model. [Conclusions and Prospects] Deep face manipulation and detection technologies are developing rapidly, and the corresponding techniques are continuously updated and iterated. First, this survey reviews deep face manipulation and detection methods and discusses their strengths and weaknesses. Second, the common datasets and the evaluation results of different manipulation and detection methods are summarized. Finally, the main challenges of face manipulation and forgery detection are discussed, and possible future research directions are pointed out.
  • PUBLIC SAFETY
    DENG Lizheng, YUAN Hongyong, ZHANG Mingzhi, CHEN Jianguo
    Journal of Tsinghua University(Science and Technology). 2023, 63(6): 849-864. https://doi.org/10.16511/j.cnki.qhdxxb.2023.22.002
    Abstract (2597) PDF (1095) CSCD(14)
    [Significance] Landslide hazards are widely distributed in China and are severely harmful. For registered landslide hazards, a comprehensive prevention and control system has achieved remarkable disaster-reduction benefits. However, approximately 80% of geo-disasters in China each year still occur outside the scope of identified hazards. Monitoring and early warning are therefore important means of actively preventing landslide disasters and have achieved great success in disaster mitigation owing to their promptness, effectiveness, and relatively low cost. Deformation is the most significant monitoring parameter for landslides, and deformation monitoring has become a research focus and general trend. Landslide deformation monitoring engineering has strict requirements for controlled cost and high reliability to achieve widespread application and accurate early warning. Therefore, the commonly used monitoring instruments focus on surface deformation and rainfall to meet the requirements of easy installation and low implementation cost. However, surface deformation and rainfall are not sufficient conditions for determining the occurrence of landslides. Various challenges remain in existing monitoring technologies and early warning methods regarding engineering feasibility and performance improvement. Thus, it is important and urgent to summarize the existing research to rationally guide future development. [Progress] Deformation monitoring methods are divided into surface and subsurface monitoring. Most surface deformation monitoring technologies are vulnerable to interference from terrain, environment, and other factors; therefore, their timeliness and reliability are not easily guaranteed. Slope subsurface deformation monitoring technologies, in contrast, can directly capture the development and failure of the sliding surface and thus recognize disaster precursors. Subsurface monitoring offers earlier warning capability; however, existing instruments suffer from high cost, small measuring range, or difficult operation. Acoustic emission technology has the advantages of low cost, high sensitivity, and continuous real-time monitoring of large deformation, and it has gradually developed into a viable method for landslide subsurface deformation monitoring. Efficient landslide monitoring should therefore comprehensively use multiple technologies to overcome the limitations of any single technology, and integrated monitoring systems have become the state-of-the-art trend. The purpose of landslide monitoring is to provide a basis for early warning decisions, thus avoiding casualties and property losses through effective early warning. In the field of early warning, regional meteorological and individual landslide early warning methods have gradually been developed and improved. Deformation monitoring data are the main basis for landslide early warning, and experts analyze deformation trends and sudden-change characteristics. Different early warning levels can be triggered by thresholds of velocity, acceleration, or other criteria. However, landslides have complex dynamic mechanisms and individual differences; thus, generic early warning models need further exploration.
The intelligent early warning model integrates machine learning technology with geological engineering analysis to improve the accuracy and automation level of landslide early warning.[Conclusions and Prospects] Deformation monitoring is essential in landslide prevention, and deformation data are the main basis for landslide early warning. Moreover, surface monitoring technologies have been widely used in the perception and decision-making process of landslides. Subsurface monitoring technologies can detect early precursors of landslide evolution to continuously improve early warning accuracy. Analyses show that early warning methods can be improved in the future by integrating machine learning models and geotechnical engineering.
  • PUBLIC SAFETY
    DAI Xin, HUANG Hong, JI Xinyu, WANG Wei
    Journal of Tsinghua University(Science and Technology). 2023, 63(6): 865-873. https://doi.org/10.16511/j.cnki.qhdxxb.2023.22.013
    Abstract (2168) PDF (909) CSCD(5)
    [Objective] Rapid prediction of rainstorm waterlogging is crucial for disaster prevention and reduction. However, traditional numerical models for simulating and predicting large-scale, complex subsurface conditions are complicated and time-consuming, making it difficult to meet the time-efficiency requirements of rainstorm waterlogging prediction. To address these shortcomings of numerical models, this study constructs a spatiotemporal prediction model of urban rainstorm waterlogging based on machine learning methods to rapidly predict waterlogging extent and water depth changes. [Methods] This study constructs a rapid prediction model of urban rainstorm waterlogging based on a hydrodynamic model and machine learning algorithms. First, a hydrodynamic model of rainstorm waterlogging in the study area is constructed based on InfoWorks integrated catchment management (InfoWorks ICM), with parameter calibration and model validation, to realize high-precision simulation of urban rainstorm waterlogging. On this basis, the hydrodynamic model is driven by designed rainfall scenarios to obtain rainstorm waterlogging simulation results, which are used as the base dataset for machine learning. Second, the spatial characteristics of rainstorm waterlogging are obtained from three aspects: rainfall situation, subsurface information, and the drainage capacity of the pipe network, which, together with the grid simulation results, comprise the dataset. The spatial prediction models are based on random forest, extreme gradient boosting (XGBoost), and K-nearest neighbor algorithms. Finally, the simulation results of waterlogging points are used to generate rainstorm waterlogging time series. The rainfall, cumulative rainfall, and water depth of the four preceding time steps (every 5 min) are used as input to a long short-term memory (LSTM) neural network to predict the present water depth at each flooding point. The two models together achieve rapid spatial and temporal prediction of urban rainstorm waterlogging. [Results] For spatial prediction, the random forest model has the best fitting performance on evaluation indexes such as the mean square error, the mean absolute error, and the coefficient of determination (R2). For a rainstorm scenario with an 80-year return period and a 2.5 h rainfall duration, the prediction results concur with the risk map of urban waterlogging in Beijing. Compared with the simulation results of InfoWorks ICM, the prediction accuracy of the inundation extent reaches 99.51%, and the average prediction error of waterlogging depth does not exceed 5.00% with the random forest model. For temporal prediction, the water depth changes predicted by the LSTM neural network closely match the simulation results of InfoWorks ICM: the R2 values at four typical inundation points are all above 0.900, the average absolute error of water depth prediction at the peak moment is 1.9 cm, and the average relative error is 4.0%. [Conclusions] When addressing sudden rainstorms, the rapid prediction model based on machine learning algorithms built in this study can generate accurate predictions of flooding extent and water depth in seconds by simply updating the forecast rainfall data in the model input. The computational speed is greatly improved compared with the hydrodynamics-based numerical model, which can help plan waterlogging mitigation and relief measures.
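    As a rough illustration of the temporal module described in this abstract, the sketch below builds an LSTM that maps the four preceding 5-min steps of (rainfall, cumulative rainfall, water depth) to the present depth at one inundation point. The layer sizes, batch shapes, and PyTorch framing are illustrative assumptions; the paper's actual configuration is not given here.

```python
# Hypothetical sketch of the LSTM water-depth predictor; the hidden size and
# input layout are assumed, not taken from the paper.
import torch
import torch.nn as nn

class DepthLSTM(nn.Module):
    """Predict current water depth from the four preceding 5-min steps of
    (rainfall, cumulative rainfall, water depth) at one inundation point."""
    def __init__(self, hidden_size: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 4, 3) = four past time steps, three features per step
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # depth at the present step

model = DepthLSTM()
window = torch.randn(8, 4, 3)             # dummy batch of 8 input windows
pred_depth = model(window)                # shape (8, 1)
```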
  • COMPUTER SCIENCE AND TECHNOLOGY
    WANG Yun, HU Min, TA Na, SUN Haitao, GUO Yifeng, ZHOU Wuai, GUO Yu, ZHANG Wanzhe, FENG Jianhua
    Journal of Tsinghua University(Science and Technology). 2024, 64(4): 649-658. https://doi.org/10.16511/j.cnki.qhdxxb.2023.26.042
    Abstract (1782) PDF (730) HTML (3) CSCD(4)
    [Significance] Since the turn of the 21st century, artificial intelligence (AI) has advanced considerably in many domains, including government affairs. Furthermore, the emergence of deep learning has taken the development of many AI fields, including natural language processing (NLP), to a new level. Language models (LMs) are a key research direction of NLP. LMs began as statistical models used to calculate the probability of a sentence; in recent years, however, there have been substantial developments in large language models (LLMs). Notably, LLM products, such as the generative pretrained transformer (GPT) series, have driven a rapid revolution in LLM research. Domestic enterprises have also researched LLMs, for example, Huawei's Pangu and Baidu's enhanced language representation with informative entities (ERNIE) bot. These models have been widely used in language translation, abstract construction, named-entity recognition, text classification, and relationship extraction, among other applications, and in government affairs, finance, biomedicine, and other domains. [Progress] In this study, we observe that improving the efficiency of governance has become one of the core tasks of government in the era of big data. With the continuous accumulation of government data, traditional statistical models relying on expert experience and local features gradually suffer limitations in application. However, LLMs, which offer high flexibility, strong representation ability, and effective results, can rapidly enhance the intelligence level of government services. First, we review the research progress on early LMs, such as statistical LMs and neural network LMs. Subsequently, we focus on the research progress on LLMs, namely the Transformer series, GPT series, and bidirectional encoder representations from transformers (BERT) series. Finally, we introduce the application of LLMs in government affairs, including government text classification, relationship extraction, public opinion risk identification, named-entity recognition, and government question answering. Moreover, we propose that research on LLMs for government affairs must focus on multimodality, benefit appropriately from the trend of "model as a service," attend to data security, and clarify government responsibility boundaries. Additionally, a technical path for studying LLMs for government affairs is proposed. [Conclusions and Prospects] The application of LLMs in government affairs has mainly focused on small-scale models, with few examples of large-scale models in use. Compared with smaller models, large models offer many advantages, including higher efficiency, broader application scenarios, and more convenience. In terms of efficiency, large models are usually trained on large amounts of heterogeneous data and thus deliver better performance. In terms of application scenarios, large models increasingly support multimodal data, enabling more diverse applications. In terms of convenience, the "pretraining + fine-tuning" mode and invocation through interfaces make LLMs easier to use in research and practice. This study also analyzes the issues facing LLMs from the technological and ethical perspectives, which have caused a degree of public concern. For example, ChatGPT has generated many controversies, including whether the generated content is original, whether using ChatGPT amounts to plagiarism, and who owns the intellectual property rights to the generated content. Overall, LLMs are in a stage of vigorous development. As the country promotes research on AI and its application in government affairs, LLMs will play an increasingly crucial role in the field.
  • INFORMATION SCIENCE
    WEI Zeyang, LIU Yi, WANG Chunyan, ZHANG Jia, BIAN Jiang, YAO Linjie, LIN Sijie, EWE Kaijie
    Journal of Tsinghua University(Science and Technology). 2022, 62(12): 1839-1850. https://doi.org/10.16511/j.cnki.qhdxxb.2022.21.006
    Abstract (1654) PDF (697) HTML (1)
    As an emerging interdisciplinary concept, environmental computing refers to quantitative research that applies computing to the numerical analysis of environmental processes and/or the analysis of environmental data. Under this conceptual framework, the various integrations of environmental and computational science are discussed together to guide the development of the field and to summarize advanced research models and methods. This paper introduces the definition and basic characteristics of environmental computing and explains the methodological characteristics of its various types through typical cases. Environmental computing has transitioned from theory-driven to data-driven and then to hybrid computing, and the comprehensive computing framework shows considerable advantages over conventional approaches or single methods. To achieve significant breakthroughs, researchers need to continually explore basic theories, both environmental and computational, and promote the transformation of environmental thinking to keep pace with the frontiers of computational science. Additionally, challenges concerning big data theory, technical application scenarios, and computing power also need to be overcome.
  • LOW-CARBON TRANSPORTATION & GREEN DEVELOPMENT
    WANG Yan, OU Guoli
    Journal of Tsinghua University(Science and Technology). 2023, 63(11): 1693-1706. https://doi.org/10.16511/j.cnki.qhdxxb.2023.26.033
    Abstract (1475) PDF (609) HTML (1)
    [Significance] The issue of climate change is extremely complex and encompasses multiple factors such as the environment, economy, society, and related aspects. With the ongoing maturation of complex system modeling technology, low-carbon transportation research using the computable general equilibrium (CGE) model presents a new approach to policy evaluation. The CGE model has three primary advantages for analyzing the economic challenges of transitioning to low-carbon transportation. First, the approach has a solid microeconomic foundation that can directly reflect the mechanism and influence of economic subjects' behavior under the assumption of a rational economic player. Second, CGE models are capable of fully simulating the connections of different economic sectors, which can uncover the transmission effect of transportation policy impact among various sectors, as well as the response of various sectors to the policy impact. Third, the model has two major types, static and dynamic CGE models, which can analyze the short- and long-term impact of different policies, respectively. As an essential prediction tool for policy impact and trend analysis, CGE models can comprehensively reveal the interaction characteristics between the transportation industry and the whole national economy, enabling the prediction of the economic and social impact of low-carbon transportation policies. [Progress] This study investigates contemporary research on transportation policies based on the CGE model. A total of 78 relevant empirical studies are collected from the Web of Science, Science Direct, and China National Knowledge Infrastructure, of which more than 50% focus on predicting the impact of low-carbon transportation policies, indicating that the investigation of traffic-related carbon emissions has gradually become a popular topic of empirical analysis using CGE models. The research topics include: (1) The influence of low-carbon transportation economic incentives, such as carbon tax, emission trading scheme, and transportation subsidies. (2) The application effect of low-carbon technologies, such as electric vehicles and carbon capture and storage. (3) The effect of low-carbon transportation urban planning, including land use, vehicle speed limits, walking-oriented urban design, and bicycle-oriented urban space development. (4) Predicting the economic and social impact of the implementation of nationally determined contributions and fuel economy standards. Previous research establishes a solid foundation for prediction and policy analysis in low-carbon transportation research; however, in the context of China's 2030 carbon peak and 2060 carbon neutrality goals, some issues remain that require further exploration and investigation. [Conclusions and Prospects] First, regarding emissions reduction policies, differing transportation needs, transportation structure, energy structure, technical level, and macropolicies will affect transportation carbon emissions. The carbon emissions reduction potential of various policies requires further study, and it is essential to propose structured solutions referencing the prediction and design of composite system transportation emissions reduction policies. 
Based on China's 1+N policy system for advancing the dual carbon goals, this study constructs a low-carbon transportation policy matrix based on the "avoid/shift/improve-planning/regulatory/economic/information/technological (ASI-PREIT)" structure, producing a proposed "policy basket" for low-carbon transportation CGE modeling. This policy matrix comprehensively reveals the correlation between policy tools for low-carbon transportation CGE modeling and helps put forward structured low-carbon solutions. Second, in terms of model construction, accessibility is the most intuitive factor in transportation. Treating the transportation sector simply as a product production sector, as with other sectors, risks neglecting network and external benefits; therefore, this study proposes including transportation accessibility factors in low-carbon transportation CGE models, forming a spatial computable general equilibrium model that can identify regional economic correlations and regional product flows. Third, in terms of synergies, carbon emissions reduction in transportation is crucial to achieving China's dual carbon goals and can advance innovation and economic growth, leveraging a wide range of synergies, including sustainable development, improved public health, and enhanced overall quality of life. Currently, increasingly severe ecological and environmental challenges are forcing global economies to reassess the GDP-centered development model and seek balanced, sustainable development strategies encompassing the environment, economy, and society. This study proposes developing a comprehensive low-carbon transportation CGE model to compare and analyze optimal solutions for balancing environment-economy-society co-benefits from a global perspective and to design low-carbon transportation policy combinations that advance sustainable development. In summary, this study endeavors to systematically review the empirical research applying CGE models in the field of low-carbon transportation, provide a reference for expanding research on low-carbon transportation, and help policymakers and the transportation sector achieve China's dual carbon goals.
  • BIG DATA
    ZHAO Xingwang, HOU Zhedong, YAO Kaixuan, LIANG Jiye
    Journal of Tsinghua University(Science and Technology). 2024, 64(1): 1-12. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.001
    Abstract (1460) PDF (608) HTML (1)
    [Objective] Multiview graph clustering aims to investigate the inherent cluster structures in multiview graph data and has received extensive research attention in recent years. However, views differ in quality, and existing methods treat all views equally during fusion without assigning weights according to view quality. This may lose complementary information from multiple views and ultimately degrade clustering quality. Additionally, the topological structure and attribute information of nodes in multiview graph data differ significantly in content and form, making it challenging to integrate these two types of information effectively. To solve these problems, this paper proposes two-stage fusion multiview graph clustering based on an attention mechanism. [Methods] The algorithm comprises three stages: feature filtering based on graph filtering, feature fusion based on the attention mechanism, and topological fusion based on the attention mechanism. In the first stage, graph filters are applied to combine the attribute information with the topological structure of each view; filtering out high-frequency noise yields a smoother embedding representation. In the second stage, the smooth representations of individual views are fused using attention mechanisms to obtain a consensus smooth representation that incorporates information from all views. Additionally, a consensus Laplacian matrix is obtained by combining the Laplacian matrices of the views with learnable weights. To obtain the final embedded representation, the consensus Laplacian matrix and consensus smooth representation are fed into an encoder. Subsequently, the similarity matrix of the final embedded representation is computed, training samples are selected from the similarity matrix, and the embedded representation and the learnable weights of the Laplacian matrix are optimized iteratively to obtain a more compact embedded representation. Finally, spectral clustering on the embedded representation yields the clustering results. The performance of the algorithm is evaluated using widely used clustering metrics, including accuracy, normalized mutual information, adjusted Rand index, and F1-score, on three datasets: Association for Computing Machinery (ACM), Digital Bibliography & Library Project (DBLP), and Internet Movie Database (IMDB). [Results] 1) The experimental results show that the proposed algorithm handles multiview graph data more effectively than existing methods, particularly on the ACM and DBLP datasets, although it does not outperform LMEGC and MCGC on the IMDB dataset. 2) By exploring view quality, the algorithm learns weights specific to each view based on its quality. 3) Compared with the best-performing single view on each dataset (ACM, DBLP, and IMDB), the proposed algorithm achieves average performance improvements of 2.4%, 2.9%, and 2.1%, respectively, after fusing all views. 4) Exploring the effects of the number of graph filter layers and the ratio of positive to negative node pairs shows that the best performance is achieved with a small number of graph filter layers, and the optimal ratios for positive and negative node pairs are around 0.01 and 0.5, respectively. [Conclusions] The algorithm combines attribute information with topological information through graph filtering to obtain smoother representations better suited to clustering. The attention mechanisms learn weights from both the topological and attribute perspectives according to view quality. In this way, the representation incorporates information from each view while avoiding the influence of poor-quality views. The proposed method achieves the expected results, greatly enhancing clustering performance.
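    The first stage of the algorithm above, graph filtering, can be sketched as repeated application of a low-pass operator built from the normalized graph Laplacian. The filter form (I - 0.5 L) and the toy adjacency matrix below are illustrative assumptions; the paper's exact filter and the attention-based fusion stages are not reproduced.

```python
# Hypothetical sketch of low-pass graph filtering for one view; the filter
# design is a common choice, not necessarily the paper's.
import numpy as np

def graph_filter(adj: np.ndarray, features: np.ndarray, k: int = 2) -> np.ndarray:
    """Smooth node features by k applications of (I - 0.5 * L_sym)."""
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                             # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    lap = np.eye(n) - d_inv_sqrt @ a_hat @ d_inv_sqrt   # normalized Laplacian
    h = features
    for _ in range(k):                                  # k-layer low-pass filter
        h = (np.eye(n) - 0.5 * lap) @ h                 # damps high-frequency noise
    return h

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
x = np.random.rand(3, 4)                                # toy attribute matrix
smooth = graph_filter(adj, x)                           # smoothed view embedding
```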
  • Review
    AN Jian, CHEN Yuxuan, SU Xingyu, ZHOU Hua, REN Zhuyin
    Journal of Tsinghua University(Science and Technology). 2023, 63(4): 462-472. https://doi.org/10.16511/j.cnki.qhdxxb.2023.25.001
    Abstract (1257) PDF (522) HTML (0) CSCD(1)
    [Significance] With the development of combustion science, large amounts of data containing various kinds of effective physical information are generated by numerical simulation and experimental measurement. Traditional research methods mainly apply model-based physical rules to interpret such information. However, as the amount of data increases, data-driven methods have gradually gained research attention. Owing to the remarkable success of machine learning (ML) techniques in data analysis and processing, they also offer a new way of processing large amounts of data in the field of combustion. [Progress] This study reviews the applications of ML in turbulent combustion, including chemical reactions, combustion modeling, engine performance prediction and optimization, and combustion instability prediction and control. The challenges and future prospects are also discussed. In the area of chemical reactions, ML has been successfully demonstrated for the simplification and optimization of chemical mechanisms as well as for the efficient representation of chemical systems. Similarly, ML applications have produced encouraging results for modeling subgrid-scale processes and for parameterizing probability density functions (PDFs), often outperforming physics-based closure models in a priori studies. However, caution should be exercised in extrapolating these findings to a posteriori applications. Moreover, further studies are necessary to examine the performance of these data-driven models, which are typically generated for specific operating conditions, in practical applications. To address the limitations of regression models, physics-informed neural networks provide avenues for incorporating physical principles and other fundamental consistencies that are necessary for robust combustion simulations. As for applications in engines, robust intelligent control via ML has only become feasible for combustion experiments in recent years, mainly owing to developments in deep learning. These methods are still not feasible for commercial applications, largely because of the lack of confidence in ML models under unseen conditions, especially in safety-critical applications, and the large amount of online training required for current ML methods to converge. [Conclusions and Prospects] Against this background, robustness research remains a top priority. Although ML has been successfully combined with combustion research in many studies, conceptualizing combustion problems in ML frameworks remains laborious, and formulating the problem in an ML framework is a prerequisite for solving it with ML. Clarifying the combustion problem and carefully selecting and preprocessing the obtained data are important. In addition, careful selection of the ML model and loss function, together with training and tuning of the model, are necessary for building a predictive model. Moreover, ML models exhibit various degrees of predictive uncertainty, exacerbated by the lack of interpretability in complex models. Therefore, there is an urgent need to establish ML methods with physical insights. More efforts, such as sample construction methods, modeling methods, and uncertainty quantification or sensitivity analysis, should be made to effectively verify model performance. This ensures that the model abides by the laws of physics and can accurately represent the simulated system.
The holistic combination of data-driven methods with physical insights could have profound impacts on all areas of combustion science and technology, such as data-assisted modeling and simulation techniques, in situ control and optimization strategies, and data-driven screening of alternative fuels.
  • CONSTRUCTION MANAGEMENT
    LI Enyuan, LIU Hongyu, ZHU Enwei
    Journal of Tsinghua University(Science and Technology). 2024, 64(2): 173-180. https://doi.org/10.16511/j.cnki.qhdxxb.2023.22.045
    Abstract (1253) PDF (516) HTML (1)
    [Objective] The land market plays a key role in achieving a stable and healthy market for real estate development and has a significant impact on macroeconomic conditions, government finances, and overall financial stability. However, many participants in the Chinese land market engage in blind expansion and irrational land acquisition, which interferes with policy objectives such as price stability for land and housing, as well as the healthy development of the land market. This study analyzes market feedback on land auction participants and the factors influencing it. We use auction theory to examine the existence of the winner's curse phenomenon in the Beijing land market and provide policy recommendations for the government to regulate the land market. [Methods] This study is based on micro-level auction data from land sales in Beijing between 2013 and 2018 and on the Wind enterprise database. We first construct models to calculate cumulative abnormal returns of the auction participants' stock prices in the periods following land auctions. We then use an event study to explore the effects of participant, land, and auction characteristics on stock price changes. Particular attention is given to market feedback on the auction winners. The uniqueness of this study lies in the vast amount of data used. We consider factors that are crucial elements of market feedback but have been relatively unexplored in previous studies, such as participants' bidding premiums, past experience in land auctions, and joint bidding. [Results] We find that: (1) The higher the final bidding premium of land auction participants, the more negative the market's reaction. Previous experience with repeated bidding and joint bidding enables participants to access more market information, helping mitigate irrational bidding. Variables such as land value, bidding intensity, and the frequency of winning bids on a single day, which reflect a bidder's economic strength, lead to more positive market evaluations. (2) Evidence of the winner's curse phenomenon is observed in the Beijing land market. Although cumulative abnormal returns do not show significant inter-group differences between winners and losers, controlling for the final bidding premium reveals that higher bidding premiums result in more negative market evaluations for the winners. Joint bidding helps winners make rational bids, but the effect of repeated participation in the short term is not significant. (3) The market holds a significantly negative view of bidders who are active over an extended period, and this effect is more pronounced for winners, providing additional evidence for the existence of the winner's curse phenomenon. [Conclusions] Based on these findings, we recommend that the government enact policies to encourage market participants to make rational bids. This could be achieved in part by promoting complementary advantages and market information sharing through joint bidding. The government should also enhance information disclosure through various means to alleviate information asymmetry in the market, and strengthen supervision of active market participants' funds and of development and construction processes to reduce irrational bidding behavior.
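    The event-study machinery described in this abstract rests on cumulative abnormal returns. The sketch below shows one standard way to compute them, using ordinary-least-squares market-model residuals; the window lengths and the synthetic return series are illustrative assumptions, not the paper's specification.

```python
# Hypothetical event-study sketch: estimate a market model on a pre-event
# window, then cumulate abnormal returns over a post-event window.
import numpy as np

def cumulative_abnormal_return(stock: np.ndarray, market: np.ndarray,
                               event_idx: int, est_len: int = 120,
                               event_len: int = 5) -> float:
    """CAR over [event, event + event_len) using market-model residuals."""
    est_s = stock[event_idx - est_len:event_idx]
    est_m = market[event_idx - est_len:event_idx]
    beta, alpha = np.polyfit(est_m, est_s, 1)       # market model: r = a + b * rm
    win_s = stock[event_idx:event_idx + event_len]
    win_m = market[event_idx:event_idx + event_len]
    abnormal = win_s - (alpha + beta * win_m)       # actual minus expected return
    return float(abnormal.sum())

rng = np.random.default_rng(0)
rm = rng.normal(0, 0.01, 200)                       # synthetic market returns
rs = 0.0002 + 1.1 * rm + rng.normal(0, 0.005, 200)  # synthetic stock returns
print(cumulative_abnormal_return(rs, rm, event_idx=150))
```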
  • PUBLIC SAFETY
    ZHANG Lin, WANG Jinyu, WANG Xin, WANG Wei, QU Li
    Journal of Tsinghua University(Science and Technology). 2023, 63(5): 765-774. https://doi.org/10.16511/j.cnki.qhdxxb.2022.21.038
    Abstract (1211) PDF (500) HTML (5) CSCD(3)
    The frequent occurrence of major natural disasters not only endangers national stability and people's safety but also causes serious economic losses. Since most sudden natural disasters are unpredictable, how to transport emergency supplies to disaster-affected areas quickly and accurately has attracted wide attention. Unlike existing research, this study begins with the rescue characteristics of major natural disasters and constructs an intelligent dispatching model of emergency supplies for multiple disaster areas. Considering that the emergency supplies of each rescue area must fully meet the needs of each disaster area, this study constructs an uncertain multiobjective intelligent dispatching model of emergency supplies. Because information in emergency situations is uncertain and fuzzy, the triangular fuzzy number method can help decision-makers make effective decisions; it is therefore used to express the uncertainty of emergency supplies demand and transportation time in different disaster areas. The rainstorm disaster in Henan Province, China, in 2021 is taken as a typical case. The objective and actual data on emergency supplies dispatched in this disaster are obtained from the official websites of the Zhengzhou Temporary Disaster Relief Reserve, the Red Cross Society of China Henan Branch, and the Henan Charity Network. This study sets the quantity of emergency supplies as variable xij, the unit cost as cij, and the transportation time as tij. Using the triangular fuzzy numbers of emergency supplies demand and transportation time set in this study, the uncertain variables are represented by the triangular fuzzy number method, and the model is thus transformed into a deterministic multiobjective intelligent dispatching model. Two-dimensional Euclidean distance weighting is used to solve the model, and the linear interactive and general optimizer (LINGO) software is used to calculate the emergency supplies dispatching strategy from each rescue area to each disaster area. Because limited transportation conditions usually prevent each rescue area from dispatching all emergency supplies at once, the weights of the various emergency supplies are determined according to the urgency of the actual situation, and LINGO is used again to calculate a phased emergency supplies transportation scheme. Finally, the optimal emergency dispatching strategy is formulated to meet the research objective of this study. On this basis, a visual comparison is made between the results obtained using the constructed intelligent dispatching model and the demand quantity of emergency supplies in each disaster area. The dispatched quantities of the various emergency supplies obtained by the model differ little from the actual demand of each disaster area, so large waste in major natural disasters can be avoided. The findings show that the model has high reliability and that the simulation results are close to the actual situation; it can meet the emergency supplies demand of multiple disaster areas and help decision-makers develop effective disaster relief strategies for major natural disasters.
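    The triangular fuzzy numbers used above can be reduced to crisp values before the deterministic model is solved. The sketch below uses graded mean integration for this step; the formula choice and the demand figures are illustrative assumptions, since the abstract does not state the exact defuzzification rule.

```python
# Hypothetical defuzzification step; graded mean integration is one common
# rule for triangular fuzzy numbers, not necessarily the paper's.
def defuzzify(a: float, b: float, c: float) -> float:
    """Graded mean integration of a triangular fuzzy number (a <= b <= c)."""
    return (a + 4.0 * b + c) / 6.0

# Fuzzy demand for one disaster area: at least 800, most likely 1000, at most 1300
crisp_demand = defuzzify(800, 1000, 1300)   # -> about 1016.7 units
print(crisp_demand)
```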
  • Research Article
    FU Wen, WEN Hao, HUANG Junhui, SUN Binxuan, CHEN Jiajie, CHEN Wu, FENG Yue, DUAN Xingguang
    Journal of Tsinghua University(Science and Technology). 2023, 63(7): 1068-1077. https://doi.org/10.16511/j.cnki.qhdxxb.2023.26.025
    Abstract (1198) HTML (1) CSCD(2)
    [Objective] The South-to-North Water Diversion Project is a strategic project in China. Since its construction, it has become a major water supply source for more than 280 cities. The diversion tunnel is the key structure supporting the project. Owing to the long route, large diameter, high water pressure, and complex surrounding rock geology, together with many years of hydraulic erosion, biochemical erosion, geological action, and other influences, typical defects such as cracks, collapse, and exposed steel bars are prone to occur. Manual detection of defects in the tunnel is time- and labor-consuming and suffers from low accuracy and poor timeliness. Therefore, underwater robot inspection technology has become a research hotspot. Among such systems, the underwater manipulator can be installed not only on an underwater vehicle but also on other platforms as required, completing tasks such as cleaning the water surface, laying and repairing cables, salvaging sunken objects, and cutting ropes. However, controlling the underwater manipulator is complicated and difficult owing to its time-varying dynamics, nonlinearity, external disturbances, and hydrodynamic effects. The main purpose of this paper is to establish a dynamic model of the underwater manipulator and improve its trajectory tracking accuracy. [Methods] A modeling method combining the Newton-Euler equations with the Morison equation is proposed, and the dynamic parameters are then identified. To improve precise control of the manipulator in complex, transient underwater environments, an adaptive sliding mode control method is designed that compensates the nonlinear dynamic model and uses a radial basis function (RBF) neural network to compensate the unmodeled dynamics and modeling errors of the system. Based on the established dynamic model, a detailed dynamic simulation environment of the underwater manipulator is built, with Gaussian noise errors of amplitudes 5, 20, 15, 10, 8, and 5 N·m applied to the joints. On this basis, in experiment 1 (P1), a double-loop proportional-integral-derivative (PID) controller is designed for control simulation. In experiment 2 (P2), an RBF neural network is used to fit and compensate the system modeling errors and unmodeled terms. In experiment 3 (P3), dynamic model compensation is added on the basis of P2. [Results] The trajectory tracking performance of P2 and P3 was clearly better than that of P1, and with the dynamic model compensated, P3 also tracked better than P2. [Conclusions] The simulations verify the effectiveness of the proposed hydrodynamic modeling of the manipulator. With the nonlinear dynamic model compensated, the adaptive sliding mode control method that uses an RBF neural network to compensate unmodeled dynamics and modeling errors achieves higher trajectory tracking accuracy than traditional PID control and a general RBF network adaptive sliding mode control.
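    The RBF compensation idea in this abstract can be sketched as a Gaussian RBF layer whose output weights are adapted online by the sliding variable. The centers, width, and learning rate below are illustrative assumptions; the paper's actual adaptive law and controller are not reproduced.

```python
# Hypothetical RBF compensator sketch; gains and centers are assumed values.
import numpy as np

class RBFCompensator:
    """Gaussian RBF network estimating the unmodeled joint torque online."""
    def __init__(self, centers: np.ndarray, width: float = 1.0, lr: float = 0.05):
        self.centers = centers            # (n_nodes, state_dim) RBF centers
        self.width = width                # shared Gaussian width
        self.lr = lr                      # adaptation gain (assumed)
        self.w = np.zeros(len(centers))   # output weights, adapted online

    def _phi(self, state: np.ndarray) -> np.ndarray:
        d2 = ((self.centers - state) ** 2).sum(axis=1)
        return np.exp(-d2 / (2.0 * self.width ** 2))   # Gaussian activations

    def estimate(self, state: np.ndarray) -> float:
        return float(self.w @ self._phi(state))        # compensating torque

    def adapt(self, state: np.ndarray, sliding_var: float) -> None:
        # Weight update driven by the sliding variable (adaptive-law sketch)
        self.w += self.lr * sliding_var * self._phi(state)

comp = RBFCompensator(centers=np.random.uniform(-1, 1, (9, 2)))
state = np.array([0.1, -0.2])             # e.g., joint tracking error and its rate
tau_hat = comp.estimate(state)
comp.adapt(state, sliding_var=0.05)
```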
  • LOW-CARBON TRANSPORTATION & GREEN DEVELOPMENT
    SONG Yuanyuan, YAO Enjian, XU Honglei, HUANG Quansheng, WU Rui, WANG Renjie
    Journal of Tsinghua University(Science and Technology). 2023, 63(11): 1707-1718. https://doi.org/10.16511/j.cnki.qhdxxb.2023.26.021
    Abstract (1171) PDF (485) HTML (0) CSCD(3)
    [Significance] Climate change is the primary challenge that intensely affects sustainable human development. The transport sector has been one of the major sources of carbon emissions and is considerably affected by climate change. Because of the growth of China's economy and total transport demand, transport-related carbon emissions are also gradually increasing. Moreover, frequent complex and extreme climate events with clear regional differences have negatively affected the construction, maintenance, and operation of the transport infrastructure. Therefore, China's transport sector needs to reduce carbon emissions for green and low-carbon developments and improve its adaptability and resistance to various adverse climatic conditions. However, China's transport sector still faces many challenges in mitigating and adapting to climate change, and its policy tools, measures, and basic capacity to cope with climate change need to be enhanced. Therefore, transport sector-related strategies and routes to adapt to climate change need to be explored. [Progress] First, the policies and measures implemented in different countries to address climate change were introduced from the perspectives of mitigation and adaptation. Second, the advancements made by China's transport sector in mitigating climate change were summarized from the perspectives of the construction of green and low-carbon transport infrastructure, optimization of the transport structures, and promotions and applications of new and clean energy. The measures implemented to adapt to climate change in China's transport sector were summarized from the perspectives of improving the adaptability of the transport infrastructure, strengthening the monitoring and warning systems of climate change, and managing risk. Third, the interactions between each subfield and sublink of the transport system and climate change, as well as the main measures implemented to mitigate and adapt to climate change in the transport sector, were analyzed. Finally, key areas, strategies, and methods to mitigate and adapt to climate change were proposed. [Conclusions and Prospects] Analysis results are provided and discussed. First, the current plan for China's transport response to climate change needs improvement. The capacity to respond to climate change has not been planned at the subfield and sublink level of the transport system. For mitigating climate change, carbon emissions reduction measures, such as the promotion of new energy vehicles and ships, as well as the optimization of the transport structure, are inadequate. Furthermore, the assessment of the effects of the transport infrastructure on climate change is still in its infancy. Second, the direction of the transport system's development should be combined with the strategic requirements of mitigation and adaptation to climate change. Third, in the transport field, the infrastructure, equipment, and transport structure should be improved; moreover, the infrastructure should be adapted to climate change, and emergency support of transport equipment and transportation organization in extreme weather should be optimized to enhance the capability to adapt to climate change. Finally, the following measures are proposed: Mitigation and adaptation to climate change should be jointly and appropriately implemented to comprehensively address climate change in the transport sector. Greenhouse gases and air pollutants should be jointly controlled to realize the goal of “double carbon”. 
Adaptation to climate change should be applied in conjunction with ecological protection and restoration to strengthen the capacity of the transport sector to adapt to climate change.
  • Review
    CHEN Yongcan, CHEN Jiajie, WANG Haoran, GONG Yu, FENG Yue, LIU Zhaowei, QI Ningchun, LIU Mei, LI Yonglong, XIE Hui
    Journal of Tsinghua University(Science and Technology). 2023, 63(7): 1015-1031. https://doi.org/10.16511/j.cnki.qhdxxb.2023.26.015
    Abstract (1155) PDF (484) HTML (1) CSCD(1)
    [Significance] Headrace tunnels are key structures of major projects characterized by long tunnel lines, large tunnel diameters, high water pressure, and complex surrounding rock geology. Typical defects, such as cracks, collapse, and exposed reinforcement, occur over long years of operation, and if they are not addressed, the safe operation of the project will be seriously affected. Long cycles, high safety risks, high missed-detection rates, and insufficient information are all issues with traditional manual inspection. Given the urgent need for regular inspection of large-diameter, super-long headrace tunnels in super-large water conservancy and hydropower projects, this study addressed key scientific issues such as the adaptability of robots to underwater environment tasks, active detection of apparent defects in super-long headrace tunnels, and safety risk assessment of tunnel structures based on robot inspection data. The key technological breakthroughs include sub-parent robot cooperation in complex underwater environments, fine operation of the onboard manipulator, ultra-long-distance underwater high-voltage power supply, safe release and recovery of the umbilical cable, ultra-long-distance human-machine cooperative control, adaptation of underwater robots to special environments, and active defect detection and identification based on multi-sensor fusion. Structural safety classification, risk analysis and evaluation, and virtual drills were also carried out. The developed underwater robot inspection system was successfully applied to large-diameter, long headrace tunnels for comprehensive verification. [Progress] The application performance of underwater robots in special environments, including turbid water, high water pressure, adhesion and siltation, and locally inaccessible areas, has improved through breakthroughs in key technologies such as remote power supply, cooperative operation, intelligent patrol inspection, defect identification, and safety assessment. The safety classification and risk assessment of the headrace tunnel structure are completed through the development of the multi-function "sub-parent" underwater robot system, realizing whole-process integration of patrol, detection, control, diagnosis, and application for the underwater robot. The system has been demonstrated and verified in the eastern route of the South-to-North Water Transfer Project, the Jinping Ⅱ Hydropower Station, and other major national projects, improving the intelligence level of headrace tunnel inspection in large water conservancy and hydropower projects and supporting their safe operation. [Conclusions and Prospects] The research findings can significantly improve the accuracy of headrace tunnel inspection, reduce inspection costs, and raise the assurance rate of safe operation of large water conservancy and hydropower projects; promote the interdisciplinary integration of artificial intelligence and water conservancy disciplines to form interdisciplinary advantages; promote the application of robots in special environments, especially headrace tunnel inspection, and guide the development of such robots; and promote the application of artificial intelligence in the intelligent management of water conservancy projects, improving the level of technology and equipment in relevant fields in China and cultivating versatile talents, with significant social, economic, and scientific value.
  • Editorial
    Journal of Tsinghua University(Science and Technology). 2023, 63(7): 1013-1013.
    Abstract (1131) PDF (474) HTML (3)
  • MECHANICAL ENGINEERING
    ZHOU Kai, ZHANG Ruizhe, YE Kuan, LI Hongda, WANG Zhe, HUANG Songling
    Journal of Tsinghua University(Science and Technology). 2022, 62(12): 2013-2020. https://doi.org/10.16511/j.cnki.qhdxxb.2022.25.032
    Abstract (1130) PDF (469) HTML (0) CSCD(4)
    The grounding grid plays a vital role in ensuring reliable power system operation. However, the buried environment around the grounding grid can cause grounding device defects, so the grounding grid must be periodically tested. This paper presents an SH guided wave detection method, based on linear frequency modulation excitation and synchrosqueezed wavelet transforms, for the flat steel of power system grounding grids. The system uses a permanent magnet array SH guided wave transducer that is simple and well suited to flat steel structures. The transducer is excited with a linear frequency modulation signal, and synchrosqueezed wavelet transforms are used to analyze the received signal. Identifying the overlapping guided wave signals in the time-frequency plane effectively distinguishes various defects from end faces. Signals from finite element simulations and experiments were then used to evaluate the signal analysis method. The calculated distance errors were all within 3%, showing that the method can accurately extract guided wave travel times and locate defects. Comparisons with short-time Fourier transforms and Wigner distributions show the advantage of the time-frequency aggregation of the synchrosqueezed wavelet transforms.
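    A simplified stand-in for the excitation and travel-time steps above: generate a linear frequency modulation (chirp) pulse with SciPy and estimate an echo delay by cross-correlation. The sampling rate, sweep band, and synthetic echo are illustrative assumptions, and cross-correlation here replaces the paper's synchrosqueezed wavelet analysis.

```python
# Hypothetical chirp excitation and delay estimation; all parameters assumed.
import numpy as np
from scipy.signal import chirp, correlate

fs = 1_000_000                                   # 1 MHz sampling (assumed)
t = np.arange(0, 200e-6, 1 / fs)                 # 200 us excitation window
excite = chirp(t, f0=100e3, t1=t[-1], f1=300e3)  # 100 -> 300 kHz LFM sweep

delay_samples = 350                              # synthetic echo delay
echo = np.zeros(2048)
echo[delay_samples:delay_samples + len(excite)] += 0.3 * excite

xcorr = correlate(echo, excite, mode="valid")    # pulse-compression step
t_arrival = np.argmax(np.abs(xcorr)) / fs        # estimated travel time (s)
# distance = group_velocity * t_arrival / 2 would then locate the defect
print(t_arrival)
```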
  • LOW-CARBON TRANSPORTATION & GREEN DEVELOPMENT
    HUANG Ailing, WANG Zijian, ZHANG Zhe, LI Mingjie, SONG Yue
    Journal of Tsinghua University(Science and Technology). 2023, 63(11): 1729-1740. https://doi.org/10.16511/j.cnki.qhdxxb.2023.26.034
    Abstract (1112) PDF (452) HTML (0)
    [Objective] In airport ground-transport systems, matching the evacuation demand of passengers with the capacity of multimodal transport vehicles is operationally crucial. Numerous studies have investigated single-mode transport capacity allocation; however, research on multimodal allocation is scarce. [Methods] To mitigate the difficulty of exactly matching the evacuation demand of passengers with the capacity of multimodal transport, a bi-level programming model for multimodal transport resource allocation is proposed based on an analysis of the interaction between capacity allocation and passenger travel choice. A utility function of multiple travel modes, including airport buses, metro, taxis, and private cars, is formulated with four features: travel time, travel cost, punctuality, and comfort. The upper-level objective is to minimize the total enterprise-operation cost, passenger-waiting cost, and carbon emission cost by optimizing the headway of public transit and the taxi arrival rate, subject to the capacity of each transit mode, the range of each public-transit headway determined by fixed equipment, and the range of taxi arrival rates determined by the capacity of the boarding location. Based on the capacity allocation scheme output by the upper level, the lower-level objective is to assign passenger flow to the travel modes according to a stochastic user equilibrium logit model with the utility function. Furthermore, an improved genetic algorithm combined with the method of successive averages (MSA) is designed to solve the proposed bi-level programming model. To improve solving efficiency, a pre-search mechanism is proposed in which infeasible solutions are filtered out using a low-precision MSA, reducing the computational cost of repeatedly calling the lower-level model. [Results] Beijing Daxing International Airport was considered as a case study to illustrate the efficiency and effectiveness of the proposed bi-level programming model in optimizing transport capacity allocation in airport ground-transport centers. The capacity allocation scheme obtained via the proposed model reduced the average passenger-waiting time and the total carbon emissions of the system by 14.08% and 6.21%, respectively, while increasing the operation cost by only 1.32%. Moreover, the optimized scheme shifted 6.7% of passengers from taxis and private cars to the more environmentally friendly buses and metro. The proposed algorithm could efficiently solve the bi-level model; with the pre-search mechanism, the scheme generation time was 217.6 s, meeting practical demands within acceptable time. [Conclusions] Results show that the optimized scheme obtained from the bi-level model and algorithm is considerably better than the original one. The proposed scheme reduces passenger-waiting time and the carbon emissions of the multimodal transport system at negligible cost. Using the optimized scheme, the operators of airport ground-transport centers can coordinate the capacities of landside transport modes and guide passengers reasonably, reducing operation costs, improving the airport landside traffic structure, and encouraging green, low-carbon travel.
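    The lower-level assignment described above can be sketched as a logit split over mode utilities combined with method-of-successive-averages updates. The utility values, congestion feedback, and iteration count below are illustrative assumptions rather than the paper's calibrated model.

```python
# Hypothetical logit stochastic-user-equilibrium sketch solved by MSA;
# utilities and the crowding penalty are toy values.
import numpy as np

def logit_split(utilities: np.ndarray, theta: float = 1.0) -> np.ndarray:
    e = np.exp(theta * (utilities - utilities.max()))   # numerically stable logit
    return e / e.sum()

def msa_assignment(base_util: np.ndarray, crowd_penalty: float = 0.8,
                   iters: int = 50) -> np.ndarray:
    share = np.full(len(base_util), 1.0 / len(base_util))
    for n in range(1, iters + 1):
        util = base_util - crowd_penalty * share        # congestion feedback
        target = logit_split(util)
        share += (target - share) / n                   # MSA averaging step
    return share

# Modes: airport bus, metro, taxi, private car (toy utilities)
print(msa_assignment(np.array([-1.2, -0.8, -1.5, -1.4])))
```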
  • PUBLIC SAFETY
    HU Jun, SHU Xueming, XIE Xuecai, YAN Jun, ZHANG Lei
    Journal of Tsinghua University(Science and Technology). 2023, 63(5): 775-782. https://doi.org/10.16511/j.cnki.qhdxxb.2022.21.042
    Abstract (1099) PDF (446) HTML (3)
    Fire is a serious threat to public life and property safety. Insurance is an effective means of dealing with fire risk, and accurately determining the premium rate of buildings according to fire risk is a concern of the insurance industry. Currently, the premium rate is mainly based on fire frequency and loss expectations from insurance statistics, with adjustments based on building risk assessment results. Adjustment schemes can be divided into two types. One is the rate floating model, which sets a floating range for the premium rate based on the risk level, but the floating proportion is fairly subjective. The other is the rate calculation model, which establishes a quantitative risk assessment method to calculate a specific premium rate. However, current risk assessment methods have difficulty comprehensively reflecting the hazards in buildings and the uncertainty of losses, so the resulting premium rates are relatively rough. A quantitative model for building fire insurance premium rates is constructed in this paper. First, the Bayesian network method is used to calculate the building fire probability considering the influences of various risk sources. The specific factors affecting ignition were comprehensively analyzed from the human, material, and environmental aspects, and 14 factors were selected to construct the Bayesian network of building ignition, from which the probability of building fire can be calculated quantitatively and objectively. Second, Latin hypercube sampling (LHS) is used to stratify the burn rate in the different fire stages, from ignition, growth, and development to spread, with given distributions to reflect the staged and random characteristics of fire losses. Thus, the final loss distribution, including the expected value, standard deviation, probability density function, and cumulative probability density function, can be acquired accurately. A quantitative, dynamic risk assessment of building fire is thereby realized, and the rate calculation model is used to compute the rate based on the result. Fifteen households were selected to calculate their premium rates based on the quantitative assessment of building fire risk, including ignition probability and loss distribution, and the premium rates were compared with rates in the insurance market. Results show that the proposed premium rate determination model can effectively reflect differentiated levels of fire risk and ensure the fairness of insurance. The premise of the model is that the insurance company covers all fire risks of the building, disregarding deductibles arising from the insured's retention of part of the fire risk. In addition, owing to the lack of domestic data, foreign statistics were adopted, and normal loss distributions at each stage after ignition were assumed. Deductibles can be considered in further research on premium rate models, and more accurate data can be acquired to obtain results consistent with building fire risk levels in China.
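    The LHS-driven loss simulation above can be sketched with SciPy's Latin hypercube sampler feeding stage-wise loss distributions. All distribution parameters, stage probabilities, and the fire probability below are illustrative assumptions, not the paper's calibrated inputs.

```python
# Hypothetical premium-rate sketch: LHS samples stage-wise loss ratios, and
# the expected loss times the fire probability gives a pure premium rate.
import numpy as np
from scipy.stats import norm, qmc

stages = {"ignition": (0.01, 0.005),      # (mean, std) of loss ratio per stage
          "growth": (0.08, 0.03),
          "development": (0.30, 0.10),
          "spread": (0.70, 0.15)}
stage_prob = np.array([0.55, 0.25, 0.15, 0.05])   # P(fire stops at each stage)

sampler = qmc.LatinHypercube(d=len(stages), seed=1)
u = sampler.random(n=10_000)                      # stratified uniform samples
ratios = np.column_stack([
    norm(m, s).ppf(u[:, i]).clip(0, 1)            # stage loss ratios in [0, 1]
    for i, (m, s) in enumerate(stages.values())])

loss_ratio = (ratios * stage_prob).sum(axis=1)    # probability-weighted loss per draw
p_fire = 1e-3                                     # Bayesian-network output (assumed)
pure_premium_rate = p_fire * loss_ratio.mean()    # expected annual loss / sum insured
print(pure_premium_rate)
```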
  • CONSTRUCTION MANAGEMENT
    ZHANG Hong, BI Zhijun, YU Anmiao
    Journal of Tsinghua University(Science and Technology). 2023, 63(2): 153-159. https://doi.org/10.16511/j.cnki.qhdxxb.2022.22.052
    Abstract (1077) PDF (444)
    [Objective] In 2003, the State Council of China identified the real estate industry as a pillar of the national economy. However, with economic development and the evolution of the nation's industrial structure, the status of real estate as a pillar industry has repeatedly been questioned. Researchers have conducted extensive studies on the positioning of the real estate industry in China but have yet to reach a consensus; different understandings of the industrial categories and the selection of different discrimination indicators are the primary reasons for these conflicting results. Referencing previous research, this study combines classical industrial economics theory with the input-output method to identify and compare the industrial positioning of the real estate industry, enabling a reasonable judgment on its status and role in the national economy. Considering the current economic transformation, determining the positioning of the real estate industry is imperative for formulating effective macro-control policies and for developing the industry itself. [Methods] This study employs the input-output method. Although the literature applying the input-output method to the real estate industry in China has grown recently, few systematic studies focus on the question of industrial positioning. This study first determines the features and characteristics of the various categories of industrial positioning. Subsequently, it combines these findings with the input-output method and establishes a discriminant matrix from three perspectives: industrial correlation, industrial spillover effect, and economic contribution; the required coefficients are determined using merged input-output flow tables from 2007 to 2018. Finally, according to the discriminant matrix and each input-output coefficient, the industrial positioning of the real estate industry is determined and compared with the construction, financial, and leasing and business service industries. [Results] The results demonstrate that from 2007 to 2018, 1) the real estate industry had widespread but weak industrial linkages; its ability to drive other industries improved continuously over time, but the overall level remained low; 2) the contribution of the real estate industry to the economy and employment was at a middle or lower level among all industries; and 3) the real estate industry was closely related to the construction, financial, and leasing and business service industries, and its functional positioning was most similar to that of the financial industry. [Conclusions] Considering these results, this study asserts that the real estate industry in China is a basic industry in the national economy and does not rise to the level of a pillar, prime mover, or leading industry. This study suggests that when formulating regulatory policies for the real estate industry, its nature as a basic industry and its extensive industrial links should be fully considered to prevent excessive overlap between the functions of the real estate and financial industries, thereby promoting the long-term, healthy development of the real estate industry.
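    To make the discriminant quantities concrete, the sketch below computes the standard influence (backward-linkage) and sensitivity (forward-linkage) coefficients from a toy three-sector input-output table; the flow values are fabricated for illustration and are not the paper's 2007-2018 data.

        import numpy as np

        # Toy flow table: Z[i, j] = intermediate input from sector i to sector j;
        # x = total output of each sector. All numbers are fabricated.
        Z = np.array([[20.0, 35.0, 10.0],
                      [10.0, 20.0, 30.0],
                      [15.0, 10.0, 20.0]])
        x = np.array([100.0, 120.0, 90.0])

        A = Z / x                          # direct requirement coefficients a_ij = z_ij / x_j
        L = np.linalg.inv(np.eye(3) - A)   # Leontief inverse (I - A)^-1

        col = L.sum(axis=0)                # column sums: total backward linkage
        row = L.sum(axis=1)                # row sums: total forward linkage
        influence   = col / col.mean()     # influence (pull) coefficient per sector
        sensitivity = row / row.mean()     # sensitivity (push) coefficient per sector

        # Roughly: both coefficients > 1 suggests a pillar/leading candidate;
        # both < 1 with broad links suggests a basic industry.
        print(influence, sensitivity)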
  • BIG DATA
    YU Jiayin, HE Yulin, CUI Laizhong, HUANG Zhexue
    Journal of Tsinghua University(Science and Technology). 2023, 63(5): 740-753. https://doi.org/10.16511/j.cnki.qhdxxb.2023.25.003
    Abstract (1063) PDF (438) HTML (3) CSCD(3)
    [Objective] As a significant research branch in the field of data mining, missing value imputation (MVI) aims to provide high-quality data support for the training of machine learning algorithms. However, MVI results for large-scale data sets are not ideal in terms of restoring the data distribution or improving the accuracy of subsequent analyses. To improve the performance of existing MVI algorithms, we propose a distribution consistency-based MVI (DC-MVI) algorithm that attempts to restore the original data structure by imputing the missing values of large-scale data sets. [Methods] First, the DC-MVI algorithm develops an objective function to determine the optimal imputation values based on the principle of probability distribution consistency. Second, the data set is preprocessed by randomly initializing the missing values and normalizing the data, and a feasible update rule for the missing values is derived to obtain imputation values with the closest variance and the greatest consistency with the complete original values. Next, in a distributed environment, the large-scale data set is divided into multiple groups of random sample partition (RSP) data blocks with the same distribution as the entire data set, taking its statistical properties into account. Finally, the DC-MVI algorithm is trained in parallel to obtain the imputation values corresponding to the missing values of the large-scale data set while preserving distribution consistency with the non-missing values. Rationality experiments verify the convergence of the objective function and the contribution of DC-MVI to distribution consistency. In addition, effectiveness experiments assess the performance of DC-MVI and eight other MVI algorithms (mean, KNN, MICE, RF, EM, SOFT, GAIN, and MIDA) through three indicators: distribution consistency, time complexity, and classification accuracy. [Results] The experimental results on seven selected large-scale data sets showed that: 1) the objective function of the DC-MVI method was effective, and the missing value update rule was feasible, allowing the imputation values to remain stable throughout the adjustment process; 2) the DC-MVI algorithm obtained the smallest maximum mean discrepancy and Jensen-Shannon divergence on all data sets, showing that the proposed method had a probability distribution more consistent with the complete original values at the given significance level; 3) the running time of the DC-MVI algorithm tended to be stable in the time comparison experiment, whereas the running time of other state-of-the-art MVI methods increased linearly with the data volume; and 4) the DC-MVI approach produced imputation values more consistent with the original data set than existing methods, which is beneficial for subsequent data mining analysis. [Conclusions] Considering the peculiarities and limitations of large-scale data with missing values, this paper incorporates RSP into the imputation algorithm and derives update rules for the imputation values to restore the data distribution, confirming the effectiveness and practical performance of DC-MVI in large-scale data imputation in terms of preserving distribution consistency and increasing imputation quality. The method proposed in this paper achieves the desired results and represents a viable solution to the problem of large-scale data imputation.
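    The two distribution-consistency indicators used in the evaluation can be sketched generically as follows; this implements maximum mean discrepancy and Jensen-Shannon divergence on synthetic stand-in data and is not the DC-MVI update rule itself.

        import numpy as np
        from scipy.spatial.distance import cdist
        from scipy.stats import entropy

        def mmd_rbf(X, Y, gamma=1.0):
            """Squared maximum mean discrepancy with an RBF kernel."""
            kxx = np.exp(-gamma * cdist(X, X, "sqeuclidean")).mean()
            kyy = np.exp(-gamma * cdist(Y, Y, "sqeuclidean")).mean()
            kxy = np.exp(-gamma * cdist(X, Y, "sqeuclidean")).mean()
            return kxx + kyy - 2.0 * kxy

        def js_divergence(x, y, bins=30):
            """Jensen-Shannon divergence between two 1-D samples via histograms."""
            lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
            p, _ = np.histogram(x, bins=bins, range=(lo, hi), density=True)
            q, _ = np.histogram(y, bins=bins, range=(lo, hi), density=True)
            p, q = p + 1e-12, q + 1e-12
            m = 0.5 * (p + q)
            return 0.5 * entropy(p, m) + 0.5 * entropy(q, m)

        rng = np.random.default_rng(0)
        original = rng.normal(0.0, 1.0, size=(500, 4))   # stand-in for complete values
        imputed  = rng.normal(0.1, 1.1, size=(500, 4))   # stand-in for imputed values
        print(mmd_rbf(original, imputed), js_divergence(original[:, 0], imputed[:, 0]))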
  • BIG DATA
    HU Minghao, WANG Fang, XU Xiantao, LUO Wei, LIU Xiaopeng, LUO Zhunchen, TAN Yushan
    Journal of Tsinghua University(Science and Technology). 2023, 63(9): 1309-1316. https://doi.org/10.16511/j.cnki.qhdxxb.2023.21.010
    Abstract (1027) PDF (418)
    [Objective] The abundant information resources available on the internet about defense technology are vital data sources for obtaining high-value military intelligence. Open information extraction in the defense technology field aims to extract structured triplets containing a subject, predicate, object, and other arguments from the massive amount of information available on the internet. This technology has important implications for ontology induction and the construction of knowledge graphs in the defense technology domain. However, while information extraction in the general domain yields good results, open information extraction in the defense technology domain faces several challenges, such as a lack of domain-annotated data, an inability to handle overlapping arguments, and difficulty recognizing long entities. [Methods] In this paper, an annotation strategy based on entity boundaries is proposed, and an annotated dataset in the defense technology field was constructed with the help of domain experts. Furthermore, a two-stage open information extraction method is proposed for the defense technology field that uses a sequence labeling algorithm based on a pretrained language model to extract predicates and a multihead attention mechanism to predict argument boundaries. In the first stage, the input sentence was converted into the sequence <[CLS], input sentence, [SEP]>, which was encoded by a pretrained language model to obtain its hidden-state representation. Based on this representation, a conditional random field (CRF) layer predicted the positions of the predicates, i.e., the BIO labels of the words. In the second stage, each predicate predicted in the first stage was concatenated with the original sentence to form the sequence <[CLS], predicate, [SEP], input sentence, [SEP]>, which was encoded by the pretrained language model to obtain its hidden-state representation. This representation was then fed to a multihead pointer network to predict the positions of the arguments, and the cross-entropy loss was computed between the predicted and actual positions. Finally, the predicates and arguments predicted by the two extraction models were combined into complete triplets. [Results] Extensive experiments on the self-built annotated dataset in the defense technology field reveal the following. (1) In predicate extraction, our method achieved a 3.92% improvement in F1 over LSTM methods and more than a 10% improvement over syntactic analysis methods. (2) In argument extraction, our method achieved a considerable F1 improvement of more than 16% over LSTM methods and about 11% over the BERT+CRF method. [Conclusions] The proposed two-stage open information extraction method can handle overlapping arguments and the extraction of long-span entities, addressing the shortcomings of existing open information extraction methods. Extensive experimental analysis on the self-built annotated dataset proves the effectiveness of the proposed method.
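    The glue logic between the two stages can be sketched in a few lines: decode predicate spans from BIO labels, then build the stage-two input that pairs each predicate with the sentence. The sentence and labels below are fabricated; in the paper, the labels come from the pretrained encoder with a CRF layer, and the argument spans from the multihead pointer network.

        # Minimal sketch of the pipeline glue (illustrative inputs, not model output).
        def decode_bio(tokens, labels):
            """Collect spans labeled B-PRED/I-PRED into predicate strings."""
            spans, current = [], []
            for tok, lab in zip(tokens, labels):
                if lab == "B-PRED":
                    if current:
                        spans.append(" ".join(current))
                    current = [tok]
                elif lab == "I-PRED" and current:
                    current.append(tok)
                else:
                    if current:
                        spans.append(" ".join(current))
                    current = []
            if current:
                spans.append(" ".join(current))
            return spans

        tokens = "the missile uses an active radar seeker".split()
        labels = ["O", "O", "B-PRED", "O", "O", "O", "O"]

        for predicate in decode_bio(tokens, labels):
            # Stage two: encode "<[CLS], predicate, [SEP], sentence, [SEP]>" and let
            # the multihead pointer network predict argument start/end positions.
            stage2_input = f"[CLS] {predicate} [SEP] {' '.join(tokens)} [SEP]"
            print(stage2_input)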
  • PUBLIC SAFETY
    DENG Qing, ZHANG Bo, LI Yihao, ZHOU Liang, ZHOU Zhengqing, JIANG Huiling, GAO Yang
    Journal of Tsinghua University(Science and Technology). 2023, 63(1): 146-152. https://doi.org/10.16511/j.cnki.qhdxxb.2022.21.029
    Abstract (1018) PDF (412) HTML (1)
    Accurate crowd counts during evacuations can support the real-time optimization of evacuation routes and the scheduling of emergency resources. Building on analyses of existing methods, this study estimates the number of occupants in an evacuation passageway with a cascaded convolutional neural network (CNN) crowd counting model that combines a density classification level with personnel density estimation. The cascade avoids the loss of image information and overfitting during convolution. The model estimates the real-time crowd count in crowded scenes by learning the relationship between the number and positions of occupants in the image from the image features. The model was implemented on the PyTorch platform, with an identification accuracy of 84.2% on the validation set (612 photos) and 83.6% on the test set (182 photos), showing that this method can accurately predict the number of evacuees in a monitoring image.
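    As an illustration of the cascade idea only, the sketch below pairs a coarse density-level classifier with a count regressor on a shared backbone; the architecture, layer sizes, and number of levels are hypothetical and far smaller than any deployable model.

        import torch
        import torch.nn as nn

        class CascadedCounter(nn.Module):
            """Toy cascade: a shared CNN backbone feeds (a) a density-level
            classifier and (b) a count regressor conditioned on the predicted
            level, loosely mirroring the classification-plus-density idea."""
            def __init__(self, n_levels=4):
                super().__init__()
                self.backbone = nn.Sequential(
                    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
                self.classifier = nn.Linear(32, n_levels)       # coarse density level
                self.regressor = nn.Linear(32 + n_levels, 1)    # refined head count

            def forward(self, x):
                feats = self.backbone(x)
                level_logits = self.classifier(feats)
                level_probs = level_logits.softmax(dim=1)
                count = self.regressor(torch.cat([feats, level_probs], dim=1))
                return level_logits, count.squeeze(1)

        model = CascadedCounter()
        images = torch.randn(2, 3, 128, 128)       # two fake monitoring frames
        level_logits, counts = model(images)
        print(level_logits.shape, counts.shape)    # (2, 4), (2,)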
  • COMPUTER SCIENCE AND TECHNOLOGY
    ZHANG Yang, JIANG Minghu
    Journal of Tsinghua University(Science and Technology). 2023, 63(9): 1390-1398. https://doi.org/10.16511/j.cnki.qhdxxb.2023.21.013
    Abstract (1011) PDF (462)
    [Objective] Authorship identification infers the author of a text of unknown attribution by analyzing its stylometry or writing style. Traditional research on authorship identification is generally based on empirical knowledge from literature or linguistics, whereas modern research mostly relies on mathematical methods to quantify an author's writing style. Researchers have proposed various feature combinations and neural network models. Some feature combinations achieve good results with traditional machine learning classifiers, while some neural network models autonomously learn the relationship between the input text and the corresponding author, extracting text features implicitly. However, current research mostly focuses on character and lexical features, and the exploration of syntactic features is limited. How to use the dependency relationships between words in a sentence and how to combine syntactic features with neural networks remain unclear. This paper proposes an authorship identification method based on syntax tree node embedding, which introduces syntactic features into a deep learning model. [Methods] We believe that an author's writing style is mainly reflected in the way they choose words and construct sentences. Therefore, this paper develops the authorship identification model from the perspectives of words and sentences, using the attention mechanism to construct sentence-level features. First, an embedding representation of the syntax tree node is proposed: each node is expressed as the sum of the embeddings corresponding to all its dependency arcs. Thus, information on sentence structure and the associations between words is introduced into the neural network model. Then, a syntactic attention network that uses different embedding methods to vectorize text features, such as dependencies, part-of-speech tags, and words, is constructed, and a syntax-aware vector is obtained through this network. Furthermore, a sentence attention network extracts features from the syntax-aware vector to distinguish between authors, generating the sentence representation. Finally, the result is obtained by the classifier and evaluated by accuracy. [Results] Experiments on CCAT10, CCAT50, IMDb62, and a Chinese novel dataset show that accuracy tends to decline as the number of authors increases, although at some data points accuracy increased rather than decreased, indicating that the model's ability to capture the writing styles of different authors varies considerably. Furthermore, when varying the number of authors on the IMDb dataset, the accuracy of our model is slightly lower than that of the BertAA model with 5 authors but higher than BertAA with 10, 25, and 50 authors. Additionally, compared with other models on the CCAT10, CCAT50, and IMDb62 datasets, our model ranks second or third. [Conclusions] The attention mechanism demonstrated its efficiency in text feature mining and can fully capture an author's style as reflected in different parts of a document. The integration of lexical and syntactic features through the attention mechanism enhances the overall performance of the model, which performs well on both Chinese and English datasets. Notably, the introduction of dependency syntax makes the model more interpretable: it can explain the text styles of different authors at the levels of word selection and sentence construction.
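    The node representation described above can be sketched directly: a syntax tree node is embedded as the sum of the embeddings of its incident dependency arcs. The relation inventory and embedding dimension below are illustrative assumptions, not the paper's configuration.

        import torch
        import torch.nn as nn

        # Hypothetical inventory of dependency relations; indices are illustrative.
        DEPRELS = {"nsubj": 0, "dobj": 1, "amod": 2, "det": 3, "root": 4}

        arc_embed = nn.Embedding(len(DEPRELS), 16)  # one vector per dependency relation

        def node_embedding(arcs):
            """Represent a syntax-tree node as the sum of the embeddings of all
            dependency arcs incident to it."""
            idx = torch.tensor([DEPRELS[a] for a in arcs])
            return arc_embed(idx).sum(dim=0)

        # A node heading an 'nsubj' and a 'dobj' arc while hanging off the root:
        vec = node_embedding(["nsubj", "dobj", "root"])
        print(vec.shape)  # torch.Size([16])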
  • ECONOMIC AND PUBLIC MANAGEMENT
    ZHU Wuxiang, LIAO Jingqiu, ZHAN Ziliang, TAN Zhijia
    Journal of Tsinghua University(Science and Technology). 2023, 63(9): 1467-1482. https://doi.org/10.16511/j.cnki.qhdxxb.2023.21.008
    Abstract (1003) PDF (415)
    [Significance] Because of multiple factors, such as the deleveraging policy, slowing economic growth, trade friction, and the COVID-19 pandemic, debt defaults are occurring with increasing frequency, which could trigger risk contagion and even lead to systemic financial risks. However, some facts indicate that existing financial distress prediction models are not sufficiently effective; for example, the nonperforming loan ratio of commercial banks shows a rising trend, and rating downgrades usually lag considerably. Thus, government departments and market entities have a strong demand for improving and optimizing financial distress prediction models, which is necessary for risk identification and early warning. An effective prediction model can provide early warnings of investment risks and help financial institutions and investors reduce losses, assist regulators in establishing a multichannel default disposal mechanism, and improve the credit environment of the capital market. [Progress] Based on an extensive literature search covering top journals and conferences from 1932 to 2020, this paper reviews four topics, namely, the definition of financial distress, statistical models, variable selection, and methods for evaluating model effectiveness, and then summarizes three research anomalies: 1) Existing financial distress prediction models often focus on predicting deep crises, such as insolvency and bankruptcy, which may lead to delayed warnings and market panic. 2) Innovation in financial distress prediction research focuses on applying new computer algorithms and statistical models and on considering nonfinancial information. One confusing fact is that the judgment of financial distress depends on the selected model, indicators, and sample set rather than on the fundamentals of the enterprise; thus, different prediction models may produce contradictory judgments for the same enterprise. 3) Identifying financial distress relies on comparing an enterprise's future capital cash flow with its rigid payment obligations; however, most existing financial distress prediction models apply a multivariate weighting method to common historical financial indicators. [Conclusions and Prospects] This paper proposes a cross-model evaluation framework to compare the effectiveness of financial distress prediction models and provides improvement suggestions summarized as “one principle, three directions.” The principle is that, to accurately assess and manage the absolute risk of financial distress, research on financial distress prediction should return to financial principles and pay attention to future capital cash flow. The three directions requiring attention are: 1) early financial distress warnings, such as liquidity crisis warnings; 2) steady repayment sources, including operating cash inflows, reliable asset disposal earnings, and refinancing, rather than reliance on total assets, current assets, and other balance sheet indicators; and 3) financing contracts and full-scenario analyses of future capital cash outflows, rather than just the current ratio, quick ratio, asset-liability ratio, and other liability indicators. In the future, with the development of big data and improvements in information transmission efficiency, corporate information disclosure will be considerably enhanced, allowing more accurate prediction of cash flow and repayment. A prediction model assessing absolute financial distress risk has greater potential.
  • BIG DATA
    YANG Bo, QIU Lei, WU Shu
    Journal of Tsinghua University(Science and Technology). 2023, 63(9): 1339-1349. https://doi.org/10.16511/j.cnki.qhdxxb.2023.22.030
    Abstract (992) PDF (409) CSCD(1)
    [Objective] Collaborative filtering algorithms are widely used in recommendation systems to recommend items of interest to a user based on the historical behavior of similar users. Recently, collaborative filtering based on graph neural networks has become a hot research topic. A graph-based collaborative filtering model usually encodes the interactions between users and items as a bipartite graph, and modeling the high-order connectivity of the bipartite graph captures the hidden relationships between users and items. However, the bipartite graph does not explicitly capture the similarity between users or between items, and its sparsity makes the model heavily dependent on high-order connectivity. [Methods] Herein, a collaborative filtering model based on a heterogeneous graph convolutional neural network is proposed that explicitly encodes the similarities between users and between items into the graph structure, so that the relationships among users and items are modeled as a heterogeneous graph. The heterogeneous graph structure allows these similarities to be captured directly, reducing the need for high-order connectivity modeling and alleviating the sparsity problem of the bipartite graph. [Results] We conducted experiments on four typical datasets and compared the results with four typical methods. Our model achieved better results than traditional collaborative filtering models and existing graph neural network models, and it remained superior across different edge types, similarity measures, and similarity thresholds. [Conclusions] Our model explicitly encodes the similarities between users and between items as edges in the heterogeneous graph, so the model can learn these similarities directly during training to obtain the user and item embeddings. The proposed model alleviates the sparsity and high-order connectivity modeling problems of bipartite graphs and can fully capture user-item interactions through low-order connectivity in the graph.
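    The graph construction step can be sketched as follows: similarity edges are added on top of the bipartite interactions, giving one heterogeneous adjacency matrix. The interaction matrix, similarity measure (cosine), and threshold are illustrative choices, not necessarily those of the paper.

        import numpy as np

        # Toy user-item interaction matrix (rows: users, columns: items).
        R = np.array([[1, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 0, 1, 1]], dtype=float)

        def cosine_sim(M):
            norms = np.linalg.norm(M, axis=1, keepdims=True) + 1e-12
            return (M / norms) @ (M / norms).T

        tau = 0.5                                        # similarity threshold (hypothetical)
        S_user = (cosine_sim(R)   >= tau).astype(float)  # user-user edges
        S_item = (cosine_sim(R.T) >= tau).astype(float)  # item-item edges
        np.fill_diagonal(S_user, 0); np.fill_diagonal(S_item, 0)

        # Heterogeneous adjacency: users and items in one graph, with interaction
        # edges (R) plus explicit similarity edges (S_user, S_item).
        A = np.block([[S_user, R],
                      [R.T, S_item]])
        print(A.shape)   # (n_users + n_items, n_users + n_items) = (7, 7)

    A graph convolution over A then propagates information along similarity edges in a single hop, which is exactly the low-order connectivity the model relies on.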
  • PROCESS SYSTEMS ENGINEERING
    CHENG Andi, LIU Shishuai, WU Xuemei, JIANG Xiaobin, HE Gaohong, WANG Fan, DU Guodong, XIAO Wu
    Journal of Tsinghua University(Science and Technology). 2023, 63(5): 704-713. https://doi.org/10.16511/j.cnki.qhdxxb.2022.21.040
    Abstract (989) PDF (407) HTML (0) CSCD(2)
    Helium is a strategic resource obtained mainly as a by-product of natural gas processing. The tail gas from the nitrogen rejection unit (NRU) of a liquefied natural gas (LNG) plant is a major helium-containing stream, with a helium mole fraction of about 1.0%-5.0%. The existing process uses cryogenic distillation to enrich it into crude helium (≥50.0%) and obtains high-purity helium (≥99.99%) through catalytic oxidation and pressure swing adsorption (PSA). However, helium enrichment based on cryogenic distillation requires severe operating conditions, with high pressure and low temperature, leading to complex operation, high energy consumption, and high equipment investment. Moreover, new impurities (e.g., O2 and H2O) are introduced during catalytic oxidation dehydrogenation, which significantly increases the load on the adsorption unit and wastes potential hydrogen resources. Therefore, this paper proposes a novel process for separating helium from NRU tail gas based on membrane separation coupled with an electrochemical hydrogen pump (EHP). The whole process comprises two parts: enrichment and purification. A two-stage membrane separation achieves efficient enrichment of low-content helium; because membranes involve no phase change, a small footprint, and simple operation, they save energy in the enrichment step. The crude helium is refined through PSA to remove N2 and CH4, and the EHP then separates hydrogen from helium, yielding high-purity hydrogen and helium without introducing impurities. Process modeling and data analysis were conducted in Aspen HYSYS. Because of the strong interactions between parameters in the two-stage membrane process, the response surface method (RSM) was used to optimize four key parameters: the two membrane areas and the two feed pressures. Since the EHP-based purification unit sits at the end of the process and does not interact with the upstream units, single-factor sensitivity analysis was used for its parameter optimization. The optimization results show that the helium mole purity and recovery rate of the crude helium reach 64.94% and 95.67% under the optimal operating conditions (membrane areas of 4759.5 m2 for M-101 and 435.3 m2 for M-102; feed pressures of 6010.3 and 4352.5 kPa, respectively). Furthermore, high-purity helium and hydrogen are obtained simultaneously through the EHP under the optimal parameters: the applied potential of the two-stage EHP is 1 V, and the MEA areas of the two stages are 39 and 17 m2, respectively. Economic evaluation shows that the helium production cost of the coupled process is 125.47 CNY/m3. Based on the economic evaluation data, the financial evaluation gives a dynamic payback period of 2.09 years and an internal rate of return of 79%. In summary, the proposed membrane-EHP helium separation process offers significant economic and social benefits and provides a feasible route for the independent industrial production of high-purity helium in China.
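    The RSM step can be illustrated generically: fit a quadratic response surface to simulated runs over two coded factors (standing in for two of the four optimized variables) and locate its stationary point. The data-generating function and noise below are fabricated, not HYSYS results.

        import numpy as np

        # Toy response-surface step: fit y = b0 + b1*x1 + b2*x2 + b3*x1^2
        # + b4*x2^2 + b5*x1*x2 to simulated runs; y stands in for helium recovery.
        rng = np.random.default_rng(1)
        x1 = rng.uniform(-1, 1, 30)    # coded factor levels from a design of experiments
        x2 = rng.uniform(-1, 1, 30)
        y = 95 - 3 * x1**2 - 2 * x2**2 + 1.5 * x1 * x2 + rng.normal(0, 0.2, 30)

        X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)

        # Stationary point of the fitted surface approximates the optimum:
        # solve [[2*b3, b5], [b5, 2*b4]] @ x = -[b1, b2].
        H = np.array([[2 * beta[3], beta[5]], [beta[5], 2 * beta[4]]])
        x_opt = np.linalg.solve(H, -beta[1:3])
        print(beta.round(3), x_opt.round(3))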
  • LOW-CARBON TRANSPORTATION & GREEN DEVELOPMENT
    WANG Yue, YAO Enjian, HAO He
    Journal of Tsinghua University(Science and Technology). 2023, 63(11): 1741-1749. https://doi.org/10.16511/j.cnki.qhdxxb.2023.26.024
    Abstract (983) PDF (401) HTML (0) CSCD(2)
    [Objective] Optimising travel structure, improving travel efficiency, and reducing transport carbon emissions are essential paths to green and low-carbon transport development. Research into fine-grained carbon management has received much attention in recent years; however, implementation is complex, and pricing carbon directly tends to elicit negative feelings from travellers. [Methods] Under the concept of mobility as a service (MaaS), a provider can offer end-to-end travel by combining multiple transport modes, including road and public transport as well as many new forms of transportation. The service provider can therefore adjust prices flexibly across transport modes and sections within a single trip. Consequently, this paper proposes a low-carbon-oriented pricing strategy for the service provider. From the perspectives of the MaaS provider, travellers, and the environment, we propose a multi-objective optimisation model whose objectives are to maximise the service provider's revenue and to minimise network travel time and transportation carbon emissions. The model is a two-layer planning model: the upper layer searches for decision variables to evaluate the objective functions, and the lower layer performs the joint traffic mode and route choice process and the traffic equilibrium assignment in a multi-modal transportation network, where travellers' joint choice of mode and route depends on the upper-layer decision variables. To solve this optimisation problem, the reference-point-based non-dominated sorting genetic algorithm (NSGA-III) and the method of successive averages (MSA) are introduced. [Results] The case study was conducted on an example network with 1 origin-destination pair, 6 nodes, and 16 sections covering 3 traffic modes (car, bus, and metro). Three representative strategies from the Pareto solutions were selected: optimise service provider benefits (OP-S), optimise network travel time (OP-T), and optimise transportation carbon emissions (OP-C); the original (OR) state was also presented as the baseline. The results showed that the travel price increased significantly under OP-S, which was unfriendly to travellers, while OP-T and OP-C were metro-friendly and public-transport-friendly strategies, respectively. Compared with the OR state, service benefits and carbon emissions both improved, meaning that the service provider could achieve emission reductions in multi-modal transport networks while ensuring its own profitability through the rational regulation of service pricing. The traffic volume analysis also showed that the service provider could optimise the network travel mode structure, thereby reducing road congestion and increasing the share of public transport. Comparing the optimisation strategies under different demand levels, we found that as travel demand increased, service provider benefits continued to grow (especially under OP-S); although total traffic carbon emissions rose with demand, the optimisations always reduced the carbon emissions of the system. [Conclusions] This paper validates the feasibility of travel service pricing strategies for multi-modal network traffic optimisation and low-carbon transport development. Service providers should not only seek to maximise their own revenue but also account for travel costs and environmental impacts, taking responsibility for coordinating and reducing transport system emissions. This paper clarifies the profitability and responsibilities of travel service providers in the green and low-carbon development of transport and provides a basis for service pricing strategies.
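    A small sketch of the multi-objective selection step: candidate pricing strategies are screened for Pareto efficiency over the three objectives (revenue, network travel time, carbon emissions). The candidate values are fabricated; in the paper the front is produced by NSGA-III, with the lower-level assignment supplying the objective values.

        import numpy as np

        # Each row holds the three objectives of one candidate pricing strategy,
        # all to be minimised: (-revenue, total travel time, carbon emissions).
        F = np.array([[-120.0, 500.0, 80.0],
                      [-100.0, 450.0, 70.0],
                      [-130.0, 520.0, 95.0],
                      [ -90.0, 430.0, 75.0],
                      [-110.0, 460.0, 65.0],
                      [ -95.0, 470.0, 85.0]])   # dominated by the second row

        def pareto_mask(F):
            """True for rows not dominated by any other row (minimisation)."""
            n = len(F)
            mask = np.ones(n, dtype=bool)
            for i in range(n):
                for j in range(n):
                    if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                        mask[i] = False
                        break
            return mask

        front = F[pareto_mask(F)]
        print(front)   # strategies such as OP-S / OP-T / OP-C are picked from this set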
  • AEROSPACE ENGINEERING
    LIN Weiquan, XU Hangrui, LAN Xudong
    Journal of Tsinghua University(Science and Technology). 2024, 64(9): 1521-1535. https://doi.org/10.16511/j.cnki.qhdxxb.2024.27.024
    Abstract (977) PDF (402) HTML (0)
    [Significance] With the rapid advancement of aerospace engineering technology, the performance requirements for aircraft are steadily escalating. Aircraft are now expected to serve numerous purposes in the military, transportation, and other sectors. Hypersonic aircraft, which operate under multiple operating conditions and across a wide velocity range, have become a hot research topic in many countries. A critical component of such aircraft is the power system: a high-performance aircraft must be matched with a high-performance power system. Existing mature power systems are limited in their working conditions and performance, so the development trend has shifted toward combined power systems. Prominent combined engines currently include the rocket-based combined cycle (RBCC), the turbine-based combined cycle (TBCC), the air turborocket, and precooled engines. Considering factors such as cost, performance, and safety, TBCC engines emerge as the most promising power system for hypersonic aircraft in the near-space range of 20-100 km because of their wide flight envelope, reusability, large unit thrust, and other advantages. Therefore, summarizing the key TBCC technologies and exploring their development path is crucial. [Progress] The United States, Japan, and the United Kingdom are pioneers in combined power research. These countries have made significant technical achievements, possess mature technologies, and have completed entire research and development cycles for combined engine products, placing them at the forefront of this field. In its future research and development strategy, the United States focuses on system-wide research into TBCC and RBCC technologies. Following the completion of the HYPR90 program, Japan has conducted in-depth studies of the precooled engine ATREX. Meanwhile, the UK continues its extensive research on SABRE, aiming to deploy it in future single-stage spacecraft. Other countries, such as Germany, Russia, and China, are also engaged in large-scale TBCC research, accumulating technologies to achieve breakthroughs from theory to engineering application. Regarding key TBCC technologies, this paper analyzes and summarizes advancements in propulsion system technology and subsystem technology. For the subsystems, current TBCC inlet forms are reviewed, with advanced mixed rectangular divergent and integrated multidimensional cross-sectional configurations analyzed; the future direction points toward 3D internal contraction inlets. The advantages and disadvantages of series and parallel exhaust systems are analyzed alongside the basic theory of the exhaust process, emphasizing the need for more theoretical support for exhaust systems. Numerous achievements in mode transition control technology are listed, highlighting that future research should focus on integrating strongly coupled flight control with mode control technology. Regarding propulsion system technology, a comprehensive theoretical model for aircraft-engine integration is presented, pointing out the defects of the traditional approach of designing the aircraft and engine separately. This paper also reviews the development of performance simulation and testing technologies domestically and internationally, suggesting that future work should involve developing sophisticated simulation software and building new test benches. [Conclusions and Prospects] The combined engine essentially integrates four types of engines: turbine, rocket, ramjet, and precooled engines. This paper summarizes the key technologies of TBCC, explores their development routes, and offers three prospects for the future form of combined engines: combining new basic power forms, adopting new energy sources, and incorporating external drive platforms.
  • Research Article
    WU Zhuo, ZHANG Wenbo, WANG Zhiguo, FENG Jiarui, REN Yali
    Journal of Tsinghua University(Science and Technology). 2023, 63(3): 348-355. https://doi.org/10.16511/j.cnki.qhdxxb.2022.26.046
    Abstract (966) PDF (399) HTML (0) CSCD(1)
    [Objective] A parafoil is a type of parachute that can glide. With the aid of navigation and control equipment, a parafoil can approach a target point by autonomously changing its course, a great advantage over other types of parachutes in precision aerial delivery and spacecraft recovery missions. Because of these characteristics, a parafoil behaves more like an aircraft than a parachute, so its design must include both structural and aerodynamic design, which makes parafoil design complex, particularly for large parafoils. The design method for a large parafoil is of high research value, can substantially improve parafoil performance, and must be accurate and reliable to meet the needs of recovery missions. In this paper, a complete set of design methods for a large parafoil, including structural and aerodynamic design methods, was investigated. A structural design method for a large parafoil was first proposed, covering structural composition, parameter selection, main-component design, and the structural framework. By investigating the design parameters of proven large parafoils, recommended values for the design parameters were given, and the influence of design parameter variation on parafoil performance was discussed. In addition, a 300 m2 parafoil was designed for a launch vehicle booster using this method. On the basis of the structural design, this paper used numerical simulation results for an airfoil to modify the aerodynamic design method of a parafoil. The modified method can obtain the stall angle of attack of a parafoil system and detect the imbalance of a parafoil system with a small rigging angle before stall, which is conducive to selecting the rigging angle in the design; an incorrect rigging angle results in a parafoil system that cannot glide, i.e., a failed design. The modified method can also obtain more accurate parafoil aerodynamic data as the angle of attack varies at different rigging angles. Using this method, the aerodynamic data of the 300 m2 parafoil were acquired, and its rigging angle was determined to be 4°, which gave the large parafoil good aerodynamic and balance performance. The verification results of an airdrop test and a flight test for the 300 m2 parafoil were given. Comparing the aerodynamic data from the design and the tests showed that: 1) the data obtained by the modified aerodynamic design method agreed well with the test data; 2) the parameter selections in the design, such as the rigging angle, were reasonable and feasible; and 3) the structural framework of the large parafoil was sufficiently strong. The design method proposed in this paper is therefore accurate and reliable, and the designed large parafoil passed the airdrop and flight tests, proving that the method can be applied to large parafoils.
  • Research Article
    HU Xuechao, BI Xiaotian, LIU Ce, SHAO Weiwei
    Journal of Tsinghua University(Science and Technology). 2023, 63(4): 572-584. https://doi.org/10.16511/j.cnki.qhdxxb.2023.25.027
    Abstract (954) PDF (389) HTML (1) CSCD(5)
    [Objective] Micromix combustion is an excellent low-pollution combustion technology. However, the instability of micromix combustion based on multiple small flames, especially high-frequency oscillation at high hydrogen contents, remains poorly understood. [Methods] Herein, the emission performance and oscillation characteristics of micromix combustion under different degrees of hydrogen enrichment were studied. An experimental study on the combustion instability of hydrogen-rich fuel was conducted using a novel micromix burner at atmospheric pressure with air preheated to 673 K, providing a reference for practical engineering applications. Power spectral density was used for spectral analysis. Phase-space reconstruction was applied to analyze the development of the dynamical system and identify limit cycle oscillations. Proper orthogonal decomposition (POD) was used to analyze the flame dynamics under oscillating conditions, and the time coefficients and spatial distributions of the modes were extracted. Dynamic pressure sensors were arranged in the air inlet and the contraction section of the exhaust outlet to measure pressure fluctuations, and a high-speed camera system enabled fast acquisition of chemiluminescence signals. The NOx emissions, dynamic pressure, flame structure, and other combustion characteristics were studied for hydrogen contents ranging from pure methane to pure hydrogen. [Results] The results showed that: 1) The micromix burner had excellent low-emission performance for pure hydrogen, with <5 μmol/mol NOx at 15% O2, and could achieve stable combustion over a wide range of hydrogen contents; these characteristics indicate that the burner can be directly applied to the design of hydrogen turbine combustion chambers. 2) Oscillatory combustion occurred at hydrogen contents between 10% and 20%. Under those conditions, the phase-space reconstruction trajectory manifested as a limit cycle oscillation, the root mean square values of the pressure fluctuation exceeded 1%, representing strongly correlated structures, and higher-order harmonics were found. The heat release showed a periodic overall rise and fall, and the periodic formation and axial propagation of flame vortices could be observed. Flames with high hydrogen contents fluctuated at a high frequency of >900 Hz, but their amplitude was low. 3) Time-averaged images were used to characterize the flame structure under different conditions. The decrease in flame height with increasing hydrogen content shifted the position where heat release is concentrated; on the one hand, this affected the coupling between the heat release fluctuation and the pressure fluctuation, and on the other hand, it shortened the period of the pressure fluctuation, corresponding to the increase in the dominant frequency. 4) At hydrogen contents between 10% and 20%, the first-order mode was a volume oscillation identical in frequency to the dominant oscillation frequency, and the second-order mode was an axial oscillation at twice that frequency. With increasing hydrogen content, the dominant POD modes switched from the axial mode to flame interaction. [Conclusions] The oscillation conditions and instability characteristics of hydrogen-containing fuels were obtained via data analysis. The experimental results can help elucidate the mechanism of combustion instability and provide a reference for developing combustion instability control technology.
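    The POD step can be sketched with a plain snapshot SVD: frames are stacked as columns, the time mean is removed, the left singular vectors give the spatial modes, and the scaled right singular vectors give the modal time coefficients. The synthetic "flame movie" below (one planted ~900 Hz mode plus noise) is purely illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        n_pix, n_frames = 64 * 64, 200
        t = np.arange(n_frames) / 5000.0                  # fake 5 kHz sampling
        mode = rng.standard_normal(n_pix)                 # one planted spatial mode
        snapshots = (np.outer(mode, np.sin(2 * np.pi * 900 * t))   # ~900 Hz oscillation
                     + 0.1 * rng.standard_normal((n_pix, n_frames)))

        fluct = snapshots - snapshots.mean(axis=1, keepdims=True)  # remove time mean
        U, S, Vt = np.linalg.svd(fluct, full_matrices=False)

        energy = S**2 / np.sum(S**2)          # relative energy of each POD mode
        coeff1 = S[0] * Vt[0]                 # time coefficient of the first mode
        print(energy[:3], coeff1[:5])

    A power spectral density of coeff1 would then recover the planted ~900 Hz peak, mirroring how the modal frequencies are compared with the dominant pressure frequency.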
  • Review
    XU Pengfei, CHEN Meiya, KAI Yan, WANG Zipeng, LI Xinyu, WAN Gang, WANG Yanjie
    Journal of Tsinghua University(Science and Technology). 2023, 63(7): 1032-1040. https://doi.org/10.16511/j.cnki.qhdxxb.2023.26.018
    Abstract (923) PDF (382) HTML (0) CSCD(2)
    [Significance] China relies heavily on hydropower, and the safety of hydropower dams bears on the safety of people's lives and property and on the national economy. Therefore, regular inspection of dam defects in large hydropower plants is vital to ensure their safe operation. Most common dam defects, such as cracks and leakage, originate at the surface of the structure and can affect the service life of the dam. In recent years, remotely operated vehicles (ROVs) have been used for the underwater inspection of dam defects in hydropower plants, as they mitigate many disadvantages of manual inspection while improving detection accuracy and efficiency. [Progress] We examine the environmental conditions of dams and the main content of dam defect inspection in hydropower plants and review research on applying ROVs to the underwater inspection of large hydropower dams. We find that different sensors can be combined with ROVs according to detection and operation needs. Such systems can achieve intelligent mobile inspection and remote control for dam operation safety, automatically identify dam defect characteristics, and store shore-station interaction information. At present, ROVs are rarely used for inspecting dam defects in large hydropower plants but are widely used in fields such as deep-sea exploration, undersea operations, and rescue assistance. Using ROVs for crack and leakage inspection in hydropower plants has tremendous advantages, and research on the intelligent ROV inspection of other structures has clear implications for developing ROVs for the intelligent underwater inspection of large hydropower dams. We analyze the progress of ROV technology in domestic and international hydropower engineering research in terms of the overall technology, underwater adsorbers, power systems, inspection technology, underwater positioning, and control systems. Moreover, we explore the modular design and overall scale optimization of ROVs for underwater inspection in large hydropower dams, with the design objectives of light weight, high stability, and strong anti-current and anti-disturbance capability. Thrusters with high propulsion ratios have been developed to ensure sufficient ROV power. Adsorbers have been added to ROV systems to control hovering, which also improves underwater anti-disturbance ability and ensures stable detection and operation. Acoustic-optical inspection technology has been proposed to improve detection accuracy, and intelligent algorithms have been used for defect identification and image post-processing. Regarding underwater positioning and control systems, a complementary approach combining information from multiple sensors has been adopted and validated on dam defect inspection, improving the ROV's motion and inspection capabilities. [Conclusions and Prospects] The use of ROVs for the underwater inspection of large hydropower dams has major advantages in targeting cracks and other dam defects, and research on the intelligent inspection of hydropower dams opens up broad prospects.
  • COMPUTER SCIENCE AND TECHNOLOGY
    JIA Fan, KANG Shuya, JIANG Weiqiang, WANG Guangtao
    Journal of Tsinghua University(Science and Technology). 2023, 63(9): 1399-1407. https://doi.org/10.16511/j.cnki.qhdxxb.2023.21.007
    Abstract (922) PDF (376) CSCD(1)
    [Objective] In recent years, the number of publicly disclosed vulnerabilities has increased, and software security personnel and vulnerability enthusiasts find it increasingly difficult to locate the vulnerability information they are interested in. A recommendation algorithm can provide personalized vulnerability suggestions to help users obtain valuable vulnerability information efficiently. However, vulnerability-related recommendation systems generally suffer from one-sided analysis, complex implementation, high specialization, and data privacy concerns, and research that directly treats vulnerabilities as recommendation items is scarce. [Methods] This paper selects the vulnerability itself as the recommendation item, collects data from public datasets, and adopts a simple, efficient recommendation algorithm for personalized vulnerability recommendations. As a classical recommendation model, collaborative filtering is widely used and computationally efficient. However, the user-vulnerability interaction matrix is sparser than the interaction matrices analyzed by classical recommendation models, which seriously degrades the performance of collaborative filtering. To solve this problem, this paper introduces a vulnerability similarity calculation algorithm that comprehensively considers 13 features, such as vulnerability type, severity, and description text, and integrates it into a content-based recommendation algorithm, emphasizing the universal connections between vulnerabilities. By finding the vulnerabilities most similar to each vulnerability the target user has interacted with, the algorithm assembles the list of vulnerabilities with the highest recommendation value and recommends it to the user. The algorithm also accounts for the characteristics of individual users and product users and incorporates a labeling mechanism, finally forming a similarity-based multi-user vulnerability recommendation algorithm that effectively alleviates the sparsity and cold-start problems of recommendation. [Results] Experiments on public datasets show that 1) the similarity-based content recommendation algorithm achieves better accuracy than the traditional collaborative filtering algorithm for all user types; in particular, the precision, recall, and F1 score of the recommendations for product users increase by 58.86%, 58.53%, and 0.586 1, respectively; 2) the recommendation list of the similarity-based algorithm is more effective and more consistent with users' vulnerability preferences, with the normalized discounted cumulative gain of the list for product users increasing by 0.596 5; and 3) the result coverage of the similarity-based algorithm is much higher than that of collaborative filtering; among human users, its coverage is 7.6 times that of the original interest data, showing that the algorithm successfully surfaces many vulnerabilities that users have not previously interacted with. [Conclusions] This paper treats vulnerabilities as recommendation items for multiple types of users and proposes a similarity-based multi-user vulnerability recommendation algorithm. The algorithm introduces a vulnerability similarity calculation method and integrates it into a content-based recommendation algorithm, solving the high sparsity of the user-vulnerability interaction matrix and the cold-start problems of user-based collaborative filtering and effectively improving the accuracy and effectiveness of recommendations.
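    The core content-based step can be sketched as follows: each vulnerability becomes a feature vector (13 features in the paper; 5 fabricated ones here), and the items most similar to the user's interaction history are recommended. Names and values are illustrative only.

        import numpy as np

        V = np.array([[1, 0, 7.5, 0, 1],     # CVE-A (rows and values are fabricated)
                      [1, 0, 7.0, 1, 1],     # CVE-B
                      [0, 1, 4.3, 0, 0],     # CVE-C
                      [1, 0, 9.8, 0, 1],     # CVE-D
                      [0, 1, 5.0, 1, 0]],    # CVE-E
                     dtype=float)
        names = ["CVE-A", "CVE-B", "CVE-C", "CVE-D", "CVE-E"]

        Vn = V / (np.linalg.norm(V, axis=1, keepdims=True) + 1e-12)
        sim = Vn @ Vn.T                       # cosine similarity between vulnerabilities

        history = [0]                         # the user interacted with CVE-A
        scores = sim[history].sum(axis=0)     # aggregate similarity to the history
        scores[history] = -np.inf             # do not re-recommend seen items
        top = np.argsort(scores)[::-1][:2]
        print([names[i] for i in top])

    Because scoring depends only on item features, a single interaction suffices to produce recommendations, which is why the approach eases the cold-start problem.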
  • CONSTRUCTION MANAGEMENT
    HUANG Yuecheng, LI Boning, YU Xiaoxia, WANG Yao, FANG Dongping
    Journal of Tsinghua University(Science and Technology). 2023, 63(2): 169-178. https://doi.org/10.16511/j.cnki.qhdxxb.2022.22.058
    Abstract (920) PDF (379) CSCD(1)
    [Objective] In recent years, the death toll in the construction industry has remained high, and unsafe human behavior is the main cause of accidents. The interactions between construction team members have an important impact on workers' unsafe behavior, and the safety leadership of the foreman, the most important first-line manager in the construction team, helps eliminate workers' unsafe behavior. However, workers may develop negative psychological states, such as anxiety and avoidance, during actual interactions with their foreman, which hinders the improvement of safe behavior when safety leadership is exercised, and this mechanism remains unclear. [Methods] Workers and foremen were selected as the research subjects. We took attachment, a concept from developmental psychology, as the factor characterizing psychological interaction among construction team members, explored its mediating effect between the foreman's safety leadership and workers' safety behavior, and constructed a theoretical model. Based on well-established adult attachment scales from psychological research, a worker-foreman attachment scale suited to the construction industry was designed; the scale was refined through a pilot study and confirmatory factor analysis, and its reliability and validity were satisfactory. The foreman's safety leadership and workers' safety behavior were measured with well-proven mature scales. Empirical data were collected from six construction sites in Beijing, Ningxia, and Hubei, China, and 206 valid samples were obtained. A structural equation model was built in AMOS, and path analysis was performed. [Results] The results show that the foreman's safety leadership has a significant negative impact on workers' attachment avoidance (P<0.001, path coefficient=-0.912). As attachment avoidance negatively influences workers' safety motivation and working habits, it further negatively affects, as a mediating variable, workers' compliance safety behavior (P<0.001, path coefficient=-0.720) and participation safety behavior (P<0.001, path coefficient=-0.776). Workers' attachment anxiety is positively and marginally associated with the foreman's safety leadership (P<0.05, path coefficient=0.166), while attachment anxiety has no significant effect on either dimension of workers' safety behavior (P>0.05). [Conclusions] The results reveal the mediating effect of attachment avoidance between the foreman's safety leadership and workers' safety behavior. Through the lens of workers' psychological states, several practical suggestions on safety leadership are given for reducing unsafe behavior by reducing workers' attachment avoidance. This study also finds that the overexertion of safety leadership may increase workers' attachment anxiety, which has an uncertain impact on their safety behavior; thus, blindly strengthening safety leadership is inappropriate. This research not only illustrates the mechanism of safety leadership at the individual psychological level but also explains the implications of the worker-foreman relationship and highlights the importance of interaction between construction team members. The effect of safety leadership is two-sided; hence, safety leadership should be adjusted flexibly in practice according to the actual situation.
  • COMPUTER SCIENCE AND TECHNOLOGY
    ZHAO Chuanjun, WU Meiling, SHEN Lihua, SHANGGUAN Xuekui, WANG Yanjie, LI Jie, WANG Suge, LI Deyu
    Journal of Tsinghua University(Science and Technology). 2023, 63(9): 1380-1389. https://doi.org/10.16511/j.cnki.qhdxxb.2023.21.012
    Abstract (916) PDF (369)
    [Objective] Deep learning models for text sentiment analysis, such as recurrent neural networks, often require many parameters and a large amount of high-quality labeled training data to effectively train and optimize recurrent neural networks. However, obtaining domain-specific high-quality sentiment-labeled data is a challenging task in practical applications. This study proposes a cross-domain text sentiment classification method based on syntactic structure transfer and domain fusion (SSTDF) to address the domain-invariant learning and distribution distance difference metric problems. This method can effectively alleviate the dependence on domain-specific annotated data due to the difference in the data distribution among different domains. [Methods] A method combining SSTDF was proposed in this study to solve the problem of cross-domain sentiment classification. Dependent syntactic features are introduced into the recurrent neural network for syntactic structure transfer for designing a migratable dependent syntactic recurrent neural network model. Furthermore, a parameter transfer strategy is employed to transfer syntactic structure information across domains efficiently for supporting sentiment transfer. The conditional maximum mean discrepancy distance metric is used in domain fusion to quantify the distribution differences between the source and target domains and further refine the cross-domain same-category distance metric information. By constraining the distributions of source and target domains, domain variable features are effectively extracted to maximize the sharing of sentiment information between source and target domains. In this paper, we used a joint optimization and training approach to address cross-domain sentiment classification. Specifically, the sentiment classification loss of source and target domains is minimized, and their fusion losses are fully considered in the joint optimization process. Hence, the generalization performance of the model and classification accuracy of the cross-domain sentiment classification task are considerably improved. [Results] The dataset used in this study is the sentiment classification dataset of Amazon English online reviews, which has been widely used in cross-domain sentiment classification studies; furthermore, it contains four domains—B (Books), D (DVD), E (Electronic), and K (Kitchen)—each with 1 000 positive and negative reviews. The experimental results show that the accuracy of the SSTDF method is higher than the baseline method, achieving 0.844, 0.830, and 0.837 for average accuracy, recall, and F1 values, respectively. Fine-tuning allows the fast convergence of the network, thereby improving its transfer efficiency. [Conclusions] Finally, we used deep transfer learning methods to solve the task of cross-domain text sentiment classification from the perspective of cross-domain syntactic structure consistency learning. A recurrent neural network model that integrates syntactic structure information is used; additionally, a domain minimum distance constraint is added to the syntactic structure transfer process to ensure that the distance between the source and target domains is as similar as possible during the learning process. The effectiveness of the proposed method is finally verified using experimental results. The next step is to increase the number of experimental and neutral samples to validate the proposed method on a larger dataset. 
Furthermore, a more fine-grained aspect-level cross-domain sentiment analysis will be attempted in the future.
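    As a concrete illustration of the domain-fusion term, the following is a minimal PyTorch sketch of a class-conditional maximum mean discrepancy loss of the kind the abstract describes. The Gaussian kernel, all names, and the use of target-domain pseudo-labels are assumptions for illustration, not the authors' implementation.

        # Hypothetical sketch of a class-conditional MMD loss for domain fusion.
        # Assumes feature batches from source/target encoders; not the paper's code.
        import torch

        def gaussian_mmd(x, y, sigma=1.0):
            """Squared maximum mean discrepancy between two batches (Gaussian kernel)."""
            def k(a, b):
                return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
            return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

        def conditional_mmd(src_feat, src_lab, tgt_feat, tgt_lab, n_classes=2):
            """Average per-class MMD; target labels would be pseudo-labels in practice."""
            loss, matched = src_feat.new_zeros(()), 0
            for c in range(n_classes):
                s, t = src_feat[src_lab == c], tgt_feat[tgt_lab == c]
                if len(s) > 0 and len(t) > 0:
                    loss = loss + gaussian_mmd(s, t)
                    matched += 1
            return loss / max(matched, 1)

        # Joint objective: classification losses plus the weighted fusion term, e.g.
        # total = src_cls_loss + tgt_cls_loss + lam * conditional_mmd(...)

    In the joint optimization described above, this term would be weighted against the source- and target-domain classification losses so that same-category features from both domains are pulled together.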
  • SPECIAL SECTION: ROBOTICS
    JIANG Xiao, WANG Song, WU Dan
    Journal of Tsinghua University(Science and Technology). 2024, 64(10): 1677-1685. https://doi.org/10.16511/j.cnki.qhdxxb.2024.27.023
    Abstract (910) PDF (376) HTML (0)   Knowledge map   Save
    [Objective] Oblong holes are commonly used across various industries to improve fault tolerance and adjustment capabilities. However, their complex geometric characteristics pose significant challenges for vision detection and location algorithms in industrial applications, limiting their use in automatic assembly processes. [Methods] This research investigates a high-precision, robust vision segmentation and location algorithm tailored for oblong holes. First, the geometric features of oblong holes, which are symmetric but lack a simple analytical description, are analyzed; this complexity renders traditional imaging methods ineffective for accurate localization. Detection and segmentation of oblong-hole features are then conducted using a novel vision location algorithm that integrates deep learning with conventional image processing techniques. Specifically, the algorithm employs a sequential framework connecting YOLO and fully convolutional networks: it first identifies the region of interest and then performs semantic segmentation. The YOLO network rapidly detects the region of interest, prioritizing areas where the oblong hole is prominently featured, and semantic segmentation is subsequently performed by the fully convolutional network. Afterward, a skeleton feature extraction method based on medial axis transformation is applied to precisely locate the oblong hole; this step effectively reduces the impact of shape errors from semantic segmentation, achieving subpixel accuracy. However, medial axis transformation may produce redundant lines owing to image artifacts, potentially leading to inaccuracies. To address this issue, principal component analysis is employed to approximate the center of the oblong hole, thereby minimizing errors (a simplified sketch of this skeleton-based location step follows the abstract). For further precision, a Hough-transform ellipse detection method identifies the central skeleton of the oblong hole, which can be interpreted both as a line segment and as a degenerate ellipse; the center of this skeleton is taken as the center of the oblong hole. [Results] Experimental validation conducted in a specific robotics automatic assembly system confirms the effectiveness of the proposed algorithm. The robustness of the algorithm is further demonstrated by sampling images with camera hardware different from that used for the training dataset. Additionally, the effects of surface features and oblong-hole shapes on detection performance are analyzed. The experiments indicate that the algorithm performs best on objects with nonreflective surfaces, with the shape of the oblong hole having minimal effect on accuracy. Despite potential deformations in the segmentation output due to hardware variations, the location algorithm, which degenerates the oblong-hole region to its medial-axis skeleton, still accurately locates the center. The final location error is 1.05 pixels, which surpasses the accuracy achieved by directly computing the center of gravity of the segmented region. These results underscore the substantial benefits of the algorithm in scenarios with varying hardware and object conditions, demonstrating its high accuracy and exceptional robustness. [Conclusions] By merging deep learning techniques with traditional image processing methods, the location tasks for diverse objects are effectively resolved. 
The extraction of highly nonlinear features through deep learning, followed by processing with traditional image methods incorporating prior geometric knowledge, enhances the robustness and accuracy of the algorithm, making it suitable for practical production applications.
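    To make the post-segmentation step concrete, here is a minimal sketch, assuming a binary mask from the segmentation stage, of medial-axis skeletonization followed by a PCA estimate of the hole center and major axis. The Hough ellipse refinement is omitted, and function names are illustrative rather than the authors' implementation.

        # Illustrative center location from a binary oblong-hole mask (not the paper's code).
        import numpy as np
        from skimage.morphology import medial_axis

        def locate_oblong_center(mask):
            """Estimate the hole center and major-axis direction from a boolean mask."""
            skeleton = medial_axis(mask)                  # 1-pixel-wide central skeleton
            ys, xs = np.nonzero(skeleton)
            pts = np.stack([xs, ys], axis=1).astype(float)
            center = pts.mean(axis=0)                     # averages out boundary shape errors
            # PCA on skeleton pixels: the dominant eigenvector tracks the long axis,
            # which helps discount redundant branches caused by image artifacts.
            eigvals, eigvecs = np.linalg.eigh(np.cov((pts - center).T))
            major_axis = eigvecs[:, np.argmax(eigvals)]
            return center, major_axis

    Averaging the skeleton rather than the full segmented region is what makes the estimate tolerant of segmentation deformations at the hole boundary, consistent with the reported 1.05-pixel error.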
  • SPECIAL SECTION: BIG DATA
    WU Houyue, LI Xianwei, ZHANG Shunxiang, ZHU Honghao, WANG Ting
    Journal of Tsinghua University(Science and Technology). 2024, 64(12): 1997-2006. https://doi.org/10.16511/j.cnki.qhdxxb.2024.21.027
    Abstract (887) PDF (359) HTML (0)   Knowledge map   Save
    [Objective] The generation of adversarial text samples is a significant area of research in natural language processing: it is used to test the robustness of machine learning models and has gained widespread attention from scholars. Owing to the complexity of Chinese semantics, generating Chinese adversarial samples remains a major challenge. Traditional methods mainly involve word replacement, deletion/insertion, and word-order adjustment. They often produce samples that are easily detectable and have low attack success rates, and thus struggle to balance attack effectiveness and semantic coherence. To address these limitations, this study introduces DiffuAdv, a novel method for generating Chinese adversarial samples. The approach enhances the generation process by simulating the data distribution during the adversarial attack phase: the gradient changes between adversarial and original samples are used as guiding conditions during the model's reverse diffusion phase in pre-training, resulting in more natural and effective adversarial samples. [Methods] DiffuAdv introduces diffusion models into adversarial sample generation to improve attack success rates while preserving the naturalness of the generated text. The method uses a gradient-guided diffusion process, leveraging gradient information between original and adversarial samples as guiding conditions, and consists of two stages: forward diffusion and reverse diffusion. In the forward diffusion stage, noise is progressively added to the original data until a noise-dominated state is reached. The reverse diffusion stage reconstructs samples, leveraging the gradient changes between adversarial and original samples to maximize the adversarial objective. During pre-training, data capture and feature learning occur under gradient guidance, with the aim of learning the data distribution of original samples and analyzing their deviations from adversarial samples. In the reverse diffusion generation phase, adversarial perturbations are constructed from gradients and integrated into the reverse diffusion process, ensuring that at each step the samples evolve toward greater adversarial effectiveness (a hypothetical sketch of such a guided step follows the abstract). To validate the effectiveness of the proposed method, extensive experiments are conducted across multiple datasets and various natural language processing tasks, and the method is compared with seven existing state-of-the-art methods. [Results] Compared with existing methods for generating Chinese adversarial samples, DiffuAdv achieves higher attack success rates across three tasks: text sentiment classification, causal relation extraction, and sentiment cause extraction. Ablation experiments confirm that using gradient changes between original and adversarial samples to guide generation improves sample quality. Perplexity (PPL) measurements show that the adversarial samples generated by DiffuAdv have an average PPL value of only 0.518, indicating that they are superior in naturalness and readability to samples generated by other methods. [Conclusions] DiffuAdv effectively generates high-quality adversarial samples that closely resemble real text in terms of fluency and naturalness. 
The adversarial samples produced by this method not only achieve high attack success rates but also exhibit strong robustness. The introduction of DiffuAdv enhances the research perspective on generating adversarial text samples and broadens the approaches for tasks such as text sentiment classification, causal relationship extraction, and emotion-cause pair extraction.
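    The gradient-guided reverse step can be sketched in the style of classifier guidance. Everything below, the denoiser interface, the adversarial loss, and the DDIM-style update, is a hypothetical stand-in for the formulation the abstract describes, not DiffuAdv itself.

        # Hedged sketch: one reverse-diffusion step nudged by an adversarial gradient.
        import torch

        def guided_reverse_step(x_t, t, denoiser, adv_loss, alpha_bar, guidance=1.0):
            """x_t: current latent; adv_loss must return a scalar to differentiate."""
            x_t = x_t.detach().requires_grad_(True)
            eps = denoiser(x_t, t)                                # predicted noise
            grad = torch.autograd.grad(adv_loss(x_t), x_t)[0]     # adversarial direction
            # Classifier-guidance-style shift of the noise estimate toward adversariality.
            eps = eps - guidance * (1 - alpha_bar[t]).sqrt() * grad
            a = alpha_bar[t]
            a_prev = alpha_bar[t - 1] if t > 0 else torch.ones_like(a)
            x0 = (x_t - (1 - a).sqrt() * eps) / a.sqrt()          # clean-sample estimate
            return (a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps).detach()

    Iterating such steps from noise to data is what lets each denoising move the sample toward higher attack effectiveness while the learned data distribution keeps the text natural.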
  • HYDRAULIC ENGINEERING
    GUO Shiyuan, MA Weizhi, LU Ruilin, LIU Jinlong, YANG Zhigang, WANG Zhongjing, ZHANG Min
    Journal of Tsinghua University(Science and Technology). 2023, 63(12): 1924-1934. https://doi.org/10.16511/j.cnki.qhdxxb.2023.21.005
    Abstract (883) PDF (359) HTML (0)   Knowledge map   Save
    [Objective] Water discharge prediction in canals under complex conditions is a fundamental problem of prominent practical significance for improving farmland irrigation efficiency, conserving water resources, and reducing costs. The state-of-the-art solution is to establish nonlinear partial differential equations solved by numerical methods, whose time cost is exponential in the fineness of the spatiotemporal discretization. Moreover, each time step depends on the result of the previous one, i.e., the calculation cannot be parallelized, which forces a tradeoff between accuracy and efficiency. In actual irrigation areas, the control of gate openings in canals relies primarily on human experience, which has an extremely long feedback process. It is therefore challenging to employ human experience and numerical methods when multiple gate changes are required. The rapid development of artificial intelligence has created new opportunities for modernizing such conventional practice. For the water discharge prediction task, the input and output are well defined, corresponding to a regression problem, one of the fundamental problem types that neural networks are good at solving. This study presents new insights on leveraging a neural network to solve the water discharge prediction problem end-to-end: the network needs to be trained only once, after which multiple predictions can be obtained with high efficiency. The proposed approach thus overcomes the extremely high time cost of conventional methods. [Methods] Based on the Internet-of-Water theory of "real-time perception, water-information interconnection, process tracking, and intelligent processing", this study introduces a novel approach for water discharge prediction. First, the sequence features of upstream and downstream gate control were investigated, and static features of the gates and canal were introduced. Second, a prediction method for canal discharge based on a long short-term memory (LSTM) neural network was proposed, whose gating mechanism allows better modeling and prediction of sequential information (a minimal model sketch follows the abstract). Feature discretization and normalization were applied to the static features to improve the model's generalization to unseen data. Layer normalization was performed on the LSTM output to shift its distribution into the unsaturated region of the activation function, making the network more sensitive to inputs and outputs and accelerating convergence. [Results] The comparative experiments show the following: 1) The proposed model completes the prediction task with an accuracy rate exceeding 97% in every canal segment, significantly better than all baselines, indicating the effectiveness of the hidden sequence features inside the canal and of the LSTM gating mechanism. 2) Under normal circumstances, introducing static features as part of the model's input improves prediction performance. 3) The proposed model demonstrates good robustness: it learns and predicts well even with limited training data, which makes it extremely useful when data are scarce or when the model must be migrated to other canals. 
4) Compared with the conventional numerical calculation method, the proposed model achieves 308 times higher prediction efficiency, reducing the prediction time from 950 h to about 3 h on 100,000 data records. [Conclusions] This study verifies the feasibility of artificial intelligence-based methods for improving conventional canal discharge prediction, achieves both accuracy and efficiency through a reasonably designed deep learning model, and provides a new idea for applying artificial intelligence-based methods to hydraulic problems.
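    A minimal model sketch, under assumed input shapes, of the kind of LSTM predictor the abstract describes: sequential gate-control features plus normalized static canal features, with layer normalization applied to the LSTM output before the regression head. Names and dimensions are illustrative assumptions.

        # Minimal sketch of an LSTM discharge predictor with layer normalization.
        import torch
        import torch.nn as nn

        class CanalDischargeLSTM(nn.Module):
            def __init__(self, seq_dim, static_dim, hidden=64):
                super().__init__()
                self.lstm = nn.LSTM(seq_dim, hidden, batch_first=True)
                self.norm = nn.LayerNorm(hidden)   # keeps outputs in the activation's sensitive region
                self.head = nn.Linear(hidden + static_dim, 1)

            def forward(self, seq, static):
                out, _ = self.lstm(seq)            # (batch, time, hidden)
                h = self.norm(out[:, -1])          # normalized last-step hidden state
                return self.head(torch.cat([h, static], dim=1))

        # Example shapes: seq (32, 24, 6) for 24 time steps of 6 gate/level features,
        # static (32, 4) for discretized, normalized canal attributes.

    Once trained, a model of this form evaluates many gate-change scenarios in a single parallel batch, which is the source of the efficiency gain over step-by-step numerical integration.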
  • Editorial
    Journal of Tsinghua University(Science and Technology). 2023, 63(3): 293-293.
    Abstract (879) PDF (368) HTML (0)   Knowledge map   Save
  • Review
    LI Yanzhi, DU Jiayu, WU Xinxin, SUN Libin, MIN Qi
    Journal of Tsinghua University(Science and Technology). 2023, 63(8): 1173-1183. https://doi.org/10.16511/j.cnki.qhdxxb.2023.25.004
    Abstract (874) PDF (357) HTML (2)   Knowledge map   Save CSCD(1)
    [Significance] Aiming at carbon neutrality, energy structure transformation and upgrading has become a trend in global energy system development. Nuclear energy can effectively fill the power and heat supply gap during coal substitution. It offers a flexible layout, wide applicability, and insensitivity to climate change and the global market, which helps ensure national energy security. A heat pipe (HP) is a passive, efficient heat exchange element with a wide temperature range, stable and reliable performance, and high safety. It is widely applied in the aerospace, energy, and chemical industries, in solar collectors, in electronics cooling, and in other fields. HPs are irreplaceable in advanced nuclear energy, with multi-domain, multi-scale, and multi-section applications. Therefore, existing studies on HPs must be summarized for advanced nuclear technology. [Progress] According to operating temperature, HP applications in nuclear technology fall into three areas: nuclear power/propulsion systems, nuclear safety facilities, and nuclear urban services. First, heat-pipe-cooled reactors (HPRs) use alkali-metal high-temperature HPs to passively export the core heat, offering inherent safety and ease of storage and transportation. Because of the long phase transition during startup and the still poorly understood alkali-metal dynamics and heat transfer processes in steady and transient states, the startup characteristics and heat transfer performance of alkali-metal high-temperature HPs have been the most difficult aspects of HPR development. To meet different energy needs, HPR designs ranging from kilowatts to megawatts, together with corresponding thermoelectric conversion schemes, have been proposed. HPRs will have broad prospects in aerospace, ship power, deep-sea exploration, land-based power supplies, and other fields. Second, owing to their passive characteristics, HPs are a strong technical choice for safety facilities. In nuclear power plants, separated HPs have been applied to passive heat removal systems, passive emergency core cooling systems, passive containment cooling systems, and passive spent fuel pool cooling systems. For nuclear spacecraft cooling, an HP space radiator, composed of an HP and a heat sink, is a promising option with good thermal properties, temperature conversion characteristics, environmental adaptability, resistance to debris impact, and immunity to single-point failure. In thermonuclear reactors, HPs are also used for first-wall cooling. Third, HPs are mainly used in waste-heat recovery and low-temperature heat transfer to improve energy efficiency and safety in nuclear industry applications and urban services. Researchers have developed several desalination systems based on HPs and the waste heat of steam power plants and generators. District heating and triple-production systems combining nuclear power generation, hydrogen production, and heating are being promoted and have become popular in China. Finally, challenges in HP performance, adaptive design in HPRs, and HP operation and maintenance are discussed. [Conclusions and Prospects] The HP is well aligned with the advanced nuclear safety design concept. Although HPs are widely used in nuclear power/propulsion systems and reactor safety facilities, their practical applications in the nuclear industry and urban services remain relatively scarce, with almost no presence in the intermediate-temperature segment. 
Finally, prospects for advanced HP technology are proposed.
  • PROCESS SYSTEMS ENGINEERING
    ZHOU Yingqian, FENG Xiao, YANG Minbo
    Journal of Tsinghua University(Science and Technology). 2023, 63(5): 723-729. https://doi.org/10.16511/j.cnki.qhdxxb.2022.25.050
    Abstract (865) PDF (351) HTML (0)   Knowledge map   Save CSCD(3)
    [Objective] Hydrogen demand in refineries is increasing annually because of the growing processing of heavy crude oil, which necessitates optimizing the hydrogen network to improve hydrogen utilization. The cost associated with hydrogen compressors is the second largest in a refinery hydrogen network, after the fresh hydrogen cost. Thus, the synthesis of hydrogen networks considering gas compression has been an active topic in process systems engineering. Previous work has modeled only reciprocating compressors, focusing on reducing the number of compressors and/or the compression power consumption. However, both reciprocating and centrifugal compressors are widely used in refinery hydrogen networks: centrifugal compressors suit large gas flow rates at moderate exhaust pressures, while reciprocating compressors deliver high, stable exhaust pressures and suit small gas flow rates. [Methods] This work presents a hydrogen network superstructure that considers the selection of multistage centrifugal and reciprocating compressors. Compressor selection is determined by the characteristics of the two compressor types, considering their inlet gas flow rates and exhaust pressures (an illustrative selection fragment follows the abstract). A mixed-integer nonlinear programming (MINLP) model is formulated to minimize the total annualized cost, comprising fresh hydrogen, compressor investment, and compression power. The developed MINLP model is examined on a hydrogen network reported in the literature; it is coded in the General Algebraic Modeling System (GAMS) 35.2 and solved directly by the BARON solver. [Results] The optimal hydrogen network contains three centrifugal compressors and six reciprocating compressors, with one reciprocating compressor performing two-stage compression and one performing three-stage compression. The flow rates of the three centrifugal compressors exceed the upper flow-rate limit of the reciprocating compressors, while their outlet pressures are below the upper outlet-pressure limit of centrifugal compressors. [Conclusions] This result indicates that the flow-rate constraint dominates compressor selection in this hydrogen network. Since the cost correlation of the centrifugal compressor yields lower costs than that of the reciprocating compressor in this study, the centrifugal compressor is preferred whenever both types meet the compression demands; hence, only hydrogen streams with small flow rates and large compression ratios are assigned to reciprocating compressors. Compared with previous work, although the numbers of compressors are identical, the optimal network structures differ notably, and this study obtains lower compression power consumption. This is because earlier studies neglected compressor selection, whereas the model in this study prefers the less expensive centrifugal compressor; the flow rates of several hydrogen streams are therefore enlarged to satisfy the flow-rate constraint of centrifugal compressors, which is also more consistent with refinery practice. Finally, the computation time of the MINLP model is only 0.72 s, demonstrating the usefulness and convenience of the proposed method.
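    As an illustration of how type selection can enter such a model, the fragment below encodes the flow-rate logic with binary variables in Pyomo. It is a simplified linear fragment under assumed flow limits and cost coefficients, not the authors' GAMS model; in the full MINLP, such constraints repeat per candidate compressor and nonlinear power and investment correlations replace the assumed coefficients.

        # Illustrative compressor-type selection fragment (assumed numbers, not the paper's model).
        from pyomo.environ import (Binary, ConcreteModel, Constraint, NonNegativeReals,
                                   Objective, Var, minimize)

        m = ConcreteModel()
        m.flow = Var(within=NonNegativeReals)     # gas flow through the compressor
        m.use_cf = Var(within=Binary)             # 1 if a centrifugal unit is chosen
        m.use_rc = Var(within=Binary)             # 1 if a reciprocating unit is chosen

        F_MIN_CF, F_MAX_RC, F_MAX = 50.0, 40.0, 200.0   # assumed flow limits

        m.one_type = Constraint(expr=m.use_cf + m.use_rc == 1)
        m.cf_needs_large_flow = Constraint(expr=m.flow >= F_MIN_CF * m.use_cf)
        m.rc_flow_cap = Constraint(expr=m.flow <= F_MAX_RC * m.use_rc + F_MAX * m.use_cf)
        # Assumed annualized costs: cheaper centrifugal unit plus a per-flow power term.
        m.cost = Objective(expr=3.0 * m.use_cf + 5.0 * m.use_rc + 0.1 * m.flow,
                           sense=minimize)

    Under this encoding, a stream whose flow exceeds the reciprocating cap can only be served by a centrifugal unit, which mirrors the flow-dominated selection behavior reported in the results.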
  • CONSTRUCTION MANAGEMENT
    GU Botao, CAO Sihan, WANG Yao, HUANG Yuecheng, FANG Dongping
    Journal of Tsinghua University(Science and Technology). 2023, 63(2): 160-168. https://doi.org/10.16511/j.cnki.qhdxxb.2022.22.055
    Abstract (860) PDF (355)   Knowledge map   Save CSCD(1)
    [Objective] The construction industry faces a long-term, grave construction safety situation, and unsafe behavior is a major cause of accidents. In practice, construction tasks are usually accomplished by people organized as a team, and each team has two primary goals: advancing the construction process and ensuring team safety. However, current studies of collaborative work in the construction industry primarily consider scheduled tasks and fail to adequately consider the various types of members (workers, contractors, safety officers, subcontract safety officers, technicians, quality inspectors, supervisors, and government or client (Party A) representatives) involved in working safely, and members can also be supported by the team. Interactions such as communication and collaboration among team members have an important impact on their behaviors; moreover, the types and characteristics of interacting and noninteracting unsafe behaviors in collaborative-work accidents deserve further exploration. [Methods] This study implements thematic analysis based on the classic accident causation model and 129 high-quality accident investigation reports from China and the United States. [Results] The thematic analysis yields a behavior list with 3 primary, 7 secondary, and 32 tertiary behavior types. Compared with traditional individual unsafe behaviors, construction collaborative unsafe behaviors add 23 team behavior types, including 11 unsafe sharing behavior types and 12 unsafe supervisory behavior types. The statistical results of the coded dataset show that (1) among the collaborative unsafe behaviors, unsafe actions, unsafe sharing, and unsafe supervision appear 155, 86, and 152 times, respectively, so sharing and supervisory unsafe behaviors also deserve more attention. (2) Among the unsafe sharing behaviors, lack of verbal communication is the most frequent; therefore, management practice should focus on training communication within the team. (3) Among the unsafe supervisory behaviors, wrongly pointing out hazards is the most frequent; therefore, management practice should focus on training grassroots managers and supervisors to appropriately point out safety hazards in collaborative work. (4) For common workers, the most frequent unsafe behavior is breaking into the risk area; for special workers, procedure violations; and for grassroots managers and supervisors, no supervision at the site. These results indicate large differences in the unsafe behaviors that must be heeded for different roles in the work team; the deeper reasons for these differences need further exploration. (5) Unsupervised behavior appears in 72% of the accidents, and the lack of members performing supervisory functions is an important cause of work-team accidents. [Conclusion] This study provides references for controlling unsafe construction behaviors.