ISSN 1000-0585 | CN 11-1848/P | Started in 1982
Table of Contents, Volume 63, Issue 6
PUBLIC SAFETY
Research progress on landslide deformation monitoring and early warning technology
DENG Lizheng, YUAN Hongyong, ZHANG Mingzhi, CHEN Jianguo
Journal of Tsinghua University(Science and Technology). 2023, 63(6): 849-864. DOI: 10.16511/j.cnki.qhdxxb.2023.22.002
Abstract | PDF (8468KB) (812)
[Significance] Landslide hazards are widely distributed in China and cause severe harm. A comprehensive prevention and control system for registered landslide hazards has achieved remarkable disaster-reduction benefits. However, approximately 80% of all geo-disasters in China each year still occur outside the scope of identified hazards. Monitoring and early warning are therefore important means of actively preventing landslide disasters, and they have achieved great success in disaster mitigation owing to their promptness, effectiveness, and relatively low cost. Deformation is the most significant monitoring parameter for landslides, and deformation-based monitoring has become the focus and general trend. Landslide deformation monitoring engineering has strict requirements for controlled cost and high reliability to achieve widespread application and accurate early warning. Therefore, the commonly used monitoring instruments focus on surface deformation and rainfall to meet the requirements of easy installation and low implementation cost. However, surface deformation and rainfall are not sufficient conditions for determining the occurrence of landslides. The existing monitoring technologies and early warning methods face various challenges in engineering feasibility and performance improvement. Thus, it is important and urgent to summarize the existing research to rationally guide future development.[Progress] Deformation monitoring methods are divided into surface and subsurface monitoring. Most surface deformation monitoring technologies are vulnerable to interference from terrain, environment, and other factors; therefore, their timeliness and reliability are not easily guaranteed. In contrast, subsurface deformation monitoring technologies can directly capture the development and damage of the sliding surface and can thus recognize disaster precursors. Subsurface monitoring offers earlier warning capability; however, the existing instruments suffer from high cost, small measuring range, or difficult operation. Acoustic emission technology has the advantages of low cost, high sensitivity, and continuous real-time monitoring of large deformation, and it has gradually developed into a viable method for landslide subsurface deformation monitoring. Efficient landslide monitoring should therefore comprehensively use multiple technologies to overcome the limitations of any single technology, and integrated monitoring systems have become the state-of-the-art trend. The purpose of landslide monitoring is to provide a basis for early warning decision-making, thereby avoiding casualties and property losses through effective early warning. In the field of early warning, regional meteorological and individual landslide early warning methods have gradually been developed and improved. Deformation monitoring data are the main basis for landslide early warning: experts analyze the deformation trend and sudden-change characteristics, and different warning levels can be triggered by threshold values of velocity, acceleration, or other criteria. However, landslides have complex dynamic mechanisms and individual differences; thus, generic early warning models need further exploration. Intelligent early warning models integrate machine learning with geological engineering analysis to improve the accuracy and automation of landslide early warning.[Conclusions and Prospects] Deformation monitoring is essential in landslide prevention, and deformation data are the main basis for landslide early warning. Surface monitoring technologies have been widely used in landslide perception and decision-making, while subsurface monitoring technologies can detect early precursors of landslide evolution and continuously improve early warning accuracy. Analyses show that future early warning methods can be improved by integrating machine learning models with geotechnical engineering.
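As a rough illustration of the velocity/acceleration threshold logic described above, the following Python sketch maps a displacement series to a warning level; the threshold values, units, and level names are illustrative assumptions, not values from the paper.

```python
import numpy as np

def warning_level(displacement_mm: np.ndarray, dt_hours: float,
                  v_thresholds=(1.0, 5.0, 20.0),   # mm/h, assumed values
                  a_threshold=2.0) -> str:         # mm/h^2, assumed value
    """Map a landslide displacement time series to a qualitative warning level."""
    velocity = np.gradient(displacement_mm, dt_hours)   # deformation rate, mm/h
    acceleration = np.gradient(velocity, dt_hours)      # rate change, mm/h^2
    v, a = velocity[-1], acceleration[-1]
    if v < v_thresholds[0]:
        return "normal"
    if v < v_thresholds[1]:
        return "attention"
    if v < v_thresholds[2] and a < a_threshold:
        return "warning"
    return "alarm"   # rapid or accelerating deformation

# Example: hourly surface displacement readings from one monitoring point
print(warning_level(np.array([0.0, 0.4, 1.1, 2.5, 5.2, 9.8]), dt_hours=1.0))
```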
Figures and Tables | References | Related Articles | Metrics
Spatiotemporal rapid prediction model of urban rainstorm waterlogging based on machine learning
DAI Xin, HUANG Hong, JI Xinyu, WANG Wei
Journal of Tsinghua University(Science and Technology). 2023, 63(6): 865-873. DOI: 10.16511/j.cnki.qhdxxb.2023.22.013
Abstract | PDF (4909KB) (699)
[Objective] Rapid prediction of rainstorm waterlogging is crucial for disaster prevention and reduction. However, traditional numerical models are complicated and time-consuming when simulating large areas with complex underlying surface conditions, making it difficult to meet the time-efficiency requirement of rainstorm waterlogging prediction. To address these shortcomings of numerical models, this study constructs a spatiotemporal prediction model of urban rainstorm waterlogging based on machine learning methods to rapidly predict waterlogging extent and water-depth changes.[Methods] This study constructs a rapid prediction model of urban rainstorm waterlogging based on a hydrodynamic model and machine learning algorithms. First, a hydrodynamic model of rainstorm waterlogging in the study area is constructed with InfoWorks integrated catchment management (InfoWorks ICM), with parameter calibration and model validation, to realize high-precision simulation of urban rainstorm waterlogging. On this basis, rainfall scenarios drive the hydraulic model to obtain rainstorm waterlogging simulation results, which serve as the base dataset for machine learning. Second, the spatial characteristics of rainstorm waterlogging are obtained from three aspects: rainfall situation, underlying surface information, and the drainage capacity of the pipe network; together with the grid simulation results, these comprise the dataset. The spatial prediction models are based on random forest, extreme gradient boosting (XGBoost), and K-nearest neighbor algorithms. Finally, the simulation results of waterlogging points are used to generate rainstorm waterlogging time series. The rainfall, cumulative rainfall, and water depth of the preceding four moments (every 5 min) are used as the input of a long short-term memory (LSTM) neural network to predict the present water depth of the flooding point. The two models collaborate to achieve rapid spatial and temporal predictions of urban rainstorm waterlogging.[Results] For spatial prediction, the random forest model has the best fitting performance regarding evaluation indexes such as the mean square error, the mean absolute error, and the coefficient of determination (R²). For a rainstorm scenario with an 80-year return period and a 2.5 h rainfall duration, the prediction results concur with the risk map of urban waterlogging in Beijing. Compared with the simulation results of InfoWorks ICM, the random forest model reaches a prediction accuracy of 99.51% for the inundation extent, and its average prediction error of waterlogging depth does not exceed 5.00%. For temporal prediction, the water-depth trend predicted by the LSTM neural network is consistent with the simulation results of InfoWorks ICM: the R² of four typical inundation points exceeds 0.900, the average absolute error of water-depth prediction at the peak moment is 1.9 cm, and the average relative error is 4.0%.[Conclusions] When addressing sudden rainstorms, the machine learning-based rapid prediction model built in this study can generate accurate predictions of flooding extent and water depth in seconds by simply updating the forecast rainfall data in the model input. Its computational speed is greatly improved compared with the hydrodynamics-based numerical model, which can help plan waterlogging mitigation and relief measures.
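For concreteness, here is a minimal sketch of the temporal model described above: rainfall, cumulative rainfall, and water depth at the preceding four 5-min steps predict the current depth at a flooding point. The layer width, epochs, and stand-in data are assumptions.

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(4, 3)),   # 4 time steps x 3 features
    keras.layers.LSTM(64),              # hidden size is an assumption
    keras.layers.Dense(1),              # present water depth at the point
])
model.compile(optimizer="adam", loss="mse")

# Stand-in windows; in the paper these come from InfoWorks ICM simulations.
X = np.random.rand(256, 4, 3)           # (samples, steps, features)
y = np.random.rand(256)                 # depth at the current step
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```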
Figures and Tables | References | Related Articles | Metrics
Joint probability analysis of tropical cyclone wind and precipitation with the Archimedean copula function
YE Yanting, GONG Junqiang, ZHANG Haixia, LI Jian
Journal of Tsinghua University(Science and Technology). 2023, 63(6): 874-881. DOI: 10.16511/j.cnki.qhdxxb.2023.22.022
Abstract | PDF (3764KB) (176)
[Objective] Tropical cyclones (TCs) are among the biggest threats to life and assets in coastal areas. A TC is a stochastic event characterized by multiple hazards, such as strong wind, heavy rain, storm surge, and flooding, which can cause significant impacts individually or in combination. Exploring the relationship between the multiple attributes of a TC can help estimate its severity and aid emergency response and risk management. Strong wind and heavy rain are the two most severe hazards of TC disasters. Generally, a TC weakens rapidly after landfall owing to the mountainous terrain of coastal areas, and its intensity (wind) decays within a very short period. The maximum wind speed (MWS) of the TC at landfall reflects the threat posed by its strong winds; MWS also contributes to the rise in water levels caused by storm surges. Total precipitation (TP) indicates the intensity of TC rainfall as well as the potential impact of inland floods and waterlogging. However, the relationship between MWS and TP is complex and nonlinear, and no explicit formula expresses it. The copula is an effective probabilistic method for modeling the dependence between two or more variables with uniform cumulative distribution functions (CDFs).[Methods] Therefore, in this study, a bivariate copula function was used to construct the joint probability of MWS and TP. Four marginal distribution models (Gamma, Gumbel, Weibull, and generalized extreme value (GEV)) were first fitted based on 553 MWSs at landfall and TPs over land in China (1951-2015). Three two-dimensional Archimedean copula functions (Clayton, Frank, and Gumbel) were then used to construct the joint probability of MWS and TP. The Kolmogorov-Smirnov (K-S) test at a 5% significance level and the ordinary least squares (OLS) values were used to determine the best marginal and copula models. The characteristics of the marginal CDFs and the joint probability were discussed. The conditional probability of TP was also calculated and discussed, since TC intensity (wind) is easier to obtain than precipitation.[Results] The results of this study are as follows: (1) Weibull and Gamma are the best marginal CDFs for MWS and TP, respectively, and the Gumbel copula is the best copula function. The fitted Gumbel copula PDF values in the upper and lower tails are relatively high, indicating that TCs with MWS and TP simultaneously strong or weak are more probable than TCs with only one of the two being severe. (2) The maximum of the conditional probability increases with MWS, indicating that the most probable TP is also strong when MWS is strong. (3) Here, TP∈[1000, 2000]×10⁸ m³ is defined as strong TP. When MWS ≤60 m/s, the conditional probability of strong TP increases with MWS; when MWS >60 m/s, the conditional probability of strong TP increases with MWS before a threshold and decreases with MWS after it. Each TP is associated with an MWS threshold, which increases with the concerned TP.[Conclusions] Our findings show that constructing and analyzing the joint probability distribution of MWS and TP improves the understanding of the interaction between TC hazardous wind and precipitation. This study also contributes to a comprehensive investigation of TC multihazard destructiveness.
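As a sketch of the construction described above, the snippet below fits the reported best marginals (Weibull for MWS, Gamma for TP) with scipy and estimates the Gumbel copula parameter from Kendall's tau, a standard moment estimator; the data are stand-ins, and the paper's actual selection used K-S tests and OLS values.

```python
import numpy as np
from scipy import stats

mws = np.random.weibull(2.0, 553) * 30      # stand-in for landfall MWS (m/s)
tp = np.random.gamma(2.0, 300.0, 553)       # stand-in for TP over land

# Fit and transform each margin to uniform [0, 1] via its fitted CDF
u = stats.weibull_min(*stats.weibull_min.fit(mws, floc=0)).cdf(mws)
v = stats.gamma(*stats.gamma.fit(tp, floc=0)).cdf(tp)

tau, _ = stats.kendalltau(mws, tp)
theta = 1.0 / (1.0 - tau)                   # Gumbel parameter; needs tau >= 0

def gumbel_copula_cdf(u, v, theta):
    """Joint probability C(u, v) of the Gumbel (extreme-value) copula."""
    s = (-np.log(u)) ** theta + (-np.log(v)) ** theta
    return np.exp(-s ** (1.0 / theta))

joint = gumbel_copula_cdf(u, v, theta)      # P(MWS <= m, TP <= t) per pair
```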
Figures and Tables | References | Related Articles | Metrics
Fast reconstruction of a wind field based on numerical simulation and machine learning
LI Congjian, GAO Hang, LIU Yi
Journal of Tsinghua University(Science and Technology). 2023, 63(6): 882-887. DOI: 10.16511/j.cnki.qhdxxb.2023.22.014
Abstract | PDF (14181KB) (278)
[Objective] In recent years, under strong winds, trees and walls collapse, objects fall, and other incidents occur from time to time, seriously affecting the safety of community residents. In traditional emergency rescue, the background wind field at the disaster site is unknown, which degrades the accuracy of accident development assessment. In the case of fire, gas leakage, strong wind, and other disasters, decision-makers and rescue teams cannot accurately locate the dangerous areas in the community because they cannot rapidly obtain accurate background wind field information, which affects the judgment of the disaster scope and development trend. Key dangerous areas in the community under strong wind therefore need to be identified.[Methods] In this study, the wind field of a community in the Shijingshan District of Beijing was taken as an example for scene modeling. To generate the community wind field database, the wind fields were simulated with OpenFOAM, using a shell script for batch simulation. The speeds at the feature points obtained by k-means clustering served as the input, and the full wind field served as the output to train the neural network. The selected community feature points could represent the wind field information of the community. The feature point selection and neural network modeling were continuously optimized based on the training and prediction results until the accuracy met the requirements.[Results] Taking the field data of 6 681 points predicted from 10 feature points as an example, the model training and test results on 7917 training wind fields and 2026 testing wind fields were as follows: the average relative errors of the predicted speeds above 1 m/s along the x- and y-axes were 5.8% and 6.2%, respectively. Among them, the average relative error of model predictions between 1 m/s and 2 m/s was 11.9%, between 2 m/s and 5 m/s was 6.0%, between 5 m/s and 10 m/s was 3.2%, and above 10 m/s was 3.5%.[Conclusions] Compared with numerical simulation, the neural network model can rapidly generate the background wind field of the community from the field location data; the time to generate a field is significantly reduced. Unlike existing neural network models, the proposed model takes actual community points as the feature points for model training and prediction, enabling the installation of sensors and the prediction of real-time wind fields. Therefore, people can organize risk prevention and emergency rescue according to the background wind field, which is of great significance for maintaining community safety.
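An illustrative sketch of this workflow follows; the array sizes are shrunk so it runs quickly, and the network hyperparameters are assumptions. k-means picks representative feature points from the simulated fields, and a neural network maps feature-point speeds to the full field.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

# Stand-in database: rows are OpenFOAM runs, columns are grid-point speeds
# (the paper uses 7917 + 2026 fields over 6 681 points).
fields = np.random.rand(400, 600)

# Cluster grid points by their behaviour across runs; the point nearest each
# cluster centre becomes a "feature point" (10 in the paper).
km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(fields.T)
feature_idx = [int(np.argmin(((fields.T - c) ** 2).sum(axis=1)))
               for c in km.cluster_centers_]

X, y = fields[:, feature_idx], fields       # 10 speeds in -> full field out
model = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=200)
model.fit(X[:320], y[:320])                 # ~80/20 split, as in the paper
print(model.score(X[320:], y[320:]))
```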
Figures and Tables | References | Related Articles | Metrics
Spatial heterogeneity and influencing factors of urban emergency services
TIAN Fengshi, SUN Zhanhui, ZHENG Xin, YIN Yanfu
Journal of Tsinghua University(Science and Technology). 2023, 63(6): 888-899. DOI: 10.16511/j.cnki.qhdxxb.2023.22.004
Abstract | PDF (16321KB) (112)
[Objective] Urban emergency services are mainly divided into fire suppression and technical rescues. Relatively many studies have addressed fire suppression, whereas fewer have focused on technical rescues. Thus, this study aims to map the spatial distribution of fire suppression and technical rescues on a city scale and to quantitatively link them to human population and mobility. The findings are expected to help urban emergency service planning follow the major task shift from fire suppression to technical rescues.[Methods] The global spatial autocorrelation of fire suppression, technical rescues, and their totality (the whole emergency services) was assessed using Moran's I, whereas the local components were indexed with local indicators of spatial association and Getis-Ord Gi*. Human population and mobility were modeled through points of interest (POI) and visitor throughput, respectively. Through stepwise regression, the five broad POI categories (14 minor categories) with the highest sensitivities were selected from the available 30 categories. Their associations with fire suppression, technical rescues, and the whole emergency services were established using the multiscale geographically weighted regression (MGWR) model.[Results] Both fire suppression and technical rescues showed a certain degree of spatial clustering, but with differences in the spatial distribution of the two service types. Old towns were the concentrated hot spots for fire suppression, whereas the clusters of technical rescues were more extensively distributed; thus, technical rescues require more multiregional linkage and targeted prevention than fire suppression. The POI data of residential premises, office premises (such as office buildings, and public administration and public service institutions), industrial premises (such as industrial parks and mines), educational premises (such as schools, scientific research institutions, and training institutions), and commercial premises (such as supermarkets, convenience stores, home appliance stores, digital device stores, and beauty salons), together with visitor throughput, were closely connected to emergency services. The MGWR model outperformed both traditional multiple linear regression and conventional GWR, with the goodness of fit R² exceeding 0.8 for fire suppression, technical rescues, and the whole emergency services; the residual sum of squares and the corrected Akaike information criterion (AICc) were smallest for the MGWR model. The correlations of local visitor throughput and POIs (e.g., residential buildings, offices, and retail stores) with local fire suppression were approximately spatially uniform. By contrast, the connections of other POIs (e.g., offices, schools, and industrial parks) with fire suppression and technical rescues varied across the city domain.[Conclusions] The findings indicate that targeted prevention should be employed in the city's emergency services according to the local POI distribution and should be steered toward technical rescues, which have become the main part of the overall emergency services. This study provides an important reference for future emergency preparedness and response, regional prevention work, and location planning of new fire stations.
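For reference, here is a minimal implementation of global Moran's I, the spatial autocorrelation statistic used above; the weights matrix and incident counts are stand-ins.

```python
import numpy as np

def morans_i(x: np.ndarray, W: np.ndarray) -> float:
    """Global Moran's I = (n / S0) * (z^T W z) / (z^T z), S0 = sum of weights."""
    z = x - x.mean()
    return len(x) / W.sum() * (z @ W @ z) / (z @ z)

n = 50
W = np.random.rand(n, n)
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)             # row-standardized weights
x = np.random.poisson(5.0, n).astype(float)   # e.g., rescues per spatial unit
print(morans_i(x, W))  # > 0 clustering, < 0 dispersion, near 0 random
```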
Figures and Tables | References | Related Articles | Metrics
Heat and moisture transfer model of heated clothing for human thermal response calculation
CHEN Feiyu, SHEN Liangchang, FU Ming, SHEN Shifei, LI Yayun
Journal of Tsinghua University(Science and Technology). 2023, 63(6): 900-909. DOI: 10.16511/j.cnki.qhdxxb.2023.22.020
Abstract | PDF (1358KB) (205)
[Objective] Numerical heat and moisture transfer models of clothing are crucial tools for evaluating clothing protective performance, calculating body-environment heat and moisture transfer, and assessing human safety during cold exposure. Existing models primarily concentrate on conventional passive protective clothing (PPC), whereas actively heated clothing (AHC) remains poorly understood: previous studies focused mainly on the fiber scale and cannot simulate dressing conditions of the human body. In this study, we developed a multilayer heat and moisture transfer model of AHC that can be coupled with a human thermal response model.[Methods] First, based on a published PPC model, the heat production and transfer mechanisms of active heating technologies, including electrical heating, phase change materials (PCMs), and moisture-absorption heating, were considered, and a general model for AHC was developed. Specifically, the heat production of electrical heating was calculated from the system voltage, current, and efficiency, and that of PCM was calculated from the phase change speed ratio and enthalpy. For moisture-absorption heating, the heat production was obtained from the moisture-absorption and heat-generation curves of the fabric, calculated by applying the specific heat and temperature change ratio. Second, we specifically considered electrically heated clothing (EHC), the most widely used type in practice, and improved the model for EHC by considering the clothing's detailed layer structure and radiative and horizontal heat transfer. The clothing layer containing the heating pad was further divided into interlining, pad, and fabric layers to establish more realistic heat-transfer equations. The radiative heat transfer between two clothing layers was derived using the Stefan-Boltzmann law, as heat radiation is significant in EHC systems. The body segment containing the heated area was further divided into heated and nonheated zones, between which horizontal heat transfer was modeled to accurately calculate the local skin temperature.[Results] The model, coupled with a published human thermal response model, was validated against existing experiments with air temperatures ranging from -20℃ to 8℃. The general model was validated with data from an EHC experiment at 8℃ and a PCM clothing experiment at 5℃; the errors of the mean skin, core, and microclimate temperatures did not exceed 0.58℃, 0.16℃, and 1.59℃, respectively. The improved EHC model was validated with data from a series of experiments with air temperatures from -20℃ to 0℃ and air velocities from 0 to 5 m/s. For thermal response prediction, the errors of the mean skin, local skin, and core temperatures did not surpass 0.20℃, 0.47℃, and 0.14℃, respectively; for clothing evaluation, the error of the effective heating power was about 0.10 W.[Conclusions] The proposed model can be used to assess human thermal safety and clothing protective performance in cold exposure cases with AHC, and it can serve as a reference for personal protection, emergency management, and protective equipment research in public safety and environmental ergonomics.
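A hedged sketch of two of the heat-production terms named above; the function signatures, default efficiency, and example values are assumptions, not the paper's formulation.

```python
def electrical_heating_power(voltage_v: float, current_a: float,
                             efficiency: float = 0.9) -> float:
    """Q_eh = eta * U * I: heat delivered by an electrically heated pad (W)."""
    return efficiency * voltage_v * current_a

def pcm_heating_power(phase_change_rate: float, enthalpy_j_per_kg: float,
                      mass_kg: float) -> float:
    """Q_pcm = m * (dX/dt) * H: latent heat released as the PCM changes phase (W).
    phase_change_rate is the fraction of material changing phase per second."""
    return mass_kg * phase_change_rate * enthalpy_j_per_kg

print(electrical_heating_power(7.4, 1.2))   # e.g., a small battery-powered pad
```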
Figures and Tables | References | Related Articles | Metrics
Tunable diode laser absorption spectroscopy (TDLAS)-based optical probe initial fire detection system
LI Kaiyuan, YUAN Hongyong, CHEN Tao, HUANG Lida
Journal of Tsinghua University(Science and Technology). 2023, 63(6): 910-916. DOI: 10.16511/j.cnki.qhdxxb.2023.22.024
Abstract | PDF (2970KB) (145)
[Objective] Current fire smoke detectors are susceptible to many factors, such as environmental disturbances and combustion states, that can significantly affect their accuracy and cause false alarms. Moreover, point fire detectors inevitably introduce alarm time lags because of their optical darkrooms and insect-proof nets, delaying critical rescue time. Therefore, focusing on the limitations of point carbon monoxide (CO) detectors with absorbing gas cells, this study proposes an optical probe initial fire detection system based on tunable diode laser absorption spectroscopy (TDLAS) and laser remote sensing, and presents a preliminary threshold-based fire alarm algorithm.[Methods] The detection system relied on TDLAS and laser remote sensing, including wavelength modulation spectroscopy, to extract CO signals over an open path. A laser (wavelength 2331.93 nm) was used as an optical probe to replace traditional absorption measurement chambers, achieving CO path-integrated volume fraction measurement under complex initial fire conditions. First, we tested the signal responses of the detection system for different CO volume fractions, incidence angles, and reflectances using a standard gas cell (length 0.5 m) placed in the optical path and a standard reflector plate placed 4.4 m from the light source, to examine the limits of the system. Then, we tested the received signal power at different distances, examined the CO released from wood pyrolysis fires, calculated the corresponding integral volume fractions, and set an appropriate threshold based on the detection limit to verify whether this optical probe fire detector could meet the requirements.[Results] The detection limit tests showed the following: (1) At a distance of 4.4 m and a target reflectance of 0.69, the theoretical detection limit was approximately 5 (μL/L)·m, and the actual detection limit was 26.75 (μL/L)·m. (2) As the reflectance decreased, the absolute signal intensity also decreased, increasing the uncertainty of the measured volume fraction; the detection limit was 88.22 (μL/L)·m at a reflectance of 0.07. (3) As predicted by calculation, although the absolute value of the second harmonic signal varied slightly over the 0°-10° incidence angle range, the corresponding normalized absorption peak outputs were essentially the same. (4) When the distance to the target reflective surface was within 10 m, a detection limit of no less than 20 (μL/L)·m was achieved. The standard wood pyrolysis fire tests showed that the system, with a theoretical detection limit of 30 (μL/L)·m and a proposed alarm threshold of 70 (μL/L)·m, could trigger the initial fire alarm.[Conclusions] This study demonstrates the feasibility of an open optical path for detecting initial CO release from fire through detection limit tests and standard fire tests. Overall, the proposed method integrating TDLAS and laser remote sensing achieves the goal of detecting initial fire and significantly addresses the shortcomings of gas detectors using absorption cavities and of point fire detectors.
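A sketch of the preliminary threshold-based alarm logic follows. The 70 (μL/L)·m threshold comes from the abstract; the persistence requirement over consecutive readings is an added assumption to suppress single-sample noise.

```python
from collections import deque

ALARM_THRESHOLD = 70.0   # path-integrated CO, (uL/L)*m, from the abstract
N_CONSECUTIVE = 3        # assumed persistence requirement

class FireAlarm:
    def __init__(self):
        self.recent = deque(maxlen=N_CONSECUTIVE)

    def update(self, path_integrated_co: float) -> bool:
        """Return True when the last N readings all exceed the threshold."""
        self.recent.append(path_integrated_co)
        return (len(self.recent) == N_CONSECUTIVE
                and all(v >= ALARM_THRESHOLD for v in self.recent))

alarm = FireAlarm()
for reading in [12.0, 45.0, 72.0, 81.0, 95.0]:   # stand-in measurements
    if alarm.update(reading):
        print("initial fire alarm triggered")
```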
Figures and Tables | References | Related Articles | Metrics
Full-scale experimental study on single-end tunnel fires
YUE Shunyu, LONG Zeng, QIU Peiyun, ZHONG Maohua, HUA Fucai
Journal of Tsinghua University(Science and Technology). 2023, 63(6): 917-925. DOI: 10.16511/j.cnki.qhdxxb.2023.22.019
Abstract | PDF (11846KB) (151)
[Objective] With the advancement of underground space construction in China, the number of single-end tunnels formed during construction has increased annually. To study the smoke-spreading characteristics of fires in single-end tunnels formed during subway construction, a full-scale experiment was performed in the construction section of a subway tunnel.[Methods] The diffusion and settlement laws of smoke in a single-end tunnel were studied by analyzing the overall temperature distribution, wind speed distribution, smoke layer height, and other tunnel parameters, combined with on-site observation.[Results] The results indicate that, under natural ventilation, smoke diffuses more slowly toward the closed end than toward the through end, and the velocity difference decreases with increasing distance between the ignition source and the closed end.[Conclusions] The ceiling flue gas temperature decays more slowly toward the through end than toward the closed end. The flue gas distribution at the through end conforms to the classical model with an exponential decay distribution, while the closed end shows a clear accumulation effect, forming a dangerous section. The flue gas layer height at the closed end is as low as 1.5 m, which is the key consideration for flue gas control and fire emergencies.
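For reference, the classical exponential decay model mentioned above is commonly written as follows (a standard formulation from the tunnel fire literature, not quoted from the paper), where ΔT_x is the ceiling gas temperature rise at distance x from the fire, ΔT_max is the maximum rise above the fire, and k is an empirical decay coefficient:

```latex
\frac{\Delta T_x}{\Delta T_{\max}} = e^{-kx}
```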
Figures and Tables | References | Related Articles | Metrics
Dynamic modeling approach for suppression firing based on cellular automata
JIANG Wenyu, WANG Fei, SU Guofeng, QIAO Yuming, LI Xin, QUAN Wei
Journal of Tsinghua University(Science and Technology). 2023, 63(6): 926-933. DOI: 10.16511/j.cnki.qhdxxb.2023.22.016
Abstract | PDF (17204KB) (171)
[Objective] Suppression firing is a crucial approach for controlling the spread of forest fires. However, existing suppression firing relies mainly on expert experience with little quantitative analysis, making efficient forest fire control difficult.[Methods] In this paper, a fire spread prediction model was implemented to quantitatively simulate and analyze suppression firing. The model adopts a cellular automata algorithm that casts fire spread as a grid dynamics problem. The forest landscape was divided into contiguous regular cells with different burning states (S0: unburned; S1: ignited; S2: flashover; S3: extinguishing; S4: extinguished). Multimodal environmental factors such as fuel type, slope, wind, and temperature were then considered to construct the rate-of-spread function and predict the fire spread speed in various complex scenarios. Next, state update rules were proposed to define how the burning state of forest cells transforms under different fire conditions. The minimum travel time method was adopted to iteratively calculate the ignition time of each cell in the forest landscape, so the spatiotemporal evolution of forest fires in complex environmental scenarios could be quantitatively modeled. Additionally, a trigger mechanism was proposed to define reverse ignition behavior as a grid cell with specific time-trigger constraints, enabling quantitative simulation of human ignition under different spatiotemporal conditions.[Results] To verify the reliability and feasibility of the model, a real forest fire that occurred in the Beibei District of Chongqing in August 2022 was chosen as the study case. Fire data (fuel type, slope, historical weather, fire perimeter, etc.) and firefighting records (the location and time of fire ignition, suppression firing description, etc.) were collected to reconstruct the firing process. The model was applied to the suppression firing in this forest fire to analyze the fire control effect under different environmental conditions. The experimental results showed that the model was superior in predicting the spatiotemporal spread of forest fire, with competitive performance (Jaccard: 0.732; Sorensen: 0.845). The spatial location and ignition time of the reverse ignition in suppression firing were quantitatively analyzed and visualized, demonstrating how the reverse fire burned the fuel in advance and impeded the spread of the free fire.[Conclusions] Quantitative modeling of suppression firing can provide effective decision support for wildfire firefighters to formulate accurate fire control strategies and improve the modernization of forest fire management. As suppression firing is a highly complex and dangerous firefighting strategy, more research on its combustion mechanism and simulation is needed, including the formation mechanism and modeling of local microclimates in a forest fire landscape, the barrier effect of isolation zones, and spatial optimization.
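A toy sketch of the cellular-automaton state machine described above: the states follow the paper's S0-S4, while the neighbour rule, ignition probability, grid, and timing are simplified assumptions (the paper uses a rate-of-spread function and minimum travel times rather than a fixed per-step probability).

```python
import numpy as np

S0, S1, S2, S3, S4 = range(5)  # unburned, ignited, flashover, extinguishing, extinguished

def step(grid: np.ndarray, ignition_prob: float = 0.3) -> np.ndarray:
    """One synchronous update: burning cells may ignite unburned 4-neighbours,
    and every burning cell advances one state toward extinguished."""
    new = grid.copy()
    burning = (grid == S1) | (grid == S2)
    nb = np.zeros_like(burning)
    nb[1:, :] |= burning[:-1, :]; nb[:-1, :] |= burning[1:, :]
    nb[:, 1:] |= burning[:, :-1]; nb[:, :-1] |= burning[:, 1:]
    ignite = (grid == S0) & nb & (np.random.rand(*grid.shape) < ignition_prob)
    new[ignite] = S1
    new[grid == S1] = S2; new[grid == S2] = S3; new[grid == S3] = S4
    return new

grid = np.zeros((50, 50), dtype=int)
grid[25, 25] = S1        # free fire
grid[:, 35] = S1         # suppression firing line, ignited at a chosen trigger time
for _ in range(20):
    grid = step(grid)    # the counter-fire consumes fuel ahead of the free fire
```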
Figures and Tables | References | Related Articles | Metrics
Top-level metrics decomposition and allocation method for large firefighting aircraft fire-extinguishing missions and its application
GU Yin, LIN Kaiyi, XIANG Tuoyu, ZHOU Rui, SHEN Shifei
Journal of Tsinghua University(Science and Technology). 2023, 63(6): 934-940. DOI: 10.16511/j.cnki.qhdxxb.2023.22.023
Abstract | PDF (2467KB) (246)
[Objective] The top-level metrics for evaluating the performance of an aircraft fire-extinguishing mission system are the effective drop utilization rate and the uniformity. The former is the ratio of the liquid collected on the ground within the effective coverage area to the total liquid dropped; it ensures that sufficient liquid falls within the effective coverage area. The latter is the average thickness of ground coverage within the effective coverage area; it ensures that the coverage thickness meets the requirements of the firefighting task. However, physical modeling and top-level metrics decomposition and allocation for fire-extinguishing mission systems have not yet been documented. The current work develops a semiphysical and semiempirical model for the top-level metrics decomposition and allocation of large firefighting aircraft fire-extinguishing missions.[Methods] Based on the ground pattern and fraction models of Legendre et al. and Gu et al., respectively, we establish a semiphysical and semiempirical model for the top-level metrics of aircraft fire-extinguishing missions by coupling logical reasoning and theoretical derivation. We clarify the quantitative relationship between the top-level metrics and the parameters of the design stage (such as the average flow rate and the total amount of liquid dropped) and the planning stage (such as the viscosity and density of the released liquid, flight velocity, and flight altitude) of the fire-extinguishing mission system. Moreover, a top-level metrics decomposition and allocation method is proposed by applying the semiphysical and semiempirical model in reverse, with a predetermined liquid release performance requirement as the goal. This enables rapid calculation of the ranges of relevant parameter values at the design and planning stages, providing a theoretical basis for the iterative upgrade design of existing aircraft models and for mission planning.[Results] To validate the effectiveness of the method, this study decomposes and allocates the top-level metrics for a typical fixed-wing large firefighting aircraft fire-extinguishing mission system, obtaining the “hatch area” range for the design stage and the “fire-retardant viscosity” and “flight altitude-flight velocity” decision-making planes for the planning stage.[Conclusions] The results indicate that the proposed decomposition and allocation method can, to some extent, guide the optimization design and mission planning of fixed-wing aircraft fire-extinguishing mission systems.
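Both metrics follow directly from a gridded ground-pattern measurement, as the sketch below illustrates; the arrays, total dropped volume, and effective-coverage cutoff are stand-in assumptions.

```python
import numpy as np

depth_mm = np.random.rand(60, 120) * 3.0   # measured coverage thickness per cell
cell_area_m2 = 1.0
effective = depth_mm >= 0.8                # assumed effective-coverage cutoff (mm)

# 1 mm of depth over 1 m^2 equals 1 L, so depth * area converts to litres.
collected_l = (depth_mm * effective).sum() * cell_area_m2
dropped_l = 12000.0                        # total liquid released (stand-in)

utilization = collected_l / dropped_l      # effective drop utilization rate
uniformity = depth_mm[effective].mean()    # mean thickness in the covered area
```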
Figures and Tables | References | Related Articles | Metrics
Risk assessment method of gas pipeline networks based on fuzzy analytic hierarchy process and improved coefficient of variation
DU Yuji, FU Ming, DUANMU Weike, HOU Longfei, LI Jing
Journal of Tsinghua University(Science and Technology). 2023, 63(6): 941-950. DOI: 10.16511/j.cnki.qhdxxb.2023.22.010
Abstract | PDF (3287KB) (249)
[Objective] Reliable risk assessment results can help improve the efficiency of safety management in gas pipeline networks. The Kent method is a widely accepted risk assessment method. However, the literature suggests that the Kent method is inadequate in determining weights, scoring items, and scores, and its index weights and scoring criteria depend on expert experience, which is highly subjective. The traditional Kent method therefore needs to be improved to suit the risk assessment of different gas pipeline networks. To improve the objectivity and accuracy of risk assessment of gas pipeline networks, a quantitative risk assessment method based on the fuzzy analytic hierarchy process and an improved coefficient of variation (FAHP-ICV) is proposed.[Methods] In this work, based on data from gas pipeline networks and their surroundings, the traditional risk assessment method was improved in terms of the index system and the determination of weighting and scoring criteria using statistical methods. First, a risk assessment index system comprising three primary and nine secondary indicators was constructed considering the actual operation of the gas pipeline networks in a province. Second, the subjective weighting method represented by the analytic hierarchy process and the objective weighting method represented by the coefficient of variation method were improved: the fuzzy analytic hierarchy process replaced the traditional one, and the improved coefficient of variation method modified the weighting results of the original coefficient of variation method. The two methods were combined to determine the comprehensive weights of the evaluation indicators based on expert experience and the inherent patterns in the indicator data. Next, based on K-means clustering and sampling techniques, the sample data for the pipe sections were determined and preprocessed through probability analysis to determine the upper bounds of the evaluation indicator scores; the final scoring criteria were then determined by integrating expert reports. Finally, a linear integrated assessment method was used to calculate the relative risk values of the pipe sections to achieve risk ranking and classification.[Results] To analyze the distribution of risk classes across gas pipe sections, the relative risk values of gas pipe sections in 12 cities were calculated and compared against the risk class classification criteria. For example, in City 4, a comparison between the distribution of risk classes across gas pipe sections and the local map showed that the overall risk of the city was relatively high. The average risk values for gas pipe sections and different-level indicators were compared across the 12 cities; four cities were found to have great risk, and one city was found to have significant risk. Furthermore, cross-analysis was carried out on the cities whose inspection and maintenance indicators suggested noncompliance with gas pipeline inspection regulations.[Conclusions] The feasibility and applicability of the method were verified through examples, providing new ideas and methods for the quantitative risk assessment of gas pipeline networks.
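A sketch of the combined weighting step described above: each indicator's coefficient of variation across pipe sections sets its objective weight, which is blended with a subjective FAHP weight. The blend coefficient, stand-in data, and uniform FAHP weights are assumptions, and the paper's "improved" CV correction is not reproduced here.

```python
import numpy as np

scores = np.random.rand(200, 9)                 # 200 pipe sections x 9 indicators
cv = scores.std(axis=0) / scores.mean(axis=0)   # dispersion relative to the mean
w_objective = cv / cv.sum()                     # coefficient-of-variation weights

w_subjective = np.full(9, 1 / 9)                # stand-in for FAHP weights
alpha = 0.5                                     # assumed blending coefficient
w = alpha * w_subjective + (1 - alpha) * w_objective

risk = scores @ w                               # linear integrated assessment
ranking = np.argsort(risk)[::-1]                # highest-risk sections first
```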
Figures and Tables | References | Related Articles | Metrics
Case reasoning model for the “beyond a reasonable doubt” standard
WANG Jia, WANG Weixi, HUANG Mengyao, WANG Litao, SHEN Shifei
Journal of Tsinghua University(Science and Technology). 2023, 63(6): 951-959. DOI: 10.16511/j.cnki.qhdxxb.2023.22.009
Abstract | PDF (1519KB) (142)
[Objective] To maintain social justice and prevent unjust verdicts, the criminal facts in social security cases must be determined according to the “beyond a reasonable doubt” (BRD) standard during the trial stage. This is a crucial prerequisite for judging whether the suspect is guilty under the principle of the presumption of innocence. BRD is an essential standard of proof in criminal cases, but the concept is relatively vague and abstract, making it challenging to implement in practice, and research on this issue is insufficient.[Methods] This study applies the chain rule and Bayesian inference to analyze the BRD standard in depth. The rationality of the causal logic is used to examine rationality under the reverse causal logic, and a judgment rule for the rationality of case facts is put forward: a claim is reasonable given the evidence if and only if the claim is a priori reasonable and the claim can reasonably explain the evidence. Accordingly, a chain rule of evidence interpretation is proposed, which decomposes the interpretation of multiple pieces of evidence into interpretations of single pieces of evidence, simplifying the analysis. Based on these rules, a case reasoning model oriented to the BRD standard is proposed. The model represents the claims of the prosecution and the defense as sequences of actions and defines the a priori reasonableness of the claims and the reasonable interpretation of the evidence. The model further defines independence between evidence interpretations and the independent division of the evidence, so that the relationship between the independent divisions and the evidence can be reasonably explained.[Results] The proposed model is applied to two common types of criminal cases: a vehicle collision and a homicide. The prosecution and defense opinions are examined through the reasoning model, and the model analysis results are compared with the actual case facts to verify the effectiveness of the model. The comparison shows that when a case meets the BRD standard, the model can accurately determine the facts of the case, and the basis provided by the model is consistent with the reasons given in the actual trial. When a case does not meet the BRD standard, the model's inference results are consistent with the actual trial results.[Conclusions] The results confirm that, for cases that meet the BRD standard and those that do not, the proposed model provides correct judgments to assist in determining the facts of the case and the corresponding basis. The model can provide robust help and support for judicial professionals in fact reasoning during court trials.
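A toy numeric reading of the judgment rule above in Bayesian terms: a claim H is evaluated against evidence E by combining its prior reasonableness P(H) with how well it explains E, P(E|H); the chain rule lets independent evidence groups update the posterior sequentially. All probabilities here are illustrative assumptions.

```python
def posterior(prior_h: float, lik_e_given_h: float,
              lik_e_given_not_h: float) -> float:
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    num = lik_e_given_h * prior_h
    den = num + lik_e_given_not_h * (1.0 - prior_h)
    return num / den

p = 0.5   # a priori reasonableness of the claim (assumed)
# Each pair: how well the claim vs. its negation explains one evidence group.
for lik_h, lik_not_h in [(0.9, 0.2), (0.8, 0.3), (0.95, 0.1)]:
    p = posterior(p, lik_h, lik_not_h)
print(f"P(claim | all evidence) = {p:.3f}")
```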
Figures and Tables | References | Related Articles | Metrics
Method of stress reaction induction in disaster scenarios based on virtual reality
WANG Shuyi, MAO Liangliang, WU Xinyang, TIAN Chengcheng, LIU Yi
Journal of Tsinghua University(Science and Technology). 2023, 63(6): 960-967. DOI: 10.16511/j.cnki.qhdxxb.2023.22.005
Abstract | PDF (4004KB) (208)
[Objective] As natural disasters have major impacts on urban security, research on individual behavior in disaster scenarios is crucial to support emergency decision-making. However, disaster occurrence is highly uncertain, making it difficult to collect behavioral data in real disaster scenarios.[Methods] This paper proposes a stress induction method for disaster scenarios that can be conducted in a laboratory: the stress reactions of individuals facing disasters are induced using virtual reality (VR) videos of disaster scenarios. The effectiveness of the method was tested through two experiments. The first experiment comprised baseline, stress, and recovery periods, representing before, during, and after watching the VR videos; questionnaires and physiological data, such as photoplethysmography (PPG) and electrodermal activity (EDA), of 180 participants were collected. The second experiment was a simulated driving experiment with 75 participants, conducted in two sessions separated by at least one week to avoid memory interference. In the first session, we collected car-following data on a highway; in the second, participants drove in the same traffic scenario after watching the VR video.[Results] The first experiment indicated significant differences in physical and psychological indicators among the baseline, stress, and recovery periods. The positive affect and negative affect scale (PANAS) data demonstrated that the VR video elicited pronounced scared, nervous, and jittery emotions, consistent with the negative compound emotions of disaster scenarios. Meanwhile, interested and attentive emotions did not change significantly, indicating that the participants remained in good condition during the experiment. In addition, more than 80% of the participants reported that the VR video induced a high excitation level, and nearly 90% did not feel dizzy while watching. Furthermore, pulse rate variability indicators, such as the standard deviation of NN intervals (SDNN) and low frequency (LF) power, and EDA were significantly higher in the stress period than in the baseline period. These physiological changes indicated that the participants were sympathetically excited and became stressed after watching the VR video. However, some physiological indicators had high standard deviations due to individual variability; therefore, VR videos used as stressors should be standardized. The second, driving simulation, experiment showed a significant increase in acceleration and a significant decrease in headway after watching the VR video compared with not watching it, demonstrating that driving style may become more aggressive after watching the VR video. This result is consistent with previous findings on driving characteristics in normal and emergency situations. The changing pattern of driving behavior across the two sessions showed individual variability, which requires further study.[Conclusions] The two experiments indicate that VR videos can induce the stress reactions of disaster scenarios, as demonstrated in physical, psychological, and behavioral aspects. This paper provides a research method for investigating human behavioral characteristics in disaster scenarios.
Figures and Tables | References | Related Articles | Metrics
Discovery and evolution of hot topics of network public opinion in emergencies based on time-series supernetwork
CHEN Shuting, SHU Xueming, HU Jun, XIE Xuecai, ZHANG Lei, ZHANG Jia
Journal of Tsinghua University(Science and Technology). 2023, 63(6): 968-979. DOI: 10.16511/j.cnki.qhdxxb.2023.22.007
Abstract | PDF (10204KB) (222)
[Objective] Network public opinion security is an important part of social security, and identifying and tracking hot topics is the basis for governing online public opinion in emergencies. Existing research on the evolution of hot topics is limited by text semantics, and online public opinion text suffers from severe sparsity. Existing models of online public opinion events based on complex network analysis focus on the static and isolated dimension of user characteristics, providing an incomplete representation of such events. Meanwhile, existing supernetwork-based models follow the life-cycle theory of public opinion, divide time windows according to the development stage of public opinion, and construct the windows as nodes in the network; hence, these models are limited to retrospective studies and are difficult to apply in practice.[Methods] To address these problems, a supernetwork model with four layers (social, content, topic, and emotion) was first constructed. The model departs from previously isolated interpretations of the factors relevant to hot topics in online public opinion and effectively exploits the internal relationships between multiple public opinion elements and hot topics, providing a reference for mining hot topic information with complex network characteristics. Second, considering the universality and timeliness of the model, time information was handled differently from traditional research based on the development stage of public opinion: time windows were divided equidistantly at a certain granularity, and the order of the time windows, rather than nodes in the network, represented the connectivity characteristics of the network. To discover, track the migration of, and predict hot topics, a topic popularity index based on supernetwork centrality and a topic popularity change rate index based on the supernetwork centrality change rate are proposed. These indices were verified and analyzed on the “Gansu Baiyin Marathon” Microblog public opinion event.[Results] The findings are as follows: (1) The time-series supernetwork model clearly represents network public opinion events and has significant advantages over traditional methods in model visualization. (2) The topic popularity index accurately identifies the hot topics in each time window and evaluates the changes in topic popularity throughout the development of the event. For example, “accident notification” was the most popular topic in the early stage of the event, and its heat then decreased with fluctuation; the “event guarantee” topic remained popular throughout the event, its popularity fluctuating on a daily basis. (3) Based on the topic popularity curve and network structure, topics with similar communities are found to migrate, such as “liability compensation”, “competition guarantee”, and “popular science knowledge”. (4) The topic popularity change rate index effectively predicts the hot topics of the next time window.[Conclusions] This paper provides a general time-series model for network public opinion events with high sparsity and complex network characteristics. The topic heat and heat change rate indices enable accurate identification of hot topics and accurate tracking of their evolution, migration, and prediction. Further, this study provides intuitive and useful guidance for governing online public opinion in real-world situations.
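An illustrative sketch (assumed graph and formula) of the two indices described above: a topic node's centrality within each time window measures its heat, and the between-window change rate flags rising topics. The paper works on a four-layer supernetwork; this sketch uses an ordinary graph and degree centrality for brevity.

```python
import networkx as nx

def topic_heat(g: nx.Graph, topic_nodes):
    """Degree centrality of each topic node within one time window."""
    c = nx.degree_centrality(g)
    return {t: c.get(t, 0.0) for t in topic_nodes}

# Stand-in graphs for two consecutive equidistant time windows
g_t1 = nx.gnp_random_graph(40, 0.1, seed=1)
g_t2 = nx.gnp_random_graph(40, 0.1, seed=2)
topics = [0, 1, 2]

h1, h2 = topic_heat(g_t1, topics), topic_heat(g_t2, topics)
change_rate = {t: (h2[t] - h1[t]) / (h1[t] or 1.0) for t in topics}
rising = max(change_rate, key=change_rate.get)   # predicted next hot topic
```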
Figures and Tables | References | Related Articles | Metrics
A gamma radionuclide identification method based on convolutional neural networks
DU Xiaochuang, LIANG Manchun, LI Ke, YU Yancheng, LIU Xin, WANG Xiangwei, WANG Rudong, ZHANG Guojie, FU Qi
Journal of Tsinghua University(Science and Technology). 2023, 63(6): 980-986. DOI: 10.16511/j.cnki.qhdxxb.2023.22.011
Abstract | PDF (4192KB) (176)
[Objective] Rapid and reliable radionuclide identification enables rapid monitoring and early warning of radioactive sources, which is essential for safeguarding people from radioactive materials. However, characteristic peak matching algorithms are not suitable for identifying gamma-ray spectra with low gross counts, especially when a spectrum contains overlapping peaks. To improve identification performance for low gross count gamma-ray spectra, this study creates a radionuclide identification model based on convolutional neural networks that can better identify spectra obtained at low dose rates.[Methods] First, a gamma-ray spectrum dataset was created. The gamma-ray spectra of 16 radionuclides were obtained at a dose rate of about 0.5 μSv/h using a LaBr₃ spectrometer with a measuring energy range of 30-3000 keV, a resolution of about 5% at 662 keV, and an acquisition time of about 100 s. Second, a training dataset was developed. To train the model, a large number of gamma-ray spectra of the 16 radionuclides and their two-nuclide mixtures were generated: 1100 spectra were created for each type by varying the gross count and energy drift, yielding 149 600 gamma-ray spectra in total. Among them, 80% were randomly selected for model training and the remaining 20% for cross-validation. Finally, the convolutional neural network was constructed. A random search over hyperparameters using the Keras-Tuner tool determined the ideal architecture: the convolutional layer filter numbers were 96, 128, 32, 256, and 256 in order, the activation function of the convolutional layers was the rectified linear unit, the hidden layer had 480 neurons, and the learning rate was 0.000 029 6. The spectrum labels were one-hot encoded, and the softmax function served as the activation of the output layer. The model parameters were optimized with the Adam optimizer using cross-entropy as the loss function, and the radionuclide identification model was obtained after 100 epochs of training.[Results] To estimate the identification performance of the model at a dose rate of about 0.5 μSv/h and acquisition times up to 120 s, we acquired 1 333 gamma-ray spectra of nine single radionuclides and their two-nuclide mixtures using the LaBr₃ spectrometer. The nine radionuclides were ²⁴¹Am, ¹³³Ba, ¹³⁷Cs, ¹³¹I, ²²⁶Ra, ²³²Th, ⁵⁷Co, ²³⁵U, and ⁶⁰Co. The model was used to identify these spectra; its accuracy was 90.11% with an acquisition time of 30 s and increased to 92.63% with an acquisition time of 60 s.[Conclusions] In this study, we propose a radionuclide identification model based on convolutional neural networks. Analyses show that the model can effectively identify the gamma-ray spectra of various radionuclides in a short time at a low dose rate.
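A sketch of the reported architecture follows. The filter counts, hidden width, learning rate, optimizer, and loss follow the abstract; the kernel sizes, pooling, spectrum length, and class count (16 single nuclides plus their 120 pairwise mixtures, matching 149 600 / 1 100 = 136) are inferred or assumed.

```python
from tensorflow import keras

NUM_CLASSES = 136   # 16 single nuclides + 120 two-nuclide mixtures (inferred)

model = keras.Sequential([
    keras.layers.Input(shape=(1024, 1)),   # assumed spectrum channel count
    # Five conv blocks with the reported filter numbers, in order
    *[layer for n in (96, 128, 32, 256, 256) for layer in (
        keras.layers.Conv1D(n, kernel_size=3, padding="same", activation="relu"),
        keras.layers.MaxPooling1D(2),
    )],
    keras.layers.Flatten(),
    keras.layers.Dense(480, activation="relu"),            # reported hidden layer
    keras.layers.Dense(NUM_CLASSES, activation="softmax"), # one-hot labels
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.0000296),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=100, validation_split=0.2) per the abstract
```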
Figures and Tables | References | Related Articles | Metrics
Measurement of radioactivity in water by vacuum rotary evaporation
ZHANG Guojie, LIANG Manchun, HE Shuijun, XU Limei, YANG Dandan, DU Xiaochuang, FU Qi, SHEN Hongmin
Journal of Tsinghua University(Science and Technology). 2023, 63(6): 987-993. DOI: 10.16511/j.cnki.qhdxxb.2023.22.012
Abstract | PDF (2805KB) (123)
[Objective] Measuring radioactivity in low-activity water samples by direct sampling is unreliable and inaccurate. We designed an automatic concentration method based on vacuum rotary evaporation to increase the ease and accuracy of such measurements.[Methods] In this method, the water samples were first evaporated to dryness by vacuum rotary evaporation and then washed with dilute nitric acid; the used nitric acid was sampled and measured with a liquid scintillation spectrometer. An automatic concentration device was designed for these measurements. The optimum water bath temperature for concentrating the samples was determined experimentally, and the relationship between the volume of dilute nitric acid used and the proportion of nuclides recovered was studied to improve the yield of the cleaning process. Given that a portion of the residue is likely to adhere to the inner wall of the container during evaporation, an experiment was designed to study the efficacy of cleaning these residues with 12 mL of 0.05 mol/L nitric acid by determining the number of repeated cleanings required for the container to return to the normal background level after evaporating solutions containing ²⁴¹Am and ⁹⁰Sr at activity concentrations of 20, 5, and 1 Bq/L. After the water samples were automatically concentrated, they were measured using a liquid scintillation spectrometer, and the recovery rates of the two nuclides were calculated at different activity concentrations.[Results] Using vacuum rotary evaporation with a vacuum of 2.0-4.0 kPa, a condenser temperature of -5℃ to 0℃, a rotation speed of 50 r/min, and an initial water bath temperature of 50℃ raised to 60℃ after 50 min, concentrating 1 L of water sample took about 70 min. To reduce the post-cleaning residue and avoid contaminating subsequent water samples, the evaporation vessel should be washed with 12 mL of 0.05 mol/L dilute nitric acid before washing with pure water. After evaporating a 1 L water sample with a total activity below 5 Bq, two to three cleaning operations were needed; after evaporating a 1 L water sample with a total activity of 20 Bq, about five cleaning operations were needed. Eluting with 12 mL of 0.05 mol/L nitric acid gave satisfactory elution effects: the average yield of ²⁴¹Am by the automatic concentration method exceeded 70%, and the average recovery rate of ⁹⁰Sr reached about 80%.[Conclusions] This paper proposes an automatic concentration method based on vacuum rotary evaporation, which offers both a quick turnaround time and high yield.
Figures and Tables | References | Related Articles | Metrics
Impact of the COVID-19 pandemic on the development of core cities in the Beijing-Tianjin-Hebei urban agglomeration: Taking Beijing, Tianjin, and Shijiazhuang as examples
XIAO Xingyu, MEI Shiyu, LIU Ruiqi, WANG Kuo, DENG Qing, HUANG Lida, YU Feng
Journal of Tsinghua University(Science and Technology). 2023, 63(6): 994-1002. DOI: 10.16511/j.cnki.qhdxxb.2023.22.003
Abstract | PDF (2757KB) (296)
[Objective] The negative impact of the COVID-19 pandemic has hindered the development of urban agglomerations. Because of close geographical, transportation, and economic ties, COVID-19 is more likely to be transmitted repeatedly within urban agglomerations. This paper provides references for coordinating the development and security of urban agglomerations and building “resilient cities” in the context of the pandemic. By constructing an indicator system for the urban development level and using an autoregressive integrated moving average (ARIMA) model to predict the urban development level without the pandemic, this study quantitatively assesses the impact of the pandemic on individual cities in the Beijing-Tianjin-Hebei urban agglomeration.[Methods] The study applied an ARIMA model to investigate the urban development mechanism of urban agglomerations in different stages of the pandemic. First, indicators were selected from multiple sources based on the collection and analysis of the literature, and their reliabilities were tested with Cronbach's coefficients. Second, the indicators were assigned weights using an integrated method combining the analytic hierarchy process (AHP) and the entropy weight method. Third, the stages of the COVID-19 pandemic were divided based on monthly data collected from Weibo and other websites. Fourth, based on historical data and pre-pandemic urban development trends, the ARIMA model was used to predict the urban development level without the effect of the pandemic. Finally, the predicted and real values were compared to quantitatively assess the impact of the pandemic on individual cities in the urban agglomeration.[Results] (1) In the context of the pandemic, the urban development level indicators of the three cities reached peak and trough values in the same months. (2) The degree of influence was less than 0 during the outbreak period and gradually decreased to a stable trough value. (3) The degree of influence was greater than 0 in the early stage of the recovery period and gradually decreased to below 0 in the later stage until it reached the trough point.[Conclusions] This study shows that: (1) the COVID-19 pandemic in the central city of an urban agglomeration affects the formulation and implementation of the overall development strategy of the agglomeration; (2) the development patterns of the urban agglomeration converge because of the pandemic; and (3) cities are resilient and have a certain disaster-bearing capacity. To strengthen the construction of the Beijing-Tianjin-Hebei urban agglomeration, the paper suggests that the government improve the urban development level starting from the economy, transportation, people's livelihood, and disaster resilience.
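A minimal sketch of the counterfactual step described above: an ARIMA model fitted on pre-pandemic monthly development-index values forecasts the no-pandemic trajectory, which is then compared with observed values. The (1, 1, 1) order and the data are stand-in assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

pre_pandemic = np.cumsum(np.random.normal(0.5, 1.0, 48))   # 4 years, monthly
observed = np.cumsum(np.random.normal(0.2, 1.0, 12))       # pandemic period

fit = ARIMA(pre_pandemic, order=(1, 1, 1)).fit()
predicted = fit.forecast(steps=12)    # development level without the pandemic
impact = observed - predicted         # < 0: the pandemic suppressed development
```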
Figures and Tables | References | Related Articles | Metrics
Prediction method of the pandemic trend of COVID-19 based on machine learning
REN Jianqiang, CUI Yapeng, NI Shunjiang
Journal of Tsinghua University(Science and Technology). 2023, 63(6): 1003-1011. DOI: 10.16511/j.cnki.qhdxxb.2023.22.006
Abstract | PDF (4476KB) (281)
[Objective] To estimate and predict the actual infection scale of COVID-19 in a population, a COVID-19 pandemic trend prediction method based on machine learning is proposed. This method uses detection data to predict the development trend of the pandemic and can implicitly consider the impact of prevention and control measures. Additionally, this method can predict the number of confirmed cases in the future and estimate the actual infection scale of COVID-19.[Methods] In this paper, a three-step prediction model based on machine learning (TSPM-ML) is proposed. Machine learning algorithms, such as neural networks, random forest, long short-term memory (LSTM), and sequence to sequence (seq2seq), are introduced into the prediction of the COVID-19 development situation, and the detection data are used to predict the number of people diagnosed and the actual scale of the infection in the future. The TSPM-ML includes three steps: (1) predicting the actual infection scale of COVID-19 based on the detection data, (2) predicting the future development trend of the actual infection scale based on the predicted results of the first step, and (3) predicting the number of people diagnosed in the future based on the actual infection scale obtained in the second step. The TSPM-ML is used to predict the actual pandemic situation in Germany, France, South Korea, the United States, Russia, and Finland.[Results] The largest prediction error is in the United States, with a forecast error of 23.71 per million people, while South Korea has the smallest prediction error of 0.63 per million people. Overall, the prediction results of the TSPM-ML are consistent with the simulation and actual data, and the reliability of the model is verified.[Conclusions] The predicted results are consistent with the actual data, and the TSPM-ML is highly reliable. The prediction results can enable government management departments to more accurately understand the actual development trend of COVID-19 and allocate medical resources more effectively, and provide decision support for COVID-19 prevention and control.
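A skeleton sketch of the three-step structure described above; the estimators and data are stand-ins, and any of the named algorithms (neural networks, random forest, LSTM, seq2seq) could fill each role.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

detection = np.random.rand(300, 5)   # daily detection features (stand-in)
infected = np.random.rand(300)       # "actual" infection scale (simulated)
confirmed = np.random.rand(300)      # reported confirmed cases (stand-in)

# Step 1: estimate the current actual infection scale from detection data.
step1 = RandomForestRegressor().fit(detection, infected)
scale_now = step1.predict(detection)

# Step 2: forecast the infection-scale trend from its recent history.
lag = np.stack([scale_now[i - 7:i] for i in range(7, 300)])   # 7-day windows
step2 = RandomForestRegressor().fit(lag, scale_now[7:])

# Step 3: map the (predicted) infection scale to future confirmed cases.
step3 = RandomForestRegressor().fit(scale_now[7:].reshape(-1, 1), confirmed[7:])

future_scale = step2.predict(lag[-1:].reshape(1, -1))
future_confirmed = step3.predict(future_scale.reshape(-1, 1))
```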
Figures and Tables | References | Related Articles | Metrics