ISSN 1000-0585
CN 11-1848/P
Started in 1982
Table of Contents: Volume 63, Issue 5
    PROCESS SYSTEMS ENGINEERING
    Theoretical feasibility based biosynthetic pathway evaluation method
    WEI Yixin, HAN Yilei, LU Diannan, QIU Tong
    Journal of Tsinghua University(Science and Technology). 2023, 63 (5): 697-703.   DOI: 10.16511/j.cnki.qhdxxb.2023.22.001
    Synthetic biology employs cells and enzymes to produce high-value-added chemicals through biocatalytic processes. Synthetic biology uses renewable biomass as the substrate as well as the energy source, thus meeting the requirements of green and sustainable chemical engineering. In synthetic biology, metabolic pathway reconstruction is a key step in the development of industrial strains and the biocatalytic synthesis of chemicals, which aims to determine the reaction pathways and substrates for the synthesis of target compounds in the host; and metabolic pathway reconstruction is the basis for the directed evolution of enzymes and the biosynthesis of target substances. Considering the complexity of the biological metabolic network, a metabolic pathway reconstruction typically results in numerous pathway results. To improve efficiency and reduce invalid experimental attempts, reasonable and effective pathway evaluation and screening are highly important. Current evaluation methods of relevant pathway reconstruction tools are relatively simple, using a few indicators without prioritizing any of them. To solve this problem, this paper proposes a biosynthetic pathway evaluation method. Metabolic path reconstruction generally includes path acquisition and evaluation. The path evaluation process can score and rank the most optimal candidate paths and make recommendations based on the ranking results to obtain several candidate paths to synthesize the target product. This study achieved path acquisition based on the Rhea database. The proposed path evaluation method designed six path evaluation indicators and calculated different path characteristics of several candidate paths, which indicated the path theoretical feasibility, including the path length score, the proportion of real reactions from the database, the path molecular similarity score, the path feasibility score, the proportion of reactions with enzyme information, and the reaction rules feasibility score. These characteristics reflected the theoretical feasibility of biosynthetic pathways from different aspects. To realize the scientific-weighted summation of the indicators, their individual weights were determined using the analytic hierarchy process. Three experts were invited to give the judgment matrices based on their subjective judgments of the relative importance of the indicators. Using these matrices, the respective indicator weights were calculated. Combined with the consistency index, the indicator weights determined by different experts were fused to obtain the final composite index weight, which was subsequently used to calculate the final path score for each path. The effectiveness of the biosynthetic path evaluation method established in this study is demonstrated by an actual test of conversion between benzoate and 2, 3-dihydroxybenzoate. The evaluation score for each path is calculated and used to provide reasonable and interpretable path recommendations. Paths with higher evaluation scores offer advantages in terms of theoretical feasibility. The proposed path evaluation method achieves the expected results, which can be used to deal with the contradiction between the richness of data and difficulty of practice in synthetic biology, reduce the trial-and-error cost of experiments, and provide a basis for increasing the practicability of biological retrosynthesis tools.
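The weighting step described above lends itself to a compact numerical illustration. The sketch below computes principal-eigenvector weights and a consistency ratio for one expert's 6×6 judgment matrix over the six pathway indicators, then fuses several experts' weights. The consistency-based fusion rule, the random-index table, and all names are illustrative assumptions, not the paper's exact formulas.

```python
# Minimal sketch of the AHP weighting step, assuming 6x6 expert judgment matrices.
import numpy as np

RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}  # random consistency index table

def ahp_weights(judgment: np.ndarray):
    """Principal-eigenvector weights and consistency ratio of one judgment matrix."""
    n = judgment.shape[0]
    eigvals, eigvecs = np.linalg.eig(judgment)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)        # consistency index
    cr = ci / RI[n]                     # consistency ratio (conventionally acceptable if < 0.1)
    return w, cr

def fuse_experts(matrices):
    """Fuse several experts' weights, giving more influence to more consistent experts (assumption)."""
    ws, crs = zip(*(ahp_weights(m) for m in matrices))
    alpha = 1.0 / (np.array(crs) + 1e-6)
    alpha /= alpha.sum()
    return np.average(np.vstack(ws), axis=0, weights=alpha)

# final_weights = fuse_experts([m1, m2, m3])   # m_k: 6x6 positive reciprocal judgment matrices
```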
    Design and optimization of a helium separation process by membrane coupled with an electrochemical hydrogen pump
    CHENG Andi, LIU Shishuai, WU Xuemei, JIANG Xiaobin, HE Gaohong, WANG Fan, DU Guodong, XIAO Wu
    Journal of Tsinghua University(Science and Technology). 2023, 63 (5): 704-713.   DOI: 10.16511/j.cnki.qhdxxb.2022.21.040
Helium is a strategic resource that is mainly recovered as a by-product of natural gas processing. The tail gas from the nitrogen rejection unit (NRU) of a liquefied natural gas (LNG) plant is a main helium-containing gas, with a helium mole content of about 1.0%-5.0%. The existing process uses cryogenic distillation to enrich it into crude helium (≥ 50.0%) and obtains high-purity helium (≥ 99.99%) through catalytic oxidation and pressure swing adsorption (PSA). However, the existing helium enrichment process based on cryogenic distillation requires severe operating conditions with high pressure and low temperature, leading to complex operations, high energy consumption, and high equipment investment. Besides, new impurities (e.g., O2 and H2O) are introduced by catalytic oxidation dehydrogenation during helium separation, which significantly increases the load on the adsorption device and leads to the loss of potential hydrogen resources. Therefore, this paper proposes a novel process for separating helium from NRU tail gas based on membrane separation coupled with an electrochemical hydrogen pump (EHP). The whole helium separation process can be divided into two parts: enrichment and purification. A two-stage membrane separation was used to achieve efficient enrichment of low-content helium. Owing to the advantages of membranes, namely, no phase change, a small footprint, and simple operation, the helium enrichment step saves energy. The crude helium was refined through PSA to remove N2 and CH4. Then, the EHP was used to separate hydrogen and helium. Moreover, high-purity hydrogen and helium can be obtained without introducing impurities. Process modeling and data analysis were conducted in Aspen HYSYS. Because of the strong interactions between the process parameters in the two-stage membrane process, the response surface method (RSM) was used to optimize four key process parameters, namely, the membrane areas and feed pressures of the two stages. Since the EHP-based helium purification unit is located at the end of the process and has no interaction with the previous units, single-factor sensitivity analysis was used for its parameter optimization. The optimization results of the membrane separation process show that the helium mole purity and recovery rate of the crude helium can reach 64.94% and 95.67% under the optimal operating conditions (membrane areas of 4759.5 m2 for M-101 and 435.3 m2 for M-102; feed pressures of 6010.3 and 4352.5 kPa for M-101 and M-102, respectively). Furthermore, high-purity helium and hydrogen can be obtained simultaneously through the EHP under the optimal parameters. The applied potential of the two-stage EHP is 1 V, and the MEA areas of the two stages are 39 and 17 m2, respectively. Economic evaluation results show that the production cost of helium in the coupled process is 125.47 CNY/m3. The financial evaluation of the new helium separation process was conducted based on the economic evaluation data. The dynamic payback period is 2.09 years, and the internal rate of return is 79%. In summary, the proposed membrane-EHP coupled helium separation process has significant economic and social benefits. It provides a feasible route for the independent industrial production of high-purity helium in China.
    Monte Carlo simulation of propylene/propane adsorption thermodynamics on molecular sieves
    ZHAO Li, HE Chang, SHU Yidan, CHEN Qinglin, ZHANG Bingjian
    Journal of Tsinghua University(Science and Technology). 2023, 63 (5): 714-722.   DOI: 10.16511/j.cnki.qhdxxb.2023.26.003
The separation of propylene (C3H6) and propane (C3H8) is crucial in the chemical industry. Compared with the existing distillation technology, adsorption separation technology saves a remarkable amount of energy and has recently attracted much attention. Furthermore, molecular sieves with uniform channels are promising for separating olefin and alkane mixtures. Owing to their convenience, speed, and reliability, molecular simulations can provide microscopic information that is difficult to obtain using conventional experiments and play a key role in adsorption material screening. This study systematically investigated the adsorption of C3H6, C3H8, and their binary mixtures on DD3R, ITQ-32, Si-CHA, Si-SAS, ITQ-12, and ITQ-3 molecular sieves to discover potential adsorbents for separating C3H6/C3H8 binary mixtures. The COMPASS, CVFF, and universal force fields are selected to optimize C3H6 and C3H8, and the accuracy of the force fields is verified using the charge distribution. Grand canonical Monte Carlo (GCMC) simulations are used to simulate the adsorption of C3H6 and C3H8 on the six molecular sieves with 8-ring channels. The GCMC simulations are run for 1.1×107 cycles, with 1×106 cycles for equilibration and the remaining cycles for the ensemble average. To explore the adsorption properties of C3H6 and C3H8, their adsorption capacities and isosteric adsorption heats on the six molecular sieves are simulated at 300 K and 101 kPa. Since an adsorption isotherm can reflect the adsorbate-adsorbent interaction, the adsorption isotherms of C3H6 and C3H8 on the six molecular sieves at 300 K and 1-101 kPa are simulated. Moreover, the Langmuir, Freundlich, Dubinin-Radushkevich (D-R), and Temkin models are used to fit the adsorption isotherms and explore the adsorption mechanism. The adsorption isotherms of C3H6 and C3H8 on Si-SAS at 250, 270, 290, 300, 310, and 330 K are simulated to examine the temperature effect. The adsorption capacities and isosteric adsorption heats of C3H6/C3H8 binary mixtures at 300 K and 101 kPa are simulated, and the selectivity is determined to find excellent adsorbents for separating these mixtures. The relationships between the equilibrium selectivity and the difference in isosteric adsorption heat ΔQst, as well as the adsorption capacity of C3H6 and the total pore volume, are further investigated. The simulation results revealed the following: (1) The adsorption capacities of C3H6 and C3H8 on the Si-CHA and Si-SAS molecular sieves were high. (2) The adsorption isotherms of C3H6 and C3H8 at 300 K were of type Ⅰ and were well fitted by the D-R model. The adsorption capacity of the Si-SAS molecular sieve for C3H6 decreased with increasing temperature. (3) Si-SAS had an adsorption capacity of 2.26 mmol/g for C3H6 in binary mixtures, and the selectivity was 3.94, making it the best adsorbent for separating the mixture. (4) A positive correlation was observed between the equilibrium selectivity and ΔQst, as well as the adsorption capacity of C3H6 and the total pore volume. This systematic study on the adsorption of C3H6 and C3H8 on six molecular sieves provides a reference for selecting candidate adsorbents for separating C3H6/C3H8 mixtures, thus greatly expediting the search for optimal adsorbents. The positive correlation between the selectivity and ΔQst, as well as the adsorption capacity of C3H6 and the total pore volume, provides a theoretical basis for designing and developing excellent adsorbents for C3H6/C3H8 separation.
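As a small illustration of the isotherm-fitting step mentioned above, the sketch below fits Langmuir and Freundlich models to a toy isotherm with SciPy; the pressure-loading data are placeholders, not values from the paper, and the D-R and Temkin models would be added analogously.

```python
# Illustrative isotherm fitting (placeholder data: pressure in kPa, loading in mmol/g).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(p, q_max, b):
    return q_max * b * p / (1.0 + b * p)

def freundlich(p, k, n):
    return k * p ** (1.0 / n)

p = np.array([1, 11, 21, 41, 61, 81, 101], dtype=float)
q = np.array([0.9, 1.8, 2.0, 2.1, 2.2, 2.25, 2.26])

popt_l, _ = curve_fit(langmuir, p, q, p0=[2.5, 0.5])
popt_f, _ = curve_fit(freundlich, p, q, p0=[1.0, 3.0])

for name, f, popt in [("Langmuir", langmuir, popt_l), ("Freundlich", freundlich, popt_f)]:
    r2 = 1 - np.sum((q - f(p, *popt)) ** 2) / np.sum((q - q.mean()) ** 2)
    print(name, popt, f"R^2 = {r2:.3f}")   # goodness of fit for each isotherm model
```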
    Synthesis of refinery hydrogen networks considering compressor types
    ZHOU Yingqian, FENG Xiao, YANG Minbo
    Journal of Tsinghua University(Science and Technology). 2023, 63 (5): 723-729.   DOI: 10.16511/j.cnki.qhdxxb.2022.25.050
[Objective] Hydrogen demands in refineries are increasing annually because of the growing processing of heavy crude oil, which necessitates optimizing the hydrogen network to improve hydrogen usage. The cost associated with hydrogen compressors is the second largest in a refinery hydrogen network, following the fresh hydrogen cost. Thus, the synthesis of hydrogen networks considering gas compression has been an active topic in process systems engineering. Previous work has modeled reciprocating compressors only, focusing on reducing the number of compressors and/or the compression power consumption. However, both reciprocating and centrifugal compressors are widely used in refinery hydrogen networks. The advantage of centrifugal compressors is their suitability for large gas flow rates at moderate exhaust pressures, whereas reciprocating compressors provide high and stable exhaust pressures and are suitable for small gas flow rates. [Methods] This work presents a hydrogen network superstructure that considers the selection of multistage centrifugal and reciprocating compressors. Compressor selection is determined based on the characteristics of reciprocating and centrifugal compressors, considering their inlet gas flow rates and exhaust pressures. A mixed-integer nonlinear programming (MINLP) model is formulated to minimize the total annualized cost, comprising the fresh hydrogen, compressor investment, and compression power costs. The developed MINLP model is examined based on a hydrogen network reported in the literature. It is coded in the General Algebraic Modeling System (GAMS) 35.2 and can be directly solved by the BARON solver. [Results] The results indicated that the optimal hydrogen network contained three centrifugal compressors and six reciprocating compressors, with one reciprocating compressor for two-stage compression and one for three-stage compression. The flow rates of the three centrifugal compressors were larger than the upper flow rate limit of the reciprocating compressors, while the outlet pressures were lower than the upper outlet pressure limit of the centrifugal compressors. [Conclusions] This phenomenon indicates that the flow rate constraint dominates the compressor selection in this hydrogen network. Since the cost correlation of the centrifugal compressor yields lower costs than that of the reciprocating compressor in this study, the centrifugal compressor is preferred when both types of compressors meet the compression demands. Hence, only hydrogen streams with small flow rates and large compression ratios are assigned to reciprocating compressors. Compared with the previous work, although the numbers of compressors are identical, the optimal hydrogen network structures differ notably, and this study obtains a smaller compression power consumption. This result is obtained because earlier studies neglected compressor selection, whereas the mathematical model in this study prefers the less expensive centrifugal compressor. Therefore, the flow rates of several hydrogen streams are enlarged to satisfy the flow rate constraint of the centrifugal compressors, which is also more consistent with refinery practice. Finally, the computation time of the MINLP model is only 0.72 s, thereby demonstrating the usefulness and convenience of the proposed method.
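A toy sketch of the compressor-selection logic follows. In the paper this choice is a binary decision embedded in the MINLP superstructure; here it is reduced to a standalone rule with hypothetical flow and pressure limits, purely to illustrate how flow-rate and exhaust-pressure constraints drive the choice between compressor types.

```python
# Hypothetical, rule-based illustration of compressor-type selection (not the paper's MINLP).
from dataclasses import dataclass

@dataclass
class Stream:
    flow: float    # inlet gas flow rate, e.g. mol/s
    p_out: float   # required exhaust pressure, kPa

# Placeholder limits; the real bounds come from equipment correlations in the model.
RECIP_MAX_FLOW = 50.0        # reciprocating: small flow, high and stable exhaust pressure
CENTRIF_MAX_P_OUT = 8000.0   # centrifugal: large flow, moderate exhaust pressure

def select_compressor(s: Stream) -> str:
    # Prefer the (cheaper, in this study's cost model) centrifugal machine when both are feasible.
    if s.flow > RECIP_MAX_FLOW and s.p_out <= CENTRIF_MAX_P_OUT:
        return "centrifugal"
    return "reciprocating"

print(select_compressor(Stream(flow=120.0, p_out=6000.0)))   # -> centrifugal
print(select_compressor(Stream(flow=10.0, p_out=9000.0)))    # -> reciprocating
```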
    BIG DATA
    Unsupervised learning-based intelligent data center power topology system
    JIA Peng, WANG Pinghui, CHEN Pin-an, CHEN Yichao, HE Cheng, LIU Jiongzhou, GUAN Xiaohong
    Journal of Tsinghua University(Science and Technology). 2023, 63 (5): 730-739.   DOI: 10.16511/j.cnki.qhdxxb.2022.21.039
[Objective] In mission-critical cloud computing services, large-scale data center (DC) stability is a key metric that must be guaranteed. However, because of uncertain commercial power supplies and complex power equipment operation processes, DC failure events are inevitable and impactful, affecting related servers and network devices. To mitigate the impact, an accurate DC power topology must be obtained to achieve fast and precise failure handling and root-cause localization, thereby limiting the damage to service quality. Nevertheless, the current process of generating DC power topology is labor intensive, and its correctness cannot be efficiently evaluated and guaranteed. [Methods] To solve these issues, instead of relying on the possibly erroneous power topology provided by the operator, this paper designs an intelligent DC power topology system (IPTS). IPTS is based on an unsupervised learning framework that automatically generates the power topology for the working part of a power system or uses the power system monitoring data to verify manually constructed DC power topology, which may change over time. The intuition behind IPTS is that two physically connected pieces of power equipment should have not only a similar trend but also a close magnitude in specific monitoring data, e.g., current and active power, because the power loads produced by their downstream servers are close. By defining the structure abstraction of the DC power system according to the domain knowledge of DC power system architectures, the DC power system can be divided into several hierarchical functional blocks. Then, two unsupervised structure learning algorithms, namely, the one-to-one (O2O) and one-to-multiple (O2M) structure learning algorithms, are separately developed to automatically recover the O2O and O2M connection types between all pieces of power equipment in a divide-and-conquer manner. Moreover, no existing methods or metrics can be used to verify enterprise DC power topology other than manual checking, which is highly complex because of the multiple data sources and numerous connections involved. To better indicate the consistency of connections between any two pieces of power equipment, this paper further designs an evaluation metric called the consistency ratio (CR). The CR derives from a systematic evaluation process that compares the original enterprise DC power topology information with the learning-based enterprise DC power topology information produced by IPTS automatically and iteratively. [Results] The experimental results on two large-scale DCs show that IPTS automatically generates accurate DC power topology with a 10% improvement on average over existing state-of-the-art methods and effectively reveals most errors (including errors in the local system for operations) in manually constructed DC power topology with a precision of 0.990. After performing corrections according to the verification results, the CR values between the learned structure and the modified DC power topology can be improved to 0.978 on average, which is 18%-113% higher than those of the original topology. Additionally, this paper provides a comprehensive investigation of the inconsistent cases that occurred while generating and verifying the power topology. [Conclusions] IPTS is the first system that uses data analytics for DC power topology generation and verification, and it has been successfully deployed for 19 enterprise DCs and applied in real large-scale industrial practice.
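The intuition stated above (connected devices share both a trend and a magnitude in their monitoring series) can be sketched as a simple pairing score plus a greedy one-to-one assignment, as below; the 50/50 score weighting and the greedy matching are illustrative assumptions, not the O2O algorithm itself.

```python
# Sketch of trend-plus-magnitude matching between upstream and downstream devices.
import numpy as np

def pair_score(x: np.ndarray, y: np.ndarray) -> float:
    trend = np.corrcoef(x, y)[0, 1]                                            # trend similarity
    mag = 1.0 - abs(x.mean() - y.mean()) / max(x.mean(), y.mean(), 1e-9)       # magnitude closeness
    return 0.5 * trend + 0.5 * mag                                             # equal weights (assumption)

def o2o_match(upstream: dict, downstream: dict):
    """Greedy one-to-one matching of upstream to downstream devices by pair score."""
    pairs = sorted(((pair_score(u, d), ui, di)
                    for ui, u in upstream.items() for di, d in downstream.items()),
                   reverse=True)
    used_u, used_d, links = set(), set(), []
    for s, ui, di in pairs:
        if ui not in used_u and di not in used_d:
            links.append((ui, di, round(s, 3)))
            used_u.add(ui)
            used_d.add(di)
    return links
```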
    Distribution consistency-based missing value imputation algorithm for large-scale data sets
    YU Jiayin, HE Yulin, CUI Laizhong, HUANG Zhexue
    Journal of Tsinghua University(Science and Technology). 2023, 63 (5): 740-753.   DOI: 10.16511/j.cnki.qhdxxb.2023.25.003
[Objective] As a significant research branch in the field of data mining, missing value imputation (MVI) aims to provide high-quality data support for the training of machine learning algorithms. However, MVI results for large-scale data sets are not ideal in terms of restoring the data distribution and improving prediction accuracy. To improve the performance of existing MVI algorithms, we propose a distribution consistency-based MVI (DC-MVI) algorithm that attempts to restore the original data structure by imputing the missing values of large-scale data sets. [Methods] First, the DC-MVI algorithm develops an objective function to determine the optimal imputation values based on the principle of probability distribution consistency. Second, the data set is preprocessed by random initialization of the missing values and normalization, and a feasible missing value update rule is derived to obtain the imputation values with the closest variance and the greatest consistency with the complete original values. Next, in a distributed environment, the large-scale data set is divided into multiple groups of random sample partition (RSP) data blocks with the same distribution as the entire data set by taking into account the statistical properties of the large-scale data set. Finally, the DC-MVI algorithm is trained in parallel to obtain the imputation values corresponding to the missing values of the large-scale data set while preserving distribution consistency with the non-missing values. The rationality experiments verify the convergence of the objective function and the contribution of DC-MVI to distribution consistency. In addition, the effectiveness experiments assess the performance of DC-MVI and eight other MVI algorithms (mean, KNN, MICE, RF, EM, SOFT, GAIN, and MIDA) through the following three indicators: distribution consistency, time complexity, and classification accuracy. [Results] The experimental results on seven selected large-scale data sets showed that: 1) the objective function of the DC-MVI method was effective, and the missing value update rule was feasible, allowing the imputation values to remain stable throughout the adjustment process; 2) the DC-MVI algorithm obtained the smallest maximum mean discrepancy and Jensen-Shannon divergence on all data sets, showing that the proposed method had a more consistent probability distribution with the complete original values under the given significance level; 3) the running time of the DC-MVI algorithm tended to be stable in the time comparison experiment, whereas the running time of the other state-of-the-art MVI methods increased linearly with data volume; 4) the DC-MVI approach could produce imputation values that were more consistent with the original data set compared with existing methods, which is beneficial for subsequent data mining analysis. [Conclusions] Considering the characteristics and challenges of missing values in large-scale data, this paper incorporates RSP into the imputation algorithm and derives the update rules of the imputation values to restore the data distribution, and it further confirms the effectiveness and practical performance of DC-MVI in large-scale data set imputation, such as preserving distribution consistency and increasing imputation quality. The method proposed in this paper achieves the desired results and represents a viable solution to the problem of large-scale data imputation.
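The two distribution-consistency indicators named above can be computed as follows for a single numeric feature; the RBF bandwidth and the histogram binning are assumptions, and the paper's exact estimators may differ.

```python
# Distribution-consistency metrics: RBF-kernel MMD and Jensen-Shannon divergence over histograms.
import numpy as np
from scipy.spatial.distance import jensenshannon

def rbf_mmd(x: np.ndarray, y: np.ndarray, sigma: float = 1.0) -> float:
    """Biased MMD^2 estimate between samples x and y with an RBF kernel (bandwidth sigma assumed)."""
    def k(a, b):
        return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def js_divergence(x: np.ndarray, y: np.ndarray, bins: int = 50) -> float:
    """Jensen-Shannon divergence between the empirical distributions of x and y."""
    lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
    p, _ = np.histogram(x, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(y, bins=bins, range=(lo, hi), density=True)
    return jensenshannon(p, q) ** 2          # scipy returns the JS distance (square root of the divergence)
```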
    PUBLIC SAFETY
    Research on smoke control in a subway tunnel under ceiling multi-point vertical smoke exhaust
    ZHONG Maohua, HU Peng, CHEN Junfeng, CHENG Huihang, WU Le, WEI Xuan
    Journal of Tsinghua University(Science and Technology). 2023, 63 (5): 754-764.   DOI: 10.16511/j.cnki.qhdxxb.2022.26.055
[Objective] To investigate metro interval tunnel fires under ceiling multipoint vertical smoke exhaust, the smoke temperature distribution under the tunnel ceiling is analyzed by performing a series of field fire experiments (0.25-1.25 MW) in a subway section tunnel. [Methods] A fire dynamics simulator (FDS) numerical tunnel model corresponding to the actual size is established. Subsequently, by increasing the fire source heat release rate (5.00-20.00 MW) and the exhaust air volume of the ventilation tunnel (0-120 m3/s), the critical exhaust air volume and the exhaust efficiency are investigated, which can help in mitigating the spread of downstream smoke. [Results] According to the experimental and simulation results, different fire source positions exhibited no effect on the range of the lateral centerline temperature increase area. The position of the smoke exhaust port enabled the suppression of the increase in the ceiling temperature caused by the elevated fire source heat release rate. Establishing an air inlet in the metro tunnel could suppress the reverse flow of the smoke; however, it would make the downstream smoke unstable, and the exhaust port could not completely discharge the high-temperature smoke. The smoke temperature at the exhaust port near the fire source was related to the fire source heat release rate but was almost independent of the exhaust air volume; it remained nearly unchanged as the exhaust air volume increased. Concurrently, the smoke exhaust port near the fire source played a major role in exhausting the smoke and heat. A critical exhaust air volume, whose value was related to the fire source heat release rate, completely exhausted all the smoke generated by the fire. The dimensionless Froude number (Fr), which characterizes the ratio of the inertial force to the buoyancy of the smoke layer, was used to determine whether "plug-holing" occurs in the smoke exhaust system. The critical Fr was calculated to be approximately 2.7, slightly higher than that reported in previous studies. [Conclusions] The exhaust efficiency is an important parameter reflecting the exhaust effect of the exhaust port in the tunnel. The smoke exhaust efficiency of the exhaust port is calculated as the ratio of the mass flow rate of CO in the smoke discharged from the exhaust port to the total downstream CO mass flow rate of the smoke. With the increase in the exhaust air volume, the mass flow of CO discharged from the exhaust port close to the fire source first increases and then gradually levels off, primarily because the exhaust efficiency of the exhaust port reaches saturation. Therefore, for different exhaust air volumes and fire source heat release rates, the exhaust efficiency first increases and then remains constant. Thus, when the exhaust air volume reaches a certain value, the exhaust port can completely discharge all the high-temperature smoke, and the exhaust efficiency of the port becomes 1. An empirical formula relating the smoke exhaust efficiency, Fr, and the dimensionless wind speed is obtained, and it takes the form of a piecewise function.
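For reference, one common form of the plug-holing Froude number and the CO-based exhaust efficiency can be computed as in the sketch below; the Fr definition shown (exhaust velocity against the buoyancy of the smoke layer) is a standard textbook form and may differ in detail from the one used in the paper, and all numbers are placeholders.

```python
# Plug-holing Froude number (one common textbook form) and CO-based smoke exhaust efficiency.
import math

G = 9.81  # m/s^2

def plug_holing_fr(v_exhaust, layer_depth, t_smoke, t_ambient):
    """Fr = v / sqrt(g * d * (Ts - T0) / T0); plug-holing is expected above a critical Fr (about 2.7 in this study)."""
    g_reduced = G * layer_depth * (t_smoke - t_ambient) / t_ambient
    return v_exhaust / math.sqrt(g_reduced)

def exhaust_efficiency(co_flow_exhausted, co_flow_total):
    """Fraction of the CO mass flow removed through the exhaust ports (1.0 means all smoke is discharged)."""
    return co_flow_exhausted / co_flow_total

print(plug_holing_fr(v_exhaust=5.0, layer_depth=0.8, t_smoke=450.0, t_ambient=293.0))
print(exhaust_efficiency(co_flow_exhausted=0.018, co_flow_total=0.020))
```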
    Intelligent dispatching optimization of emergency supplies to multidisaster areas in major natural disasters
    ZHANG Lin, WANG Jinyu, WANG Xin, WANG Wei, QU Li
    Journal of Tsinghua University(Science and Technology). 2023, 63 (5): 765-774.   DOI: 10.16511/j.cnki.qhdxxb.2022.21.038
The frequent occurrence of major natural disasters not only endangers national stability and people's safety but also causes serious economic losses. Since most sudden natural disasters are unpredictable, how to transport emergency supplies to disaster-affected areas quickly and accurately has attracted wide attention. Unlike existing research, this study starts from the rescue characteristics of major natural disasters and constructs an intelligent dispatching model of emergency supplies for multiple disaster areas. Considering that the emergency supplies in each rescue area cannot fully meet the needs of every disaster area, this study constructs an uncertain multiobjective intelligent dispatching model of emergency supplies. Because of the uncertainty and fuzziness of information in emergency situations, the triangular fuzzy number method can help decision-makers make effective decisions; therefore, it is used to express the uncertainty of the emergency supply demand and the transportation time in different disaster areas. The rainstorm disaster in Henan Province, China, in 2021 is taken as a typical case in this study. The objective and actual data on the emergency supplies dispatched in this disaster are obtained from the official websites of the Zhengzhou Temporary Disaster Relief Reserve, the Red Cross Society of China Henan Branch, and the Henan Charity Network. This study sets the quantity of emergency supplies as the decision variable xij (the quantity dispatched from rescue area i to disaster area j), the unit cost as cij, and the transportation time as tij. According to the triangular fuzzy numbers of emergency supply demand and transportation time set in this study, the uncertain variables are represented by the triangular fuzzy number method, and the model is thus transformed into a deterministic multiobjective intelligent dispatching model. Two-dimensional Euclidean distance weighting is used to aggregate the objectives and solve the model. Then, the linear interactive and general optimizer (LINGO) software is used to calculate the emergency supply dispatching strategy from each rescue area to each disaster area. Given the actual situation of limited transportation conditions, each rescue area is usually unable to dispatch all emergency supplies at one time. Therefore, the weight of each type of emergency supply is determined according to the urgency of the actual situation, and the LINGO software is used again to calculate a phased emergency supply transportation scheme. Finally, the optimal emergency dispatching strategy is formulated to meet the research objective of this study. On this basis, a visual comparison is made between the results obtained using the constructed intelligent dispatching model and the demanded quantity of emergency supplies in each disaster area. The dispatched quantities of the various emergency supplies obtained by the model differ little from the actual emergency supply demands of the disaster areas. As a result, large waste in major natural disasters can be avoided. The research findings show that the model has high reliability and that the simulation results are close to the actual situation. It can meet the emergency supply demands of multiple disaster areas and help decision-makers develop effective disaster relief strategies for major natural disasters.
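Two pieces of this workflow are easy to illustrate in code: defuzzifying triangular fuzzy demands and solving a small transportation problem. The paper solves a multiobjective model in LINGO; the single-objective linear program below (SciPy) and the centroid defuzzification rule are simplifying assumptions, and all supplies, demands, and costs are placeholders.

```python
# Defuzzification of triangular fuzzy demands + a toy min-cost transportation LP.
import numpy as np
from scipy.optimize import linprog

def defuzzify(tri):
    """Centroid of a triangular fuzzy number (a, m, b)."""
    a, m, b = tri
    return (a + m + b) / 3.0

supply = np.array([120.0, 80.0])                          # two rescue areas (placeholder units)
demand = np.array([defuzzify((40, 50, 60)),               # three disaster areas, fuzzy demands
                   defuzzify((60, 70, 80)),
                   defuzzify((50, 60, 70))])
cost = np.array([[4.0, 6.0, 9.0],                         # unit transport cost c_ij (placeholder)
                 [5.0, 3.0, 7.0]])

m, n = cost.shape
A_eq, b_eq = [], []
for j in range(n):                                        # each (crisp) demand is met exactly
    row = np.zeros(m * n); row[j::n] = 1.0
    A_eq.append(row); b_eq.append(demand[j])
A_ub, b_ub = [], []
for i in range(m):                                        # shipments cannot exceed each supply
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1.0
    A_ub.append(row); b_ub.append(supply[i])

res = linprog(cost.ravel(), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x.reshape(m, n))                                # dispatch plan x_ij
```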
    Building fire insurance premium rate based on quantitative risk assessment
    HU Jun, SHU Xueming, XIE Xuecai, YAN Jun, ZHANG Lei
    Journal of Tsinghua University(Science and Technology). 2023, 63 (5): 775-782.   DOI: 10.16511/j.cnki.qhdxxb.2022.21.042
Fire is a serious threat to public life and property safety. Insurance is an effective means of dealing with fire risk, and accurately determining the premium rate of a building according to its fire risk is a concern of the insurance industry. Currently, the premium rate is mainly based on the fire frequency and loss expectation from insurance statistics, and adjustments are made based on building risk assessment results. The adjustment schemes can be divided into two types. One is the rate floating model, which gives the floating range of the premium rate based on the risk level, but the floating proportion is fairly subjective. The other is the rate calculation model, which establishes a quantitative risk assessment method to calculate the specific premium rate. However, comprehensively reflecting the hazards in buildings as well as the uncertainty of losses is difficult with current risk assessment methods; thus, the resulting premium rate is relatively rough. A quantitative model for building fire insurance premium rates is constructed in this paper. First, the Bayesian network method is used to calculate the building fire probability considering the influences of various risk sources. The specific factors affecting ignition were comprehensively analyzed from the aspects of humans, objects, and the environment. Accordingly, 14 factors were selected to construct the Bayesian network of building ignition, based on which the probability of building fire can be calculated quantitatively and objectively. Second, Latin hypercube sampling (LHS) is used to stratify the burn rate in the different fire stages, from ignition, growth, and development to spread, with certain distributions to reflect the staged and random characteristics of fire losses. Thus, the final loss distribution, including the expected value, standard deviation, probability density function, and cumulative probability density function, can be acquired accurately. Therefore, the quantitative and dynamic risk assessment of building fire is realized, and the rate calculation model is used to compute the rate based on the result. Fifteen households were selected to calculate their premium rates based on the quantitative assessment of building fire risk, including the ignition probability and loss distribution, and the premium rates are compared with the rates in the insurance market. The results show that the proposed premium rate determination model can effectively reflect the differentiated levels of fire risk and ensure the fairness of insurance. The premise of the building fire insurance premium rate model in this paper is that the insurance company covers all the fire risks of the building; the case of a deductible, in which the insured retains part of the fire risk, is disregarded. In addition, foreign statistics were adopted, and a normal loss distribution at each stage after ignition was assumed because of the lack of domestic data. Deductibles can be considered in further research to construct premium rate models, and accurate data can be acquired to obtain results consistent with the building fire risk level in China.
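The loss-simulation step can be sketched as follows: Latin hypercube samples of the stage burn rates (normal marginals, as assumed in the paper for lack of domestic data) are combined with an ignition probability to give a simple pure premium rate. The stage parameters, the ignition probability, the rule that the loss follows the furthest stage reached, and the loading factor are all placeholder assumptions.

```python
# LHS of stage burn rates -> loss distribution -> toy premium rate (all numbers are placeholders).
import numpy as np
from scipy.stats import norm, qmc

stages = ["ignition", "growth", "development", "spread"]
mu = np.array([0.02, 0.08, 0.20, 0.45])        # mean burn rate per stage (fraction of insured value)
sd = np.array([0.01, 0.03, 0.08, 0.15])

sampler = qmc.LatinHypercube(d=len(stages), seed=0)
u = sampler.random(n=10_000)                    # stratified uniforms in [0, 1)^4
burn = np.clip(norm.ppf(u, loc=mu, scale=sd), 0.0, 1.0)
loss_ratio = burn.max(axis=1)                   # placeholder rule: loss set by the furthest stage reached

p_fire = 2.4e-4                                 # annual ignition probability (e.g., from the Bayesian network)
loading = 0.25                                  # expense / profit loading (assumption)
pure_rate = p_fire * loss_ratio.mean()
premium_rate = pure_rate * (1 + loading)
print(f"expected loss ratio {loss_ratio.mean():.3f}, premium rate {premium_rate:.2e}")
```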
    Effects of the backboard on downward flame spread over polymethyl methacrylate
    LI Dayu, ZHAO Kun, ZHOU Kuibin, SUN Penghui, WU Jinmo
    Journal of Tsinghua University(Science and Technology). 2023, 63 (5): 783-791.   DOI: 10.16511/j.cnki.qhdxxb.2023.22.015
Polymethyl methacrylate (PMMA) is one of the main architectural decoration materials and is widely used in different types of buildings. Once ignited, the flammable PMMA can cause serious building fire accidents. In building decoration, PMMA is used alone or attached to a noninflammable wall. Consequently, PMMA could burn in a free condition or under the wall effect in a building fire. However, experimental studies on the effect of the backboard on PMMA downward flame spread are rare. In this study, the effect of the backboard on the downward flame spread over PMMA of 2- and 5-mm thicknesses was experimentally studied using a self-designed experimental setup. The experimental setup holds a noninflammable backboard that can be dismantled for the downward flame spread of PMMA in the free condition. Experimental observation showed significant changes in the flame color and pyrolysis front shape with and without the backboard. Compared with the free condition, the airflow induced by the entrainment of the burning flame might cause a flow boundary layer across the backboard, which helped to qualitatively explain the difference in the flame color and the pyrolysis front shape between the two thickness conditions. The experimental measurements also showed that the backboard had a negative effect on the downward flame spread rate, particularly for the 2-mm thick PMMA, which was characterized by an increase in the pyrolysis front angle and a drop in the average flame height, mass loss rate, and flame spread rate. The heat transfer analyses for the steady flame spread under the backboard and free conditions were conducted using the geometrical features of the pyrolysis zone, and a comparison between the two conditions in terms of total heat feedback was then made. There are four main conclusions. (1) The backboard caused a darker flame of PMMA and a blue flame of the 2-mm thick PMMA, as compared with those in the free condition. (2) Under the backboard effect, the pyrolysis front showed a "-" shape when the 2-mm thick PMMA burned in the steady stage, while the 5-mm thick PMMA held a pyrolysis front of an inverted "V" shape with a larger front angle than that in the free condition. (3) During the steady stage of flame spread, the backboard reduced the mass loss and flame spread rates by decreasing the total heat feedback to the pyrolysis zone. (4) The reductive effect on the mass loss rate and the flame spread rate was more significant for the 2-mm thick PMMA than for the 5-mm thick PMMA because the former could be fully located in the flow boundary layer, whereas the latter was only partially located in the flow boundary layer in terms of thickness. In summary, the qualitative and quantitative analyses of the experimental results show that the backboard can slow down the downward flame spread by restricting the air entrainment and reducing the heat feedback, and the inhibiting effect is more obvious for the thinner PMMA.
    MEDICAL EQUIPMENT
    Characteristics and progress of research on the reliability of active implantable medical devices
    WANG Weiming, LI Bing, LI Luming
    Journal of Tsinghua University(Science and Technology). 2023, 63 (5): 792-801.   DOI: 10.16511/j.cnki.qhdxxb.2022.21.028
    [Significance] Active implantable medical devices (AIMDs) are key areas for transformation, upgradation, and high-quality development in China's medical device industry.[Progress] The reliability characteristics of AIMD are summarized in this paper. The progress of research on the reliability of AIMDs is systematically described at the system, module, and component levels. At the system level, it is critical to study the primary factors affecting the life of AIMDs, including their structure, material, various stress environment profiles, working conditions, failure mode, failure mechanism, connection, and reoperation. Among them, the connection between the key modules and the content related to the doctor's multiple operations are critical components, thus requiring increased attention. At the module level, the reliability of key modules, primarily including the feed-through, implanted batteries, wires, implanted electrodes, and PCBA, has significantly impacted the reliability of the entire system and may even directly determine the reliability specifications of AIMDs. The reliability assurance of the circuit board has changed significantly with time compared with the traditional high-reliability assurance method. Directions for future development of the reliability assurance of commercial devices include simplifying component-level tests, adding user board-level test content, and designing relevant test conditions and criteria according to actual working conditions. The long-term reliability of the interaction between electrodes and tissues is a key issue in the reliability of AIMDs. Thus, establishing an accurate electrode structure model of the interaction between electrodes and tissues is a research focus. At the component level, the reliability of the key components used in AIMDs is analyzed along with the application scenarios considering capacitors as an example because they are the main cause of dysfunction in pacemakers and defibrillators. To date, no reliability specifications or methods exist for the components of AIMDs to the best of our knowledge. In the development and industrialization of domestic nerve stimulator products, the capacitors screened by reliability screening and the identification method developed by the National Engineering Research Center for Nerve Regulation have been applied to domestic nerve stimulator products, and no capacitor-induced failure has occurred thus far. To a certain extent, research on the reliability of AIMDs is also related to the development and application of new technologies and processes, such as micro-miniaturization-related technology application in the field of active implant medicines. Micro-implantable medical devices can reduce the risks of electromagnetic compatibility and magnetic resonance imaging closely related to lead and can avoid the reliability problems caused by lead deformation and fracture and its simple implantation.[Conclusions and Prospects] Micro-implantable medical devices have become an important development direction of AIMDs, exhibiting clear reliability. Despite the different methods, research in the field of high reliability, such as that of AIMDs, exhibits a prominent feature and development trend of considering design and application as the research subject and focusing on the guarantee of reliability based on the actual application. 
This multi-field and all-around exploration attempt will gradually form a series of specifications or unified standards after a certain stage of closed-loop verification by practical applications.
    Artifact correction algorithm for iodine-131 SPECT planar imaging
    CHENG Li, LIU Fan, GAO Lilei, LIU Hui, LIU Yaqiang
    Journal of Tsinghua University(Science and Technology). 2023, 63 (5): 802-810.   DOI: 10.16511/j.cnki.qhdxxb.2022.26.060
[Objective] Iodine-131 SPECT (single photon emission computed tomography) planar imaging has been widely used in the clinical diagnosis and treatment evaluation of thyroid cancer. Because of the high-energy emissions of iodine-131, the photons have a high probability of penetrating the collimator septa of SPECT, causing "spoke" artifacts in the final result. The "spoke" artifacts make it difficult to distinguish accurate concentration regions of iodine-131, and they may obscure lower uptake regions nearby, such as metastatic spread to lymph nodes. In this paper, a deconvolution method based on the point spread function is combined with a priori regularization to suppress the spoke artifacts and improve the diagnostic accuracy in clinical studies. [Methods] This study is based on the NET632 SPECT system with the corresponding high-energy general-purpose collimator. The collected data follow the Poisson distribution and consist of two parts: a forward projection of the true activity distribution and the scatter data. The forward projection process can be well modeled using a shift-invariant PSF (point spread function). An objective function is built based on the aforementioned approximation, and a priori function is introduced to regularize the reconstruction. A monotonic and convergent algorithm is derived to iteratively solve the objective function. In contrast, the conventional deconvolution method regularizes the solution using a total variation term, and the objective function is optimized based on the "one-step-late" algorithm; thus, nonnegativity and convergence cannot be guaranteed. The triple-energy window method is employed to estimate the scatter data, and PSFs of different sizes are generated based on Monte Carlo simulations. Simulated NEMA torso phantom data are reconstructed with different parameters to validate the monotonicity and convergence of the proposed method. Moreover, the data set is also used to evaluate the effects of the PSF size and regularization strength on the reconstructed images. Normalized spoke counts and background noise are calculated for quantitative comparison. Simulation data are also used to compare the reconstruction performance of the proposed method and the conventional deconvolution method. With the optimized parameters determined from the simulation data, the proposed method is further validated using clinical point data and volunteer data. [Results] With different reconstruction parameters, the objective function value increased monotonically, and the image differences between two adjacent iterations rapidly decreased to a value close to zero. The simulation study also demonstrated that a 127×127 PSF size could provide performance similar to a 255×255 PSF size, which was significantly better than the 63×63 and 31×31 PSF sizes. A study on different regularization strengths suggested an optimal regularization parameter of 0.01. When the PSF size and regularization parameter were set to 127×127 and 0.01, the mean spoke counts could be reduced to 4% of the original value with a low background noise level. A comparison study based on simulation data showed the superiority of the proposed method over the conventional deconvolution method. 
The clinical point data and volunteer data also validated the performance of the proposed method, and the mean spoke counts could be reduced to 35% and 28% of the original values, respectively.[Conclusions] The proposed method suppresses the spoke artifacts in iodine-131 imaging using a PSF that models the physical response, and it also introduces a priori regularization to suppress noise amplified by deconvolution. The derived algorithm can guarantee the monotonicity and convergence of the iterative reconstruction. Studies on simulation data and clinical data have demonstrated that the proposed method can achieve the desired performance and is expected to improve the diagnostic accuracy in clinical studies.
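A minimal sketch of PSF-based deconvolution under a Poisson model is shown below: a Richardson-Lucy style iteration in which the scatter estimate (e.g., from the triple-energy-window method) enters as an additive background. The paper's algorithm additionally includes the prior (regularization) term and carries its own monotonicity and convergence guarantees; those parts are omitted here.

```python
# Richardson-Lucy style deconvolution with an additive scatter term (illustrative sketch only).
import numpy as np
from scipy.signal import fftconvolve

def rl_deconvolve(y, psf, scatter, n_iter=50):
    """y: measured planar image; psf: shift-invariant kernel; scatter: additive scatter estimate."""
    psf = psf / psf.sum()                          # normalize the PSF
    psf_t = psf[::-1, ::-1]                        # adjoint of convolution = correlation
    x = np.full_like(y, y.mean(), dtype=float)     # flat, nonnegative starting image
    for _ in range(n_iter):
        forward = fftconvolve(x, psf, mode="same") + scatter
        ratio = y / np.maximum(forward, 1e-9)
        x *= fftconvolve(ratio, psf_t, mode="same")
        x = np.maximum(x, 0.0)                     # keep the estimate nonnegative
    return x
```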
    Design of dedicated collimator for whole-body bone scanning on single photon emission computed tomography based on Monte Carlo simulation
    WANG Zhexin, LIU Hui, CHENG Li, GAO Lilei, LV Zhenlei, JIANG Nianming, HE Zuoxiang, LIU Yaqiang
    Journal of Tsinghua University(Science and Technology). 2023, 63 (5): 811-817.   DOI: 10.16511/j.cnki.qhdxxb.2022.26.040
[Objective] Single photon emission computed tomography (SPECT) is an important imaging method for radionuclide bone imaging. It can obtain noninvasive three-dimensional functional images for the early diagnosis and staged prognostic evaluation of disease by detecting the γ photons emitted by radioactive drugs in the human body. According to the results of the national nuclear medicine census in 2020, more than 60% of SPECT clinical examinations in China are bone system examinations, indicating a great demand for bone imaging. A bone system examination generally refers to bone scanning, which is a nuclear medicine imaging examination of the whole skeleton and can effectively diagnose various primary or secondary bone tumors. However, the low-energy general-purpose parallel-hole collimator, which is clinically used for SPECT bone scanning, has a low detection sensitivity, which leads to low patient comfort and scanning efficiency. Thus, this study aims to optimize the detection sensitivity of the SPECT system for bone imaging in clinical practice, which can not only reduce the bone scanning time but also improve the bone scanning efficiency and increase the clinical and economic benefits. [Methods] Based on a clinical dual-head SPECT system, this paper designs a dedicated collimator for bone imaging with high detection sensitivity. This study focuses on simulation experiments, including the construction of an overall simulation system, the design of the collimator parameters, and performance evaluation. The overall simulation system follows the parameters of the SPECT system developed by this paper's cooperating company. In the collimator parameter design, theoretical formulas are derived to identify the factors related to the detection sensitivity and resolution of the SPECT system, and different collimator parameters are tested by changing the collimator thickness, septal thickness, and hole diameter. Then, a Monte Carlo simulation, supported by the Center of High Performance Computing, Tsinghua University, is conducted with a point source for performance evaluation, including the detection sensitivity and image spatial resolution. [Results] The results indicate that the relationship between the geometric parameters and the performance of the collimator matches the theoretical formulas well: as the septal thickness increases, the effective open area of the collimator decreases, which reduces the detection sensitivity, while the image resolution shows no obvious change. As the aperture increases, the collimation effect of the collimator is weakened, resulting in a serious decline in resolution; however, more photons reach the scintillation crystal, thereby greatly improving the detection sensitivity. When the aperture becomes too large, the improvement in detection sensitivity cannot make up for the loss brought by the reduction in resolution. When the collimator thickens, the collimation effect is enhanced, and the number of obliquely incident photons that can be detected is reduced, so the detection sensitivity shows a downward trend; however, the image resolution can be improved. [Conclusions] Reducing the collimator thickness and hole diameter is feasible in designing a SPECT collimator for bone scanning. According to the results of the performance evaluation, a collimator design (collimator thickness, 25.5 mm; septal thickness, 0.15 mm; hole diameter, 0.5 mm) is empirically selected. 
It has a detection sensitivity of 183 cpm/μCi and a spatial resolution of 13.6 mm, which can significantly reduce the bone scanning acquisition time while ensuring image quality. The imaging effect of the collimator is evaluated using a hot-rod phantom experiment. The results show that hot rods with a 5.5-mm diameter could be distinguished, demonstrating the imaging performance of our proposed dedicated collimator design for bone scanning.
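The sensitivity-resolution trade-off discussed above can be illustrated with the classic parallel-hole collimator expressions; these are standard textbook formulas rather than the exact ones derived in the paper, and the lead attenuation coefficient and hole-shape factor used here are assumptions (the numbers returned are only the collimator's geometric contribution).

```python
# Classic parallel-hole collimator geometric resolution and efficiency (textbook forms, assumptions noted).
def collimator_performance(hole_d_mm, septa_mm, length_mm, source_dist_mm,
                           mu_per_mm=2.3, k_hex=0.26):
    """Return (geometric resolution in mm, geometric efficiency, dimensionless).

    mu_per_mm ~ linear attenuation of lead at 140 keV (Tc-99m), k_hex ~ hexagonal-hole factor.
    """
    l_eff = length_mm - 2.0 / mu_per_mm                         # effective length (septal penetration correction)
    resolution = hole_d_mm * (l_eff + source_dist_mm) / l_eff   # worsens with distance, improves with length
    efficiency = (k_hex * hole_d_mm / l_eff) ** 2 * (hole_d_mm / (hole_d_mm + septa_mm)) ** 2
    return resolution, efficiency

# Example with the selected design (thickness 25.5 mm, septal thickness 0.15 mm, hole diameter 0.5 mm)
print(collimator_performance(hole_d_mm=0.5, septa_mm=0.15, length_mm=25.5, source_dist_mm=100.0))
```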
    HYDRAULIC ENGINEERING
    Response of benthic macroinvertebrates in highland rivers to the lateral hydrological connectivity: Taking the Quanji River as an example
    ZHOU Xiongdong, LIU Yibo, XU Mengzhen, ZHANG Jiahao, WANG Congcong
    Journal of Tsinghua University(Science and Technology). 2023, 63 (5): 818-829.   DOI: 10.16511/j.cnki.qhdxxb.2023.22.021
    [Objective] Lateral hydrological connectivity alters the hydrodynamic and trophic conditions of a river system, which further affects the diversity, structure, and function of biotic assemblages. Previous studies focusing on lateral hydrological connectivity are mostly conducted on lowland river systems, whereas the ecological effects on highland rivers have yet to be explored. The objectives of this study are as follows: (1) to investigate the benthic macroinvertebrate assemblages in a typical highland river to facilitate the identification of different biotopes based on the macroinvertebrate traits; (2) to characterize biotopes according to the environmental conditions (especially in response to lateral hydrological connectivity), macroinvertebrate diversity, and morphological and functional structures; and (3) to analyze the pattern of how the benthic macroinvertebrate assemblages respond to variation in lateral hydrological connectivity.[Methods] We adopted a combined approach of field sampling, in situ measurement, and laboratory observation. Using benthic macroinvertebrate assemblages as indicators, we analyzed the ecological differences among biotopes with different lateral hydrological connectivities in the Quanji River, a typical highland river in northeastern Qinghai Lake, Qinghai Province, China. Ecological data were analyzed using the following methods: (1) ordination (canonical correspondence analysis, CCA) to analyze the general distribution patterns of macroinvertebrate taxa along critical environmental variable gradients; (2) hierarchical clustering and nonmetric multidimensional scaling (NMDS) based on the pairwise Bray-Curtis distance of macroinvertebrate assemblages to identify the representative biotopes; and (3) ANOVA and Kruskal-Wallis analysis to detect the significant differences in environmental variables and macroinvertebrate indices and to quantify how ecological characteristics respond to the variations in the lateral hydrological connectivity.[Results] Our results showed the following: (1) a total of 122 195 macroinvertebrate specimens were collected from the Quanji River, representing 33 families and 61 genera; (2) macroinvertebrate taxa exhibited different preferences to the environmental conditions and formed a featured distribution along the environmental gradients; (3) results of hierarchical clustering identified four types of biotopes (e.g., G1-G4) for the Quanji River based on the macroinvertebrate dissimilarity as measured by the Bray-Curtis distance, and samples of the four biotopes were also discriminated by the results of NMDS in the 2-D gradients; (4) significant differences in environmental conditions, including physicochemical and trophic conditions, were detected among biotopes, and in particular, the lateral hydrological connectivity from G1/G2 to G3 to G4 showed a clear decreasing pattern (e.g., "open" to "semi-open" to "closed"); (5) macroinvertebrate assemblage characteristics, including biodiversity and morphological and functional structures, were also found to be significantly different among biotopes, which indicated that the lateral hydrological connectivity was strongly related to diversity, structure, and function of macroinvertebrate assemblages; and (6) in the Quanji River basin, biodiversity responded to variations in lateral hydrological connectivity in a "single-valley" pattern, which was markedly distinguished from the unimodal pattern observed in lowland river systems. 
We argue that the hypometabolic and oligotrophic conditions in the highland river system lead to the increased sensitivity of macroinvertebrate assemblages to disturbances. Therefore, the intermediate disturbance preferred by macroinvertebrate assemblages shifts from the median-connectivity environment to the low-connectivity environment in highland rivers, which accounts for the "single-valley" response pattern. [Conclusions] Our study on benthic macroinvertebrate assemblages in different biotopes of the Quanji River has contributed considerably to our understanding of the highland invertebrates' response to variations in lateral hydrological connectivity. We find that the individual biotopes in the Quanji River respond differently to the lateral hydrological connectivity. One of the most intriguing results of our study is that the biological indices, in particular the diversity index, demonstrate a single-valley response pattern to the lateral hydrological connectivity, which is in contrast to the unimodal pattern commonly observed in lowland rivers. This study not only reveals the critical roles of lateral hydrological connectivity variation in structuring highland river ecosystems from biological perspectives but also suggests that the management strategies of highland rivers should differ from those of lowland rivers.
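The community-analysis step (pairwise Bray-Curtis dissimilarity followed by hierarchical clustering into biotopes) can be sketched as below; the abundance matrix is a random placeholder, and the choice of average linkage and four clusters mirrors the four reported biotopes (G1-G4) but is otherwise an assumption.

```python
# Bray-Curtis dissimilarity + hierarchical clustering of macroinvertebrate samples into biotopes.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
abundance = rng.poisson(5, size=(20, 61)).astype(float)   # 20 samples x 61 genera (placeholder counts)

d = pdist(abundance, metric="braycurtis")                  # pairwise Bray-Curtis distances
tree = linkage(d, method="average")                        # UPGMA hierarchical clustering
biotope = fcluster(tree, t=4, criterion="maxclust")        # assign each sample to one of four biotopes
print(squareform(d).shape, biotope)
```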
    Diffusion convection model for conduit multiregime transient-mixed flow
    ZHANG Dong, WANG Enzhi, LIU Xiaoli, WU Chunlu
    Journal of Tsinghua University(Science and Technology). 2023, 63 (5): 830-839.   DOI: 10.16511/j.cnki.qhdxxb.2022.21.041
The evolution of free-surface and pressurized flows and their simultaneous occurrence (mixed flow) often appear in pipe drainage systems (e.g., urban sewerage systems, water conveyance pipelines, and karst conduits) due to variable inflow and outflow conditions. Accurately simulating the transition between free-surface and pressurized flows is of particular significance for preventing infrastructure damage and for operational control in engineering. However, the transition process between free-surface and pressurized mixed flows in channels and conduits leads to difficulties in numerical simulation and poor stability, especially in long-term hydrodynamic forecasting. Thus, an accurate, numerically stable, and computationally efficient physical-mathematical model is urgently needed. Based on the diffusion wave approximation of the Saint-Venant equations, the water hammer equations are extended, and the fluid compressibility is considered to establish a linear relationship between the fluid density and the pressure head. The transition of flow regimes (free-surface, pressurized, and mixed flow) during the flow process is also analyzed, and the Swamee-Swamee formula is adopted. A multiregime mixed flow diffusion convection model is then proposed in this paper. The model not only describes the evolution of flow types but also considers the transition of multiregime flows. Furthermore, a comparison between the proposed and previous models is conducted to demonstrate the improvements and developments in the newly proposed model. Numerical simulations of several benchmark problems are carried out and compared with previous models. The following results are presented. 1) Regardless of free-surface or pressurized flow, an evolution and transition of laminar, transition, and turbulent flows are observed in the pipeline flow with the change in water depth or flow rate, and the laminar and transition flow states cannot be ignored. 2) The simulation results of the proposed model are close to those of the SWMM (storm water management model) and meet engineering requirements. The proposed model has high numerical stability and minimal limitations on the time step and is suitable for long-term hydrodynamic process prediction. The proposed model can accurately describe the mixed flow process in the pipe when the momentum simplification condition is satisfied. 3) The accuracy of the proposed model is higher than that of the Rob model. The multiregime model corrects the mass balance error caused by density variation, ensures mass conservation in the process of pressurized flow, and eliminates the local singular values in the results of the Rob model. 4) Compared with the Rob model, the proposed model also has superior convergence and numerical stability. The proposed model assumes that the fluid density changes continuously during the free surface-pressurized flow to avoid the numerical discontinuity of the water storage coefficient. Furthermore, the concept of water storage capacity per unit length of the pipeline is defined, and the L scheme is adopted to guarantee the unconditional convergence of the algorithm when Ln ≥ d. The proposed model is highly consistent with the theoretical solution, SWMM simulations, and experimental data. The model successfully and accurately simulates the hydrodynamic process of transient-mixed flow in pipes. Compared with the SWMM simulations, the proposed model is stable at the free surface-pressurized flow interface with minimal restriction on the spatiotemporal discretization. 
The proposed model can provide theoretical and methodological support for the simulation and prediction of long-term hydrodynamic processes in conduits, such as conduit flow in karst areas.
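To make the role of the L scheme concrete, the sketch below applies an L-scheme fixed-point iteration to one implicit time step of a toy 1-D diffusion-type equation with a nonlinear storage function S(h) standing in for the water storage capacity per unit length; the storage law, diffusivity, boundary values, and the choice of L larger than the maximum slope of S are placeholder assumptions, not the paper's formulation.

```python
# L-scheme iteration for one backward-Euler step of d/dt S(h) = d/dx (D dh/dx) on a toy grid.
import numpy as np

def storage(h):
    # Placeholder storage law: mild (free-surface-like) below h=1, stiff (pressurized-like) above.
    return np.where(h < 1.0, h, 1.0 + 50.0 * (h - 1.0))

nx, dx, dt, D = 51, 1.0, 0.5, 2.0
L = 60.0                                   # stabilization parameter, chosen >= max slope of storage()
h_old = np.linspace(1.5, 0.5, nx)          # head at time level n (upstream end pressurized)

A = np.zeros((nx, nx))                     # interior second-difference (Laplacian) operator
for i in range(1, nx - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
A /= dx ** 2

M = L * np.eye(nx) - dt * D * A
M[0, :], M[-1, :] = 0.0, 0.0
M[0, 0] = M[-1, -1] = 1.0                  # Dirichlet rows: boundary heads stay fixed

h = h_old.copy()
for _ in range(100):                       # fixed-point iterations within the time step
    rhs = L * h - (storage(h) - storage(h_old))
    rhs[0], rhs[-1] = h_old[0], h_old[-1]
    h_new = np.linalg.solve(M, rhs)
    if np.max(np.abs(h_new - h)) < 1e-10:  # converged: robust regardless of the stiff storage branch
        h = h_new
        break
    h = h_new
print(h[:5])
```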
    NUCLEAR AND NEW ENERGY
    Energy analysis method of junction coupling
    YANG Linqing, QIN Benke, BO Hanliang
    Journal of Tsinghua University(Science and Technology). 2023, 63 (5): 840-848.   DOI: 10.16511/j.cnki.qhdxxb.2022.21.043
Pipeline systems are widely used in various fields of industrial production. In a pipeline system, water hammer is a transient flow process triggered by flow regulation, fast valve closure, or accidents, possibly leading to large fluctuations of the fluid pressure and threatening the normal operation of the pipeline and equipment. Thus, evaluating the transient flow characteristics of water hammer is necessary. Based on the fluid-structure interaction (FSI) water hammer theory, an energy analysis method was established, and the variation laws of the fluid and pipeline energies were discussed to describe the influence of junction coupling on the transient flow characteristics of water hammer. First, the FSI four-equation model was solved by the method of characteristics (MOC) and verified. A variety of energies, including the fluid internal energy, fluid kinetic energy, and the axial strain and kinetic energies of the pipeline, were introduced. On this basis, the model was mathematically derived and transformed, and the physical expressions and governing equations of the pipeline and fluid energies were obtained. Combined with the initial and boundary conditions, the fluid pressure, mean fluid velocity, pipeline stress, and pipeline velocity of all nodes at different times were calculated with the FSI model. Next, according to the expressions of the fluid and pipeline energies, the composite Simpson's rule was used to numerically integrate the physical quantities of each node, and the energy of the entire pipeline at all times was obtained. The energy analysis method based on FSI water hammer theory was thus established, and the energy transfer and conversion processes in the system were comprehensively analyzed. On this basis, the boundary conditions at the valve were changed, and the influence of the junction coupling was described quantitatively with the help of the maximum fluctuation amplitude of the energy and dimensionless factors. The following research results are presented: 1) The energy analysis method explains the energy transfer and conversion in the water hammer process at the system level and provides a natural and direct perspective for understanding the dynamic response of the system, which is difficult to demonstrate with the traditional wave transmission and reflection theory. The relationship between the fluid and pipeline energies is described, and the dominant energies are the fluid internal energy and the pipeline strain energy. Simultaneously, the relationship between the total fluid and pipeline energies is revealed, and their energy sources are clarified. 2) Considering the junction coupling, the energy transfer and conversion are intensified, and the fluid and pipeline energies slightly increase. Consequently, the fluid-structure coupling factor increases, the pipeline vibration factor significantly rises, and the hydraulic pulsation factor slightly decreases. Taking the three energy factors as safety evaluation indexes, protective measures for pipeline systems are proposed. 3) The energy equation is derived from the FSI water hammer model; therefore, this equation is linearly related to the FSI water hammer model. The research results provide a new method to understand and compare the dynamic responses of the water hammer process in different systems and provide a direction and basis for quantifying the response characteristics of the FSI water hammer process.
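The energy bookkeeping step can be sketched as follows: given the nodal fields from the MOC solution at one time level, the energy densities are integrated along the pipe with composite Simpson's rule. The quadratic energy-density expressions used here are standard forms for a compressible liquid and an elastic pipe wall and are assumptions about the paper's exact definitions.

```python
# Composite-Simpson integration of fluid and pipe-wall energies along the pipeline at one time level.
import numpy as np
from scipy.integrate import simpson

def pipeline_energies(z, p, V, sigma, u, rho_f, K_f, A_f, rho_t, E_t, A_t):
    """z: axial node positions; p, V: fluid pressure and mean velocity; sigma, u: pipe axial stress and velocity."""
    e_fluid_kin = 0.5 * rho_f * A_f * V ** 2        # fluid kinetic energy per unit length
    e_fluid_int = A_f * p ** 2 / (2.0 * K_f)        # fluid compression (internal) energy per unit length
    e_pipe_kin = 0.5 * rho_t * A_t * u ** 2         # pipe-wall axial kinetic energy per unit length
    e_pipe_str = A_t * sigma ** 2 / (2.0 * E_t)     # pipe-wall axial strain energy per unit length
    return {name: simpson(e, x=z) for name, e in [
        ("fluid_kinetic", e_fluid_kin), ("fluid_internal", e_fluid_int),
        ("pipe_kinetic", e_pipe_kin), ("pipe_strain", e_pipe_str)]}
```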
Copyright © Journal of Tsinghua University(Science and Technology), All Rights Reserved.