Journal of Tsinghua University (Science and Technology)
Table of Contents, Volume 64, Issue 8
COMPUTER SCIENCE AND TECHNOLOGY
Push and pull Tor users' guards through optimized resource portfolios
ZHANG Guoqiang, XU Mingwei
Journal of Tsinghua University (Science and Technology). 2024, 64(8): 1293-1305. DOI: 10.16511/j.cnki.qhdxxb.2024.27.013
[Objective] The second-generation onion router (Tor), as the most popular low-latency anonymous communication network on the Internet, is vulnerable to deanonymization attacks based on traffic analysis. Evaluating the cost of acquiring user traffic is crucial to measuring the severity of this threat. Because entry nodes correlate directly with user identities and can also be deployed by adversaries, they play a vital role in obtaining user traffic. To capture traffic, adversaries must compel Tor clients to select adversary-controlled entry nodes, commonly referred to as guards, when constructing communication circuits. However, existing approaches that obtain user traffic by manipulating guard nodes often overlook cost-effectiveness, producing cost evaluations that do not truthfully reflect the potential capabilities of adversaries. [Methods] To address the cost optimization issue of acquiring Tor user traffic, this study presents a novel model, push and pull Tor users' guards through optimized resource portfolios (P-Group). The proposed method deploys controllable nodes to draw user traffic and, meanwhile, expedites user traffic migration by using general traffic to congest the noncontrollable nodes currently used by users. This study unifies the resource measurements of node deployment and bandwidth attacks and analyzes their correlation to enhance resource allocation efficiency. Through in-depth analysis of the traffic control and congestion mechanisms of the Tor protocol, P-Group employs queuing theory to quantify the reduction in the observed bandwidth of noncontrollable nodes. Moreover, the impact of attacking noncontrollable nodes with identical traffic varies with their bandwidth capacities; P-Group therefore uses adapted flow deviation techniques to coordinate the total amount of attack resources with target bandwidth capacities and optimize resource allocation. Considering the extensive operational scope and competitiveness of contemporary cloud service providers, this study assumes that the bandwidth required by adversaries is readily obtainable from various sources. By surveying standard hosting product prices across ten cloud service providers, including GoDaddy, the average cost of attack bandwidth is determined and integrated into the experimental assessment. [Results] The analysis and simulation results show that P-Group improves utility and security levels while achieving a more advantageous cost-effectiveness ratio. When resources are devoted solely to deploying controllable nodes, the marginal gain from further investment decreases significantly once their total bandwidth reaches 2% of the entire Tor network capacity. The proposed method improves the utility of resource distribution by 20.5% compared with an equal-split strategy between node deployment and bandwidth attacks. Furthermore, under bandwidth attacks, the likelihood of planted nodes being selected by Tor clients is 15% higher than under six traditional traffic distribution methods. With P-Group, the average time to migrate user traffic from noncontrollable nodes to adversary-controllable nodes is approximately 200 h, incurring costs of several hundred dollars.
[Conclusions] In summary, while challenges persist in cost management within the existing methods of acquiring Tor user traffic, optimization can mitigate these hurdles to attain practical and feasible goals, thereby elevating traffic analysis attacks to a substantial threat.
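Where the abstract invokes queuing theory to quantify how congestion traffic lowers a noncontrollable relay's observed bandwidth, a toy saturation calculation may help intuition. The sketch below assumes a simple single-server (M/M/1-style) abstraction; the function name, parameters, and numbers are illustrative and are not taken from the paper.

```python
# Minimal sketch: how adversary-generated congestion traffic reduces the
# bandwidth observable on a non-controllable Tor relay. Assumes a simple
# single-server saturation model; not the paper's exact formulation.

def observed_bandwidth(capacity_mbps: float, attack_load_mbps: float,
                       probe_mbps: float = 1.0) -> float:
    """Throughput left for bandwidth-measurement probes on a congested relay."""
    total_arrivals = attack_load_mbps + probe_mbps
    if total_arrivals >= capacity_mbps:
        # Saturated queue: probes only get their proportional share.
        return capacity_mbps * probe_mbps / total_arrivals
    # Unsaturated: residual capacity bounds what the probe can achieve.
    return min(probe_mbps, capacity_mbps - attack_load_mbps)

# Identical attack traffic affects relays of different capacity differently,
# which is why attack resources must be coordinated across targets.
for capacity in (50.0, 100.0, 200.0):
    print(capacity, observed_bandwidth(capacity, attack_load_mbps=45.0))
```

Sweeping the attack load against relay capacity reproduces the qualitative trade-off that the flow deviation step described above is meant to optimize.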
Two general optimization techniques for low-latency services
GUO Yaning, XU Mingwei
Journal of Tsinghua University (Science and Technology). 2024, 64(8): 1306-1318. DOI: 10.16511/j.cnki.qhdxxb.2024.26.016
[Significance] Low-latency services have become indispensable in people's work and daily life. Various low-latency services exist, including video conferencing for online communication and cloud gaming for entertainment, which meet different user requirements. These services let users interact in real time from anywhere, overcoming the limitations of traditional applications. Consequently, they have attracted numerous users, and their deployment scale has rapidly expanded. With the development of 5G technology, the coverage of low-latency services will spread further, giving them broad development prospects. Performance optimization of low-latency services is therefore a hotspot in academia and industry. The most critical performance indicator for low-latency services is end-to-end latency. In addition to maintaining low latency, achieving high throughput and link utilization is also necessary to improve service quality and attract more users. Performance optimization is thus key to the further development of low-latency services. [Progress] Researchers have proposed various performance optimization schemes that operate at different positions along the transmission path. The two most widely used are low-latency congestion control algorithms (CCAs) deployed on the server side and active queue management algorithms (AQMs) deployed in the network. Both are designed to avoid queuing as much as possible and thereby reduce end-to-end delay. CCA and AQM designs have been continually updated to overcome the limitations of earlier algorithms, improve performance, and enhance practicality for large-scale deployment. Specifically, CCAs have refined the estimation of congestion signals to make them more accurate, completed the logic for adjusting the sending rate, and incorporated fairness into their design. AQMs have focused on queuing delay and minimized the number of parameters, aiming for more lightweight algorithms. Although CCAs and AQMs share similar goals, they have been researched in parallel and independently. Because they sit in the same control loop and both affect the quality of low-latency services, the synergy between CCA and AQM has attracted considerable academic attention. Existing evaluations indicate a potential mismatch that results in poor performance when the two are used together. To achieve better collaboration between CCA and AQM, various optimization solutions have been proposed; general CCAs and AQMs based on machine learning and cross-layer joint optimization are representative schemes. Although these solutions aim to solve the mismatch problem with general algorithms, they face challenges in real-world deployment. [Conclusions and Prospects] This paper summarizes the main design ideas and performance evaluations of important low-latency CCAs and AQMs, reviews the performance and theoretical analyses of their combined use, analyzes the potential coordination problems between them, and elaborates on research into their collaborative optimization.
Based on this summary, we propose that future research on performance optimization for low-latency services must emphasize versatility and practical deployability. We believe that cross-layer joint optimization is a practical way to resolve the existing mismatch, make CCA and AQM work well together, and further improve the performance of low-latency services; it can therefore be a focus of future research.
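To make the CCA side of this control loop concrete, here is a generic delay-based congestion window update of the kind the surveyed low-latency CCAs build on. It is a schematic sketch, not a reproduction of any specific published algorithm; the thresholds and gains are illustrative.

```python
# Generic delay-based congestion control step: queuing delay (RTT minus base
# RTT) is the congestion signal; the window shrinks when the estimated queue
# exceeds a target and grows gently otherwise. Illustrative constants only.

def update_cwnd(cwnd: float, rtt_ms: float, base_rtt_ms: float,
                target_queue_ms: float = 5.0, gain: float = 1.0) -> float:
    queuing_delay = rtt_ms - base_rtt_ms
    if queuing_delay > target_queue_ms:
        return max(2.0, cwnd - gain)        # back off to drain the queue
    return cwnd + gain / cwnd               # additive-style growth

cwnd = 10.0
for rtt in (20.0, 21.0, 30.0, 40.0, 22.0):  # hypothetical RTT samples (ms)
    cwnd = update_cwnd(cwnd, rtt, base_rtt_ms=20.0)
    print(f"rtt={rtt:5.1f} ms  cwnd={cwnd:5.2f}")
```

An AQM sits in the same loop by dropping or marking packets when its own queue estimate exceeds a target, which is exactly where the mismatch discussed above can arise.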
Overlapping community detection model based on a modularity-aware graph autoencoder
CHEN Jie, LIU Binbin, ZHAO Shu, ZHANG Yanping
Journal of Tsinghua University (Science and Technology). 2024, 64(8): 1319-1329. DOI: 10.16511/j.cnki.qhdxxb.2024.26.018
[Objective] In the ever-expanding field of network science, the abstraction of complex entity relationships into network structures provides a foundation for understanding real-world interactions. The discovery of communities within these networks plays a pivotal role in identifying clusters of closely interconnected nodes. This process reveals latent patterns and functionalities inherent in the intricate fabric of reality, proving invaluable for tracking dynamic network behaviors and assessing community influences. These influences span a range of phenomena, from rumor propagation to virus outbreaks and tumor evolution. A notable characteristic of these communities is their overlapping nature, with participants often straddling multiple community boundaries. This adds a layer of complexity to the exploration of network structures, making the discovery of overlapping communities imperative for a comprehensive understanding of network structures and functional dynamics. [Methods] Within network science, network representation learning algorithms have significantly enriched the pursuit of community discovery. These algorithms transform complex network information into lower-dimensional vectors while effectively preserving the underlying network structure and attribute information. Such representations prove invaluable for subsequent graph processing tasks, including link prediction, node classification, and community discovery. Among these algorithms, the graph autoencoder is a prominent representative, learning network embeddings efficiently and finding applications in diverse community discovery tasks. However, traditional graph autoencoder models focus predominantly on local node-edge reconstruction, overlooking the crucial influence of community structure, particularly in scenarios featuring nodes that overlap multiple communities. This makes it difficult to precisely determine node affiliations and community distributions. To address this issue, we introduce an unsupervised modularity-aware graph autoencoder model (GAME) designed for overlapping community discovery. The model incorporates an efficient modularity maximization loss function into the graph autoencoder framework, ensuring the preservation of community structure throughout the network embedding process. The modularity-aware loss drives the update of encoder parameters, improving model performance in overlapping community discovery tasks, and the resulting community membership matrix is used to probabilistically assign communities to nodes. [Results] The efficacy of the proposed GAME model was rigorously evaluated across six social network datasets (Facebook 348, Facebook 414, Facebook 686, Facebook 698, Facebook 1684, and Facebook 1912), with node counts ranging from 60 to 800. Additional assessments were conducted on four collaborator network datasets (Computer Science, Engineering, Chemistry, and Medicine) with node counts ranging from 1.4×10⁴ to 6.4×10⁴. Comparative analyses with seven prevalent overlapping community discovery methods, encompassing both traditional and graph autoencoder-based algorithms, demonstrated a noteworthy 2.1% improvement under the normalized mutual information (NMI) evaluation index. This performance enhancement substantiates the tangible advantages and effectiveness of the proposed GAME model. [Conclusions] The integration of an efficient modularity maximization loss function into the graph autoencoder, as demonstrated by GAME, successfully addresses a conventional limitation of graph autoencoders: prioritizing the reconstruction of local node connections during community discovery while overlooking the overarching community structure, particularly when confronted with overlapping nodes. The experimentally validated performance boost underscores the GAME model's efficacy in navigating the complexities of overlapping community discovery compared with mainstream methods. However, the model's reliance on substantial memory resources can become a challenge when handling datasets that combine network structure and node attributes. This is especially apparent in scenarios with small attribute networks (N ≤ 800), where the model is insensitive to variations in the threshold ρ. Future work will focus on refining the model to mitigate these challenges and ensure optimal performance across a broader spectrum of real-world scenarios.
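The core idea of a modularity maximization loss can be illustrated with the standard trace form of soft modularity; GAME's exact loss and encoder are not reproduced here, and the toy matrices below are illustrative.

```python
import numpy as np

def modularity_loss(adj: np.ndarray, community: np.ndarray) -> float:
    """Negative soft modularity: minimizing this maximizes modularity Q.

    adj       -- (n, n) symmetric adjacency matrix
    community -- (n, k) nonnegative soft community-membership matrix
    """
    degrees = adj.sum(axis=1)
    two_m = degrees.sum()                              # 2m, total degree
    B = adj - np.outer(degrees, degrees) / two_m       # modularity matrix
    return -np.trace(community.T @ B @ community) / two_m

# Toy 4-node graph: a 3-node community plus one peripheral node. Row i of
# `community` gives node i's soft membership over k = 2 communities; the
# bridging node carries a split membership row (the overlapping case).
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
community = np.array([[1.0, 0.0], [1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
print(modularity_loss(adj, community))
```

Adding such a term to the autoencoder's reconstruction loss is what lets gradients pull the embedding toward community-preserving configurations.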
CIVIL ENGINEERING
Analysis of concrete fatigue life based on stress-strain fatigue criterion
ZHANG Shu, MA Rui, HU Yu, LI Qingbin
Journal of Tsinghua University (Science and Technology). 2024, 64(8): 1330-1335. DOI: 10.16511/j.cnki.qhdxxb.2023.27.002
[Objective] Accurate assessment of concrete fatigue life under fatigue loads is essential to ensure the safety and stability of structures, especially for fatigue failure behavior dominated by stress and strain. A fatigue loading surface function is established to describe the fatigue state of concrete based on the constraint relationship in the stress-strain fatigue criterion. The fatigue loading surface function varies monotonically with fatigue cycles, enabling the establishment of an equivalent function to represent the concrete fatigue state. The function can be described in a linear equivalent form, with coefficients calibrated from characteristic points of the fatigue loading process. [Methods] Based on the constraint relationship between fatigue stress-strain and fatigue cycles, equivalent fatigue cycles can be calculated from fatigue stress-strain data. The equivalent fatigue cycles effectively express the fatigue stress-strain state of the material, and the fatigue life indirectly represents the fatigue failure stress-strain state in the fatigue failure criterion of materials with the static constitutive curve as the limit value. The degree of fatigue accumulation can then be quantified by comparing the equivalent fatigue cycles with the fatigue life. This evaluation method based on equivalent fatigue cycles overcomes the shortcomings of current methods based on classic fatigue criteria and fatigue envelope lines. In this work, therefore, the fatigue loading surface function is constructed and its evolution law is studied in analogy with the fatigue failure criterion of materials with a static constitutive curve as the limit value, yielding a description method for equivalent calibration and for solving the equivalent fatigue cycles. The fatigue loading surface function describes the fatigue state and determines the constraint relationship between fatigue stress-strain and fatigue cycles. The equivalent fatigue loading surface function and its coefficients are obtained by calibrating characteristic points. The R-square (coefficient of determination) is introduced to ensure an equivalent description, with the maximum R-square corresponding to the optimal equivalent description; a maximum R-square algorithm is accordingly proposed based on the evolution law of the fatigue loading surface function. A linear equivalent form of the function is proposed to meet equivalent description and practical application requirements. [Results] Equivalent calibration can be achieved by selecting the maximum R-square, and the coefficients of the fatigue loading surface function can be determined from experimental results of the fatigue loading process. The equivalent fatigue loading surface function, feature-point calibration, and maximum R-square algorithms were developed to achieve an equivalent description of the fatigue state of materials. From the calibration results, the equivalent fatigue cycles can be obtained from the corresponding fatigue stress-strain, the fatigue stress-strain state of concrete can be quantified by the equivalent fatigue cycles, and corresponding evaluation processes and indicators are obtained through further study.
[Conclusions] The proposed method provides an effective approach for the fatigue life analysis of concrete.
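The linear equivalent form and R-square calibration described above suggest a simple numerical recipe. The sketch below assumes a linear surface F(n) = a·n + b fitted to sampled feature points; the symbols and numbers are illustrative stand-ins, not the paper's calibration.

```python
import numpy as np

def calibrate_linear_surface(cycles: np.ndarray, surface: np.ndarray):
    """Fit F(n) = a*n + b to fatigue-loading-surface samples; return a, b, R^2."""
    a, b = np.polyfit(cycles, surface, deg=1)
    pred = a * cycles + b
    ss_res = float(np.sum((surface - pred) ** 2))
    ss_tot = float(np.sum((surface - surface.mean()) ** 2))
    return a, b, 1.0 - ss_res / ss_tot

def equivalent_cycles(surface_value: float, a: float, b: float) -> float:
    """Invert the calibrated linear surface to read off equivalent cycles."""
    return (surface_value - b) / a

# Hypothetical feature points sampled along a fatigue loading process.
n = np.array([1e3, 1e4, 5e4, 1e5])
F = np.array([0.21, 0.34, 0.62, 0.81])
a, b, r2 = calibrate_linear_surface(n, F)
print(f"R^2 = {r2:.3f}; equivalent cycles at F = 0.5: "
      f"{equivalent_cycles(0.5, a, b):.0f}")
```

Comparing the equivalent cycles with the fatigue life then quantifies fatigue accumulation, in the spirit of the evaluation indicator described above.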
Coupled influence of reservoir water and rainfall on the response mechanism of accumulated landslides with chair-like interfaces between sliding masses and bedrock
LUO Shilin, LIU Hualiang, JIANG Jianqing
Journal of Tsinghua University (Science and Technology). 2024, 64(8): 1336-1346. DOI: 10.16511/j.cnki.qhdxxb.2024.21.003
[Objective] The Three Gorges Reservoir (TGR) Dam is currently the largest hydroelectric power project in the world. The TGR has a daily power output of 9.81×10⁹ kWh and a reservoir capacity of 3.92×10¹⁰ m³. It was completed in Southwest China in 2009, and the reservoir water was impounded to 175 m. Consequently, a long, narrow hydro-fluctuation belt (over 600 km, from 145 m to 175 m), extending from Yichang City in Hubei Province to Maoer Gorge in Chongqing Municipality, China, appeared along the TGR. The completion of the TGR Project and the subsequent impoundment of the 660 km-long reservoir reactivated and induced over 5,000 landslides along the banks of the reservoir. These landslides pose great threats to residences, shipping along the Yangtze River, and dam stability. Thus, gaining insight into the behavior, triggering and conditioning factors, and evolutionary mechanisms of these reservoir-induced landslides is crucial for landslide control, management, and decision-making. Water, including fluctuations in reservoir water levels and periodic rainfall, is one of the most common triggering factors for bank slope failures. Variations in reservoir water levels can affect slope stability during the drawdown and impoundment stages, and rainfall can trigger slope failure events, including shallow and deep-seated landslides on slopes along the reservoir. Although the individual effects of rainfall or reservoir water on bank slope stability have been widely studied, their combined effects on landslide deformation have rarely been discussed, even though many reservoir-induced landslides are triggered by the coupled effects of rainfall and reservoir-level fluctuations. This study examines the coupled influence of reservoir water and rainfall on the response mechanism of accumulated landslides with chair-like interfaces between sliding masses and bedrock. [Methods] The Outang landslide, a typical landslide with a chair-like interface between the sliding mass and bedrock, was selected as the research object. Monitoring data analysis, correlation theory, and the discrete element method were adopted to reveal the deformation characteristics and possible failure evolution mechanism of the landslide. [Results] The results revealed that, spatially, the landslide deformation rate increased with elevation, whereas, temporally, landslide deformation showed a step-like increase. Reservoir water and rainfall were the main triggering factors of landslide deformation. Reservoir water affected the front of the landslide through a seepage-induced deformation mechanism, whereas rainfall affected the remaining part of the landslide, with the corresponding movement mechanism transforming from water swelling during the initial period of rainfall to a seepage-induced mechanism. [Conclusions] The landslide exhibited composite push-retrogression-type failure evolution, with fluctuating reservoir levels destabilizing rock masses toward the front section and precipitation mainly mobilizing materials in the upper section of the slope. This research further elucidates reactivated ancient landslide issues, such as deformation characteristics and failure evolution, and provides practical experience and insights for analogous old landslides in the reservoir area.
Method for spatiotemporal evolution analysis and the instability criterion of traction landslides under intermittent rainfall
HOU Xiaoqiang, ZHOU Zhongren, WU Honggang, HU Tianxiang, HOU Yunlong
Journal of Tsinghua University (Science and Technology). 2024, 64(8): 1347-1356. DOI: 10.16511/j.cnki.qhdxxb.2024.27.006
[Objective] Numerous scholars, domestically and internationally, have extensively researched the initiation and occurrence of landslides under intermittent rainfall. These studies have revealed the intrinsic relationship between rainfall intensity and landslide stability: landslide destabilization and failure involve the gradual spread of the plastic zone along the sliding surface until it fully penetrates. The stability of such landslides has typically been evaluated quantitatively using the overall safety factor. To improve the accuracy of landslide prediction and forecasting, the temporal and spatial relationships between landslide evolution characteristics and stability must be investigated in depth. By combining existing monitoring technology, revealing the temporal and spatial patterns of the overall safety factor, point safety factors, displacements, and other parameters, and proposing stability criteria for each stage, we can provide reliable theories and methods for accurate early warning and forecasting of landslides. [Methods] Using ABAQUS software, a finite element model of a traction landslide was established, considering the relationship between the spatial stress of the slope body and time under intermittent precipitation. Python was used for secondary development of a time-history method for analyzing point safety factors. This method can calculate the spatial point safety factor cloud map for different regions of a landslide section at any time and analyze the entire initiation-deformation-failure process of a traction landslide under intermittent rainfall. [Results] The study yielded the following results: 1) Time-history calculations of the point safety factor visually described how the sliding surface of the traction landslide evolves under intermittent rainfall, gradually extending from the foot to the top of the slope. The time-history characteristics of the point safety factors of the three sections reflect the initiation, deformation, and destabilization stages of landslide evolution, providing a sufficient basis for dividing the landslide into traction, main slide, and locking sections. 2) Based on multidimensional parameters such as the overall safety factor, point safety factor, and displacement under intermittent rainfall, the combination of the point safety factor with deformation and time-history displacement parameters can serve as a basis for judging the initiation, deformation, and destabilization of a traction landslide under intermittent rainfall, whereas the overall safety factor alone cannot. 3) According to the displacement variations and the magnitudes of the point safety factors of the three parts of a landslide (the traction, main slide, and locking sections), four landslide states are jointly determined: stable, basically stable, less stable, and unstable. This forms the criterion for determining the stability state of each stage of a traction landslide, and its reliability was verified through examples. [Conclusions] These results prove that using the point safety factor to describe the landslide deformation and failure process is more specific and comprehensive than using the overall safety factor. This finding provides a theoretical basis and engineering guidance for future early warning and forecasting of intermittent-rainfall landslides in China.
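A point safety factor at a material point is commonly defined as available shear strength over mobilized shear stress; a minimal Mohr-Coulomb version is sketched below. The paper's ABAQUS/Python implementation is not public, so the parameter names and values here are illustrative.

```python
import numpy as np

def point_safety_factor(sigma_n: float, tau: float,
                        cohesion_kpa: float, phi_deg: float) -> float:
    """Mohr-Coulomb point safety factor = shear strength / mobilized shear.

    sigma_n -- effective normal stress on the potential slip plane (kPa)
    tau     -- mobilized shear stress on that plane (kPa)
    """
    strength = cohesion_kpa + sigma_n * np.tan(np.radians(phi_deg))
    return strength / max(tau, 1e-9)

# Evaluating this at every integration point of the FE model at each time step
# yields point-safety-factor cloud maps of the kind described above.
print(point_safety_factor(sigma_n=120.0, tau=55.0,
                          cohesion_kpa=15.0, phi_deg=22.0))
```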
AEROSPACE AND ENGINEERING MECHANICS
Prediction method of lift-off acoustic environment for multinozzle rockets
WANG Haoxuan, RONG Yi, ZENG Yaoxiang
Journal of Tsinghua University (Science and Technology). 2024, 64(8): 1357-1366. DOI: 10.16511/j.cnki.qhdxxb.2024.26.007
[Objective] The lift-off acoustic environment is the most severe a rocket experiences during flight. This broadband random noise can cause high-intensity random responses of the rocket structure, so predicting the lift-off acoustic environment is important for guiding the design of rocket noise protection. The Kudryavtsev method is a comprehensive technique that considers five noise sources during rocket lift-off; however, it weakens the directivity of the free jet and neglects the shielding of the launch pad and service tower. In addition, its multijet equivalent method does not consider the distance between jets. Therefore, three modifications are made to the Kudryavtsev method to enhance the prediction of the lift-off acoustic environment of multinozzle rockets. [Methods] The lift-off acoustic environment was predicted using five types of noise sources: the undisturbed free jet above the launch table, the interaction between the jet and the launch table, reflection by the launch table, the disturbed free jet between the launch pad entrance and the deflector, and the diversion channel exit. For free-jet noise sources, distributed source method Ⅱ (DSM-Ⅱ) was employed to correct directivity and redistribute the noise power. Two normalization curves of noise power distribution in DSM-Ⅱ were corrected by empirical formulas, increasing the results by about 1.5 dB. Then, considering structural shielding along noise propagation paths, Maekawa's noise shielding model was used to estimate the noise attenuation of the launch pad and service tower. Based on numerical simulation results, an equivalent method was used to predict the multijet noise of rockets: the multijet noise of the core stage was calculated with an equivalent single jet, while the single-jet noise of the boosters was calculated independently. The modified method was employed to predict the acoustic environment near the service tower of a certain rocket at different times during lift-off. [Results] Comparison indicated that for the overall sound pressure level (OASPL), the maximum prediction error of the modified method was less than 5.0 dB within 2 s of lift-off, while that of the Kudryavtsev method was more than 15.0 dB; the prediction error was thus reduced by about 10.0 dB. The modified method can more accurately predict the peak time, with an OASPL error near the peak time of less than 3.0 dB. In contrast, the peak time predicted by the Kudryavtsev method was earlier, and its maximum OASPL error near the peak time was more than 3.0 dB. For the 1/3-octave-band sound pressure level spectrum at the peak time of the OASPL, the peak frequency and sound pressure level of the modified method were close to the test data. The maximum error of the modified method was less than 6.0 dB over the full band and less than 3.0 dB in the 1-5 kHz band, whereas the maximum error of the Kudryavtsev method was more than 6.0 dB. [Conclusions] Three modifications were made to the Kudryavtsev method, effectively enhancing its prediction accuracy. Compared with the original method, the modified method is accurate within 2 s of lift-off and near the peak time, and its predicted 1/3-octave-band sound pressure level spectrum is closer to the test data. Thus, the method presented in this study has higher prediction accuracy and can be better applied in practical engineering.
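Maekawa's shielding model mentioned in the methods has a classic empirical form based on the Fresnel number; a compact version is shown below, with the path difference and frequency chosen purely for illustration.

```python
import math

def maekawa_attenuation_db(path_difference_m: float, frequency_hz: float,
                           speed_of_sound: float = 340.0) -> float:
    """Maekawa's empirical barrier attenuation, Fresnel number N = 2*delta/lambda."""
    wavelength = speed_of_sound / frequency_hz
    fresnel_n = 2.0 * path_difference_m / wavelength
    if fresnel_n <= 0.0:
        return 0.0                              # no effective shielding
    return 10.0 * math.log10(3.0 + 20.0 * fresnel_n)

# A 0.5 m path difference over the tower edge at 1 kHz gives roughly 18 dB.
print(round(maekawa_attenuation_db(0.5, 1000.0), 1))
```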
Online intelligent positioning of parts based on adaptive inspired particle swarm optimization
LIU Huasen, GUO Zihao, NIE Haiping, PEN Zhijun
Journal of Tsinghua University (Science and Technology). 2024, 64(8): 1367-1379. DOI: 10.16511/j.cnki.qhdxxb.2024.27.016
[Objective] In industrial assembly-line production, automating spraying operations for small- and medium-sized parts often encounters significant challenges. The primary issue is the inconsistent placement of different parts, which complicates the automation of measurement and positioning tasks. [Methods] To address this, an approach leveraging 2D images was developed to assist in the preliminary positioning and classification of parts. The scanning and measurement path of a 3D camera is planned based on the preliminary positioning information provided by 2D image processing. However, the point cloud data captured by the 3D camera include a large amount of irrelevant background information; to isolate accurate point clouds for the parts, the data are segmented according to the preliminary positioning information. Point cloud registration, a critical challenge in robotics and computer vision, involves estimating a rigid transformation that aligns one point cloud to another. The final step registers the measured point cloud with the 3D digital model of the part to determine its exact position and orientation. The 2D image is processed successively by filtering, gray equalization, binarization, morphological processing, and contour extraction; these steps separate the image foreground from the background and identify the parts within the image. An adaptation of the classic particle swarm optimization algorithm, enhanced with an adaptive heuristic, is then employed to tailor different schemes and scales to specific registration conditions, achieving rapid point cloud registration under strong background noise. To overcome slow convergence and the tendency to settle into local optima, the improved algorithm incorporates adaptive, registration-state-dependent learning and inertia coefficients, along with stalling coefficients and gradient descent operations for adaptive scaling. A normal-distribution confidence criterion minimizes the effect of fitness outliers on registration, facilitating intelligent alignment between the point cloud and the theoretical numerical model and enabling precise determination of the parts' positions and orientations. [Results] Integrating two-dimensional (2D) vision significantly reduces measurement times and improves efficiency through multithreaded concurrent synchronization. The automatic scanning of three-dimensional (3D) point clouds and the autonomous registration and positioning of various parts are accomplished within 3 min, achieving an average accuracy of 2 mm. When aligning identical point clouds, the algorithm demonstrates a registration accuracy of 0.002 mm. [Conclusions] Despite its robustness to strong background point cloud interference, the algorithm's registration accuracy is still noticeably affected by it. In addition, the inherent discrepancy between the sampling consistency of the scanned point cloud and the theoretical numerical model introduces an error of about 0.5 mm, limiting further improvements in registration accuracy. Because of their limited features and small size, some aviation parts are easily misidentified and prone to significant attitude registration errors. The advantages of 2D images and 3D point cloud data should be further combined to enhance the robustness of the identification and positioning of aviation parts.
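As a point of reference for the adaptive variant described above, here is a bare-bones particle swarm optimizer with a linearly decaying inertia weight, applied to a toy one-dimensional alignment problem. The paper's algorithm adds registration-state-dependent coefficients, stalling detection, gradient steps, and an outlier-robust fitness on top of a loop like this; all constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(fitness, dim, n_particles=30, iters=100, bounds=(0.0, 5.0),
        w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    """Basic PSO: velocity mixes inertia, personal best, and global best."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters   # inertia decays over time
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy registration: recover the 1D translation aligning two point sets.
src = rng.uniform(0.0, 10.0, 50)
dst = src + 2.5
shift, err = pso(lambda p: float(np.mean((src + p[0] - dst) ** 2)), dim=1)
print(shift, err)
```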
Dynamic assessment of threats to cluster targets in low-altitude multi-domain battlefields
DONG Zewei
Journal of Tsinghua University (Science and Technology). 2024, 64(8): 1380-1390. DOI: 10.16511/j.cnki.qhdxxb.2024.27.002
[Objective] Threat assessment of targets serves as a critical reference for commanders in wartime decision-making. With the rapid development of unmanned systems and smart technologies, warfare is progressing toward unmanned, multi-domain, and clustered operations. However, existing studies on target threat assessment fall short of these demands, exhibiting three main issues: 1) The majority of studied combat scenarios are singular settings, such as maritime air defense, air-to-air combat, and ground-based air defense, with scant research on multi-domain operations (land, low-altitude, and electromagnetic environments). 2) Research has concentrated on individual entities or single-type cluster targets, such as fighter aircraft, unmanned aerial vehicle swarms, and unmanned surface vessels, with inadequate investigation of clustered equipment integrating manned/unmanned ground combat vehicles and low-altitude manned/unmanned aircraft. 3) Current methodologies predominantly consider the state and characteristics of Blue Force targets, ignoring the influence of dynamic changes in Red Force equipment on the weighting of threat indicators for Blue Force targets. [Methods] To address these problems, we propose a dynamic assessment method for threats to cluster targets in low-altitude, multi-domain battlefields based on hesitant fuzzy sets. First, we explored the laws governing low-altitude, multi-domain battlefields and the operational characteristics of clustered equipment involving manned/unmanned air and ground elements. We analyzed five major influencing factors in the threat assessment of cluster targets, namely, operational cluster type, urgency, comprehensive strike capability, intelligent collaborative capability, and importance of the attack area, and determined an indicator system for threat assessment of cluster targets. Then, leveraging the Weber-Fechner law, we explored the relationship between changes in the Red Force's situation and the psychological pressure experienced by commanders and proposed a Weber-Fechner law-based weight determination method, which adjusts the weights of the Red Force's comprehensive strike capability and the Blue Force's air power strike capability according to variations in the damage rate of the Red Force's air defense capability. Finally, by combining the variable weight method under a hesitant fuzzy environment, a dynamic assessment model based on hesitant fuzzy sets for threats to cluster targets in low-altitude, multi-domain battlefields was constructed. [Results] In a simulation, when the Red Force's air defense system sustained serious damage, the threat posed by the Blue Force's air power intensified significantly. With the Weber-Fechner law-based weight adjustment method, weight determination becomes more scientifically reasonable, promptly reflecting the psychological changes commanders experience when stimulated by the battlefield situation and reducing the subjectivity and arbitrariness of weight adjustments. Comparative analysis of the threat assessment results under constant and variable weights demonstrates that cluster targets with air power superiority exhibit more sensitive and timely adjustments in threat assessment under variable weights, with a higher level of consistency.
[Conclusions] These results further confirm the accuracy and effectiveness of the model, providing commanders with feasible and reliable decision support.
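The Weber-Fechner step above maps a battlefield stimulus to a logarithmic perceived intensity that rescales indicator weights. A minimal sketch of that mapping follows; the gain k, the floor, and the base weight are hypothetical, and the renormalization across all indicators is omitted.

```python
import math

def weber_fechner_weight(base_weight: float, damage_rate: float,
                         k: float = 0.2, floor: float = 0.05) -> float:
    """Scale an indicator weight by a log-law perceived intensity.

    damage_rate -- damage rate of the Red Force's air defense capability [0, 1]
    """
    perceived = k * math.log(1.0 + damage_rate)  # Weber-Fechner: log of stimulus
    return max(floor, base_weight * (1.0 + perceived))

# As air-defense damage grows, the weight on the Blue Force's air power
# strike capability rises (renormalization over all indicators not shown).
for damage in (0.0, 0.3, 0.6, 0.9):
    print(damage, round(weber_fechner_weight(0.25, damage), 3))
```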
Energy harvesting efficiency from fluid flow by vortex-induced vibrations: reduced-order modeling
HAN Peng, HUANG Qiaogao, QIN Denghui, PAN Guang
Journal of Tsinghua University (Science and Technology). 2024, 64(8): 1391-1400. DOI: 10.16511/j.cnki.qhdxxb.2024.27.009
[Objective] Vortex-induced vibration (VIV) is a well-known fluid-structure interaction phenomenon with promising potential for harvesting energy from fluid flows. Optimizing the energy conversion efficiency of VIV has garnered significant interest, but the cost of experiments and computational fluid dynamics (CFD) simulations poses significant hurdles to comprehensive global optimization studies of efficiency. [Methods] In response, this study employs a reduced-order model (ROM) based on a wake oscillator. The ROM is used to compute and optimize the energy harvesting efficiency of the VIV of a circular cylinder. Its rapid computation allows the creation of high-resolution efficiency maps across various mass ratios and Reynolds numbers, covering a broad spectrum of incoming velocities and damping ratios. These maps provide not only valuable insights into achieving maximum efficiency but also detailed information on the optimal damping ratio and velocity. To validate the ROM predictions, comparisons are drawn against experiments and CFD simulations. For high Reynolds number (high-Re) cases, the ROM is validated against published experimental data; for low-Re cases, where experimental data are sparse, a computational fluid-structure interaction solver, CFD-FSI, is used to verify the ROM results through direct simulations. Despite some differences among the ROM, experimental outcomes, and CFD data, this study demonstrates that the maximum efficiency and its occurrence conditions predicted by the ROM are acceptable. Given its cost-effectiveness, the ROM emerges as a valuable tool for investigating optimal energy harvesting efficiency and for providing insights into related engineering aspects. [Results] The main findings can be summarized as follows: 1) This study contributes high-resolution global optimization maps for the energy harvesting efficiency of VIV over the optimization parameters of reduced velocity and damping ratio, offering a cost-effective approach via the ROM across a large number of tested cases. 2) The maximum efficiency point is influenced by the incoming velocity and the product of the mass ratio and damping ratio. This implies that, if flow conditions such as the Reynolds number remain constant, the global maximum efficiency is consistent across VIV energy converters with different structural configurations; moreover, for different converters, the product of the mass ratio and the optimal damping ratio at which the global maximum efficiency occurs tends to be similar. 3) Efficiency at a high Reynolds number surpasses that at Re = 150 in laminar flow, primarily because of differences in lift coefficients and Strouhal numbers between high- and low-Reynolds-number flows. From the perspective of the ROM, a higher lift coefficient can contribute to a higher conversion efficiency. [Conclusions] Considering the definition of the mass ratio, the study suggests that the energy harvesting efficiency of a lighter system is more robust than that of a system with a high mass ratio, indicating potential advantages for ocean VIV energy converters over wind VIV energy converters. Furthermore, the present work provides an effective tool to predict, analyze, and optimize the energy harvesting efficiency of VIV, which should be helpful for related engineering design and future studies on this topic.
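A wake-oscillator ROM of this kind couples a van der Pol equation for the fluctuating lift to the cylinder's structural equation; a compact, dimensionless sketch follows. The coupling constants and crude Euler integration are illustrative, not the paper's calibrated model, and the dissipated damping energy stands in for harvested energy.

```python
import numpy as np

def rhs(state, zeta, delta, eps=0.3, A=12.0, M=0.05):
    """Facchinetti-style wake oscillator: structure y coupled to wake variable q."""
    y, ydot, q, qdot = state
    yddot = M * q - 2.0 * zeta * ydot - delta**2 * y       # structure
    qddot = A * yddot - eps * (q**2 - 1.0) * qdot - q      # van der Pol wake
    return np.array([ydot, yddot, qdot, qddot])

def harvested_energy(zeta=0.05, delta=1.0, dt=0.01, steps=60_000):
    state = np.array([0.0, 0.0, 0.1, 0.0])
    energy = 0.0
    for _ in range(steps):
        state = state + dt * rhs(state, zeta, delta)       # explicit Euler
        energy += 2.0 * zeta * state[1] ** 2 * dt          # damping dissipation
    return energy

# Sweeping (zeta, delta) on a fine grid is what produces high-resolution
# efficiency maps, at negligible cost compared with CFD.
print(harvested_energy())
```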
Multiscale smoothed particle hydrodynamic simulation of injection molding
XU Xiaoyang, TIAN Lingyun
Journal of Tsinghua University (Science and Technology). 2024, 64(8): 1401-1413. DOI: 10.16511/j.cnki.qhdxxb.2024.22.004
[Objective] Injection molding is an important method in polymer processing. Numerical simulations have proven effective for studying viscoelastic injection molding problems. Traditional simulations of injection molding typically rely on macroscale models; however, the performance of plastics is closely related to their micromolecular structure, and understanding the evolution of this structure is essential for improving product quality. Therefore, simulation studies coupling micro- and macroscales have important academic and practical value. [Methods] In this study, a multiscale smoothed particle hydrodynamic (SPH) method based on the bead-spring chain model is used to simulate viscoelastic injection molding. At the macroscale, the conservation equations of mass and momentum are solved using the SPH method; at the microscale, the elastic stress is described using the Brownian configuration field method. The viscoelastic Couette flow is simulated using three types of bead-spring chain models. The effectiveness of the multiscale SPH method is verified by comparing simulation results with analytical solutions of the viscoelastic Couette flow of the Hookean dumbbell model and by comparing the numerical solutions of the viscoelastic Couette flow of the finitely extensible nonlinear elastic dumbbell model with literature results. The bead-spring dumbbell model is extended to a bead-spring chain model, and the Couette flow is simulated. The influence of different numbers of beads on the viscoelastic flow is analyzed, and an appropriate number of beads is selected for the numerical simulations. In addition, the multiscale SPH method is extended to simulate injection molding in C-shaped and N-shaped cavities. Micro- and macro-parameters in injection molding are investigated, including the first normal stress difference, molecular stretch, and mean conformation thickness. The convergence of the multiscale SPH method is evaluated by changing the total number of SPH fluid particles N_f at the macroscale and the total number of bead-spring chain models N_b in each particle at the microscale; the obtained numerical solutions for the velocity are consistent. Furthermore, the effects of different rheological parameters, namely the number of beads of the molecular chain M, the maximum spring extensibility b_max, the viscosity ratio β, the Reynolds number Re, and the Weissenberg number Wi, on the flow behavior of the viscoelastic fluid are analyzed. [Results] The results show that the multiscale SPH method can stably and effectively simulate viscoelastic injection molding and can obtain micromolecular information that is inaccessible to traditional macroscopic closed-form constitutive equations. Different rheological parameters significantly influence the viscoelastic flow behavior: larger M and b_max values increase the steady values of molecular stretch and mean conformation thickness; larger β and Re values reduce the peak values of the first normal stress difference and weaken the overshoot phenomena; and larger Wi values yield larger peak values of the first normal stress difference, more oscillatory numerical values, smaller molecular stretch values, and greater mean conformation thickness values. [Conclusions] The multiscale method provides a new approach for simulating viscoelastic injection molding.
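On the micro side of the coupling, the Brownian configuration field idea evolves an ensemble of dumbbell configuration vectors per SPH particle under the local velocity gradient and averages them into an elastic stress. The sketch below uses the Hookean dumbbell limit in dimensionless form with illustrative parameters; the paper's bead-spring chains generalize this to M beads.

```python
import numpy as np

rng = np.random.default_rng(1)

def evolve_dumbbells(Q, kappa, dt, wi):
    """One Euler-Maruyama step of dQ = (kappa.Q - Q/(2 Wi)) dt + sqrt(dt/Wi) dW."""
    noise = rng.standard_normal(Q.shape)
    return Q + (Q @ kappa.T - Q / (2.0 * wi)) * dt + np.sqrt(dt / wi) * noise

def kramers_stress(Q, wi):
    """Polymer stress tau = (<Q Q> - I) / Wi, averaged over the ensemble."""
    return (Q.T @ Q / Q.shape[0] - np.eye(2)) / wi

Q = rng.standard_normal((2000, 2))          # 2,000 dumbbells per SPH particle
kappa = np.array([[0.0, 1.0],               # simple-shear velocity gradient
                  [0.0, 0.0]])
for _ in range(4000):
    Q = evolve_dumbbells(Q, kappa, dt=1e-3, wi=1.0)
print(kramers_stress(Q, wi=1.0))            # off-diagonal term feeds back to SPH
```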
MECHANICAL ENGINEERING
Analysis and experimental verification on the leakage of labyrinth seals under multiple factors
SUN Weiwei, LIU Yue, LI Yongjian, JIANG Jie
Journal of Tsinghua University (Science and Technology). 2024, 64(8): 1414-1423. DOI: 10.16511/j.cnki.qhdxxb.2023.26.061
[Objective] As a principal form of shaft seal, labyrinth seals offer several advantages, such as a simple structure, convenient disassembly and assembly, and the ability to withstand high pressure and other harsh working conditions. They are widely used in centrifugal compressors, aeroengines, expansion turbines, and other rotating machinery. The diversity of labyrinth seal structures and the complexity of real working conditions make the internal gas flow and heat transfer very complex. The literature identifies several factors that affect the leakage of labyrinth seals; however, comprehensive studies of the leakage characteristics under coupled multifactor effects have been scarce. To improve sealing performance and keep pace with the development of modern rotating machinery, detailed research on the structural optimization of labyrinth seals from multiple angles is crucial. [Methods] In this study, the flow field of the labyrinth seal was simulated with FLUENT software, and the influences of the geometric parameters and tooth structure on leakage were analyzed by computational fluid dynamics (CFD). A labyrinth seal experimental system was established that can realize experimental conditions up to a rotational speed of 10,000 r/min and a gas supply pressure of 0.6 MPa. [Results] The experimental results showed that, among the operating parameters, the pressure difference had a significant impact on leakage, while rotational speed had essentially none. Comparison with the mathematical model showed a maximum error of 3% relative to the experimental results. The orthogonal experiments indicated that the geometric parameters of the four tooth profiles influenced leakage to different degrees: the seal clearance had a remarkable effect, while the duty cycle, depth-width ratio, and tooth pitch had little effect. Analysis of the tooth shape parameters showed that inclined teeth produced less leakage and that reducing the inclination angle between the front and rear teeth further reduced leakage. Analysis of the number of sealing teeth revealed that increasing the number of teeth reduced leakage without increasing the axial length, and this effect became more pronounced with increasing pressure difference. When the total pressure difference was held constant, however, the effect of adding teeth on the leakage rate did not change significantly. [Conclusions] In this study, a CFD model of the labyrinth seal is established with FLUENT software, and the internal flow field distribution and leakage characteristics are revealed. Two geometric parameter definitions, the duty cycle and the depth-width ratio, are proposed; they effectively eliminate the coupling between parameters such as tooth height and tooth thickness and provide an important reference for parameter optimization design. A sealing experimental system that realizes high-speed, large-pressure-difference working conditions was built and can monitor leakage in real time.
Moreover, the impacts of the operating parameters, geometric parameters, number of seal teeth, and structural parameters on the leakage of the labyrinth seal are studied with the CFD model. This study is of great importance for the structural design of labyrinth seals.
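For orientation, the classic Martin formula gives a closed-form leakage estimate for an ideal-gas labyrinth seal and reproduces the qualitative trend reported above (leakage falls as teeth are added at a fixed total pressure difference). It is quoted here only as a baseline; the paper's results come from CFD, and the geometry numbers below are illustrative.

```python
import math

def martin_leakage(area_m2: float, p_in_pa: float, p_out_pa: float,
                   n_teeth: int, T_k: float = 300.0, R: float = 287.0) -> float:
    """Martin's ideal-gas leakage estimate for a straight labyrinth seal (kg/s)."""
    num = p_in_pa**2 - p_out_pa**2
    den = R * T_k * (n_teeth + math.log(p_in_pa / p_out_pa))
    return area_m2 * math.sqrt(num / den)

# More teeth, less leakage at the same 0.6 MPa -> 0.1 MPa pressure difference.
for n in (4, 8, 12):
    print(n, round(martin_leakage(1e-4, 6e5, 1e5, n), 4))
```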
Influence of impeller-guide vane axial distance on the performance and flow characteristics of the gas-liquid-solid multiphase pump
HU Liwei, LI Huichuang, YANG Jiahang, LIANG Ao, ZHANG Wenwu
Journal of Tsinghua University (Science and Technology). 2024, 64(8): 1424-1434. DOI: 10.16511/j.cnki.qhdxxb.2024.21.013
[Objective] Multiphase rotodynamic pumps are widely used in multiphase mixed-transport processes, including petrochemicals, agricultural irrigation, urban water supply, and drainage, owing to their compact structure and good operation under high-speed, high-sand-content conditions. The performance of these pumps is crucial to the transport capacity of the mixed-transport system, so improving their performance has long been a research interest. The impeller-guide vane axial distance of the gas-liquid-solid multiphase pump can seriously affect its transport performance but has rarely been researched. [Methods] Herein, multiphase rotodynamic pump models with impeller-guide vane axial distances (d) of 8, 10, 12, 14, and 16 mm were built via UG-NX. The inlet and outlet pipelines, as well as the impellers and guide vanes, were meshed via ICEM-CFD and TurboGrid, respectively. Based on the Euler multiphase flow model, computational fluid dynamics (CFD) simulations were conducted to reveal the influence of d on the comprehensive performance, including the head, efficiency, pressure, gas void fraction (GVF), solid void fraction (SVF), vorticity, and vortex structure, of the gas-liquid-solid multiphase rotodynamic pump. [Results] The accuracy of the adopted numerical methods was verified through experiments. The numerical results revealed that as d increased from 8 mm to 16 mm, the head and efficiency of the pump showed an overall decreasing trend, declining by 0.45 m and 1.38%, respectively. As d increased from 8 mm to 10 mm, the head and efficiency declined by 0.21 m and 0.52%, respectively, recorded as performance plummet Ⅰ; as d increased from 10 mm to 14 mm, they declined by 0.09 m and 0.12%, respectively, recorded as performance moderation; and as d increased from 14 mm to 16 mm, they declined by 0.15 m and 0.74%, respectively, recorded as performance plummet Ⅱ. The change in d had a more significant influence on the flow state in the guide vane than in the impeller. As d increased, the pressure difference from the impeller inlet to the guide vane outlet decreased, the GVF at the trailing edge and the SVF near the pressure surface at the leading edge of the guide vane blades gradually increased, the vorticity in the pump increased, and the vortex structure remained prominent, decreasing the overall pump performance. [Conclusions] Increasing d reduces the head and efficiency of the multiphase pump and makes the internal flow more turbulent; however, an excessively small d strengthens the rotor-stator interaction. Therefore, in the optimization design of such pumps, d should be selected within the performance-moderation range.
Stiffness modeling technique for five-axis milling process system with thin-walled parts
ZHAO Tong, BIAN Pengxi, WANG Yongfei, ZHANG Yibo
Journal of Tsinghua University (Science and Technology). 2024, 64(8): 1435-1444. DOI: 10.16511/j.cnki.qhdxxb.2024.27.014
[Objective] The manufacturing industry faces increasing precision requirements for thin-walled components, such as turbine blades and casings, in aerospace engines. These components suffer from high material removal rates and poor machining performance, resulting in extreme changes in cutting forces during machining and the formation of chatter marks that degrade surface quality. The relative vibration between the cutting edge and workpiece surface is mainly governed by the milling forces and the stiffness of the various parts of the process system. Therefore, stiffness modeling and analysis of such process systems are crucial for increasing machining accuracy. [Methods] Drawing on domestic and international research on modeling machine tools, cutting tools, casings, and fixtures, simulation calculations and process system analyses were conducted under typical working conditions. A large amount of data was generated through hammering and vibration excitation experiments, and an eXtreme Gradient Boosting (XGBoost) algorithm was employed to establish a stiffness model, incorporating pose information, for a machine tool with a cradle-type AC rotary table. The machine tool stiffness model was integrated with the tool, workpiece, and fixture models obtained via the finite element method, using a stiffness series formula and spatial coordinate transformation, to develop the stiffness model of the process system, whose accuracy was increased through iterative updates. First, the harmonic responses of the tool, casing, and fixture were calculated through finite element simulation. Second, a suspended measurement device based on an electromagnetic exciter was proposed for measuring the dynamic stiffness of the machine tool. The device offers high excitation power, a high measurement signal-to-noise ratio, and the ability to measure in different machine tool poses; the dynamic stiffness of the machine tool in different poses was then measured with the exciter. After obtaining the stiffness models of the tool, fixture, and workpiece systems through the finite element method, the overall stiffness model of the process system was completed through stiffness series theory and spatial coordinate transformation. Finally, a calibration and iteration mechanism for the process system stiffness model was established: an XGBoost model was trained to estimate the dynamic stiffness of the process system from the milling forces and surface topography measured in milling experiments. An adaptive undersampling technique for discrete data with equal chord height was proposed to address the severe imbalance between the amount of dynamic stiffness data obtained through the abovementioned model and the vibration excitation test data. [Results] 1) Comparing the hammering experiment measurements with the simulation results, the maximum error of the dynamic stiffness obtained through finite element simulation was <10% and the average error was <6%, indicating that relatively accurate tool and casing models can be obtained through this simulation. 2) After measuring the dynamic stiffness of the machine tool in different poses with the electromagnetic exciter, the error rate of the machine tool stiffness model established using the XGBoost algorithm meets the 3σ criterion. 3) The average error of the XGBoost model in estimating the process system stiffness from milling forces and surface topography is less than 13%, with a maximum of no more than 18%. [Conclusions] The undersampled vibration excitation test data and the stiffness data obtained through cutting experiments were used to calibrate and iterate the stiffness model of the process system, ensuring that the error rate of the model meets the 3σ criterion.
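The pose-dependent stiffness regression is conceptually a supervised learning step; a stand-in sketch with the xgboost package is shown below. The features, the synthetic stiffness law, and the error metric are placeholders for the paper's excitation-test dataset, not a reproduction of it.

```python
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for pose-dependent dynamic stiffness: features are the
# A/C rotary-table angles plus one extra pose coordinate (all hypothetical).
X = rng.uniform([-90.0, 0.0, 0.0], [90.0, 360.0, 1.0], size=(500, 3))
stiffness = 80.0 + 10.0 * np.cos(np.radians(X[:, 0])) + 0.01 * X[:, 1]
y = stiffness + rng.normal(0.0, 1.0, 500)      # add measurement noise

model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X[:400], y[:400])                    # train on excitation-test samples
rel_err = np.abs(model.predict(X[400:]) - y[400:]) / y[400:]
print(f"mean error {rel_err.mean():.1%}, max error {rel_err.max():.1%}")
```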
VEHICLE AND TRAFFIC ENGINEERING
Research on the unsprung mass effect of in-wheel motor drives based on a half-vehicle model
WU Peibao, LUO Rongkang, YU Zhihao, HOU Zhichao
Journal of Tsinghua University (Science and Technology). 2024, 64(8): 1445-1455. DOI: 10.16511/j.cnki.qhdxxb.2024.21.016
[Objective] In-wheel motor drive has emerged as a promising innovation in electric-vehicle-powertrain configurations. In mainstream configurations of in-wheel motor drive units, power and transmission devices are rigidly connected to wheel hubs. However, this design increases the unsprung mass of the vehicle. The implications of this added mass on vehicle performance, especially in terms of ride comfort, have been a subject of debate. Existing research, encompassing experimental and simulation studies, presents different and sometimes contradictory conclusions. Notably, many simulations fail to consider the effects of motor positioning and resultant vibrations across different areas of the vehicle. [Methods] This study tries to address these concerns using a half-vehicle model approach. These models are established for a standard passenger vehicle and an in-wheel motor-driven variant, the latter modified by adding mass to the wheels of the former. To assess ride comfort, we focused on several dynamic performance indicators: body vertical acceleration, pitch angular acceleration, relative wheel dynamic load, and suspension travel. Using the frequency domain method allowed us to convert double-wheel excitations into single-wheel excitations, from which we derived the equivalent amplitude-frequency characteristics for our chosen indicators. This step was followed by determining the power spectral density of a random road profile, which in turn facilitated the calculation of the power spectral density and root-mean-square values of the performance indicators. Simulations were then performed to compare the performance of the in-wheel motor-driven vehicle with that of the traditional vehicle on a C-class random road over a speed range varying from 1 to 50 m/s (3.6-180 km/h). The analysis considered various factors, including body position, hub motor mass, and hub motor drive mode. By integrating the amplitude-frequency characteristics of our indicators, we were able to shed light on how increased unsprung mass influences vehicle dynamics. [Results] The results of our study can be summarized as follows: 1) In the vehicle speed range of up to 50 m/s, an increase in unsprung mass results in a larger wheel dynamic load and greater suspension travel. This, in turn, negatively affects road holding and suspension performance. 2) The impact of increased unsprung mass on body vertical acceleration varies with body position owing to the wheelbase-filtering property. Specifically, the front and rear body accelerations are exacerbated by the increased unsprung mass across all speeds. Furthermore, the vertical and pitch accelerations of the body centroid exhibit alternating patterns of increase and decrease throughout the speed range. In other words, these two indicators deteriorate at certain speeds but improve at others. 3) As the hub motor mass increases, the vertical and pitch accelerations of the body centroid intensify within the speed ranges where deterioration occurs. Conversely, within the speed ranges where improvements are noted, these accelerations diminish. 4) At typical speeds, vehicles with front-drive and four-drive hub motors experience significant increases in vertical and pitch accelerations of the body centroid owing to the added unsprung mass. The adverse effect is considerably less pronounced in vehicles equipped with rear-drive hub motors. [Conclusions] In summary, this study systematically reveals the influence of increased unsprung mass on vehicle ride comfort. 
By doing so, it aims to resolve the discrepancies and controversies found in previous research. The insights gained from this research serve as a valuable resource for informing the design of hub motor-driven vehicles.
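The frequency-domain workflow described above, from a random road power spectral density through vehicle transfer functions to root-mean-square responses, can be sketched compactly. The following Python fragment is a minimal illustration that uses a quarter-car model as a simplified stand-in for the paper's half-vehicle model; all vehicle parameters and the added hub-motor mass are invented for demonstration, and the C-class road roughness coefficient is the standard ISO 8608 value, not a figure from the study.

```python
import numpy as np

# Assumed quarter-car parameters (illustrative, not from the paper)
ms, mu = 400.0, 60.0      # sprung / unsprung mass, kg
ks, cs = 20e3, 1.5e3      # suspension stiffness N/m, damping N*s/m
kt = 200e3                # tire stiffness, N/m

def body_acc_rms(v, d_mu=0.0, Gq_n0=256e-6, n0=0.1):
    """RMS body vertical acceleration at speed v (m/s) on a C-class
    random road, with d_mu kg added to the unsprung mass (e.g., a hub
    motor). Gq_n0 is the road roughness coefficient in m^3 (ISO 8608,
    class C) at reference spatial frequency n0 in cycles/m."""
    mu_tot = mu + d_mu
    f = np.linspace(0.1, 50.0, 5000)          # frequency grid, Hz
    s = 1j * 2 * np.pi * f
    H = np.empty_like(f)
    for i, si in enumerate(s):
        # Frequency-domain equations of motion, solved for Zs/Q
        A = np.array([[ms * si**2 + cs * si + ks, -(cs * si + ks)],
                      [-(cs * si + ks), mu_tot * si**2 + cs * si + ks + kt]])
        b = np.array([0.0, kt])
        Zs, _ = np.linalg.solve(A, b)
        H[i] = abs(si**2 * Zs)                # road disp. -> body accel.
    # Temporal road PSD: Gq(f) = Gq(n0) * n0^2 * v / f^2
    Gq = Gq_n0 * n0**2 * v / f**2
    return np.sqrt(np.trapz(H**2 * Gq, f))    # RMS via PSD integration

# Compare baseline vs. 30 kg of added unsprung mass across speeds
for v in (10, 30, 50):
    print(v, body_acc_rms(v), body_acc_rms(v, d_mu=30.0))
```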
Two-vehicle cooperative lane change strategy for intelligent and connected buses in the mandatory lane change process of entry and exit
REN Hanxiao, LUO Yugong, GUAN Shurui, YU Jie, ZHOU Junyu, LI Keqiang
Journal of Tsinghua University(Science and Technology). 2024, 64(8): 1456-1468. DOI: 10.16511/j.cnki.qhdxxb.2024.21.019
Abstract
[Objective] The process of buses entering and exiting a station often involves mandatory lane changes, which can significantly impact traffic safety and efficiency. However, current research on lane change strategies for intelligent and connected buses relies on single-vehicle algorithms, which do not account for surrounding vehicles and struggle to handle inter-vehicle conflicts. These approaches also struggle to ensure the safety and success rate of mandatory lane changes during entry and exit, and they can easily disturb the traffic flow. To improve the performance of mandatory lane changes for intelligent and connected buses during the entry and exit processes, a two-vehicle cooperative lane change strategy based on the cloud control system (CCS) is proposed, which schedules buses and cooperating vehicles simultaneously. [Methods] This study designed a set of cooperative lane change strategies for the entry and exit processes. For the entry process, the road segment leading to the bus station was divided according to the distance to the station, and a decision-making model based on the lane change benefit criterion was established. A two-stage trajectory planning method was developed for lane change trajectories, comprising a longitudinal adjustment stage and a cooperative lane change stage. Optimization problems were formulated and solved in a receding-horizon manner to obtain the optimal trajectories, and a layered quadratic programming method was employed to improve the real-time performance of the algorithm. For the exit process, a rule-based decision-making method and a pre-deceleration rule were designed to suit the characteristics of the exiting motion of buses. Lane change trajectories were optimized considering the characteristics of bay-style bus stations to meet the station shape and bus mobility requirements. Finally, the NGSIM dataset was used to design typical scenarios for simulation and hardware-in-the-loop experiments, in which the effectiveness of the proposed strategy was tested. The proposed strategy was also compared with baseline methods: for the entry process, the minimizing overall braking induced by lane change (MOBIL) decision model combined with fifth-order polynomial single-vehicle planning; for the exit process, a rule-based decision model combined with the same planning method. [Results] The simulation and hardware-in-the-loop experiments under typical scenarios demonstrate that the proposed strategy can effectively adapt to realistic vehicle speed changes, assisting buses in completing lane changes during station entry and exit. Moreover, under the actual computing and communication environment of the cloud platform, the proposed strategy meets the requirements for real-time and effective calculation. In batch tests for station entry and exit, the lane change success rates of the proposed strategy were 87.4% and 82.4%, respectively, compared with 51.9% and 20.4% for the baseline methods. Evaluation metrics based on the following vehicle's speed, acceleration, and time to collision (TTC) also indicate that, compared with the baselines, the proposed strategy reduces the impact of mandatory lane changes on traffic efficiency while maintaining better safety. 
[Conclusions] Using the CCS to achieve centralized decision-making and trajectory planning for scheduling buses and cooperating vehicles simultaneously, the proposed strategy can effectively handle mandatory lane change under high traffic density conditions. The proposed strategy not only improves the success rate of mandatory lane changes for buses entering and exiting stations, but also ensures lane change safety and reduces the impact on upstream traffic. The proposed strategy is also applicable in cloud platforms under realistic computing and communication environments.
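The trajectory-planning step above rests on solving quadratic programs over a receding horizon. As a rough illustration of that building block (not the paper's layered two-vehicle formulation), the sketch below plans a longitudinal position profile that minimizes squared acceleration subject to endpoint position and speed constraints, solving the equality-constrained QP through its KKT system; all function names and parameter values are hypothetical.

```python
import numpy as np

def plan_longitudinal(x0, v0, xf, vf, N=50, dt=0.1):
    """Plan positions x[0..N] minimizing the sum of squared accelerations
    (second differences), with fixed initial/final position and speed.
    Solves the equality-constrained QP via its KKT linear system."""
    n = N + 1
    # Second-difference operator D: (D x)_k = x_{k+1} - 2 x_k + x_{k-1}
    D = np.zeros((n - 2, n))
    for k in range(n - 2):
        D[k, k:k + 3] = [1.0, -2.0, 1.0]
    H = 2.0 * D.T @ D / dt**4        # Hessian of the squared-acceleration cost
    # Equality constraints A x = b: endpoint positions and speeds
    A = np.zeros((4, n))
    A[0, 0] = 1.0                    # x[0] = x0
    A[1, 0], A[1, 1] = -1.0, 1.0     # (x[1] - x[0]) / dt = v0
    A[2, -1] = 1.0                   # x[N] = xf
    A[3, -2], A[3, -1] = -1.0, 1.0   # (x[N] - x[N-1]) / dt = vf
    b = np.array([x0, v0 * dt, xf, vf * dt])
    # KKT system: [[H, A^T], [A, 0]] [x; lam] = [0; b]
    KKT = np.block([[H, A.T], [A, np.zeros((4, 4))]])
    rhs = np.concatenate([np.zeros(n), b])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n]

# Example: close a 40 m gap in 5 s, starting and ending at 10 m/s
traj = plan_longitudinal(x0=0.0, v0=10.0, xf=40.0, vf=10.0)
print(traj[:5], traj[-5:])
```

In a receding-horizon scheme, such a QP would be re-solved at each control cycle from the newest measured state, with only the first part of each plan executed.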
Multimodal stochastic ridesharing user equilibrium model with tailored service
WANG Qianlian, MA Jie, WANG Wei, CHEN Jingxu
Journal of Tsinghua University(Science and Technology). 2024, 64(8): 1469-1481. DOI: 10.16511/j.cnki.qhdxxb.2023.26.053
Abstract
[Objective] Ridesharing is considerably reshaping urban transportation networks. It is considered an effective measure to alleviate traffic congestion and vehicle emissions and is supported by many governments and residents globally, as it encourages travelers with the same or similar origins and destinations to share vehicles, thereby reducing the number of on-road vehicles. Unlike traditional ridesharing services, which place as many ridesharing participants as possible inside ridesharing vehicles, tailored ridesharing prioritizes the comfort and convenience of travelers and predominantly offers one-to-one ridesharing services. However, the emerging tailored ridesharing mode makes traditional methods of predicting traffic flow ineffective. Therefore, it is crucial to investigate a traffic assignment problem to predict traffic flow and formulate traffic management policies. [Methods] First, we formulate a multimodal stochastic ridesharing user equilibrium (SRUE) problem, in which the network topology, flow constraints, and generalized travel cost functions are specified. Travelers can choose to be solo drivers, ridesharing drivers, riders, or public transport passengers in a network with tailored ridesharing and public transit services; the travel demands of both car owners and noncar owners are considered. The flow conservation and ride-matching constraints are formulated. Furthermore, a path-based generalized travel cost function is proposed for each mode, including time costs, inconvenience costs, ridesharing prices, compensation, and miscellaneous costs. The SRUE flow distribution principle states that travelers always select the alternative (a combination of a route and a mode) with the minimum perceived travel cost. Second, we formulate an equivalent variational inequality (VI) model for the SRUE problem, referred to as the VI-SRUE model, and demonstrate the equivalence, existence, and uniqueness of its solution. Stochasticity is handled by introducing a random perception error that follows the Gumbel distribution, ensuring that travelers' choice behavior conforms to the Logit model. The equivalence is proved by checking the Karush-Kuhn-Tucker (KKT) conditions and Slater's condition. Existence follows from the compact feasible solution set and the continuity of the VI-SRUE mapping, and the solution is unique because the mapping is strictly monotone under mild conditions. Finally, a globally convergent parallel self-adaptive projection (PSAP) algorithm is applied to solve the VI-SRUE model. The algorithm combines the K-shortest path method, network decomposition, and parallel computing techniques to avoid the memory overflow that large-scale networks may otherwise cause. [Results] Numerical experiments were conducted to assess the proposed model and algorithm, and a sensitivity analysis was performed on the Braess network. The following results were obtained: (1) Tailored ridesharing could effectively reduce the travel time of on-road vehicles and travelers. (2) Riders dominated the tailored ridesharing market owing to the high sensitivity of ridesharing flow to the coefficient of inconvenience (COI). (3) Appropriate bus fares could help public transport and tailored ridesharing further reduce traffic congestion. Moreover, experiments on the Sioux Falls network verified that the PSAP algorithm has excellent computational efficiency for solving large-scale SRUE problems. [Conclusions] In summary, this paper proposes a VI-SRUE model to predict the flow pattern of urban transportation networks with tailored ridesharing services, together with an efficient solution algorithm. The proposed VI-SRUE model describes the stochasticity associated with travelers' perception of transportation network information. The contribution of this paper is to establish a traffic assignment problem that simultaneously considers various travel modes and multiple types of travelers in the ridesharing network. Through the verification, solution, and analysis of the problem and its equivalent mathematical model, this paper clarifies the relations among multiple travel modes, providing an effective foundation for predicting traffic flow and formulating ridesharing management measures.
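The projection step at the heart of such VI solvers is simple to state. The fragment below is a minimal, generic projection iteration with a step-shrinking heuristic for a variational inequality VI(F, K) over a box feasible set; it illustrates the projection idea only, not the paper's PSAP algorithm, and the affine mapping and bounds are invented for the example.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def solve_vi(F, x0, lo, hi, tau=1.0, tol=1e-8, max_iter=10000):
    """Projection method for VI(F, K): find x in K with
    F(x)^T (y - x) >= 0 for all y in K. Iterate
    x <- Proj_K(x - tau * F(x)), shrinking tau whenever the
    fixed-point residual stalls (a simple self-adaptive rule)."""
    x = project_box(x0, lo, hi)
    r_prev = np.inf
    for _ in range(max_iter):
        x_new = project_box(x - tau * F(x), lo, hi)
        r = np.linalg.norm(x_new - x)
        if r < tol:
            return x_new
        if r >= r_prev:          # residual stalled: shrink the step
            tau *= 0.5
            continue             # retry from the same x with smaller tau
        x, r_prev = x_new, r
    return x

# Example: monotone affine mapping F(x) = M x + q with M positive definite
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -2.0])
x_star = solve_vi(lambda x: M @ x + q, np.zeros(2), 0.0, 10.0)
print(x_star)   # equilibrium point inside the box
```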
CARBON NEUTRALITY
Establishing energy usage profiles of urban residential households: A case study of the Beijing-Tianjin-Hebei region
LIU Xinnan, SONG Laihao, JI Yingbo
Journal of Tsinghua University(Science and Technology). 2024, 64(8): 1482-1491. DOI: 10.16511/j.cnki.qhdxxb.2024.22.027
Abstract
[Objective] With improving living standards, energy consumption in urban residential households has continuously increased. In 2020, the energy consumption during the operational phase of buildings accounted for 21.3% of the total energy consumption in China, with urban residential energy consumption accounting for 38.7% of that operational-phase consumption. Energy usage in urban residential households is highly complex and differs considerably among households. For sustainable energy saving and emission reduction, it is important to support household energy conservation by accurately analyzing and identifying the user characteristics of different households. Therefore, we develop an urban residential household energy usage profile model from the multidimensional perspective of user profiling theory. [Methods] We focused on urban residential households in the Beijing-Tianjin-Hebei region and established a labeling system for household energy usage profiling in six dimensions: household attributes, building features, household appliances, energy usage behaviors, energy consumption, and use of renewable energy; this labeling system included 18 indicators. A total of 351 valid household energy usage datasets were collected through surveys and semistructured interviews. A comprehensive search method was used to calculate the silhouette coefficients of the 18 indicators under different numerical combinations, and backward feature selection was used to filter the indicators based on their average silhouette coefficients. This process was terminated when discarding further indicators led to inconsistent results across different indicator sets; at that point, the silhouette coefficient of the household energy dataset clustering exceeded 0.5, and the remaining indicators constituted the optimal subset of household energy usage profile indicators. Finally, the optimal number of clusters k was determined using the elbow method, and the k-means algorithm was applied to cluster the urban residential households. The t-distributed stochastic neighbor embedding (t-SNE) dimensionality reduction method reduced the multidimensional data to two dimensions and visualized the distribution of the different urban residential household types, supporting the validity of the approach. [Results] (1) The optimal subset of energy usage profile indicators for urban residential households in the Beijing-Tianjin-Hebei region includes eight indicators: household population, building area, house use pattern, number of appliances, air conditioning behavior, annual electricity consumption, annual gas consumption, and number of solar energy devices. (2) The optimal number of clusters for the household energy dataset is four. (3) Using the optimal subset, the k-means clustering algorithm classifies urban residential households in the Beijing-Tianjin-Hebei region into four types: energy utilization quality pursuit, energy-saving potential, energy utilization regularity, and energy conservation and environmental protection. [Conclusions] By analyzing the characteristics of these four types of urban residential household energy usage profiles in the Beijing-Tianjin-Hebei region, we examine the energy-saving potential of households of the energy utilization quality pursuit, energy-saving potential, and energy utilization regularity types and propose energy-saving recommendations. The results provide new insights for relevant departments to understand the energy usage characteristics of urban residential households in the Beijing-Tianjin-Hebei region and to develop accurate energy conservation strategies based on household types.
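The clustering pipeline described above (select an indicator subset, choose k via the elbow method and silhouette coefficients, then run k-means and visualize with t-SNE) maps directly onto standard tooling. A minimal sketch with scikit-learn follows; the synthetic data matrix is an assumption standing in for the study's 351-household, 8-indicator survey data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Stand-in for the household-by-indicator matrix (assumed data)
X = np.vstack([rng.normal(c, 0.5, size=(90, 8)) for c in (0, 2, 4, 6)])
X = StandardScaler().fit_transform(X)      # put indicators on one scale

# Elbow method (inertia) plus silhouette check over candidate k
for k in range(2, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, round(km.inertia_, 1), round(silhouette_score(X, km.labels_), 3))

# Final clustering with the chosen k (four household types)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# 2-D visualization of the household types, as in the paper's t-SNE step
emb = TSNE(n_components=2, random_state=0).fit_transform(X)
```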
EU carbon border adjustment mechanism and international industrial landscape: Impact assessment based on a global computable general equilibrium model
LUO Bixiong, GU Alun, CHEN Xiangdong, ZUO Peng, WENG Yuyan, CHEN Yiming
Journal of Tsinghua University(Science and Technology). 2024, 64(8): 1492-1501. DOI: 10.16511/j.cnki.qhdxxb.2023.26.050
Abstract
[Objective] The legislative process for the European Union (EU)'s carbon border adjustment mechanism (CBAM) has been completed, and it officially became EU law in May 2023. The EU's CBAM imposes charges on imported products from selected industries based on their carbon emissions and the prevailing carbon prices in the EU emissions trading system through the issuance of “CBAM certificates.” Considering the intensified global competition in low-carbon development and the profound changes in the geopolitical energy landscape, the implementation of the EU's CBAM as a unilateral trade measure will affect the global economy, trade, and industry, making it a long-run focal point in the competition among major powers. Existing studies focus mostly on the impact of the EU's CBAM on exports and pay less attention to the changes in the international industrial landscape driven by the industrial output changes that the mechanism induces. Furthermore, most existing studies perform static analysis for a benchmark year and therefore fail to capture the impact of cumulative changes over time and of the evolution of CBAM rules on the future economy, trade, and industry; analysis of the dynamic effects of the EU's CBAM remains limited. In addition, most studies assume carbon prices exogenously and are thus unable to simulate the endogenous trends of carbon prices and their impacts within major economies under the latest low-carbon transition policies. [Methods] This research employs the China-in-global energy model (C-GEM) developed by Tsinghua University to simulate and analyze the impact of the EU's CBAM. C-GEM is a computable general equilibrium model that effectively represents the interlinkages and interactions between different sectors of the economy and the energy system, allowing assessment of the economic impact of climate policies on major economies. C-GEM is a global multiregional model that can evaluate the effects of the EU's CBAM on the EU and other economies while analyzing the changes in the global industrial landscape from a global perspective. Furthermore, C-GEM is a recursive dynamic model that can simulate the medium- and long-term emission reduction targets of various economies and analyze the future implications of the EU's CBAM. The EU's CBAM is depicted in the model as follows: First, the CBAM tax rates are calculated on a value-at-the-border basis, using the value-based carbon intensity, trade values, and endogenous carbon prices from the C-GEM. Second, these tax rates are applied to the EU's import sectors in the model. Finally, the EU's CBAM is made dynamic, with assumptions regarding the covered sectors, sectoral coverage ratios, emission types, and other factors at different time points. [Results] The simulation results from the C-GEM revealed the following: (1) Owing to its substantial exports of steel and other products to the EU, Russia was heavily affected, with projected GDP changes of -0.12% and -0.32% in 2025 and 2030, respectively, while the EU increased its GDP by producing substitute domestic products. (2) Russia experienced the largest decline in total exports, with projected changes of -0.86% and -2.48% in 2025 and 2030, respectively, while the EU benefited considerably from the exports of specific CBAM-related industries. (3) The output of industries such as steel, nonmetallic minerals, nonferrous metals, and chemicals in economies such as Russia, Turkey, and China declined to varying degrees, with the chemical and steel industries more affected than others. (4) The international market shares of key industries in developing economies such as Russia and China mostly declined and were replaced by increased market shares for related industries in developed economies such as the EU; the percentage increase in the EU's market share considerably exceeded the percentage decrease in other economies. However, China's nonferrous metals industry showed a trend of further increasing its international market share, reflecting its competitive advantage. [Conclusions] The implementation of the EU's CBAM has heterogeneous impacts on various economies at different times and reshapes the existing international industrial landscape. There is a trend of global key-industry output shifting from developing economies highly dependent on carbon-intensive product exports toward developed or more competitive developing economies. To respond to the EU's CBAM, China needs to strengthen multilateral cooperation and proactively address the unilateral trade measures taken by the EU and the United States, optimize its industrial and trade structures and promote green and low-carbon development, accelerate the development of the national carbon market and improve domestic carbon pricing mechanisms, and actively participate in the formulation of international standards and rules in the areas of climate change and trade.
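The value-at-the-border tax rate construction described in the methods can be made concrete with a few lines of arithmetic. The sketch below is a hypothetical, deliberately simplified version of such a calculation (a value-based carbon intensity times a carbon price, scaled by sectoral coverage, yields an ad valorem rate); the sector names, intensities, coverage shares, and price are invented for the example and are not C-GEM inputs or outputs.

```python
# Illustrative CBAM-style ad valorem tax rate (assumed inputs).
# value_intensity: embodied emissions per unit export value, tCO2 / USD
# carbon_price:    allowance price in the importing region, USD / tCO2
def cbam_ad_valorem_rate(value_intensity, carbon_price, coverage=1.0):
    """Ad valorem border charge: the share of border value paid as the
    carbon levy, scaled by the covered share of emissions."""
    return value_intensity * carbon_price * coverage

# Hypothetical sectors: (tCO2 per 1000 USD of exports, coverage share)
sectors = {"steel": (1.8, 1.0), "chemicals": (0.9, 0.5)}
eu_carbon_price = 90.0   # USD/tCO2, assumed

for name, (intensity_per_kusd, cov) in sectors.items():
    rate = cbam_ad_valorem_rate(intensity_per_kusd / 1000.0,
                                eu_carbon_price, cov)
    print(f"{name}: {rate:.1%} of border value")
```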
One-dimensional numerical analysis of CO2 capture by desublimation in an isopentane spray tower
JIN Xin, WANG Bing
Journal of Tsinghua University(Science and Technology). 2024, 64(8): 1502-1508. DOI: 10.16511/j.cnki.qhdxxb.2024.26.011
Abstract
[Objective] Carbon capture technology is a focal point in the realm of carbon capture, utilization, and storage. Enhancing CO2 capture efficiency and reducing energy consumption are pivotal for the viability of industrial applications and the attainment of the “carbon peaking and carbon neutrality” objective. Cryogenic CO2 capture by desublimation is a post-combustion capture technology with the advantages of a high CO2 capture rate, environmental friendliness, and the production of high-purity CO2 products. Consequently, it holds substantial potential for both academic research and industrial applications. Nevertheless, conventional CO2 desublimation capture methods using solid media present limitations, including challenges in collecting and removing solid CO2, compromised heat transfer between solid media and gaseous CO2, and corrosion issues. Although utilizing liquid media for desublimating and capturing CO2 can overcome these challenges, pertinent research remains insufficient. [Methods] This study employs the cryogenic carbon capture method, utilizing liquid media to desublimate CO2. It establishes a one-dimensional model of the isopentane spray tower to examine the temperature and CO2 concentration fields within the tower. The aim is to elucidate the relationships and physical mechanisms governing the tower's overall CO2 capture rate in relation to the initial conditions of the inlet gas, the isopentane droplets, and the spray tower settings. [Results] Numerical results from the one-dimensional isopentane spray tower model revealed the following: (1) The temperature variation of the isopentane droplets was minimal and occurred primarily around the gas inlet area, indicating that desublimation was contingent upon the CO2 concentration field and mass diffusion. (2) The temperature of the CO2 gas mixture changed significantly throughout the tower at a constant rate, highlighting that the gas temperature field was dominated by thermal convection, with negligible effects of droplet desublimation on gas temperature. (3) The initial diameters and temperatures of the isopentane droplets significantly affected the spray tower's overall CO2 capture rate. Initial droplet diameters smaller than 2.0 mm and initial temperatures below 150.00 K resulted in a CO2 capture rate exceeding 90% for a 2.0-m-high spray tower, validating the efficacy and efficiency of the isopentane spray tower in cryogenic CO2 capture. (4) The spray tower's overall CO2 capture rate was influenced by the tower height, the initial velocity and temperature of the isopentane droplets, and the inlet gas velocity. The efficiency of the desublimation process depended strongly on the heat transfer efficiency and contact time between the isopentane droplets and the CO2 gas mixture. [Conclusions] Through numerical simulation and investigation of the temperature and CO2 concentration fields within the isopentane spray tower, this study identifies and analyzes the factors influencing the tower's CO2 capture rate and the pertinent mechanisms of CO2 desublimation on liquid droplets. Additionally, it demonstrates the effectiveness of the isopentane spray tower in capturing CO2, emphasizing the substantial potential of cryogenic CO2 capture using liquid spray in the field of carbon capture.
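To make the structure of a one-dimensional tower model concrete, the sketch below integrates a toy steady-state species balance along the tower height: gas rises while CO2 is removed at a rate proportional to the local concentration and the droplet surface area per unit volume. Every closure here (the mass-transfer coefficient, droplet number density, velocities) is a placeholder assumption chosen only to mimic the shape of such a model, not the paper's governing equations or parameter values.

```python
import numpy as np

# Placeholder parameters (illustrative only, not from the study)
H      = 2.0       # tower height, m
v_gas  = 0.5       # gas upward velocity, m/s
d_drop = 1.0e-3    # droplet diameter, m
n_drop = 5.0e6     # droplet number density, 1/m^3 (assumed)
k_m    = 4.0e-2    # effective mass-transfer coefficient, m/s (assumed)

# Specific droplet surface area per unit tower volume, 1/m
a = n_drop * np.pi * d_drop**2

# March the gas CO2 concentration c(z) upward: v dc/dz = -k_m * a * c
N  = 2000
dz = H / N
c  = 1.0                               # inlet concentration (normalized)
for _ in range(N):
    c += dz * (-k_m * a * c / v_gas)   # explicit Euler step

capture_rate = 1.0 - c                 # fraction of inlet CO2 removed
print(f"overall CO2 capture rate: {capture_rate:.1%}")
```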
MEDICAL EQUIPMENT
Study on quantification performance evaluation of a domestic SPECT/CT system
JIANG Nianming, LIU Fan, CHENG Li, GAO Lilei, LIU Hui, LIU Yaqiang
Journal of Tsinghua University(Science and Technology). 2024, 64(8): 1509-1515. DOI: 10.16511/j.cnki.qhdxxb.2023.26.051
Abstract
[Objective] Single-photon emission computed tomography (SPECT) has long been considered a nonquantitative imaging modality because, on its own, it cannot perform attenuation correction. To acquire quantitative SPECT images, Beijing Novel Medical Equipment Co., Ltd. developed a domestic SPECT/CT system, the Insight NM/CT Pro. It is equipped with dual digital detectors and supports various scanning geometries. In addition to better spatial resolution, the system can correct attenuation effects in SPECT and realize quantitative reconstruction using CT. In this study, we evaluated the quantitative imaging accuracy of this domestic SPECT/CT system and compared it with that of an advanced foreign system. [Methods] To obtain quantitatively correct results, physical effects, including collimator blurring, object attenuation, scatter, and radionuclide decay, were modeled and corrected in the reconstruction algorithm. Moreover, a large cylindrical phantom was employed to obtain the calibration factor of the system so that the reconstruction result could be converted into a quantitative image in Bq/mL. The performance of CT-based attenuation correction was then tested according to the testing method for SPECT imaging based on CT attenuation correction (YY/T 1546—2017). A cylindrical phantom with three cylindrical inserts corresponding to air, nonradioactive water, and bone was filled with a radioactive solution for imaging, and the biases inside the air, water, and bone regions of the reconstructed image were calculated to evaluate the performance of CT-based attenuation correction. In addition, the quantitative accuracy of the equipment was tested using the National Electrical Manufacturers Association (NEMA) torso phantom according to the standard for performance measurements of gamma cameras (NEMA NU 1—2018). Six fillable spheres with different diameters were set as targets for recovery evaluation, with a target-to-background concentration ratio of approximately 8∶1. A large volume of interest (VOI) was placed in the background region to calculate the quantitative bias of the reconstruction, and VOIs matching the sizes of the six spheres were drawn based on the registered CT to evaluate the recovery coefficients for different sizes. The radionuclide technetium-99m and a low-energy, high-resolution collimator were used with the Insight NM/CT Pro for all these tests, and the evaluation results were compared with those of the GE Discovery NM/CT 670. [Results] For attenuation accuracy testing, the biases inside the air, water, and bone regions for the Insight NM/CT Pro were 7.84%, 8.38%, and 4.66%, respectively, whereas the corresponding biases for the GE Discovery NM/CT 670 were 16.64%, 18.01%, and 11.02%, respectively. For quantitative accuracy testing, on the GE Discovery NM/CT 670, the quantitative recovery coefficients of the hot spheres with diameters of 13, 17, 22, and 28 mm were 31.04%, 51.36%, 59.91%, and 66.39%, respectively, and the background concentration bias was 10.84%. The Insight NM/CT Pro achieved higher recovery coefficients and lower bias: the recovery coefficients were 38.22%, 50.98%, 66.55%, and 71.32% for the 13, 17, 22, and 28 mm hot spheres, respectively, and the bias in the background region was 7.95%. [Conclusions] Phantom studies demonstrate that the domestic Insight NM/CT Pro imaging system obtains smaller biases after CT attenuation correction and achieves quantitative images with high accuracy. 
The reliability of the Insight NM/CT Pro system for quantitative imaging is validated in this study, and its performance is comparable to that of the advanced GE Discovery NM/CT 670 imaging system.
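The two headline metrics in this evaluation, the recovery coefficient and the quantitative bias, are simple VOI statistics over a calibrated image. A minimal sketch of how they might be computed follows; the array shapes, VOI masks, and activity concentrations are invented for illustration and are not the study's phantom data.

```python
import numpy as np

def recovery_coefficient(img, voi_mask, true_conc):
    """RC = measured mean concentration in the sphere VOI divided by
    the true concentration (both in Bq/mL after calibration)."""
    return img[voi_mask].mean() / true_conc

def background_bias(img, bg_mask, true_conc):
    """Relative quantitative bias (%) in a large background VOI."""
    return 100.0 * (img[bg_mask].mean() - true_conc) / true_conc

# Toy calibrated image: background at 1.0, one "hot sphere" near 8:1
rng = np.random.default_rng(1)
img = rng.normal(1.0, 0.05, size=(64, 64, 64))
zz, yy, xx = np.ogrid[:64, :64, :64]
sphere = (zz - 32)**2 + (yy - 32)**2 + (xx - 32)**2 < 6**2
img[sphere] = rng.normal(6.5, 0.2, size=sphere.sum())  # partial recovery

print("RC  :", recovery_coefficient(img, sphere, true_conc=8.0))
print("bias:", background_bias(img, ~sphere, true_conc=1.0), "%")
```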
Large field-of-view radioactive source location system based on a coded aperture and pinholes
LIU Yujie, DAI Tiantian, JIANG Nianming, HOU Yansong, WEI Qingyang
Journal of Tsinghua University(Science and Technology). 2024, 64(8): 1516-1520. DOI: 10.16511/j.cnki.qhdxxb.2024.27.003
Abstract
[Objective] Gamma-ray detection using a nuclear radiation locator is critical for monitoring, locating, and processing radioactive sources. In recent years, gamma cameras based on coded aperture imaging techniques have been extensively utilized to identify and monitor radioactive sources. However, these detectors have a limited imaging field. To accurately determine the specific location of radioactive sources, constant adjustment of the detection angle is required, which is often time-consuming. To expand the detection field, multiple coded aperture cameras can be used simultaneously, but this approach increases cost and equipment complexity. Some researchers have attempted to combine Compton and coded aperture imaging techniques. While the Compton camera can extend the field-of-view (FOV) to 4π, this method is complicated, costly, and limited to detecting high-energy rays; consequently, the combination of these two techniques proves inadequate when searching for low-energy sources. In this work, we propose a system and method for locating radioactive sources with a large FOV by combining a coded aperture with pinholes, which addresses the limited-FOV issue of the aforementioned systems. [Methods] The coded aperture component of the system uses a modified uniformly redundant array (MURA) as the mask. The base pattern order is 11, with a unit size of 3.3 mm×3.3 mm, leading to a total size of 69.3 mm×69.3 mm. The mask is 9 mm thick and made of tungsten. The detector section comprises a 26×26 NaI(Tl) array, where each crystal pixel has dimensions of 1.45 mm×1.45 mm×6.00 mm with a 0.2 mm gap between pixels, and the distance between the center of the coded aperture and the position-sensitive sensor is 77.5 mm. For the pinhole part of the system, a tapered pinhole with a center size of 4 mm is used; the pinhole is embedded in a shield with equally large pinholes on all four sides. For performance assessment, Monte Carlo simulation experiments were performed with the GATE software: a large-FOV radioactive source location system was constructed, and simulation data were produced. MATLAB was employed to process the simulation data, compute the system transmission matrix using the Siddon algorithm, and perform reconstruction using the maximum likelihood expectation maximization (MLEM) method. The projection and reconstruction results for point sources at various positions were compared and analyzed, providing a comprehensive assessment of the developed system, which combines coded aperture and pinhole imaging techniques to locate radioactive sources over a large FOV. [Results] The results indicate that the fully coded and partially coded FOVs of the coded aperture camera are 19.33° and 70.80°, respectively, and that the added pinholes extend the FOV of the system to 123.40°. The developed system attains an angular resolution of 2.95° within the coded aperture FOV and 6.30° within the extended pinhole FOV, effectively imaging a 10 mCi radioactive source at a distance of 3 m. [Conclusions] The developed wide-FOV radiation source location system and method effectively address the limited imaging field of the coded aperture camera.
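Given a system (transmission) matrix and a measured detector projection, the MLEM update used in such reconstructions is a one-line multiplicative formula: x ← x · Aᵀ(y / Ax) / Aᵀ1. A minimal dense-matrix sketch follows, written in Python rather than the MATLAB used in the study; the tiny random system matrix is stand-in data, not a Siddon-computed transmission matrix for this camera.

```python
import numpy as np

def mlem(A, y, n_iter=100, eps=1e-12):
    """Maximum likelihood expectation maximization for y ~ Poisson(Ax).
    A: (n_detectors, n_pixels) system matrix; y: measured counts."""
    x = np.ones(A.shape[1])            # flat nonnegative initial image
    sens = A.sum(axis=0)               # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                   # forward projection
        ratio = y / np.maximum(proj, eps)
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x

# Stand-in system: 200 detector bins, a 100-pixel image, one point source
rng = np.random.default_rng(2)
A = rng.uniform(0.0, 1.0, size=(200, 100))
x_true = np.zeros(100)
x_true[55] = 1000.0
y = rng.poisson(A @ x_true)
x_hat = mlem(A, y)
print("peak at pixel", int(np.argmax(x_hat)))   # expect 55
```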