
Objective: The efficient allocation and dispatch of fire rescue resources are crucial to urban public safety. Traditional approaches assume continuous spatial distribution of fire service coverage areas and give less consideration to the impact of real-time traffic conditions on rescue route selection and response times. This study aims to introduce and define the concept of "rescue enclaves"—areas that, although not directly adjacent to fire stations, can be effectively covered by them—and proposes a method to identify and calculate these spatially discontinuous coverage areas. Methods: This study proposed a method for identifying and calculating spatially discontinuous coverage areas by mapping points to grids. Using this method: (1) fire truck travel times were calculated using real-time traffic data, (2) geographic coordinates were converted to universal transverse Mercator (UTM) coordinates, (3) the region was divided into fine grids, (4) grid coverage status was determined, (5) transition grids were processed through neighborhood analysis, and (6) rescue enclaves were identified using a breadth-first search (BFS) algorithm. The CS-XX urban fire station in a Chinese city was selected as a case study to validate the method. In this case study, 3 818 points of interest were identified as rescue demand points across 49 evaluation periods in one day, generating 187 082 valid data samples. A target response time of 4 min was established, and an 80% reduction coefficient was applied to convert regular vehicle travel times to fire truck travel times. 
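The grid-and-BFS identification in steps (3)-(6) can be sketched as follows; the boolean coverage grid, 4-neighborhood connectivity, and function names are illustrative assumptions, not the paper's actual implementation:

```python
from collections import deque

def find_enclaves(covered, station_cell):
    """Flood-fill the boolean coverage grid with BFS (4-neighborhood) and
    return every covered component that is not connected to the station:
    those disconnected components are the candidate rescue enclaves."""
    rows, cols = len(covered), len(covered[0])
    seen = [[False] * cols for _ in range(rows)]
    regions = []
    for r in range(rows):
        for c in range(cols):
            if covered[r][c] and not seen[r][c]:
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and covered[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                regions.append(comp)
    # enclaves = covered components that do not contain the station's cell
    return [reg for reg in regions if station_cell not in reg]
```

Multiplying each enclave's cell count by the grid cell area would then give the enclave area.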
Results: The rescue enclave areas were successfully identified and calculated using the proposed method, through which the following key findings were revealed: (1) the dynamic coverage area of CS-XX was observed to vary from 1.83 to 4.57 km², with the minimum fire service coverage of 1.83 km² being recorded during the morning peak at 8:00, (2) the calculated coverage area trends were found to be consistent with the percentage of demand points accessible within 4 min, whereby the reliability of the method was validated, (3) critical rescue enclaves were identified near CS-XX, with enclave areas ranging from 0.25 to 1.12 km², accounting for 12.20%-27.53% of the total coverage area, (4) the rescue enclaves were observed to occasionally extend beyond the traditional coverage of 7.00 km² prescribed by standard area determination methods, and (5) coverage areas and rescue enclave areas were demonstrated to synchronously vary with traffic conditions, with traffic congestion leading to a significant reduction in their sizes. Conclusions: This study defines the concept of rescue enclaves and confirms through analysis that they constitute a substantial part of fire service coverage areas. The enclaves are systematically identified and quantified with the proposed algorithmic method, and such enclaves are found to comprise up to 27.53% of a fire station's coverage area. Integrating rescue enclaves into fire rescue jurisdiction planning can substantially improve the efficiency of resource allocation.
While real-time traffic conditions and differing flow efficiencies across route types are identified as the primary determinants of enclave formation, further investigation is warranted to clarify the precise mechanisms and contributing factors governing rescue enclave emergence and to establish quantitative metrics for rescue passage efficiency across diverse route configurations.
Objective: The safety of rescue personnel is a critical factor in determining the success of rescue operations. The ability to accurately identify the motion states of rescue personnel is key to ensuring their safety. However, monitoring their motion states in real time is challenging because of the complex and dangerous environments they operate in. This study aims to develop a method for identifying the motion states of rescue personnel based on triaxial motion data to enhance the efficiency of personnel safety monitoring during rescue missions. Methods: In this study, the MPU6050 sensor, an integrated triaxial accelerometer and gyroscope, was utilized to collect motion data from the leg and waist of rescue personnel. This sensor was selected based on its low power consumption, automatic sleep mode, and power management features, making it suitable for long-duration rescue tasks. Before data collection, the sensors were calibrated using zero-bias calibration to reduce errors and ensure data reliability. These sensors were strategically placed on the waist and leg of the rescue personnel to capture their overall body dynamics and detailed movements. This study analyzed the acceleration data under four different motion states: standing still, working in a small area, walking, and running. The data were analyzed using time-domain feature analysis, focusing on the standard deviation of acceleration to quantify the fluctuation and stability of the motion states. This study proposed a classification mechanism based on the sum of the standard deviations of waist and leg accelerations to distinguish between different motion states. Results: The experimental results demonstrated that the proposed method effectively distinguished between different motion states. In the standing-still state, the total acceleration was close to zero, indicating no movement.
In the state of working in a small area, the acceleration was greater than zero but remained within a small range with stable fluctuations. In the walking state, there was a significant difference between the waist and leg total accelerations, with the latter showing larger fluctuations and clear peaks and valleys. In the running state, both waist and leg total accelerations showed larger fluctuations, with the latter having a greater amplitude. The method showed high accuracy and stability in real-time monitoring of rescue personnel's motion states, effectively identifying the changes in motion states within a 2-min test period. The standard deviation analysis revealed a clear hierarchical distribution, indicating significant differences in acceleration fluctuations between different motion states. The sum of the standard deviations of waist and leg accelerations provided a reliable basis for distinguishing between the four motion states. Conclusions: This study has provided a reliable method for monitoring the motion states of rescue personnel, which can substantially improve the safety and efficiency of rescue operations. The method's ability to accurately and stably identify different motion states in real time makes it a valuable tool for ensuring the safety of rescue personnel in complex and dangerous environments. The findings of this study contribute to the development of more effective monitoring systems for rescue operations, potentially reducing the risk of accidents and enhancing the overall success rate of rescue missions.
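A minimal sketch of the described classification mechanism is given below; the threshold values and window handling are illustrative assumptions, since the abstract does not report the actual decision boundaries:

```python
import math
from statistics import pstdev

def total_accel(ax, ay, az):
    """Magnitude of the triaxial acceleration vector at each sample."""
    return [math.sqrt(x * x + y * y + z * z) for x, y, z in zip(ax, ay, az)]

def classify_state(waist_mag, leg_mag, thresholds=(0.05, 0.5, 2.0)):
    """Classify one time window by the sum of the standard deviations of
    the waist and leg total accelerations; the three cut-off values are
    placeholders, not the study's calibrated thresholds."""
    s = pstdev(waist_mag) + pstdev(leg_mag)
    t_still, t_work, t_walk = thresholds
    if s < t_still:
        return "standing still"
    if s < t_work:
        return "working in a small area"
    if s < t_walk:
        return "walking"
    return "running"
```

A constant-magnitude window classifies as standing still (zero standard deviation), while large alternating fluctuations push the sum past the top cut-off into running.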
Objective: The increasing complexity of disaster rescue systems and the frequent occurrence of natural disasters worldwide require efficient emergency command mechanisms to reduce life and property losses. Large-scale earthquakes present time-sensitive, multi-dimensional challenges that demand rapid decisions, precise resource allocation, and cross-departmental coordination. However, current research does not quantitatively analyze how emergency command capabilities affect rescue efficiency in dynamic disaster scenarios. This study develops a multi-agent simulation model based on the 6.2-magnitude Jishishan earthquake in China to assess the impact of command strategies on rescue operations and optimize emergency response systems. Methods: A multi-agent simulation model is developed using the NetLogo platform to meet the research objectives. It represents the emergency command and rescue system with four types of agents: the on-scene chief command agent, the on-scene support command agent, the emergency rescue force agent, and the disaster-affected agent. Each agent has specific behavioral patterns and interaction rules. The on-scene chief command agent oversees coordination, decision-making, and resource allocation. The on-scene support command agent manages task planning, resource scheduling, and real-time information feedback. The emergency rescue force agent performs rescue tasks, while the disaster-affected agent represents victims awaiting rescue. The simulation model is designed to reflect real-world scenarios, focusing on key variables such as information completeness, decision-making capability, resource allocation efficiency, and coordination success rate. This study analyzes scenarios under different conditions: (1) Information incompleteness: limited communication and fragmented data; (2) Resource scarcity: imbalanced demand-supply distribution; and (3) Feedback delays: lagging information updates and decision adjustments.
The rescue rate (R), defined as the ratio of rescued victims to total victims, is the primary performance metric. Comparative analyses adjust agent capabilities to identify optimal strategies. Results: The simulation results highlight key findings: (1) Critical role of command capabilities. The on-scene chief command agent's information organization and coordination control capabilities are crucial in accelerating early-stage rescue operations. When optimized, these capabilities increase R by 0.4 within the first five simulation ticks. The on-scene chief command agent's feedback adjustment capability becomes crucial in later stages, thus reducing task conflicts by 0.25 through dynamic strategy updates. (2) Scenario-specific optimization strategies. Under incomplete information conditions, improving the on-scene support command agent's resource scheduling speed increases R from 0.4 to 0.9 in 9 ticks. During resource scarcity, enhancing the on-scene support command agent's coordination ability minimizes allocation conflicts, thus achieving a stable R of 0.7 despite limited supplies. During feedback delays, enhancing the on-scene support command agent's task prioritization management reduces decision latency by 30%, thus increasing R from 0.5 to 0.68 in 12 ticks. (3) Role of lower-level command agents. This study emphasizes the significance of lower-level command agents, especially the on-scene support command agent, in enhancing rescue efficiency. Optimizing their resource scheduling and coordination abilities can significantly enhance the overall rescue operation, even under complex, challenging conditions. Conclusions: This study quantitatively confirms that effective emergency command is crucial for earthquake rescue efficiency. 
The on-scene chief command agent's information integration and macro-level coordination capabilities form the foundation for rapid response, while the on-scene support command agent's strategic optimizations are critical under resource constraints. A hierarchical, decentralized command structure is recommended to effectively balance decision-making authority with operational flexibility. Future research should combine dynamic disaster factors to evaluate the robustness of command strategies in unpredictable scenarios.
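The rescue rate metric above can be illustrated with a toy tick-based loop; the parameter names and values below are invented for illustration and do not reproduce the paper's NetLogo model:

```python
import random

def simulate(n_victims=100, n_rescuers=5, coord=0.8, ticks=30, seed=1):
    """Toy sketch of the rescue rate R = rescued / total victims per tick;
    'coord' stands in for the command agents' coordination success rate."""
    rng = random.Random(seed)
    rescued = 0
    history = []
    for _ in range(ticks):
        for _ in range(n_rescuers):
            # a rescuer saves one victim this tick if coordination succeeds
            if rescued < n_victims and rng.random() < coord:
                rescued += 1
        history.append(rescued / n_victims)  # R after this tick
    return history
```

In this toy model, raising `coord` shifts the R curve upward earlier, loosely mirroring the reported effect of stronger command capabilities on early-stage rescue.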
Objective: This study integrates social attributes of human behavior as an independent mechanism within the analytical framework of negative emotion propagation dynamics. It aims to provide a comprehensive understanding of how negative emotions spread across social networks and establish a scientific basis for effective public opinion management and crisis response. Methods: This study examines the distinct mechanisms of social reinforcement and individual regulation that differentiate the spread of negative emotions in social networks from that of traditional infectious diseases. A heterogeneous propagation threshold model, named the SI-SEIR (social reinforcement and individual regulation susceptible-exposed-infected-recovered) model, incorporates a dual influence mechanism of "social reinforcement-individual regulation". First, we develop a non-Markovian negative emotion propagation model, considering social reinforcement and variations in individual emotion regulation abilities. We then extend the edge-based compartmental theory to determine the theoretical outbreak threshold and final propagation scale, including both continuous and discontinuous phase transitions. Extensive numerical simulations are conducted based on data from the Weibo network, using the Hubei Province Red Cross Society incident at the early stage of the COVID-19 pandemic to validate the effectiveness of the SI-SEIR model. Results: The findings show that individual emotion regulation abilities and social reinforcement significantly impact the spread of negative emotions. Improving individuals' emotion regulation ability and decreasing social reinforcement intensity can help effectively reduce large-scale outbreaks of negative emotions during public crises. Moreover, the network's topological features significantly influence propagation outcomes.
When individuals have relatively uniform emotion regulation abilities, a higher average degree of the network substantially raises the outbreak threshold, thereby reducing the likelihood of widespread diffusion. Increasing network heterogeneity can help increase the outbreak threshold and reduce the spread of negative emotions. Conclusions: Considering both social reinforcement and individual emotion regulation mechanisms is critical for accurately modeling and predicting the dynamics of negative emotion propagation in social networks.
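The direction of these effects can be illustrated with a mean-field SEIR-style sketch; the rate constants are arbitrary, and the simple extra recovery term `reg` merely stands in for individual emotion regulation (the actual SI-SEIR model is non-Markovian and network-based):

```python
def simulate_seir(beta=0.3, sigma=0.2, gamma=0.1, reg=0.0, steps=200):
    """Discrete mean-field S-E-I-R update: stronger regulation (reg)
    speeds recovery and shrinks both the peak and the final size."""
    S, E, I, R = 0.99, 0.0, 0.01, 0.0
    peak = I
    for _ in range(steps):
        new_e = beta * S * I        # susceptible -> exposed
        new_i = sigma * E           # exposed -> infected (expressing emotion)
        new_r = (gamma + reg) * I   # infected -> recovered
        S, E, I, R = S - new_e, E + new_e - new_i, I + new_i - new_r, R + new_r
        peak = max(peak, I)
    return R, peak
```

With these placeholder rates, raising `reg` from 0 to 0.2 pushes the effective reproduction number down to the threshold and visibly suppresses the outbreak, consistent with the qualitative finding above.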
Objective: With increased globalization, multiple countries are involved in supply chains, forming complex supply networks. Frequent occurrences of natural disasters, geopolitical instability, and global health crises pose unprecedented challenges to traditional supply chain management methods. Local disruptions in the supply chain can spread internally, causing a series of chain reactions. Enhancing supply chain risk resilience and robustness has become a research focus for many scholars. The widespread use of the Internet has led to rapid information exchange between enterprises; an increasing number of scholars have recognized the importance of early warning information in preventing supply chain disruptions. Therefore, understanding how information affects the propagation of risks within the supply chain and maximizing the early warning function of information have significant practical implications. Moreover, the heterogeneity in the responses of enterprises to early warning information also needs attention. Methods: To capture the propagation of early warning information and disruption risks, a two-layer propagation model that couples risk and information is constructed. In this model, the upper layer represents the information layer and the lower layer represents the risk layer. The information of a disruption in a lower-layer enterprise is transmitted to upstream and downstream enterprises with a certain probability. After receiving the early warning information, an enterprise transitions into a conscious node and this transition is reflected in the upper layer network. In this model, there are five possible states for the nodes in the network. A microscopic Markov chain (MMC) method is used to analyze the state transition process between nodes and calculate the risk propagation threshold of the system. Furthermore, the key factors influencing the propagation of disruption risk are analyzed. 
An agent-based approach is used for case simulation to validate the model's effectiveness. Numerical analysis of the model reveals that the network structure, network size, extent of risk information propagation in the information layer, and the probability of disruption risk propagation are the key factors influencing the propagation of the risk. Financial data from Tesla's supply chain in China are also collected. In case simulation, an agent-based method is used to study the effects of the information layer network structure, information propagation rate, and risk propagation rate on the supply chain resilience. Results: The results show that for a low information propagation rate, the scale-free network structure accelerates information dissemination, allowing more enterprises to quickly obtain early warning information, thereby helping the supply chain resist risks and improve resilience. When the information propagation rate exceeds 0.4, the small-world network structures can propagate risks more efficiently because of their shorter average paths. Additionally, three disruption schemes are used to analyze system resilience, revealing that prioritizing the disruption of nodes with higher degrees has the greatest impact on the network, while deliberately attacking nodes with smaller degrees allows the supply chain to maintain higher operational efficiency. This finding suggests that maintaining the robustness of the key nodes in the supply chain is critical for enhancing the overall network resilience. Conclusions: Adjusting the supply chain network structure can help improve the risk resilience and robustness of the system. Enhancing risk awareness of enterprises and their response strategies can effectively improve supply chain resilience and suppress risk diffusion. Deliberate attacks on hub nodes with high degrees cause the greatest damage to the network system. 
Thus, this study provides theoretical support for supply chain management and can serve as a basis for decision-making to improve supply chain risk resilience and optimize management strategies.
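For intuition on the risk propagation threshold mentioned in the methods: in single-layer MMC epidemic theory the threshold is the recovery rate divided by the largest adjacency eigenvalue. The sketch below computes that quantity by power iteration; it deliberately ignores the information layer's suppressing effect (which raises the effective threshold), so it is a simplified stand-in for the paper's two-layer analysis:

```python
def lambda_max(adj, iters=300):
    """Largest adjacency eigenvalue via power iteration on A + I
    (the +I shift avoids oscillation on bipartite graphs)."""
    n = len(adj)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(adj[i][j] * v[j] for j in range(n)) + v[i] for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam - 1.0

def risk_threshold(adj, recovery=0.2):
    """Single-layer threshold beta_c = mu / lambda_max(A): above it,
    disruption risk can spread through the supply network."""
    return recovery / lambda_max(adj)
```

Because scale-free networks have larger leading eigenvalues than comparable small-world networks, this formula also hints at why network structure matters for resilience.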
Objective: An important issue that fiber-reinforced composites face in applications is their sensitivity to low-velocity impact. Low-velocity impact not only causes material penetration and perforation but also introduces internal damage in the forms of delamination, matrix cracking, and fiber breakage. This reduces the damage tolerance of fiber-reinforced composites, posing a potential threat to their protective performance. The research objective of this paper is to enhance the impact resistance through fiber hybridization and to explore the influence of the fabric structure on the impact resistance of carbon-Kevlar fiber intraply hybrid composites. Methods: In this study, carbon-Kevlar fiber intraply hybrid composites with different fabric structures were fabricated using the vacuum-assisted resin infusion (VARI) process. Through low-velocity impact tests using a drop hammer under 30- and 60-J energy, the reinforcement effect of intraply hybridization on the composites and the influence of the fabric structures on the mechanical properties and impact resistance of carbon-Kevlar fiber intraply hybrid composites were investigated. Low-velocity impact was simulated through an impact simulation model of hybrid fiber composites, and the damage mechanism of the specimens under impact loads was explored. The impact response behaviors and impact resistance of each specimen were comparatively analyzed using the load-displacement, load-time, and energy-time curves. Results: Results show that: (1) At an energy of 30 J, the specimen with the maximum peak load is the plain warp carbon weft Kevlar (P-CC/KK) specimen, reaching 3.30 kN, which is 72.77% and 106.25% higher than the maximum peak loads of plain carbon (P-C) and twill warp carbon weft Kevlar (T-CC/KK) specimens (with the minimum peak load), respectively.
At an energy of 60 J, the specimen with the maximum peak load is also P-CC/KK, reaching 3.68 kN, which is 97.85% higher than the maximum peak load of the P-C specimen (with the minimum peak load). (2) Comparing the maximum deflection of rebounded specimens, the T-CC/KK specimen has the largest deflection at 30- and 60-J energy, reaching 24.74 and 31.92 mm, respectively. (3) From the energy absorption curve, the energy absorption of the hybrid specimens is significantly increased compared with that of P-C. At an energy of 30 J, the plain carbon-Kevlar alternate (P-CK/CK) specimen just stops the impactor, absorbing the most energy of 29.54 J, which is 132% higher than the energy absorbed by P-C. At an energy of 60 J, the T-CC/KK specimen absorbs the most energy of 59.65 J, which is 324% higher than the energy absorbed by P-C, suffering greater bending deformation and internal damage. (4) The low-velocity impact simulation study based on the finite element model of the P-CC/KK specimens shows that the load-time and load-displacement curves from the tests and simulation results have a high degree of consistency, and errors in the peak loads of the curves and maximum deflections of the specimens are less than 10%. Conclusions: The low-velocity impact resistance of the carbon-Kevlar hybrid composites is significantly improved compared to that of pure carbon fiber. The energy absorption of the hybrid specimens is greatly enhanced compared to that of P-C. During impact, P-CC/KK has the highest peak load and T-CC/KK has the largest deflection. The warp carbon weft Kevlar (CC/KK) structure has better impact resistance than the carbon-Kevlar alternate (CK/CK) structure, and the plain weave structure has superior low-velocity impact resistance compared to the twill weave structure. The established finite element model has relatively high accuracy, and the numerical simulation model well reflects the in-plane damage of the composites under the nonpenetration condition.
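As a quick consistency check, the unreported baseline peak loads can be backed out of the stated percentage increases; this is simple arithmetic on the figures above, not additional test data:

```python
def baseline_from_increase(value, pct_increase):
    """Invert value = baseline * (1 + pct/100) to recover the baseline."""
    return value / (1 + pct_increase / 100)

# implied comparison peak loads (kN) from the reported figures
p_c_30 = baseline_from_increase(3.30, 72.77)      # plain carbon, 30 J
t_cckk_30 = baseline_from_increase(3.30, 106.25)  # twill hybrid, 30 J
p_c_60 = baseline_from_increase(3.68, 97.85)      # plain carbon, 60 J
```

The implied baselines (about 1.91, 1.60, and 1.86 kN) are internally consistent with the reported maxima.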
Objective: In densely populated venues, early warning signs of safety accidents triggered by portable flammable items (e.g., power banks) are often difficult to detect. This challenge necessitates effective security patrol inspections. Although recent research has focused on optimizing patrol routes, resource allocation, and deploying unmanned aerial vehicles, manual inspections remain the primary method. Moreover, visual obstructions caused by high crowd density and structural limitations further complicate the detection process. Consequently, the optimal allocation of security patrol personnel is critical for enhancing early warning sign detection. This paper addresses the impact of visual obstructions and investigates how the number of security patrol personnel influences detection efficiency in densely populated venues. Methods: This study examined the effect of the number of security patrol personnel on the efficiency of early warning sign detection and explored the underlying mechanisms. The investigation was conducted in three steps. First, behavior rules for pedestrians and security patrol personnel were established using the classical social force model. These rules were tailored to reflect the characteristics of densely populated venues and incorporated key factors such as interpersonal interactions and environmental constraints. Second, a dynamic visual field model for security patrol personnel was developed under three conditions: unobstructed vision, obstructed vision with non-overlapping blind spots, and obstructed vision with overlapping blind spots. By dynamically calculating the visual coverage area in real time, the model enabled adaptive adjustments of the visual field in response to changes in crowd density and obstructions. Third, eight sets of scenarios were designed based on variations in crowd density and typical spatial factors of densely populated venues. 
MATLAB was used to simulate detection times when one to eight security patrol personnel patrolled along random paths with early warning signs placed at random locations and their visual field trajectories recorded. Two hundred simulation runs were performed to ensure the robustness and reliability of the results. Results: The simulation shows that the dynamic visual field model effectively monitors crowd density within a specified range along the patrol direction. The model dynamically calculates the visual coverage area while accounting for both unobstructed vision and overlapping blind spots, thereby enabling adaptive adjustments to maintain optimal monitoring conditions. Moreover, the simulation data reveal that increasing the number of patrol personnel significantly reduces the time required to detect early warning signs. However, the improvement exhibits diminishing returns, following a negative power-law relationship as personnel numbers increase. In larger spaces or high-density environments, a fixed number of patrol personnel require substantially longer detection times, whereas a moderate increase in personnel yields more effective reductions in detection time. Furthermore, the simulations demonstrate that as the number of patrol personnel increases, the time required for the average visual coverage to reach its upper limit gradually decreases and eventually stabilizes. This finding suggests that the marginal benefit of adding extra personnel declines beyond a certain point, thereby limiting further reductions in detection time. The diminishing marginal effect is especially pronounced in smaller venues or environments with lower crowd density. Conclusions: This study clarifies how the number of security patrol personnel affects the efficiency of early warning sign detection under visual obstruction conditions and provides insights into the underlying mechanisms.
The findings help improve detection efficiency in densely populated venues and offer theoretical and methodological support for developing more effective patrol strategies.
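The reported negative power-law with diminishing returns can be illustrated as follows; the coefficients `a` and `b` are invented placeholders, since the fitted values are not given in the abstract:

```python
def detection_time(n_personnel, a=120.0, b=0.7):
    """Illustrative power law t = a * n**(-b) linking patrol personnel
    count to mean detection time (a, b are placeholder parameters)."""
    return a * n_personnel ** (-b)

times = [detection_time(n) for n in range(1, 9)]            # 1 to 8 patrollers
gains = [times[i] - times[i + 1] for i in range(len(times) - 1)]
```

Each additional patroller still shortens the detection time, but by a strictly smaller amount each time, which is the diminishing marginal effect the simulations describe.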
Objective: With the continuous expansion of power grid engineering and the rapid enhancement of information technology, the complexity of accident scenarios has increased significantly, leading to an explosive growth in monitoring data. This study aims to address the limitations of current power grid emergency plans in large-scale data querying and on-site guidance, and assist emergency decision-makers in quickly generating response plans, accurately allocating emergency resources, and promoting the digitalization of emergency plans. Toward this goal, this study proposes an improved method for constructing an ontology model in power grid emergency planning. Methods: First, the traditional seven-step ontology construction method is refined based on the Toronto virtual enterprise (TOVE) and skeletal methods. In the refinement process, an "application scenario analysis" phase is introduced in the initial step to enhance the relevance of the ontology construction. Additionally, after creating ontology instances, a "qualitative and quantitative analysis" phase is adopted to verify the scientific validity and feasibility of the ontology, thereby improving model quality. Subsequently, the improved method comprehensively implements the goal determination and construction processes of the ontology model. These processes include defining knowledge in the power grid emergency planning domain; evaluating the reuse of existing ontologies; clarifying key concepts from legislation, emergency scenarios, and enterprise planning systems; and establishing class hierarchies and attributes. Next, the Protégé tool is employed for model visualization. Taking the emergency plan for typhoon disaster events from a provincial power company as an example, a model was constructed comprising 39 ontology categories, 24 relationship categories, and 14 attribute categories, supplemented by 408 entities, 774 relationships, and 334 attributes.
Finally, the ontology model is applied to study the semantic network of emergency plans, designing a schema for the knowledge graph of emergency plans for power enterprises based on the resource description framework schema and web ontology language frameworks. The ontology model in the field of power grid emergency planning is visualized using Protégé. The richness and structural integrity of the model are evaluated using a HermiT 1.4.3.456 reasoning engine and the ontology quality analysis method. Results: The results indicate that the relationship richness of the model approaches 1, suggesting a rich relationship structure; the attribute richness value exceeds 1, indicating reasonable attribute settings; and the richness of major classes is 1, whereas that of minor classes is 0.9474, close to 1, demonstrating a high utilization rate of classes. Overall, the model exhibits good rationality and practicality. Conclusions: Empirical results demonstrate that this ontology model effectively addresses the issues of impracticality, poor usability, and low relevance often encountered in emergency plans. It significantly enhances the efficiency of emergency personnel in response and decision-making while also improving the expressiveness and digital construction of knowledge related to power grid emergency planning.
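The richness metrics referenced above are commonly computed with OntoQA-style formulas; the definitions below follow that convention and are assumptions that may differ in detail from the paper's ontology quality analysis method:

```python
def relationship_richness(n_relations, n_subclass_relations):
    """Share of non-inheritance relations among all relations;
    values near 1 indicate a rich relationship structure."""
    return n_relations / (n_relations + n_subclass_relations)

def attribute_richness(n_attributes, n_classes):
    """Average number of attributes per class; values above 1
    suggest reasonably detailed attribute settings."""
    return n_attributes / n_classes

def class_richness(n_instantiated_classes, n_classes):
    """Share of classes that actually have instances."""
    return n_instantiated_classes / n_classes
```

For example, with the 334 attributes and 39 classes reported above, attribute richness would exceed 1 under this formula.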
Objective: The Northwest Pacific Ocean is prone to typhoons; hence, high-resolution wind field data are essential for understanding their formation and aiding disaster prevention in China's coastal areas. Spaceborne synthetic aperture radar (SAR) is currently the only means for detecting large-scale, fine-grained typhoon wind fields. Traditional SAR techniques have limitations, such as challenges in determining wind direction and reliance on external inputs for wind speed. Most methods utilize single-polarization data, restricting their ability to capture a broad range of wind speeds. Although deep learning has shown promise in SAR wind field retrieval, research in this area remains preliminary and often neglects the benefits of dual polarization images or the specific challenges posed by typhoon conditions. Therefore, it is necessary to further explore the application potential of dual-polarization SAR data and deep learning technology in obtaining sea surface wind fields in typhoon sea areas covering a wide range of wind speeds. Methods: In this paper, we propose a wind field retrieval model based on deep learning using dual-polarization Sentinel-1 SAR data. We attempt to effectively capture spatial features at various positions in SAR images, enhance the significance of key features, mitigate interference from irrelevant information, and improve retrieval efficiency. We integrate attention mechanisms such as SKNet, ECANet, and CBAM into the ResNet18 architecture to develop an E-SKNet_wind model. The performance of the developed models under different polarizations (VV, VH, and VV+VH) is systematically evaluated through comparisons with the ResNet18_wind models and the results reported in the literature. Results: The statistical results show that the precision of both types of deep learning models (E-SKNet_wind and ResNet18_wind) is higher in VV+VH dual polarization data than in either VV or VH single polarization data. 
Further, the dual-polarization E-SKNet_wind model performs better than the dual-polarization ResNet18_wind model. For wind speed retrieval, the root mean square error (RMSE) of the dual-polarization E-SKNet_wind model is 1.49 m·s⁻¹, which is smaller than that of the dual-polarization ResNet18_wind model (1.86 m·s⁻¹). For wind direction retrieval, the dual-polarization E-SKNet_wind model has an RMSE of 19.03°, which is smaller than that of the dual-polarization ResNet18_wind model (22.38°). In addition, the dual-polarization E-SKNet_wind model performs better than nearly all traditional methods and most existing machine learning and deep learning wind speed retrieval models. However, a few models may outperform our model potentially because of the narrower wind speed range or the inclusion of external wind direction data as an input parameter. The results of a case analysis show that the retrieval results of the typhoon wind field from the dual-polarization E-SKNet_wind model follow a trend consistent with the wind field from the ERA5 reanalysis across almost all regions. However, in the regions characterized by exceptionally low wind speeds, such as near the center of the typhoon, there is a notable overestimation of wind speed values. This discrepancy results in a discontinuity in the retrieved wind speed profile. Future research and solutions are necessary to address these issues. Conclusions: The dual-polarization E-SKNet_wind model effectively leverages spatial texture features from dual-polarization SAR images to precisely extract wind speeds and directions without any external input, thus overcoming the limitations of single-polarization data. This model accurately extracts sea surface wind fields from SAR images for the Northwest Pacific Ocean typhoon sea area.
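The RMSE figures above follow the standard formula; the circular handling of wind direction below (differences folded into ±180°) is an assumption, as the abstract does not state how direction errors were wrapped:

```python
import math

def rmse(pred, truth):
    """Root mean square error for scalar wind speed retrievals."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(pred))

def angular_rmse(pred_deg, truth_deg):
    """RMSE for wind direction with differences folded into [-180, 180],
    so that 359 deg vs 1 deg counts as a 2 deg error."""
    diffs = [((p - t + 180.0) % 360.0) - 180.0
             for p, t in zip(pred_deg, truth_deg)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

Without the folding step, a retrieval of 359° against a truth of 1° would register a misleading 358° error.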
Objective: Traditional methods for the prevention of spontaneous coal combustion rely on chemical additives. However, these methods are often costly and potentially polluting. Microbial technology has gradually been applied to the prevention and control of spontaneous coal combustion owing to its environmental friendliness and high efficiency. This study aimed to explore a green and efficient method for inhibiting spontaneous coal combustion through the influence of microorganisms on the microcrystalline structure and oxidation characteristics of lignite and to provide new ideas for the diversification and greening of spontaneous coal combustion prevention strategies. Methods: The inhibitory effect of microorganisms on spontaneous coal combustion was analyzed from two aspects: microstructure and macroscopic oxidation characteristics. In the experiment, lignite samples from a coal mine in Inner Mongolia were used, and Sphingomonas polyaromaticivorans was selected for the bacterium-coal mixing experiment. X-ray diffraction (XRD) and Fourier transform infrared (FTIR) spectroscopy experiments were used to analyze the microstructural changes in lignite after microbial treatment. Moreover, alterations in the minerals and functional groups in the coal were revealed to gain insights into the effects of microbial treatment on coal microstructure. In addition, the key thermodynamic parameters, such as characteristic temperature, thermal weight loss, and thermal loss rate of coal samples in the oxidation and spontaneous combustion process, were analyzed through macroscopic thermal analysis techniques, such as thermogravimetric analysis (TGA) and differential scanning calorimetry.
Results: The experimental results show the following: (1) The microcrystalline structure of the coal sample tended to be orderly and graphitized after microbial treatment, and the average crystallite diameter and the number of effective stacked aromatic flakes were considerably reduced by 32.04% and 27.22%, respectively. These results indicate that microorganisms can reduce the unstable aliphatic hydrocarbon side-chain structure in coal through oxidation, decarboxylation, and enzyme catalysis and promote coal graphitization. (2) After microbial treatment, the fitting peak areas of hydroxyl groups, oxygen-containing functional groups, and aromatic hydrocarbons in the coal samples decreased substantially, with the fitting peak areas of hydroxyl groups and aliphatic hydrocarbons decreasing by 43.21% and 55.56%, respectively. This finding indicates that microorganisms strongly affect the mechanism of oxidative spontaneous combustion of coal by reducing the number of functional groups and the generation of free radicals. (3) After microbial treatment, the characteristic temperature points of the coal samples shifted to the higher-temperature region in the TGA curve; the changes in the maximum thermogravimetric rate temperature and burnout temperature were the most significant, increasing by 67.87 ℃ and 138.67 ℃, respectively. These results indicate that microorganisms greatly improve coal's thermal stability and resistance to oxidation and effectively inhibit its low-temperature oxidation process. Conclusions: Microbial treatment of coal samples changes their microcrystalline structures and promotes their orderliness and graphitization. The number of functional groups in coal and the coal's oxidation activity are considerably reduced. In addition, microorganisms improve the thermal stability and oxidation resistance of coal, slow its oxidation rate, and effectively inhibit its low-temperature oxidation process.
These results provide a scientific basis for inhibiting spontaneous coal combustion through microbial action and offer a reference for the greening and diversification of the corresponding prevention strategies.
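The average crystallite diameter reported from the XRD experiments is conventionally estimated with the Scherrer equation. The study does not state its exact procedure, so the following is a generic sketch; the shape factor K and the example inputs are illustrative, not the study's values:

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, K=0.89):
    """Crystallite size D (nm) from an XRD peak via the Scherrer equation:
        D = K * lambda / (beta * cos(theta))
    beta is the peak full width at half maximum (FWHM) converted to radians,
    and theta is half the diffraction angle 2-theta.
    """
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * math.cos(theta))
```

A broader diffraction peak (larger FWHM) yields a smaller crystallite size, which is consistent with the reported reduction in average crystallite diameter after treatment.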
Objective: To thoroughly investigate the impact of high temperatures at the bottom of in-situ combustion wells on casing integrity, a finite element model (FEM) was developed to simulate the casing-cement sheath-formation system. The model was constructed based on the principles of heat transfer and material mechanics, enabling an in-depth analysis of the temperature distribution and equivalent stress distribution under combustion temperatures ranging from 500 ℃ to 650 ℃. This approach aimed to elucidate the thermal and mechanical behaviors of casing materials under extreme high bottomhole temperature conditions, thereby providing a reliable basis for casing risk assessment and design optimization. Methods: High-temperature tensile tests were performed on casing materials specifically designed for in-situ combustion wells. The tests assessed the mechanical properties of these materials, including yield strength and tensile strength, under elevated temperatures typically encountered in in-situ combustion operations. Allowable stress for different temperature ranges was determined using the safety factor method, which is widely used in the engineering field to ensure the reliability of the casing structure. Additionally, a microscopic analysis of tensile fracture surfaces at various temperatures was conducted using scanning electron microscopy to investigate the temperature-induced casing failure mechanisms. Results: The results reveal significant changes in the fracture morphology of casing materials, highlighting a progressive degradation in the material properties as temperature increases. The Von Mises yield criterion is used as the failure assessment standard, allowing for a detailed comparison between equivalent stresses obtained from numerical simulations and experimental allowable stresses. This comparison identifies casing segments at a high risk of deformation and failure.
Numerical simulations demonstrate that the casing within oil-bearing formations experiences substantial thermal expansion. Meanwhile, due to axial constraints at the top and bottom of a wellbore, the casing undergoes significant axial compression, resulting in compressive deformation. Furthermore, in ordinary rock formations, the casing expands freely but is subjected to formation stresses, leading to pronounced necking deformation. Experimental results show that once the temperature exceeds 500 ℃, the yield strength and tensile strength of casing materials decrease dramatically. The tensile fracture surfaces exhibit typical ductile fracture characteristics, accompanied by severe oxidation, confirming the adverse effects of high-temperature exposure on the mechanical performance and structural integrity of the casing. This mechanical performance degradation poses a considerable risk of failure, especially in high-temperature zones near the ignition segment. Conclusions: This study further reveals that the equivalent stress distribution within the casing is positively correlated with the temperature field. High bottomhole temperatures significantly expand the risk zones for casing damage, though the failure locations remain confined to the casing segment near the igniter. The primary failure mechanism is identified as compressive deformation caused by thermal stress, with oil-bearing formation segments being particularly vulnerable to failure due to restricted casing expansion caused by surrounding constraints. These findings provide valuable insights into the failure mechanisms of the casing under high-temperature conditions and offer practical guidance for improving the casing design and reliability in in-situ combustion for heavy oil. By integrating numerical simulations with experimental tests, a more comprehensive understanding of the casing behavior under extreme thermal and mechanical conditions is achieved.
This approach not only enhances risk assessment strategies but also supports the development of more robust casing solutions capable of withstanding harsh environments encountered in in-situ combustion wells. Ultimately, this research contributes to the sustainable development and improved safety of heavy-oil extraction projects, reducing operational risks and ensuring long-term profitability.
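The failure check described above reduces to computing the Von Mises equivalent stress and comparing it with the allowable stress obtained by the safety factor method. A minimal sketch (the default safety factor below is illustrative, not the study's value):

```python
import math

def von_mises(s1, s2, s3):
    """Von Mises equivalent stress (MPa) from three principal stresses (MPa)."""
    return math.sqrt(0.5 * ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2))

def segment_at_risk(sigma_eq, yield_strength, safety_factor=1.25):
    """Flag a casing segment when the simulated equivalent stress exceeds
    the allowable stress, defined as yield strength / safety factor.
    The safety factor here is a placeholder assumption."""
    return sigma_eq > yield_strength / safety_factor
```

Applying this check segment by segment, with temperature-dependent yield strengths from the tensile tests, identifies the high-risk zones near the igniter.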
Objective: The accurate combustion simulation of wood is essential for improving fire safety in architectural and wildland contexts. Existing studies, which predominantly rely on fire dynamics simulators (FDS), face considerable limitations, particularly in terms of grid adaptability for curved geometries and the oversimplification of pyrolysis models. These limitations often result in substantial deviations from experimental data, thereby reducing the reliability of fire safety predictions. This study develops a comprehensive simulation framework for wood ignition using ANSYS Fluent to address the above gaps. This framework is validated through controlled experiments to improve its predictive accuracy for fire dynamics. Methods: The experimental phase of this study employed small cylindrical Finnish pine wood blocks, each with a diameter and length of 30 mm. The wood blocks had an average moisture content of 8.68% and an apparent density of 460.27 kg/m3. Thermogravimetric analysis (TGA) was conducted to quantify wood moisture content, which was found to be 8.87%, and pyrolysis conversion rate, which reached 0.745 at 500 ℃. Ignition tests were performed under a heptane flame, revealing mass loss ratios of 20%-50% within just 2 min. This remarkable mass loss was attributed to surface charring and the development of internal pyrolysis gradients. Combustion was further characterized by three distinct stages: an evaporation stage (Stage Ⅰ) marked by slow mass loss; a rapid pyrolysis stage (Stage Ⅱ) defined by accelerated degradation; and a slow mass decline stage (Stage Ⅲ), wherein the accumulation of a char layer inhibited further reactions. Postcombustion analysis highlighted the formation of a uniform 5 mm char layer, with internal conversion rate gradients showing a surface value of 19.14% and low internal values. These gradients were influenced by gas permeability and temperature distribution within the wood. 
In the numerical simulation phase, ANSYS Fluent was employed to model the complex multiphase processes involved in wood ignition. User-defined functions (UDFs) were developed to incorporate drying and pyrolysis. Wood components were simplified into moisture and organic matter, with porosity values of 0.676-0.679 derived from cell wall density measurements. Pyrolysis kinetics were modeled using a modified Arrhenius model, integrating parameters obtained from TGA. A virtual heat-exchange layer was introduced to adjust surface heating rates, effectively mimicking the insulating effect of water vapor observed in experiments. A rotating slip-grid method ensured the uniform heating of the wood sample. Meanwhile, the large eddy simulation was employed to capture the turbulent combustion of heptane. Radiation effects were modeled using the discrete ordinates approach, which was coupled with energy equations to account for phase changes and chemical reactions. Results: The key innovations of this study include the development of a spatially resolved conversion rate gradient model for char layers with thickness x, expressed as α_s(x) = e^(-0.28-x), and dynamic porosity adjustments to reflect gas transport limitations within the wood. Simulation results demonstrate strong agreement with experimental mass losses, thereby validating the proposed method. This study reveals that surface charring substantially decelerates pyrolysis by reducing gas permeability, whereas internal temperature gradients govern the cessation of reactions within the wood. Conclusions: This work establishes a robust Fluent-based framework for simulating wood ignition, effectively overcoming the limitations of FDS through advanced mesh resolution and detailed pyrolysis modeling. By integrating experimental data into UDFs, the method established herein enhances predictive capabilities for fire spread in structural and environmental fire scenarios.
Future research could focus on expanding the model to incorporate heterogeneous secondary reactions, thereby further bridging the gap between simulations and real-world fire behavior.
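The pyrolysis kinetics step can be sketched with the standard Arrhenius rate law; note that the study uses a modified Arrhenius model with TGA-fitted parameters, whereas the pre-exponential factor, activation energy, and reaction order below are illustrative placeholders only:

```python
import math

GAS_CONSTANT = 8.314  # J/(mol*K)

def pyrolysis_rate(alpha, T, A=1.0e8, Ea=1.2e5, n=1.0):
    """Arrhenius-type pyrolysis rate:
        d(alpha)/dt = A * exp(-Ea / (R * T)) * (1 - alpha)^n
    alpha is the conversion fraction and T the temperature in kelvin.
    A, Ea, and n are assumed values, not the fitted TGA parameters."""
    return A * math.exp(-Ea / (GAS_CONSTANT * T)) * (1.0 - alpha) ** n

def integrate_conversion(T, t_end, dt=0.01, alpha0=0.0):
    """Explicit-Euler integration of conversion at a fixed temperature T (K)."""
    alpha, t = alpha0, 0.0
    while t < t_end:
        alpha = min(1.0, alpha + pyrolysis_rate(alpha, T) * dt)
        t += dt
    return alpha
```

In a Fluent UDF the same rate expression would supply a volumetric source term per cell, with the local temperature and conversion taken from the solver rather than held fixed.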
Objective: Steel frame structures are highly vulnerable to collapse during fire, posing severe threats to occupant safety and property protection. As reliable methods for assessing structural performance under fire exposure, fire tests are essential for elucidating structural response mechanisms. However, constrained by budgetary limitations and logistical challenges, most contemporary fire tests on steel frame structures primarily focus on evaluating structural integrity under fire conditions, rarely progressing to the collapse stage. This limitation hinders a comprehensive understanding of steel structure collapse mechanisms under fire, emphasizing the urgent need for systematic investigation. Methods: To explore the collapse evolution mechanisms of steel frame structures under real-fire scenarios, a reduced-scale fire test was conducted on a two-story, three-span steel frame structure, subjecting the bottom-floor compartment to fire exposure. Real-time data were collected, including the temperature distribution within the structure, the thermal responses of critical structural components, and the three-dimensional displacement behavior of the structure. The experimental process was meticulously documented, capturing the entire progression from fire ignition to structural collapse. Results: The key findings of the test were the following: (1) The localized strong ventilation effect induced by the open doorway significantly accelerated the combustion rate of the wooden crib in the central room among the three rooms side by side, leading to asymmetric fire propagation from the central room toward adjacent areas; (2) The test frame experienced global collapse 82 min after fire exposure, with the maximum structural temperature reaching approximately 900 ℃ at the time of collapse.
The collapse occurred in three stages: the initial stage (0-45 min), where internal steel columns buckled under combined axial loading and rapid heating (12 ℃/min); the progressive stage (45-68 min), where load redistribution triggered a sharp increase in the axial forces of adjacent columns; and the final stage (68-82 min), where a progressive downward collapse occurred in the interior span, characterized by localized sunken failure. Conclusions: By analyzing the test results, the following conclusions are drawn: (1) The test validates that the failure of steel columns is the principal factor governing the fire-induced collapse of redundant steel frame structures. Furthermore, it elucidates the collapse mechanisms, which are driven by the sequential failure of critical structural components. (2) The observed fire development highlights a significant correlation between the fire propagation path within the building and ventilation conditions. When designing building ventilation systems, the influence of ventilation on fire field evolution must be thoroughly considered to reduce the potential for accelerated structural failure caused by localized strong ventilation effects. (3) The test reveals that the fire-resistance performance of the overall structural system surpasses that of individual components. To enhance fire-resistance design, structural redundancy should be strategically leveraged to mitigate vulnerabilities associated with a single load-transfer pathway. (4) The collapse mode of the tested steel frame exhibits typical characteristics of progressive sunken failure. The buckling of internal steel columns is identified as a critical indicator for collapse warning.
Objective: High-pressure water mist fire suppression systems have been widely used for liquid fuel storage fires in China. In the event of a leakage and subsequent fire accident involving liquid fuels, the fire can rapidly spread owing to the heat feedback within confined spaces. High-pressure water mist fire suppression systems are favored for their energy efficiency, environmental protection, efficient cooling, and rapid smothering; however, their effectiveness in extinguishing liquid fuel fires requires further investigation. Methods: This study explored the impact of high-pressure water mist at different flow rates on fire suppression for different types of oil pool fires through full-scale experiments. An experimental platform was designed and built specifically for high-pressure water mist fire suppression in confined spaces, focusing on transformer oil and gasoline pool fires to investigate the extinguishing effects of water mist on these fires. Cold spray experiments were carried out to assess water mist flux using a measuring cup collection method, which provided crucial data for fire suppression tests. Simultaneously, fire extinguishing experiments were carried out, with thermocouples arranged near the experimental oil pools and on the walls to analyze variations in key parameters such as plume temperature, oil temperature, and wall temperature. Cameras were also installed to record the combustion process and flame morphology. Results: The experimental results indicate the following: (1) Under identical flow rates, high-pressure water mist is far more effective at extinguishing transformer oil pool fires than gasoline pool fires. For gasoline pool fires, the water mist can control the fire's spread within the confined space; however, even after five minutes of continuous application, complete extinguishment is not achieved. Despite a decrease in the burning area, flame height, and oil temperature, combustion continues.
(2) Cold spray experiments reveal that water mist flux in the protected area increases with the flow rate of the high-pressure water mist. (3) The effectiveness of fire extinguishment is closely linked to the water mist flow rate. For transformer oil pool fires, higher water mist flow rates significantly shorten the extinguishment times. For gasoline pool fires, increased flow rates strengthen suppression effects but fall short of fully extinguishing the fire. (4) High-pressure water mist can provide continuous cooling to the oil and surrounding walls, with cooling efficiency improving as the water mist flow rate increases. Conclusions: The findings of this study provide valuable insights into the application of high-pressure water mist for fire suppression in confined spaces. This research offers important technical support for designing fire protection systems in critical areas of confined spaces, emphasizing the need to consider factors such as fuel type, water mist flow rate, and cooling efficiency.
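The measuring-cup method for assessing water mist flux reduces to a unit conversion: collected volume divided by the cup opening area and spray duration. A sketch with assumed units (the study does not specify its cup geometry, so the inputs below are hypothetical):

```python
def mist_flux(volume_ml, cup_area_cm2, duration_s):
    """Local water mist flux in L/(min*m^2) from a measuring-cup collection:
    the volume of water collected over a known cup opening area and spray
    duration. Inputs are millilitres, square centimetres, and seconds."""
    litres = volume_ml / 1000.0
    area_m2 = cup_area_cm2 / 1.0e4
    minutes = duration_s / 60.0
    return litres / (area_m2 * minutes)
```

For example, 100 mL collected in a 100 cm² cup over 60 s corresponds to a flux of 10 L/(min·m²).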
Objective: To meet the lightweight requirements of tethered unmanned aerial vehicles (UAVs) for high-altitude firefighting operations and limited-space deployments, the structural design of the airframe needs optimization. Typically, these UAVs are connected to fire trucks on the ground using separate cables and fluid supply pipes. This separation increases the overall weight of the system, complicates operation, and affects operational stability and reliability. This study focuses on the integrated design of the cable and fluid supply pipe, inspired by advanced cases both domestically and internationally; this integrated design causes the cable to heat the liquid in the fluid supply pipe, requiring an investigation into the temperature rise caused by this heating during liquid transportation. Methods: To analyze the influence of heat generated by the cable on the temperature rise of the foam extinguishing agent in the fluid supply pipe, numerical simulations were performed using the commercial computational fluid dynamics (CFD) software Fluent. The simulations employed the volume of fluid (VOF) model, a turbulence model, and a heat transfer model to simulate the fluid flow and heat transfer processes of a foam extinguishing agent in the pipe. Simulation results provided variations in the fluid phase, velocity, and temperature fields over time. Several selected moments (10, 25, 75, 100, 125, and 150 s) and typical positions (25, 50, 100, 150, and 200 m) were analyzed to assess the temperature rise of the foam extinguishing agent. The influence of different flow rates (100-400 L/min) and current-carrying capacities of the cable (420-600 A) on the temperature rise was investigated. Results: The results revealed that when the foam extinguishing agent flowed in the integrated fluid-electric pipe under edge-powered heating conditions, the fluid temperature at the same cross-section increased linearly along the axial direction of the pipe from the inner to the outer region.
When the extinguishing agent flowed upward to a certain location in the pipe, the fluid temperature at that location stabilized after experiencing a rapid increase. When the pipe length and the cable's current-carrying capacity were fixed, higher flow rates of the extinguishing agent led to lower temperature rises, underscoring a negative linear relationship between flow rate and temperature rise. This reflected the direct effect of the fluid flow process on the heat transfer efficiency. The maximum temperature rise, approximately 15 ℃, was observed at the lowest flow rate of 100 L/min. Conversely, when the fluid flow rate and pipe length were constant, greater current-carrying capacities of the cable led to higher temperature rises, reflecting a positive linear relationship. The highest temperature rise, approximately 11 ℃, occurred at a cable current-carrying capacity of 600 A. Conclusions: The heating effect of the cable on the foam extinguishing agent in the pipe does not significantly affect transportation efficiency and safety. However, further experiments are necessary to evaluate its specific effect on the extinguishing performance of the foam extinguishing agent. Our simulation results provide a theoretical foundation for the integrated design of the cable and fluid supply pipe in tethered UAV systems.
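The negative linear dependence on flow rate and the positive dependence on current-carrying capacity follow from a steady-state heat balance: if all Joule heat from the cable enters the fluid, ΔT = I²R′L/(ṁ·c_p). The sketch below uses water-like fluid properties and an assumed cable resistance per metre; none of these values come from the study:

```python
def steady_temp_rise(current_A, resistance_per_m, length_m, flow_L_per_min,
                     rho=1000.0, cp=4186.0):
    """Steady-state fluid temperature rise (K), assuming all Joule heat
    generated by the cable enters the fluid:
        dT = I^2 * R' * L / (m_dot * cp)
    rho (kg/m^3) and cp (J/(kg*K)) default to water-like values; the
    cable resistance per metre R' is an assumed input."""
    power_W = current_A ** 2 * resistance_per_m * length_m
    m_dot = rho * (flow_L_per_min / 1000.0) / 60.0  # volumetric flow -> kg/s
    return power_W / (m_dot * cp)
```

The expression makes the simulated trends explicit: quadrupling the flow rate quarters the temperature rise, while the rise grows with the square of the current.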
Objective: Flow velocity measurement for weak flame plumes faces several challenges due to the complex dynamics involved. Flame plumes, driven by buoyancy forces, are inherently turbulent, with the plume motion accompanied by the entrainment of the surrounding air. The boundary between the plume and the environment continuously evolves in space, making it difficult to capture the plume's true flow characteristics. Past plume velocity measurements have relied on intrusive techniques, using tools such as Pitot tubes, smoke probes, and hot-wire anemometers. These techniques disrupt the plume flow field and cannot accurately reflect the undisturbed temporal and spatial characteristics of the entire plume, thereby limiting their applicability for a detailed analysis of weak flame plumes. Methods: To address these challenges, we employed the schlieren imaging technique to visualize flame plumes. This nonintrusive visualization technique allowed the capture of the flow field induced by buoyancy forces at a high resolution. Industrial cameras were used to record the ignition process and flame plume dynamics at varying heights and oil pan diameters. By analyzing the schlieren images, we aimed to overcome the limitations of traditional measurement methods. In this study, we derived a simplified two-dimensional Navier-Stokes equation to develop an optimized optical flow (OF) algorithm tailored for velocity measurements in flow fields. The proposed algorithm was applied to the schlieren images of flame plumes, showing significant improvements over conventional OF methods. Results: The key advancements of the optimized algorithm are as follows. (1) Enhanced sensitivity and precision: The optimized algorithm produces smoother displacement fields and more uniform vorticity fields. This enables the detection of finer vortex structures that are often overlooked by conventional OF methods.
By improving the resolution and accuracy of the calculated flow field, the algorithm provides a more detailed representation of the flame plume's dynamics. (2) Rapid convergence: During the velocity calculation process, the optimized algorithm achieves rapid convergence. The energy residual after the first iteration is reduced to less than 10⁻², and the energy residual in the final OF field remains below 10⁻⁵. This indicates that the proposed algorithm achieves high accuracy in fewer iterations, making it computationally efficient. (3) Improved robustness in experimental validation: In flame plume velocity measurement experiments, the optimized algorithm demonstrates superior robustness compared with conventional OF methods. A dimensional analysis of the results shows a significant improvement in the fit between the predicted and measured values. Specifically, the coefficient of determination (R²) increases from 0.90 for the conventional OF method to 0.98 for the optimized algorithm. Additionally, the measured results are in close agreement with the Heskestad model results. While conventional OF methods show an average error range of −20% to 30% in the plume region, the optimized algorithm reduces this error range to −5% to 20%. This reduction highlights the enhanced accuracy and reliability of the optimized algorithm. Conclusions: Overall, the proposed algorithm provides a more stable, accurate, and efficient approach for measuring the velocity of weak flame plumes. By addressing the limitations of conventional OF measurement techniques and aligning more closely with theoretical prediction models, this study offers a valuable contribution to flame plume analysis. These findings pave the way for the improved understanding and modeling of fire dynamics, with potential applications in fire safety engineering and combustion research.
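As a point of reference for the conventional OF baseline the study improves upon, a minimal Horn-Schunck implementation is sketched below. This is the classical variational method, not the authors' Navier-Stokes-derived algorithm, and the periodic (wrap-around) neighbourhood averaging is a simplification:

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Classical Horn-Schunck dense optical flow between two grayscale
    frames I1 and I2. alpha is the smoothness weight; returns the
    per-pixel horizontal (u) and vertical (v) displacement fields."""
    I1 = np.asarray(I1, dtype=float)
    I2 = np.asarray(I2, dtype=float)
    Ix = np.gradient(I1, axis=1)   # horizontal intensity gradient
    Iy = np.gradient(I1, axis=0)   # vertical intensity gradient
    It = I2 - I1                   # temporal difference
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        # Neighbourhood average of the current flow estimate (periodic edges)
        u_bar = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        v_bar = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                 np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
        # Jacobi update from the Horn-Schunck Euler-Lagrange equations
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```

On a linear intensity ramp translated one pixel to the right, the iteration converges to a uniform horizontal flow of one pixel per frame, which is the kind of baseline behaviour the study's optimized algorithm refines for turbulent plume fields.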
Objective: To better understand the influence of intercalated modified hydrotalcite on the ultraviolet resistance of intumescent fire-retardant coatings on wooden structures, this study explores the flame-retardant properties of intumescent fire-retardant coatings with different mass fractions of hydrotalcite or intercalated modified hydrotalcite after ultraviolet exposure of different durations. The analysis focused on the performance and mechanisms of intercalated modified hydrotalcite. Methods: Experimental specimens were prepared, including a control specimen (#0), five groups of fire-retardant coating specimens containing different proportions of hydrotalcite, and five groups containing different proportions of UV326-intercalated modified hydrotalcite. The specimens were subjected to ultraviolet (UV) irradiation for different durations. Their properties were analyzed using a cone calorimeter, scanning electron microscopy (SEM), thermogravimetric analysis (TGA), and Fourier-transform infrared spectroscopy (FTIR). Key fire-related parameters, such as ignition time, heat release rate, total heat release, effective heat of combustion, and total smoke release, were studied to assess performance differences. Results: Cone calorimeter tests revealed a significant improvement in the flame-retardant and UV-resistant properties of specimens containing hydrotalcite or intercalated modified hydrotalcite compared to the control group (#0). When the additive ratio was 2.7%, the aging rate decreased from 86% (#0) to 48% (hydrotalcite specimens) and 45% (intercalated modified hydrotalcite specimens). SEM analysis of the aged UV#3 specimen (with 2.7% intercalated modified hydrotalcite), along with its expanded char layer, showed that intercalation modification effectively minimized hydrotalcite agglomeration in the coating.
During the early stages of the UV aging process, the expanded char layer exhibited excellent performance, with a dense and thick structure, numerous pores, and a well-defined honeycomb-like pattern. With the extension of UV irradiation time, however, the char layer became increasingly fragmented and irregular. After 15 days, the control specimen (#0) was nearly incapable of forming a complete expanded layer. FTIR analysis indicated that UV aging treatment caused the decomposition of ammonium polyphosphate and polyacrylate resin in the flame-retardant system. The UV resistance of the intumescent fire-retardant coating with intercalated modified hydrotalcite was significantly enhanced. The inclusion of intercalated modified hydrotalcite caused the appearance of the C—N—C absorption peak, indicating the formation of a cross-linked structure between the amino groups in hydrotalcite and pentaerythritol. Conclusions: This study underscores the importance of optimizing the formulation and application of fire-retardant coatings to effectively reduce the fire risk of wooden structures. Properly designed coatings can slow the flame spread and control local temperatures, but inappropriate coating choices or application strategies may lead to risks such as UV aging and uneven fire protection. These findings provide important technical insights for developing fire safety measures, highlighting the need to tailor fire-retardant coating strategies to wooden structures. This study offers valuable scientific evidence for improving fire safety management in wooden structures and holds practical relevance for improving the design and application of fire-retardant coatings.