
Significance: Empowering machines to understand human emotions remains one of the primary challenges in developing artificial intelligence (AI). The affective brain-computer interface (BCI), which decodes emotional states from brain signals, is an emerging field combining psychology, neuroscience, and AI. Brain signals are difficult to deliberately control, contain rich emotion-specific information, and provide a promising physiological basis for developing computing systems that support continual emotion monitoring. Since its inception, affective BCI research has required close collaboration across various disciplines: it depends on information science for feature engineering and algorithm development, psychology for theoretical frameworks of emotion, and neuroscience for revealing the neural mechanisms underlying emotional processes. This demand for multidisciplinary cooperation forms the core focus of this review. Specifically, this paper focuses on the ways in which psychology- and neuroscience-based insights can inspire and advance affective BCI research. Progress: We summarize current progress at three levels: theoretical, technical, and applied. At the theoretical level, recent advances in affective science offer new perspectives for shaping affective BCI paradigms. The traditional discrete and dimensional frameworks have laid the groundwork for emotion decoding but often overlook positive emotions and the dynamic intensity of affective experiences. Recent emotion theories emphasizing refined positive emotions, mixed emotions, and context-dependent emotions provide valuable directions for improving emotion representation. Affective computing should align with these developments, integrating them into computational models to enhance ecological validity. In turn, affective BCI research may also contribute to psychology by offering evidence to test and refine emotion theories, fostering reciprocal progress across disciplines. 
At the technical level, neuroscience provides crucial insights for building more robust affective BCIs. Findings on emotional valence lateralization and distributed emotion-associated brain representations can inform the design of models that better capture the complexity of emotional processing. Moreover, inter-subject brain synchronization research has revealed mechanisms that enhance model generalizability across users, suggesting that incorporating neuroscientific findings can substantially improve the performance and reliability of affective BCIs. At the application level, affective BCIs are expanding beyond emotion recognition toward understanding emotion-related individual differences. Variability between individuals—often treated as noise—may instead offer meaningful information about personality traits or mental health conditions. In the long term, the goal of affective BCI systems may evolve from accurately identifying emotions to comprehensively understanding each individual's psychological tendencies and dynamic affective patterns across multimodal neural and behavioral data. We advocate for stronger integration between affective BCI technologies and practical domains. Such integration allows practical demands to drive technological development, ensuring that affective BCI remains human-centered. Conclusions and Prospects: Finally, we discuss the technical challenges of affective BCI, including extending algorithms from controlled laboratory settings to real-world scenarios, advancing sensor technology for more convenient and reliable brain-signal acquisition, and leveraging large models to enhance affective BCI performance. In particular, we emphasize the vital role of ethical considerations: as affective BCIs move from passive emotion detection toward active emotional support or intervention, the responsibility of humans as rational moral agents in a future era of human-computer symbiosis must be considered, ensuring the autonomy of human emotions.
Significance: The worldwide shift toward renewable energy is creating unprecedented demand for critical metals such as lithium, cobalt, and rare earth elements. This surge in demand strains primary mineral supplies, generates a substantial environmental footprint, and exposes economies to geopolitical vulnerabilities and supply chain disruptions, making long-term dependence on primary extraction unsustainable. Recycling "urban minerals"—secondary resources in end-of-life products—has therefore become essential for securing strategic resources and building a green, closed-loop economy. However, the gap between the theoretical recycling potential of urban minerals and the practical challenges of realizing it remains poorly characterized. Therefore, to ensure the long-term sustainability of the energy transition, we must systematically assess how urban minerals can be recycled in the energy industries and identify the associated technological, economic, and policy challenges. Progress: This study systematically reviewed recent research on recycling urban minerals during the energy transition, focusing on three key areas: the consumption and recycling potential of key metals, the current status of recycling technology, and the factors that influence recycling. First, this study examined the demand for key metals used in the wind power, photovoltaic, and energy storage industries and highlighted the valuable recycling potential found in decommissioned waste streams. Second, this study examined recycling methods for typical waste products, such as composite wind turbine blades, photovoltaic modules, and waste lithium-ion batteries, comparing hydrometallurgy, pyrometallurgy, and mechanical-physical methods in terms of technical maturity, recovery efficiency, and environmental impact. 
Finally, this study examined the internal and external factors that hinder efficient recycling. Key challenges include material complexity, such as the strong adhesion of multiple layers in PV modules and the variety of materials in lithium-ion batteries; high recovery and compliance costs; poor industrial chain synergy; and weak policies and regulations, such as fragmented Extended Producer Responsibility (EPR) schemes. Conclusions and Prospects: Recent research reveals a significant gap between the recycling potential of urban minerals in the emerging energy sector and their efficient recovery in practice. Existing recycling technologies face challenges in processing low-value composite materials, preventing secondary pollution, and achieving economic viability. Concurrently, numerous end-of-life products flow through informal channels, resulting in a significant loss of valuable resources and posing severe environmental and health risks due to unregulated processing. Future research should prioritize overcoming key technological challenges, such as developing low-cost depolymerization technologies for composites and flexible, automated dismantling processes for various battery types. Moreover, governments and industries must implement comprehensive lifecycle traceability systems, including mandatory EPR schemes and "digital passports" for material accountability. Optimizing industrial layouts, enhancing economic incentives, and promoting product eco-design are essential supporting strategies. Ultimately, these initiatives will strengthen the measurable contribution of urban minerals to the critical metal supply chain, which is essential for securing national strategic resources and promoting green, low-carbon development.
Significance: Safety evacuation and passenger transportation are key components of metro crowd control and critical research topics for passenger safety. With the rapid development of urban rail transit, a growing number of domestic and international scholars have studied these components. To systematically understand developments and research trends in metro station safety evacuation and passenger transportation within the broader field of public safety, it is necessary to review and summarize relevant studies. Progress: First, relevant Chinese and English literature on metro station safety evacuation and passenger transportation was retrieved from the Web of Science and China National Knowledge Infrastructure databases, and relevant information about the studies was recorded. Next, a bibliometric analysis was conducted, including publication volume statistics and keyword analysis. This study reviewed and summarized the characteristics of the research content and methodologies regarding both safety evacuation and passenger flow management, summarized the advantages and disadvantages of existing research approaches and methods, and proposed future development directions. The bibliometric analysis showed that research on safety evacuation in China has developed rapidly, whereas studies on passenger transportation require further attention. To date, research on safety evacuation has focused primarily on metro station fires, whereas studies on passenger transportation have concentrated mainly on the organization and control of large passenger flows. Metro station safety evacuation is characterized by multi-level enclosed spaces, multi-stage evacuation routes, and large passenger flows. Because full-scale field experiments are difficult to implement in such settings, microscopic models such as the social force model and the cellular automaton model have become the primary research tools. 
Future research still needs to integrate intelligent algorithms, such as big data and machine learning techniques, to dynamically optimize evacuation routes. In the field of passenger transportation, metro station passenger flow is characterized by complex, multi-directional movements and batch arrivals, and research in this area mainly relies on numerical simulation methods. Existing research primarily aims to reduce passenger waiting times and train delays and has achieved relative maturity in optimizing train schedules and improving transport capacity. Implementing passenger flow control measures has become the main approach to reducing passenger flow risks. However, in actual metro operations, networked multi-station collaborative response mechanisms for passenger flow organization remain underdeveloped. Conclusions and Prospects: This study conducted a bibliometric analysis of the literature on safety evacuation and passenger transportation in metro stations. It reviewed the characteristics and methodologies of the literature while discussing research progress and existing limitations. The study contributes to understanding the current state of research and development trends in metro station safety evacuation and passenger transportation. Furthermore, it argues that future studies in China should place greater emphasis on passenger transportation and incorporate advanced intelligent algorithms into research on both safety evacuation and passenger transportation.
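As an illustration of the microscopic modeling approach summarized above, the following is a minimal sketch of a Helbing-style social force model of evacuation: each pedestrian is driven toward the exit at a desired speed and repelled exponentially by nearby pedestrians. All parameter values (desired speed, relaxation time, repulsion strength and range) are illustrative assumptions, not calibrated to any study in this review.

```python
import numpy as np

def step(pos, vel, exit_pos, dt=0.05, v0=1.3, tau=0.5, A=2.0, B=0.3):
    """One explicit-Euler step of a minimal social force model.

    v0: desired walking speed (m/s), tau: relaxation time (s),
    A, B: repulsion strength (m/s^2) and range (m) -- assumed values.
    """
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        # Driving force: relax toward the desired velocity aimed at the exit
        direction = exit_pos - pos[i]
        direction /= np.linalg.norm(direction)
        acc[i] = (v0 * direction - vel[i]) / tau
        # Exponential repulsion from every other pedestrian
        for j in range(n):
            if i == j:
                continue
            d_vec = pos[i] - pos[j]
            d = np.linalg.norm(d_vec)
            acc[i] += A * np.exp(-d / B) * d_vec / d
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel

pos = np.array([[0.0, 0.0], [0.4, 0.1]])   # two pedestrians
vel = np.zeros_like(pos)
exit_pos = np.array([10.0, 0.0])
start_dist = np.linalg.norm(exit_pos - pos[0])
for _ in range(100):                        # simulate 5 s
    pos, vel = step(pos, vel, exit_pos)
print(np.linalg.norm(exit_pos - pos[0]) < start_dist)
```

Research-grade implementations add wall forces, body-contact terms, and route-choice logic on top of this core, but the driving-plus-repulsion structure above is what makes the model "microscopic."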
Significance: Natural graphite is a cornerstone of China's strategic mineral resources; the country's exceptional geological endowments and industrial capacity have secured it a leading global position. China's natural graphite industry dominates in both production output and high-value deep-processed products, such as spherical graphite and exfoliated graphite, positioning the country as a key strategic fulcrum in carbon materials competitiveness. Natural graphite, a thermodynamically stable crystalline allotrope of carbon, exhibits a hexagonal lattice structure in which sp²-hybridized carbon layers are stacked via weak van der Waals interactions. This intrinsic lamellar architecture underlies its unique material properties: high electrical and thermal conductivity, resistance to high and low temperatures, a low friction coefficient, thermal stability, chemical inertness, and biocompatibility. Natural graphite is an irreplaceable foundational material that bridges traditional manufacturing and cutting-edge strategic emerging industries through these synergistic properties. In traditional industrial sectors, natural graphite demonstrates versatile applicability: in metallurgy, it functions as a recarburizer and high-temperature refractory material; in mechanical engineering, its self-lubricating properties enable the fabrication of wear-resistant components such as precision bearings and seals; and in chemical processing, it can be modified through intercalation to create catalyst supports and advanced adsorption materials. Within strategic emerging industries, its strategic value is further elevated: high-purity spherical graphite acts as an irreplaceable precursor for lithium-ion battery anode materials, exfoliated graphite enables efficient oil-water separation, and flexible graphite is the material of choice for sealing systems operating under harsh environmental conditions. 
Progress: Recent advances in the room-temperature exfoliation of graphite have enabled milder reaction conditions that better preserve its crystal structure. Unlike high-temperature processes, this method prevents local oxidation, yielding exfoliated graphite worms with high flexibility. Room-temperature exfoliation can produce exfoliated graphite blocks with controllable shape and density, high mechanical strength, and excellent rebound resilience. Electrochemically exfoliated graphene has few layers and a high yield, making it highly effective in enhancing the anti-corrosion performance of water-based coatings. Meanwhile, flexible graphite paper prepared by rolling exhibits high electrical and thermal conductivity. Micro-exfoliated graphite prepared via room-temperature exfoliation can be combined with metals to form lithium-ion battery anode materials with excellent rate performance and cycle stability. To address the growing demands for functional exfoliation and performance enhancement of natural graphite, this study systematically reviewed the latest research progress in six key categories: graphite intercalation compounds, natural graphite anode materials, exfoliated graphite, flexible graphite, graphene powder, and microcrystalline graphite-based isotropic graphite. The study integrated research across the technological chain, including material synthesis, structural modulation, performance optimization, and industrial-scale application. Moreover, the intrinsic structure-activity relationships and critical technical bottlenecks of natural graphite-based materials were identified. Conclusions and Prospects: Natural graphite-based materials are poised to evolve toward higher performance, greener processes, and multifunctionality, serving as key materials for strategic emerging industries. This study provides a comprehensive reference for further research on and industrial applications of natural graphite-based materials.
Significance: The turbine-based combined cycle (TBCC) engine is an ideal propulsion system for hypersonic flight, offering a wide speed range, a large flight envelope, and horizontal takeoff and landing capability. The TBCC engine, comprising an air-breathing gas turbine and a ramjet, has become a key focus of current and future aerospace research. When the TBCC engine operates across a wide speed range (Ma 0-7.0), it undergoes a mode transition between the gas turbine and the ramjet. This transition requires coordinated operation among various components and subsystems, involving a broad disciplinary scope, high technical complexity, and significant implementation challenges. Consequently, the mode transition has become a critical bottleneck in the development of TBCC engines. Progress: This study systematically reviews the development of TBCC engines in various countries and analyzes the "thrust gap" phenomenon and the multi-component matching challenges that occur during mode transition. The review encompasses four key aspects: (1) Intake system design and regulation technology: Current mature approaches, such as boundary layer bleeding and vortex generators, offer limited adjustability, making precise and rapid flow control challenging. Axisymmetric intakes, favored for their simplicity in series-configured TBCC engines during mode transitions, still require enhanced variable-geometry capabilities to improve performance. Additionally, two-dimensional and three-dimensional inward-turning intakes provide greater regulation flexibility and effectively suppress inlet coupling interference; however, their control strategies within intake systems demand further in-depth investigation. 
(2) High-performance turbine and ramjet engine design, as well as rocket-assisted boost technology: Modified high-speed turbine engines utilizing inlet pre-cooling show greater potential than newly developed engines, though their advancement hinges on the creation of lightweight pre-coolers that can operate across wide temperature ranges. For wide-speed ramjet technologies, methods such as rotating detonation combustion, advanced inlet designs, and combustion optimization can effectively extend the operational Mach number range. However, integrating these technologies into combined-cycle engines requires further in-depth research. While rocket-assisted thrust augmentation directly addresses the "thrust gap," incorporating an additional rocket engine may introduce significant structural complexity. (3) Exhaust system design and regulation technology: Future directions focus on efficient aerodynamic profile design and active control of shockwave-boundary layer interactions. Regarding nozzle configurations, both two-dimensional and three-dimensional nozzles can satisfy the exhaust expansion requirements of combined-cycle engines. Two-dimensional nozzles offer simpler structures but pose significant challenges for aerodynamic integration. In contrast, three-dimensional nozzles provide superior performance and better integration potential with the overall propulsion system; however, they involve greater design, manufacturing, and control complexities. (4) Combined-cycle engine system integration, mode transition control, and experimental testing technologies: The United States has conducted relatively comprehensive research, having completed integrated engine model-level mode transition tests and comparative analyses of various control algorithms. Nevertheless, most existing studies remain theoretical or limited to model validation. 
Conclusions and Prospects: Most mode transition experiments conducted to date have not fully addressed the variable-geometry adjustment of the inlet and exhaust systems or the dynamic cooperative control of the fully integrated engine. Consequently, future research should prioritize cross-system integrated cooperative control for combined-cycle engines, the development of advanced test facilities capable of simulating wide-range flight environments, and full-scale engine validation of mode transition processes. Key future research directions include optimizing off-design performance and multi-physics coupling in intake system design, advancing rotating detonation combustion technology, developing three-dimensional nozzle control and multi-duct collaborative matching techniques, and establishing a full-chain research and development system for TBCC engines.
Significance: Identifying the three-dimensional structures of biological macromolecules is fundamental to understanding the molecular basis of life and to discovering novel therapeutics. As the biological sciences enter the era of artificial intelligence (AI), structural data have become increasingly essential, while AI technologies simultaneously impose higher demands on data organization and management. This review traces the five-decade evolution of biological macromolecular structure databases, with a particular focus on the pivotal role of the Protein Data Bank (PDB). The PDB was established as a small archive of experimentally determined atomic coordinates but gradually developed into a global infrastructure that underpins structural biology. Progress: We first chart the progression of structural data resources from early structure archives, which largely functioned as static catalogs of experimentally determined structures, to the emergence of highly curated functional classification systems, such as SCOP and CATH. These resources enable researchers to analyze structural relationships, investigate evolutionary patterns, and derive mechanistic insights. In parallel, sequence-centric databases—such as Pfam, InterPro, and later comprehensive domain-family resources—expanded by annotating conserved domains and motifs across protein families. Together, these efforts created a rich, multi-layered ecosystem in which the sequence, structure, and function of proteins became increasingly integrated, thereby turning structure databases into indispensable platforms for comparative analysis and mechanistic discovery. A new phase of structural data expansion began with AI-driven structure prediction. The release of the AlphaFold Protein Structure Database (AFDB), followed by complementary resources, including the ESM Atlas, drove an unprecedented expansion in structural coverage, spanning entire proteomes and previously challenging protein families. 
Conclusions and Prospects: We propose that structural databases and AI models form a mutually reinforcing "double-helix" relationship. High-quality experimental structures provide essential references for training and benchmarking predictive models, while large-scale AI-generated structures dramatically increase the amount of available data, revealing new sequence-structure-function relationships and enriching the databases themselves. This synergy may catalyze a paradigm shift in structural biology, transitioning the field from an experiment-led discipline to an integrated ecosystem in which computation and experimentation coevolve. Despite rapid progress in this field, major challenges persist. Structural databases remain affected by experimental sampling biases, uneven representation across organisms and protein families, and persistent inconsistencies in annotation quality. Moreover, the scarcity of dynamic and condition-dependent structural information further limits biological interpretability, particularly for intrinsically disordered regions, conformational ensembles, and transient complexes. Furthermore, AI-driven predictions introduce new concerns regarding model interpretability, calibration of confidence metrics, and the governance of large-scale predictive datasets. We anticipate that biological macromolecular structure databases will evolve from merely "AI-enhanced" to "AI-integrated" and, ultimately, to "AI-native" architectures. Such systems will incorporate continuous feedback loops, automated annotation pipelines, and multi-modal data fusion, enabling them to function as reliable knowledge instruments capable of hosting biologically meaningful "digital twins." Collectively, these developments promise to deepen our understanding of structure-function relationships and accelerate rational design in protein engineering, drug discovery, and synthetic biology. 
As a result, structural databases will continue to underpin scientific innovation while defining a new research standard for biological sciences.
Significance: With the rapid advancement of new-generation information technologies, the deep integration of smart city infrastructure and intelligent connected vehicles (ICVs) has emerged as a critical engine for the intelligent transformation of urban transportation. Traditional single-vehicle intelligence often faces limitations such as blind spots, occlusions, and limited perception range in complex urban scenarios. Therefore, it is essential to move beyond the independent operation of "human-vehicle-road" units and build an integrated, collaborative "vehicle-road-cloud" ecosystem. Such integration is pivotal for resolving information islands, enhancing traffic safety and efficiency through a closed-loop "perception-transmission-computation-control" mechanism, and promoting sustainable urban governance by reducing congestion and carbon emissions. Progress: This study systematically reviewed the research progress, technical architectures, and development trends of this "dual-intelligence" synergy. First, from the perspectives of operational logic and core architecture, the study elucidated how smart cities empower ICVs. By deploying roadside perception networks and mobile edge computing nodes, the infrastructure provides ICVs with beyond-visual-range information and enables high-precision navigation through digital twin technologies and high-definition maps. Conversely, ICVs act as mobile sensors, providing real-time trajectory and status feedback to the cloud, thereby facilitating traffic situational awareness and decision optimization (e.g., adaptive signal control and congestion management). This interaction establishes a robust multi-level vehicle-road-cloud system framework. Second, a comparative analysis of international development paths was performed to reveal distinct strategies. 
The United States has traditionally prioritized vehicle-side intelligence (e.g., Tesla's vision-based approach) but is increasingly transitioning toward C-V2X communication standards following spectrum reallocation. The European Union focuses on cross-border interoperability and standardization through projects such as C-Roads, emphasizing data privacy under the GDPR. Japan and South Korea rely on government-led legislation and the integration of automated driving with high-precision 3D mapping (e.g., SIP-adus and K-City). China adopts a "top-down" design with large-scale dual-intelligence pilot zones in cities such as Beijing and Wuhan, promoting the rapid deployment of 5G-V2X and standardized roadside infrastructure. Furthermore, the study explored in depth the key enabling technologies that support this synergy. Integrated Sensing and Communication, which optimizes spectrum and hardware resources through functional fusion (data sharing) and signal fusion (unified waveforms), was analyzed in detail. Moreover, the study analyzed the hierarchical cloud control system, comprising edge, regional, and central clouds, which balances real-time local control with global data mining and long-term optimization. Additionally, multisensor fusion positioning algorithms were examined to illustrate the integration of GNSS, INS, and LiDAR through loose, tight, or deep coupling mechanisms. Such integration ensures robust centimeter-level positioning, even in GNSS-denied environments such as tunnels and urban canyons. Conclusions and Prospects: Despite significant achievements, the collaborative development of smart cities and ICVs faces multifaceted challenges, including technological bottlenecks in automotive-grade chips and algorithm adaptability, barriers to cross-industry protocol compatibility, and prominent risks regarding data security and privacy protection in cross-border transmission. 
Consequently, future research must focus on several key directions: achieving semantic alignment and unified representation in multimodal sensing fusion to handle heterogeneous data, developing adaptive protocols based on software-defined networks to ensure compatibility, optimizing dynamic edge-cloud computing resource scheduling to meet real-time demands, and constructing active immune network security frameworks to defend against intelligent cyber-attacks. This study provides a comprehensive theoretical reference and technical support for fostering the deep integration and scalable application of smart city infrastructure and ICVs.
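To make the loosely coupled fusion idea mentioned above concrete, the sketch below fuses one-dimensional dead-reckoned INS positions with intermittent GNSS fixes through a linear Kalman filter: the INS integrates a biased, noisy accelerometer at a high rate and drifts, while low-rate GNSS position fixes bound the error. All rates, noise levels, and the accelerometer bias are assumed values; a real loosely coupled system works in three dimensions and usually estimates the sensor biases as extra states.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps, gnss_every = 0.01, 3000, 100            # 100 Hz INS, 1 Hz GNSS
true_a, acc_bias, acc_noise, gnss_noise = 0.2, 0.05, 0.02, 1.0  # assumed

F = np.array([[1.0, dt], [0.0, 1.0]])                # [pos, vel] transition
Q = np.diag([1e-4, 1e-2])                            # generous process noise
H = np.array([[1.0, 0.0]])                           # GNSS observes position
R = np.array([[gnss_noise**2]])

x, P = np.zeros(2), np.eye(2)                        # fused estimate
true_pos = true_vel = 0.0
ins_pos = ins_vel = 0.0                              # pure INS dead reckoning

for k in range(n_steps):
    a_meas = true_a + acc_bias + acc_noise * rng.standard_normal()
    true_pos += true_vel * dt + 0.5 * true_a * dt**2
    true_vel += true_a * dt
    ins_pos += ins_vel * dt + 0.5 * a_meas * dt**2   # INS drifts with bias
    ins_vel += a_meas * dt
    # Predict: propagate with the measured acceleration as control input
    x = F @ x + np.array([0.5 * a_meas * dt**2, a_meas * dt])
    P = F @ P @ F.T + Q
    if k % gnss_every == 0:                          # Update: 1 Hz GNSS fix
        z = true_pos + gnss_noise * rng.standard_normal()
        y = z - (H @ x)[0]                           # innovation
        S = (H @ P @ H.T + R)[0, 0]
        K = (P @ H.T / S).ravel()                    # Kalman gain
        x = x + K * y
        P = (np.eye(2) - np.outer(K, H)) @ P

print(f"INS-only error: {abs(ins_pos - true_pos):.2f} m, "
      f"fused error: {abs(x[0] - true_pos):.2f} m")
```

The same predict/update skeleton carries over to tight coupling (raw GNSS pseudoranges as measurements) and deep coupling (filter feedback into signal tracking); only the measurement model changes.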
Significance: As onshore and nearshore resources become increasingly scarce, exploiting the deep sea has become a strategic priority for energy production, aquaculture, raw materials, and maritime transport. Deep-sea engineering relies on various offshore floating structures that operate under harsh, complex, and time-varying wind, wave, and current conditions. Accurate and efficient prediction of their motions and internal force responses is essential for structural safety, optimal design, and operational planning. Conventional methods, such as computational fluid dynamics and potential flow theory, are computationally expensive or imprecise when strong nonlinearities are present. Advances in sensors, computing power, and big data technology have enabled artificial intelligence (AI) applications in this field. Artificial neural networks (ANNs) adaptively capture complex nonlinear dynamics from large datasets, making AI-based response prediction an effective bridge between efficiency and accuracy in ocean engineering. This review surveys recent progress in AI methods for predicting the dynamic responses of offshore floating structures, underscoring their strengths and limitations and outlining future research directions. Progress: This review consolidates recent advances in applying AI to three key predictive tasks for offshore floating structures (e.g., oil and gas platforms, floating production, storage and offloading units (FPSOs), and floating wind turbines). First, in time-series prediction, recurrent ANNs, such as gated recurrent units and long short-term memory networks, are widely used for short-term forecasting of floater motions and mooring tensions. Current research primarily focuses on two key improvement strategies. The first involves optimizing input features, which include environmental time histories (e.g., wave elevation and wind speed) and dynamic response time series (e.g., floater motions and internal structural forces). 
The second focuses on integrating AI with complementary techniques. Signal processing algorithms, such as variational mode decomposition, are used to decompose broadband inputs into narrower-band components that are easier to learn. Optimization algorithms, such as Bayesian optimization, are employed to fine-tune model hyperparameters. Furthermore, incorporating physical laws (e.g., hydrodynamic transfer functions) enhances the models' generalization capability. Second, for extreme-value prediction, ANNs such as multilayer perceptrons and backpropagation networks are trained to map environmental parameters directly to short-term extremes or to extreme-value distribution parameters, thereby greatly reducing computational cost compared with time-domain simulations. For long-term extremes, representative sea states are sampled, and a surrogate model is trained to rapidly predict short-term extremes; probability convolution across sea states then provides long-term estimates that approach the accuracy of traditional full long-term analyses at a fraction of the computational cost. Third, for short-term fatigue damage prediction, ANNs are applied in both frequency-domain analysis (e.g., approximating nonlinear stress transfer functions) and time-domain analysis (e.g., mapping environmental parameters directly to load ranges or damage equivalent loads). For long-term fatigue assessment, two practical strategies prevail. The first is similar to that used in long-term extreme prediction. The second employs active learning to iteratively select the most informative samples, considerably reducing the required number of simulations while preserving accuracy. Conclusions and Prospects: AI provides significant advantages in rapid prediction and effective modeling of strong nonlinearities, overcoming the limitations of traditional numerical methods and enabling efficient forecasting and design optimization. 
However, most models are purely data-driven and thus generalize poorly to unseen conditions, lack physical interpretability, and rarely provide built-in uncertainty quantification. Additionally, while various time-series prediction methods have been extensively compared, similar cross-evaluations are scarce for extreme-value and fatigue prediction approaches. To translate AI advances into reliable engineering practice, future work should prioritize physics-informed neural networks that embed fundamental hydrodynamics to improve generalization and trustworthiness, integrate uncertainty quantification frameworks such as Bayesian neural networks for reliability-based design, and develop more efficient strategies for long-term extreme-value and fatigue prediction. Finally, establishing high-quality shared datasets, standardized benchmarks, and validation protocols will be essential to move these techniques from research prototypes to routine engineering tools, powering digital twins and forecasting systems for offshore floating structures.
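The surrogate-plus-probability-convolution strategy for long-term extremes described above can be sketched numerically: a surrogate maps each representative sea state to the parameters of a short-term extreme-value distribution, and weighting the short-term CDFs by the sea-state occurrence probabilities yields the long-term CDF. The surrogate mapping and the scatter-diagram probabilities below are invented for illustration; in practice, the surrogate would be an ANN trained on time-domain simulations and the sea state would include several parameters, not just significant wave height Hs.

```python
import numpy as np

def surrogate(hs):
    # Hypothetical mapping Hs -> (location, scale) of the short-term
    # Gumbel distribution of the 3-hour extreme response (assumed).
    return 2.0 * hs, 0.3 * hs

def gumbel_cdf(x, mu, beta):
    return np.exp(-np.exp(-(x - mu) / beta))

hs_states = np.array([1.0, 3.0, 5.0, 7.0])      # representative sea states
p_states = np.array([0.55, 0.30, 0.12, 0.03])   # occurrence probabilities

def long_term_cdf(x):
    # "Probability convolution": mixture of short-term CDFs over sea states
    return sum(p * gumbel_cdf(x, *surrogate(hs))
               for p, hs in zip(p_states, hs_states))

# Response level with 1% long-term exceedance probability, by bisection
lo, hi = 0.0, 50.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if long_term_cdf(mid) < 0.99 else (lo, mid)
q99 = 0.5 * (lo + hi)
print(round(q99, 2))
```

The computational saving comes from the surrogate: each `surrogate(hs)` call replaces a batch of expensive time-domain simulations, so the mixture can be evaluated over thousands of sea states at negligible cost.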
Significance: Filter capacitors play a vital role in rectification and filtering circuits by stabilizing voltage and smoothing electrical signals. With the rapid miniaturization and performance enhancement of electronic equipment—particularly ultra-small devices—there is an urgent demand for filter capacitors that combine high capacitance density, compact size, and excellent frequency response. Traditional dielectric capacitors, although exhibiting excellent frequency response, are constrained by low capacitance density and large volume, preventing them from meeting the integration requirements of modern ultra-small electronic devices. In contrast, electrochemical capacitors, which offer high capacitance in a small volume, have emerged as a research hotspot for ultra-small devices and are widely regarded as promising candidates for next-generation high-end filter capacitors. However, slow ion migration within these electrochemical capacitors severely limits their frequency response, creating a key bottleneck that hinders practical application and industrialization. This underscores the importance of summarizing existing research on electrochemical filter capacitors (EFCs), clarifying key technical issues, and exploring effective solutions to guide future development and advance filter capacitor technology. Progress: To overcome the frequency response limitation caused by slow ion migration, researchers have made progressive breakthroughs in three key areas. First, in selecting electrode material phases, efforts have focused on developing materials with high conductivity, excellent ion accessibility, and stable electrochemical properties—optimizing carbon-based materials (e.g., activated carbon, carbon nanotubes, and graphene) and transition-metal-oxide-based materials to enhance ion transport efficiency within electrodes. 
Second, in regulating micro-pore structures, hierarchical pore architectures (integrating macropores, mesopores, and micropores) have been designed: macropores serve as rapid ion transport channels, mesopores provide space for ion storage and migration, and micropores increase specific surface area—collectively shortening ion diffusion paths and reducing ionic migration resistance to improve frequency response. Third, in device structure design, innovations such as sandwich-like structures and interdigital electrode structures have reduced overall device volume while optimizing ion transport paths between electrodes. These advances have significantly improved the specific capacitance of EFCs, increasing it from 80 μF·cm⁻² to 6 mF·cm⁻². Additionally, this paper outlines the development trajectory of EFCs, reviews the latest progress in material systems and multi-scale structure design, and summarizes strategies tailored for high-frequency filtering scenarios (e.g., integrating high-conductivity materials with optimized pore structures and combining device structure innovation with material performance enhancement). Conclusions and Prospects: To enhance performance, future research should focus on optimizing ion migration efficiency. Developing new electrode materials with ultra-high conductivity and designing more efficient ion transport channels can boost the frequency response while maintaining high capacitance density. In industrial-scale manufacturing, challenges such as consistent material preparation, scalable device fabrication, and cost control must be addressed through low-cost, large-scale, high-quality manufacturing technologies. For cross-field applications, EFCs must meet field-specific requirements (e.g., high-temperature stability for automotive electronics and long cycle life for energy storage systems). 
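To make the frequency-response criterion concrete: a capacitor used for line-frequency filtering should remain nearly ideally capacitive (impedance phase angle close to −90°) at 120 Hz, the ripple frequency after full-wave rectification of 60 Hz mains. A minimal series-RC equivalent-circuit sketch shows how equivalent series resistance, which grows with sluggish ion migration, erodes that phase angle; the component values below are illustrative assumptions, not measured EFC data:

```python
import math

def phase_angle_deg(c_farad, esr_ohm, freq_hz):
    """Impedance phase angle of a series-RC equivalent circuit.
    An ideal capacitor gives -90 degrees; ESR pulls the angle toward 0."""
    xc = 1.0 / (2.0 * math.pi * freq_hz * c_farad)  # capacitive reactance
    return math.degrees(math.atan2(-xc, esr_ohm))

# Illustrative: a 1 mF electrochemical capacitor evaluated at 120 Hz
# for three assumed ESR values.
for esr in (0.01, 0.1, 1.0):
    print(esr, round(phase_angle_deg(1e-3, esr, 120.0), 1))
```

The sketch shows why reducing ionic migration resistance (via conductive electrodes and hierarchical pores) is what preserves filtering behavior at high capacitance density.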
In conclusion, with continuous advances in materials science, structural design, and manufacturing technology, EFCs are expected to play an increasingly important role in electronic devices; however, further investigation is still required to address existing challenges and fully realize their application potential.
Significance: In 2023, China marked a significant milestone in nuclear energy development, becoming the first nation to successfully commercialize fourth-generation high-temperature gas-cooled reactor (HTGR) technology. This accomplishment represents a substantial advancement in nuclear power generation, offering enhanced safety features, improved thermal efficiency, and greater operational flexibility compared with conventional reactor designs. Progress: This paper provides a comprehensive and systematic examination of recent advancements in coupled simulation methodologies for high-temperature pebble flow dynamics within pebble-bed reactor cores, focusing on elucidating the fundamental physical mechanisms governing granular flow behavior and heat transfer processes. These mechanisms are critical for optimizing HTGR performance metrics, operational reliability, and safety parameters. Historically, research in this domain has faced two primary limitations. First, there have been methodological shortcomings in accurately modeling the complex multiphysics phenomena involved. Second, the substantial computational resource requirements have rendered full-core-scale numerical investigations of the HTR-PM reactor configuration virtually impractical. This study makes significant breakthroughs by developing innovative computational frameworks and artificial intelligence-enhanced methodologies. It systematically examines three key research areas. The first area is modeling and simulation techniques, including the discrete element method (DEM) for pebble flow characterization, computational fluid dynamics (CFD)-DEM coupling methodologies, and graphical processing unit (GPU)-accelerated DEM software development. The second is heat transfer mechanisms, encompassing intra- and inter-particle conduction models, convective heat transfer formulations, and high-temperature radiative heat transfer approaches. 
The third is artificial intelligence (AI)-enhanced methodologies featuring the use of neural networks for pebble residence time prediction, future state forecasting networks, and convolutional neural network-based radiative view factor estimation. A significant component of this study is the creation of GRAPE, a proprietary three-dimensional DEM-based numerical model that incorporates GPU heterogeneous parallel acceleration. This approach facilitates efficient simulation of pebble flow dynamics with up to tens of millions of particles on standard workstations while maintaining high computational accuracy. The study's particle-scale heat transfer model integrates neural network-accelerated radiative heat transfer computation with enhanced mesh search algorithms, demonstrating remarkable capability in predicting pebble-bed temperature distributions at reactor core scales. Furthermore, the introduction of two novel deep learning architectures represents a significant advancement in pebble flow research methodologies. RT-Net is a multi-branch convolutional network designed for real-time pebble flow residence time prediction, and Pre-Net is an image generation network employing guided learning principles to forecast pebble flow evolution. These AI-driven tools, incorporated within a unified computational framework, serve not only to address critical research gaps but also to establish new paradigms for analyzing multiphysics coupling in high-temperature multiphase particle systems. This development marks a substantial advancement in HTGR core research and development through key innovations, including the GRAPE simulation platform, neural network-enhanced heat transfer modeling, and pioneering applications of deep learning in pebble flow prediction. 
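As a minimal illustration of the DEM approach referenced above (a toy sketch, not the GRAPE implementation), the code below integrates a one-dimensional linear spring-dashpot contact between two pebbles; the stiffness, damping, mass, and time step are assumed values chosen only for numerical stability:

```python
import numpy as np

# Minimal 1-D linear spring-dashpot DEM sketch for two colliding pebbles.
# All parameters are illustrative, not HTR-PM fuel-pebble properties.
k = 1.0e4      # contact stiffness [N/m]
c = 5.0        # contact damping [N*s/m]
radius = 0.03  # pebble radius [m]
mass = 0.2     # pebble mass [kg]
dt = 1.0e-5    # explicit time step [s]

x = np.array([0.0, 0.1])    # positions along one axis [m]
v = np.array([1.0, -1.0])   # velocities [m/s], moving toward each other

for _ in range(20000):
    gap = (x[1] - x[0]) - 2.0 * radius   # negative gap means overlap
    f = 0.0
    if gap < 0.0:
        # Repulsive spring force plus velocity-proportional damping,
        # expressed as the force acting on the right-hand pebble.
        f = -k * gap - c * (v[1] - v[0])
    a = np.array([-f, f]) / mass         # equal and opposite contact forces
    v += a * dt                          # semi-implicit (symplectic) Euler
    x += v * dt

# The dashpot dissipates energy, so the pebbles separate with a
# relative speed below the initial 2 m/s approach speed.
print(round(v[1] - v[0], 3))
```

A production DEM code generalizes this to three dimensions, Hertzian contacts, friction, and neighbor search over millions of particles, which is precisely the workload that motivates GPU acceleration.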
Conclusions and Prospects: These advancements provide a theoretical foundation and the practical tools necessary to address the next generation of challenges in nuclear energy technology, particularly in reactor safety, operational optimization, and the development of even more advanced reactor designs. Through a review, elaboration, and synthesis of these key issues, this paper identifies critical challenges and future research directions for developing high-temperature gas-cooled reactor cores, as well as high-temperature multiphase particle flow and multiphysics coupling.
Objective: Shear wall systems are crucial components of high-rise buildings, providing the essential lateral force resistance. The design and layout of shear walls significantly influence the structural safety, serviceability, and cost-effectiveness of a building. Traditional design methods often rely on the experience and intuition of structural engineers, resulting in lengthy design cycles and potentially suboptimal solutions, particularly for complex or large-scale projects. To address these challenges, this research proposes a novel intelligent design method based on generative adversarial networks (GANs), incorporating shear wall-beam joint training. This method aims to enhance the quality and coherence of structural layouts, automating the design process and optimizing the integration between shear walls and beams, which are traditionally designed in isolation. Methods: The proposed intelligent design method utilizes a stacked architecture of the pix2pixHD model, a state-of-the-art image-to-image translation network, to jointly train the generation of shear wall and beam layouts. A dataset consisting of 200 sets of architectural and structural drawings was compiled. These drawings were sourced from leading design institutes to ensure real-world applicability and compliance with relevant building standards. These CAD drawings were semantically processed to convert them into four distinct feature images: architectural design plans, shear wall layout diagrams, beam layout diagrams, and spatial function layouts (e.g., room locations and structural cores). The proposed method processes these feature images through a GAN-based framework, where the first generator (G1) produces the shear wall layout from the architectural plan. The second generator (G2) then utilizes the architectural and shear wall layouts to generate a beam layout. The GAN framework is optimized through a feedback loop, where the performance of G2 is integrated into the loss function of G1. 
This integration promotes the production of layouts that are architecturally accurate and structurally optimized for the beam system. The quality of the generated layouts can be evaluated using two key similarity indicators: SSWall for shear wall layouts and SBeam for beam layouts, which assess the accuracy of the generated designs relative to the ground truth. Results: The experimental results revealed several key findings highlighting the effectiveness of the proposed method. First, the performance of the model was significantly correlated with the resolution of the input images. Increasing the resolution from 256×128 pixels to 1536×768 pixels resulted in notable improvements in the quality of the generated designs. This is because a higher resolution enables the model to capture both the global layout strategies and intricate geometric details of the components. Second, the introduction of spatial function information, such as room types and functional zones (e.g., stairwells and elevator cores), as an additional input channel substantially enhanced the quality of the generated layouts. This input is most effective during the shear wall design phase, as it improves the overall coordination between the shear wall and beam layouts. The results show that the joint training model outperforms traditional independent training methods: the proposed method generates layouts with better structural continuity—that is, the relationship between the shear walls and beams is more logical and cohesive, aligning more closely with professional engineering designs. Under optimal conditions, with an image resolution of 1536×768 pixels and spatial function information included, the proposed model achieved an SSWall score of 0.850 and an SBeam score of 0.770, indicating a high level of design accuracy. 
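The exact formulas behind the SSWall and SBeam indicators are not reproduced in this summary; one plausible stand-in for such a layout similarity is a pixel-level intersection-over-union between generated and ground-truth component masks, sketched below under that stated assumption:

```python
import numpy as np

def layout_similarity(generated, reference):
    """Illustrative intersection-over-union between two binary layout masks
    (1 = structural-component pixel, 0 = background). This is an assumed
    stand-in, not necessarily the paper's SSWall/SBeam definition."""
    generated = generated.astype(bool)
    reference = reference.astype(bool)
    union = np.logical_or(generated, reference).sum()
    if union == 0:
        return 1.0  # two empty layouts are trivially identical
    return np.logical_and(generated, reference).sum() / union

# Toy 4x4 shear-wall masks that differ in a single pixel.
gen = np.array([[1, 1, 0, 0],
                [1, 1, 0, 0],
                [0, 0, 0, 0],
                [0, 0, 1, 1]])
ref = np.array([[1, 1, 0, 0],
                [1, 1, 0, 0],
                [0, 0, 0, 0],
                [0, 0, 1, 0]])
print(round(layout_similarity(gen, ref), 3))
```

Whatever the precise definition, a score of 1.0 would indicate a pixel-perfect match with the engineer's ground-truth drawing, which puts the reported 0.850 and 0.770 scores in context.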
Conclusions: This study demonstrated that integrating shear wall and beam layouts through joint training using GANs can significantly enhance the quality, coherence, and efficiency of structural layout designs. The proposed method successfully automates the design process and improves the precision and integration of structural components, which are crucial for optimizing overall building performance. The case study of a 30-story, 90-meter-high residential building revealed that the GAN-generated layouts closely matched the engineer's designs in terms of dynamic characteristics, seismic performance, and inter-story drift angles. Most of the discrepancies were within acceptable engineering tolerances. This result indicates that the method can be effectively integrated into the design workflow for high-rise buildings. This intelligent design approach enhances the efficiency of the early design stages and offers a practical, automated solution that drastically reduces the time and labor these stages require. The results validate the viability of this approach for real-world applications, paving the way for the broader adoption of GAN-driven design methodologies in structural engineering.