Table of Contents: Journal of Tsinghua University(Science and Technology), Volume 63, Issue 9 (2023)
    BIG DATA
    Two-stage open information extraction method for the defence technology field
HU Minghao, WANG Fang, XU Xiantao, LUO Wei, LIU Xiaopeng, LUO Zhunchen, TAN Yushan
    Journal of Tsinghua University(Science and Technology). 2023, 63 (9): 1309-1316.   DOI: 10.16511/j.cnki.qhdxxb.2023.21.010
[Objective] The abundant information resources about defense technology available on the internet are of vital importance as data sources for obtaining high-value military intelligence. Open information extraction in the defense technology field aims to extract structured triplets containing subjects, predicates, objects, and other arguments from the massive amount of information available online. This technology has important implications for ontology induction and the construction of knowledge graphs in the defense technology domain. However, while information extraction in the general domain yields good results, open information extraction in the defense technology domain faces several challenges, such as a lack of domain-annotated data, the inability to handle overlapping arguments, and the difficulty of recognizing long entities.
[Methods] In this paper, an annotation strategy based on entity boundaries is proposed, and an annotated dataset for the defense technology field is constructed in combination with the experience of domain experts. Furthermore, a two-stage open information extraction method for the defense technology field is proposed that uses a sequence labeling algorithm based on a pretrained language model to extract predicates and a multihead attention mechanism to predict argument boundaries. In the first stage, the input sentence is converted into the sequence <[CLS], input sentence, [SEP]>, which is encoded by a pretrained language model to obtain its hidden state representation. Based on this representation, a conditional random field (CRF) layer predicts the positions of the predicates, i.e., the BIO labels of the words. In the second stage, each predicate predicted in the first stage is concatenated with the original sentence to form the sequence <[CLS], predicate, [SEP], input sentence, [SEP]>, which is again encoded by a pretrained language model. The resulting representation is fed to a multihead pointer network to predict the argument positions, and the cross-entropy loss is calculated between the predicted and gold-standard positions. Finally, the predicates and arguments produced by the two extraction models are combined into complete triplets.
[Results] Extensive experiments on the self-built annotated dataset in the defense technology field reveal the following. (1) In predicate extraction, our method achieved a 3.92% improvement in F1 over LSTM-based methods and more than a 10% improvement over syntactic analysis methods. (2) In argument extraction, our method achieved a considerable improvement of more than 16% in F1 over LSTM-based methods and about 11% over the BERT+CRF method.
[Conclusions] The proposed two-stage open information extraction method can handle overlapping arguments and long-span entities, thereby addressing shortcomings of existing open information extraction methods. Extensive experimental analysis on the self-built annotated dataset proves the effectiveness of the proposed method.
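To make the second stage concrete, below is a minimal sketch of a multihead pointer network for argument extraction as described above. It is not the authors' released code; the encoder name (bert-base-chinese), the three argument roles, and all hyperparameters are illustrative assumptions.

```python
# Sketch of the stage-2 argument extractor: a pretrained encoder plus a
# multi-head pointer network predicting start/end positions per argument role.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

ROLES = ["subject", "object", "other"]  # assumed argument roles

class ArgumentPointerNet(nn.Module):
    def __init__(self, encoder_name="bert-base-chinese"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # one (start, end) pointer head per argument role
        self.start_heads = nn.Linear(hidden, len(ROLES))
        self.end_heads = nn.Linear(hidden, len(ROLES))

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state  # (B, T, H)
        return self.start_heads(h), self.end_heads(h)  # each (B, T, num_roles)

tok = AutoTokenizer.from_pretrained("bert-base-chinese")
# stage 1 would supply the predicate; here it is hard-coded for illustration
enc = tok("研制", "某型号导弹由某研究院研制", return_tensors="pt")
model = ArgumentPointerNet()
start_logits, end_logits = model(enc["input_ids"], enc["attention_mask"])

# training minimizes cross-entropy between predicted and gold positions
loss_fn = nn.CrossEntropyLoss()
gold_start = torch.zeros(1, len(ROLES), dtype=torch.long)  # dummy gold indices
loss = sum(loss_fn(start_logits[:, :, r], gold_start[:, r])
           for r in range(len(ROLES)))  # end positions handled analogously
```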
    Spatio-temporal correlation mining method for large-scale traffic networks
    FAN Xiaoliang, PENG Zhaopeng, ZHENG Chuanpan, WANG Cheng
    Journal of Tsinghua University(Science and Technology). 2023, 63 (9): 1317-1325.   DOI: 10.16511/j.cnki.qhdxxb.2023.22.029
[Objective] Spatio-temporal correlation mining is a key technology in intelligent transportation systems and is usually applied to spatio-temporal data prediction problems such as traffic flow prediction. Accurately predicting traffic flows is extremely important for urban management, helping to alleviate congestion, improve traffic efficiency, and reduce traffic accidents. However, accurate prediction in large-scale traffic networks is extremely challenging owing to the high nonlinearity and complexity of massive traffic flow data. Most existing methods use two separate components to capture the spatial and temporal correlations: a static spatial graph is constructed for each time step in the spatial dimension, and the same nodes at different time steps are connected to build a spatio-temporal graph in the temporal dimension. However, this ignores the potential correlations between the traffic flows of different nodes at different time steps and therefore cannot effectively model the complex spatio-temporal correlations in the data.
[Methods] In this paper, we propose a spatio-temporal combinational graph convolutional network (STCGCN) for traffic flow prediction. STCGCN consists of three modules: a spatio-temporal combinational graph (STCG) construction module, a spatio-temporal combinational graph convolution (STCGC) module, and a prediction module. The STCG construction module builds an adaptive STCG adjacency matrix across temporal slices from spatio-temporal embedding vectors; its parameters are learned automatically during training, so it can accommodate complex spatio-temporal correlations between nodes and capture the potential cross-step correlations that existing prediction methods miss. The STCGC module designs adaptive STCGC operators and layers to extract spatio-temporal features from the nodes' historical traffic data and the constructed adaptive STCG. Finally, the prediction module aggregates the hidden representations of all historical time steps produced by the STCGC module and outputs the prediction through a fully connected layer. We evaluated STCGCN on PeMSD4 and PeMSD8, two public datasets from the Caltrans performance measurement system (PeMS), against 11 baselines: vector autoregression (VAR), support vector regression (SVR), the fully connected long short-term memory (FC-LSTM) network, the diffusion convolutional recurrent neural network (DCRNN), spatio-temporal graph convolutional networks (STGCN), attention-based spatial-temporal graph convolutional networks (ASTGCN), Graph WaveNet, spatial-temporal synchronous graph convolutional networks (STSGCN), the adaptive graph convolutional recurrent network (AGCRN), the graph multi-attention network (GMAN), and time zigzags at graph convolutional networks (Z-GCNETs). Two widely used metrics were adopted for evaluation: mean absolute error and root mean squared error.
[Results] The experimental results show that, with a single unified component, the proposed STCGCN effectively models the dynamic temporal, spatial, and cross-spatio-temporal correlations in traffic flow data. The model achieved the best prediction results at every horizon, and its error grew more slowly than that of the baselines as the prediction horizon increased. We also explored the effect of three hyperparameter settings on model performance, and the experiments showed clear performance differences across settings. Finally, the parameter counts and training times of all models, including STCGCN and the 11 baselines, were compared; STCGCN achieved the best performance with nearly the fewest parameters and shortest training time, i.e., its efficiency was close to the best.
[Conclusions] Experiments on the public datasets show that STCGCN outperforms the 11 baseline methods in prediction accuracy.
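The core idea of the STCG construction module, as summarized above, is an adjacency matrix learned from per-(node, time-step) embeddings. Below is a minimal sketch of that idea under stated assumptions; the embedding dimension, the softmax normalization, and the layer form are illustrative, not the paper's exact design, and the 307-sensor figure for PeMSD4 is the commonly reported count.

```python
# Sketch: adaptive spatio-temporal combinational adjacency from learnable
# embeddings; every node at every time step becomes a vertex.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveSTCG(nn.Module):
    def __init__(self, num_nodes, num_steps, emb_dim=16):
        super().__init__()
        # one learnable embedding per (node, time-step) vertex
        self.st_emb = nn.Parameter(torch.randn(num_nodes * num_steps, emb_dim))

    def forward(self):
        # pairwise similarity of spatio-temporal embeddings -> dense adjacency
        logits = F.relu(self.st_emb @ self.st_emb.t())
        return F.softmax(logits, dim=1)  # row-normalized edge weights

class STCGCLayer(nn.Module):
    """One graph-convolution layer over the combinational graph."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (batch, num_nodes * num_steps, in_dim)
        return F.relu(self.proj(adj @ x))

graph = AdaptiveSTCG(num_nodes=307, num_steps=12)  # PeMSD4-like sizes
layer = STCGCLayer(in_dim=1, out_dim=32)
x = torch.randn(8, 307 * 12, 1)                    # toy traffic features
out = layer(x, graph())                            # (8, 3684, 32)
```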
    Chinese-oriented entity recognition method of character vocabulary combination sequence
    WANG Qingren, WANG Yinzi, ZHONG Hong, ZHANG Yiwen
    Journal of Tsinghua University(Science and Technology). 2023, 63 (9): 1326-1338.   DOI: 10.16511/j.cnki.qhdxxb.2023.21.009
[Objective] As the core task of information extraction, named entity recognition (NER) identifies various types of named entities in text. Chinese NER has benefited from the application of deep learning to character and vocabulary representation, feature extraction, and other aspects, achieving rich results. However, the task still faces a lack of vocabulary information, which has been regarded as one of the primary impediments to building a high-performance Chinese NER system. Although automatically constructed dictionaries contain rich lexical boundary and semantic information, integrating word knowledge into Chinese NER still faces challenges, such as effectively fusing the semantic information of self-matching words and their context into Chinese characters. Furthermore, although graph neural networks can extract feature information from various character-vocabulary interaction graphs, how to fuse the features of the respective interaction graphs into the original input sequence according to their importance remains unsolved.
[Methods] This paper proposes a Chinese-oriented entity recognition method based on character-vocabulary combination sequences. (1) First, the method introduces a character-vocabulary combination sequence embedding structure that primarily uses self-matching words to replace the corresponding characters in the character sequence under consideration. To make full use of the self-matching vocabulary information, we also construct a sequence for the self-matching vocabulary and vectorize both the vocabulary and the characters. At the encoding layer, we obtain the context information of the character sequence, the vocabulary sequence, and the character-word combination sequence using a BiLSTM model and then fuse the information of the words in the character-word combination sequence into the corresponding words of the vocabulary sequence. Furthermore, a graph neural network extracts features from different character-vocabulary interaction graphs so that the enhanced vocabulary information can be integrated into the characters. This not only makes full use of vocabulary boundary information but also integrates the context of the self-matching vocabulary sequence into the characters while capturing the semantics between characters and words, further enriching the character features. Finally, a conditional random field decodes and labels the entities. (2) Because the information of different character-word interaction graphs differs in importance to the original input character sequence, the method proposes a multigraph attention fusion structure. It scores the correlation of each interaction graph's information with the character sequence, differentiates structural features by importance, and fuses the information of the different interaction graphs into the character sequence in proportion to these scores.
[Results] The F1 value of the new method exceeded that of the original method on the Weibo, Resume, OntoNotes4.0, and MSRA datasets by 3.17% (Weibo_all), 1.21%, 1.33%, and 0.43%, respectively, verifying its feasibility for Chinese NER tasks.
[Conclusions] The experiments reveal that the proposed method is more effective than the original method.
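The multigraph attention fusion structure described above can be sketched as follows. This is a simplified assumption of the mechanism, not the paper's implementation: per-graph features are scored against the character representations and fused as an attention-weighted sum with a residual connection.

```python
# Sketch: attention-weighted fusion of features from several character-word
# interaction graphs into the character sequence.
import torch
import torch.nn as nn

class MultiGraphAttentionFusion(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        self.score = nn.Linear(2 * hidden_dim, 1)  # scores char vs. graph feature

    def forward(self, char_seq, graph_feats):
        # char_seq:    (B, T, H)  character representations
        # graph_feats: list of (B, T, H), one per interaction graph
        stacked = torch.stack(graph_feats, dim=2)             # (B, T, G, H)
        chars = char_seq.unsqueeze(2).expand_as(stacked)      # (B, T, G, H)
        scores = self.score(torch.cat([chars, stacked], -1))  # (B, T, G, 1)
        weights = torch.softmax(scores, dim=2)                # importance per graph
        fused = (weights * stacked).sum(dim=2)                # (B, T, H)
        return char_seq + fused  # residual fusion into the character sequence

fusion = MultiGraphAttentionFusion(hidden_dim=128)
chars = torch.randn(4, 50, 128)
graphs = [torch.randn(4, 50, 128) for _ in range(3)]
out = fusion(chars, graphs)  # (4, 50, 128), fed onward to the CRF decoder
```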
    A collaborative filtering model based on heterogeneous graph neural network
    YANG Bo, QIU Lei, WU Shu
    Journal of Tsinghua University(Science and Technology). 2023, 63 (9): 1339-1349.   DOI: 10.16511/j.cnki.qhdxxb.2023.22.030
[Objective] Collaborative filtering algorithms are widely used in recommendation systems to suggest information of interest based on the historical data of similar users. Recently, collaborative filtering based on graph neural networks has become a hot research topic. Graph-based collaborative filtering models usually encode the interactions between users and items as a bipartite graph, and high-order connectivity modeling of this graph can capture the hidden relationships between users and items. However, the bipartite graph does not explicitly represent the similarity relationships between users or between items. Additionally, the sparsity of the bipartite graph makes the model depend heavily on high-order connectivity.
[Methods] Herein, a collaborative filtering model based on a heterogeneous graph convolutional neural network is proposed that explicitly encodes the similarities between users and between items into the graph structure, so that user-user and item-item relationships are modeled together with user-item interactions as a heterogeneous graph. The heterogeneous graph structure allows these similarities to be captured directly, reducing the need for high-order connectivity and alleviating the sparsity problem of the bipartite graph.
[Results] We conducted experiments on four typical datasets and compared the results with four typical methods. Our model achieved better results than both traditional collaborative filtering models and existing graph neural network models. Moreover, its advantage held across different edge types, similarity measures, and similarity thresholds.
[Conclusions] Our model explicitly encodes the similarities between users and between items into the heterogeneous graph as edges, so that it can directly learn these similarities during training to obtain user and item embeddings. The proposed model alleviates the sparsity and high-order connectivity modeling problems of bipartite graphs and fully captures user-item interactions through low-order connectivity in the graph.
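A minimal sketch of the graph construction step described above follows. The cosine-similarity measure, the thresholds, and the block layout are assumptions for illustration; the paper also studies other similarity measures and thresholds.

```python
# Sketch: build a heterogeneous adjacency matrix from a user-item interaction
# matrix by adding user-user and item-item similarity edges.
import numpy as np

def build_hetero_adj(R, user_thresh=0.5, item_thresh=0.5):
    """R: (num_users, num_items) binary interaction matrix."""
    def cosine(M):
        norm = np.linalg.norm(M, axis=1, keepdims=True) + 1e-8
        unit = M / norm
        return unit @ unit.T

    n_u, n_i = R.shape
    uu = (cosine(R) >= user_thresh).astype(float)    # user-user edges
    ii = (cosine(R.T) >= item_thresh).astype(float)  # item-item edges
    np.fill_diagonal(uu, 0.0)
    np.fill_diagonal(ii, 0.0)

    # block adjacency: [[user-user, user-item], [item-user, item-item]]
    A = np.zeros((n_u + n_i, n_u + n_i))
    A[:n_u, :n_u] = uu
    A[:n_u, n_u:] = R
    A[n_u:, :n_u] = R.T
    A[n_u:, n_u:] = ii
    return A

R = (np.random.rand(100, 200) < 0.05).astype(float)  # toy sparse interactions
A = build_hetero_adj(R)                               # (300, 300) for a GCN
```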
    COMPUTER SCIENCE AND TECHNOLOGY
    Survey of deep face manipulation and fake detection
    XIE Tian, YU Lingyun, LUO Changwei, XIE Hongtao, ZHANG Yongdong
    Journal of Tsinghua University(Science and Technology). 2023, 63 (9): 1350-1365.   DOI: 10.16511/j.cnki.qhdxxb.2023.21.002
[Significance] Deep face manipulation technology generates and manipulates human imagery by strategies such as identity swapping or face reenactment between a source face and a target face. On the one hand, the rise of deep face manipulation has inspired a series of applications, including video production and advertising. On the other hand, because face manipulation technology is usually open sourced or packaged as free apps, the barrier to tampering is lowered, resulting in a proliferation of fake videos. Moreover, when criminals maliciously use face manipulation to produce fake news, especially involving important military and political figures, it can steer and interfere with public opinion, posing a great threat to national security and social stability. Research on deep face forgery detection is therefore particularly important, and it is necessary to summarize existing research to rationally guide deep face manipulation and detection technology.
[Progress] Deep face manipulation technology can be roughly divided into four types: identity swapping, face reenactment, face editing, and face synthesis. Deepfakes bring real-world identity swapping to a new level of fidelity. The region-aware face-swapping network provides the identity information of source characters from local and global perspectives, making the generated faces more natural. In facial reenactment, Wav2Lip uses a pretrained lip-sync model as an expert model, encouraging the generator to produce natural and accurate lip movements. In face editing, FENeRF, a 3D-aware generator based on a neural radiance field, aligns semantic, geometric, and texture information in the spatial domain and improves the consistency of generated images across viewpoints while keeping the face editable. In face synthesis, AnyFace proposes a cross-modal distillation module to align language and visual representations, enabling text to generate more diverse face images. Deep face forgery detection can be roughly divided into image-level and video-level methods. Among image-level methods, SBI proposes a self-blending technique that generates realistic fake face images through data augmentation, effectively improving model generalization. M2TR proposes a multimodal, multiscale Transformer to detect local artifacts at different levels of the image in the spatial domain; frequency-domain features are added as auxiliary information to preserve forgery detection performance on highly compressed images. Among video-level methods, RealForensics learns the natural correspondence between face and audio in real videos in a self-supervised way, enhancing generalization and robustness.
[Conclusions and Prospects] Deep face manipulation and detection technologies are developing rapidly, with continuous updates and iteration of the corresponding techniques. This survey first reviews deep face manipulation and detection methods and discusses their strengths and weaknesses. Second, it summarizes common datasets and the evaluation results of different manipulation and detection methods. Finally, it discusses the main challenges of face manipulation and fake detection and points out possible future research directions.
    Decentralized internet number resource management system based on blockchain technology
    LI Jiang, XU Mingwei, CAO Jiahao, MENG Zili, ZHANG Guoqiang
    Journal of Tsinghua University(Science and Technology). 2023, 63 (9): 1366-1379.   DOI: 10.16511/j.cnki.qhdxxb.2023.21.016
[Objective] The internet is an important infrastructure that has been evolving for decades. The border gateway protocol (BGP) is the de facto interdomain routing protocol on the internet, connecting autonomous systems (ASes) around the world. BGP uses internet number resources (INRs), including internet protocol (IP) prefixes and autonomous system numbers, for addressing and routing. However, BGP is vulnerable to INR misuse, which causes a common type of anomaly called prefix hijacking, in which a malicious AS originates the victim AS's prefixes to blackhole or intercept the victim's traffic. The existing security solution, resource public key infrastructure (RPKI), provides INR ownership and prefix-to-AS mapping information through a centralized infrastructure, and ASes can use this information to prevent prefix hijacking. However, RPKI has three typical drawbacks. First, its centralized architecture creates single points of failure. Second, to obtain consistent INR information, ASes need a long convergence time owing to the disorderly distribution of information. Third, ASes incur high interaction costs when frequently extracting real-time INR information.
[Methods] To address these shortcomings, this study proposes a decentralized internet number resource management system (DINRMS) based on blockchain technology. The proposed system adopts a hierarchical architecture consisting of an autonomy layer and an arbitration layer. DINRMS partitions all ASes on the internet into groups that form the autonomy layer. The arbitration layer comprises the Internet Assigned Numbers Authority, the five Regional Internet Registries, and representatives elected by each group in the autonomy layer. Each entity in DINRMS has nearly the same influence on the system, and the failure of a single entity does not lead to a serious global breakdown. This architecture overcomes the poor scalability of blockchain technology, which otherwise cannot support efficient global INR information management. A blockchain is maintained within each group to record the group's INR ownership and prefix-to-AS mapping information. Entities within a group use information from third parties, such as the Whois Lookup tool, to check the consistency of INR ownership information. For prefix-to-AS mapping information, entities within a group use routing data from public route collectors to check consistency, vote on the legitimacy of the information, and then judge legitimacy by majority rule. The arbitration layer maintains global INR ownership information at group granularity together with prefix-to-AS mapping information; this information is sourced from the representatives elected by each group for mutual supervision and endorsement. The arbitration layer is also responsible for arbitrating INR usage conflicts. In addition, DINRMS provides a heuristic INR information push mechanism based on this architecture and the dynamics of INR information: information is pushed to ASes if a long time has passed since the last push or if many information items are pending, as sketched below.
[Results] Experimental results show that DINRMS provides secure and trusted INR information for interdomain routing. In addition, the degree of centralization of DINRMS, measured by the Gini coefficient, is 60% lower than that of RPKI. Moreover, DINRMS reduces the convergence time and interaction overhead by more than 50%.
[Conclusions] DINRMS manages INRs in a decentralized way based on blockchain technology. Its hierarchical, group-based architecture improves system scalability, and the efficient push mechanism based on the dynamics of INR information shortens the convergence time and reduces the interaction overhead for ASes to obtain consistent INR ownership and mapping information.
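The heuristic push rule summarized in the Methods section can be captured in a few lines. This is a minimal sketch under assumed thresholds (the paper does not publish these values here), with the delivery step left as a placeholder.

```python
# Sketch: push INR information when either too much time has elapsed since the
# last push or too many un-pushed items have accumulated.
import time

MAX_INTERVAL_S = 3600  # assumed: push at least hourly
MAX_PENDING = 100      # assumed: push once 100 items queue up

class INRPusher:
    def __init__(self):
        self.last_push = time.monotonic()
        self.pending = []

    def enqueue(self, inr_record):
        self.pending.append(inr_record)
        self.maybe_push()

    def maybe_push(self):
        elapsed = time.monotonic() - self.last_push
        if elapsed >= MAX_INTERVAL_S or len(self.pending) >= MAX_PENDING:
            self.push(self.pending)
            self.pending = []
            self.last_push = time.monotonic()

    def push(self, items):
        # placeholder: deliver the batch of INR records to subscribed ASes
        print(f"pushing {len(items)} INR records")

pusher = INRPusher()
pusher.enqueue({"prefix": "203.0.113.0/24", "origin_as": 64500})
```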
    Cross-domain sentiment classification based on syntactic structure transfer and domain fusion
    ZHAO Chuanjun, WU Meiling, SHEN Lihua, SHANGGUAN Xuekui, WANG Yanjie, LI Jie, WANG Suge, LI Deyu
    Journal of Tsinghua University(Science and Technology). 2023, 63 (9): 1380-1389.   DOI: 10.16511/j.cnki.qhdxxb.2023.21.012
[Objective] Deep learning models for text sentiment analysis, such as recurrent neural networks, contain many parameters and require a large amount of high-quality labeled training data for effective training and optimization. However, obtaining domain-specific high-quality sentiment-labeled data is a challenging task in practical applications. This study proposes a cross-domain text sentiment classification method based on syntactic structure transfer and domain fusion (SSTDF) to address the domain-invariant learning and distribution distance metric problems. The method can effectively alleviate the dependence on domain-specific annotated data caused by differences in data distribution among domains.
[Methods] The SSTDF method solves cross-domain sentiment classification as follows. Dependency syntactic features are introduced into a recurrent neural network to enable syntactic structure transfer, yielding a transferable dependency-syntactic recurrent neural network model; a parameter transfer strategy then transfers syntactic structure information across domains efficiently to support sentiment transfer. For domain fusion, the conditional maximum mean discrepancy distance is used to quantify the distribution differences between the source and target domains and to refine the cross-domain same-category distance information. By constraining the distributions of the source and target domains, domain-invariant features are extracted so that sentiment information is shared between domains as much as possible. A joint optimization and training approach is used: the sentiment classification losses of the source and target domains are minimized while the domain fusion loss is considered in the joint objective, considerably improving the generalization performance of the model and the classification accuracy on the cross-domain sentiment classification task.
[Results] The dataset used in this study is the Amazon English online review sentiment classification dataset, which is widely used in cross-domain sentiment classification research; it contains four domains: B (Books), D (DVD), E (Electronics), and K (Kitchen), each with 1 000 positive and 1 000 negative reviews. The experimental results show that the accuracy of the SSTDF method is higher than that of the baselines, achieving average accuracy, recall, and F1 values of 0.844, 0.830, and 0.837, respectively. Fine-tuning allows fast convergence of the network, thereby improving transfer efficiency.
[Conclusions] We solved the cross-domain text sentiment classification task with deep transfer learning from the perspective of cross-domain syntactic structure consistency learning: a recurrent neural network integrating syntactic structure information is used, and a minimum domain distance constraint is added to the syntactic structure transfer process so that the source and target domains stay as close as possible during learning. The effectiveness of the proposed method is verified experimentally. The next steps are to enlarge the experimental data, include neutral samples, and validate the method on larger datasets. Furthermore, finer-grained aspect-level cross-domain sentiment analysis will be attempted in the future.
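The domain fusion term described above rests on a maximum mean discrepancy (MMD) distance. Below is a minimal sketch of an RBF-kernel MMD and a class-conditional wrapper in the spirit of the conditional variant the paper names; the kernel choice, bandwidth, and the simple per-class sum are assumptions, not the authors' exact formulation.

```python
# Sketch: squared MMD between source/target feature batches with an RBF
# kernel, applied per sentiment class to approximate a conditional MMD.
import torch

def mmd_rbf(x_src, x_tgt, sigma=1.0):
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return (k(x_src, x_src).mean() + k(x_tgt, x_tgt).mean()
            - 2 * k(x_src, x_tgt).mean())

def conditional_mmd(feat_s, y_s, feat_t, y_t, num_classes=2):
    loss = 0.0
    for c in range(num_classes):
        s, t = feat_s[y_s == c], feat_t[y_t == c]
        if len(s) and len(t):
            loss = loss + mmd_rbf(s, t)
    return loss

# usage in the joint objective (lam is an assumed trade-off weight):
# total_loss = cls_loss_src + cls_loss_tgt + lam * conditional_mmd(...)
feat_s, feat_t = torch.randn(32, 128), torch.randn(32, 128)
y_s = torch.randint(0, 2, (32,)); y_t = torch.randint(0, 2, (32,))
print(conditional_mmd(feat_s, y_s, feat_t, y_t))
```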
    Authorship identification method based on the embedding of the syntax tree node
    ZHANG Yang, JIANG Minghu
    Journal of Tsinghua University(Science and Technology). 2023, 63 (9): 1390-1398.   DOI: 10.16511/j.cnki.qhdxxb.2023.21.013
[Objective] Authorship identification is the task of inferring the authorship of an unknown text by analyzing its stylometry or writing style. Traditional research on authorship identification is generally based on empirical knowledge from literature or linguistics, whereas modern research mostly relies on mathematical methods to quantify an author's writing style. Researchers have proposed various feature combinations and neural network models: some feature combinations achieve good results with traditional machine learning classifiers, while some neural network models autonomously learn the relationship between the input text and its author, extracting text features implicitly. However, current research mostly focuses on character and lexical features, and the exploration of syntactic features is limited. How to use the dependency relationships between words in a sentence and combine syntactic features with neural networks remains unclear. This paper proposes an authorship identification method based on syntax tree node embedding, which introduces syntactic features into a deep learning model.
[Methods] We believe that an author's writing style is mainly reflected in how they choose words and construct sentences; therefore, this paper develops the authorship identification model from the word and sentence perspectives, using an attention mechanism to construct sentence-level features. First, an embedding representation of the syntax tree node is proposed: each node is expressed as the sum of the embeddings of all its dependency arcs, introducing sentence-structure information and word associations into the neural network model. Then, a syntactic attention network is constructed that uses different embedding methods to vectorize text features such as dependencies, part-of-speech tags, and words, producing a syntax-aware vector. A sentence attention network then extracts author-discriminative features from the syntax-aware vector, generating the sentence representation. Finally, the result is obtained by the classifier, and accuracy is used as the evaluation metric.
[Results] Experiments on CCAT10, CCAT50, IMDb62, and a Chinese novel dataset show that accuracy tends to decline as the number of authors increases; at some data points, however, increasing the number of authors raised rather than lowered accuracy, showing that the model's ability to capture writing style varies considerably across authors. Furthermore, when varying the number of authors on the IMDb dataset, our model's accuracy is slightly lower than that of BertAA with 5 authors but higher with 10, 25, and 50 authors. Compared with other models on the CCAT10, CCAT50, and IMDb62 datasets, our model ranks second or third.
[Conclusions] The attention mechanism demonstrated its efficiency in text feature mining and can fully capture an author's style as reflected in different parts of a document. The integration of lexical and syntactic features based on the attention mechanism enhances the overall performance of the model, which performs well on both Chinese and English datasets. Notably, the introduction of dependency syntax makes the model more interpretable: it can explain the text styles of different authors at the levels of word choice and sentence construction.
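The syntax tree node embedding described above (a node as the sum of its dependency-arc embeddings) can be sketched as follows. The label set, dimensions, and toy parse are illustrative assumptions; a real pipeline would take arcs from a dependency parser.

```python
# Sketch: each token's node vector is the sum of embeddings of the dependency
# arcs (incoming and outgoing) it participates in, keyed by dependency label.
import torch
import torch.nn as nn

DEP_LABELS = ["det", "nsubj", "root", "dobj", "amod"]  # illustrative subset
dep2id = {d: i for i, d in enumerate(DEP_LABELS)}

arc_emb = nn.Embedding(len(DEP_LABELS), 64)

def node_embeddings(arcs_per_token):
    """arcs_per_token: for each token, the list of dependency labels on its
    arcs. Returns a (T, 64) tensor fed to the syntactic attention network."""
    vecs = []
    for arcs in arcs_per_token:
        ids = torch.tensor([dep2id[a] for a in arcs])
        vecs.append(arc_emb(ids).sum(dim=0))
    return torch.stack(vecs)

# toy parse of "The model works": det(model, The), nsubj(works, model)
tokens_arcs = [["det"], ["det", "nsubj"], ["nsubj", "root"]]
emb = node_embeddings(tokens_arcs)  # (3, 64)
```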
    Multi-user recommendation algorithm based on vulnerability similarity
    JIA Fan, KANG Shuya, JIANG Weiqiang, WANG Guangtao
    Journal of Tsinghua University(Science and Technology). 2023, 63 (9): 1399-1407.   DOI: 10.16511/j.cnki.qhdxxb.2023.21.007
[Objective] In recent years, the number of publicly disclosed vulnerabilities has increased, and software security personnel and vulnerability enthusiasts find it increasingly difficult to locate the vulnerability information they are interested in. A recommendation algorithm can provide personalized vulnerability suggestions to help users obtain valuable vulnerability information efficiently. However, existing vulnerability-related recommendation systems generally suffer from one-sided analysis, complex implementation, high expertise requirements, and data privacy concerns, and research that directly recommends vulnerabilities as recommendation items is scarce.
[Methods] This paper treats the vulnerability itself as the recommendation item, collects data from public datasets, and adopts a simple, efficient recommendation algorithm for personalized vulnerability recommendation. As a classical recommendation model, collaborative filtering is widely used and computationally efficient; however, the user-vulnerability interaction matrix is much sparser than the interaction matrices analyzed by classical recommendation models, which seriously degrades the effectiveness of collaborative filtering. To solve this problem, this paper introduces a vulnerability similarity algorithm that comprehensively considers 13 features, such as vulnerability type, severity, and description text, and integrates it into a content-based recommendation algorithm, emphasizing the universal connections between vulnerabilities. For each vulnerability the target user has interacted with, the algorithm computes its most similar vulnerabilities, assembles the list with the highest recommendation value, and recommends it to the user. The algorithm also accounts for the characteristics of both personal users and product users and combines a labeling mechanism, finally forming a similarity-based multi-user vulnerability recommendation algorithm that effectively alleviates the sparsity and cold-start problems of recommendation.
[Results] Experiments on public datasets show the following. 1) The similarity-based content recommendation algorithm achieves better accuracy than the traditional collaborative filtering algorithm for all user types; in particular, for product users, precision and recall increase by 58.86% and 58.53%, respectively, and the F1 score increases by 0.586 1. 2) The recommendation list of the similarity-based algorithm is more effective and more consistent with users' vulnerability preferences; for product users, the normalized discounted cumulative gain of the list increases by 0.596 5. 3) The result coverage of the similarity-based algorithm is much higher than that of collaborative filtering; for personal users, its coverage is 7.6 times that of the original interest data, showing that the algorithm successfully surfaces vulnerabilities that users have not previously interacted with.
[Conclusions] This paper takes vulnerabilities as recommendation items for multiple user types and proposes a similarity-based multi-user vulnerability recommendation algorithm. The algorithm introduces a vulnerability similarity calculation method and integrates it into a content-based recommendation algorithm, solving the high sparsity of the user-vulnerability interaction matrix and the cold-start problem of user-based collaborative filtering, and effectively improving the accuracy and effectiveness of recommendations.
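The core recommendation loop described above can be sketched in a few lines. Feature construction is assumed and simplified here (a dense 13-dimensional vector per vulnerability, echoing the 13 features mentioned); the aggregation rule and top-N selection are illustrative choices.

```python
# Sketch: similarity-driven content recommendation over vulnerability
# feature vectors; rank candidates by similarity to the user's history.
import numpy as np

def recommend(user_history_ids, vuln_features, top_n=10):
    """vuln_features: (num_vulns, d) matrix; user_history_ids: indices of
    vulnerabilities the user has interacted with."""
    norms = np.linalg.norm(vuln_features, axis=1, keepdims=True) + 1e-8
    V = vuln_features / norms                    # unit vectors -> cosine sim
    scores = V @ V[user_history_ids].T           # (num_vulns, |history|)
    agg = scores.max(axis=1)                     # best-match aggregation
    agg[user_history_ids] = -np.inf              # don't re-recommend seen items
    return np.argsort(agg)[::-1][:top_n]

features = np.random.rand(500, 13)  # 13 features per vulnerability, as above
print(recommend([3, 42, 7], features))
```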
    VEHICLE AND TRAFFIC ENGINEERING
    Optimization method for allocations of energy storage systems and tractions for metro systems
    LIU Anbang, CHEN Xi, ZHAO Qianchuan, LI Borui
    Journal of Tsinghua University(Science and Technology). 2023, 63 (9): 1408-1414.   DOI: 10.16511/j.cnki.qhdxxb.2023.22.018
[Objective] Urban metro systems are among the most important public transport systems in many countries, offering high speed, large capacity, and punctuality. However, traction energy costs account for a high proportion of total operating costs. Energy storage systems (ESSs), such as supercapacitors, have therefore been introduced to collect and utilize the regenerative energy produced by trains: energy generated when a train brakes can be collected by an ESS and later released to help start a train, reducing the energy consumed by the entire system. Because the regenerative energy produced and the energy used for starting trains differ from station to station, the allocations (i.e., positions and capacities) of the ESSs and traction substations significantly affect the efficiency of energy collection and utilization. Optimizing these allocations is therefore essential but difficult, owing to the complicated features of the problem. Most existing studies optimize either the ESSs or the traction substations alone, without joint optimization, and they use rule-based methods or metaheuristics that cannot quantify the quality of the solutions obtained.
[Methods] In this paper, the allocations of the ESSs and the traction substations are jointly optimized. We first model the devices in the power supply network as ideal components and build a simulation model that provides the current and voltage at each position of the network. Based on this model, we formulate a simulation optimization problem for the energy consumption of the line, with upper limits on voltage fluctuation as constraints. Because the simulation optimization model is complex, a single evaluation of the objective function is computationally expensive; we therefore develop a solution method based on ordinal optimization theory. A crude model with low computational cost is established and used to obtain a set of candidate solutions; then, through evaluation of the ordered performance curve and error bounds, a good enough solution is selected from the candidates.
[Results] Testing results on a metro line in Qingdao demonstrate that the optimized configuration reduces traction energy consumption by 6.1% compared with the empirical configuration. Moreover, our results are better than those obtained by optimizing the allocation of the ESSs alone.
[Conclusions] To reduce the energy consumption of urban metro systems, this paper proposes a model and a solution method for jointly optimizing the allocations of ESSs and traction substations, whose effectiveness is proven by the numerical results. The proposed method can also be applied to other optimization problems in urban metro systems, such as optimizing charging and discharging strategies for supercapacitors. Because the train timetable affects the collection and utilization of regenerative braking energy, the joint optimization of the timetable and the ESS allocations requires further investigation.
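The ordinal-optimization workflow summarized above (crude screening, then exact evaluation of a small selected set) can be sketched as follows. Both models here are stand-in placeholder functions, and the candidate count and selected-set size are illustrative assumptions rather than values derived from the paper's ordered performance curve.

```python
# Sketch: ordinal optimization -- screen many candidate allocations with a
# cheap crude model, then evaluate only a selected set with the expensive
# simulation and keep a good-enough solution.
import random

def crude_model(allocation):
    # fast surrogate of traction energy consumption (placeholder)
    return sum(allocation) + random.gauss(0, 0.5)

def exact_simulation(allocation):
    # expensive power-network simulation (placeholder)
    return sum(allocation) + 0.1 * max(allocation)

# candidate ESS/substation capacity allocations over 5 sites (toy encoding)
candidates = [[random.uniform(0, 1) for _ in range(5)] for _ in range(1000)]

ranked = sorted(candidates, key=crude_model)   # 1) crude screening
selected = ranked[:30]                         # 2) selected set (assumed size)
best = min(selected, key=exact_simulation)     # 3) exact evaluation
print(best)
```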
    Improvement of an autonomous emergency-braking decision-making system for commercial vehicles based on unsafe control strategy analysis
    ZHOU Tuqiang, LIU Wei, LI Haoran, XU Shucai, SUN Chuan
    Journal of Tsinghua University(Science and Technology). 2023, 63 (9): 1415-1427.   DOI: 10.16511/j.cnki.qhdxxb.2023.21.014
[Objective] Most current automatic emergency braking (AEB) systems perceive the surrounding environment through on-board sensors, which face the following issues: lidar is expensive, performs poorly in smoke, rain, and snow, and is restricted by long-distance energy loss; millimeter-wave radar can only sense obstacles at short range; and monocular/binocular cameras are strongly affected by factors such as reduced visibility in bad weather and at night, resulting in a short observation distance. At intersections, the road traffic environment is complex, commercial vehicles have large blind spots, and on-board sensors are greatly limited.
[Methods] To improve the safety and reliability of AEB systems, this work designs an AEB system for commercial vehicles based on unsafe control behavior analysis. First, a compensation method is proposed based on the characteristics of vehicle-to-vehicle communication delay under different conditions. Real-vehicle tests are conducted to collect the communication delay of on-board communication equipment transmitting ego-vehicle information under different working conditions; the average value is taken as the delay compensation in the safety distance and added to the safety distance model of the vehicle-road cooperative AEB system. The delay law is also used to correct parameters such as the speed, displacement, and coordinates of surrounding vehicles, compensating for the impact of communication delay on system decision-making (a delay-compensation sketch follows below). An AEB strategy for commercial vehicles at intersections is then described: the contours of the two vehicles are projected onto a common coordinate system to determine whether they overlap. When a collision risk is detected, the intersection collision avoidance strategy is executed; when a collision is imminent, the braking system automatically applies emergency braking at maximum deceleration. Furthermore, the unsafe control behaviors that cause accidents are identified through analysis, and the corresponding safety constraints are used to optimize the algorithm strategy. Finally, the proposed algorithm is simulated and tested.
[Results] The results show that the proposed AEB algorithm based on unsafe control behavior analysis can effectively prevent collisions between two vehicles at intersections, with high safety and reliability.
[Conclusions] This study has some limitations: only the influence of communication delay and the brake build-up stage on the safe braking distance is considered, and the collision avoidance strategy covers only straight intersections. Future research will consider further factors affecting accurate safe braking distances, and 5G technology will gradually be applied to vehicle-road cooperative AEB systems.
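The delay-compensated safety-distance idea described above can be sketched as follows. The formula and all constants are illustrative assumptions; the paper derives its compensation from measured V2V delays per working condition.

```python
# Sketch: a delay-compensated safety-distance check and forward projection of
# a remote vehicle's state over the measured communication delay.
def safe_braking_distance(v, t_comm_delay, t_brake_build=0.3,
                          a_max=6.0, margin=2.0):
    """v: ego speed (m/s); t_comm_delay: mean measured V2V delay (s);
    t_brake_build: brake-force build-up time (s); a_max: max decel (m/s^2)."""
    reaction = v * (t_comm_delay + t_brake_build)  # distance covered pre-braking
    braking = v ** 2 / (2 * a_max)                 # distance under full braking
    return reaction + braking + margin

def compensate_remote_position(x, v, t_comm_delay):
    # project the environmental vehicle's position forward over the delay
    return x + v * t_comm_delay

d_safe = safe_braking_distance(v=16.7, t_comm_delay=0.05)  # ~60 km/h ego speed
print(f"trigger emergency braking inside {d_safe:.1f} m")
```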
    Influence degree analysis of ridership characteristics at urban rail transit stations
    MA Zhuanglin, YANG Xing, HU Dawei, TAN Xiaowei
    Journal of Tsinghua University(Science and Technology). 2023, 63 (9): 1428-1439.   DOI: 10.16511/j.cnki.qhdxxb.2022.21.044
[Objective] The ridership characteristics of urban rail transit stations are closely related to the surrounding built environment and socioeconomic factors, and the influence of different factors on ridership exhibits temporal and spatial heterogeneity. Considering the complexity of the factors influencing station ridership, this paper uses the multiscale geographically weighted regression (MGWR) model to analyze the factors influencing rail transit station ridership at different temporal scales.
[Methods] This paper selects weekday station ridership as the dependent variable, divided into five categories: average daily ridership, morning-peak inbound, morning-peak outbound, evening-peak inbound, and evening-peak outbound ridership. A total of 23 independent variables are selected covering three aspects: station attributes, connectivity, and the built environment. The variance inflation factor and Moran's I are used to test the linear correlation and spatial autocorrelation of the independent variables, respectively. The MGWR model is applied to construct the ridership analysis model, and three indicators, the residual sum of squares (RSS), adjusted R², and the corrected Akaike information criterion (AICc), are employed to compare the performance of the ordinary least squares (OLS), geographically weighted regression (GWR), and MGWR models. The influencing factors and their interaction with station ridership at different temporal scales are then derived. Finally, the method is applied to analyze the influence on ridership characteristics at Nanjing rail transit stations.
[Results] The results are as follows. 1) The MGWR model is more reliable than the OLS and GWR models. 2) The average daily ridership model, which ignores morning and evening peak-hour ridership, has the most significant independent variables. 3) The distance to the city center has a significant negative impact on station ridership, indicating that ridership agglomeration is evident when the station is close to the city center. 4) Stations with a high proportion of residential and living facilities strongly attract morning-peak inbound and evening-peak outbound ridership, whereas those with a low proportion of such facilities strongly attract morning-peak outbound and evening-peak inbound ridership. Three significant local variables are identified, namely tourism facility POI density, enterprise and office POI density, and the ratio of commercial floor area to total floor area; these local variables affect rail transit ridership differently at different temporal scales. Tourism facility POI density has negative spatially varying impacts on average daily ridership, morning-peak inbound ridership, and evening-peak outbound ridership. Enterprise and office POI density has a negative spatially varying impact on morning-peak inbound ridership but a positive one on morning-peak outbound ridership. The ratio of commercial floor area to total floor area has both positive and negative spatially varying impacts on evening-peak inbound ridership, implying that not all commercial buildings around rail transit stations attract inbound ridership during evening peak hours.
[Conclusions] The MGWR model, which considers spatial autocorrelation, can capture the distinct influence scales of different variables and reduce result bias. The method developed in this paper achieves the expected goal and depicts the interdependence between ridership and influencing factors at the station level.
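The multicollinearity screening step mentioned above (variance inflation factors before fitting the regression models) can be illustrated with statsmodels. The variable names below are illustrative stand-ins for the 23 independent variables, and the VIF cutoff of about 10 is a common rule of thumb, not a value from the paper.

```python
# Sketch: screen independent variables for multicollinearity via VIF.
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "dist_to_center": rng.random(100),   # assumed example variables
    "bus_lines": rng.random(100),
    "poi_density": rng.random(100),
})
X["const"] = 1.0  # VIF needs an intercept column

vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vif.drop("const"))  # variables with VIF above ~10 would be dropped
```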
    Rate performance of thin-film all-solid-state lithium batteries
    QI Junyi, FANG Ruqing, WU Yongmin, TANG Weiping, LI Zhe
    Journal of Tsinghua University(Science and Technology). 2023, 63 (9): 1440-1451.   DOI: 10.16511/j.cnki.qhdxxb.2023.26.008
[Objective] All-solid-state thin-film lithium batteries, with advantages such as ultrathin form factor, intimate interfacial contact, and simple structure, are promising for portable devices and microdevices. Unlike the porous structures of conventional lithium-ion batteries, the electrodes and electrolyte of all-solid-state thin-film batteries are stacked in dense layers without any liquid electrolyte. This layout leads to considerably high interface resistance between the electrode and electrolyte and relatively low ionic conductivity of the solid-state electrolyte, which cause poor rate performance when operated below a certain capacity demand.
[Methods] Herein, an all-solid-state thin-film lithium battery comprising crystalline LiCoO2, amorphous LiPON, and lithium metal thin films was fabricated via RF magnetron sputtering and high-vacuum evaporation, and a lithium symmetric cell was fabricated via electrochemical deposition. Through electrochemical experiments and physical models in the time and frequency domains, the factors governing rate performance are systematically analyzed. A detailed analysis of rate performance requires the kinetic parameters of the different physical and chemical processes of all battery components; electrochemical impedance spectroscopy (EIS) was therefore used to measure these parameters from the impedance spectra of the Li1-xCoO2/LiPON/Li battery and the lithium symmetric cell.
[Results] Preliminary electrochemical analysis of the voltage curves at different current rates showed that the diffusion of lithium ions in the solid-state electrolyte or the positive electrode was the main source of the polarization causing low rate capacity; at high rates, a large overpotential also arose from the linear (ohmic) processes of electron and ion migration. Based on the EIS measured at different lithium intercalation levels in the lithium cobalt oxide positive electrode and a one-dimensional frequency-domain model, we obtained the battery's key kinetic parameters. A one-dimensional electrochemical time-domain model using these parameters was then introduced to further analyze the voltage curves. The mass transfer process in the solid-state electrolyte and the diffusion process in the positive electrode were found to be the key physicochemical processes governing rate performance. Furthermore, a preliminary design to improve rate performance was proposed through the electrochemical model: reducing the thickness of the solid-state electrolyte and shortening the diffusion path in the positive electrode.
[Conclusions] This work provides an analytical method based on frequency- and time-domain physical models that accurately distinguishes and analyzes the impacts of the various kinetic parameters of thin-film batteries, and it yields preliminary, practical conclusions about the rate performance of all-solid-state thin-film batteries. The mass transfer process in the solid electrolyte is the main factor affecting power performance, while the diffusion process in the positive electrode is the main factor affecting capacity performance. Knowing these parameters helps guide the fabrication and structural design of the battery.
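The two rate-limiting processes identified above (ohmic transport across the LiPON layer and solid diffusion in the LiCoO2 film) can be estimated to first order as follows. All parameter values are rough literature-order assumptions for illustration, not the paper's fitted values.

```python
# Sketch: back-of-the-envelope estimates of the electrolyte ohmic drop and
# the characteristic diffusion time of the positive-electrode film.
def electrolyte_ohmic_drop(i, L_elec, sigma):
    """i: current density (A/cm^2); L_elec: electrolyte thickness (cm);
    sigma: ionic conductivity (S/cm). Returns the voltage drop (V)."""
    return i * L_elec / sigma

def diffusion_time(L_pos, D):
    """Characteristic diffusion time tau = L^2 / D of the positive film."""
    return L_pos ** 2 / D

# assumed: ~1 um LiPON at 1e-6 S/cm, 100 uA/cm^2; ~1 um LiCoO2, D ~ 1e-12 cm^2/s
print(electrolyte_ohmic_drop(1e-4, 1e-4, 1e-6))  # ~0.01 V ohmic loss
print(diffusion_time(1e-4, 1e-12))               # ~1e4 s to equilibrate
```

Thinning the electrolyte lowers the ohmic term linearly, while shortening the cathode diffusion path reduces the diffusion time quadratically, which is consistent with the design directions proposed above.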
    ECONOMIC AND PUBLIC MANAGEMENT
    Financial distress and innovation: Evidence from the Enterprise Bankruptcy Law
    YANG Jialun, WANG Yintian
    Journal of Tsinghua University(Science and Technology). 2023, 63 (9): 1452-1466.   DOI: 10.16511/j.cnki.qhdxxb.2022.21.045
[Objective] Traditional social attitudes still hold certain negative views and stereotypes about bankruptcy. Since the implementation of the Enterprise Bankruptcy Law in China in 2007, the positive role of the bankruptcy system and the impact of creditor protection have remained insufficiently understood. Until now, no literature has systematically analyzed the effect of the Enterprise Bankruptcy Law on the innovative behavior of enterprises in financial distress; indeed, only a few quantitative analyses of the law's economic effects exist. This study examines the impact of the law's implementation on firm innovation and finds an increase in patent activity among financially distressed firms after implementation, thereby filling a gap in the law and finance literature.
[Methods] The 2007 Enterprise Bankruptcy Law is used as a quasi-natural experiment to construct a difference-in-differences model, which nets out the common trend between the treatment and control groups. The study takes Merton's option pricing theory as the foundation of the credit monitoring model and gauges each firm's degree of financial distress by calculating its distance to default. Firms are grouped by the annual median level of financial distress in 2006, the year before the law's implementation. However, firms with high financial distress typically differ from those with low distress in other characteristics, raising omitted-variable concerns. To address this, a propensity score matching method is introduced on top of the difference model: for each treatment firm, four control firms from the same industry with the closest propensity scores are selected. Propensity scores are estimated using logistic regression, and the matched sample passes the balance test. Furthermore, multistage dynamic regressions are employed to rule out reverse causality. In addition, a volatility indicator based on the three-factor model measures firm risk taking, to study the channel through which the Enterprise Bankruptcy Law influences firm innovation.
[Results] After the implementation of the Enterprise Bankruptcy Law: 1) The number of patent applications of the treatment group increased by 18.77%, and the number of invention patent applications increased by 25.86%, relative to the control group. 2) The law strengthened creditor protection and thereby raised firm risk taking, which significantly increased the innovation output and innovation quality of financially distressed firms. 3) These effects are more pronounced in firms with good corporate governance and in regions with strict rule of law and strong intellectual property protection.
[Conclusions] This study confirms the effect of creditor protection on business innovation. The introduction of the Enterprise Bankruptcy Law has strengthened the legitimate rights and interests of creditors in China, provided a strong guarantee for creditors to obtain potential repayment, enhanced the risk tolerance of firms, and encouraged firms to innovate. The study reveals the far-reaching effects of the legal system on the real economy and expands our knowledge of how the legal environment influences enterprise innovation.
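The difference-in-differences specification described above can be sketched with statsmodels. The column names and toy data are assumptions; with firm and year fixed effects the treat and post main effects are absorbed, leaving the interaction as the estimate of interest.

```python
# Sketch: DID regression with firm and year fixed effects and firm-clustered
# standard errors; treat = high distress in 2006, post = after the 2007 law.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "firm": np.repeat(np.arange(50), 6),
    "year": np.tile(np.arange(2004, 2010), 50),
})
df["treat"] = (df["firm"] % 2).astype(int)
df["post"] = (df["year"] >= 2007).astype(int)
# toy outcome with a built-in 0.2 treatment effect for illustration
df["ln_patents"] = 0.2 * df["treat"] * df["post"] + rng.normal(size=len(df))

model = smf.ols("ln_patents ~ treat:post + C(firm) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm"]}
)
print(model.params["treat:post"])  # the DID estimate of the law's effect
```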
    Systematic review and future perspectives of financial distress prediction studies: Back to the principle of finance
    ZHU Wuxiang, LIAO Jingqiu, ZHAN Ziliang, TAN Zhijia
    Journal of Tsinghua University(Science and Technology). 2023, 63 (9): 1467-1482.   DOI: 10.16511/j.cnki.qhdxxb.2023.21.008
[Significance] Because of multiple factors, such as deleveraging policy, slowing economic growth, trade frictions, and the COVID-19 pandemic, debt defaults are occurring with increasing frequency, which could trigger risk contagion and even systemic financial risks. However, the evidence indicates that existing financial distress prediction models are not sufficiently effective: for example, the nonperforming loan ratio of commercial banks shows a rising trend, and rating downgrades usually lag considerably. Government departments and market entities therefore have a strong demand for improved financial distress prediction models that enable risk identification and early warning. An effective prediction model can warn of investment risks early and help financial institutions and investors reduce losses, assist regulators in establishing multichannel default disposal mechanisms, and improve the credit environment of the capital market.
[Progress] Based on an extensive literature search of top journals and conferences from 1932 to 2020, this paper reviews four topics, namely the definition of financial distress, statistical models, variable selection, and model efficiency evaluation methods, and then summarizes three research anomalies. 1) Existing financial distress prediction models often focus on predicting deep crises, such as insolvency and bankruptcy, which may lead to delayed warnings and market panic. 2) Innovation in financial distress prediction research concentrates on applying new computer algorithms and statistical models and on incorporating nonfinancial information. One confusing fact is that the judgment of financial distress depends on the selected model, indicators, and sample set rather than on the fundamentals of the enterprise; thus, different prediction models may produce contradictory judgments for the same enterprise. 3) Identifying financial distress relies on comparing an enterprise's future cash flow against its rigid payment obligations, yet most existing models apply a multivariate weighting of common historical financial indicators.
[Conclusions and Prospects] This paper proposes a cross-model evaluation framework to compare the effectiveness of financial distress prediction models and provides improvement suggestions summarized as "one principle, three directions." The principle is that, to accurately assess and manage the absolute risk of financial distress, research should return to the principles of finance and focus on future cash flows. The three directions requiring attention are: 1) earlier financial distress warnings, such as liquidity crisis warnings; 2) steady repayment sources, including operating cash inflows, reliable asset disposal proceeds, and refinancing, rather than reliance on total assets, current assets, and other balance sheet indicators; 3) analysis of financing contracts and full-scenario analyses of future cash outflows, rather than just the current ratio, quick ratio, asset-liability ratio, and other liability indicators. In the future, with the development of big data and improved information transmission efficiency, corporate information disclosure will be considerably enhanced, allowing more accurate prediction of cash flows and repayments. Prediction models that assess absolute financial distress risk have greater potential.
    Figures and Tables | References | Related Articles | Metrics
    ENVIRONMENTAL SCIENCE AND ENGINEERING
    Layout methods of sponge source facilities for future community based on different needs
    ZHANG Xiaoyue, LI Yue, WANG Chenyang, CHEN Zhengxia, JIA Haifeng
    Journal of Tsinghua University(Science and Technology). 2023, 63 (9): 1483-1492.   DOI: 10.16511/j.cnki.qhdxxb.2023.21.001
    Abstract   PDF (5013KB) ( 139 )
    [Objective] The future community is a novel type of ecological, low-carbon urban functional unit that follows sustainable development objectives and the sponge city construction concept. Some studies have explored future community planning with different methods targeting data accessibility and technical requirements; however, a systematic method covering the different planning and design stages is still lacking, and such a method would support the planning and layout of sponge source facilities for future communities.[Methods] To integrate future community planning methods incorporating the sponge city construction concept, a multimethod framework for the sponge source facility layout of the future community was constructed, adopting the volume capture ratio (VCR) method, the modeling method, and the multiobjective optimization method for different data and technical requirements. A case study of a community to be transformed into a future community in a rainy southern Chinese city showed that the VCR method had the lowest data and technical requirements and could generate a layout scheme meeting the target volume capture ratio of annual rainfall (VCRAR); this method is particularly suitable for the early stages of sponge source facility layout planning when data are limited. However, further assessment of pollution and carbon reduction required a model, along with additional data (drainage network, rainfall records, etc.). To achieve comprehensive environmental benefits and cost-effectiveness for future communities, a multiobjective optimization method could be incorporated, although intelligent optimization algorithms and model coupling technology were indispensable for multiobjective optimization.[Results] The runoff management efficiencies of the schemes produced by these methods indicated that the sponge source facility layout scheme from the VCR method achieved approximately 80% VCRAR. The VCR-based scheme was further evaluated with the Storm Water Management Model (SWMM), demonstrating a decline in runoff peak flow from 5.65 m³·s⁻¹ in the traditional scheme (without sponge facilities) to 2.17 m³·s⁻¹ and an increase in VCRAR from 51.87% in the traditional scheme to 79.43%. A 21.69%-30.52% reduction in the peak concentrations of total suspended solids, nitrogen, phosphorus, and chemical oxygen demand and a 284.87 t·y⁻¹ carbon reduction over the traditional scheme were recorded, exhibiting the significant pollution and carbon reduction improvements of the VCR-based scheme. The multiobjective optimization scheme, obtained by coupling SWMM with the NSGA-II algorithm, aimed for the best cost-effectiveness; compared to the VCR-based scheme, it decreased the green roof and sunken greenbelt areas by 3.29% and 1.51%, respectively, increased the permeable pavement area by 2.13%, and reduced the cost by 18.67%, making permeable pavement the preferred facility type. Moreover, the multiobjective optimization scheme displayed superior peak flow reduction (21.20% decrease), peak concentration reduction of different pollutants (6.32%-16.67% decrease), rainwater reuse rate (1.17%-2.65% increase), and carbon reduction (7.91%-12.66% increase) over the VCR-based scheme. However, in the multiobjective optimization scheme, the increase in the permeable pavement area increased the carbon emission by 178.40 t compared to the VCR-based scheme.[Conclusions] Because of this trade-off, utilizing the carbon emission indicator as a control objective in the optimization process is necessary for future studies. Nonetheless, the multiobjective optimization scheme achieved higher net carbon reduction benefits owing to its higher annual reductions and needed about seven years to achieve carbon emission recovery. Briefly, the VCR method is simple and easy to operate and can meet the requirements of future community planning and runoff control objectives, while the multiobjective optimization method can achieve the best environmental benefits and cost-effectiveness.
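    To illustrate the VCR method's low data requirement, the following is a minimal sketch (under strong simplifying assumptions, and not the paper's procedure) of checking whether a layout meets a VCRAR target from a daily rainfall series alone: each day's rainfall is assumed captured up to the layout's equivalent storage depth, with storage draining fully between days. The 80% target matches the case study; the rainfall series and facility parameters are hypothetical.

    # Minimal VCRAR check via a daily water-balance approximation.
    # Rainfall series and facility sizes below are hypothetical.
    def vcrar(daily_rainfall_mm, storage_depth_mm):
        """Fraction of annual rainfall volume captured, assuming capture up to
        the equivalent storage depth each day (full drainage between days)."""
        captured = sum(min(r, storage_depth_mm) for r in daily_rainfall_mm)
        total = sum(daily_rainfall_mm)
        return captured / total if total > 0 else 1.0

    # Equivalent storage depth over the catchment: facility volume / area.
    facility_volume_m3 = 2400.0    # hypothetical total storage of the scheme
    catchment_area_m2 = 120000.0   # hypothetical community area
    depth_mm = facility_volume_m3 / catchment_area_m2 * 1000.0  # -> 20 mm

    rainfall = [0, 12, 3, 45, 0, 8, 26, 0, 5, 60]  # toy daily series, mm
    ratio = vcrar(rainfall, depth_mm)
    print(f"VCRAR = {ratio:.1%}, meets 80% target: {ratio >= 0.80}")

    A full-year local rainfall record would replace the toy series in practice; if the computed ratio falls short of the target, facility storage is enlarged and the check repeated.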
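    The paper's optimization couples SWMM with NSGA-II, which cannot be reproduced without the hydrological model itself. The following sketch therefore shows only the core multiobjective idea: among candidate layouts already evaluated on (cost, peak outflow), keep the Pareto-nondominated set that NSGA-II converges toward. All candidate areas and objective values are hypothetical.

    # Pareto-nondominated filtering over evaluated layout candidates.
    # Objectives are minimized; all numbers below are hypothetical.
    def dominates(a, b):
        """True if scheme a is no worse than b on every objective and
        strictly better on at least one."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def pareto_front(candidates):
        """Return candidates not dominated by any other candidate."""
        return [c for c in candidates
                if not any(dominates(o["obj"], c["obj"])
                           for o in candidates if o is not c)]

    # Each candidate: area fractions of (green roof, sunken greenbelt,
    # permeable pavement) and objectives (annual cost, peak outflow).
    candidates = [
        {"areas": (0.30, 0.25, 0.20), "obj": (1.00, 2.17)},  # VCR-style scheme
        {"areas": (0.27, 0.23, 0.22), "obj": (0.81, 1.71)},  # cheaper, lower peak
        {"areas": (0.35, 0.30, 0.15), "obj": (1.20, 1.90)},  # costlier, middling
    ]
    for c in pareto_front(candidates):
        print(c["areas"], c["obj"])

    In the full pipeline, NSGA-II would generate candidate area allocations, SWMM would evaluate each candidate's hydrological objectives, and this nondomination test would drive selection across generations.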
    Figures and Tables | References | Related Articles | Metrics