Journal of Tsinghua University (Science and Technology), 2021, Vol. 61, Issue (2): 128-134    DOI: 10.16511/j.cnki.qhdxxb.2020.22.029
Special Section: Safety Monitoring
Fire investigation method using completely blocked surveillance cameras
WANG Guanning1,2, CHEN Tao1, MI Wenzhong3,4, KANG Yanwu2, DENG Liang5
1. Institute of Public Safety Research, Department of Engineering Physics, Tsinghua University, Beijing 100084, China;
2. Gansu Fire and Rescue Department, Lanzhou 730000, China;
3. Hefei Institute for Public Safety Research, Tsinghua University, Hefei 320601, China;
4. Fire and Rescue Department, Ministry of Emergency Management, Beijing 100032, China;
5. School of Investigation, China People's Police University, Langfang 065000, China
Full text: PDF (8765 KB) | HTML
Abstract: Video analyses have become a key part of fire investigations. However, the cause of a fire is difficult to determine when the surveillance cameras at the scene are completely blocked. This study develops a video-based method for locating the fire origin when the surveillance cameras are completely blocked. An oil-pan pool fire is used to simulate an indoor fire scene, the view of a color charge-coupled device (CCD) camera is completely blocked, and the resulting surveillance videos are collected for analysis. A two-dimensional reconstruction of the fire scene is first created by combining the surveillance videos with on-site scene examination. The fire origin is then initially estimated by optical path analyses and further narrowed by image illumination analyses. Finally, monocular visual positioning combines these results to precisely locate the fire origin. Tests show that this monocular visual positioning method based on optical path and illumination analyses can accurately locate the fire origin even when the surveillance cameras are completely blocked, providing a powerful tool for fire investigations.
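To make the two video-analysis steps in the abstract concrete, the sketch below illustrates, in Python, how per-region brightness of a blocked-camera frame can hint at the direction of the fire light, and how a calibrated pinhole camera model can back-project an image point onto the floor plane. It is a minimal illustration of the general techniques, not the authors' implementation; the intrinsics K, pose R, C, the grid size, and the sample file blocked_frame.png are all assumptions made for this example.

# Hypothetical sketch, not the authors' code: (1) per-region brightness of a
# blocked-camera frame as a hint of where the fire light comes from, and
# (2) back-projection of a pixel onto the floor plane with a calibrated
# pinhole camera. All numeric values and file names are illustrative.
import cv2
import numpy as np

def region_brightness(frame_bgr, grid=(4, 4)):
    """Mean brightness (HSV V channel) of each grid cell of the frame."""
    v = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)[:, :, 2].astype(np.float32)
    h, w = v.shape
    gh, gw = grid
    cells = np.zeros(grid, dtype=np.float32)
    for i in range(gh):
        for j in range(gw):
            cells[i, j] = v[i * h // gh:(i + 1) * h // gh,
                            j * w // gw:(j + 1) * w // gw].mean()
    return cells

def backproject_to_floor(u, v, K, R, C):
    """Intersect the viewing ray through pixel (u, v) with the floor plane
    z = 0, for a pinhole camera with intrinsics K, world-to-camera rotation
    R, and camera centre C in world coordinates (z axis pointing up)."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray in camera frame
    ray_world = R.T @ ray_cam                             # ray in world frame
    s = -C[2] / ray_world[2]                              # scale so that z = 0
    return C + s * ray_world                              # intersection point

if __name__ == "__main__":
    frame = cv2.imread("blocked_frame.png")               # assumed sample frame
    cells = region_brightness(frame)
    iy, ix = np.unravel_index(np.argmax(cells), cells.shape)
    print("brightest grid cell (row, col):", iy, ix)

    # Illustrative calibration; in practice these values would come from
    # camera calibration and the 2-D reconstruction of the fire scene.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    R = np.array([[1.0, 0.0, 0.0],     # camera looking straight down
                  [0.0, -1.0, 0.0],
                  [0.0, 0.0, -1.0]])
    C = np.array([0.0, 0.0, 2.5])      # camera mounted 2.5 m above the floor
    h, w = frame.shape[:2]
    u = (ix + 0.5) * w / cells.shape[1]  # centre of the brightest cell, in pixels
    v = (iy + 0.5) * h / cells.shape[0]
    print("floor point in that direction:", backproject_to_floor(u, v, K, R, C))

In practice the intrinsics and pose would be obtained by camera calibration and from the two-dimensional scene reconstruction described above, and the brightness analysis would be run over many frames of the fire video rather than a single image.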
Key words: accident investigation; video analyses; monocular vision; illumination analyses; fire origin positioning
Received: 2020-06-05      Published: 2020-12-29
Corresponding author: CHEN Tao, research fellow, E-mail: chentao.a@tsinghua.edu.cn
Cite this article:
WANG Guanning, CHEN Tao, MI Wenzhong, KANG Yanwu, DENG Liang. Fire investigation method using completely blocked surveillance cameras. Journal of Tsinghua University (Science and Technology), 2021, 61(2): 128-134.
Link to this article:
http://jst.tsinghuajournals.com/CN/10.16511/j.cnki.qhdxxb.2020.22.029  or  http://jst.tsinghuajournals.com/CN/Y2021/V61/I2/128