Record 169 of
Title:Design of optical system for space-based space debris detection
Author Full Names:Linlan, Liu(1,2); Guangzhi, Lei(1); Ming, Gao(2); Hu, Wang(1,2)
Source Title:Proceedings of SPIE - The International Society for Optical Engineering
Language:English
Document Type:Conference article (CA)
Conference Title:7th Global Intelligent Industry Conference, GIIC 2024
Conference Date:March 30, 2024 - April 1, 2024
Conference Location:Shenzhen, China
Conference Sponsor:The Chinese Society for Optical Engineering
Abstract:Space debris affects the safety of Earth orbit, and the detection of space debris is becoming increasingly important. Space-based detection has the advantages of being unaffected by weather and of operating close to the debris. A high-sensitivity optical system for space debris detection is designed, with a field of view of 1° × 1°, a wavelength range of 450 nm-900 nm, an aperture of 150 mm, and a signal-to-noise ratio of 5; it can detect 12-magnitude debris and can also provide early warning for space debris smaller than 1 cm approaching within 100 km. The results of image quality evaluation, tolerance analysis, temperature adaptability analysis and ghost image analysis show that the system has a spot diameter of 6.8 μm, distortion less than 0.01%, and high energy concentration. The tolerance analysis shows that the lens yield is higher than 90% if the RMS radius of the system is greater than 0.0058 mm. The temperature adaptability analysis shows that the defocus of the system is 0.004 mm from atmospheric pressure to vacuum over the range of -20°C to 50°C, so the system adapts well to the temperature environment. The ghost image analysis shows that the ghost illuminance is less than 1E-15 W/mm2 and has no effect on imaging. The results show that the designed space debris detection optical system has high sensitivity and a large detection range, and meets the requirements of a space debris detection optical system. © 2024 SPIE.
Affiliations:(1) Space Optics Technology Research Laboratory, Xi'an Institute of Optics and Precision Machinery, Chinese Academy of Sciences, Xi'an, China; (2) School of Optoelectronic Engineering, Xi'an University of Technology, Xi'an, China
Publication Year:2024
Volume:13278
Article Number:132781H
DOI Link:10.1117/12.3032362
Database ID (Accession Number):20244517307146
Record 170 of
Title:Interaction semantic segmentation network via progressive supervised learning
Author Full Names:Zhao, Ruini(1); Xie, Meilin(1); Feng, Xubin(1); Guo, Min(1); Su, Xiuqin(1); Zhang, Ping(2)
Source Title:Machine Vision and Applications
Language:English
Document Type:Journal article (JA)
Abstract:Semantic segmentation requires both low-level details and high-level semantics, without losing too much detail while ensuring inference speed. Most existing segmentation approaches leverage low- and high-level features from pre-trained models. We propose an interaction semantic segmentation network via Progressive Supervised Learning (ISSNet). Unlike a simple fusion of two sets of features, we introduce an information interaction module to embed semantics into image details; the two jointly guide the response of features in an interactive way. We develop a simple yet effective boundary refinement module to provide refined boundary features for matching the corresponding semantics. We introduce a progressive supervised learning strategy at the training level, rather than the architecture level, which significantly improves network performance. The proposed ISSNet shows optimal inference time. We perform extensive experiments on four datasets, including Cityscapes, HazeCityscapes, RainCityscapes and CamVid. In addition to performing better in fine weather, the proposed ISSNet also performs well on rainy and foggy days. We also conduct an ablation study to demonstrate the role of each proposed component. Code is available at: https://github.com/Ruini94/ISSNet © The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2024.
Affiliations:(1) Xi’an Institute of Optics and Precision Mechanics of the Chinese Academy of Sciences, Xi’an; 710119, China; (2) Chang’an University, Xi’an; 710064, China
Publication Year:2024
Volume:35
Issue:2
Article Number:26
DOI Link:10.1007/s00138-023-01500-4
Database ID (Accession Number):20241115732788
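
Note: the record describes an information interaction module that embeds high-level semantics into low-level image details, but not its exact design; the authors' code is at the linked repository. A minimal, hypothetical PyTorch sketch of one plausible semantics-into-details interaction (layer sizes and the shared gating convolution are assumptions, not the paper's architecture):

```python
# Hypothetical sketch (not the authors' ISSNet code): a high-level semantic feature
# map and a low-level detail feature map gating each other before fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InteractionBlock(nn.Module):
    def __init__(self, low_ch, high_ch, out_ch):
        super().__init__()
        self.reduce_high = nn.Conv2d(high_ch, out_ch, kernel_size=1)
        self.reduce_low = nn.Conv2d(low_ch, out_ch, kernel_size=1)
        # a single gate conv is shared for both directions purely for brevity
        self.gate = nn.Sequential(nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.Sigmoid())
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, low_feat, high_feat):
        # Upsample semantics to the detail resolution, then let each guide the other.
        high = F.interpolate(self.reduce_high(high_feat), size=low_feat.shape[-2:],
                             mode="bilinear", align_corners=False)
        low = self.reduce_low(low_feat)
        gated_low = low * self.gate(high)    # semantics guide the detail response
        gated_high = high * self.gate(low)   # details sharpen the semantic response
        return self.fuse(torch.cat([gated_low, gated_high], dim=1))

# Example: 1/4-resolution details (64 ch) interacting with 1/16-resolution semantics (256 ch).
x_low = torch.randn(1, 64, 128, 128)
x_high = torch.randn(1, 256, 32, 32)
print(InteractionBlock(64, 256, 128)(x_low, x_high).shape)  # torch.Size([1, 128, 128, 128])
```
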
Record 171 of
Title:Motion detection of swirling multiphase flow in annular space based on electrical capacitance tomography
Author Full Names:Zhao, Qing(1); Liao, Jiawen(1); Chen, Weining(1)
Source Title:Proceedings of SPIE - The International Society for Optical Engineering
Language:English
Document Type:Conference article (CA)
Conference Title:2023 International Conference on Computer Application and Information Security, ICCAIS 2023
Conference Date:December 20, 2023 - December 22, 2023
Conference Location:Wuhan, China
Abstract:Swirling multiphase flow in an annular space is widely used in fluid machinery, such as burners and pneumatic conveying. However, the annular flow field is complex, and related research is not sufficient. To improve the safety and efficiency of equipment, this paper proposes a method for detecting the motion state of swirling fluid in an annular space by integrating computational fluid dynamics (CFD) and electrical capacitance tomography (ECT): the motion characteristics of the swirling multiphase flow in the annular space are calculated using CFD, and the distribution and motion state of the flow are visually measured using ECT. Numerical simulation and experimental results show that the results of the two methods are in good agreement, indicating that the CFD model selected in this paper is correct. The CFD effectively reveals the distribution of the swirling multiphase flow in the annular pipe, and the ECT can accurately reconstruct the position and size of the swirling multiphase flow in the annular space. The combination of these two methods provides a new idea for the study of multiphase flow in annular spaces. © 2024 SPIE.
Affiliations:(1) Xi'an Institute of Optics and Precision Mechanics of Chinese Academy of Sciences, Shaanxi, Xi'an; 710100, China
Publication Year:2024
Volume:13090
Article Number:1309003
DOI Link:10.1117/12.3026097
Database ID (Accession Number):20241815993004
Record 172 of
Title:An optimization method for aircraft attitude measurement based on contour matching
Author Full Names:Qin, Ruijiao(1,2); Tang, Huijun(3)
Source Title:Proceedings of SPIE - The International Society for Optical Engineering
Language:English
Document Type:Conference article (CA)
Conference Title:4th International Conference on Geology, Mapping, and Remote Sensing, ICGMRS 2023
Conference Date:April 14, 2023 - April 16, 2023
Conference Location:Wuhan, China
Conference Sponsor:Academic Exchange Information Centre (AEIC); Hubei University of Technology; Suzhou University of Science and Technology
Abstract:The pose information of an aircraft is an important index for studying flight status and aircraft performance [1]. This article focuses on aircraft attitude estimation based on contour matching, aiming to achieve non-contact pose estimation of long-distance moving objects within the rigorous formulation of photogrammetry. The soundness of the proposed algorithm is demonstrated through analysis of the experimental results. © 2024 COPYRIGHT SPIE. Downloading of the abstract is permitted for personal use only.
Affiliations:(1) Xi'an Jiaotong University, Shaanxi, Xi'an, China; (2) The No.771 Institute, China Aerospace Science and Technology Corporation, Shaanxi, Xi'an, China; (3) Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Shaanxi, Xi'an, China
Publication Year:2024
Volume:12978
Article Number:129782I
DOI Link:10.1117/12.3019432
Database ID (Accession Number):20240615524021
Record 173 of
Title:Optical fiber sensing probe for detecting a carcinoembryonic antigen using a composite sensitive film of PAN nanofiber membrane and gold nanomembrane
Author Full Names:Li, Jinze(1); Liu, Xin(2); Sun, Hao(1); Xi, Jiawei(1); Chang, Chen(3); Deng, Li(1); Yang, Yanxin(1); Li, Xiang(1)
Source Title:Optics Express
Language:English
Document Type:Journal article (JA)
Abstract:An optical fiber sensing probe using a composite sensitive film of a polyacrylonitrile (PAN) nanofiber membrane and a gold nanomembrane is presented for the detection of carcinoembryonic antigen (CEA), a biomarker associated with colorectal cancer and other diseases. The probe is based on a tilted fiber Bragg grating (TFBG) with a surface plasmon resonance (SPR) gold nanomembrane and a functionalized PAN nanofiber coating that selectively binds to CEA molecules. The performance of the probe is evaluated by measuring the spectral shift of the TFBG resonances as a function of CEA concentration in buffer. The probe exhibits a sensitivity of 0.46 dB/(μg/ml), a low limit of detection of 505.4 ng/mL in buffer, and good selectivity and reproducibility. The proposed probe offers a simple, cost-effective, and novel method for CEA detection that can potentially be applied to clinical diagnosis and monitoring of CEA-related diseases. © 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement.
Affiliations:(1) School of Optoelectronic Engineering, Xidian University, Xi'an; 710071, China; (2) School of Physics, Xidian University, Xi'an; 710071, China; (3) Department of Pathology, Shaanxi Provincial People's Hospital, Xi'an; 710068, China
Publication Year:2024
Volume:32
Issue:11
Start Page:20024-20034
DOI Link:10.1364/OE.523513
Database ID (Accession Number):20242116151967
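
Note: the record quotes a sensitivity of 0.46 dB/(μg/ml) and a limit of detection of 505.4 ng/mL. Assuming a zero-intercept linear calibration over the working range (an assumption; the record does not state the calibration model), converting a measured TFBG resonance shift into an estimated CEA concentration could look like this sketch:

```python
# Hypothetical linear calibration sketch using only figures quoted in the abstract.
SENSITIVITY_DB_PER_UG_ML = 0.46   # dB per (ug/mL), from the abstract
LOD_NG_ML = 505.4                 # limit of detection in buffer, from the abstract

def cea_concentration_ug_ml(shift_db: float) -> float:
    """Estimate CEA concentration (ug/mL) from a measured spectral shift (dB),
    assuming a zero-intercept linear response (an assumption, not from the paper)."""
    conc = shift_db / SENSITIVITY_DB_PER_UG_ML
    if conc * 1000.0 < LOD_NG_ML:          # convert ug/mL to ng/mL for the LOD check
        print("warning: estimate is below the reported limit of detection")
    return conc

print(f"{cea_concentration_ug_ml(0.23):.2f} ug/mL")  # 0.23 dB shift -> 0.50 ug/mL
```
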
Record 174 of
Title:Grayscale Iterative Star Spot Extraction Algorithm Based on Image Entropy
Author Full Names:Zhao, Qing(1); Liao, Jiawen(1); Zhang, Derui(1); Feng, Jia(1)
Source Title:Applied Sciences (Switzerland)
Language:English
Document Type:Journal article (JA)
Abstract:Star trackers are susceptible to interference from stray light, such as sunlight, moonlight, and Earth atmosphere light, in the space environment, resulting in an overall increase in the star image gray level, poor background uniformity, a low star extraction rate, and a high number of false star spots. In response to these challenges, this paper proposes a grayscale iterative star spot extraction algorithm based on image entropy. The algorithm consists of two main steps: (1) multiple grayscale iterations, which effectively use prior information on the local contrast of star spots to filter out the stray-light background to a certain extent; (2) an inner-outer template, with which the image entropy algorithm is employed to obtain the real star targets to be extracted, further suppressing background clutter and noise. Numerical simulations and experimental results demonstrate that, compared to traditional detection algorithms, this algorithm can effectively suppress background stray light, enhance the star extraction rate, and reduce the number of false star spots, and it exhibits superior detection performance in complex backgrounds across various scenarios. © 2024 by the authors.
Affiliations:(1) Aircraft Optical Imaging Monitoring and Measurement Technology Laboratory, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an; 710119, China
Publication Year:2024
Volume:14
Issue:20
Article Number:9207
DOI Link:10.3390/app14209207
Database ID (Accession Number):20244417292963
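
Note: the record gives only the two-step outline (grayscale iteration on local contrast, then an inner-outer template scored with image entropy); the iteration rule, window sizes, and thresholds are not stated. A rough, hypothetical NumPy sketch of the second step, scoring a candidate spot by comparing the entropy of an inner window against the surrounding ring (window sizes and the decision margin are invented for illustration):

```python
import numpy as np

def local_entropy(patch: np.ndarray, bins: int = 32) -> float:
    """Shannon entropy of the gray-level histogram of a patch."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def is_real_star(image: np.ndarray, cx: int, cy: int,
                 inner: int = 3, outer: int = 9, margin: float = 0.5) -> bool:
    """Hypothetical inner-outer template test: a real star spot should make the
    inner window's gray-level statistics differ noticeably from the outer ring."""
    inner_patch = image[cy - inner:cy + inner + 1, cx - inner:cx + inner + 1]
    outer_patch = image[cy - outer:cy + outer + 1, cx - outer:cx + outer + 1].copy()
    outer_patch[outer - inner:outer + inner + 1, outer - inner:outer + inner + 1] = -1
    ring = outer_patch[outer_patch >= 0]   # outer ring only, inner block masked out
    return abs(local_entropy(inner_patch) - local_entropy(ring)) > margin

# Toy example: a bright spot on a noisy background.
img = (np.random.rand(64, 64) * 20).astype(np.float32)
img[30:33, 30:33] += 200
print(is_real_star(img, 31, 31))
```
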
Record 175 of
Title:Multinetwork Algorithm for Coastal Line Segmentation in Remote Sensing Images
Author Full Names:Li, Xuemei(1); Wang, Xing(2); Ye, Huping(3); Qiu, Shi(4); Liao, Xiaohan(5)
Source Title:IEEE Transactions on Geoscience and Remote Sensing
Language:English
Document Type:Journal article (JA)
Abstract:The demarcation between sea and land, commonly referred to as the coastline, is of paramount importance for dynamic monitoring of its alterations. This monitoring is essential for the effective utilization of marine resources and the conservation of the ecological environment. To address the challenges posed by the extensive expanse of coastlines, which can complicate their acquisition and processing, this study uses remote sensing imagery to introduce a coastline segmentation algorithm that integrates multiple networks to enhance its effectiveness. The innovations of the coastline extraction algorithm are as follows. First, an attention-guided conditional generative adversarial network (AC-GAN) model is utilized, which reframes the image segmentation task as a style transformation problem. Second, a coastline segmentation strategy uses a Dense Swin Transformer Unet (DSTUnet) to construct a densely structured model; this approach integrates the Transformer to prioritize focal regions, thereby enhancing image and semantic interpretation. Third, a transfer learning framework is proposed to integrate multiple features, leveraging the strengths of the different networks to achieve accurate segmentation of coastlines. The study introduced two datasets, and the experimental results confirm that parallel network configurations and asymmetric weighting are superior in achieving optimal results, with an area overlap measure (AOM) score of 85%, outperforming the Unet by 5%. © 1980-2012 IEEE.
Affiliations:(1) Chengdu University of Technology, School of Mechanical and Electrical Engineering, Chengdu; 610059, China; (2) National Institute of Measurement and Testing Technology, Electronic Research Institute, Chengdu; 610021, China; (3) Institute of Geographic Sciences and Natural Resources Research, The Key Laboratory of Low Altitude Geographic Information and Air Route, Civil Aviation Administration of China, Chinese Academy of Sciences, State Key Laboratory of Resources and Environment Information System, Beijing; 100101, China; (4) Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Key Laboratory of Spectral Imaging Technology, CAS, Xi'an; 710119, China; (5) Institute of Geographic Sciences and Natural Resources Research, The Key Laboratory of Low Altitude Geographic Information and Air Route, Civil Aviation Administration of China, The Research Center for UAV Applications and Regulation, Chinese Academy of Sciences, State Key Laboratory of Resources and Environment Information System, Beijing; 100101, China
Publication Year:2024
Volume:62
Article Number:4208312
DOI Link:10.1109/TGRS.2024.3435963
Database ID (Accession Number):20243216813662
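
Note: the record reports an area overlap measure (AOM) of 85% without giving the formula. AOM is commonly computed as the ratio of intersection to union of the predicted and reference masks; the sketch below uses that definition (an assumption, since the paper's exact formulation is not stated in this record):

```python
import numpy as np

def area_overlap_measure(pred: np.ndarray, gt: np.ndarray) -> float:
    """AOM between two binary masks, computed here as intersection over union
    (a common definition; the paper's exact formulation is not given in this record)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union else 1.0

# Toy example with two overlapping "sea" masks.
a = np.zeros((8, 8), dtype=np.uint8); a[:, :5] = 1
b = np.zeros((8, 8), dtype=np.uint8); b[:, 2:8] = 1
print(area_overlap_measure(a, b))  # 3 overlapping columns out of 8 -> 0.375
```
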
Record 176 of
Title:Consumer Camera Demosaicking and Denoising With a Collaborative Attention Fusion Network
Author Full Names:Yuan, Nianzeng(1); Li, Junhuai(2); Sun, Bangyong(3,4)
Source Title:IEEE Transactions on Consumer Electronics
Language:English
Document Type:Journal article (JA)
Abstract:For consumer cameras with a Bayer filter array, the raw color filter array (CFA) data collected in the real world is sampled with signal-dependent noise. Various joint denoising and demosaicking (JDD) methods are used to reconstruct full-color, noise-free images. However, some artifacts (e.g., remaining noise, color distortion, and fuzzy details) still exist in the images reconstructed by most JDD models, mainly due to the closely related challenges of a low sampling rate and signal-dependent noise. In this paper, a collaborative attention fusion network (CAF-Net), with two key modules, is proposed to address this issue. First, a multi-weight attention module is proposed to efficiently extract image features by realizing the interaction of spatial, channel, and pixel attention mechanisms. Then, by designing a local feedforward network and mask convolution aggregation over multiple receptive fields, we propose an effective dual-branch feature fusion module, which enhances image details and spatial correlation. Together, the two proposed modules enable CAF-Net to recover a high-quality image by accurately inferring the correlations of color, noise, and the spatial distribution of the CFA data. Extensive experiments on demosaicking, synthetic, and real-image JDD tasks show that the proposed CAF-Net achieves advanced performance in terms of objective evaluation metrics and visual perception. © 2023 IEEE.
Affiliations:(1) Xi'an University of Technology, School of Computer Science and Engineering, Xi'an; 710048, China; (2) Xi'an University of Technology, School of Computer Science and Engineering, The Shaanxi Key Laboratory for Network Computing and Security Technology, Xi'an; 710048, China; (3) Xi'an University of Technology, School of Printing, Packaging and Digital Media, Xi'an; 710048, China; (4) Xi'an Institute of Optics and Precision Mechanics, Key Laboratory of Spectral Imaging Technology, Chinese Academy of Sciences, Xi'an; 710119, China
Publication Year:2024
Volume:70
Issue:1
Start Page:509-521
DOI Link:10.1109/TCE.2023.3342035
Database ID (Accession Number):20235115239885
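
Note: the record says the multi-weight attention module lets spatial, channel, and pixel attention interact, but does not give the layer design. A minimal, hypothetical PyTorch sketch of combining those three standard attention forms on one feature tensor (not the CAF-Net implementation; channel counts and kernel sizes are illustrative):

```python
# Hypothetical sketch (not the authors' CAF-Net code): applying channel, spatial,
# and pixel attention maps in sequence to the same feature tensor.
import torch
import torch.nn as nn

class MultiWeightAttention(nn.Module):
    def __init__(self, ch, reduction=4):
        super().__init__()
        # Channel attention: global pooling -> bottleneck -> per-channel weights.
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())
        # Spatial attention: one weight per spatial location.
        self.spatial = nn.Sequential(nn.Conv2d(ch, 1, 7, padding=3), nn.Sigmoid())
        # Pixel attention: one weight per channel per location.
        self.pixel = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)
        x = x * self.spatial(x)
        return x * self.pixel(x)

feat = torch.randn(1, 32, 64, 64)
print(MultiWeightAttention(32)(feat).shape)  # torch.Size([1, 32, 64, 64])
```
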
Record 177 of
Title:A Novel Dynamic Contextual Feature Fusion Model for Small Object Detection in Satellite Remote-Sensing Images
Author Full Names:Yang, Hongbo(1,2); Qiu, Shi(1)
Source Title:Information (Switzerland)
Language:English
Document Type:Journal article (JA)
Abstract:Ground objects in satellite images pose unique challenges due to their low resolution, small pixel size, lack of texture features, and dense distribution, which makes detecting small objects in satellite remote-sensing images a difficult task. We propose a new detector that focuses on contextual information and multi-scale feature fusion. Inspired by the notion that surrounding context can aid in identifying small objects, we propose a lightweight context convolution block based on dilated convolutions and integrate it into the convolutional neural network (CNN). We integrate dynamic convolution blocks during the feature fusion step to enhance high-level feature upsampling, and an attention mechanism is employed to focus on the salient features of objects. We have conducted a series of experiments to validate the effectiveness of the proposed model; notably, it achieved a 3.5% mean average precision (mAP) improvement on the satellite object detection dataset. Another feature of our approach is its lightweight design: we employ group convolution to reduce the computational cost of the proposed contextual convolution module. Compared to the baseline model, our method reduces the number of parameters by 30% and the computational cost by 34%, with an FPS rate close to that of the baseline. We also validate the detection results through a series of visualizations. © 2024 by the authors.
Affiliations:(1) Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an; 710119, China; (2) University of Chinese Academy of Sciences, Beijing; 100049, China
Publication Year:2024
Volume:15
Issue:4
Article Number:230
DOI Link:10.3390/info15040230
Database ID (Accession Number):20241816016150
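
Note: the record describes a lightweight context convolution block built from dilated convolutions, with group convolution used to cut the computational cost. A short, hypothetical PyTorch sketch in that spirit (dilation rates, group count, and the residual merge are assumptions, not the paper's exact block):

```python
# Hypothetical sketch (not the paper's exact block): a lightweight context module
# built from dilated group convolutions, in the spirit described by the abstract.
import torch
import torch.nn as nn

class ContextConvBlock(nn.Module):
    def __init__(self, ch, groups=8, dilations=(1, 2, 4)):
        super().__init__()
        # Each branch sees wider surrounding context via a larger dilation rate,
        # while group convolution keeps the parameter count and FLOPs low.
        self.branches = nn.ModuleList([
            nn.Conv2d(ch, ch, kernel_size=3, padding=d, dilation=d, groups=groups)
            for d in dilations])
        self.merge = nn.Conv2d(ch * len(dilations), ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        ctx = torch.cat([b(x) for b in self.branches], dim=1)
        return self.act(x + self.merge(ctx))   # residual keeps the small object's own features

feat = torch.randn(1, 64, 80, 80)
print(ContextConvBlock(64)(feat).shape)  # torch.Size([1, 64, 80, 80])
```
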
Record 178 of
Title:Analysis of laser interference backward stray light based on TianQin space gravitational wave detection
Author Full Names:Yan, Haoyu(1,2,3); Chen, Qinfang(1,3); Ma, Zhanpeng(1,3); Wang, Hu(1,2,3)
Source Title:Journal of Astronomical Telescopes, Instruments, and Systems
Language:English
Document Type:Journal article (JA)
Abstract:According to the working principle of the telescope, the stray light of the system is required to reach the order of 10⁻¹⁰ of the output laser power. In this article, given an M1 mirror roughness of 3 and an M2-M4 mirror roughness of 1.8, separate analysis of the four mirror surfaces shows that M4 has the greatest impact on the backward stray light of the telescope, and that as the incidence angle on M4 increases, the stray light level of the system decreases; after adjusting the M4 incidence angle and considering roughness alone, the stray light level of the telescope system reaches 10⁻¹¹ of the power of the outgoing laser, which meets the expected requirements. Subsequently, we calculated the impact of particle contamination on the stray light of the system and, based on the analysis results, determined that the cleanliness level of the telescope testing and storage environment should be better than 100. We then performed surface defect calculations, obtained the surface defect requirements for M1 to M4, and concluded that as the scattering angle decreases, the main contribution to the bidirectional reflectance distribution function (BRDF) changes from geometric optics to diffraction effects. Finally, we measured the surface quality of an ultra-smooth mirror sample; substituting the measured BRDF values into the simulation analysis yields a telescope stray light level of 8.29×10⁻¹¹, meeting the expected requirements. © 2024 Society of Photo-Optical Instrumentation Engineers (SPIE).
Affiliations:(1) Chinese Academy of Sciences, Xi'an Institute of Optics and Precision Mechanics, Xi'an, China; (2) University of Chinese Academy of Sciences, Beijing, China; (3) Xi'an Space Sensor Optical Technology Engineering Research Center, Xi'an, China
Publication Year:2024
Volume:10
Issue:3
Article Number:034007
DOI Link:10.1117/1.JATIS.10.3.034007
Database ID (Accession Number):20244217187147
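
Note: the record quotes roughness-driven backscatter at the 10⁻¹⁰-10⁻¹¹ level of the laser power. The textbook total integrated scatter (TIS) relation, TIS ≈ (4πσcosθ/λ)², is the usual back-of-the-envelope link between RMS roughness σ and the scattered fraction; it is not necessarily the BRDF-based model used in the paper. The numbers below (1064 nm wavelength, 0.2 nm RMS roughness) are illustrative assumptions only:

```python
import math

def total_integrated_scatter(sigma_rms_nm: float, wavelength_nm: float,
                             incidence_deg: float = 0.0) -> float:
    """Textbook smooth-surface TIS approximation: fraction of reflected power
    scattered out of the specular beam, TIS ~ (4*pi*sigma*cos(theta)/lambda)**2."""
    theta = math.radians(incidence_deg)
    return (4.0 * math.pi * sigma_rms_nm * math.cos(theta) / wavelength_nm) ** 2

# Illustrative numbers only (not taken from the paper): a 0.2 nm RMS mirror at 1064 nm.
print(f"{total_integrated_scatter(0.2, 1064.0):.2e}")  # ~5.6e-06 of the reflected power
```
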
Record 179 of
Title:A stitching seams search strategy based on spectral image classification for hyperspectral image stitching
Author Full Names:Liu, Hong(1,2); Hu, Bingliang(1); Hou, Xingsong(2); Yu, Tao(1)
Source Title:2024 9th International Symposium on Computer and Information Processing Technology, ISCIPT 2024
Language:English
Document Type:Conference article (CA)
Conference Title:9th International Symposium on Computer and Information Processing Technology, ISCIPT 2024
Conference Date:May 24, 2024 - May 26, 2024
Conference Location:Hybrid, Xi'an, China
Conference Sponsor:IEEE
Abstract:Hyperspectral image data combines images and spectra, and there are information differences between the images in different bands when performing cube stitching of hyperspectral data. A stitching seam search strategy based on hyperspectral image classification is proposed to address the insufficient use of spectral-dimension information in current data cube stitching methods. The main steps in searching for the stitching seams are: (1) the iterative self-organizing data analysis algorithm (ISODATA) is used to classify the two hyperspectral data cubes separately; (2) grayscale transformations are performed on the classification result images; (3) a graph cut method is used to search for stitching seams on the transformed images; (4) the stitching seam is applied to all bands to obtain the stitched hyperspectral data. Experiments applying this method to unmanned aerial hyperspectral data cubes captured at waypoints by an acousto-optic tunable filter (AOTF) spectral imager show that the proposed method has advantages in both the spatial and spectral dimensions compared with the strategy of stitching the hyperspectral data cube using seams obtained from a single spectral band image. © 2024 IEEE.
Affiliations:(1) Xi'an Institute of Optics and Precision Mechanics of Chinese Academy of Sciences, Key Laboratory of Spectral Imaging Technology, Xi'an, China; (2) Xi'an Jiaotong University, School of Electronic and Information Engineering, Xi'an, China
Publication Year:2024
Start Page:535-539
DOI Link:10.1109/ISCIPT61983.2024.10673327
Database ID (Accession Number):20244117161963
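
Note: a rough, hypothetical sketch of the overall workflow listed in the abstract. It deliberately uses stand-ins (KMeans instead of ISODATA, a simple dynamic-programming seam instead of the graph cut, and no label-to-grayscale alignment between the two classification maps), so it illustrates only the shape of the pipeline, not the paper's method:

```python
# Hypothetical pipeline sketch: classify two overlapping cubes, find one seam on the
# classification maps, then apply that same seam to every band.
import numpy as np
from sklearn.cluster import KMeans

def classify_cube(cube: np.ndarray, n_classes: int = 5) -> np.ndarray:
    """Per-pixel classification of an H x W x B hyperspectral cube using its spectra."""
    h, w, b = cube.shape
    labels = KMeans(n_clusters=n_classes, n_init=4).fit_predict(cube.reshape(-1, b))
    return labels.reshape(h, w).astype(np.float32)

def seam_from_classmaps(cls_a: np.ndarray, cls_b: np.ndarray) -> np.ndarray:
    """One seam column per row, following low class-difference costs (DP stand-in)."""
    cost = np.abs(cls_a - cls_b)
    acc = cost.copy()
    for r in range(1, acc.shape[0]):
        prev = np.pad(acc[r - 1], 1, constant_values=np.inf)
        acc[r] += np.minimum(np.minimum(prev[:-2], prev[1:-1]), prev[2:])
    seam = np.zeros(acc.shape[0], dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for r in range(acc.shape[0] - 2, -1, -1):
        c = seam[r + 1]
        lo, hi = max(c - 1, 0), min(c + 2, acc.shape[1])
        seam[r] = lo + int(np.argmin(acc[r, lo:hi]))
    return seam

def stitch_cubes(cube_a: np.ndarray, cube_b: np.ndarray, seam: np.ndarray) -> np.ndarray:
    """Apply the same seam to every band: left of the seam from cube A, right from cube B."""
    out = cube_b.copy()
    for r, c in enumerate(seam):
        out[r, :c, :] = cube_a[r, :c, :]
    return out

# Toy overlap region: two 32 x 32 x 10 cubes.
a, b = np.random.rand(32, 32, 10), np.random.rand(32, 32, 10)
stitched = stitch_cubes(a, b, seam_from_classmaps(classify_cube(a), classify_cube(b)))
print(stitched.shape)
```
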
Record 180 of
Title:A Detection Method for Typical Component of Space Aircraft Based on YOLOv3 Algorithm
Author Full Names:He, Bian(1,2,3); Jianzhong, Cao(1,3); Cheng, Li(1,3); Junpeng, Dong(1,3); Zhongling, Ruan(1,3); Chao, Mei(1,3)
Source Title:2024 IEEE 3rd International Conference on Electrical Engineering, Big Data and Algorithms, EEBDA 2024
Language:English
Document Type:Conference article (CA)
Conference Title:3rd IEEE International Conference on Electrical Engineering, Big Data and Algorithms, EEBDA 2024
Conference Date:February 27, 2024 - February 29, 2024
Conference Location:Changchun, China
Abstract:A solar panel recognition method based on the YOLOv3 deep learning algorithm is proposed to address issues such as the inaccurate recognition of traditional algorithms in space solar panel detection. First, the dataset images are scaled to 416 × 416; then Labelme is used to annotate the data and transform the bounding box position information; finally, the YOLOv3 algorithm framework is used for model training. The results show that the recall, F1 score and accuracy of the YOLOv3 algorithm are all above 80%, and the YOLOv3 deep learning algorithm meets the accuracy requirements for real-time detection of solar panels. © 2024 IEEE.
Affiliations:(1) Xi'an Institute of Optics and Precision Mechanics of CAS, Xi'an, China; (2) University of Chinese Academy of Sciences, Beijing, China; (3) Xi'an Key Laboratory of Spacecraft Optical Imaging and Measurement Technology, Xi'an, China
Publication Year:2024
Start Page:1726-1729
DOI Link:10.1109/EEBDA60612.2024.10485846
Database ID (Accession Number):20241715982706
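
Note: the abstract mentions using Labelme to annotate the data and then transforming the bounding box position information for YOLOv3 training. A short sketch of that conversion step, turning a Labelme rectangle into the normalized (cx, cy, w, h) format YOLO-style pipelines expect; the file name and class list are illustrative, not from the paper:

```python
# Hypothetical annotation-conversion sketch: Labelme rectangle JSON -> YOLO text line.
import json

CLASSES = ["solar_panel"]  # illustrative class list

def labelme_to_yolo(json_path: str) -> list[str]:
    with open(json_path, "r", encoding="utf-8") as f:
        ann = json.load(f)
    img_w, img_h = ann["imageWidth"], ann["imageHeight"]
    lines = []
    for shape in ann["shapes"]:
        (x1, y1), (x2, y2) = shape["points"][:2]       # rectangle corner points
        x_min, x_max = sorted((x1, x2))
        y_min, y_max = sorted((y1, y2))
        cx = (x_min + x_max) / 2 / img_w               # normalized box center
        cy = (y_min + y_max) / 2 / img_h
        w = (x_max - x_min) / img_w                    # normalized box size
        h = (y_max - y_min) / img_h
        cls_id = CLASSES.index(shape["label"])
        lines.append(f"{cls_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}")
    return lines

# Example (hypothetical file name): one output line per labeled solar panel.
# print("\n".join(labelme_to_yolo("panel_0001.json")))
```
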