Record 13 of
Title: Long-term stable timing fluctuation correction for a picosecond laser with attosecond-level accuracy
Author Full Names: Li, Hongyang; Liu, Keyang; Tian, Ye; Song, Liwei
Source Title: HIGH POWER LASER SCIENCE AND ENGINEERING
Language: English
Document Type: Article
Keywords Plus: COHERENT BEAM COMBINATION; PULSE
Abstract: Rapid advancements in high-energy ultrafast lasers and free electron lasers have made it possible to obtain extreme physical conditions in the laboratory, laying the foundation for investigating the interaction between light and matter and probing ultrafast dynamic processes. High temporal resolution is a prerequisite for realizing the value of these large-scale facilities. Here, we propose a new method with the potential to enable the various subsystems of large scientific facilities to work well together; by combining a balanced optical cross-correlator (BOC) with near-field interferometry, the measurement accuracy and synchronization precision of timing jitter are greatly improved. Initially, we compressed a 0.8 ps laser pulse to 95 fs, which not only improved the measurement accuracy by 3.6 times but also increased the BOC synchronization precision from 8.3 fs root-mean-square (RMS) to 1.12 fs RMS. Subsequently, we successfully compensated the phase drift between the laser pulses to 189 as RMS by using the BOC for pre-correction and near-field interferometry for fine compensation. This method realizes the measurement and correction of the timing jitter of ps-level lasers with as-level accuracy, and has the potential to advance ultrafast dynamics detection and pump-probe experiments.
Addresses: [Li, Hongyang] Tongji Univ, Sch Phys Sci & Engn, Shanghai, Peoples R China; [Li, Hongyang; Tian, Ye; Song, Liwei] Chinese Acad Sci, Shanghai Inst Opt & Fine Mech, State Key Lab High Field Laser Phys, Shanghai 201800, Peoples R China; [Li, Hongyang; Tian, Ye; Song, Liwei] Univ Chinese Acad Sci, Ctr Mat Sci & Optoelect Engn, Beijing, Peoples R China; [Liu, Keyang] Chinese Acad Sci, Xian Inst Opt & Precis Mech, XIOPM Ctr Attosecond Sci & Technol, State Key Lab Transient Opt & Photon, Xian, Peoples R China
Affiliations: Tongji University; Chinese Academy of Sciences; Shanghai Institute of Optics & Fine Mechanics, CAS; State Key Laboratory of High Field Laser Physics; Chinese Academy of Sciences; University of Chinese Academy of Sciences, CAS; Chinese Academy of Sciences; Xi'an Institute of Optics & Precision Mechanics, CAS; State Key Laboratory of Transient Optics & Photonics
Publication Year: 2025
Volume: 12
Article Number: e89
DOI Link: http://dx.doi.org/10.1017/hpl.2024.74
Database ID (Accession Number): WOS:001390471900001
Record 14 of
Title: Multi-Scale Long- and Short-Range Structure Aggregation Learning for Low-Illumination Remote Sensing Imagery Enhancement
Author Full Names: Cao, Yu; Tian, Yuyuan; Su, Xiuqin; Xie, Meilin; Hao, Wei; Wang, Haitao; Wang, Fan
Source Title: REMOTE SENSING
Language: English
Document Type: Article
Keywords Plus: OBJECT DETECTION
Abstract: Benefiting from their surprising non-linear expressive capacity, deep convolutional neural networks have inspired substantial progress in low-illumination (LI) remote sensing image enhancement. The key lies in sufficiently exploiting both the specific long-range (e.g., non-local similarity) and short-range (e.g., local continuity) structures distributed across different scales of each input LI image to build an appropriate deep mapping function from the LI images to their corresponding high-quality counterparts. However, most existing methods can only individually exploit the general long-range or short-range structures shared across most images at a single scale, thus limiting their generalization performance in challenging cases. We propose a multi-scale long- and short-range structure aggregation learning network for remote sensing imagery enhancement. It features a flexible architecture for exploiting features at different scales of the input LI image, with branches including a short-range structure learning module and a long-range structure learning module. These modules extract and combine structural details from the input image at different scales and cast them into pixel-wise scale factors to enhance the image at a finer granularity. The network sufficiently leverages the specific long-range and short-range structures of the input LI image for superior enhancement performance, as demonstrated by extensive experiments on both synthetic and real datasets.
Addresses: [Cao, Yu; Tian, Yuyuan; Su, Xiuqin; Xie, Meilin; Hao, Wei; Wang, Haitao; Wang, Fan] Chinese Acad Sci, Xian Inst Opt & Precis Mech, Key Lab Space Precis Measurement Technol, Xian 710119, Peoples R China; [Cao, Yu; Tian, Yuyuan; Su, Xiuqin; Xie, Meilin; Hao, Wei] Pilot Natl Lab Marine Sci & Technol, Qingdao 266237, Peoples R China; [Cao, Yu] Shanxi Univ, Collaborat Innovat Ctr Extreme Opt, Taiyuan 030006, Peoples R China; [Tian, Yuyuan] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
Affiliations: Chinese Academy of Sciences; Xi'an Institute of Optics & Precision Mechanics, CAS; Laoshan Laboratory; Shanxi University; Chinese Academy of Sciences; University of Chinese Academy of Sciences, CAS
Publication Year: 2025
Volume: 17
Issue: 2
Article Number: 242
DOI Link: http://dx.doi.org/10.3390/rs17020242
Database ID (Accession Number): WOS:001404656400001
Record 15 of
Title: When Remote Sensing Meets Foundation Model: A Survey and Beyond
Author Full Names: Huo, Chunlei; Chen, Keming; Zhang, Shuaihao; Wang, Zeyu; Yan, Heyu; Shen, Jing; Hong, Yuyang; Qi, Geqi; Fang, Hongmei; Wang, Zihan
Source Title: REMOTE SENSING
Language: English
Document Type: Review
Abstract: Most deep-learning-based vision tasks rely heavily on crowd-labeled data, and deep neural networks (DNNs) are usually constrained by this laborious and time-consuming labeling paradigm. Recently, foundation models (FMs) have been presented to learn richer features from multi-modal data. Moreover, a single foundation model enables zero-shot predictions on various vision tasks. These advantages make foundation models well suited for remote sensing images, where image annotations are sparser. However, the inherent differences between natural images and remote sensing images hinder the application of foundation models. In this context, this paper provides a comprehensive review of common foundation models and domain-specific foundation models for remote sensing, and it summarizes the latest advances in vision foundation models, textually prompted foundation models, visually prompted foundation models, and heterogeneous foundation models. Despite the great potential of foundation models for vision tasks, open challenges concerning data, models, and tasks limit performance on remote sensing images and keep foundation models far from practical application. To reduce the performance gap between natural images and remote sensing images, this paper discusses these open challenges and suggests potential directions for future advancements.
Addresses: [Huo, Chunlei] Capital Normal Univ, Informat & Engn Coll, Beijing 100048, Peoples R China; [Huo, Chunlei; Hong, Yuyang] Univ Chinese Acad Sci, Beijing 100049, Peoples R China; [Chen, Keming; Zhang, Shuaihao; Wang, Zeyu; Yan, Heyu; Fang, Hongmei; Wang, Zihan] Chinese Acad Sci, Aerosp Informat Res Inst, Beijing 100086, Peoples R China; [Shen, Jing; Qi, Geqi] Chinese Acad Sci, Xian Inst Opt & Precis Mech, Xian 710119, Peoples R China; [Shen, Jing; Qi, Geqi] Chinese Acad Sci, Inst Automat, State Key Lab Multimodal Artificial Intelligence S, Beijing 100086, Peoples R China
Affiliations: Capital Normal University; Chinese Academy of Sciences; University of Chinese Academy of Sciences, CAS; Chinese Academy of Sciences; Aerospace Information Research Institute, CAS; Chinese Academy of Sciences; Xi'an Institute of Optics & Precision Mechanics, CAS; Chinese Academy of Sciences; Institute of Automation, CAS
Publication Year: 2025
Volume: 17
Issue: 2
Article Number: 179
DOI Link: http://dx.doi.org/10.3390/rs17020179
Database ID (Accession Number): WOS:001404721500001
Record 16 of
Title: Variable-Parameter Impedance Control of Manipulator Based on RBFNN and Gradient Descent
Author Full Names: Li, Linshen; Wang, Fan; Tang, Huilin; Liang, Yanbing
Source Title: SENSORS
Language: English
Document Type: Article
Abstract: When a manipulator executes a grasping task, accurate force and position control of its end-effector must be implemented concurrently to ensure no damage to the object. To address the computationally intensive nature of current hybrid force/position control methods, a variable-parameter impedance control method for manipulators, utilizing a gradient descent method and a Radial Basis Function Neural Network (RBFNN), is proposed. This method employs a position-based impedance control structure that integrates iterative learning control principles with a gradient descent method to dynamically adjust impedance parameters. Firstly, a sliding mode controller is designed for position control to mitigate uncertainties, including friction and unknown perturbations within the manipulator system. Secondly, the RBFNN, known for its nonlinear fitting capabilities, is employed to identify the system throughout the iterative process. Lastly, a gradient descent method adjusts the impedance parameters iteratively. Through simulation and experimentation, the efficacy of the proposed method in achieving precise force and position control is confirmed. Compared to traditional impedance control, manual adjustment of impedance parameters is unnecessary, and the method can adapt to tasks involving objects of varying stiffness, highlighting its superiority.
Addresses: [Li, Linshen; Wang, Fan; Tang, Huilin; Liang, Yanbing] Xian Inst Opt & Precis Mech CAS, Xian 710119, Peoples R China; [Li, Linshen; Tang, Huilin] Univ Chinese Acad Sci, Sch Optoelect, Beijing 100049, Peoples R China; [Li, Linshen; Wang, Fan; Tang, Huilin; Liang, Yanbing] Key Lab Space Precis Measurement Technol CAS, Xian 710119, Peoples R China
Affiliations: Chinese Academy of Sciences; Xi'an Institute of Optics & Precision Mechanics, CAS; Chinese Academy of Sciences; University of Chinese Academy of Sciences, CAS
Publication Year: 2025
Volume: 25
Issue: 1
Article Number: 49
DOI Link: http://dx.doi.org/10.3390/s25010049
Database ID (Accession Number): WOS:001393893600001
Record 17 of
Title: Simulation investigation on the pulse/analog dual-mode electron multiplier with discrete arc-shaped dynodes
Author Full Names: Liu, Li; Li, Jie; Liu, Biye; Wang, Teng; Liu, Hulin; Yun, Xintuan; Wu, Shengli; Hu, Wenbo
Source Title: JOURNAL OF VACUUM SCIENCE & TECHNOLOGY B
Language: English
Document Type: Article
Keywords Plus: EMISSION CHARACTERISTICS; FILM; SAMPLES
Abstract: To satisfy the demand of mass spectrometers for high-sensitivity and high-resolution ion detection, a pulse/analog dual-mode, arc-shaped, discrete-dynode electron multiplier (DM-ADD-EM) with a 20-stage dynode structure was proposed, and its gain and time characteristics were investigated by three-dimensional numerical simulation. Each of the 2nd-20th dynodes has an arc-shaped substrate consisting of a long arc segment and a short arc segment, attached with a pair of side baffles. The simulation results indicate that the two side baffles focus the electron beam toward the central regions between them, reducing the number of secondary electrons escaping from the dynode array and therefore raising the electron collection efficiency of the dynodes. As the radius (R) of the arc-shaped substrates increases, the device gain rises. For R = 3.6 mm, there is an optimum long-arc-segment center angle (alpha = 79 degrees) at which the DM-ADD-EM reaches relatively high analog gain and pulse gain together with preferable time response, and its dynodes in the pulse section can be better protected from electron impact in analog output mode. In addition, the long-arc-segment center angle of the 12th-17th dynodes was further optimized to 84 degrees to suppress ion feedback. A dynode-configuration-optimized DM-ADD-EM with SiO2-doped MgO-Au secondary electron emission film achieves a pulse gain of 7.2 x 10^8, an analog gain of 1.3 x 10^4, a pulse rise time of 3.8 ns, and a pulse width of 9.2 ns under analog-section/pulse-section voltages of -1800 V/1000 V, exhibiting significantly improved pulse gain and better time response. These results provide a basis for the design and fabrication of high-performance EMs.
Addresses: [Liu, Li; Li, Jie; Liu, Biye; Wang, Teng; Yun, Xintuan; Wu, Shengli; Hu, Wenbo] Xi An Jiao Tong Univ, Sch Elect Sci & Engn, Minist Educ, Key Lab Phys Elect & Devices, State Key Lab Mech B, 28 Xianning West Rd, Xian 710049, Peoples R China; [Liu, Hulin] Chinese Acad Sci, Inst Opt & Precis Mech, 17 Xinxi Rd, Xian 710119, Peoples R China; [Wu, Shengli; Hu, Wenbo] Xi An Jiao Tong Univ, Sch Elect Sci & Engn, Moe, Key Lab Multifunct Mat & Struct, 28 Xianning West Rd, Xian 710049, Peoples R China
Affiliations: Xi'an Jiaotong University; Chinese Academy of Sciences; Xi'an Institute of Optics & Precision Mechanics, CAS; Xi'an Jiaotong University
Publication Year: 2025
Volume: 43
Issue: 1
Article Number: 12201
DOI Link: http://dx.doi.org/10.1116/6.0004105
Database ID (Accession Number): WOS:001388033700001
Record 18 of
Title: SCM-YOLO for Lightweight Small Object Detection in Remote Sensing Images
Author Full Names: Qiang, Hao; Hao, Wei; Xie, Meilin; Tang, Qiang; Shi, Heng; Zhao, Yixin; Han, Xiaoteng
Source Title: REMOTE SENSING
Language: English
Document Type: Article
Abstract: Currently, small object detection in complex remote sensing environments faces significant challenges. The detectors designed for this scenario have limitations, such as insufficient extraction of spatial local information, inflexible feature fusion, and limited global feature acquisition capability. In addition, performance and complexity must be balanced when improving the model. To address these issues, this paper proposes an efficient and lightweight SCM-YOLO detector, improved from YOLOv5, with spatial local information enhancement, multi-scale feature adaptive fusion, and global sensing capabilities. The SCM-YOLO detector consists of three innovative and lightweight modules: the Space Interleaving in Depth (SPID) module, the Cross Block and Channel Reweight Concat (CBCC) module, and the Mixed Local Channel Attention Global Integration (MAGI) module. These three modules effectively improve the performance of the detector from three aspects: feature extraction, feature fusion, and feature perception. The ability of SCM-YOLO to detect small objects in complex remote sensing environments is significantly improved while maintaining its lightweight characteristics. The effectiveness and lightweight characteristics of SCM-YOLO are verified through comparison experiments on the AI-TOD and SIMD public remote sensing small object detection datasets. In addition, we validate the effectiveness of the three modules, SPID, CBCC, and MAGI, through ablation experiments. The comparison experiments on the AI-TOD dataset show that the mAP50 and mAP50-95 metrics of SCM-YOLO reach 64.053% and 27.283%, respectively, significantly better than other models of the same parameter size.
Addresses: [Qiang, Hao; Hao, Wei; Xie, Meilin; Tang, Qiang; Shi, Heng; Zhao, Yixin; Han, Xiaoteng] Chinese Acad Sci, Xian Inst Opt & Precis Mech, Xian 710119, Peoples R China; [Qiang, Hao; Hao, Wei; Xie, Meilin; Tang, Qiang; Shi, Heng; Zhao, Yixin; Han, Xiaoteng] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
Affiliations: Chinese Academy of Sciences; Xi'an Institute of Optics & Precision Mechanics, CAS; Chinese Academy of Sciences; University of Chinese Academy of Sciences, CAS
Publication Year: 2025
Volume: 17
Issue: 2
Article Number: 249
DOI Link: http://dx.doi.org/10.3390/rs17020249
Database ID (Accession Number): WOS:001404682700001
Record 19 of
Title: YOLO-SS: optimizing YOLO for enhanced small object detection in remote sensing imagery
Author Full Names: Tang, Qiang; Su, Chang; Tian, Yuan; Zhao, Shibin; Yang, Kai; Hao, Wei; Feng, Xubin; Xie, Meilin
Source Title: JOURNAL OF SUPERCOMPUTING
Language: English
Document Type: Article
Abstract: The identification of minuscule objects in remote sensing data presents a formidable challenge in computer vision, where objects may occupy a mere handful of pixels. The lack of unique shape features in such small objects hinders the effectiveness of established object detection algorithms. Small object detection in remote sensing plays an important role in areas such as environmental monitoring and estimating agricultural production. To address this challenge, in this study we introduce YOLO-SS, an enhanced version of the YOLO algorithm tailored specifically for small object detection in remote sensing imagery. YOLO-SS incorporates an optimized backbone network, a restructured loss function, and an asymmetric training sample weighting strategy. These improvements prioritize the model's attention toward high-quality positive samples of small objects while reducing sensitivity to complex backgrounds. Evaluation on the AI-TOD dataset demonstrates YOLO-SS's exceptional performance, achieving an AP50 score of 0.535, surpassing YOLOv6L by 13.4% and other popular object detection algorithms. Our findings offer a novel pathway for advancing small object detection capabilities in diverse remote sensing applications.
Addresses: [Tang, Qiang; Su, Chang; Tian, Yuan; Zhao, Shibin; Yang, Kai; Hao, Wei; Feng, Xubin; Xie, Meilin] Chinese Acad Sci, Xian Inst Opt & Precis Mech, Xian 710000, Shaanxi, Peoples R China; [Tang, Qiang; Su, Chang; Tian, Yuan; Zhao, Shibin; Yang, Kai; Hao, Wei; Feng, Xubin; Xie, Meilin] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
Affiliations: Chinese Academy of Sciences; Xi'an Institute of Optics & Precision Mechanics, CAS; Chinese Academy of Sciences; University of Chinese Academy of Sciences, CAS
Publication Year: 2025
Volume: 81
Issue: 1
Article Number: 303
DOI Link: http://dx.doi.org/10.1007/s11227-024-06765-8
Database ID (Accession Number): WOS:001379074400004
Record 20 of
Title: Application of Enhanced Weighted Least Squares with Dark Background Image Fusion for Inhomogeneity Noise Removal in Brain Tumor Hyperspectral Images
Author Full Names: Yan, Jiayue; Tao, Chenglong; Wang, Yuan; Du, Jian; Qi, Meijie; Zhang, Zhoufeng; Hu, Bingliang
Source Title: APPLIED SCIENCES-BASEL
Language: English
Document Type: Article
Abstract: The inhomogeneity of spectral pixel response is an unavoidable phenomenon in hyperspectral imaging, mainly manifested as inhomogeneity banding noise in the acquired hyperspectral data. This type of striped noise must be removed, since it is frequently uneven and densely distributed and negatively impacts data processing and application. By analyzing the source of the instrument noise, this work first developed a novel non-uniformity noise removal method for a spatial-dimension push-sweep hyperspectral imaging system. Clean and clear medical hyperspectral brain tumor tissue images were generated by combining scene-based and reference-based non-uniformity correction denoising algorithms, providing a strong basis for further diagnosis and classification. The precise procedure entails gathering the reference dark background image for rectification and the actual medical hyperspectral brain tumor image. The original hyperspectral brain tumor image is then smoothed using a weighted least squares algorithm model embedded with bilateral filtering (BLF-WLS); the instrument's fixed-pattern fringe noise component is then calculated from the acquired reference dark background image and separated out, eliminating the non-uniform fringe noise. Compared with other common image denoising methods, the proposed approach produces the best results in terms of both subjective effect and no-reference image denoising evaluation indices (MICV and MNR). The image processed by this method has almost no residual non-uniform noise; the image is clear and achieves the best visual effect. It can be concluded that denoising methods designed for specific noise types achieve better denoising effects on hyperspectral images. The non-uniformity denoising method designed in this paper for a spatial-dimension push-sweep hyperspectral imaging system can be widely applied.
Addresses: [Yan, Jiayue; Tao, Chenglong; Du, Jian; Qi, Meijie; Zhang, Zhoufeng; Hu, Bingliang] Chinese Acad Sci, Xian Inst Opt & Precis Mech, Xian 710119, Peoples R China; [Yan, Jiayue] Univ Chinese Acad Sci, Beijing 100049, Peoples R China; [Yan, Jiayue; Tao, Chenglong; Du, Jian; Zhang, Zhoufeng; Hu, Bingliang] Key Lab Biomed Spect Xian, Xian 710119, Peoples R China; [Tao, Chenglong] Chinese Acad Sci, Inst Ctr Shared Technol & Facil XIOPM, Xian 710119, Peoples R China; [Wang, Yuan] Tangdu Hosp Air Force Med Univ, Xian 710119, Peoples R China
Affiliations: Chinese Academy of Sciences; Xi'an Institute of Optics & Precision Mechanics, CAS; Chinese Academy of Sciences; University of Chinese Academy of Sciences, CAS; Chinese Academy of Sciences
Publication Year: 2025
Volume: 15
Issue: 1
Article Number: 321
DOI Link: http://dx.doi.org/10.3390/app15010321
Database ID (Accession Number): WOS:001393515300001
Record 21 of
Title: Multiscale Adaptively Spatial Feature Fusion Network for Spacecraft Component Recognition
Author Full Names: Zhang, Wuxia; Shao, Xiaoxiao; Mei, Chao; Pan, Xiaoying; Lu, Xiaoqiang
Source Title: IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING
Language: English
Document Type: Article
Abstract: Spacecraft component recognition is crucial for tasks such as on-orbit maintenance and space docking, aiming to identify and categorize different parts of a spacecraft. Semantic segmentation, known for its excellence in instance-level recognition, precise boundary delineation, and enhancement of automation capabilities, is well suited for this task. However, applying existing semantic segmentation methods to spacecraft component recognition still encounters issues with false detections, missed detections, and unclear boundaries of spacecraft components. To address these issues, we propose a multiscale adaptively spatial feature fusion network (MASFFN) for spacecraft component recognition. The MASFFN comprises a spatial attention-aware encoder (SAE) and a multiscale adaptively spatial feature fusion-based decoder (Multi-ASFFD). First, the spatial attention-aware feature fusion module within the SAE integrates spatial attention-aware features, mid-level semantic features, and input features to enhance the extraction of component characteristics, thus improving the accuracy in capturing size, shape, and texture information. Second, the multi-scale adaptively spatial feature fusion module within the Multi-ASFFD cascades four adaptively spatial feature fusion blocks to fuse low-level, middle-level, and high-level features at various scales to enrich the semantic information for different spacecraft components. Finally, a compound loss function comprising the cross-entropy and boundary losses is presented to guide the MASFFN to focus better on unclear component edges. The proposed method has been validated on the UESD and URSO datasets, and the experimental results demonstrate the superiority of MASFFN over existing spacecraft component recognition methods.
Addresses: [Zhang, Wuxia; Shao, Xiaoxiao; Pan, Xiaoying] Xian Univ Posts & Telecommun, Sch Comp Sci & Technol, Shaanxi Key Lab Network Data Anal & Intelligent Pr, Xian 710121, Peoples R China; [Mei, Chao] Chinese Acad Sci, Xian Inst Opt & Precis Mech, Ctr Opt Imagery Anal & Learning, Xian 710119, Peoples R China; [Lu, Xiaoqiang] Fuzhou Univ, Coll Phys & Informat Engn, Fuzhou 350108, Peoples R China
Affiliations: Xi'an University of Posts & Telecommunications; Chinese Academy of Sciences; Xi'an Institute of Optics & Precision Mechanics, CAS; Fuzhou University
Publication Year: 2025
Volume: 18
Start Page: 3501
End Page: 3513
DOI Link: http://dx.doi.org/10.1109/JSTARS.2024.3523273
Database ID (Accession Number): WOS:001398675100022
Record 22 of
Title: SPRNet: Laser spot center position and reconstruction under atmospheric turbulence based on enhancement
Author Full Names: Wang, Jiaqi; Meng, Xiangsheng; Zhou, Shun; Wang, Xuan; Han, Junfeng; Guo, Yifan; Song, Shigeng; Liu, Weiguo
Source Title: OPTICS AND LASERS IN ENGINEERING
Language: English
Document Type: Article
Keywords Plus: ADAPTIVE OPTICS; NEURAL-NETWORK; SYSTEM; ARRAY; SHAPE
Abstract: Free-space optical communication (FSOC) suffers from atmospheric turbulence, and the received spot undergoes severe wavefront distortion. It is difficult to position the spot center accurately or reconstruct the original spot, which leads to the loss of the transmitted information. Therefore, we establish a novel neural network, named SPRNet, to achieve spot center positioning and reconstruction. Our SPRNet consists of a spot structural feature extraction (SSFE) module and a field distribution feature enhancement (FDFE) module to locate the center and restore a quality-enhanced spot. In the FDFE module, we propose a novel spot-constrained attention module to better fuse the dual features. To solve the problem of lacking ground truth (labels), we propose a multi-frame aggregation method to obtain labels for training our deep-learning-based method and establish the Turbulence50 dataset. We carried out experiments with simulated data and real-world data to verify the effectiveness of our SPRNet. The experimental results show that our method has better performance and stronger robustness than other methods, improving by more than 2.2422 pixels in Manhattan distance for spot center positioning and by more than 3.2477 dB in PSNR for spot reconstruction.
Addresses: [Wang, Jiaqi; Meng, Xiangsheng; Wang, Xuan; Han, Junfeng; Guo, Yifan] Chinese Acad Sci, Xian Inst Opt & Precis Mech, Key Lab Space Precis Measurement Technol, Xian 710119, Peoples R China; [Wang, Jiaqi; Zhou, Shun; Guo, Yifan; Liu, Weiguo] Xian Technol Univ, Sch Optoelect Engn, Xian 710021, Peoples R China; [Song, Shigeng] Univ West Scotland, Inst Thin Films Sensors & Imaging, Scottish Univ Phys Alliance SUPA, Paisley PA1 2BE, Scotland
Affiliations: Chinese Academy of Sciences; Xi'an Institute of Optics & Precision Mechanics, CAS; Xi'an Technological University; University of West Scotland
Publication Year: 2025
Volume: 186
Article Number: 108775
DOI Link: http://dx.doi.org/10.1016/j.optlaseng.2024.108775
Database ID (Accession Number): WOS:001391991500001
Record 23 of
Title: Regulable crack patterns for the fabrication of high-performance transparent EMI shielding windows
Author Full Names: Guan, Yongmao; Yang, Liqing; Chen, Chao; Wan, Rui; Guo, Chen; Wang, Pengfei
Source Title: ISCIENCE
Language: English
Document Type: Article
Keywords Plus: GRAPHENE; FILMS; NANOPARTICLES; CONDUCTION; NETWORK; RING
Abstract: Crack pattern-based metal grid film is an ideal candidate material for transparent electromagnetic interference shielding optical windows. However, achieving crack patterns with narrow grid spacing, small wire width, and high connectivity remains challenging. Herein, an aqueous acrylic colloidal dispersion was developed as a crack precursor for preparing crack patterns. The ratio of hard monomers in the precursor, the coating thickness, and the drying mediation strategy were systematically varied to control the spacing and width of the crack patterns. The resulting dense and narrow crack patterns served as sacrificial templates for the fabrication of patterned metal grid films on transparent substrates, intended for optoelectronic applications. These films demonstrated excellent optoelectronic properties (82.7% transmission at 550 nm visible light, sheet resistance 4.1 Ω/sq) and strong EMI shielding effectiveness (average shielding effectiveness 33.6 dB at 1-18 GHz), showcasing their potential as a scalable and effective transparent EMI shielding solution.
Addresses: [Guan, Yongmao; Yang, Liqing; Chen, Chao; Wan, Rui; Guo, Chen; Wang, Pengfei] Chinese Acad Sci, Xian Inst Opt & Precis Mech, State Key Lab Transient Opt & Photon, Xian 710119, Shaanxi, Peoples R China; [Guan, Yongmao; Wang, Pengfei] Univ Chinese Acad Sci, Ctr Mat Sci & Optoelect Engn, Beijing 100049, Peoples R China
Affiliations: Chinese Academy of Sciences; Xi'an Institute of Optics & Precision Mechanics, CAS; State Key Laboratory of Transient Optics & Photonics; Chinese Academy of Sciences; University of Chinese Academy of Sciences, CAS
Publication Year: 2025
Volume: 28
Issue: 1
Article Number: 111543
DOI Link: http://dx.doi.org/10.1016/j.isci.2024.111543
Database ID (Accession Number): WOS:001391450500001
Record 24 of
Title: Infrared and visible image fusion based on relative total variation and multi feature decomposition
Author Full Names: Xu, Xiaoqing; Ren, Long; Liang, Xiaowei; Liu, Xin
Source Title: INFRARED PHYSICS & TECHNOLOGY
Language: English
Document Type: Article
Keywords Plus: VISUAL IMAGES; TRANSFORM; FRAMEWORK; NETWORK
Abstract: The fusion of infrared and visible images has been widely applied, and is of great significance, in military and civilian fields such as remote sensing, image detection and recognition, medical image analysis, computer vision, meteorological observation, aviation investigation, and battlefield assessment. In this paper, we propose a new feature-decomposition-based method. First, we use the relative total variation method to decompose the image into its structural and texture layers. The structural layer retains the main structural features of the image, while the texture layer contains texture and detail information. We then further decompose the texture layer into a large-scale middle layer and a small-scale detail layer. In response to the noise existing in infrared images due to environmental temperature and other factors, denoising is carried out in the detail layer. Different fusion weights are used to fuse each layer according to the characteristics of the different feature layers. Finally, the fused feature layers are added to obtain the final fusion image. The experiments show that this algorithm can effectively fuse infrared and visible images, preserving more visible detail texture features and infrared radiation feature information. Compared with nine other advanced algorithms in fusion and object detection experiments, it has advantages in both subjective and objective evaluation indicators.
Addresses: [Xu, Xiaoqing; Liang, Xiaowei; Liu, Xin] Xian Eurasia Univ, Xian 710119, Peoples R China; [Ren, Long] Chinese Acad Sci, Xian Inst Opt & Precis Mech, Xian 710119, Peoples R China; [Ren, Long] Xi An Jiao Tong Univ, 28 Xianning West Rd, Xian 710049, Shaanxi, Peoples R China
Affiliations: Chinese Academy of Sciences; Xi'an Institute of Optics & Precision Mechanics, CAS; Xi'an Jiaotong University
Publication Year: 2025
Volume: 145
Article Number: 105667
DOI Link: http://dx.doi.org/10.1016/j.infrared.2024.105667
Database ID (Accession Number): WOS:001391579300001