2024

  • Record 61 of

    Title:Observation of 2 µm multiple annular structured vortex pulsed beams by cavity-mode tailoring
    Author(s):Zhu, Qiang(1,2); Song, Xiaozhao(1); Li, Luyao(1); Kang, Hui(1,2); Yao, Tianchen(1,2); Liu, Guangmiao(1,2); Miao, Kairui(1); Zhou, Wei(1,2); Wang, Haotian(1,2); Xu, Xiaodong(1); Jia, Baohua(3); Wang, Yishan(4); Wang, Fei(2); Shen, Deyuan(1,2)
    Source:Optics Letters
    Volume: 49  Issue: 13  DOI: 10.1364/OL.524370  Published: July 1, 2024  
    Abstract:In the past few years, annular structured beams have been extensively studied due to their unique "doughnut" structure and characteristics such as phase and polarization vortices. Especially in the 2 µm wavelength range, they have shown promising applications in fields such as novel laser communication, optical processing, and quantum information processing. In this Letter, we observed basis vector patterns with orthogonality and completeness by finely tailoring the cavity mode via the end-mirror spatial position in a Tm:CaYAlO4 laser. Multiple annular structured beams, including azimuthally, linearly, and radially polarized beams (APB, LPB, and RPB), operated in a Q-switched mode-locking (QML) state with a typical output power of ∼18 mW around 1962 nm. Further numerical simulation proved that the multiple annular structured beams are coherent superpositions of different Hermite–Gaussian modes. Using a self-made Mach–Zehnder (M–Z) interferometer, we demonstrated that the obtained multiple annular beams carry a vortex phase with orbital angular momentum (OAM) of l = ±1. To the best of our knowledge, this is the first observation of vector and scalar annular vortex beams in a 2 µm solid-state laser. © 2024 Optica Publishing Group.
    Accession Number: 20242816657496
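
    The coherent-superposition picture in this abstract has a compact numerical form: a first-order vortex arises when the HG10 and HG01 modes are combined with a ±π/2 relative phase. Below is a minimal numpy sketch (illustrative grid and waist values, not the authors' simulation parameters):

        import numpy as np

        # Illustrative grid and waist (arbitrary units, not the paper's parameters)
        w0 = 1.0
        x = np.linspace(-3, 3, 512)
        X, Y = np.meshgrid(x, x)
        g = np.exp(-(X**2 + Y**2) / w0**2)   # fundamental Gaussian envelope

        HG10 = (2 * X / w0) * g              # first-order Hermite-Gaussian modes
        HG01 = (2 * Y / w0) * g

        l = +1                               # choose OAM of +1 or -1
        field = HG10 + 1j * l * HG01         # coherent superposition -> annular vortex

        intensity = np.abs(field)**2         # doughnut-shaped ring
        phase = np.angle(field)              # azimuthal phase ramp of 2*pi*l per turn

        # Sanity check: phase sampled on a circle about the axis ramps by 2*pi*l
        theta = np.linspace(-np.pi, np.pi, 8, endpoint=False)
        rows = (256 + 100 * np.sin(theta)).astype(int)
        cols = (256 + 100 * np.cos(theta)).astype(int)
        print(np.unwrap(np.angle(field[rows, cols])))   # ~linear ramp spanning 2*pi*l
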
  • Record 62 of

    Title:Multi-level efficient 3D image reconstruction model based on ViT
    Author(s):Zhang, Renhao(1,2); Hu, Bingliang(1); Chen, Tieqiao(1); Zhang, Geng(1); Li, Siyuan(1); Chen, Baocheng(1,2); Liu, Jia(1,3,5); Jia, Xinyin(1); Wang, Xing(4); Su, Chang(2,4); Li, Xijie(1); Zhang, Ning(1); Qiao, Kai(2,4)
    Source:Optics Express
    Volume: 32  Issue: 19  DOI: 10.1364/OE.535211  Published: September 9, 2024  
    Abstract:Single-photon LIDAR faces challenges in high-quality 3D reconstruction due to high noise levels, low accuracy, and long inference times. Traditional methods, which rely on statistical data to obtain parameter information, are inefficient in high-noise environments. Although deep learning methods based on convolutional neural networks (CNNs) can improve 3D reconstruction quality compared to traditional methods, they struggle to effectively capture global features and long-range dependencies. To address these issues, this paper proposes a multi-level efficient 3D image reconstruction model based on the vision transformer (ViT). This model leverages the self-attention mechanism of ViT to capture both global and local features and utilizes attention mechanisms to fuse and refine the extracted features. By introducing generative adversarial networks (GANs), the reconstruction quality and robustness of the model in high-noise, low-photon environments are further improved. Furthermore, the proposed 3D reconstruction network has been applied in real-world imaging systems, significantly enhancing the imaging capabilities of single-photon 3D reconstruction under strong noise conditions. © 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement.
    Accession Number: 20243817055804
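
    As background to the mechanism this abstract relies on, here is a minimal PyTorch sketch of a ViT-style block, showing how self-attention lets every patch token attend to all others (global dependencies) before a local MLP. The layer sizes and the patch tokenization are hypothetical illustrations, not the paper's actual architecture:

        import torch
        import torch.nn as nn

        class TinyViTBlock(nn.Module):
            """Minimal ViT-style block: self-attention over patch tokens + MLP.
            An illustration of the mechanism, not the paper's model."""
            def __init__(self, dim=64, heads=4):
                super().__init__()
                self.norm1 = nn.LayerNorm(dim)
                self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
                self.norm2 = nn.LayerNorm(dim)
                self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                         nn.Linear(4 * dim, dim))

            def forward(self, tokens):                 # tokens: (B, N, dim)
                h = self.norm1(tokens)
                attn_out, _ = self.attn(h, h, h)       # every token attends to all others
                tokens = tokens + attn_out             # residual connections
                return tokens + self.mlp(self.norm2(tokens))

        # A noisy measurement cube split into patches -> tokens (hypothetical sizes)
        patches = torch.randn(2, 16 * 16, 64)          # (batch, num_patches, embed_dim)
        print(TinyViTBlock()(patches).shape)           # torch.Size([2, 256, 64])
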
  • Record 63 of

    Title:All-PM Yb-doped mode-locked fiber laser with high single pulse energy and high repetition frequency
    Author(s):Fu, Chaohui(1,2); Song, Yuanqi(1,2); Tao, Jianing(1,2); Zhang, Pu(3); Qi, Mei(4); Chen, Haowei(1,2); Bai, Jintao(1,2)
    Source:Journal of Optics (United Kingdom)
    Volume: 26  Issue: 7  DOI: 10.1088/2040-8986/ad4612  Published: July 1, 2024  
    Abstract:We demonstrate an all-polarization-maintaining (PM) ytterbium (Yb)-doped fiber laser with a figure-of-9 structure that generates mode-locked pulses with high single-pulse energy and high repetition frequency. By exploiting a nonlinear amplifying loop mirror, stable self-starting mode locking is achieved with a spectral bandwidth of 13 nm and a pulse duration of 4.53 ps. The fundamental repetition frequency is 97.966 MHz at the maximum output power of 143 mW in single-pulse mode-locked operation, corresponding to a single-pulse energy of 1.46 nJ. The output pulses thus combine a high repetition frequency with a high single-pulse energy. This laser oscillator can serve as an ideal seed source for applications such as high-energy amplifiers. © 2024 IOP Publishing Ltd.
    Accession Number: 20242216152311
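
    The quoted single-pulse energy follows directly from the average power and repetition rate, E = P_avg / f_rep; a one-line check using the abstract's numbers:

        # Quick check of the quoted single-pulse energy: E = P_avg / f_rep
        P_avg = 143e-3        # average output power, W (from the abstract)
        f_rep = 97.966e6      # fundamental repetition frequency, Hz (from the abstract)
        E = P_avg / f_rep
        print(f"{E * 1e9:.2f} nJ")   # -> 1.46 nJ, matching the reported value
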
  • Record 64 of

    Title:Optical fibre based artificial compound eyes for direct static imaging and ultrafast motion detection
    Author(s):Jiang, Heng(1,2); Tsoi, Chi Chung(1,2); Yu, Weixing(3); Ma, Mengchao(4); Li, Mingjie(1); Wang, Zuankai(5); Zhang, Xuming(1,2,6)
    Source:Light: Science and Applications
    Volume: 13  Issue: 1  DOI: 10.1038/s41377-024-01580-5  Published: December 2024  
    Abstract:Natural selection has driven arthropods to evolve fantastic natural compound eyes (NCEs) with a unique anatomical structure, providing a promising blueprint for artificial compound eyes (ACEs) to achieve static and dynamic perceptions in complex environments. Specifically, each NCE utilises an array of ommatidia, the imaging units, distributed on a curved surface to enable abundant merits. This has inspired the development of many ACEs using various microlens arrays, but the reported ACEs have limited performances in static imaging and motion detection. Particularly, it is challenging to mimic the apposition modality to effectively transmit light rays collected by many microlenses on a curved surface to a flat imaging sensor chip while preserving their spatial relationships without interference. In this study, we integrate 271 lensed polymer optical fibres into a dome-like structure to faithfully mimic the structure of NCE. Our ACE has several parameters comparable to the NCEs: 271 ommatidia versus 272 for bark beetles, and 180° field of view (FOV) versus 150–180° FOV for most arthropods. In addition, our ACE outperforms the typical NCEs by ~100 times in dynamic response: 31.3 kHz versus 205 Hz for Glossina morsitans. Compared with other reported ACEs, our ACE enables real-time, 180° panoramic direct imaging and depth estimation within its nearly infinite depth of field. Moreover, our ACE can respond to an angular motion up to 5.6×10⁶ deg/s with the ability to identify translation and rotation, making it suitable for applications to capture high-speed objects, such as surveillance, unmanned aerial/ground vehicles, and virtual reality. © The Author(s) 2024.
    Accession Number: 20243817075668
  • Record 65 of

    Title:Multiscale Random-Shape Convolution and Adaptive Graph Convolution Fusion Network for Hyperspectral Image Classification
    Author(s):Gao, Hongmin(1); Sheng, Runhua(1); Chen, Zhonghao(1); Liu, Haiyun(1); Xu, Shufang(1,2); Zhang, Bing(3)
    Source:IEEE Transactions on Geoscience and Remote Sensing
    Volume: 62  Issue:   DOI: 10.1109/TGRS.2024.3390928  Published: 2024  
    Abstract:Convolutional neural networks (CNNs) are extensively utilized in hyperspectral image (HSI) classification due to their remarkable capability to extract features from patterns with fixed shapes. These networks have been shown to effectively capture features at the pixel level. However, the fixed shape of convolution kernels makes it challenging for CNNs to adapt to the diverse shapes found in HSIs. Graph neural networks (GNNs), particularly graph convolution networks (GCNs), possess robust feature extraction capabilities on graph structures and are extensively applied in HSI classification. However, one significant challenge in using GNNs is the selection of appropriate neighboring nodes for information aggregation. To address the existing challenges of GCNs and CNNs and leverage their respective advantages, this article introduces a novel patch-based CNN-GCN fusion classification network, named multiscale random-shape convolution and adaptive graph convolution fusion network (MRCAGCFN). It consists of a spectral transformation module (STM) and three main proposed modules: a multiscale random-shape convolution (RSC) module for extracting convolution features, where the shape of the convolution kernel is randomized and a multiscale approach is applied to enhance adaptability to data with diverse shapes; an adaptive feature-fusion graph convolution module (AFGCM) for extracting graph convolution features, where the weights for neighborhood aggregation are learned adaptively to reduce feature fusion from dissimilar nodes and strengthen feature fusion from similar nodes; and an adaptive local feature processing module for processing features, where two different methods are employed to convert patch-level features to pixel-level features, thereby improving feature representation. MRCAGCFN combines the strengths of CNNs and GCNs while introducing enhancements to better accommodate diverse feature shapes. Experimental results on three HSI classification datasets demonstrate that our proposed MRCAGCFN outperforms some existing methods. The code for MRCAGCFN will be available at https://github.com/shengrunhua/MRCAGCFN. © 1980-2012 IEEE.
    Accession Number: 20241715959755
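
    A minimal PyTorch sketch of the random-shape convolution idea described in this abstract: fix a random binary support mask over a k×k kernel and run several kernel sizes in parallel. The masking scheme and all sizes are illustrative assumptions, not the authors' exact construction:

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class RandomShapeConv2d(nn.Module):
            """Conv layer whose kernel support is a fixed random binary mask --
            a sketch of the random-shape idea, not the paper's construction."""
            def __init__(self, in_ch, out_ch, k=5, keep=0.6):
                super().__init__()
                self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.05)
                # ~keep fraction of kernel positions remain active
                self.register_buffer("mask", (torch.rand(1, 1, k, k) < keep).float())

            def forward(self, x):
                return F.conv2d(x, self.weight * self.mask,
                                padding=self.weight.shape[-1] // 2)

        # Multiscale use: parallel branches with different kernel sizes, concatenated
        x = torch.randn(1, 3, 64, 64)   # e.g. 3 principal-component bands of an HSI patch
        branches = [RandomShapeConv2d(3, 8, k) for k in (3, 5, 7)]
        feats = torch.cat([b(x) for b in branches], dim=1)
        print(feats.shape)              # torch.Size([1, 24, 64, 64])
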
  • Record 66 of

    Title:Single-pixel imaging based on self-supervised conditional mask classifier-free guidance
    Author(s):Li, Qianxi(1,2); Yan, Qiurong(3); Dong, Jiawei(1,2); Feng, Jia(1); Wu, Jiaxin(1,2); Cao, Jianzhong(1); Liu, Guangsen(1); Wang, Hao(1)
    Source:Optics Express
    Volume: 32  Issue: 11  DOI: 10.1364/OE.518455  Published: May 20, 2024  
    Abstract:Reconstructing high-quality images at a low measurement rate is a pivotal objective of Single-Pixel Imaging (SPI). Current deep learning methods achieve this by optimizing the loss between the target image and the original image, which limits what can be recovered from few measurement values. We employ conditional probability to ameliorate this, introducing the classifier-free guidance model (CFG) for enhanced reconstruction. We propose a self-supervised conditional masked classifier-free guidance (SCM-CFG) method for single-pixel reconstruction. At a 10% measurement rate, SCM-CFG efficiently completed the training task, achieving an average peak signal-to-noise ratio (PSNR) of 26.17 dB on the MNIST dataset, surpassing other photon imaging and computational ghost imaging methods and demonstrating remarkable generalization performance. Moreover, thanks to the design of the conditional mask in this paper, SCM-CFG can significantly enhance the accuracy of reconstructed images through overlay: it achieved an average improvement of 7.3 dB in overlay processing, in contrast to only a 1 dB improvement for computational ghost imaging. Subsequent physical experiments validated the effectiveness of SCM-CFG. © 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement.
    Accession Number: 20242216154637
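
    For context, the standard classifier-free guidance rule that CFG-based reconstruction builds on blends an unconditional and a conditional prediction with a guidance weight w. A minimal sketch with hypothetical tensor shapes (the SCM-CFG network itself is not reproduced here):

        import torch

        def cfg_combine(eps_uncond, eps_cond, w=2.0):
            """Standard classifier-free guidance blend: move the unconditional
            prediction toward (and past) the conditional one by weight w."""
            return eps_uncond + w * (eps_cond - eps_uncond)

        # Hypothetical denoiser outputs for one 32x32 reconstruction
        eps_uncond = torch.randn(1, 1, 32, 32)   # prediction with the condition masked out
        eps_cond = torch.randn(1, 1, 32, 32)     # prediction conditioned on SPI measurements
        print(cfg_combine(eps_uncond, eps_cond).shape)
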
  • Record 67 of

    Title:Communication-Constrained UAVs' Coverage Search Method in Uncertain Scenarios
    Author(s):Xu, Shufang(1,2); Zhou, Ziyun(1); Li, Jianni(1); Wang, Longbao(3); Zhang, Xuejie(3); Gao, Hongmin(1)
    Source:IEEE Sensors Journal
    Volume: 24  Issue: 10  DOI: 10.1109/JSEN.2024.3384261  Published: May 15, 2024  
    Abstract:To solve target search problems for an unmanned aerial vehicle (UAV) swarm in uncertain environments without prior knowledge, this article designs a communication-constrained coverage search model and a new optimization algorithm. We propose a multiobjective optimization model based on grid environment modeling and UAV modeling with communication constraints. To optimize the proposed model, we design a collaborative search scheme based on model predictive control and communication constraints (CSS-MPCCC) to solve the UAVs' path planning and decision-making at each moment. CSS-MPCCC is a two-stage optimization method that jointly optimizes search performance and selects an optimal solution from the Pareto frontier, relying on NSGA-II. Our model and algorithm enable the UAV swarm to perform coverage search of both static and dynamic targets. In simulations, the proposed CSS-MPCCC algorithm is compared with classical random search and parallel search algorithms, and the performance of the model and algorithm is analyzed across several groups of simulations. The results validate that the proposed algorithm effectively enables the UAV swarm to search for static and dynamic targets in uncertain scenarios. © 2001-2012 IEEE.
    Accession Number: 20241615919207
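
    A toy receding-horizon coverage step conveys the flavor of MPC-style search under a communication constraint. This greedy, single-objective sketch is only illustrative; CSS-MPCCC itself optimizes multiple objectives with NSGA-II. All positions, ranges, and the gain model are assumptions:

        import numpy as np

        grid = np.full((20, 20), 0.5)        # target-presence probability per cell
        uavs = [(2, 2), (12, 2)]             # UAV cell positions (hypothetical)
        COMM_RANGE = 12                      # max Manhattan distance to nearest teammate

        def step(uavs, grid):
            moves = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]
            new = []
            for i, (r, c) in enumerate(uavs):
                best, best_gain = (r, c), -1.0
                for dr, dc in moves:
                    nr, nc = r + dr, c + dc
                    if not (0 <= nr < 20 and 0 <= nc < 20):
                        continue
                    others = [p for j, p in enumerate(uavs) if j != i]
                    # Communication constraint: stay within range of a teammate
                    if min(abs(nr - orow) + abs(nc - ocol)
                           for orow, ocol in others) > COMM_RANGE:
                        continue
                    gain = grid[nr, nc]      # uncertainty removed by visiting the cell
                    if gain > best_gain:     # ties break toward the first candidate
                        best, best_gain = (nr, nc), gain
                new.append(best)
                grid[best] = 0.0             # cell observed: no residual uncertainty
            return new

        for _ in range(30):                  # receding-horizon loop
            uavs = step(uavs, grid)
        print(f"coverage: {(grid == 0).mean():.0%}")
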
  • Record 68 of

    Title:Rotary error modeling and assembly optimization of parallel structure shafting
    Author(s):Dong, Yi-Ming(1,2,3); Jiang, Bo(1,3); Li, Xiang-Yu(1,3); Xie, You-Jin(1,3); Lv, Tao(1,3); Ruan, Ping(1,3)
    Source:Chinese Optics
    Volume: 17  Issue: 3  DOI: 10.37188/CO.2023-0171  Published: May 2024  
    Abstract:In order to improve the shafting motion accuracy of two-dimensional turntables such as photoelectric theodolites, we establish a mathematical model based on Jacobian–Torsor theory that considers both the structural errors of parts and the coupling amplification effect. Aiming at a shafting structure with one fixed end and one floating end, an analysis method for the partially parallel structure is proposed. Through numerical simulation analysis, the impact of each part's structural errors on the motion accuracy of the shafting and the optimal shafting assembly scheme were obtained. The assembly and adjustment results of a photoelectric theodolite with an optical diameter of 650 mm show that assembly optimization improved the motion accuracy of the shaft system by 32.1%. The precision model and optimization method of shafting motion provide a theoretical basis for the shafting adjustment and tolerance design of two-dimensional turntables such as photoelectric theodolites. © 2024 Editorial Office of Chinese Optics. All rights reserved.
    Accession Number: 20242316212544
  • Record 69 of

    Title:Switchable hybrid-order optical vortex lattice
    Author(s):Qin, Xueyun(1); Zhang, Hao(1); Tang, Miaomiao(1); Zhou, Yujie(1); Tai, Yuping(1,2); Li, Xinzhong(1,2)
    Source:Optics Letters
    Volume: 49  Issue: 9  DOI: 10.1364/OL.515906  Published: May 1, 2024  
    Abstract:Optical vortex (OV) modulation is a powerful technique for enhancing the intrinsic degrees of freedom in structured light applications. In particular, lattices involving multiple OVs have garnered significant academic interest owing to their wide applicability in optical tweezers and condensed matter physics. However, in existing lattices all OVs possess the same order and cannot be modulated individually, limiting versatile applications. Herein, we propose, to our knowledge, a novel concept, called the hot-swap method, to design a switchable hybrid-order OV lattice in which each OV can easily be replaced by one of arbitrary order. We experimentally generated the switchable hybrid-order OV lattice and studied its characteristics, including interferograms, retrieved phase, energy flow, and orbital angular momentum. Furthermore, the significant advantages of the switchable hybrid-order OV lattice are demonstrated through the independent manipulation of multiple yeast cells. This study provides a novel scheme for the accurate control and modulation of OV lattices, which greatly facilitates diverse applications in optical manipulation and particle trapping. © 2024 Optica Publishing Group.
    Accession Number: 20241916073719
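
    The hybrid-order idea admits a compact phase-mask sketch: give each lattice site its own topological charge, so any single OV can be swapped to another order independently. The site positions and charges below are illustrative, not the paper's parameters:

        import numpy as np

        x = np.linspace(-1, 1, 512)
        X, Y = np.meshgrid(x, x)

        # (x0, y0, charge l) per lattice site -- each OV gets its own order
        sites = [(-0.5, -0.5, 1), (0.5, -0.5, 2), (-0.5, 0.5, -1), (0.5, 0.5, 3)]

        phase = np.zeros_like(X)
        for x0, y0, l in sites:
            phase += l * np.arctan2(Y - y0, X - x0)   # vortex phase l*theta per site

        hologram = np.mod(phase, 2 * np.pi)           # phase-only mask, e.g. for an SLM
        print(hologram.min(), hologram.max())
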
  • Record 70 of

    Title:SMALE: Hyperspectral Image Classification via Superpixels and Manifold Learning
    Author(s):Liao, Nannan(1); Gong, Jianglei(1,2); Li, Wenxing(1); Li, Cheng(3); Zhang, Chaoyan(1); Guo, Baolong(1)
    Source:Remote Sensing
    Volume: 16  Issue: 18  DOI: 10.3390/rs16183442  Published: September 2024  
    Abstract:As an extremely efficient preprocessing tool, superpixels have become more and more popular in various computer vision tasks. Nevertheless, there are still several drawbacks in their application to hyperspectral image (HSI) processing. Firstly, it is difficult to directly apply superpixels because of the high dimension of HSI information. Secondly, existing superpixel algorithms cannot accurately classify HSI objects due to multi-scale feature categorization. To handle the high-dimensionality problem, we use principal component analysis (PCA) to extract three principal components from the numerous bands to form three-channel images. In this paper, a novel superpixel algorithm called Seed Extend by Entropy Density (SEED) is proposed to alleviate the seed point redundancy caused by the diversified content of HSI. It also focuses on breaking the dilemma of manually setting the number of superpixels, to overcome the classification imprecision caused by multi-scale targets. Next, a space–spectrum constraint model, termed hyperspectral image classification via superpixels and manifold learning (SMALE), is designed, which integrates the proposed SEED into a dimensionality reduction framework. By making full use of spatial context information in the process of unsupervised dimension reduction, it can effectively improve the performance of HSI classification. Experimental results show that the proposed SEED effectively promotes the classification accuracy of HSI. Meanwhile, the integrated SMALE model outperforms existing algorithms on public datasets in terms of several quantitative metrics. © 2024 by the authors.
    Accession Number: 20244017136858
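
    The PCA preprocessing this abstract describes reduces the band dimension to three channels before superpixel segmentation. A minimal sketch with stand-in random data (cube sizes are illustrative):

        import numpy as np
        from sklearn.decomposition import PCA

        H, W, B = 64, 64, 120                 # height, width, spectral bands
        cube = np.random.rand(H, W, B)        # stand-in HSI cube

        flat = cube.reshape(-1, B)            # pixels as rows, bands as features
        pcs = PCA(n_components=3).fit_transform(flat)

        # Rescale each component to [0, 1] and reshape back to an image
        pcs = (pcs - pcs.min(0)) / (pcs.max(0) - pcs.min(0))
        rgb_like = pcs.reshape(H, W, 3)       # ready for superpixel segmentation
        print(rgb_like.shape)
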
  • Record 71 of

    Title:RGB-guided hyperspectral image super-resolution with deep progressive learning
    Author(s):Zhang, Tao(1); Fu, Ying(1); Huang, Liwei(2); Li, Siyuan(3); You, Shaodi(4); Yan, Chenggang(5)
    Source:CAAI Transactions on Intelligence Technology
    Volume: 9  Issue: 3  DOI: 10.1049/cit2.12256  Published: June 2024  
    Abstract:Due to hardware limitations, existing hyperspectral (HS) cameras often suffer from low spatial/temporal resolution. Recently, it has become prevalent to super-resolve a low resolution (LR) HS image into a high resolution (HR) HS image with the guidance of an HR RGB (or multispectral) image. Previous approaches to this guided super-resolution task often model the intrinsic characteristics of the desired HR HS image using hand-crafted priors. More recently, researchers have paid more attention to deep learning methods with direct supervised or unsupervised learning, which exploit a deep prior only from the training dataset or the testing data. In this article, an efficient convolutional neural network-based method is presented to progressively super-resolve an HS image with RGB image guidance. Specifically, a progressive HS image super-resolution network is proposed, which progressively super-resolves the LR HS image with pixel-shuffled HR RGB image guidance. The super-resolution network is then progressively trained with supervised pre-training and unsupervised adaptation, where supervised pre-training learns the general prior on training data and unsupervised adaptation generalises the general prior to a specific prior for variant testing scenes. The proposed method can effectively exploit the prior from the training dataset and the testing HS and RGB images with a spectral-spatial constraint. It has a good generalisation capability, especially for blind HS image super-resolution. Comprehensive experimental results show that the proposed deep progressive learning method outperforms existing state-of-the-art methods for HS image super-resolution in both non-blind and blind cases. © 2023 The Authors. CAAI Transactions on Intelligence Technology published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology and Chongqing University of Technology.
    Accession Number: 20233014425959
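
    A minimal sketch of pixel-shuffle guidance: nn.PixelUnshuffle rearranges the HR RGB image into extra channels at LR resolution for fusion with the LR HS input, and nn.PixelShuffle upsamples back. All sizes are illustrative; the paper's network details are not reproduced here:

        import torch
        import torch.nn as nn

        scale = 4
        hr_rgb = torch.randn(1, 3, 128, 128)      # HR RGB guidance
        lr_hs = torch.randn(1, 31, 32, 32)        # LR HS image, 31 bands (assumed)

        unshuffle = nn.PixelUnshuffle(scale)      # (B, C, H, W) -> (B, C*s^2, H/s, W/s)
        rgb_lr = unshuffle(hr_rgb)                # (1, 48, 32, 32)

        fused = torch.cat([lr_hs, rgb_lr], dim=1) # joint input to a SR network
        upsample = nn.Sequential(
            nn.Conv2d(79, 31 * scale**2, 3, padding=1),
            nn.PixelShuffle(scale),               # back to HR: (1, 31, 128, 128)
        )
        print(upsample(fused).shape)
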
  • Record 72 of

    Title:Design of an optical passive semi-athermalization zoom lens
    Author(s):Yan, Aqi(1,2); Chen, Weining(1,2); Li, Qianxi(1,3); Guo, Min(1); Wang, Hao(1,2)
    Source:Applied Optics
    Volume: 63  Issue: 13  DOI: 10.1364/AO.517025  Published: May 1, 2024  
    Abstract:Traditional zoom lenses cannot image clearly over the entire zoom process when the ambient temperature changes, and need to refocus frequently at middle focal length positions. An innovative design method, called the optical passive semi-athermalization (OPSA) design for zoom optical systems, is proposed: based on the difference in the focusing sensitivity of the focusing group at the short and long focal length positions, it seeks out the sensitive groups that have a greater impact on the imaging quality at the short focal position. By changing the temperature characteristics of the temperature-sensitive lenses in these groups, an OPSA zoom optical system can be realized that exhibits a compact structure and excellent imaging quality. Over an ambient temperature range of −40°C to +60°C, the OPSA zoom lens needs to refocus only once, at the long focal length position, which ensures clear imaging during the entire zoom process. Remarkably, this innovative method not only mitigates the frequent-focusing challenge of traditional zoom lenses, but also contributes to a diminutive size. © 2024 Optica Publishing Group (formerly OSA). All rights reserved.
    Accession Number: 20242016084730