2016

  • Record 73 of

    Title: Perception-inspired background subtraction in complex scenes based on spatiotemporal features
    Author(s): Shi, Liu(1,2); Liu, Jiahang(1)
    Source: Proceedings of SPIE - The International Society for Optical Engineering  Volume: 10157  Issue:   DOI: 10.1117/12.2244151  Published: 2016  
    Abstract: Background subtraction (BGS) is a fundamental preprocessing step in most video-based applications. Most BGS methods fail to handle dynamic, unconstrained scenarios accurately because they rely too heavily on statistical models. In this paper, we develop a novel non-parametric, sample-based background subtraction method. First, the background sample set is initialized from a clean sample frame rather than the first frame, which avoids introducing a ghost when the first frame contains foreground objects. Here, we use a Gaussian mixture model to decide whether the pixel at each location is clean and to construct the initial background model. Second, for real scenarios with diverse environmental conditions (e.g., illumination changes, dynamic background), we employ a normalized color space and a scale-invariant local ternary pattern operator to handle these variations. Meanwhile, to achieve high detection accuracy in unconstrained scenarios without any scenario-specific parameter tuning, we employ a perception-inspired confidence interval to adapt the threshold in the color space. Third, a hole-filling approach is used to reduce noise from false segmentation, fill blank areas in the foreground region, and maintain the integrity of foreground objects. Our experimental results indicate that the proposed approach is superior to several state-of-the-art methods in terms of F-score and kappa index. © 2016 SPIE.
    Accession Number: 20170503310157
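
The scale-invariant local ternary pattern (SILTP) operator named in the abstract above is a published texture descriptor; below is a minimal NumPy sketch of the standard 8-neighbour SILTP code with a tolerance factor tau, offered as an illustration since the record does not give the paper's parameter settings or its full sample-matching rule.

```python
import numpy as np

def siltp(gray, tau=0.05):
    """Scale-invariant local ternary pattern over the 8-neighbourhood.

    Each neighbour is coded 1 if it exceeds (1 + tau) * center, 2 if it
    falls below (1 - tau) * center, and 0 otherwise; the eight two-bit
    codes are packed into one integer per pixel. Borders use edge padding.
    """
    g = np.pad(np.asarray(gray, dtype=np.float64), 1, mode="edge")
    h, w = gray.shape
    center = g[1:h + 1, 1:w + 1]
    upper, lower = (1.0 + tau) * center, (1.0 - tau) * center
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros((h, w), dtype=np.int64)
    for k, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
        ternary = np.where(nb > upper, 1, np.where(nb < lower, 2, 0))
        code += ternary << (2 * k)  # two bits per neighbour
    return code
```

The paper additionally matches normalized color values against background samples with a perception-inspired confidence-interval threshold; those parts are specific to the method and are not reproduced here.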
  • Record 74 of

    Title: Deep Learning for Hyperspectral Data Classification through Exponential Momentum Deep Convolution Neural Networks
    Author(s): Yue, Qi(1,2,3); Ma, Caiwen(1)
    Source: Journal of Sensors  Volume: 2016  Issue:   DOI: 10.1155/2016/3150632  Published: 2016  
    Abstract: Classification is a hot topic in the hyperspectral remote sensing community. In recent decades, numerous efforts have been devoted to the classification problem. Most existing studies follow the conventional pattern recognition paradigm, which is based on complex handcrafted features, yet it is rarely known which features are important for the problem. In this paper, a new classification framework based on deep machine learning is proposed for hyperspectral data. The proposed framework, which is composed of an exponential momentum deep convolution neural network and a support vector machine (SVM), can hierarchically construct high-level spectral-spatial features in an automated way. Experimental results and quantitative validation on widely used datasets showcase the potential of the developed approach for accurate hyperspectral data classification. © 2016 Qi Yue and Caiwen Ma.
    Accession Number: 20164603013264
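
The record does not spell out the paper's exponential-momentum rule, so the following is only a generic sketch of an exponentially weighted momentum SGD update, the family of update rules the title refers to.

```python
import numpy as np

def momentum_step(params, grads, velocity, lr=0.01, beta=0.9):
    """One SGD step with an exponentially weighted momentum term.

    The velocity is an exponential moving average of past gradients and is
    used to update the weights; the exact formula of the cited paper is not
    given in the abstract, so treat this as illustrative only.
    """
    velocity = beta * velocity + (1.0 - beta) * grads
    params = params - lr * velocity
    return params, velocity

# Toy usage on the quadratic loss 0.5 * ||w||^2, whose gradient is w.
w = np.array([1.0, -2.0])
v = np.zeros_like(w)
w, v = momentum_step(w, grads=w, velocity=v)
```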
  • Record 75 of

    Title: Spatiochromatic Context Modeling for Color Saliency Analysis
    Author(s): Zhang, Jun(1); Wang, Meng(1); Zhang, Shengping(2); Li, Xuelong(3); Wu, Xindong(1,4)
    Source: IEEE Transactions on Neural Networks and Learning Systems  Volume: 27  Issue: 6  DOI: 10.1109/TNNLS.2015.2464316  Published: June 2016  
    Abstract: Visual saliency is one of the most noteworthy perceptual abilities of human vision. Recent progress in cognitive psychology suggests that: 1) visual saliency analysis is mainly completed by the bottom-up mechanism consisting of feedforward low-level processing in primary visual cortex (area V1) and 2) color interacts with spatial cues and is influenced by the neighborhood context, and thus plays an important role in visual saliency analysis. From a computational perspective, most existing saliency modeling approaches exploit multiple independent visual cues, irrespective of their interactions (or without computing them explicitly), and ignore contextual influences induced by neighboring colors. In addition, the use of color is often underestimated in visual saliency analysis. In this paper, we propose a simple yet effective color saliency model that considers color as the only visual cue and mimics the color processing in V1. Our approach uses region-/boundary-defined color features with spatiochromatic filtering by considering local color-orientation interactions, and therefore captures homogeneous color elements, subtle textures within the object, and the overall salient object from the color image. To account for color contextual influences, we present a divisive normalization method for chromatic stimuli through the pooling of contrary/complementary color units. We further define a color perceptual metric over the entire scene to produce saliency maps for color regions and color boundaries individually. These maps are finally integrated globally into a single saliency map. The final saliency map is produced by Gaussian blurring for robustness. We evaluate the proposed method on both synthetic stimuli and several benchmark saliency data sets, from visual saliency analysis to salient object detection. The experimental results demonstrate that the use of color as a unique visual cue achieves competitive results on par with or better than 12 state-of-the-art approaches. © 2015 IEEE.
    Accession Number: 20153601242356
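
Divisive normalization, mentioned in the abstract above as the mechanism for color contextual influences, has a standard canonical form in which each unit's response is divided by the pooled activity of its neighbours. The sketch below shows that canonical form for a stack of channel responses; the pooling weights, exponents and opponent-color pairing used in the paper are not given in the record and are assumptions here.

```python
import numpy as np

def divisive_normalization(responses, sigma=0.1, n=2.0):
    """Canonical divisive normalization of channel responses.

    responses: array of shape (channels, H, W), e.g. spatiochromatic filter
    outputs. Each channel is divided by a semi-saturation constant plus the
    pooled activity across channels.
    """
    num = responses ** n
    pool = sigma ** n + num.sum(axis=0, keepdims=True)
    return num / pool
```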
  • Record 76 of

    Title: Multiple representations-based face sketch-photo synthesis
    Author(s): Peng, Chunlei(1); Gao, Xinbo(2); Wang, Nannan(3); Tao, Dacheng(4); Li, Xuelong(5); Li, Jie(1)
    Source: IEEE Transactions on Neural Networks and Learning Systems  Volume: 27  Issue: 11  DOI: 10.1109/TNNLS.2015.2464681  Published: November 2016  
    Abstract: Face sketch-photo synthesis plays an important role in law enforcement and digital entertainment. Most existing methods use only pixel intensities as the feature. Since face images can be described using features from multiple aspects, this paper presents a novel multiple representations-based face sketch-photo synthesis method that adaptively combines multiple representations to represent an image patch. In particular, it combines multiple features from face images processed using multiple filters and deploys Markov networks to exploit the interacting relationships between neighboring image patches. The proposed framework can be solved using an alternating optimization strategy, and it normally converges in only five outer iterations in the experiments. Our experimental results on the Chinese University of Hong Kong (CUHK) face sketch database, celebrity photos, the CUHK Face Sketch FERET Database, the IIIT-D Viewed Sketch Database, and forensic sketches demonstrate the effectiveness of our method for face sketch-photo synthesis. In addition, cross-database and database-dependent style-synthesis evaluations demonstrate the generalizability of this novel method and suggest promising solutions for face identification in forensic science. © 2012 IEEE.
    Accession Number: 20173404066611
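
The "multiple representations" in the abstract above are features of the same patch computed from several filtered versions of the image. As a loose illustration (the paper's actual filter bank and Markov-network weighting are not given in the record), the sketch below stacks a few generic filter responses into one patch descriptor.

```python
import numpy as np
from scipy import ndimage

def multi_representation(patch):
    """Stack several filtered views of an image patch into one feature vector.

    Raw intensities plus Gaussian-smoothed and gradient views are used here
    purely for illustration; they are not the filters of the cited paper.
    """
    patch = np.asarray(patch, dtype=np.float64)
    views = [
        patch,
        ndimage.gaussian_filter(patch, sigma=1.0),
        ndimage.sobel(patch, axis=0),
        ndimage.sobel(patch, axis=1),
    ]
    return np.concatenate([v.ravel() for v in views])
```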
  • Record 77 of

    Title: Scattering effects and high-spatial-frequency nanostructures on ultrafast laser irradiated surfaces of zirconium metallic alloys with nanoscaled topographies
    Author(s): Li, Chen(1,2,3); Cheng, Guanghua(1); Sedao, Xxx(2); Zhang, Wei(4); Zhang, Hao(2); Faure, Nicolas(2); Jamon, Damien(2); Colombier, Jean-Philippe(2); Stoian, Razvan(2)
    Source: Optics Express  Volume: 24  Issue: 11  DOI: 10.1364/OE.24.011558  Published: May 30, 2016  
    Abstract: The origin of high-spatial-frequency laser-induced periodic surface structures (HSFL) driven by incident ultrafast laser fields, with their ability to achieve structure resolutions below λ/2, is often obscured by the overlap with regular ripple patterns at quasi-wavelength periodicities. Employing defined surface topographies, we experimentally demonstrate here that these structures are intrinsically related to surface roughness in the nanoscale domain. Using Zr-based bulk metallic glass (Zr-BMG) and its crystalline alloy (Zr-CA) counterpart formed by thermal annealing from its glassy precursor, we prepared surfaces showing either smooth appearances on the thermoplastic BMG or high-density nano-protuberances from randomly distributed embedded nano-crystallites with average sizes below 200 nm on the recrystallized alloy. Upon ultrashort pulse irradiation employing linearly polarized 50 fs, 800 nm laser pulses, the surfaces show a range of nanoscale organized features. The change of topology was then followed under multiple-pulse irradiation at fluences around and below the single-pulse threshold. While the former material (Zr-BMG) shows a specific high-quality arrangement of standard ripples around the laser wavelength, the latter (Zr-CA) demonstrates a strong predisposition to form high-spatial-frequency rippled structures (HSFL). We discuss electromagnetic scenarios assisting their formation based on near-field interaction between particles and field enhancement leading to linear structure growth. Finite-difference time-domain simulations outline individual and collective effects of nanoparticles on electromagnetic energy modulation and the feedback processes in the formation of HSFL structures with correlation to regular ripples (LSFL). © 2016 Optical Society of America.
    Accession Number: 20162402499605
  • Record 78 of

    Title: The Motion Planning of a Six DOF Manipulator Based on ROS Platform
    Author(s): Meng, Shaonan(1,2); Liang, Yanbing(2); Shi, Heng(1,2)
    Source: Shanghai Jiaotong Daxue Xuebao/Journal of Shanghai Jiaotong University  Volume: 50  Issue:   DOI: 10.16183/j.cnki.jsjtu.2016.S.024  Published: July 1, 2016  
    Abstract: The six-DOF manipulator model is established in SolidWorks and a simplified model is obtained with the sw2urdf plugin. The MoveIt! setup assistant makes it easy to configure the manipulator for motion planning based on ROS. Motion planning is accomplished in RViz with the MotionPlanning plugin based on OMPL, which contains many sampling-based motion planning algorithms, and the KPIECE algorithm, which is specifically designed for systems with complex dynamics, is analyzed. For the manipulator's motion planning in complex environments, the ROS-based 3D model enables virtual control and provides a set of joint information such as position, velocity and effort, so further analysis and improvement of the motion planning algorithms can be made. © 2016, Shanghai Jiao Tong University Press. All rights reserved.
    Accession Number: 20171803618111
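
The workflow in the abstract above (URDF from sw2urdf, MoveIt! configuration, OMPL planning in RViz) can also be driven from a script. The following is a minimal sketch using the moveit_commander Python API; the planning-group name "manipulator", the planner configuration name and the target pose are assumptions that depend on the generated URDF and the MoveIt! setup assistant output.

```python
#!/usr/bin/env python
import sys
import rospy
import moveit_commander
import geometry_msgs.msg

# Initialize the MoveIt! commander and a ROS node.
moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("six_dof_motion_planning_demo")

# "manipulator" is a typical default group name; adjust to your SRDF.
group = moveit_commander.MoveGroupCommander("manipulator")
# Select an OMPL sampling-based planner; KPIECE is the one discussed in the paper.
group.set_planner_id("KPIECEkConfigDefault")

# Illustrative target pose in the planning frame.
target = geometry_msgs.msg.Pose()
target.orientation.w = 1.0
target.position.x = 0.4
target.position.y = 0.1
target.position.z = 0.4
group.set_pose_target(target)

plan = group.plan()   # sampling-based planning through OMPL
group.go(wait=True)   # execute the planned trajectory
group.stop()
group.clear_pose_targets()
```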
  • Record 79 of

    Title: Difference frequency generation widely tunable continuous-wave mid-infrared radiation laser source based on a MgO:PPLN crystal
    Author(s): Zhang, Ze-Yu(1,3); Zhu, Guo-Shen(2); Wang, Wei(1,3); Duan, Tao(1); Yang, Song(2); Hao, Qiang(2); Han, Biao(1,3); Xie, Xiao-Ping(1); Zeng, He-Ping(2)
    Source: Guangzi Xuebao/Acta Photonica Sinica  Volume: 45  Issue: 9  DOI: 10.3788/gzxb20164509.0914003  Published: September 1, 2016  
    Abstract: Continuous-wave mid-infrared (mid-IR) radiation was experimentally obtained by quasi-phase-matched difference frequency generation in a MgO-doped periodically poled LiNbO3 crystal (MgO:PPLN), in which narrow-linewidth light sources at 1083 nm and 1550 nm were used as the pump and signal light, respectively. Moreover, multiple mid-IR wavelengths were realized by adjusting the signal wavelength and controlling the temperature of the MgO:PPLN crystal. The wavelength tuning range is approximately 3547.6 nm to 3629.1 nm. A maximum mid-infrared power of 3.2 mW at 3597.0 nm is generated when the signal and pump powers are amplified to 3.5 W and 2.8 W, respectively. The power jitter of the mid-infrared output is less than ±1.6% over a long-term test recording. This study can serve as a reference for the design and development of narrow-linewidth, multi-wavelength continuous-wave mid-infrared light sources. © 2016, Science Press. All rights reserved.
    Accession Number: 20163802821820
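
The idler wavelength in difference frequency generation follows from energy conservation, 1/λ_idler = 1/λ_pump − 1/λ_signal. A quick check with the 1083 nm pump and 1550 nm signal quoted above gives roughly 3595 nm, consistent with the reported 3597.0 nm output (the small offset reflects the exact source wavelengths).

```python
# Energy conservation in difference frequency generation.
lambda_pump = 1083.0    # nm
lambda_signal = 1550.0  # nm
lambda_idler = 1.0 / (1.0 / lambda_pump - 1.0 / lambda_signal)
print(f"idler wavelength = {lambda_idler:.1f} nm")  # about 3594.5 nm
```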
  • Record 80 of

    Title: SURF and KPCA based image perceptual hashing algorithm
    Author(s): Qi, Yinlong(1,2); Qiu, Yuehong(1)
    Source: Proceedings of SPIE - The International Society for Optical Engineering  Volume: 10033  Issue:   DOI: 10.1117/12.2244291  Published: 2016  
    Abstract: Image perceptual hashing is a notable concept in the field of image processing. Its applications range from image retrieval, image authentication and image recognition to content-based image management. In this paper, a novel image hashing algorithm based on SURF and KPCA is proposed, which extracts speeded-up robust features (SURF) as the perceptual feature. SURF retains the robust properties of SIFT while being 3 to 10 times faster. Kernel PCA is then used to decompose the keypoint descriptors and obtain compact representations with well-preserved feature information. To improve the precision of digest matching, a binary template of the input image is generated that contains salient-region information, ensuring that the key points within it carry greater weight during matching. After that, the hashing digest for image retrieval and image recognition is constructed. Experiments indicate that, compared to SIFT- and PCA-based perceptual hashing, the proposed method increases recognition precision, enhances robustness, and effectively reduces processing time. © 2016 SPIE.
    Accession Number: 20164903101676
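
A rough sketch of the SURF + Kernel PCA pipeline described above, using the opencv-contrib SURF implementation and scikit-learn's KernelPCA; the final sign-based binarization and the parameter values are illustrative choices, and the salient-region weighting of the paper is not reproduced.

```python
import cv2
import numpy as np
from sklearn.decomposition import KernelPCA

def surf_kpca_digest(image_path, n_components=16, hessian_threshold=400):
    """Illustrative SURF + Kernel PCA hashing pipeline (not the paper's exact scheme)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # SURF lives in the opencv-contrib build (cv2.xfeatures2d).
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    keypoints, descriptors = surf.detectAndCompute(gray, None)  # 64-D descriptors

    # Kernel PCA compresses the descriptors while preserving feature information.
    kpca = KernelPCA(n_components=n_components, kernel="rbf")
    compact = kpca.fit_transform(descriptors)

    # Simple binary digest by sign-thresholding the mean compressed descriptor.
    return (compact.mean(axis=0) > 0).astype(np.uint8)
```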
  • Record 81 of

    Title: Research progress of new space mirror materials
    Author(s): Wang, Yongjie(1,2); Xie, Yongjie(1); Ma, Zhen(1); Xu, Liang(1); Ding, Jiaoteng(1)
    Source: Cailiao Daobao/Materials Review  Volume: 30  Issue: 4  DOI: 10.11896/j.issn.1005-023X.2016.07.025  Published: April 10, 2016  
    Abstract: Traditional mirror materials cannot meet the larger and lighter requirements of future space reflectors. Carbon fiber-reinforced composites will become significant materials in the space mirror field due to their outstanding properties. In this paper, three composites (C/SiC, CFRP and C/C composites) that have great potential for space mirror applications are introduced. The properties, fabrication methods, application status and technological constraints of these composites are also described. Finally, the prospective application and development of carbon-reinforced composites are anticipated. © 2016, Materials Review Magazine. All rights reserved.
    Accession Number: 20162302468405
  • Record 82 of

    Title: Influence of atmospheric turbulence on detecting performance of all-day star sensor
    Author(s): Pan, Yue(1,2); Wang, Hu(1); Shen, Yang(1,2); Xue, Yaoke(1); Liu, Jie(1)
    Source: Proceedings of SPIE - The International Society for Optical Engineering  Volume: 9903  Issue:   DOI: 10.1117/12.2211689  Published: 2016  
    Abstract: An all-day star sensor makes it possible to observe stars throughout the day within the atmosphere, but its detection performance is affected by atmospheric turbulence. Based on the characteristics of turbulence in the long-exposure model, the modulation transfer function, point spread function and encircled energy of the imaging system have been analyzed. Combined with a typical star sensor optical system, the signal-to-noise ratio and the detectable stellar magnitude limit affected by turbulence have been calculated. The results show that the ratio of the aperture diameter to the atmospheric coherence length is the main basis for evaluating the impact of turbulence. Under medium daytime turbulence, the signal-to-noise ratio of a star sensor with a 120 mm aperture will drop by about 4 dB at most in a typical working environment, and the detectable stellar limit will drop by 1 magnitude. © 2016 SPIE.
    Accession Number: 20161102084743
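
The abstract identifies the ratio of aperture diameter to atmospheric coherence length (D/r0) as the main figure of merit. A sketch of the standard long-exposure atmospheric MTF and of that ratio is given below; the daytime r0 value is an assumption for illustration, not a number from the paper.

```python
import numpy as np

def long_exposure_atmospheric_mtf(f, wavelength, r0):
    """Textbook long-exposure atmospheric MTF: exp[-3.44 (lambda * f / r0)^(5/3)].

    f is the angular spatial frequency in cycles/rad; wavelength and r0
    (Fried's atmospheric coherence length) are in metres.
    """
    return np.exp(-3.44 * (wavelength * f / r0) ** (5.0 / 3.0))

# For the 120 mm aperture quoted above and an assumed daytime r0 of 30 mm:
D, r0 = 0.120, 0.030  # metres; the r0 value is illustrative only
print("D/r0 =", D / r0)
```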
  • Record 83 of

    Title: Mutual component analysis for heterogeneous face recognition
    Author(s): Li, Zhifeng(1); Gong, Dihong(1); Li, Qiang(2); Tao, Dacheng(2); Li, Xuelong(3)
    Source: ACM Transactions on Intelligent Systems and Technology  Volume: 7  Issue: 3  DOI: 10.1145/2807705  Published: February 2016  
    Abstract: Heterogeneous face recognition, also known as cross-modality face recognition or intermodality face recognition, refers to matching two face images from alternative image modalities. Since face images from different image modalities of the same person are associated with the same face object, there should be mutual components that reflect those intrinsic face characteristics that are invariant to the image modalities. Motivated by this rationale, we propose a novel approach called Mutual Component Analysis (MCA) to infer the mutual components for robust heterogeneous face recognition. In the MCA approach, a generative model is first proposed to model the process of generating face images in different modalities, and then an Expectation Maximization (EM) algorithm is designed to iteratively learn the model parameters. The learned generative model is able to infer the mutual components (which we call the hidden factor, where hidden means the factor is unreachable and invisible, and can only be inferred from observations) that are associated with the person's identity, thus enabling fast and effective matching for cross-modality face recognition. To enhance recognition performance, we propose an MCA-based multiclassifier framework using multiple local features. Experimental results show that our new approach significantly outperforms the state-of-the-art results on two typical application scenarios: sketch-to-photo and infrared-to-visible face recognition.
    Accession Number: 20161202130216
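
The MCA generative model and its EM updates are not given in the record, so the following is only a loose analogy: a linear-Gaussian latent factor model (scikit-learn's FactorAnalysis) fitted on concatenated features from two modalities to recover a shared low-dimensional "hidden factor". The data below are random placeholders.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
sketch_feats = rng.normal(size=(200, 64))  # placeholder features from one modality
photo_feats = rng.normal(size=(200, 64))   # placeholder features from the other modality

# Concatenate the paired modalities and fit a latent factor model; the shared
# factor plays the role of the modality-invariant component in this analogy.
paired = np.hstack([sketch_feats, photo_feats])
fa = FactorAnalysis(n_components=10)
hidden_factor = fa.fit_transform(paired)   # one latent vector per subject
```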
  • Record 84 of

    Title: Moving target detection based on features matching of RGB on a foggy day
    Author(s): Zhang, Ya-Qun(1,2); Song, Zong-Xi(1)
    Source: Proceedings of SPIE - The International Society for Optical Engineering  Volume: 10033  Issue:   DOI: 10.1117/12.2243963  Published: 2016  
    Abstract: Moving target detection is a significant research topic in image processing and computer vision. Precise detection of moving targets is the basis of target positioning, target tracking and target classification, with many applications in intelligent monitoring, traffic statistics and other fields. Detecting moving objects in bad weather, for example on a heavily foggy day, is a problem that needs to be solved in practice, since haze has become a serious environmental problem. This paper addresses that problem: first, fog is removed from the video; then pixel features are extracted, feature dictionaries are established, and the background is modeled by feature matching in order to extract the foreground. The results show that the proposed algorithm can detect moving targets accurately on a foggy day. © 2016 SPIE.
    Accession Number: 20164903101476
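
The abstract above does not name its defogging method, so the following sketches only one common building block, the dark channel used by dark-channel-prior defogging; the feature-dictionary background model of the paper is not reproduced.

```python
import cv2
import numpy as np

def dark_channel(image_bgr, patch=15):
    """Dark channel of a colour image: per-pixel minimum over the colour
    channels followed by a minimum filter (erosion) over a local patch."""
    min_channel = image_bgr.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_channel, kernel)

# Usage: dc = dark_channel(cv2.imread("foggy_frame.png"))
```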