Imperial College London

Dr Sen Wang

Faculty of Engineering, Department of Electrical and Electronic Engineering

Senior Lecturer

Contact

 

sen.wang


Location

 

Electrical Engineering, South Kensington Campus


Summary

 

Publications


82 results found

Dong Y, Wu P, Wang S, Liu Y et al., 2023, ShipGAN: Generative Adversarial Network based simulation-to-real image translation for ships, Applied Ocean Research, Vol: 131, ISSN: 0141-1187

Recent advances in robotics and autonomous systems (RAS) have significantly improved the autonomy level of unmanned surface vehicles (USVs) and made them capable of undertaking demanding tasks in various environments. During USV operation, apart from normal situations, it is unexpected scenes, such as busy waterways or navigation at dusk/nighttime, that pose the greatest danger, as these scenes are rarely seen during training. Their rarity also makes manually collecting and recording them into a dataset difficult, expensive and inefficient, and most existing publicly available datasets cannot fully cover them. One plausible solution is to purposely generate such data using computer vision techniques, assisted by high-fidelity simulations that can create various desirable motions/scenarios. However, the stylistic difference between simulation images and natural images causes a domain shift problem. Hence, a method is needed that can transfer the data distribution and style of simulation images into the realistic domain. This paper proposes and evaluates a novel solution to fill this gap: a Generative Adversarial Network (GAN) based model, ShipGAN, which translates simulation images into realistic images. Experiments were carried out to investigate the feasibility of generating realistic images using GAN-based image translation models. The synthetic realistic images generated from simulation images were shown to be reliable by object detection and image segmentation algorithms trained on natural images.

Journal article

Li C, Yan F, Wang S, Zhuang Y et al., 2023, A 3D LiDAR odometry for UGVs using coarse-to-fine deep scene flow estimation, Transactions of the Institute of Measurement and Control, Vol: 45, Pages: 274-286, ISSN: 0142-3312

Light detection and ranging (LiDAR) odometry plays a crucial role in autonomous mobile robots and unmanned ground vehicles (UGVs). This paper presents a deep learning–based odometry system that uses two successive three-dimensional (3D) point clouds to estimate their scene flow and then predict their relative pose. The network consumes continuous 3D point clouds directly and outputs their scene flow and an uncertainty mask in a coarse-to-fine fashion. A pose estimation layer without trainable parameters is designed to compute the pose from the scene flow. We also introduce a scan-to-map optimization algorithm to enhance the robustness and accuracy of the system. Our experiments on the KITTI odometry dataset and our campus dataset demonstrate the effectiveness of the proposed deep learning–based point cloud odometry.
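The parameter-free pose layer mentioned in this abstract, which computes a relative pose directly from predicted scene flow, can be illustrated with the classic Kabsch (orthogonal Procrustes) solution: points plus their flow give correspondences, from which a rigid transform follows in closed form. This is a hedged sketch of the general idea, not the paper's exact formulation; the function name `pose_from_scene_flow` is ours.

```python
import numpy as np

def pose_from_scene_flow(points, flow):
    """Recover the rigid transform (R, t) that best maps `points` onto
    `points + flow` in the least-squares sense (Kabsch algorithm)."""
    src = points
    dst = points + flow
    # Centre both point sets.
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # Cross-covariance matrix and its SVD.
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Reflection correction keeps det(R) = +1 (a proper rotation).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

With noise-free flow this recovers the ground-truth motion exactly; a learned flow network would feed its (noisy) predictions through the same closed-form layer.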

Journal article

Rao Y, Ju Y, Wang S, Gao F, Fan H, Dong J et al., 2023, Learning Enriched Feature Descriptor for Image Matching and Visual Measurement, IEEE Transactions on Instrumentation and Measurement, Vol: 72, ISSN: 0018-9456

Recent feature descriptor research has witnessed tremendous progress with the development of the deep neural network. However, most existing descriptors solely focus on learning strong discriminativeness with deep-invariant features, neglecting their representation ability and rich hierarchical clues hidden in images, which could further establish high-quality matches via implicit hierarchical comparisons in rich representative descriptor space. In this article, we consider both the discriminative and representation ability of feature descriptors to enrich the descriptor space with a novel representative learning framework. On the one hand, we introduce histogram of oriented gradient (HOG) as a prior term to guide our descriptor to learn a powerful representation and robustness in a self-supervised manner. On the other hand, we present an adaptive triplet loss (ATL), which penalizes the triplet loss (TL) according to the descriptor matching distances in order to encourage our descriptor to learn strong discriminativeness. Moreover, to fully use the information encapsulated in images and boost the representation ability, we propose a novel HIerarchical Feature Transformer Network (HIFT), which derives dense descriptions from the semantic and cross-scale-enhanced hierarchical features in a local-to-global manner. Extensive experiments on popular feature matching and visual localization benchmarks show that the HIFT achieves highly competitive performance compared with the state-of-the-art methods. Applications on visual measurement tasks of visual 3-D reconstruction and ego-motion estimation also demonstrate the high generalization ability of our method. Our model is available at https://github.com/Ray2OUC/HIFT.
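The adaptive triplet loss (ATL) described above penalises the triplet loss according to descriptor matching distances. The paper's exact weighting is not reproduced here, so the following NumPy sketch shows one plausible, hypothetical form in which harder triplets (larger margin violations) receive proportionally larger penalties; the function names and the weighting rule are our assumptions.

```python
import numpy as np

def triplet_loss(anchor, pos, neg, margin=0.2):
    """Standard per-triplet hinge loss on descriptor distances."""
    d_pos = np.linalg.norm(anchor - pos, axis=1)  # anchor-positive distance
    d_neg = np.linalg.norm(anchor - neg, axis=1)  # anchor-negative distance
    return np.maximum(d_pos - d_neg + margin, 0.0)

def adaptive_triplet_loss(anchor, pos, neg, margin=0.2):
    """Hypothetical adaptive variant: re-weight each triplet's loss by how
    badly it violates the margin, so hard triplets are penalised more."""
    base = triplet_loss(anchor, pos, neg, margin)
    weight = 1.0 + base / margin  # larger violation -> larger weight
    return (weight * base).mean()
```

Easy triplets (negative far from the anchor) contribute nothing under either loss; the adaptive version only departs from the plain mean on hard triplets.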

Journal article

Zhang K, Hong Z, Xu S, Wang S et al., 2022, CURL: Continuous, Ultra-compact Representation for LiDAR, Robotics: Science and Systems 2022, Publisher: Robotics: Science and Systems Foundation

Conference paper

Luo D, Zhuang Y, Wang S, 2022, Hybrid sparse monocular visual odometry with online photometric calibration, The International Journal of Robotics Research, ISSN: 0278-3649

Most monocular visual Simultaneous Localization and Mapping (vSLAM) and visual odometry (VO) algorithms focus on either feature-based methods or direct methods. The hybrid (semi-direct) approach is less studied, although it is equally important. In this paper, a hybrid sparse visual odometry (HSO) algorithm with online photometric calibration is proposed for monocular vision. HSO introduces two novel measures, namely direct image alignment with adaptive mode selection and image photometric description using ratio factors, to enhance robustness against dramatic image intensity changes and motion blur. Moreover, HSO is able to establish pose constraints between keyframes far apart in time and space by using KLT tracking enhanced with local-global brightness consistency. The convergence speed of candidate map points is adopted as the basis for keyframe selection, which strengthens the coordination between the front end and the back end. Photometric calibration is elegantly integrated into the VO system, with the two working in tandem: (1) photometric interference from the camera, such as vignetting and changes in exposure time, is accurately calibrated and compensated in HSO, thereby improving the accuracy and robustness of VO; (2) VO in turn provides pre-calculated data for the photometric calibration algorithm, which reduces resource consumption and improves the estimation accuracy of photometric parameters. Extensive experiments are performed on various public datasets to evaluate the proposed HSO against state-of-the-art monocular vSLAM/VO and online photometric calibration methods. The results show that HSO achieves superior performance on VO and photometric calibration in terms of accuracy, robustness, and efficiency, being comparable with state-of-the-art VO/vSLAM systems. We open-source HSO for the benefit of the community.

Journal article

Hong Z, Petillot Y, Wallace A, Wang S et al., 2022, RadarSLAM: A robust simultaneous localization and mapping system for all weather conditions, The International Journal of Robotics Research, Vol: 41, Pages: 519-542, ISSN: 0278-3649

A Simultaneous Localization and Mapping (SLAM) system must be robust to support long-term mobile vehicle and robot applications. However, camera and LiDAR based SLAM systems can be fragile when facing challenging illumination or weather conditions which degrade the utility of imagery and point cloud data. Radar, whose operating electromagnetic spectrum is less affected by environmental changes, is promising although its distinct sensor model and noise characteristics bring open challenges when being exploited for SLAM. This paper studies the use of a Frequency Modulated Continuous Wave radar for SLAM in large-scale outdoor environments. We propose a full radar SLAM system, including a novel radar motion estimation algorithm that leverages radar geometry for reliable feature tracking. It also optimally compensates motion distortion and estimates pose by joint optimization. Its loop closure component is designed to be simple yet efficient for radar imagery by capturing and exploiting structural information of the surrounding environment. Extensive experiments on three public radar datasets, ranging from city streets and residential areas to countryside and highways, show competitive accuracy and reliability performance of the proposed radar SLAM system compared to the state-of-the-art LiDAR, vision and radar methods. The results show that our system is technically viable in achieving reliable SLAM in extreme weather conditions on the RADIATE Dataset, for example, heavy snow and dense fog, demonstrating the promising potential of using radar for all-weather localization and mapping.

Journal article

Gao H, Liang B, Oboe R, Shi Y, Wang S, Tomizuka M et al., 2022, Guest Editorial Introduction to the Focused Section on Adaptive Learning and Control for Advanced Mechatronics Systems, IEEE/ASME Transactions on Mechatronics, Vol: 27, Pages: 607-610, ISSN: 1083-4435

Journal article

Wang C, Zhang Q, Wang X, Xu S, Petillot Y, Wang S et al., 2022, Autonomous Underwater Robotic Grasping Research Based on Navigation and Hierarchical Operation, Pages: 176-182

This paper proposes a new framework for autonomous underwater operation of underwater vehicle manipulator systems (UVMS) that is modular, standardized, and hierarchical. The framework consists of three subsystems: perception, navigation, and grasping. The perception module is based on an underwater stereo vision system, which provides effective environment and target information for the navigation and grasping modules. The navigation module is based on ORBSLAM and acoustic odometry; it generates the global map and plans a trajectory for the initial stage. The grasping module generates the target grasping pose based on the extracted point cloud and the current robot state, and then executes the grasping task using the motion planner. The proposed system is tested on several underwater target grasping tasks in a water tank, demonstrating its effectiveness.

Conference paper

Fraser H, Wang S, 2022, Monocular Depth Estimation for Equirectangular Videos, Pages: 5293-5299, ISSN: 2153-0858

Depth estimation from panoramic imagery has received minimal attention in contrast to standard perspective imagery, which constitutes the majority of the literature on this key research topic. The vast, and frequently complete, field of view provided by panoramic images makes them appealing for a variety of applications, including robotics, autonomous vehicles, and virtual reality. Consumer-level camera systems capable of capturing such images are likewise growing more affordable, and may be desirable complements to autonomous systems' sensor packages. They do, however, introduce significant distortions and violate some assumptions that hold for perspective images. Additionally, many state-of-the-art algorithms are not designed for the equirectangular projection model, and their depth estimation performance tends to degrade when applied to panoramic imagery. This paper presents a novel technique for adapting view-synthesis-based depth estimation models to omnidirectional vision. Specifically, we: 1) integrate a 'virtual' spherical camera model into the training pipeline, facilitating model training; 2) exploit spherical convolutional layers to perform convolution operations on equirectangular images, handling the severe distortion; and 3) propose an optical flow-based masking scheme to mitigate the effect of unwanted pixels during training. Our qualitative and quantitative results demonstrate that these simple yet efficient designs yield significantly improved depth estimates compared to previous approaches.

Conference paper

Wang C, Zhang Q, Wang X, Xu S, Petillot Y, Wang S et al., 2022, Multi-Task Reinforcement Learning based Mobile Manipulation Control for Dynamic Object Tracking and Grasping, Pages: 34-40

Agile control of a mobile manipulator is challenging because of the high complexity arising from the coupling of the robotic system and the unstructured working environment. Tracking and grasping a dynamic object with a random trajectory is even harder. In this paper, a multi-task reinforcement-learning-based mobile manipulation control framework is proposed to achieve general dynamic object tracking and grasping. Several basic types of dynamic trajectories are chosen as the training set for the task. To improve policy generalization in practice, random noise and dynamics randomization are introduced during the training process. Extensive experiments show that the trained policy can adapt to unseen random dynamic trajectories, with about 0.1 m tracking error and a 75% grasping success rate for dynamic objects. The trained policy can also be successfully deployed on a real mobile manipulator.

Conference paper

Xu S, Luczynski T, Willners JS, Hong Z, Zhang K, Petillot YR, Wang S et al., 2021, Underwater Visual Acoustic SLAM with Extrinsic Calibration, 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE

Conference paper

Sheeny M, De Pellegrin E, Mukherjee S, Ahrabian A, Wang S, Wallace A et al., 2021, RADIATE: A Radar Dataset for Automotive Perception in Bad Weather, 2021 IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE

Conference paper

Vargas E, Scona R, Willners JS, Luczynski T, Cao Y, Wang S, Petillot YR et al., 2021, Robust Underwater Visual SLAM Fusing Acoustic Sensing, 2021 IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE

Conference paper

Li R, Wang S, Gu D, 2021, DeepSLAM: A Robust Monocular SLAM System With Unsupervised Deep Learning, IEEE Transactions on Industrial Electronics, Vol: 68, Pages: 3577-3587, ISSN: 0278-0046

Journal article

Li C, Wang S, Zhuang Y, Yan F et al., 2021, Deep Sensor Fusion Between 2D Laser Scanner and IMU for Mobile Robot Localization, IEEE Sensors Journal, Vol: 21, Pages: 8501-8509, ISSN: 1530-437X

Journal article

Cao F, Yan F, Wang S, Zhuang Y, Wang W et al., 2021, Season-Invariant and Viewpoint-Tolerant LiDAR Place Recognition in GPS-Denied Environments, IEEE Transactions on Industrial Electronics, Vol: 68, Pages: 563-574, ISSN: 0278-0046

Journal article

Wang C, Zhang Q, Li S, Wang X, Lane D, Petillot Y, Wang S et al., 2021, Learning-Based Underwater Autonomous Grasping via 3D Point Cloud, ISSN: 0197-7385

Underwater autonomous grasping is a challenging task in robotic research. In this paper, we propose a learning-based underwater grasping method using a 3D point cloud generated from an underwater stereo camera. First, we use the Pinax model for accurate refraction correction of a stereo camera in a flat-pane housing. Second, a dense point cloud of the target is generated using the calibrated stereo images. An improved Grasp Pose Detection (GPD) method is then developed to generate candidate grasping poses and select the best one based on kinematic constraints. Finally, an optimal trajectory is planned to complete the grasping task. Experiments in a water tank demonstrate the effectiveness of our method.

Conference paper

Willners JS, Carlucho I, Katagiri S, Lemoine C, Roe J, Stephens D, Luczynski T, Xu S, Carreno Y, Pairet E, Barbalata C, Petillot Y, Wang S et al., 2021, From market-ready ROVs to low-cost AUVs, ISSN: 0197-7385

Autonomous Underwater Vehicles (AUVs) are becoming increasingly important for different types of industrial applications. The generally high cost of AUVs restricts access to them and therefore slows advances in research and technological development. However, recent advances have led to lower-cost commercially available Remotely Operated Vehicles (ROVs), which present a platform that can be enhanced to enable a high degree of autonomy, similar to that of a high-end AUV. In this article, we present how a low-cost commercial off-the-shelf ROV can be used as a foundation for developing versatile and affordable AUVs. We introduce the hardware modifications required to obtain a system capable of autonomous operations, as well as the necessary software modules. Additionally, we present a set of use cases exhibiting the versatility of the developed platform for intervention and mapping tasks.

Conference paper

Xie L, Miao Y, Wang S, Blunsom P, Wang Z, Chen C, Markham A, Trigoni N et al., 2021, Learning With Stochastic Guidance for Robot Navigation, IEEE Transactions on Neural Networks and Learning Systems, Vol: 32, Pages: 166-176, ISSN: 2162-237X

Journal article

Willners JS, Carreno Y, Xu S, Luczynski T, Katagiri S, Roe J, Pairet È, Petillot Y, Wang S et al., 2021, Robust underwater SLAM using autonomous relocalisation, Pages: 273-280

This paper presents a robust underwater simultaneous localisation and mapping (SLAM) framework using autonomous relocalisation. The proposed approach strives to maintain a single consistent map during operation and updates its current plan when the SLAM loses feature tracking. The updated plan traverses viewpoints that are likely to aid in merging the current map into the global map. We present the sub-systems of the framework: the SLAM, viewpoint generation, and high-level planning. In-water experiments show the advantage of our approach on an autonomous underwater vehicle (AUV) performing inspections.

Conference paper

Antonelli G, Indiveri G, Barrera C, Caccia M, Dooly G, Flavin N, Ferreira F, Miskovic N, Furlong M, Kopf A, Bachmayer R, Ludvigsen M, Opderbecke J, Pascoal A, Petroccia R, Alves J, Ridao P, Vallicrosa G, De Sousa JB, Costa M, Wang S et al., 2021, Advancing the EU Marine Robotics Research Infrastructure Network: The EU Marine Robots project, ISSN: 0197-7385

This paper provides an overview of the H2020 marine robotics research infrastructure network (EU Marine Robots) project. The overview is organized around the three main activities of infrastructure projects: (i) networking activities (NA); (ii) transnational access (TNA), in which access to the partners' marine robotic infrastructures is granted through competitive calls; and (iii) joint research activities (JRA), aimed at making robotic infrastructures more operable and transitioning new systems and technologies to field operations. The strategic significance of the project and future developments are discussed in the conclusions.

Conference paper

Mukherjee S, Wallace AM, Wang S, 2021, Predicting Vehicle Behavior Using Automotive Radar and Recurrent Neural Networks, IEEE Open Journal of Intelligent Transportation Systems, Vol: 2, Pages: 254-268

Journal article

Wei B, Xu W, Luo C, Zoppi G, Ma D, Wang S et al., 2020, SolarSLAM: Battery-free loop closure for indoor localisation, Pages: 4485-4490, ISSN: 2153-0858

In this paper, we propose SolarSLAM, a battery-free loop closure method for indoor localisation. Inertial Measurement Unit (IMU) based indoor localisation has been widely used due to its ubiquity in mobile devices, such as mobile phones, smartwatches and wearable bands. However, it suffers from unavoidable long-term drift. To mitigate the localisation error, many loop closure solutions have been proposed using sophisticated sensors, such as cameras and lasers. Despite achieving high-precision localisation performance, these sensors consume a large amount of energy. Unlike those solutions, the proposed SolarSLAM takes advantage of an energy-harvesting solar cell as a sensor to achieve effective battery-free loop closure. The proposed method uses key-point dynamic time warping to detect loops, and robust simultaneous localisation and mapping (SLAM) as the optimiser to remove falsely recognised loop closures. Extensive evaluations in real environments demonstrate the advantageous photocurrent characteristics for indoor localisation and the good localisation accuracy of the proposed method.
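Dynamic time warping (DTW), the matching tool behind SolarSLAM's loop detection, aligns two sequences that evolve at different speeds, e.g. photocurrent traces from two passes along the same corridor at different walking paces. The sketch below is the textbook DTW recurrence on 1-D sequences; the paper's key-point variant and its SLAM back end are not reproduced here.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences.

    D[i, j] holds the minimal cumulative alignment cost of a[:i] and b[:j];
    each cell extends the cheapest of the three admissible warping steps.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```

A sequence and a time-stretched copy of itself (elements repeated) align at zero cost, whereas an unrelated trace does not, which is what makes DTW useful for deciding whether two traces come from the same place.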

Conference paper

Hong Z, Petillot Y, Wang S, 2020, RadarSLAM: Radar based large-scale SLAM in all weathers, Pages: 5164-5170, ISSN: 2153-0858

Numerous Simultaneous Localization and Mapping (SLAM) algorithms have been presented in the last decade using different sensor modalities. However, robust SLAM in extreme weather conditions is still an open research problem. In this paper, RadarSLAM, a full radar-based graph SLAM system, is proposed for reliable localization and mapping in large-scale environments. It is composed of pose tracking, local mapping, loop closure detection and pose graph optimization, enhanced by novel feature matching and probabilistic point cloud generation on radar images. Extensive experiments are conducted on a public radar dataset and several self-collected radar sequences, demonstrating state-of-the-art reliability and localization accuracy in various adverse weather conditions, such as dark night, dense fog and heavy snowfall.

Conference paper

Bharti V, Lane D, Wang S, 2020, Learning to Detect Subsea Pipelines with Deep Segmentation Network and Self-Supervision

Regular inspection of subsea pipelines is crucial for assessing their integrity and for maintenance. These inspections are usually very expensive unless Autonomous Underwater Vehicles (AUVs) are employed. Most research in this area has focused on automating the process with multiple perceptive sensors to reduce operational costs. This paper investigates the problem of pipeline detection using optical sensors in highly turbid subsea scenarios. A deep neural network is designed to segment pipes in optical images. Since common issues with underwater optical sensing are dynamic scene changes and the difficulty of labelling large datasets, a novel self-supervision method based on a multibeam echosounder is proposed to fine-tune a pre-trained network on the fly. Extensive experiments are conducted in challenging real-world scenarios, showing the effectiveness of the proposed method, which runs in real time on an Nvidia Jetson AGX embedded PC, supporting AUV field operation.

Conference paper

Ochal M, Vazquez J, Petillot Y, Wang S et al., 2020, A Comparison of Few-Shot Learning Methods for Underwater Optical and Sonar Image Classification

Deep convolutional neural networks generally perform well in underwater object recognition tasks on both optical and sonar images. Many such methods require hundreds, if not thousands, of images per class to generalize well to unseen examples. However, obtaining and labeling sufficiently large volumes of data can be costly and time-consuming, especially when observing rare objects or performing real-time operations. Few-Shot Learning (FSL) research has produced many promising methods to deal with low data availability. However, little attention has been given to the underwater domain, where the style of images poses additional challenges for object recognition algorithms. To the best of our knowledge, this is the first paper to evaluate and compare several supervised and semi-supervised FSL methods using underwater optical and side-scan sonar imagery. Our results show that FSL methods offer a significant advantage over traditional transfer learning methods that fine-tune pre-trained models. We hope that our work will help apply FSL to autonomous underwater systems and expand their learning capabilities.

Conference paper

Sheeny M, Wallace A, Wang S, 2020, 300 GHz radar object recognition based on deep neural networks and transfer learning, IET Radar, Sonar and Navigation, Vol: 14, Pages: 1483-1493, ISSN: 1751-8784

For high-resolution scene mapping and object recognition, optical technologies such as cameras and LiDAR are the sensors of choice. However, for future vehicle autonomy and driver assistance in adverse weather conditions, improvements in automotive radar technology and the development of algorithms and machine learning for robust mapping and recognition are essential. In this study, the authors describe a methodology based on deep neural networks to recognise objects in 300 GHz radar images using the returned power data only, investigating robustness to changes in range, orientation and different receivers in a laboratory environment. As the training data is limited, they have also investigated the effects of transfer learning. As a necessary first step before road trials, they have also considered detection and classification in multiple object scenes.

Journal article

Alcaraz D, Antonelli G, Caccia M, Dooly G, Flavin N, Kopf A, Ludvigsen M, Opderbecke J, Palmer M, Pascoal A, De Sousa JB, Petroccia R, Ridao P, Miskovic N, Wang S et al., 2020, The Marine Robotics Research Infrastructure Network (EUMarine Robots): An Overview

The Marine Robotics Research Infrastructure Network (EUMarine Robots) addresses the H2020 call topic INFRAIA-02-2017: Integrating Activities for Starting Communities. It mobilizes a comprehensive consortium of most of the key marine robotics research infrastructures, which in turn mobilized stakeholders from different Member States, Associated Countries and other third countries, with the main objective of opening up key national and regional marine robotics research infrastructures (RIs) to all European researchers, ensuring their optimal use and joint development to establish a world-class integrated marine robotics infrastructure.

Conference paper

Bharti V, Lane D, Wang S, 2020, A Semi-Heuristic Approach for Tracking Buried Subsea Pipelines using Fluxgate Magnetometers, Pages: 469-475, ISSN: 2161-8070

Integrity assessment of subsea oil and gas transmission lines is crucial for safe and environment-friendly operations. These assessments are usually very expensive unless Autonomous Underwater Vehicles (AUVs) are employed. Buried sections of long pipelines pose a major hurdle to effective pipeline tracking by an AUV: if the pipe track is lost, the vehicle must invest resources to relocate the pipeline. This work presents a heuristic-based method to detect buried pipes using magnetometers, followed by a Kalman filter parameterized to optimally localize subsea pipes. Extensive experiments on real and simulated data show the reliable performance of this method for tracking buried pipelines.
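As a generic illustration of the filtering stage mentioned above, a minimal constant-velocity Kalman filter over a scalar state is sketched below. This is a textbook filter, not the paper's parameterization; the scenario (smoothing a pipe's lateral offset from noisy magnetometer-derived measurements) and all noise values are assumptions for illustration.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over a scalar position.

    State x = [position, velocity]^T; only the position is measured.
    Returns the filtered position estimate at every time step.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])        # state transition
    H = np.array([[1.0, 0.0]])                   # observation model
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],    # process noise covariance
                      [dt**2 / 2, dt]])
    R = np.array([[r]])                          # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])     # initial state
    P = np.eye(2)                                # initial state covariance
    out = []
    for z in measurements:
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new measurement.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return np.array(out)
```

With a small process-noise setting the filter heavily smooths the raw measurements, which is the behaviour one would tune for when the pipe's position changes slowly relative to the sensor noise.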

Conference paper

Sheeny M, Wallace A, Wang S, 2020, RADIO: Parameterized generative radar data augmentation for small datasets, Applied Sciences (Switzerland), Vol: 10

We present a novel, parameterised radar data augmentation (RADIO) technique to generate realistic radar samples from small datasets for the development of radar-related deep learning models. RADIO leverages the physical properties of radar signals, such as attenuation, azimuthal beam divergence and speckle noise, for data generation and augmentation. Exemplary applications on radar-based classification and detection demonstrate that RADIO can generate meaningful radar samples that effectively boost the accuracy of classification and generalisability of deep models trained with a small dataset.

Journal article

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
