Imperial College London

Dr Sen Wang

Faculty of Engineering, Department of Electrical and Electronic Engineering

Senior Lecturer

Contact

 

sen.wang

 
 

Location

 

Electrical Engineering, South Kensington Campus



Publications


93 results found

Xie L, Miao Y, Wang S, Blunsom P, Wang Z, Chen C, Markham A, Trigoni N et al., 2021, Learning With Stochastic Guidance for Robot Navigation, IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, Vol: 32, Pages: 166-176, ISSN: 2162-237X

Journal article

Cao F, Yan F, Wang S, Zhuang Y, Wang W et al., 2021, Season-Invariant and Viewpoint-Tolerant LiDAR Place Recognition in GPS-Denied Environments, IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, Vol: 68, Pages: 563-574, ISSN: 0278-0046

Journal article

Mukherjee S, Wallace AM, Wang S, 2021, Predicting Vehicle Behavior Using Automotive Radar and Recurrent Neural Networks, IEEE OPEN JOURNAL OF INTELLIGENT TRANSPORTATION SYSTEMS, Vol: 2, Pages: 254-268

Journal article

Wei B, Xu W, Luo C, Zoppi G, Ma D, Wang S et al., 2020, SolarSLAM: Battery-free loop closure for indoor localisation, Pages: 4485-4490, ISSN: 2153-0858

In this paper, we propose SolarSLAM, a battery-free loop closure method for indoor localisation. Inertial Measurement Unit (IMU) based indoor localisation has been widely used due to the ubiquity of IMUs in mobile devices such as mobile phones, smartwatches and wearable bands. However, it suffers from unavoidable long-term drift. To mitigate the localisation error, many loop closure solutions have been proposed using sophisticated sensors, such as cameras and lasers. Despite achieving high-precision localisation performance, these sensors consume a large amount of energy. In contrast, the proposed SolarSLAM takes advantage of an energy-harvesting solar cell as a sensor and achieves effective battery-free loop closure. The method applies key-point dynamic time warping to detect loops and uses robust simultaneous localisation and mapping (SLAM) as the optimiser to remove falsely recognised loop closures. Extensive evaluations in real environments demonstrate the advantageous photocurrent characteristics for indoor localisation and the good localisation accuracy of the proposed method.
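
To make the loop-detection idea concrete, here is a minimal dynamic time warping sketch over 1-D photocurrent traces, in Python with NumPy. The synthetic traces and the threshold are invented for illustration; the paper's key-point selection and robust SLAM back-end are not reproduced here.

import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Synthetic photocurrent traces: revisiting a place yields a similar but
# slightly time-warped light profile (values are illustrative).
rng = np.random.default_rng(0)
first_visit = np.sin(np.linspace(0, 3 * np.pi, 60)) + rng.normal(0, 0.05, 60)
revisit = np.sin(np.linspace(0, 3 * np.pi, 75)) + rng.normal(0, 0.05, 75)
elsewhere = rng.normal(0, 1.0, 60)

THRESHOLD = 10.0                                         # tuning parameter, assumed
print(dtw_distance(first_visit, revisit) < THRESHOLD)    # likely a loop
print(dtw_distance(first_visit, elsewhere) < THRESHOLD)  # likely not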

Conference paper

Hong Z, Petillot Y, Wang S, 2020, RadarSLAM: Radar based large-scale SLAM in all weathers, Pages: 5164-5170, ISSN: 2153-0858

Numerous Simultaneous Localization and Mapping (SLAM) algorithms have been presented in the last decade using different sensor modalities. However, robust SLAM in extreme weather conditions is still an open research problem. In this paper, RadarSLAM, a full radar-based graph SLAM system, is proposed for reliable localization and mapping in large-scale environments. It is composed of pose tracking, local mapping, loop closure detection and pose graph optimization, enhanced by novel feature matching and probabilistic point cloud generation on radar images. Extensive experiments are conducted on a public radar dataset and several self-collected radar sequences, demonstrating state-of-the-art reliability and localization accuracy in various adverse weather conditions, such as dark night, dense fog and heavy snowfall.
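
The pose-graph back-end of such a system can be illustrated in a few lines. The sketch below optimises a toy SE(2) pose graph with one loop closure using SciPy least squares; the graph, measurements and gauge constraint are invented for illustration and say nothing about the authors' radar front-end.

import numpy as np
from scipy.optimize import least_squares

def relative_pose(xi, xj):
    """Pose of node j expressed in the frame of node i, as (x, y, theta)."""
    c, s = np.cos(xi[2]), np.sin(xi[2])
    dx, dy = xj[0] - xi[0], xj[1] - xi[1]
    dth = np.arctan2(np.sin(xj[2] - xi[2]), np.cos(xj[2] - xi[2]))
    return np.array([c * dx + s * dy, -s * dx + c * dy, dth])

def compose(pose, meas):
    """Apply a relative motion 'meas' in the frame of 'pose'."""
    c, s = np.cos(pose[2]), np.sin(pose[2])
    return np.array([pose[0] + c * meas[0] - s * meas[1],
                     pose[1] + s * meas[0] + c * meas[1],
                     pose[2] + meas[2]])

def residuals(flat, edges):
    poses = flat.reshape(-1, 3)
    res = [poses[0]]                                    # gauge constraint: fix node 0
    for i, j, meas in edges:
        r = relative_pose(poses[i], poses[j]) - meas
        r[2] = np.arctan2(np.sin(r[2]), np.cos(r[2]))   # wrap angle residual
        res.append(r)
    return np.concatenate(res)

# Odometry around a square with drift on one edge, plus a loop closure 3 -> 0.
edges = [(0, 1, np.array([1.0, 0.0, np.pi / 2])),
         (1, 2, np.array([1.2, 0.0, np.pi / 2])),       # drifted odometry
         (2, 3, np.array([1.0, 0.0, np.pi / 2])),
         (3, 0, np.array([1.0, 0.0, np.pi / 2]))]       # loop closure
init = [np.zeros(3)]
for _, _, meas in edges[:3]:                            # dead-reckon initial guess
    init.append(compose(init[-1], meas))
sol = least_squares(residuals, np.concatenate(init), args=(edges,))
print(sol.x.reshape(-1, 3).round(2))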

Conference paper

Bharti V, Lane D, Wang S, 2020, Learning to Detect Subsea Pipelines with Deep Segmentation Network and Self-Supervision

Regular inspection of subsea pipelines is crucial for assessing their integrity and for maintenance. These inspections usually are very expensive without employing Autonomous Underwater Vehicles (AUVs). Most of the research focus in this area has been directed in automating the process to reduce operational costs and is done by using multiple perceptive sensors. This paper investigates the problem of pipeline detection using optical sensors in highly turbid subsea scenarios. A deep neural network is designed to segment pipes from optical images. Since a common issue with underwater optical sensing is dynamic changes in scenes and the difficulty of labelling large dataset, a novel self-supervision method based on multibeam echosounder is proposed to fine-tune a pre-trained network on the fly. Extensive experiments are conducted in real-world challenging scenarios, showing the effectiveness of the proposed method. The proposed method can run real-time on an Nvidia Jetson AGX embedded PC, supporting AUV field operation.
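
One way to picture the on-the-fly self-supervision is a single fine-tuning step in which acoustic detections supply pseudo-labels for the optical segmentation network. The PyTorch sketch below uses a deliberately tiny stand-in network and a fabricated pseudo-mask; the real architecture and the echosounder-to-image projection are assumptions, not the paper's code.

import torch
import torch.nn as nn

# Tiny stand-in for the pre-trained segmentation network (assumed).
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 1),                      # per-pixel pipe logit
)
optim = torch.optim.Adam(model.parameters(), lr=1e-4)

image = torch.rand(1, 3, 64, 64)             # turbid optical frame (fake)

# Pseudo-label: project the echosounder's pipe detection into the image as a
# coarse binary mask (here simply a fabricated vertical band).
pseudo_mask = torch.zeros(1, 1, 64, 64)
pseudo_mask[..., :, 28:36] = 1.0

# One self-supervised fine-tuning step on the fly.
logits = model(image)
loss = nn.functional.binary_cross_entropy_with_logits(logits, pseudo_mask)
optim.zero_grad()
loss.backward()
optim.step()
print(float(loss))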

Conference paper

Ochal M, Vazquez J, Petillot Y, Wang S et al., 2020, A Comparison of Few-Shot Learning Methods for Underwater Optical and Sonar Image Classification

Deep convolutional neural networks generally perform well in underwater object recognition tasks on both optical and sonar images. Many such methods require hundreds, if not thousands, of images per class to generalize well to unseen examples. However, obtaining and labeling sufficiently large volumes of data can be relatively costly and time-consuming, especially when observing rare objects or performing real-time operations. Few-Shot Learning (FSL) efforts have produced many promising methods to deal with low data availability. However, little attention has been given to the underwater domain, where the style of images poses additional challenges for object recognition algorithms. To the best of our knowledge, this is the first paper to evaluate and compare several supervised and semi-supervised FSL methods using underwater optical and side-scan sonar imagery. Our results show that FSL methods offer a significant advantage over traditional transfer learning methods that fine-tune pre-trained models. We hope that our work will help apply FSL to autonomous underwater systems and expand their learning capabilities.
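
As a concrete instance of the supervised FSL family compared here, below is a minimal prototypical-network episode: class prototypes are the mean embeddings of the support set, and queries are classified by nearest prototype. The toy encoder and the synthetic "sonar" tensors are placeholders, not the paper's models or data.

import torch
import torch.nn as nn

embed = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten())  # toy encoder

n_way, k_shot, n_query = 3, 5, 4
# Fake single-channel sonar patches: [class, shot, C, H, W].
support = torch.rand(n_way, k_shot, 1, 32, 32)
query = torch.rand(n_way * n_query, 1, 32, 32)

with torch.no_grad():
    # Prototype = mean support embedding per class.
    protos = embed(support.view(-1, 1, 32, 32)).view(n_way, k_shot, -1).mean(1)
    q = embed(query)                                   # [N, D]
    # Classify each query by Euclidean distance to the nearest prototype.
    dists = torch.cdist(q, protos)                     # [N, n_way]
    pred = dists.argmin(dim=1)
print(pred)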

Conference paper

Sheeny M, Wallace A, Wang S, 2020, 300 GHz radar object recognition based on deep neural networks and transfer learning, IET Radar, Sonar and Navigation, Vol: 14, Pages: 1483-1493, ISSN: 1751-8784

For high-resolution scene mapping and object recognition, optical technologies such as cameras and LiDAR are the sensors of choice. However, for future vehicle autonomy and driver assistance in adverse weather conditions, improvements in automotive radar technology and the development of algorithms and machine learning for robust mapping and recognition are essential. In this study, the authors describe a methodology based on deep neural networks to recognise objects in 300 GHz radar images using the returned power data only, investigating robustness to changes in range, orientation and different receivers in a laboratory environment. As the training data is limited, they have also investigated the effects of transfer learning. As a necessary first step before road trials, they have also considered detection and classification in multiple object scenes.
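
The transfer-learning setup investigated is the familiar fine-tuning recipe: start from a CNN trained on a large source domain, freeze the early layers, and retrain the classifier head on the small radar set. A generic PyTorch sketch follows, with an invented class count and no claim to match the authors' exact configuration.

import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=None)   # in practice, load pre-trained weights
num_radar_classes = 6                      # assumed number of object classes

for p in backbone.parameters():            # freeze the pre-trained features
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, num_radar_classes)

# Only the new head is trained on the small 300 GHz dataset.
optim = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
radar_batch = torch.rand(8, 3, 224, 224)   # power images, replicated to 3 channels
labels = torch.randint(0, num_radar_classes, (8,))
loss = nn.functional.cross_entropy(backbone(radar_batch), labels)
optim.zero_grad()
loss.backward()
optim.step()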

Journal article

Alcaraz D, Antonelli G, Caccia M, Dooly G, Flavin N, Kopf A, Ludvigsen M, Opderbecke J, Palmer M, Pascoal A, De Sousa JB, Petroccia R, Ridao P, Miskovic N, Wang S et al., 2020, The Marine Robotics Research Infrastructure Network (EUMarine Robots): An Overview

The Marine Robotics Research Infrastructure Network (EUMarine Robots) addresses the H2020 call topic INFRAIA-02-2017: Integrating Activities for Starting Communities. It mobilises a comprehensive consortium of most of the key marine robotics research infrastructures which, in turn, mobilised stakeholders from different Member States, Associated Countries and other third countries, with one main objective: to open up key national and regional marine robotics research infrastructures (RIs) to all European researchers, ensuring their optimal use and joint development in order to establish a world-class integrated marine robotics infrastructure.

Conference paper

Bharti V, Lane D, Wang S, 2020, A Semi-Heuristic Approach for Tracking Buried Subsea Pipelines using Fluxgate Magnetometers, Pages: 469-475, ISSN: 2161-8070

Integrity assessment of subsea oil and gas transmission lines is crucial for safe and environment-friendly operations, but is usually very expensive unless Autonomous Underwater Vehicles (AUVs) are employed. Buried sections of long pipelines pose a major hurdle to effective pipeline tracking by an AUV: if the pipe track is lost, the vehicle must invest resources to relocate the pipeline. This work presents a heuristic-based method to detect buried pipes using magnetometers, followed by a Kalman filter parameterized to optimally localize subsea pipes. Extensive experiments on real and simulated data show the reliable performance of this method for tracking buried pipelines.
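
The tracking stage can be pictured as a standard linear Kalman filter over the pipe's lateral offset from the vehicle, with the heuristic magnetometer detections acting as noisy measurements. Everything below — the state, noise levels and data — is an illustrative assumption, not the paper's parameterisation.

import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])    # state: lateral offset and its drift rate
H = np.array([[1.0, 0.0]])               # magnetometer heuristic measures offset only
Q = np.diag([1e-3, 1e-4])                # process noise (assumed)
R = np.array([[0.25]])                   # measurement noise (assumed)

x = np.zeros(2)                          # initial state estimate
P = np.eye(2)                            # initial covariance

rng = np.random.default_rng(0)
true_offset = np.linspace(0.0, 2.0, 20)  # pipe slowly veering off track
for z in true_offset + rng.normal(0, 0.5, 20):
    x, P = F @ x, F @ P @ F.T + Q                     # predict
    y = z - H @ x                                     # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P                       # update
print(x.round(2))                        # estimated offset and drift rate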

Conference paper

Sheeny M, Wallace A, Wang S, 2020, RADIO: Parameterized generative radar data augmentation for small datasets, Applied Sciences (Switzerland), Vol: 10

We present a novel, parameterised radar data augmentation (RADIO) technique to generate realistic radar samples from small datasets for the development of radar-related deep learning models. RADIO leverages the physical properties of radar signals, such as attenuation, azimuthal beam divergence and speckle noise, for data generation and augmentation. Exemplary applications on radar-based classification and detection demonstrate that RADIO can generate meaningful radar samples that effectively boost the accuracy of classification and generalisability of deep models trained with a small dataset.
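
The physics-guided augmentations can be sketched directly: attenuate intensities as if the target sat at a greater range, and inject multiplicative speckle. The constants and the gamma speckle model below are illustrative assumptions rather than the paper's exact parameterisation.

import numpy as np

def attenuate(img, extra_range_m, alpha_db_per_m=0.5):
    """Scale radar intensities as if the target were further away (assumed model)."""
    gain = 10 ** (-alpha_db_per_m * extra_range_m / 10.0)
    return img * gain

def add_speckle(img, rng, looks=4):
    """Multiplicative speckle: gamma-distributed noise with unit mean."""
    noise = rng.gamma(shape=looks, scale=1.0 / looks, size=img.shape)
    return img * noise

rng = np.random.default_rng(0)
sample = np.abs(rng.normal(0.5, 0.2, (64, 64)))   # fake radar power image
augmented = add_speckle(attenuate(sample, extra_range_m=3.0), rng)
print(sample.mean().round(3), augmented.mean().round(3))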

Journal article

Mukherjee S, Wang S, Wallace A, 2020, Interacting Vehicle Trajectory Prediction with Convolutional Recurrent Neural Networks, Pages: 4336-4342, ISSN: 1050-4729

Anticipating the future trajectories of surrounding vehicles is a crucial and challenging task in path planning for autonomy. We propose a novel Convolutional Long Short Term Memory (Conv-LSTM) based neural network architecture to predict the future positions of cars using several seconds of historical driving observations. This consists of three modules: 1) Interaction Learning to capture the effect of surrounding cars, 2) Temporal Learning to identify the dependency on past movements and 3) Motion Learning to convert the extracted features from these two modules into future positions. To continuously achieve accurate prediction, we introduce a novel feedback scheme where the current predicted positions of each car are leveraged to update future motion, encapsulating the effect of the surrounding cars. Experiments on two public datasets demonstrate that the proposed methodology can match or outperform the state-of-the-art methods for long-term trajectory prediction.
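
The feedback scheme — feeding each predicted position back in as the next input — is the autoregressive decoding pattern sketched below with a plain LSTM over (x, y) coordinates. The convolutional interaction and motion modules are omitted, and all sizes here are assumptions.

import torch
import torch.nn as nn

class TrajPredictor(nn.Module):
    """Encode observed positions, then roll predictions forward autoregressively."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, history, horizon):
        out, state = self.lstm(history)          # encode the past (B, T, 2)
        step = self.head(out[:, -1:])            # first predicted position
        preds = [step]
        for _ in range(horizon - 1):
            out, state = self.lstm(step, state)  # feed the prediction back in
            step = self.head(out)
            preds.append(step)
        return torch.cat(preds, dim=1)           # (B, horizon, 2)

model = TrajPredictor()
history = torch.cumsum(torch.rand(4, 10, 2) * 0.1, dim=1)  # fake 10-step tracks
future = model(history, horizon=5)
print(future.shape)                               # torch.Size([4, 5, 2])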

Conference paper

Wang C, Zhang Q, Tian Q, Li S, Wang X, Lane D, Petillot Y, Wang S et al., 2020, Learning Mobile Manipulation through Deep Reinforcement Learning, SENSORS, Vol: 20

Journal article

Wang C-X, Di Renzo M, Stanczak S, Wang S, Larsson EG et al., 2020, Artificial Intelligence Enabled Wireless Networking for 5G and Beyond: Recent Advances and Future Challenges, IEEE WIRELESS COMMUNICATIONS, Vol: 27, Pages: 16-23, ISSN: 1536-1284

Journal article

Fraser H, Wang S, 2020, DeepBev: A conditional adversarial network for bird's eye view generation, Pages: 5581-5586, ISSN: 1051-4651

Obtaining a meaningful, interpretable yet compact representation of the immediate surroundings of an autonomous vehicle is paramount for effective operation as well as safety. This paper proposes a solution that represents semantically important objects from a top-down, ego-centric bird's eye view. The novelty of this work lies in formulating the problem as an adversarial learning task, in which a generator model is tasked with producing bird's eye view representations plausible enough to be mistaken for ground truth samples. This is achieved with a Wasserstein Generative Adversarial Network based model conditioned on object detections from monocular RGB images and the corresponding bounding boxes. Extensive experiments show our model is more robust to novel data than strictly supervised benchmark models, while being a fraction of the size of the next best.
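
At its core, the adversarial formulation pairs a generator conditioned on detections with a critic that scores (BEV, condition) pairs under the Wasserstein objective. The MLP stand-ins and dimensions below are assumptions for illustration; the paper's models are convolutional and are trained with the usual WGAN safeguards.

import torch
import torch.nn as nn

cond_dim, noise_dim, bev_dim = 16, 32, 64 * 64   # assumed sizes

G = nn.Sequential(nn.Linear(noise_dim + cond_dim, 256), nn.ReLU(),
                  nn.Linear(256, bev_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(bev_dim + cond_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))             # critic with unbounded score

cond = torch.randn(8, cond_dim)        # encoded detections + boxes (stand-in)
real = torch.rand(8, bev_dim) * 2 - 1  # ground-truth BEV grids, flattened
fake = G(torch.cat([torch.randn(8, noise_dim), cond], dim=1))

# Wasserstein losses: the critic separates real from fake scores,
# while the generator pushes its samples' scores up.
d_loss = D(torch.cat([fake.detach(), cond], 1)).mean() \
       - D(torch.cat([real, cond], 1)).mean()
g_loss = -D(torch.cat([fake, cond], 1)).mean()
print(float(d_loss), float(g_loss))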

Conference paper

Zhou Y, Zhou W, Fei M, Wang S et al., 2020, 3D Curve Planning Algorithm of Aircraft Under Multiple Constraints, Pages: 236-249, ISSN: 1865-0929

Trajectory planning for aircraft generally seeks an available optimal mission route under certain constraints, according to the mission requirements. Traditional 3D trajectory planning algorithms easily fall into local optima and search slowly; some can only perform polyline searches and fail to fully account for the physical constraints of the aircraft. To address these problems, this paper proposes a path planning algorithm that combines the A∗ algorithm with Dubins curves. The algorithm adopts a heuristic search, performs two rounds of pruning via parameter settings and, with reasonable parameters, quickly produces a three-dimensional curved trajectory for the UAV in a short time, a marked improvement over other current algorithms.
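
The heuristic-search half of such a planner can be shown in miniature: a standard A∗ over a grid, which the paper combines with Dubins curves to respect the aircraft's turning constraints. The grid, unit costs and Manhattan heuristic below are a generic illustration, not the paper's two-stage pruning.

import heapq

def astar(grid, start, goal):
    """Plain 4-connected A* with a Manhattan-distance heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],     # 1 = no-fly cell
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 3)))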

Conference paper

Yang B, Wang S, Markham A, Trigoni N et al., 2020, Robust Attentional Aggregation of Deep Feature Sets for Multi-view 3D Reconstruction, INTERNATIONAL JOURNAL OF COMPUTER VISION, Vol: 128, Pages: 53-73, ISSN: 0920-5691

Journal article

Magyar B, Tsiogkas N, Brito B, Patel M, Lane D, Wang S et al., 2019, Guided Stochastic Optimization for Motion Planning, Frontiers in Robotics and AI, Vol: 6

Learning from Demonstration (LfD) is a family of methods used to teach robots specific tasks. It helps robots cope with the increasing difficulty of performing manipulation tasks in a scalable manner. The state-of-the-art in collaborative robots allows for simple LfD approaches that can handle limited parameter changes of a task. However, these methods typically approach the problem from a control perspective and are therefore tied to specific robot platforms. In contrast, this paper proposes a novel motion planning approach that combines the benefits of LfD with generic motion planning, providing robustness to the planning process as well as scaling task learning in both the number of tasks and the number of robot platforms. Specifically, it introduces Dynamical Movement Primitives (DMPs) based LfD as initial trajectories for the Stochastic Optimization for Motion Planning (STOMP) framework. This allows for successful task execution even when the task parameters and the environment change. Moreover, the proposed approach allows for skill transfer between robots: a task demonstrated to one robot via kinesthetic teaching can be successfully executed by a different robot. The proposed approach, coined Guided Stochastic Optimization for Motion Planning (GSTOMP), is evaluated extensively using two different manipulator systems in simulation and in real conditions. Results show that GSTOMP improves task success compared to the simple LfD approaches employed by state-of-the-art collaborative robots. Moreover, transferring skills is shown to be feasible and to perform well. Finally, the proposed approach is compared against a plethora of state-of-the-art motion planners. The results show that its motion planning performance is comparable to or better than the state-of-the-art.
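
The essence of seeding a stochastic optimiser with a demonstration can be condensed as follows: start from the demonstrated (DMP-reproduced) trajectory, perturb it with smooth noise while pinning the endpoints, and keep perturbations that lower a cost. This 1-D toy with an invented cost is a caricature of the STOMP idea, not the GSTOMP implementation.

import numpy as np

T = 50
demo = np.linspace(0.0, 1.0, T) ** 2       # stand-in for a DMP reproduction

def cost(traj, obstacle=0.5, margin=0.08):
    """Toy cost: smoothness plus a penalty for passing near an 'obstacle'."""
    smoothness = np.sum(np.diff(traj, n=2) ** 2)
    proximity = np.sum(np.maximum(0.0, margin - np.abs(traj - obstacle)))
    return smoothness + 10.0 * proximity

rng = np.random.default_rng(0)
best, best_cost = demo.copy(), cost(demo)
for _ in range(300):                       # STOMP-style noisy rollouts
    eps = rng.normal(0.0, 0.02, T)
    eps = np.convolve(eps, np.ones(7) / 7, mode="same")  # smooth the noise
    eps[0] = eps[-1] = 0.0                 # keep start and goal fixed
    cand = best + eps
    c = cost(cand)
    if c < best_cost:
        best, best_cost = cand, c
print(round(cost(demo), 4), "->", round(best_cost, 4))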

Journal article

Yang B, Wang J, Clark R, Wang S, Markham A, Hu Q, Trigoni N et al., 2019, Learning object bounding boxes for 3D instance segmentation on point clouds, 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Publisher: Neural Information Processing Systems Foundation, Inc.

We propose a novel, conceptually simple and general framework for instance segmentation on 3D point clouds. Our method, called 3D-BoNet, follows the simple design philosophy of per-point multilayer perceptrons (MLPs). The framework directly regresses 3D bounding boxes for all instances in a point cloud, while simultaneously predicting a point-level mask for each instance. It consists of a backbone network followed by two parallel network branches for 1) bounding box regression and 2) point mask prediction. 3D-BoNet is single-stage, anchor-free and end-to-end trainable. Moreover, it is remarkably computationally efficient as, unlike existing approaches, it does not require any post-processing steps such as non-maximum suppression, feature sampling, clustering or voting. Extensive experiments show that our approach surpasses existing work on both ScanNet and S3DIS datasets while being approximately 10× more computationally efficient. Comprehensive ablation studies demonstrate the effectiveness of our design.
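
The two-branch design reads directly as a network skeleton: a shared per-point MLP backbone, a global feature, one branch regressing a fixed number of boxes and one predicting per-point masks. The PyTorch sketch below fixes arbitrary sizes and omits the paper's losses (e.g. the box-association step), so treat every dimension as an assumption.

import torch
import torch.nn as nn

class BoNetSketch(nn.Module):
    def __init__(self, max_instances=8, feat=64):
        super().__init__()
        self.point_mlp = nn.Sequential(        # shared per-point MLP backbone
            nn.Linear(3, feat), nn.ReLU(), nn.Linear(feat, feat), nn.ReLU())
        self.box_head = nn.Linear(feat, max_instances * 6)   # (min_xyz, max_xyz)
        self.mask_head = nn.Linear(feat + feat, max_instances)
        self.max_instances = max_instances

    def forward(self, points):                 # points: (B, N, 3)
        per_point = self.point_mlp(points)     # (B, N, F)
        global_feat = per_point.max(dim=1).values            # (B, F)
        boxes = self.box_head(global_feat).view(-1, self.max_instances, 6)
        n = points.shape[1]
        fused = torch.cat([per_point,
                           global_feat.unsqueeze(1).expand(-1, n, -1)], dim=-1)
        masks = torch.sigmoid(self.mask_head(fused))         # (B, N, K)
        return boxes, masks

net = BoNetSketch()
boxes, masks = net(torch.rand(2, 1024, 3))
print(boxes.shape, masks.shape)    # (2, 8, 6) and (2, 1024, 8)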

Conference paper

Hong Z, Petillot Y, Lane D, Miao Y, Wang S et al., 2019, TextPlace: Visual Place Recognition and Topological Localization Through Reading Scene Texts, 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Publisher: IEEE

Conference paper

Wen H, Clark R, Wang S, Lu X, Du B, Hu W, Trigoni N et al., 2019, Efficient indoor positioning with visual experiences via lifelong learning, IEEE Transactions on Mobile Computing, Vol: 18, Pages: 814-829, ISSN: 1536-1233

Positioning with visual sensors in indoor environments has many advantages: it doesn't require infrastructure or accurate maps, and is more robust and accurate than other modalities such as WiFi. However, one of the biggest hurdles that prevents its practical application on mobile devices is the time-consuming visual processing pipeline. To overcome this problem, this paper proposes a novel lifelong learning approach to enable efficient and real-time visual positioning. We explore the fact that when following a previous visual experience for multiple times, one could gradually discover clues on how to traverse it with much less effort, e.g., which parts of the scene are more informative, and what kind of visual elements we should expect. Such second-order information is recorded as parameters, which provide key insights of the context and empower our system to dynamically optimise itself to stay localised with minimum cost. We implement the proposed approach on an array of mobile and wearable devices, and evaluate its performance in two indoor settings. Experimental results show our approach can reduce the visual processing time up to two orders of magnitude, while achieving sub-metre positioning accuracy.

Journal article

Liu Y, Petillot Y, Lane D, Wang S et al., 2019, Global Localization with Object-Level Semantics and Topology, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 4909-4915, ISSN: 1050-4729

Conference paper

Saputra MRU, de Gusmao PPB, Wang S, Markham A, Trigoni N et al., 2019, Learning Monocular Visual Odometry through Geometry-Aware Curriculum Learning, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 3549-3555, ISSN: 1050-4729

Conference paper

Li R, Wang S, Gu D, 2018, Ongoing Evolution of Visual SLAM from Geometry to Deep Learning: Challenges and Opportunities, COGNITIVE COMPUTATION, Vol: 10, Pages: 875-889, ISSN: 1866-9956

Journal article

Carlucho I, De Paula M, Wang S, Petillot Y, Acosta GG et al., 2018, Adaptive low-level control of autonomous underwater vehicles using deep reinforcement learning, ROBOTICS AND AUTONOMOUS SYSTEMS, Vol: 107, Pages: 71-86, ISSN: 0921-8890

Journal article

Wang S, Xia Q, Smith W, 2018, Geomagnetic Field based Human Search and Following for Autonomous Robots, Pages: 66-67

In this paper, we present a novel human search and following system which uses the pervasive geomagnetic field to enable autonomous robots to track and find human users in large-scale indoor environments for human-robot coordination. By leveraging the ambient geomagnetic field signature, a robot can search for and follow users who traverse a space freely, without the need for a direct visual line of sight. The system is low-cost since it requires only a magnetometer, which can be found on almost every mobile device, and since no visual data of either the user or the robot is used, privacy is largely respected. The system comprises two main elements: a geomagnetic field based Simultaneous Localisation and Mapping (SLAM) algorithm, which generates a geomagnetic field map of the environment, and geomagnetic field sequence matching for autonomous navigation of the robot. The system is tested in a large indoor office, demonstrating its effectiveness for large-scale human search and following in human-robot co-working environments.
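
A toy version of the sequence-matching element: slide the follower's recent magnetic readings along the previously mapped trace and take the best-matching index as the user's position. The corridor trace and window length are fabricated; the actual system matches within a SLAM-built magnetic map.

import numpy as np

rng = np.random.default_rng(1)
# Mapped geomagnetic magnitude along a corridor (uT), from a prior SLAM pass.
map_trace = 50.0 + np.cumsum(rng.normal(0.0, 0.3, 500))
window = 40
query = map_trace[200:200 + window] + rng.normal(0.0, 0.2, window)  # live readings

# Slide the query over the map and pick the best-matching position.
scores = np.array([np.linalg.norm(map_trace[i:i + window] - query)
                   for i in range(len(map_trace) - window)])
print("matched map index:", int(scores.argmin()))   # expected near 200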

Conference paper

Wang S, Clark R, Wen H, Trigoni N et al., 2018, End-to-end, sequence-to-sequence probabilistic visual odometry through deep neural networks, International Journal of Robotics Research, Vol: 37, Pages: 513-542, ISSN: 0278-3649

This paper studies visual odometry (VO) from the perspective of deep learning. After tremendous efforts in the robotics and computer vision communities over the past few decades, state-of-the-art VO algorithms have demonstrated incredible performance. However, since the VO problem is typically formulated as a pure geometric problem, one of the key features still missing from current VO systems is the capability to automatically gain knowledge and improve performance through learning. In this paper, we investigate whether deep neural networks can be effective and beneficial to the VO problem. An end-to-end, sequence-to-sequence probabilistic visual odometry (ESP-VO) framework is proposed for the monocular VO based on deep recurrent convolutional neural networks. It is trained and deployed in an end-to-end manner, that is, directly inferring poses and uncertainties from a sequence of raw images (video) without adopting any modules from the conventional VO pipeline. It can not only automatically learn effective feature representation encapsulating geometric information through convolutional neural networks, but also implicitly model sequential dynamics and relation for VO using deep recurrent neural networks. Uncertainty is also derived along with the VO estimation without introducing much extra computation. Extensive experiments on several datasets representing driving, flying and walking scenarios show competitive performance of the proposed ESP-VO to the state-of-the-art methods, demonstrating a promising potential of the deep learning technique for VO and verifying that it can be a viable complement to current VO systems.
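
The described architecture — convolutional features per frame pair, a recurrent model over the sequence, and a head emitting both pose and uncertainty — can be sketched as below. The heteroscedastic loss shown is one standard way to train such a head; the sizes and 6-DoF parameterisation are assumptions, not the ESP-VO code.

import torch
import torch.nn as nn

class VOSketch(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(              # features from stacked frame pairs
            nn.Conv2d(6, 16, 7, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.rnn = nn.LSTM(16, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 12)      # 6-DoF pose + 6 log-variances

    def forward(self, video):                  # (B, T, 6, H, W): consecutive pairs
        b, t = video.shape[:2]
        f = self.cnn(video.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(f)
        pose, log_var = self.head(out).chunk(2, dim=-1)
        return pose, log_var

model = VOSketch()
video = torch.rand(2, 5, 6, 64, 64)
pose, log_var = model(video)
# Heteroscedastic regression loss: uncertain predictions are down-weighted.
target = torch.zeros_like(pose)
loss = (((pose - target) ** 2) * torch.exp(-log_var) + log_var).mean()
print(pose.shape, float(loss))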

Journal article

Chen L, Wang S, Hu H, Gu D, Dukes I et al., 2018, Voice-directed autonomous navigation of a smart-wheelchair, Smart Wheelchairs and Brain-computer Interfaces: Mobile Assistive Technologies, Pages: 405-424, ISBN: 9780128128930

This chapter presents our research on voice-directed indoor navigation for a smart-wheelchair equipped with various sensors and an embedded computer system. The embedded sensors include two encoders, two laser scanners, a microphone and a ring of sonars. The wheelchair is not only capable of autonomous navigation in an indoor environment, but can also be directed from one place to another by voice commands given by the user. A number of experiments have been conducted to evaluate the autonomous navigation system of our smart-wheelchair; results clearly show the effectiveness and good performance of the proposed solution.

Book chapter

Wang Z, Rosa S, Xie L, Yang B, Wang S, Trigoni N, Markham A et al., 2018, Defo-Net: Learning Body Deformation using Generative Adversarial Networks, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE COMPUTER SOC, Pages: 2440-2447, ISSN: 1050-4729

Conference paper

Li R, Wang S, Long Z, Gu D et al., 2018, UnDeepVO: Monocular Visual Odometry through Unsupervised Deep Learning, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE COMPUTER SOC, Pages: 7286-7291, ISSN: 1050-4729

Conference paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
