Imperial College London

Dr Sen Wang

Faculty of Engineering, Department of Electrical and Electronic Engineering

Senior Lecturer

Contact

sen.wang

Location

Electrical Engineering, South Kensington Campus



Publications


89 results found

Hansen KF, Yao L, Ren K, Wang S, Liu W, Liu Y et al., 2023, Image segmentation in marine environments using convolutional LSTM for temporal context, Applied Ocean Research, Vol: 139, ISSN: 0141-1187

Unmanned surface vehicles (USVs) offer a wealth of possible applications, many of which are limited by the vehicle's level of autonomy. The development of efficient and robust computer vision algorithms is a key factor in improving this, as they permit autonomous detection, and thereby avoidance, of obstacles. Recent developments in convolutional neural networks (CNNs), and the collection of increasingly diverse datasets, present opportunities for improved computer vision algorithms requiring less data and computational power. One area of potential improvement is the utilisation of temporal context from USV camera feeds, in the form of sequential video frames, to consistently identify obstacles in diverse marine environments under challenging conditions. This paper documents the implementation of this approach through long short-term memory (LSTM) cells in existing CNN structures and the exploration of parameters affecting their efficacy. It is found that LSTM cells are promising for achieving improved performance; however, there are weaknesses associated with network training procedures and datasets. Several novel network architectures are presented and compared using a state-of-the-art benchmarking method. It is shown that LSTM cells allow for better model performance with fewer training iterations, but this advantage diminishes with additional training.
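
As a rough illustration of the building block named here, a convolutional LSTM cell replaces the matrix multiplications of a standard LSTM with convolutions, so the recurrent state is a feature map that carries temporal context across video frames. The PyTorch sketch below uses invented layer sizes and is not the paper's architecture:

# Hypothetical minimal ConvLSTM cell; illustrative only, shapes are invented.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        # One convolution emits all four gates at once.
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = torch.chunk(gates, 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)   # feature map carrying temporal context
        return h, c

B, T, C, H, W = 2, 5, 16, 64, 64
cell = ConvLSTMCell(C, 32)
h = torch.zeros(B, 32, H, W); c = torch.zeros_like(h)
clip = torch.randn(B, T, C, H, W)              # per-frame CNN features of a clip
for t in range(T):
    h, c = cell(clip[:, t], (h, c))            # h would feed a segmentation head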

Journal article

Dong Y, Huang W, Bharti V, Cox V, Banks A, Wang S, Zhao X, Schewe S, Huang X et al., 2023, Reliability Assessment and Safety Arguments for Machine Learning Components in System Assurance, ACM Transactions on Embedded Computing Systems, Vol: 22, ISSN: 1539-9087

The increasing use of Machine Learning (ML) components embedded in autonomous systems - so-called Learning-Enabled Systems (LESs) - has resulted in the pressing need to assure their functional safety. As for traditional functional safety, the emerging consensus within both industry and academia is to use assurance cases for this purpose. Typically, assurance cases support claims of reliability in support of safety, and can be viewed as a structured way of organising arguments and evidence generated from safety analysis and reliability modelling activities. While such assurance activities are traditionally guided by consensus-based standards developed from vast engineering experience, LESs pose new challenges in safety-critical applications due to the characteristics and design of ML models. In this article, we first present an overall assurance framework for LESs with an emphasis on quantitative aspects, e.g., breaking down system-level safety targets into component-level requirements and supporting claims stated in reliability metrics. We then introduce a novel model-agnostic Reliability Assessment Model (RAM) for ML classifiers that utilises the operational profile and robustness verification evidence. We discuss the model assumptions and the inherent challenges of assessing ML reliability uncovered by our RAM and propose solutions for practical use. Probabilistic safety argument templates at the lower ML component level are also developed based on the RAM. Finally, to evaluate and demonstrate our methods, we not only conduct experiments on synthetic/benchmark datasets but also scope our methods with case studies on simulated Autonomous Underwater Vehicles and physical Unmanned Ground Vehicles.
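
As a toy illustration of how an operational profile and robustness evidence might combine into a reliability claim (the names and numbers below are invented, and the paper's RAM is considerably richer), one can compute an expected probability of misbehaviour per demand as a profile-weighted sum of per-region failure rates:

# Illustrative sketch with assumed values; not the paper's code.
import numpy as np

op_profile   = np.array([0.50, 0.30, 0.15, 0.05])  # P(input falls in region i)
unrobustness = np.array([1e-4, 5e-4, 2e-3, 1e-2])  # verified misbehaviour rate per region

pmi = float(np.dot(op_profile, unrobustness))      # expected failures per demand
print(f"estimated pmi = {pmi:.2e}")                # feeds a component-level claim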

Journal article

Dong Y, Wu P, Wang S, Liu Y et al., 2023, ShipGAN: Generative Adversarial Network based simulation-to-real image translation for ships, Applied Ocean Research, Vol: 131, ISSN: 0141-1187

Recent advances in robotics and autonomous systems (RAS) have significantly improved the autonomy level of unmanned surface vehicles (USVs) and made them capable of undertaking demanding tasks in various environments. During the operation of USVs, apart from normal situations, it is unexpected scenes, such as busy waterways or navigation at dusk/nighttime, that pose the most danger to USVs, as these scenes are rarely seen during training. Their rare occurrence also makes the manual collection and recording of these scenes into a dataset difficult, expensive and inefficient, and the majority of existing publicly available datasets do not fully cover them. One plausible solution is to purposely generate these data using computer vision techniques with the assistance of high-fidelity simulations that can create various desirable motions/scenarios. However, the stylistic difference between simulation images and natural images causes a domain shift problem. Hence, there is a need for a method that can transfer the data distribution and styles of simulation images into the realistic domain. This paper proposes and evaluates a novel solution to fill this gap using a Generative Adversarial Network (GAN) based model, ShipGAN, to translate simulation images into realistic images. Experiments were carried out to investigate the feasibility of generating realistic images using GAN-based image translation models. The synthetic realistic images generated from simulation images were demonstrated to be reliable by object detection and image segmentation algorithms trained on natural images.
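
For readers unfamiliar with the underlying mechanics, the sketch below shows a deliberately minimal adversarial training step for image translation in PyTorch, with tiny invented networks and LSGAN-style losses; this is a generic illustration, not ShipGAN itself:

# Minimal sim-to-real GAN step; networks and losses are assumptions.
import torch
import torch.nn as nn

G = nn.Sequential(  # simulation image -> "realistic" image
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())
D = nn.Sequential(  # patch discriminator: real vs translated
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
mse = nn.MSELoss()

sim, real = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)

# Discriminator step: score real images as 1, translated images as 0.
fake = G(sim).detach()
loss_d = mse(D(real), torch.ones_like(D(real))) + \
         mse(D(fake), torch.zeros_like(D(fake)))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator so translated images look real.
fake = G(sim)
loss_g = mse(D(fake), torch.ones_like(D(fake)))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()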

Journal article

Li C, Yan F, Wang S, Zhuang Y et al., 2023, A 3D LiDAR odometry for UGVs using coarse-to-fine deep scene flow estimation, Transactions of the Institute of Measurement and Control, Vol: 45, Pages: 274-286, ISSN: 0142-3312

Light detection and ranging (LiDAR) odometry plays a crucial role in autonomous mobile robots and unmanned ground vehicles (UGVs). This paper presents a deep learning-based odometry system using two successive three-dimensional (3D) point clouds to estimate their scene flow and then predict their relative pose. The network consumes continuous 3D point clouds directly and outputs their scene flow and an uncertainty mask in a coarse-to-fine fashion. A pose estimation layer without trainable parameters is designed to compute the pose from the scene flow. We also introduce a scan-to-map optimization algorithm to enhance the robustness and accuracy of the system. Our experiments on the KITTI odometry dataset and our campus dataset demonstrate the effectiveness of the proposed deep learning-based point cloud odometry.
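
A pose layer without trainable parameters can be realised, for example, with the classical SVD-based (Kabsch) alignment between the points and their flowed positions; the numpy sketch below illustrates that idea under the assumption of noise-free flow, and is not claimed to match the paper's exact formulation:

# Recover the rigid transform aligning P to P + F via SVD (Kabsch).
import numpy as np

def pose_from_scene_flow(P, F):
    Q = P + F                                    # flowed points
    mu_p, mu_q = P.mean(0), Q.mean(0)
    H = (P - mu_p).T @ (Q - mu_q)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ S @ U.T
    t = mu_q - R @ mu_p
    return R, t

P = np.random.rand(100, 3)
a = 0.1                                          # synthetic rotation about z
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.5, 0.0, -0.2])
F = P @ R_true.T + t_true - P                    # synthetic rigid scene flow
R, t = pose_from_scene_flow(P, F)
assert np.allclose(R, R_true) and np.allclose(t, t_true)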

Journal article

Hong Z, Petillot Y, Zhang K, Xu S, Wang S et al., 2023, Large-Scale Radar Localization using Online Public Maps, Pages: 3990-3996, ISSN: 1050-4729

In this paper, we propose using online public maps, e.g., OpenStreetMap (OSM), for large-scale radar-based localization without needing a prior sensing map. This can potentially extend the localization system anywhere worldwide without building, saving, or maintaining a sensing map, as long as an online public map covers the operating area. Existing methods using OSM exploit only the route network or the semantic information, and these two sources have not been combined in previous works; our proposed system fuses them to improve localization accuracy. Our experiments on three open datasets, collected from three different continents, show that the proposed system outperforms the state-of-the-art localization methods, reducing position errors by up to 50%. We release an open-source implementation for the community.

Conference paper

Rao Y, Ju Y, Wang S, Gao F, Fan H, Dong J et al., 2023, Learning Enriched Feature Descriptor for Image Matching and Visual Measurement, IEEE Transactions on Instrumentation and Measurement, Vol: 72, ISSN: 0018-9456

Recent feature descriptor research has witnessed tremendous progress with the development of deep neural networks. However, most existing descriptors solely focus on learning strong discriminativeness with deep-invariant features, neglecting their representation ability and the rich hierarchical clues hidden in images, which could further establish high-quality matches via implicit hierarchical comparisons in a rich representative descriptor space. In this article, we consider both the discriminative and representation ability of feature descriptors to enrich the descriptor space with a novel representative learning framework. On the one hand, we introduce the histogram of oriented gradients (HOG) as a prior term to guide our descriptor to learn a powerful representation and robustness in a self-supervised manner. On the other hand, we present an adaptive triplet loss (ATL), which penalizes the triplet loss (TL) according to the descriptor matching distances in order to encourage our descriptor to learn strong discriminativeness. Moreover, to fully use the information encapsulated in images and boost the representation ability, we propose a novel HIerarchical Feature Transformer Network (HIFT), which derives dense descriptions from semantic and cross-scale-enhanced hierarchical features in a local-to-global manner. Extensive experiments on popular feature matching and visual localization benchmarks show that HIFT achieves highly competitive performance compared with the state-of-the-art methods. Applications to the visual measurement tasks of visual 3D reconstruction and ego-motion estimation also demonstrate the high generalization ability of our method. Our model is available at https://github.com/Ray2OUC/HIFT.
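
The idea of penalising the triplet loss according to matching distances can be made concrete with a short PyTorch sketch; the weighting scheme below is an invented stand-in for the paper's ATL, shown only to illustrate the mechanism:

# Hedged sketch: harder (worse-separated) triplets receive larger weights.
import torch
import torch.nn.functional as F

def adaptive_triplet_loss(anchor, pos, neg, margin=1.0):
    d_pos = F.pairwise_distance(anchor, pos)     # descriptor matching distances
    d_neg = F.pairwise_distance(anchor, neg)
    base = F.relu(d_pos - d_neg + margin)        # standard triplet loss term
    w = torch.sigmoid(d_pos - d_neg)             # assumed adaptive weighting
    return (w.detach() * base).mean()

a, p, n = (torch.randn(8, 128, requires_grad=True) for _ in range(3))
loss = adaptive_triplet_loss(a, p, n)
loss.backward()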

Journal article

Xu S, Willners JS, Hong Z, Zhang K, Petillot YR, Wang S et al., 2023, Observability-Aware Active Extrinsic Calibration of Multiple Sensors, Pages: 2091-2097, ISSN: 1050-4729

The extrinsic parameters play a crucial role in multi-sensor fusion, such as visual-inertial Simultaneous Localization and Mapping (SLAM), as they enable the accurate alignment and integration of measurements from different sensors. However, extrinsic calibration is challenging in scenarios such as underwater, where in-view structures are scarce and visibility is limited, causing incorrect extrinsic calibration due to insufficient motion on all degrees of freedom. In this paper, we propose an entropy-based active extrinsic calibration algorithm that leverages observability analysis and information entropy to enhance the accuracy and reliability of extrinsic calibration. It determines the system observability numerically using the singular value decomposition (SVD) of the Fisher Information Matrix (FIM). Furthermore, when the extrinsic parameters are not fully observable, our method actively searches for the next best motion to recover the system's observability via entropy-based optimization. Experimental results on synthetic data, in simulation, and on an actual underwater vehicle verify that the proposed method is able to avoid calibration failure while improving calibration accuracy and reliability.
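
The numerical observability test described here is straightforward to sketch: form the FIM from a stacked measurement Jacobian, take its SVD, and flag directions whose singular values are near zero. The numpy snippet below uses a random stand-in Jacobian J and an invented threshold:

# Observability check via SVD of the FIM; J and the threshold are assumptions.
import numpy as np

J = np.random.randn(200, 6)        # rows: measurements; cols: extrinsic dofs
J[:, 5] = 0.0                      # e.g. no motion exciting one dof
FIM = J.T @ J
_, s, Vt = np.linalg.svd(FIM)

thresh = 1e-6 * s[0]
unobs = [Vt[i] for i in range(len(s)) if s[i] < thresh]
if unobs:
    print(f"{len(unobs)} unobservable direction(s); plan new excitation motion")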

Conference paper

Ochal M, Patacchiola M, Vazquez J, Storkey A, Wang S et al., 2023, Few-Shot Learning With Class Imbalance, IEEE Transactions on Artificial Intelligence

Few-Shot Learning (FSL) algorithms are commonly trained through Meta-Learning (ML), which exposes models to batches of tasks sampled from a meta-dataset to mimic tasks seen during evaluation. However, standard training procedures overlook the real-world dynamics where classes commonly occur at different frequencies. While it is generally understood that class imbalance harms the performance of supervised methods, limited research examines the impact of imbalance on the FSL evaluation task. Our analysis compares 10 state-of-the-art meta-learning and FSL methods on different imbalance distributions and rebalancing techniques. Our results reveal that 1) some FSL methods display a natural disposition against imbalance, while most other approaches suffer a performance drop of up to 17% compared to the balanced task without appropriate mitigation; 2) many meta-learning algorithms will not automatically learn to balance from exposure to imbalanced training tasks; and 3) classical rebalancing strategies, such as random oversampling, can still be very effective, leading to state-of-the-art performance, and should not be overlooked.
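
As a concrete reference for the rebalancing baseline mentioned in 3), random oversampling of an imbalanced few-shot support set fits in a few lines of numpy (the shapes below are invented for illustration):

# Resample minority classes with replacement up to the majority count.
import numpy as np

def oversample(X, y, rng=np.random.default_rng(0)):
    classes, counts = np.unique(y, return_counts=True)
    k = counts.max()
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=k, replace=True)
        for c in classes])
    return X[idx], y[idx]

X = np.random.randn(1 + 5, 64)             # 1-shot class vs 5-shot class
y = np.array([0] + [1] * 5)
Xb, yb = oversample(X, y)                  # now 5 samples per class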

Journal article

Rao Y, Liu W, Li K, Fan H, Wang S, Dong J et al., 2023, Deep Color Compensation for Generalized Underwater Image Enhancement, IEEE Transactions on Circuits and Systems for Video Technology, ISSN: 1051-8215

Underwater images suffer from quality degradation due to underwater light absorption and scattering. It remains challenging to enhance underwater images using deep learning-based methods owing to the scarcity of real-world underwater images and their enhanced counterparts. Although existing works manually select well-enhanced images as reference images to train enhancement networks in an end-to-end manner, their performance tends to be inferior in some scenarios. We argue that the manually selected reference images cannot approximate the ground truth perfectly, leading to imbalanced learning and domain shift in enhancement networks. To address this issue, we analyse widely used underwater datasets from the perspective of color spectrum distribution and, surprisingly, find that the enhanced reference images have a sound color spectrum distribution compared to in-air datasets. Based on this observation, instead of directly learning the enhancement mapping, we propose a novel methodology to learn color compensation for general purposes. Specifically, we present a probabilistic color compensation network that estimates the probabilistic distribution of colors by multi-scale volumetric fusion of texture and color features. We further propose a novel two-stage enhancement framework that first performs color compensation and then enhancement, and is highly flexible: it can be integrated with an existing enhancement method without tuning. Extensive experiments on underwater image enhancement across various challenging scenarios show that our proposed approach consistently improves the results of popular conventional and learning-based methods by a significant margin. Moreover, our enhanced images achieve superior performance on underwater salient object detection and visual 3D reconstruction, demonstrating that our method can successfully break through the generalization bottleneck of existing learning-based enhancement models. Our implementation will be made available.
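
The kind of color spectrum analysis used to motivate the method can be sketched simply: per-channel intensity histograms over a dataset, which can then be compared between underwater and in-air collections. The numpy snippet below is illustrative only, with random stand-in images:

# Per-channel color spectrum of a dataset; inputs are invented placeholders.
import numpy as np

def color_spectrum(images, bins=32):
    # images: (N, H, W, 3) uint8; returns one normalised histogram per channel
    hists = [np.histogram(images[..., c], bins=bins, range=(0, 255),
                          density=True)[0] for c in range(3)]
    return np.stack(hists)

underwater = np.random.randint(0, 256, (10, 64, 64, 3), dtype=np.uint8)
print(color_spectrum(underwater).shape)    # (3, 32): R/G/B spectra to compare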

Journal article

Zhang K, Hong Z, Xu S, Wang S et al., 2022, CURL: Continuous, Ultra-compact Representation for LiDAR, Robotics: Science and Systems 2022, Publisher: Robotics: Science and Systems Foundation

Conference paper

Luo D, Zhuang Y, Wang S, 2022, Hybrid sparse monocular visual odometry with online photometric calibration, The International Journal of Robotics Research, ISSN: 0278-3649

Most monocular visual Simultaneous Localization and Mapping (vSLAM) and visual odometry (VO) algorithms focus on either feature-based methods or direct methods. The hybrid (semi-direct) approach is less studied, although it is equally important. In this paper, a hybrid sparse visual odometry (HSO) algorithm with online photometric calibration is proposed for monocular vision. HSO introduces two novel measures, namely direct image alignment with adaptive mode selection and image photometric description using ratio factors, to enhance robustness against dramatic image intensity changes and motion blur. Moreover, HSO is able to establish pose constraints between keyframes far apart in time and space by using KLT tracking enhanced with local-global brightness consistency. The convergence speed of candidate map points is adopted as the basis for keyframe selection, which strengthens the coordination between the front end and the back end. Photometric calibration is elegantly integrated into the VO system, working in tandem: (1) photometric interference from the camera, such as vignetting and changes in exposure time, is accurately calibrated and compensated in HSO, thereby improving the accuracy and robustness of VO; (2) conversely, VO provides pre-calculated data for the photometric calibration algorithm, which reduces resource consumption and improves the estimation accuracy of the photometric parameters. Extensive experiments are performed on various public datasets to evaluate the proposed HSO against the state-of-the-art monocular vSLAM/VO and online photometric calibration methods. The results show that HSO achieves superior performance on VO and photometric calibration in terms of accuracy, robustness, and efficiency, being comparable with the state-of-the-art VO/vSLAM systems. We open source HSO for the benefit of the community.
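
For context, online photometric calibration typically assumes the standard image formation model I(x) = g(t * V(x) * B(x)), with response function g, exposure time t, vignette V and scene radiance B; calibration estimates g, t and V so they can be inverted. The numpy sketch below uses an assumed gamma response and radial vignette purely for illustration:

# Toy photometric model and its inversion; g, t and V are assumptions here.
import numpy as np

def render(B, t, V, gamma=2.2):
    g = lambda e: np.clip(e, 0, 1) ** (1 / gamma)       # assumed response curve
    return g(t * V * B)

def correct(I, t, V, gamma=2.2):
    g_inv = lambda i: i ** gamma                        # invert response ...
    return g_inv(I) / (t * V)                           # ... then exposure/vignette

H, W = 48, 64
yy, xx = np.mgrid[0:H, 0:W]
r2 = ((xx - W / 2) ** 2 + (yy - H / 2) ** 2) / (W / 2) ** 2
V = 1.0 / (1.0 + 0.5 * r2)                              # radial vignette falloff
B = np.random.rand(H, W)
I = render(B, t=0.5, V=V)
assert np.allclose(correct(I, 0.5, V), B, atol=1e-6)    # radiance recovered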

Journal article

Hong Z, Petillot Y, Wallace A, Wang S et al., 2022, RadarSLAM: A robust simultaneous localization and mapping system for all weather conditions, The International Journal of Robotics Research, Vol: 41, Pages: 519-542, ISSN: 0278-3649

A Simultaneous Localization and Mapping (SLAM) system must be robust to support long-term mobile vehicle and robot applications. However, camera and LiDAR based SLAM systems can be fragile when facing challenging illumination or weather conditions which degrade the utility of imagery and point cloud data. Radar, whose operating electromagnetic spectrum is less affected by environmental changes, is promising, although its distinct sensor model and noise characteristics bring open challenges when exploited for SLAM. This paper studies the use of a Frequency Modulated Continuous Wave radar for SLAM in large-scale outdoor environments. We propose a full radar SLAM system, including a novel radar motion estimation algorithm that leverages radar geometry for reliable feature tracking. It also optimally compensates for motion distortion and estimates pose by joint optimization. Its loop closure component is designed to be simple yet efficient for radar imagery by capturing and exploiting the structural information of the surrounding environment. Extensive experiments on three public radar datasets, ranging from city streets and residential areas to countryside and highways, show competitive accuracy and reliability of the proposed radar SLAM system compared to the state-of-the-art LiDAR, vision and radar methods. The results show that our system is technically viable in achieving reliable SLAM in extreme weather conditions on the RADIATE Dataset, for example, heavy snow and dense fog, demonstrating the promising potential of using radar for all-weather localization and mapping.

Journal article

Gao H, Liang B, Oboe R, Shi Y, Wang S, Tomizuka M et al., 2022, Guest Editorial Introduction to the Focused Section on Adaptive Learning and Control for Advanced Mechatronics Systems, IEEE/ASME Transactions on Mechatronics, Vol: 27, Pages: 607-610, ISSN: 1083-4435

Journal article

Fraser H, Wang S, 2022, Monocular Depth Estimation for Equirectangular Videos, Pages: 5293-5299, ISSN: 2153-0858

Depth estimation from panoramic imagery has received minimal attention in contrast to standard perspective imagery, which constitutes the majority of the literature on this key research topic. The vast, and frequently complete, field of view provided by such panoramic images makes them appealing for a variety of applications, including robotics, autonomous vehicles, and virtual reality. Consumer-level camera systems capable of capturing such images are likewise growing more affordable, and may be desirable complements to autonomous systems' sensor packages. They do, however, introduce significant distortions and violate some assumptions of perspective-view images. Additionally, many state-of-the-art algorithms are not designed for the equirectangular projection model, and their depth estimation performance tends to degrade when applied to panoramic imagery. This paper presents a novel technique for adapting view synthesis-based depth estimation models to omnidirectional vision. Specifically, we: 1) integrate a 'virtual' spherical camera model into the training pipeline, facilitating model training; 2) exploit spherical convolutional layers to perform convolution operations on equirectangular images, handling the severe distortion; and 3) propose an optical flow-based masking scheme to mitigate the effect of unwanted pixels during training. Our qualitative and quantitative results demonstrate that these simple yet efficient designs result in significantly improved depth estimation when compared to previous approaches.
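
The 'virtual' spherical camera model in 1) rests on the equirectangular geometry: each pixel maps to a unit ray via its longitude and latitude rather than through a pinhole. A numpy sketch of that mapping follows (the axis conventions are an assumption):

# Map equirectangular pixels to unit viewing rays.
import numpy as np

def equirect_rays(h, w):
    # longitude spans [-pi, pi]; latitude spans [pi/2, -pi/2] top to bottom
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    return np.stack([np.cos(lat) * np.sin(lon),       # x
                     np.sin(lat),                     # y (up)
                     np.cos(lat) * np.cos(lon)], -1)  # z (forward)

rays = equirect_rays(256, 512)                        # (H, W, 3) unit vectors
assert np.allclose(np.linalg.norm(rays, axis=-1), 1.0)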

Conference paper

Wang C, Zhang Q, Wang X, Xu S, Petillot Y, Wang S et al., 2022, Multi-Task Reinforcement Learning based Mobile Manipulation Control for Dynamic Object Tracking and Grasping, Pages: 34-40

Agile control of a mobile manipulator is challenging because of the high complexity arising from the coupling of the robotic system and the unstructured working environment. Tracking and grasping a dynamic object with a random trajectory is even harder. In this paper, a multi-task reinforcement learning-based mobile manipulation control framework is proposed to achieve general dynamic object tracking and grasping. Several basic types of dynamic trajectories are chosen as the training set for the task. To improve policy generalization in practice, random noise and dynamics randomization are introduced during the training process. Extensive experiments show that our trained policy can adapt to unseen random dynamic trajectories with about 0.1 m tracking error and a 75% grasping success rate for dynamic objects. The trained policy can also be successfully deployed on a real mobile manipulator.
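
Dynamics randomization of the kind mentioned here is usually implemented by resampling physical parameters at every episode reset and perturbing observations; the parameter names and ranges below are invented for illustration:

# Hedged sketch of dynamics randomization and observation noise.
import numpy as np

rng = np.random.default_rng(0)

def sample_episode_dynamics():
    return {
        "base_mass":     rng.uniform(18.0, 25.0),   # kg, resampled per episode
        "joint_damping": rng.uniform(0.8, 1.2),     # multiplier on nominal
        "friction":      rng.uniform(0.5, 1.5),
    }

def noisy_observation(obs, sigma=0.01):
    return obs + rng.normal(0.0, sigma, size=obs.shape)  # sensor noise injection

dyn = sample_episode_dynamics()            # apply to the simulator at each reset
obs = noisy_observation(np.zeros(12))      # feed the policy perturbed states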

Conference paper

Wang C, Zhang Q, Wang X, Xu S, Petillot Y, Wang S et al., 2022, Autonomous Underwater Robotic Grasping Research Based on Navigation and Hierarchical Operation, Pages: 176-182

This paper proposes a new framework for the autonomous underwater operation of underwater vehicle manipulator systems (UVMS), which is modular, standardized, and hierarchical. The framework consists of three subsystems: perception, navigation, and grasping. The perception module is based on an underwater stereo vision system, which provides effective environment and target information for the navigation and grasping modules. The navigation module is based on ORBSLAM and acoustic odometry, which generates the global map and plans a trajectory for the initial stage. The grasping module generates the target grasping pose based on the extracted point cloud and the current robot state, and then executes the grasping task based on the motion planner. The proposed system is tested on several underwater target grasping tasks in a water tank, demonstrating the effectiveness of the system.

Conference paper

Wang X, Wang S, Liang X, Zhao D, Huang J, Xu X, Dai B, Miao Q et al., 2022, Deep Reinforcement Learning: A Survey, IEEE Transactions on Neural Networks and Learning Systems, ISSN: 2162-237X

Deep reinforcement learning (DRL) integrates the feature representation ability of deep learning with the decision-making ability of reinforcement learning so that it can achieve powerful end-to-end learning control capabilities. In the past decade, DRL has made substantial advances in many tasks that require perceiving high-dimensional input and making optimal or near-optimal decisions. However, there are still many challenging problems in the theory and applications of DRL, especially in learning control tasks with limited samples, sparse rewards, and multiple agents. Researchers have proposed various solutions and new theories to solve these problems and promote the development of DRL. In addition, deep learning has stimulated the further development of many subfields of reinforcement learning, such as hierarchical reinforcement learning (HRL), multiagent reinforcement learning, and imitation learning. This article gives a comprehensive overview of the fundamental theories, key algorithms, and primary research domains of DRL. In addition to value-based and policy-based DRL algorithms, the advances in maximum entropy-based DRL are summarized. The future research topics of DRL are also analyzed and discussed.
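
As a minimal anchor for the value-based family surveyed here, tabular Q-learning performs the temporal-difference update that deep Q-networks scale up by replacing the table with a neural network (illustrative refresher only, with invented sizes):

# One tabular Q-learning update step.
import numpy as np

n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99

def q_update(s, a, r, s_next):
    td_target = r + gamma * Q[s_next].max()       # bootstrap from best next action
    Q[s, a] += alpha * (td_target - Q[s, a])      # move estimate toward target

q_update(s=0, a=1, r=1.0, s_next=5)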

Journal article

Xu S, Luczynski T, Willners JS, Hong Z, Zhang K, Petillot YR, Wang S et al., 2021, Underwater Visual Acoustic SLAM with Extrinsic Calibration, 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE

Conference paper

Vargas E, Scona R, Willners JS, Luczynski T, Cao Y, Wang S, Petillot YR et al., 2021, Robust Underwater Visual SLAM Fusing Acoustic Sensing, 2021 IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE

Conference paper

Sheeny M, De Pellegrin E, Mukherjee S, Ahrabian A, Wang S, Wallace A et al., 2021, RADIATE: A Radar Dataset for Automotive Perception in Bad Weather, 2021 IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE

Conference paper

Li R, Wang S, Gu D, 2021, DeepSLAM: A Robust Monocular SLAM System With Unsupervised Deep Learning, IEEE Transactions on Industrial Electronics, Vol: 68, Pages: 3577-3587, ISSN: 0278-0046

Journal article

Li C, Wang S, Zhuang Y, Yan F et al., 2021, Deep Sensor Fusion Between 2D Laser Scanner and IMU for Mobile Robot Localization, IEEE Sensors Journal, Vol: 21, Pages: 8501-8509, ISSN: 1530-437X

Journal article

Mukherjee S, Wallace AM, Wang S, 2021, Predicting Vehicle Behavior Using Automotive Radar and Recurrent Neural Networks, IEEE Open Journal of Intelligent Transportation Systems, Vol: 2, Pages: 254-268

Journal article

Cao F, Yan F, Wang S, Zhuang Y, Wang W et al., 2021, Season-Invariant and Viewpoint-Tolerant LiDAR Place Recognition in GPS-Denied Environments, IEEE Transactions on Industrial Electronics, Vol: 68, Pages: 563-574, ISSN: 0278-0046

Journal article

Xie L, Miao Y, Wang S, Blunsom P, Wang Z, Chen C, Markham A, Trigoni N et al., 2021, Learning With Stochastic Guidance for Robot Navigation, IEEE Transactions on Neural Networks and Learning Systems, Vol: 32, Pages: 166-176, ISSN: 2162-237X

Journal article

Antonelli G, Indiveri G, Barrera C, Caccia M, Dooly G, Flavin N, Ferreira F, Miskovic N, Furlong M, Kopf A, Bachmayer R, Ludvigsen M, Opderbecke J, Pascoal A, Petroccia R, Alves J, Ridao P, Vallicrosa G, De Sousa JB, Costa M, Wang S et al., 2021, Advancing the EU Marine Robotics Research Infrastructure Network: The EU Marine Robots project, ISSN: 0197-7385

This paper provides an overview of the H2020 Marine robotics research infrastructure network (EU Marine Robots) project. The overview is organized around the three main activities of infrastructure projects: i) networking activities (NA); ii) transnational access (TNA), in which access to the partners' marine robotic infrastructures is granted through competitive calls; and iii) joint research activities (JRA), aimed at making robotic infrastructures more operable and transitioning new systems and technologies to field operations. The strategic significance of the project and future developments are discussed in the conclusions.

Conference paper

Wang C, Zhang Q, Li S, Wang X, Lane D, Petillot Y, Wang S et al., 2021, Learning-Based Underwater Autonomous Grasping via 3D Point Cloud, ISSN: 0197-7385

Underwater autonomous grasping is a challenging task for robotic research. In this paper, we propose a learning-based underwater grasping method using 3D point clouds generated from an underwater stereo camera. First, we use the Pinax model for accurate refraction correction of a stereo camera in a flat-pane housing. Second, a dense point cloud of the target is generated using the calibrated stereo images. An improved Grasp Pose Detection (GPD) method is then developed to generate candidate grasping poses and select the best one based on kinematic constraints. Finally, an optimal trajectory is planned to finish the grasping task. Experiments in a water tank have demonstrated the effectiveness of our method.

Conference paper

Willners JS, Carlucho I, Katagiri S, Lemoine C, Roe J, Stephens D, Luczynski T, Xu S, Carreno Y, Pairet E, Barbalata C, Petillot Y, Wang S et al., 2021, From market-ready ROVs to low-cost AUVs, ISSN: 0197-7385

Autonomous Underwater Vehicles (AUVs) are becoming increasingly important for different types of industrial applications. The generally high cost of AUVs restricts access to them and therefore slows advances in research and technological development. However, recent advances have led to lower-cost commercially available Remotely Operated Vehicles (ROVs), which present a platform that can be enhanced to enable a high degree of autonomy, similar to that of a high-end AUV. In this article, we present how a low-cost commercial off-the-shelf ROV can be used as a foundation for developing versatile and affordable AUVs. We introduce the required hardware modifications to obtain a system capable of autonomous operations as well as the necessary software modules. Additionally, we present a set of use cases exhibiting the versatility of the developed platform for intervention and mapping tasks.

Conference paper

Willners JS, Carreno Y, Xu S, Luczynski T, Katagiri S, Roe J, Pairet È, Petillot Y, Wang S et al., 2021, Robust underwater SLAM using autonomous relocalisation, Pages: 273-280

This paper presents a robust underwater simultaneous localisation and mapping (SLAM) framework using autonomous relocalisation. The proposed approach strives to maintain a single consistent map during operation and updates its current plan when the SLAM loses feature tracking. The updated plan traverses viewpoints that are likely to aid in merging the current map into the global map. We present the sub-systems of the framework: the SLAM, viewpoint generation, and high-level planning. In-water experiments show the advantage of our approach deployed on an autonomous underwater vehicle (AUV) performing inspections.

Conference paper

Wei B, Xu W, Luo C, Zoppi G, Ma D, Wang S et al., 2020, SolarSLAM: Battery-free loop closure for indoor localisation, Pages: 4485-4490, ISSN: 2153-0858

In this paper, we propose SolarSLAM, a battery-free loop closure method for indoor localisation. Inertial Measurement Unit (IMU) based indoor localisation methods have been widely used due to the ubiquity of IMUs in mobile devices, such as mobile phones, smartwatches and wearable bands. However, they suffer from unavoidable long-term drift. To mitigate the localisation error, many loop closure solutions have been proposed using sophisticated sensors, such as cameras and lasers. Despite achieving high-precision localisation performance, these sensors consume a huge amount of energy. Unlike those solutions, the proposed SolarSLAM takes advantage of an energy-harvesting solar cell as a sensor and achieves effective battery-free loop closure. The proposed method uses key-point dynamic time warping for detecting loops and robust simultaneous localisation and mapping (SLAM) as the optimiser to remove falsely recognised loop closures. Extensive evaluations in real environments demonstrate the advantageous photocurrent characteristics for indoor localisation and the good localisation accuracy of the proposed method.
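
The matching primitive behind key-point dynamic time warping can be illustrated with the textbook DTW recurrence; the snippet below compares two short photocurrent traces with invented values and an invented decision threshold:

# Minimal DTW distance between two 1D sequences (illustrative sketch).
import numpy as np

def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

now    = np.array([0.2, 0.8, 0.9, 0.3, 0.1])   # current solar-cell trace
stored = np.array([0.2, 0.7, 0.9, 0.9, 0.3])   # trace from a visited corridor
if dtw(now, stored) < 1.0:                      # threshold is an assumption
    print("loop closure candidate")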

Conference paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
