Imperial College London

Dr Sen Wang

Faculty of Engineering, Department of Electrical and Electronic Engineering

Senior Lecturer

Contact

 

sen.wang

Location

 

Electrical Engineering, South Kensington Campus


 

Publications


93 results found

Dong Y, Zhao X, Wang S, Huang X et al., 2024, Reachability Verification Based Reliability Assessment for Deep Reinforcement Learning Controlled Robotics and Autonomous Systems, IEEE Robotics and Automation Letters, Vol: 9, Pages: 3299-3306

Deep Reinforcement Learning (DRL) has achieved impressive performance in robotics and autonomous systems (RAS). A key challenge to its deployment in real-life operations is the presence of spuriously unsafe DRL policies. Unexplored states may lead the agent to make wrong decisions that could result in hazards, especially in applications where DRL-trained end-to-end controllers govern the behaviour of RAS. This letter proposes a novel quantitative reliability assessment framework for DRL-controlled RAS, leveraging verification evidence generated from formal reliability analysis of neural networks. A two-level verification framework is introduced to check the safety property with respect to inaccurate observations that are due to, e.g., environmental noise and state changes. Reachability verification tools are leveraged locally to generate safety evidence of trajectories. In contrast, at the global level, we quantify the overall reliability as an aggregated metric of local safety evidence, corresponding to a set of distinct tasks and their occurrence probabilities. The effectiveness of the proposed verification framework is demonstrated and validated via experiments on real RAS.

Journal article
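
A minimal sketch of the global aggregation step described above: overall reliability as local safety evidence weighted by task occurrence probabilities. All numbers are illustrative, not the paper's data or tooling.

import numpy as np

# Hypothetical local safety evidence: for each distinct task, the
# probability that trajectories stay safe, as certified by a local
# reachability verification tool.
local_safety = np.array([0.999, 0.95, 0.98, 0.90])

# Hypothetical operational profile: occurrence probability of each task.
task_prob = np.array([0.4, 0.3, 0.2, 0.1])
assert np.isclose(task_prob.sum(), 1.0)

# Global level: aggregate local evidence into one reliability metric.
reliability = float(np.dot(task_prob, local_safety))
print(f"estimated overall reliability: {reliability:.4f}")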

Ochal M, Patacchiola M, Vazquez J, Storkey A, Wang S et al., 2023, Few-shot learning with class imbalance, IEEE Transactions on Artificial Intelligence, Vol: 4, Pages: 1348-1358, ISSN: 2691-4581

Impact Statement: Large datasets can be costly to obtain and annotate [LeCun et al. 2015]. This is limiting in many realistic situations, for example, when some objects are rarely encountered or when it is necessary to perform real-time operations [Ochal et al. 2020], [Guan et al. 2020], [Zhang et al. 2020], [Massiceti et al. 2021]. Few-shot learning (FSL) alleviates this burden by training a model to rapidly adapt with a limited amount of data. However, recent progress in the field has focused on the idealized scenario with balanced classes, which is easily compromised in the real world. To this end, we evaluate various few-shot and meta-learning methods across multiple class imbalance distributions and offer practical advice and best practices for dealing with these more realistic settings. We hope our work will help narrow the gap between theoretical and real-world performance in FSL.

Abstract: Few-shot learning (FSL) algorithms are commonly trained through meta-learning (ML), which exposes models to batches of tasks sampled from a meta-dataset to mimic tasks seen during evaluation. However, the standard training procedures overlook the real-world dynamics where classes commonly occur at different frequencies. While it is generally understood that class imbalance harms the performance of supervised methods, limited research examines the impact of imbalance on the FSL evaluation task. Our analysis compares ten state-of-the-art ML and FSL methods on different imbalance distributions and rebalancing techniques. Our results reveal that: 1) some FSL methods display a natural disposition against imbalance while most other approaches produce a performance drop of up to 17% compared to the balanced task without the appropriate mitigation; 2) many ML algorithms will not automatically learn to balance from exposure to imbalanced training tasks; 3) classical rebalancing strategies, such as random oversampling, can still be very effective, leading to state-of-the-a

Journal article
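
Random oversampling, one of the classical rebalancing strategies the paper finds still very effective, can be sketched for an imbalanced few-shot support set as follows (toy data; names and sizes are illustrative, not the paper's protocol):

import numpy as np

rng = np.random.default_rng(0)

def oversample_support(features, labels):
    # Duplicate minority-class shots (sampling with replacement) until
    # every class matches the largest class's shot count.
    classes, counts = np.unique(labels, return_counts=True)
    k_max = counts.max()
    idx = np.concatenate([
        rng.choice(np.flatnonzero(labels == c), size=k_max, replace=True)
        for c in classes
    ])
    return features[idx], labels[idx]

# Toy imbalanced 3-way task with 5, 2 and 1 support shots per class.
feats = rng.normal(size=(8, 4))
labs = np.array([0, 0, 0, 0, 0, 1, 1, 2])
bal_feats, bal_labs = oversample_support(feats, labs)
print(np.unique(bal_labs, return_counts=True))  # every class now has 5 shots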

Hansen KF, Yao L, Ren K, Wang S, Liu W, Liu Y et al., 2023, Image segmentation in marine environments using convolutional LSTM for temporal context, Applied Ocean Research, Vol: 139, ISSN: 0141-1187

Journal article

Dong Y, Huang W, Bharti V, Cox V, Banks A, Wang S, Zhao X, Schewe S, Huang X et al., 2023, Reliability Assessment and Safety Arguments for Machine Learning Components in System Assurance, ACM Transactions on Embedded Computing Systems, Vol: 22, ISSN: 1539-9087

Journal article

Dong Y, Wu P, Wang S, Liu Y et al., 2023, ShipGAN: Generative Adversarial Network based simulation-to-real image translation for ships, Applied Ocean Research, Vol: 131, ISSN: 0141-1187

Journal article

Nicolay P, Petillot Y, Marfeychuk M, Wang S, Carlucho I et al., 2023, Enhancing AUV Autonomy with Model Predictive Path Integral Control, ISSN: 0197-7385

Autonomous underwater vehicles (AUVs) play a crucial role in surveying marine environments, carrying out underwater inspection tasks, and ocean exploration. However, in order to ensure that the AUV is able to carry out its mission successfully, a control system capable of adapting to changing environmental conditions is required. Furthermore, to ensure the safe operation of the robotic platform, the onboard controller should be able to operate under certain constraints. In this work, we investigate the feasibility of Model Predictive Path Integral Control (MPPI) for the control of an AUV. We utilise a non-linear model of the AUV to propagate the samples of the MPPI, allowing us to compute the control action in real time. We provide a detailed evaluation of the effect of the main hyperparameters on the performance of the MPPI controller. Furthermore, we compare the performance of the proposed method with classical PID and cascade PID approaches, demonstrating the superiority of our proposed controller. Finally, we present results where environmental constraints are added and show how MPPI can handle them by simply incorporating those constraints in the cost function.

Conference paper
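
The core MPPI update evaluated in the paper can be sketched as follows. The toy dynamics and cost below are placeholders for the AUV's non-linear model and mission cost, not the authors' implementation; constraints would enter as extra penalty terms in the cost.

import numpy as np

rng = np.random.default_rng(1)

def dynamics(x, u, dt=0.1):
    # Toy 1D double integrator standing in for the AUV model;
    # MPPI only needs to propagate sampled controls through it.
    pos, vel = x
    return np.array([pos + vel * dt, vel + u * dt])

def cost(x, u):
    # Quadratic regulation cost plus control effort; environmental
    # constraints would be added here as penalty terms.
    return x[0] ** 2 + 0.1 * x[1] ** 2 + 0.01 * u ** 2

def mppi_update(x0, u_nom, n_samples=256, lam=1.0, sigma=0.5):
    # One MPPI step: sample perturbed control sequences, roll them out,
    # and combine them with path-integral (softmin) weights.
    horizon = len(u_nom)
    noise = rng.normal(0.0, sigma, size=(n_samples, horizon))
    costs = np.zeros(n_samples)
    for k in range(n_samples):
        x = np.array(x0, dtype=float)
        for t in range(horizon):
            u = u_nom[t] + noise[k, t]
            costs[k] += cost(x, u)
            x = dynamics(x, u)
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    return u_nom + w @ noise  # importance-weighted control update

u_plan = mppi_update(np.array([1.0, 0.0]), np.zeros(20))
print("first planned control:", u_plan[0])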

Rao Y, Liu W, Li K, Fan H, Wang S, Dong J et al., 2023, Deep Color Compensation for Generalized Underwater Image Enhancement, IEEE Transactions on Circuits and Systems for Video Technology, ISSN: 1051-8215

Underwater images suffer from quality degradation due to underwater light absorption and scattering. It remains challenging to enhance underwater images using deep learning-based methods due to the scarcity of real-world underwater images and their enhanced counterparts. Although existing works manually select well-enhanced images as reference images to train enhancement networks in an end-to-end manner, their performance tends to be inferior in some scenarios. We argue that the manually selected reference images cannot approximate their ground truth perfectly, leading to imbalanced learning and domain shift in enhancement networks. To address this issue, we analyse widely used underwater datasets from the perspective of color spectrum distribution and, surprisingly, find that the enhanced reference images have a sound color spectrum distribution compared to in-air datasets. Based on this observation, instead of directly learning the enhancement mapping, we propose a novel methodology to learn color compensation for general purposes. Specifically, we present a probabilistic color compensation network that estimates the probabilistic distribution of colors by multi-scale volumetric fusion of texture and color features. We further propose a novel two-stage enhancement framework that first performs color compensation and then enhancement, which is highly flexible to be integrated with an existing enhancement method without tuning. Extensive experiments on underwater image enhancement across various challenging scenarios show that our proposed approach consistently improves the results of popular conventional and learning-based methods by a significant margin. Moreover, our enhanced images achieve superior performance on underwater salient object detection and visual 3D reconstruction, demonstrating that our method can successfully break through the generalization bottleneck of existing learning-based enhancement models. Our implementation will be made availa

Journal article
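
The compensation-then-enhancement composition is the key architectural idea above. A minimal sketch of that two-stage structure, with a simple gray-world gain standing in for the learned probabilistic compensation network (a hypothetical placeholder, not the paper's model):

import numpy as np

def color_compensation(img):
    # Placeholder for the learned compensation network: per-channel
    # gray-world gains that push the color spectrum toward balance.
    gains = img.mean() / (img.mean(axis=(0, 1)) + 1e-8)
    return np.clip(img * gains, 0.0, 1.0)

def existing_enhancer(img):
    # Any off-the-shelf enhancement method plugs in here unchanged,
    # which is the flexibility the two-stage framework targets.
    return np.clip((img - img.min()) / (np.ptp(img) + 1e-8), 0.0, 1.0)

def two_stage_enhance(img):
    # Stage 1: compensate colors; stage 2: enhance the compensated image.
    return existing_enhancer(color_compensation(img))

greenish = np.random.default_rng(2).random((4, 4, 3)) * np.array([0.3, 0.9, 0.5])
print(two_stage_enhance(greenish).shape)  # (4, 4, 3)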

Hong Z, Petillot Y, Zhang K, Xu S, Wang S et al., 2023, Large-Scale Radar Localization using Online Public Maps, Pages: 3990-3996, ISSN: 1050-4729

In this paper, we propose using online public maps, e.g., OpenStreetMap (OSM), for large-scale radar-based localization without needing a prior sensing map. This can potentially extend the localization system anywhere worldwide without building, saving, or maintaining a sensing map, as long as an online public map covers the operating area. Existing methods using OSM exploit only route network or semantic information; these two sources of information were not combined in previous works, while our proposed system fuses them to improve localization accuracy. Our experiments, on three open datasets collected from three different continents, show that the proposed system outperforms state-of-the-art localization methods, reducing position errors by up to 50%. We release an open-source implementation for the community.

Conference paper

Xu S, Willners JS, Hong Z, Zhang K, Petillot YR, Wang S et al., 2023, Observability-Aware Active Extrinsic Calibration of Multiple Sensors, Pages: 2091-2097, ISSN: 1050-4729

The extrinsic parameters play a crucial role in multi-sensor fusion, such as visual-inertial Simultaneous Localization and Mapping (SLAM), as they enable the accurate alignment and integration of measurements from different sensors. However, extrinsic calibration is challenging in scenarios, such as underwater, where in-view structures are scanty and visibility is limited, causing incorrect extrinsic calibration due to insufficient motion on all degrees of freedom. In this paper, we propose an entropy-based active extrinsic calibration algorithm that leverages observability analysis and information entropy to enhance the accuracy and reliability of extrinsic calibration. It determines the system observability numerically by using singular value decomposition (SVD) of the Fisher Information Matrix (FIM). Furthermore, when the extrinsic parameter is not fully observable, our method actively searches for the next best motion to recover the system's observability via entropy-based optimization. Experimental results on synthetic data, in simulation, and using an actual underwater vehicle verify that the proposed method is able to avoid calibration failure while improving calibration accuracy and reliability.

Conference paper
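
The numerical observability check described above can be sketched as follows: build the FIM from measurement Jacobians of the extrinsic parameters and inspect its singular values via SVD. The toy Jacobians are illustrative, and the entropy-based next-best-motion search is not reproduced here.

import numpy as np

def observability_from_fim(jacobians, noise_cov, cond_thresh=1e6):
    # Fisher Information Matrix accumulated over measurements;
    # near-zero singular values flag unobservable parameter directions.
    info = sum(J.T @ np.linalg.inv(noise_cov) @ J for J in jacobians)
    s = np.linalg.svd(info, compute_uv=False)
    fully_observable = s[0] / max(s[-1], 1e-300) < cond_thresh
    return fully_observable, s

# Toy case: a 3-parameter extrinsic whose third direction is never
# excited by the motion performed so far, hence unobservable.
Js = [np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
      np.array([[0.5, 0.2, 0.0], [0.1, 0.9, 0.0]])]
ok, sv = observability_from_fim(Js, np.eye(2))
print("fully observable:", ok, "singular values:", sv)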

Li C, Yan F, Wang S, Zhuang Y et al., 2023, A 3D LiDAR odometry for UGVs using coarse-to-fine deep scene flow estimation, Transactions of the Institute of Measurement and Control, Vol: 45, Pages: 274-286, ISSN: 0142-3312

Light detection and ranging (LiDAR) odometry plays a crucial role in autonomous mobile robots and unmanned ground vehicles (UGVs). This paper presents a deep learning–based odometry system using two successive three-dimensional (3D) point clouds to estimate their scene flow and then predict their relative pose. The network consumes continuous 3D point clouds directly and outputs their scene flow and an uncertainty mask in a coarse-to-fine fashion. A pose estimation layer without trainable parameters is designed to compute the pose with the scene flow. We also introduce a scan-to-map optimization algorithm to enhance the robustness and accuracy of the system. Our experiments on the KITTI odometry data set and our campus data set demonstrate the effectiveness of the proposed deep learning–based point cloud odometry.

Journal article
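
A pose estimation layer without trainable parameters, as described above, can be realised as a weighted closed-form rigid alignment (Kabsch/SVD) between a point cloud and the cloud displaced by its scene flow. A sketch under that assumption (not necessarily the paper's exact layer):

import numpy as np

def pose_from_scene_flow(points, flow, weights=None):
    # Weighted Kabsch/SVD alignment: a closed-form, parameter-free
    # mapping from (points, scene flow) to a rigid pose (R, t).
    src, dst = points, points + flow
    w = np.ones(len(src)) if weights is None else np.asarray(weights)
    w = w / w.sum()
    mu_s, mu_d = w @ src, w @ dst
    H = (src - mu_s).T @ np.diag(w) @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T  # reflection-corrected rotation
    return R, mu_d - R @ mu_s

# Sanity check: recover a known rotation and translation exactly.
rng = np.random.default_rng(3)
P = rng.normal(size=(100, 3))
a = 0.1
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 0.1])
flow = P @ R_true.T + t_true - P
R_est, t_est = pose_from_scene_flow(P, flow)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))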

Rao Y, Ju Y, Wang S, Gao F, Fan H, Dong J et al., 2023, Learning Enriched Feature Descriptor for Image Matching and Visual Measurement, IEEE Transactions on Instrumentation and Measurement, Vol: 72, ISSN: 0018-9456

Journal article

Luo D, Zhuang Y, Wang S, 2022, Hybrid sparse monocular visual odometry with online photometric calibration, The International Journal of Robotics Research, Pages: 027836492211077-027836492211077, ISSN: 0278-3649

Most monocular visual Simultaneous Localization and Mapping (vSLAM) and visual odometry (VO) algorithms focus on either feature-based methods or direct methods. The hybrid (semi-direct) approach is less studied although it is equally important. In this paper, a hybrid sparse visual odometry (HSO) algorithm with online photometric calibration is proposed for monocular vision. HSO introduces two novel measures, that is, direct image alignment with adaptive mode selection and image photometric description using ratio factors, to enhance the robustness against dramatic image intensity changes and motion blur. Moreover, HSO is able to establish pose constraints between keyframes far apart in time and space by using KLT tracking enhanced with a local-global brightness consistency. The convergence speed of candidate map points is adopted as the basis for keyframe selection, which strengthens the coordination between the front end and the back end. Photometric calibration is elegantly integrated into the VO system working in tandem: (1) photometric interference from the camera, such as vignetting and changes in exposure time, is accurately calibrated and compensated in HSO, thereby improving the accuracy and robustness of VO; (2) on the other hand, VO provides pre-calculated data for the photometric calibration algorithm, which reduces resource consumption and improves the estimation accuracy of photometric parameters. Extensive experiments are performed on various public datasets to evaluate the proposed HSO against the state-of-the-art monocular vSLAM/VO and online photometric calibration methods. The results show that the proposed HSO achieves superior performance on VO and photometric calibration in terms of accuracy, robustness, and efficiency, being comparable with the state-of-the-art VO/vSLAM systems. We open source HSO for the benefit of the community.

Journal article

Gao H, Liang B, Oboe R, Shi Y, Wang S, Tomizuka M et al., 2022, Guest Editorial Introduction to the Focused Section on Adaptive Learning and Control for Advanced Mechatronics Systems, IEEE/ASME Transactions on Mechatronics, Vol: 27, Pages: 607-610, ISSN: 1083-4435

Journal article

Hong Z, Petillot Y, Wallace A, Wang S et al., 2022, RadarSLAM: A robust simultaneous localization and mapping system for all weather conditions, The International Journal of Robotics Research, Vol: 41, Pages: 519-542, ISSN: 0278-3649

A Simultaneous Localization and Mapping (SLAM) system must be robust to support long-term mobile vehicle and robot applications. However, camera- and LiDAR-based SLAM systems can be fragile when facing challenging illumination or weather conditions which degrade the utility of imagery and point cloud data. Radar, whose operating electromagnetic spectrum is less affected by environmental changes, is promising although its distinct sensor model and noise characteristics bring open challenges when being exploited for SLAM. This paper studies the use of a Frequency Modulated Continuous Wave radar for SLAM in large-scale outdoor environments. We propose a full radar SLAM system, including a novel radar motion estimation algorithm that leverages radar geometry for reliable feature tracking. It also optimally compensates motion distortion and estimates pose by joint optimization. Its loop closure component is designed to be simple yet efficient for radar imagery by capturing and exploiting structural information of the surrounding environment. Extensive experiments on three public radar datasets, ranging from city streets and residential areas to countryside and highways, show competitive accuracy and reliability performance of the proposed radar SLAM system compared to the state-of-the-art LiDAR, vision and radar methods. The results show that our system is technically viable in achieving reliable SLAM in extreme weather conditions on the RADIATE Dataset, for example, heavy snow and dense fog, demonstrating the promising potential of using radar for all-weather localization and mapping.

Journal article

Wang C, Zhang Z, Chen Y, Zhang Q, Li S, Wang X, Wang S et al., 2022, Deep Reinforcement Learning and Multi-Parameter Domain Randomization Based Underwater Adaptive Grasping Research for Underwater Manipulator, Information and Control, Vol: 51, Pages: 651-661, ISSN: 1002-0411

This study proposes a general control system for underwater manipulation, which combines deep reinforcement learning and domain randomization for autonomous underwater manipulation with underwater manipulators. First, a reinforcement learning-based robot control system is established. Subsequently, multi-parameter domain randomization is used to improve policy robustness and transfer effectiveness, covering the manipulator dynamics parameters, hydrodynamic parameters, and the noise and delay of the state and action spaces. Finally, the trained policy is deployed in a new simulation environment and on a real underwater arm. The experimental results verify the validity of the proposed method and lay a foundation for autonomous manipulation in real deep-sea environments in the future.

Journal article
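
Multi-parameter domain randomization of the kind listed above amounts to resampling a simulator configuration every training episode. A minimal sketch, with the parameter names and ranges below being illustrative rather than the paper's values:

import random

def sample_domain():
    # One randomized simulation configuration per training episode,
    # mirroring the parameter groups in the abstract.
    return {
        "link_mass_scale": random.uniform(0.8, 1.2),    # manipulator dynamics
        "joint_damping": random.uniform(0.1, 1.0),
        "drag_coefficient": random.uniform(0.5, 2.0),   # hydrodynamics
        "buoyancy_offset": random.uniform(-0.05, 0.05),
        "obs_noise_std": random.uniform(0.0, 0.02),     # state-space noise
        "action_delay_steps": random.randint(0, 3),     # action-space delay
    }

for episode in range(3):
    cfg = sample_domain()
    # A simulator would be reconfigured with cfg before each rollout,
    # e.g. env.reset(domain=cfg) for a hypothetical env object.
    print(f"episode {episode}: {cfg}")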

Ding Y, Wallace AM, Wang S, 2022, Variational Simultaneous Stereo Matching and Defogging in Low Visibility

Given a stereo pair of daytime foggy images, we seek to estimate a dense disparity map and to restore a fog-free image simultaneously. Such tasks remain extremely challenging in low visibility, partially preventing modern autonomous vehicles from operating safely. In this paper, we propose a novel simultaneous stereo matching and defogging algorithm based on variational continuous optimisation. It effectively fuses depth cues from disparity and scattering to achieve accurate depth estimation as the first step. Then the depth information is used to help restore a defogged image by leveraging a photo-inconsistency check. Extensive experiments on both synthetic and real data show the proposed algorithm outperforms comparative methods in all metrics on depth estimation, and produces visually more appealing defogged images.

Conference paper
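
The two depth cues fused above have simple closed forms: pinhole-stereo depth from disparity, Z = f·b/d, and scattering-model depth from transmission, Z = -ln(t)/beta. The sketch below combines them with a naive convex weight, standing in for the paper's variational optimisation (all values illustrative):

import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    # Standard pinhole-stereo relation: Z = f * b / d.
    return focal_px * baseline_m / np.maximum(disparity, 1e-6)

def depth_from_transmission(transmission, beta):
    # Koschmieder scattering model: t = exp(-beta * Z) => Z = -ln(t) / beta.
    return -np.log(np.clip(transmission, 1e-6, 1.0)) / beta

def fuse_depth(z_stereo, z_fog, w_stereo=0.7):
    # Naive convex fusion of the two cues; the paper instead fuses them
    # inside a variational continuous optimisation.
    return w_stereo * z_stereo + (1.0 - w_stereo) * z_fog

disp = np.array([[20.0, 10.0], [5.0, 2.0]])   # disparity in pixels
trans = np.array([[0.8, 0.6], [0.4, 0.2]])    # estimated transmission
z = fuse_depth(depth_from_disparity(disp, 720.0, 0.5),
               depth_from_transmission(trans, beta=0.05))
print(np.round(z, 2))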

Zhang K, Hong Z, Xu S, Wang S et al., 2022, CURL: Continuous, Ultra-compact Representation for LiDAR

Increasing the density of the 3D LiDAR point cloud is appealing for many applications in robotics. However, high-density LiDAR sensors are usually costly and still limited to a level of coverage per scan (e.g., 128 channels). Meanwhile, denser point cloud scans and maps mean larger volumes to store and longer times to transmit. Existing works focus on either improving point cloud density or compressing its size. This paper aims to design a novel 3D point cloud representation that can continuously increase point cloud density while reducing its storage and transmitting size. The pipeline of the proposed Continuous, Ultra-compact Representation of LiDAR (CURL) includes four main steps: meshing, upsampling, encoding, and continuous reconstruction. It is capable of transforming a 3D LiDAR scan or map into a compact spherical harmonics representation which can be used or transmitted in low latency to continuously reconstruct a much denser 3D point cloud. Extensive experiments on four public datasets, covering college gardens, city streets, and indoor rooms, demonstrate that much denser 3D point clouds can be accurately reconstructed using the proposed CURL representation while achieving up to 80% storage space saving. We open-source the CURL codes for the community.

Conference paper
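
The encode/decode idea, fitting spherical-harmonic coefficients to per-direction ranges and then evaluating the continuous representation at arbitrary, denser directions, can be sketched as follows. A synthetic range function stands in for a meshed LiDAR patch, and scipy's sph_harm supplies the basis; this is a sketch of the principle, not the CURL pipeline.

import numpy as np
from scipy.special import sph_harm  # Y_lm(m, l, azimuth, polar)

def sh_basis(az, pol, l_max):
    # Stack spherical-harmonic basis values for all (l, m) up to l_max.
    cols = [sph_harm(m, l, az, pol)
            for l in range(l_max + 1) for m in range(-l, l + 1)]
    return np.stack(cols, axis=-1)

# Toy 'scan': range as a smooth function over sparse directions.
rng = np.random.default_rng(4)
az = rng.uniform(0, 2 * np.pi, 400)           # azimuth of each return
pol = rng.uniform(0.1, np.pi - 0.1, 400)      # polar angle of each return
ranges = 5.0 + np.sin(2 * az) * np.cos(pol)   # synthetic range pattern

# Encode: least-squares spherical-harmonic coefficients (the compact code).
l_max = 6
B = sh_basis(az, pol, l_max)
coeffs, *_ = np.linalg.lstsq(B, ranges.astype(complex), rcond=None)
print(f"{len(ranges)} returns -> {coeffs.size} coefficients")

# Decode anywhere: evaluate the continuous representation on a denser
# set of directions than the original scan (the upsampling step).
az_d = rng.uniform(0, 2 * np.pi, 2000)
pol_d = rng.uniform(0.1, np.pi - 0.1, 2000)
dense = (sh_basis(az_d, pol_d, l_max) @ coeffs).real
print("reconstructed dense ranges:", dense[:3])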

Fraser H, Wang S, 2022, Monocular Depth Estimation for Equirectangular Videos, Pages: 5293-5299, ISSN: 2153-0858

Depth estimation from panoramic imagery has received minimal attention in contrast to standard perspective imagery, which constitutes the majority of the literature on this key research topic. The vast, and frequently complete, field of view provided by such panoramic images makes them appealing for a variety of applications, including robotics, autonomous vehicles, and virtual reality. Consumer-level camera systems capable of capturing such images are likewise growing more affordable, and may be desirable complements to autonomous systems' sensor packages. They do, however, introduce significant distortions and violate some assumptions regarding perspective view images. Additionally, many state-of-the-art algorithms are not designed for this projection model, and their depth estimation performance tends to degrade when applied to panoramic imagery. This paper presents a novel technique for adapting view synthesis-based depth estimation models to omnidirectional vision. Specifically, we: 1) integrate a 'virtual' spherical camera model into the training pipeline, facilitating the model training; 2) exploit spherical convolutional layers to perform convolution operations on equirectangular images, handling the severe distortion; and 3) propose an optical flow-based masking scheme to mitigate the effect of unwanted pixels during training. Our qualitative and quantitative results demonstrate that these simple yet efficient designs result in significantly improved depth estimation when compared to previous approaches.

Conference paper
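
A 'virtual' spherical camera for equirectangular images reduces to mapping pixel coordinates to ray directions on the unit sphere. A minimal sketch of that back-projection (the full view-synthesis training pipeline is not reproduced; the axis convention is an assumption):

import numpy as np

def equirect_ray(u, v, width, height):
    # Back-project equirectangular pixel coordinates to unit ray
    # directions: column -> longitude, row -> latitude.
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi    # [-pi, pi)
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi   # (pi/2, -pi/2)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)

u, v = np.meshgrid(np.arange(8), np.arange(4))       # tiny 8x4 panorama
rays = equirect_ray(u, v, 8, 4)
print(rays.shape, np.allclose(np.linalg.norm(rays, axis=-1), 1.0))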

Wang X, Wang S, Liang X, Zhao D, Huang J, Xu X, Dai B, Miao Q et al., 2022, Deep Reinforcement Learning: A Survey, IEEE Transactions on Neural Networks and Learning Systems, ISSN: 2162-237X

Deep reinforcement learning (DRL) integrates the feature representation ability of deep learning with the decision-making ability of reinforcement learning so that it can achieve powerful end-to-end learning control capabilities. In the past decade, DRL has made substantial advances in many tasks that require perceiving high-dimensional input and making optimal or near-optimal decisions. However, there are still many challenging problems in the theory and applications of DRL, especially in learning control tasks with limited samples, sparse rewards, and multiple agents. Researchers have proposed various solutions and new theories to solve these problems and promote the development of DRL. In addition, deep learning has stimulated the further development of many subfields of reinforcement learning, such as hierarchical reinforcement learning (HRL), multiagent reinforcement learning, and imitation learning. This article gives a comprehensive overview of the fundamental theories, key algorithms, and primary research domains of DRL. In addition to value-based and policy-based DRL algorithms, the advances in maximum entropy-based DRL are summarized. The future research topics of DRL are also analyzed and discussed.

Journal article

Wang C, Zhang Q, Wang X, Xu S, Petillot Y, Wang S et al., 2022, Multi-Task Reinforcement Learning based Mobile Manipulation Control for Dynamic Object Tracking and Grasping, Pages: 34-40

Agile control of a mobile manipulator is challenging because of the high complexity arising from the coupling of the robotic system and the unstructured working environment. Tracking and grasping a dynamic object with a random trajectory is even harder. In this paper, a multi-task reinforcement learning-based mobile manipulation control framework is proposed to achieve general dynamic object tracking and grasping. Several basic types of dynamic trajectories are chosen as the training set for the task. To improve policy generalization in practice, random noise and dynamics randomization are introduced during the training process. Extensive experiments show that our trained policy can adapt to unseen random dynamic trajectories with about 0.1 m tracking error and a 75% grasping success rate for dynamic objects. The trained policy can also be successfully deployed on a real mobile manipulator.

Conference paper

Wang C, Zhang Q, Wang X, Xu S, Petillot Y, Wang S et al., 2022, Autonomous Underwater Robotic Grasping Research Based on Navigation and Hierarchical Operation, Pages: 176-182

This paper proposes a new framework for the autonomous underwater operation of underwater vehicle manipulator systems (UVMS), which is modular, standardized, and hierarchical. The framework consists of three subsystems: perception, navigation, and grasping. The perception module is based on an underwater stereo vision system, which provides effective environment and target information for the navigation and grasping modules. The navigation module is based on ORBSLAM and acoustic odometry, which generates the global map and plans a trajectory for the initial stage. The grasping module generates the target grasping pose based on the extracted point cloud and the current robot state, and then executes the grasping task using the motion planner. The proposed system is tested on several underwater target grasping tasks in a water tank, demonstrating its effectiveness.

Conference paper

Xu S, Luczynski T, Willners JS, Hong Z, Zhang K, Petillot YR, Wang S et al., 2021, Underwater Visual Acoustic SLAM with Extrinsic Calibration, 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE

Conference paper

Sheeny M, De Pellegrin E, Mukherjee S, Ahrabian A, Wang S, Wallace A et al., 2021, RADIATE: A Radar Dataset for Automotive Perception in Bad Weather, 2021 IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE

Conference paper

Vargas E, Scona R, Willners JS, Luczynski T, Cao Y, Wang S, Petillot YR et al., 2021, Robust Underwater Visual SLAM Fusing Acoustic Sensing, 2021 IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE

Conference paper

Li R, Wang S, Gu D, 2021, DeepSLAM: A Robust Monocular SLAM System With Unsupervised Deep Learning, IEEE Transactions on Industrial Electronics, Vol: 68, Pages: 3577-3587, ISSN: 0278-0046

Journal article

Li C, Wang S, Zhuang Y, Yan F et al., 2021, Deep Sensor Fusion Between 2D Laser Scanner and IMU for Mobile Robot Localization, IEEE Sensors Journal, Vol: 21, Pages: 8501-8509, ISSN: 1530-437X

Journal article

Wang C, Zhang Q, Li S, Wang X, Lane D, Petillot Y, Wang S et al., 2021, Learning-Based Underwater Autonomous Grasping via 3D Point Cloud, ISSN: 0197-7385

Underwater autonomous grasping is a challenging task for robotic research. In this paper, we propose a learning-based underwater grasping method using a 3D point cloud generated from an underwater stereo camera. First, we use the Pinax model for accurate refraction correction of a stereo camera in a flat-pane housing. Second, a dense point cloud of the target is generated using the calibrated stereo images. An improved Grasp Pose Detection (GPD) method is then developed to generate the candidate grasping poses and select the best one based on kinematic constraints. Finally, an optimal trajectory is planned to finish the grasping task. Experiments in a water tank have demonstrated the effectiveness of our method.

Conference paper

Antonelli G, Indiveri G, Barrera C, Caccia M, Dooly G, Flavin N, Ferreira F, Miskovic N, Furlong M, Kopf A, Bachmayer R, Ludvigsen M, Opderbecke J, Pascoal A, Petroccia R, Alves J, Ridao P, Vallicrosa G, De Sousa JB, Costa M, Wang S et al., 2021, Advancing the EU Marine Robotics Research Infrastructure Network: The EU Marine Robots project, ISSN: 0197-7385

This paper provides an overview of the H2020 Marine robotics research infrastructure network (EU Marine Robots) project. The overview is organized around the three main activities of infrastructure projects: i) Networking activities (NA); ii) Transnational access (TNA) in which access to marine robotic infrastructures from the partners is granted in competitive calls; iii) Joint research activities (JRA) aimed at making robotic infrastructures more operable and transitioning new systems and technologies to field operations. The strategic significance of the project and future developments are discussed as conclusions.

Conference paper

Willners JS, Carlucho I, Katagiri S, Lemoine C, Roe J, Stephens D, Luczynski T, Xu S, Carreno Y, Pairet E, Barbalata C, Petillot Y, Wang S et al., 2021, From market-ready ROVs to low-cost AUVs, ISSN: 0197-7385

Autonomous Underwater Vehicles (AUVs) are becoming increasingly important for different types of industrial applications. The generally high cost of AUVs restricts access to them and therefore advances in research and technological development. However, recent advances have led to lower-cost commercially available Remotely Operated Vehicles (ROVs), which present a platform that can be enhanced to enable a high degree of autonomy, similar to that of a high-end AUV. In this article, we present how a low-cost commercial-off-the-shelf ROV can be used as a foundation for developing versatile and affordable AUVs. We introduce the required hardware modifications to obtain a system capable of autonomous operations as well as the necessary software modules. Additionally, we present a set of use cases exhibiting the versatility of the developed platform for intervention and mapping tasks.

Conference paper

Willners JS, Carreno Y, Xu S, Luczynski T, Katagiri S, Roe J, Pairet È, Petillot Y, Wang S et al., 2021, Robust underwater SLAM using autonomous relocalisation, Pages: 273-280

This paper presents a robust underwater simultaneous localisation and mapping (SLAM) framework using autonomous relocalisation. The proposed approach strives to maintain a single consistent map during operation and updates its current plan when the SLAM loses feature tracking. The updated plan traverses viewpoints that are likely to aid in merging the current map into the global map. We present the sub-systems of the framework: the SLAM, viewpoint generation, and high-level planning. In-water experiments show the advantage of our approach used on an autonomous underwater vehicle (AUV) performing inspections.

Conference paper

