Imperial College London

Professor Andrew Davison

Faculty of Engineering, Department of Computing

Professor of Robot Vision

Contact

+44 (0)20 7594 8316
a.davison
Website

Assistant

Ms Lucy Atthis
+44 (0)20 7594 8259

Location

303, William Penney Laboratory, South Kensington Campus

Summary

 

Publications

166 results found

Johns E, Leutenegger S, Davison AJ, 2016, Pairwise Decomposition of Image Sequences for Active Multi-View Recognition, Computer Vision and Pattern Recognition, Publisher: Computer Vision Foundation (CVF), ISSN: 1063-6919

A multi-view image sequence provides a much richer capacity for object recognition than a single image. However, most existing solutions to multi-view recognition typically adopt hand-crafted, model-based geometric methods, which do not readily embrace recent trends in deep learning. We propose to bring Convolutional Neural Networks to generic multi-view recognition, by decomposing an image sequence into a set of image pairs, classifying each pair independently, and then learning an object classifier by weighting the contribution of each pair. This allows for recognition over arbitrary camera trajectories, without requiring explicit training over the potentially infinite number of camera paths and lengths. Building these pairwise relationships then naturally extends to the next-best-view problem in an active recognition framework. To achieve this, we train a second Convolutional Neural Network to map directly from an observed image to the next viewpoint. Finally, we incorporate this into a trajectory optimisation task, whereby the best recognition confidence is sought for a given trajectory length. We present state-of-the-art results in both guided and unguided multi-view recognition on the ModelNet dataset, and show how our method can be used with depth images, greyscale images, or both.

Conference paper
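
As a rough illustration of the pairwise weighting idea described in the abstract, the sketch below combines per-pair class scores into a sequence-level prediction with a weighted sum. In the paper the scores and weights come from trained CNNs; here they are placeholder arrays, and the function name and normalisation are assumptions for illustration only.

```python
import numpy as np

def classify_sequence(pair_scores, pair_weights):
    """Combine per-pair class scores into a sequence-level prediction.

    pair_scores : (P, C) array of class scores for each of P image pairs
    pair_weights: (P,) array, contribution weight per pair (hypothetical
                  stand-in for the paper's learned weighting)
    """
    w = pair_weights / pair_weights.sum()           # normalise contributions
    fused = (w[:, None] * pair_scores).sum(axis=0)  # weighted sum over pairs
    return int(np.argmax(fused)), fused

# toy usage: 3 image pairs, 4 object classes
scores = np.array([[0.7, 0.1, 0.1, 0.1],
                   [0.4, 0.4, 0.1, 0.1],
                   [0.6, 0.2, 0.1, 0.1]])
weights = np.array([1.0, 0.5, 2.0])
label, fused = classify_sequence(scores, weights)
print(label, fused)
```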

Whelan T, Salas-Moreno RF, Glocker B, Davison AJ, Leutenegger S et al., 2016, ElasticFusion: real-time dense SLAM and light source estimation, International Journal of Robotics Research, Vol: 35, Pages: 1697-1716, ISSN: 1741-3176

We present a novel approach to real-time dense visual SLAM. Our system is capable of capturing comprehensive, dense, globally consistent surfel-based maps of room-scale environments and beyond, explored using an RGB-D camera in an incremental online fashion, without pose graph optimisation or any post-processing steps. This is accomplished by using dense frame-to-model camera tracking and windowed surfel-based fusion coupled with frequent model refinement through non-rigid surface deformations. Our approach applies local model-to-model surface loop closure optimisations as often as possible to stay close to the mode of the map distribution, while utilising global loop closure to recover from arbitrary drift and maintain global consistency. In the spirit of improving map quality as well as tracking accuracy and robustness, we furthermore explore a novel approach to real-time discrete light source detection. This technique is capable of detecting numerous light sources in indoor environments in real time as a user-handheld camera explores the scene. Absolutely no prior information about the scene or number of light sources is required. By making a small set of simple assumptions about the appearance properties of the scene, our method can incrementally estimate both the quantity and location of multiple light sources in the environment in an online fashion. Our results demonstrate that our technique functions well in many different environments and lighting configurations. We show that this enables (a) more realistic augmented reality (AR) rendering; (b) a richer understanding of the scene beyond pure geometry; and (c) more accurate and robust photometric tracking.

Journal article
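
Surfel-based fusion of the kind the abstract describes updates each surfel as a confidence-weighted running average of new measurements. The snippet below is a minimal sketch of such an update; the dictionary layout, field names and weighting rule are illustrative assumptions, not the ElasticFusion code.

```python
import numpy as np

def fuse_surfel(surfel, position, normal, colour, weight=1.0):
    """Confidence-weighted running-average update for one surfel.

    `surfel` is a dict with keys 'p', 'n', 'c', 'w' (position, normal,
    colour, accumulated confidence) -- a hypothetical layout for illustration.
    """
    w0, w1 = surfel["w"], weight
    s = w0 + w1
    surfel["p"] = (w0 * surfel["p"] + w1 * position) / s
    surfel["c"] = (w0 * surfel["c"] + w1 * colour) / s
    n = w0 * surfel["n"] + w1 * normal
    surfel["n"] = n / np.linalg.norm(n)   # keep the normal unit length
    surfel["w"] = s
    return surfel

s = {"p": np.zeros(3), "n": np.array([0.0, 0.0, 1.0]),
     "c": np.zeros(3), "w": 1.0}
new_n = np.array([0.0, 0.1, 1.0]); new_n /= np.linalg.norm(new_n)
print(fuse_surfel(s, np.array([0.01, 0.0, 0.0]), new_n, np.full(3, 0.5)))
```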

Whelan T, Salas Moreno R, Leutenegger S, Davison A, Glocker B et al., 2016, Modelling a Three-Dimensional Space, WO2016189274

Patent

Zienkiewicz J, Davison AJ, Leutenegger S, 2016, Real-Time Height Map Fusion using Differentiable Rendering, 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, Publisher: IEEE, ISSN: 2153-0866

We present a robust real-time method which performs dense reconstruction of high-quality height maps from monocular video. By representing the height map as a triangular mesh, and using an efficient differentiable rendering approach, our method enables rigorous incremental probabilistic fusion of standard locally estimated depth and colour into an immediately usable dense model. We present results for the application of free-space and obstacle mapping by a low-cost robot, showing that detailed maps suitable for autonomous navigation can be obtained using only a single forward-looking camera.

Conference paper
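
To make the probabilistic fusion step concrete, here is a toy per-cell height update with inverse-variance (Kalman-style) weighting. The paper actually fuses depth through a differentiable renderer over a triangular mesh, which is not reproduced here; the function and coefficients below are assumptions for illustration.

```python
import numpy as np

def fuse_height(mean, var, z, z_var):
    """Inverse-variance fusion of a new height measurement z into a per-cell
    Gaussian estimate (mean, var). Purely illustrative; the paper fuses via
    differentiable rendering of a triangular mesh."""
    k = var / (var + z_var)            # Kalman-style gain
    new_mean = mean + k * (z - mean)
    new_var = (1.0 - k) * var
    return new_mean, new_var

# toy 2x2 grid of height cells
mean = np.zeros((2, 2))
var = np.full((2, 2), 1.0)
obs = np.array([[0.10, 0.12], [0.00, 0.05]])
mean, var = fuse_height(mean, var, obs, z_var=0.01)
print(mean, var)
```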

Johns E, Leutenegger S, Davison AJ, 2016, Deep learning a grasp function for grasping under gripper pose uncertainty, 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, Publisher: IEEE, Pages: 4461-4468, ISSN: 2153-0866

This paper presents a new method for paralleljawgrasping of isolated objects from depth images, underlarge gripper pose uncertainty. Whilst most approaches aimto predict the single best grasp pose from an image, ourmethod first predicts a score for every possible grasp pose,which we denote the grasp function. With this, it is possibleto achieve grasping robust to the gripper’s pose uncertainty,by smoothing the grasp function with the pose uncertaintyfunction. Therefore, if the single best pose is adjacent to aregion of poor grasp quality, that pose will no longer be chosen,and instead a pose will be chosen which is surrounded by aregion of high grasp quality. To learn this function, we traina Convolutional Neural Network which takes as input a singledepth image of an object, and outputs a score for each grasppose across the image. Training data for this is generated byuse of physics simulation and depth image simulation with 3Dobject meshes, to enable acquisition of sufficient data withoutrequiring exhaustive real-world experiments. We evaluate withboth synthetic and real experiments, and show that the learnedgrasp score is more robust to gripper pose uncertainty thanwhen this uncertainty is not accounted for.

Conference paper
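
The key step the abstract describes, smoothing the predicted grasp function by the gripper's pose-uncertainty distribution before selecting the best pose, can be sketched directly. Assuming a 2-D grid of grasp scores and Gaussian pose uncertainty (assumptions made here for illustration), the robust pose is the argmax of the convolved map:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def robust_grasp_pose(grasp_scores, pose_sigma_px):
    """Smooth a grasp-score map by Gaussian pose uncertainty and pick the best pose.

    grasp_scores : (H, W) array of per-pose grasp quality
    pose_sigma_px: std. dev. of gripper pose noise in pixels (assumed Gaussian)
    """
    smoothed = gaussian_filter(grasp_scores, sigma=pose_sigma_px)
    return np.unravel_index(np.argmax(smoothed), smoothed.shape), smoothed

scores = np.zeros((50, 50))
scores[10, 10] = 1.0            # sharp peak surrounded by poor grasp quality
scores[30:40, 30:40] = 0.8      # broad region of high grasp quality
best, _ = robust_grasp_pose(scores, pose_sigma_px=3.0)
print(best)                     # lands inside the broad region, not on the isolated peak
```

The toy example shows the behaviour claimed in the abstract: after smoothing, the isolated sharp peak scores lower than the broad high-quality region, so the chosen pose is robust to pose noise.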

Handa A, Bloesch M, Patraucean V, Stent S, McCormac J, Davison A et al., 2016, gvnn: neural network library for geometric computer vision, 14th European Conference on Computer Vision (ECCV), Publisher: Springer Verlag, Pages: 67-82, ISSN: 0302-9743

We introduce gvnn, a neural network library in Torch aimed towards bridging the gap between classic geometric computer vision and deep learning. Inspired by the recent success of Spatial Transformer Networks, we propose several new layers which are often used as parametric transformations on the data in geometric computer vision. These layers can be inserted within a neural network much in the spirit of the original spatial transformers and allow backpropagation to enable end-to-end learning of a network involving any domain knowledge in geometric computer vision. This opens up applications in learning invariance to 3D geometric transformation for place recognition, end-to-end visual odometry, depth estimation and unsupervised learning through warping with a parametric transformation for image reconstruction error.

Conference paper
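
gvnn adds differentiable geometric transformation layers to Torch. As a rough illustration only, the sketch below applies an SE(3) transform, parameterised by an axis-angle rotation and a translation, to a set of 3-D points; this is the kind of operation such a layer exposes to backpropagation. It is a NumPy stand-in written for this note, not gvnn's Torch API.

```python
import numpy as np

def se3_transform(points, omega, t):
    """Apply an SE(3) transform to (N, 3) points.

    omega: axis-angle rotation (3,); t: translation (3,). Rodrigues' formula
    gives the rotation matrix; in gvnn the analogous layer is differentiable
    so the six parameters can be learned end-to-end."""
    theta = np.linalg.norm(omega)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = omega / theta
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    return points @ R.T + t

pts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(se3_transform(pts, omega=np.array([0.0, 0.0, np.pi / 2]), t=np.array([0.0, 0.0, 1.0])))
```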

Tsiotsios C, Davison AJ, Kim T-K, 2016, Near-lighting Photometric Stereo for unknown scene distance and medium attenuation, Image and Vision Computing, Vol: 57, Pages: 44-57, ISSN: 0262-8856

Journal article

Kim H, Leutenegger S, Davison AJ, 2016, Real-time 3D reconstruction and 6-DoF tracking with an event camera, ECCV 2016-European Conference on Computer Vision, Publisher: Springer, Pages: 349-364, ISSN: 0302-9743

We propose a method which can perform real-time 3D reconstruction from a single hand-held event camera with no additional sensing, and works in unstructured scenes of which it has no prior knowledge. It is based on three decoupled probabilistic filters, each estimating 6-DoF camera motion, scene logarithmic (log) intensity gradient and scene inverse depth relative to a keyframe, and we build a real-time graph of these to track and model over an extended local workspace. We also upgrade the gradient estimate for each keyframe into an intensity image, allowing us to recover a real-time video-like intensity sequence with spatial and temporal super-resolution from the low bit-rate input event stream. To the best of our knowledge, this is the first algorithm provably able to track a general 6D motion along with reconstruction of arbitrary structure including its intensity and the reconstruction of grayscale video that exclusively relies on event camera data.

Conference paper

Zia MZ, Nardi L, Jack A, Vespa E, Bodin B, Kelly PHJ, Davison AJet al., 2016, Comparative design space exploration of dense and semi-dense SLAM, 2016 IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 1292-1299, ISSN: 1050-4729

SLAM has matured significantly over the past few years, and is beginning to appear in serious commercial products. While new SLAM systems are being proposed at every conference, evaluation is often restricted to qualitative visualizations or accuracy estimation against a ground truth. This is due to the lack of benchmarking methodologies which can holistically and quantitatively evaluate these systems. Further investigation at the level of individual kernels and parameter spaces of SLAM pipelines is non-existent, which is absolutely essential for systems research and integration. We extend the recently introduced SLAMBench framework to allow comparing two state-of-the-art SLAM pipelines, namely KinectFusion and LSD-SLAM, along the metrics of accuracy, energy consumption, and processing frame rate on two different hardware platforms, namely a desktop and an embedded device. We also analyze the pipelines at the level of individual kernels and explore their algorithmic and hardware design spaces for the first time, yielding valuable insights.

Conference paper

Tsiotsios C, Kim TK, Davison AJ, Narasimhan SGet al., 2016, Model effectiveness prediction and system adaptation for photometric stereo in murky water, Computer Vision and Image Understanding, Vol: 150, Pages: 126-138, ISSN: 1090-235X

In murky water, the light interaction with the medium particles results in a complex image formation model that is hard to use effectively with a shape estimation framework like Photometric Stereo. All previous approaches have resorted to necessary model simplifications, which were, however, used arbitrarily, without describing how their validity can be estimated in an unknown underwater situation. In this work, we evaluate the effectiveness of such simplified models and we show that this varies strongly with the imaging conditions. For this reason, we propose a novel framework that can predict the effectiveness of a photometric model when the scene is unknown. To achieve this we use a dynamic lighting framework where a robotic platform is able to probe the scene with varying light positions, and the respective change in estimated surface normals serves as a faithful proxy of the true reconstruction error. This creates important benefits over traditional Photometric Stereo frameworks, as our system can adapt some critical factors to an underwater scenario, such as the camera-scene distance and the light position or the photometric model, in order to minimize the reconstruction error. Our work is evaluated through both numerical simulations and real experiments for different distances, underwater visibilities and light source baselines.

Journal article
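
The effectiveness proxy described in the abstract, the change in estimated surface normals as the light position is varied, can be illustrated with a small helper that measures the mean angular difference between two normal maps. The thresholding and the robotic probing loop are not shown, and the function name is an assumption.

```python
import numpy as np

def mean_normal_change_deg(normals_a, normals_b):
    """Mean angular difference (degrees) between two (H, W, 3) unit-normal maps,
    used here as a stand-in for the paper's reconstruction-error proxy."""
    dot = np.clip(np.sum(normals_a * normals_b, axis=-1), -1.0, 1.0)
    return float(np.degrees(np.arccos(dot)).mean())

a = np.zeros((4, 4, 3)); a[..., 2] = 1.0     # all normals pointing along +z
b = a.copy(); b[..., 1] = 0.05               # slight perturbation
b /= np.linalg.norm(b, axis=-1, keepdims=True)
print(mean_normal_change_deg(a, b))          # roughly 2.9 degrees
```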

Lukierski R, Leutenegger S, Davison AJ, 2015, Rapid free-space mapping from a single omnidirectional camera, 2015 European Conference on Mobile Robots (ECMR), Publisher: IEEE, Pages: 1-8

Low-cost robots such as floor cleaners generally rely on limited perception and simple algorithms, but some new models now have enough sensing capability and computation power to enable Simultaneous Localisation And Mapping (SLAM) and intelligent guided navigation. In particular, computer vision is now a serious option in low cost robotics, though its use to date has been limited to feature-based mapping for localisation. Dense environment perception such as free space finding has required additional specialised sensors, adding expense and complexity. Here we show that a robot with a single passive omnidirectional camera can perform rapid global free-space reasoning within typical rooms. Upon entering a new room, the robot makes a circular movement to capture a closely-spaced omni image sequence with disparity in all horizontal directions. A feature-based visual SLAM procedure obtains accurate poses for these frames before passing them to a dense matching step, 3D semi-dense reconstruction and visibility reasoning. The result is turned into a 2D occupancy map, which can be improved and extended if necessary through further movement. This rapid, passive technique can capture high quality free space information which gives a robot a global understanding of the space around it. We present results in several scenes, including quantitative comparison with laser-based mapping.

Conference paper

Zienkiewicz J, Davison A, 2015, Extrinsics autocalibration for dense planar visual odometry, Journal of Field Robotics, Vol: 32, Pages: 803-825, ISSN: 1556-4967

A single downward-looking camera can be used as a high-precision visual odometry sensor in a wide range of real-world mobile robotics applications. In particular, a simple and computationally efficient dense alignment approach can take full advantage of the local planarity of floor surfaces to make use of the whole texture available rather than sparse feature points. In this paper, we present and analyze highly practical solutions for autocalibration of such a camera's extrinsic orientation and position relative to a mobile robot's coordinate frame. We show that two degrees of freedom, the out-of-plane camera angles, can be autocalibrated in any conditions, and that bringing in a small amount of information from wheel odometry or another independent motion source allows rapid, full, and accurate six degree-of-freedom calibration. Of particular practical interest is the result that this can be achieved to almost the same level even without wheel odometry and based only on widely applicable assumptions about nonholonomic robot motion and the forward/backward direction of its movement. We show the accurate, rapid, and robust performance of our autocalibration techniques for varied camera positions over a range of low-textured real surfaces, both indoors and outdoors.

Journal article

Whelan T, Leutenegger S, Salas-Moreno RF, Glocker B, Davison AJet al., 2015, ElasticFusion: Dense SLAM without a Pose Graph, Robotics: Science and Systems, Publisher: Robotics: Science and Systems, ISSN: 2330-765X

Conference paper

Milford M, Kim H, Mangan M, Leutenegger S, Stone T, Webb B, Davison A et al., 2015, Place Recognition with Event-based Cameras and a Neural Implementation of SeqSLAM

Event-based cameras offer much potential to the fields of robotics and computer vision, in part due to their large dynamic range and extremely high "frame rates". These attributes make them, at least in theory, particularly suitable for enabling tasks like navigation and mapping on high-speed robotic platforms under challenging lighting conditions, a task which has been particularly challenging for traditional algorithms and camera sensors. Before these tasks become feasible however, progress must be made towards adapting and innovating current RGB-camera-based algorithms to work with event-based cameras. In this paper we present ongoing research investigating two distinct approaches to incorporating event-based cameras for robotic navigation: the investigation of suitable place recognition / loop closure techniques, and the development of efficient neural implementations of place recognition techniques that enable the possibility of place recognition using event-based cameras at very high frame rates using neuromorphic computing hardware.

Journal article
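
For readers unfamiliar with SeqSLAM, the sequence-matching idea the abstract builds on can be sketched in a few lines: given a matrix of image difference scores between a query sequence and a reference traversal, the best match is the reference offset whose short diagonal of differences sums lowest. A constant-velocity alignment is assumed here, and the neural and event-camera aspects of the paper are not modelled.

```python
import numpy as np

def seq_match(diff_matrix, seq_len):
    """diff_matrix[i, j] = dissimilarity between query image i and reference image j.
    Returns the reference start index whose length-`seq_len` diagonal sums lowest
    (constant-velocity assumption, as in basic SeqSLAM)."""
    n_query, n_ref = diff_matrix.shape
    best_j, best_cost = None, np.inf
    for j in range(n_ref - seq_len + 1):
        cost = sum(diff_matrix[i, j + i] for i in range(seq_len))
        if cost < best_cost:
            best_j, best_cost = j, cost
    return best_j, best_cost

rng = np.random.default_rng(0)
D = rng.uniform(0.5, 1.0, size=(5, 40))
for i in range(5):
    D[i, 20 + i] = 0.05           # plant a matching diagonal starting at reference index 20
print(seq_match(D, seq_len=5))    # -> (20, ...)
```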

Nardi L, Bodin B, Zia MZ, Mawer J, Nisbet A, Kelly PHJ, Davison AJ, Luján M, O'Boyle MFP, Riley G, Topham N, Furber Set al., 2015, Introducing SLAMBench, a performance and accuracy benchmarking methodology for SLAM, 2015 IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 5783-5790, ISSN: 1050-4729

Real-time dense computer vision and SLAM offer great potential for a new level of scene modelling, tracking and real environmental interaction for many types of robot, but their high computational requirements mean that use on mass market embedded platforms is challenging. Meanwhile, trends in low-cost, low-power processing are towards massive parallelism and heterogeneity, making it difficult for robotics and vision researchers to implement their algorithms in a performance-portable way. In this paper we introduce SLAMBench, a publicly-available software framework which represents a starting point for quantitative, comparable and validatable experimental research to investigate trade-offs in performance, accuracy and energy consumption of a dense RGB-D SLAM system. SLAMBench provides a KinectFusion implementation in C++, OpenMP, OpenCL and CUDA, and harnesses the ICL-NUIM dataset of synthetic RGB-D sequences with trajectory and scene ground truth for reliable accuracy comparison of different implementation and algorithms. We present an analysis and breakdown of the constituent algorithmic elements of KinectFusion, and experimentally investigate their execution time on a variety of multicore and GPU-accelerated platforms. For a popular embedded platform, we also present an analysis of energy efficiency for different configuration alternatives.

Conference paper
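
One of the metrics SLAMBench reports is trajectory accuracy against the ICL-NUIM ground truth. A minimal absolute-trajectory-error RMSE helper, assuming already time-associated and aligned trajectories (the full benchmark handles association and alignment more carefully), is shown below; the function name is an assumption.

```python
import numpy as np

def ate_rmse(estimated, ground_truth):
    """Root-mean-square absolute trajectory error between two (N, 3) arrays of
    camera positions. Assumes the trajectories are already time-associated and
    aligned; a full evaluation would also solve for the best rigid alignment."""
    err = np.linalg.norm(estimated - ground_truth, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
est = gt + np.array([[0.01, 0.0, 0.0], [0.02, 0.0, 0.0], [0.00, 0.0, 0.0]])
print(ate_rmse(est, gt))
```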

Jachnik J, Goldman DB, Luo L, Davison AJ et al., 2015, Interactive 3D Face Stylization Using Sculptural Abstraction, CoRR, Vol: abs/1502.01954

Journal article

Salas-Moreno R, Glocker B, Kelly P, Davison A et al., 2014, Dense planar SLAM, International Symposium on Mixed and Augmented Reality (ISMAR), Publisher: Institute of Electrical and Electronics Engineers, Pages: 367-368

Using higher-level entities during mapping has the potential to improve camera localisation performance and give substantial perception capabilities to real-time 3D SLAM systems. We present an efficient new real-time approach which densely maps an environment using bounded planes and surfels extracted from depth images (like those produced by RGB-D sensors or dense multi-view stereo reconstruction). Our method offers the every-pixel descriptive power of the latest dense SLAM approaches, but takes advantage directly of the planarity of many parts of real-world scenes via a data-driven process to directly regularize planar regions and represent their accurate extent efficiently using an occupancy approach with on-line compression. Large areas can be mapped efficiently and with useful semantic planar structure which enables intuitive and useful AR applications such as using any wall or other planar surface in a scene to display a user's content.

Conference paper

Handa A, Whelan T, McDonald J, Davison AJ et al., 2014, A benchmark for RGB-D visual odometry, 3D reconstruction and SLAM, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 1524-1531, ISSN: 1050-4729

We introduce the Imperial College London and National University of Ireland Maynooth (ICL-NUIM) dataset for the evaluation of visual odometry, 3D reconstruction and SLAM algorithms that typically use RGB-D data. We present a collection of handheld RGB-D camera sequences within synthetically generated environments. RGB-D sequences with perfect ground truth poses are provided as well as a ground truth surface model that enables a method of quantitatively evaluating the final map or surface reconstruction accuracy. Care has been taken to simulate typically observed real-world artefacts in the synthetic imagery by modelling sensor noise in both RGB and depth data. While this dataset is useful for the evaluation of visual odometry and SLAM trajectory estimation, our main focus is on providing a method to benchmark the surface reconstruction accuracy which to date has been missing in the RGB-D community despite the plethora of ground truth RGB-D datasets available.

Conference paper
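
The abstract mentions modelling sensor noise in the synthetic RGB and depth data. A toy version of such corruption, using a simple depth-dependent Gaussian model chosen here purely for illustration (the coefficients are not the dataset's actual noise model), might look like:

```python
import numpy as np

def add_depth_noise(depth_m, base_sigma=0.002, quad_coeff=0.0019, rng=None):
    """Corrupt a clean depth map (metres) with noise whose standard deviation
    grows with depth, loosely imitating structured-light sensors. Coefficients
    are illustrative, not the ICL-NUIM noise model."""
    rng = rng or np.random.default_rng()
    sigma = base_sigma + quad_coeff * depth_m ** 2
    return depth_m + rng.normal(0.0, 1.0, depth_m.shape) * sigma

clean = np.full((480, 640), 2.0)   # synthetic flat wall 2 m away
noisy = add_depth_noise(clean)
print(noisy.std())                 # roughly base_sigma + quad_coeff * 4
```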

Chang PL, Handa A, Davison AJ, Stoyanov D, Edwards PE et al., 2014, Robust real-time visual odometry for stereo endoscopy using dense quadrifocal tracking, Pages: 11-20, ISSN: 0302-9743

Visual tracking in endoscopic scenes is known to be a difficult task due to the lack of textures, tissue deformation and specular reflection. In this paper, we devise a real-time visual odometry framework to robustly track the 6-DoF stereo laparoscope pose using the quadrifocal relationship. The instant motion of a stereo camera creates four views which can be constrained by the quadrifocal geometry. Using the previous stereo pair as a reference frame, the current pair can be warped back by minimising a photometric error function with respect to a camera pose constrained by the quadrifocal geometry. Using a robust estimator can further remove the outliers caused by occlusion, deformation and specular highlights during the optimisation. Since the optimisation uses all pixel data in the images, it results in a very robust pose estimation even for a textureless scene. The quadrifocal geometry is initialised by using a real-time stereo reconstruction algorithm which can be efficiently parallelised and run on the GPU together with the proposed tracking framework. Our system is evaluated using a ground truth synthetic sequence with a known model and we also demonstrate the accuracy and robustness of the approach using phantom and real examples of endoscopic augmented reality. © 2014 Springer International Publishing Switzerland.

Conference paper
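
The core quantity minimised in this work is a photometric error between the reference stereo pair and the current pair warped under a candidate pose. A stripped-down sketch with a generic warp (here just an integer pixel shift standing in for the quadrifocal warp, and a plain squared cost rather than a robust one) is:

```python
import numpy as np

def photometric_error(ref, cur, shift):
    """Sum of squared intensity differences between a reference image and the
    current image warped by an integer pixel shift (dy, dx). The real system
    warps with the quadrifocal geometry and uses a robust (non-squared) cost."""
    dy, dx = shift
    warped = np.roll(np.roll(cur, dy, axis=0), dx, axis=1)
    residual = ref.astype(float) - warped.astype(float)
    return float(np.sum(residual ** 2))

rng = np.random.default_rng(1)
ref = rng.uniform(0, 1, (64, 64))
cur = np.roll(ref, 3, axis=1)                        # camera "moved" 3 px horizontally
errors = {dx: photometric_error(ref, cur, (0, dx)) for dx in range(-5, 6)}
print(min(errors, key=errors.get))                   # -> -3 recovers the shift
```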

Kim H, Handa A, Benosman R, Ieng SH, Davison AJ et al., 2014, Simultaneous mosaicing and tracking with an event camera

An event camera is a silicon retina which outputs not a sequence of video frames like a standard camera, but a stream of asynchronous spikes, each with pixel location, sign and precise timing, indicating when individual pixels record a threshold log intensity change. By encoding only image change, it offers the potential to transmit the information in a standard video but at vastly reduced bitrate, and with huge added advantages of very high dynamic range and temporal resolution. However, event data calls for new algorithms, and in particular we believe that algorithms which incrementally estimate global scene models are best placed to take full advantages of its properties. Here, we show for the first time that an event stream, with no additional sensing, can be used to track accurate camera rotation while building a persistent and high quality mosaic of a scene which is super-resolution accurate and has high dynamic range. Our method involves parallel camera rotation tracking and template reconstruction from estimated gradients, both operating on an event-by-event basis and based on probabilistic filtering.

Conference paper

Tsiotsios C, Angelopoulou ME, Kim T-K, Davison AJ et al., 2014, Backscatter Compensated Photometric Stereo with 3 Sources, 27th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE, Pages: 2259-2266, ISSN: 1063-6919

Conference paper

Chang PL, Stoyanov D, Davison AJ, Edwards P et al., 2013, Real-time dense stereo reconstruction using convex optimisation with a cost-volume for image-guided robotic surgery, Pages: 42-49, ISSN: 0302-9743

Reconstructing the depth of stereo-endoscopic scenes is an important step in providing accurate guidance in robotic-assisted minimally invasive surgery. Stereo reconstruction has been studied for decades but remains a challenge in endoscopic imaging. Current approaches can easily fail to reconstruct an accurate and smooth 3D model due to textureless tissue appearance in the real surgical scene and occlusion by instruments. To tackle these problems, we propose a dense stereo reconstruction algorithm using convex optimisation with a cost-volume to efficiently and effectively reconstruct a smooth model while maintaining depth discontinuity. The proposed approach has been validated by quantitative evaluation using simulation and real phantom data with known ground truth. We also report qualitative results from real surgical images. The algorithm outperforms state of the art methods and can be easily parallelised to run in real-time on recent graphics hardware. © 2013 Springer-Verlag.

Conference paper
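
A bare-bones version of the cost-volume construction this work builds on, per-pixel absolute-difference costs over a disparity range followed by winner-take-all selection, is sketched below; the convex-optimisation smoothing of the volume, which is the paper's contribution, is omitted, and the function names are assumptions.

```python
import numpy as np

def sad_cost_volume(left, right, max_disp):
    """Absolute-difference cost volume (H, W, D) between rectified grey images.
    volume[y, x, d] compares left[y, x] with right[y, x - d]."""
    h, w = left.shape
    volume = np.full((h, w, max_disp), np.inf)
    for d in range(max_disp):
        volume[:, d:, d] = np.abs(left[:, d:] - right[:, : w - d])
    return volume

def winner_take_all(volume):
    """Pick the lowest-cost disparity per pixel (no regularisation)."""
    return np.argmin(volume, axis=2)

rng = np.random.default_rng(0)
right = rng.uniform(0, 1, (32, 64))
left = np.roll(right, 4, axis=1)       # synthetic scene at constant disparity 4
disp = winner_take_all(sad_cost_volume(left, right, max_disp=8))
print(np.median(disp))                 # ~4
```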

Alcantarilla PF, Bergasa LM, Davison AJ, 2013, Gauge-SURF descriptors, Image and Vision Computing, Vol: 31, Pages: 103-116, ISSN: 0262-8856

Journal article

Zienkiewicz J, Lukierski R, Davison A, 2013, Dense, Auto-Calibrating Visual Odometry from a Downward-Looking Camera, 24th British Machine Vision Conference, Publisher: BMVA Press

Conference paper

Salas-Moreno RF, Newcombe RA, Strasdat H, Kelly PHJ, Davison AJ et al., 2013, SLAM++: Simultaneous Localisation and Mapping at the Level of Objects, Computer Vision and Pattern Recognition, Publisher: IEEE Press, Pages: 1352-1359, ISSN: 1063-6919

Conference paper

Chang P-L, Stoyanov D, Davison AJ, Edwards PE et al., 2013, Real-Time Dense Stereo Reconstruction Using Convex Optimisation with a Cost-Volume for Image-Guided Robotic Surgery, 16th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: Springer-Verlag Berlin, Pages: 42-49, ISSN: 0302-9743

Conference paper

Strasdat H, Montiel JMM, Davison AJ, 2012, WITHDRAWN: Visual SLAM: Why filter?, Image and Vision Computing, ISSN: 0262-8856

While the most accurate solution to off-line structure from motion (SFM) problems is undoubtedly to extract as much correspondence information as possible and perform batch optimisation, sequential methods suitable for live video streams must approximate this to fit within fixed computational bounds. Two quite different approaches to real-time SFM - also called visual SLAM (simultaneous localisation and mapping) - have proven successful, but they sparsify the problem in different ways. Filtering methods marginalise out past poses and summarise the information gained over time with a probability distribution. Keyframe methods retain the optimisation approach of global bundle adjustment, but computationally must select only a small number of past frames to process. In this paper we perform a rigorous analysis of the relative advantages of filtering and sparse bundle adjustment for sequential visual SLAM. In a series of Monte Carlo experiments we investigate the accuracy and cost of visual SLAM. We measure accuracy in terms of entropy reduction as well as root mean square error (RMSE), and analyse the efficiency of bundle adjustment versus filtering using combined cost/accuracy measures. In our analysis, we consider both SLAM using a stereo rig and monocular SLAM as well as various different scenes and motion patterns. For all these scenarios, we conclude that keyframe bundle adjustment outperforms filtering, since it gives the most accuracy per unit of computing time. © 2012 Elsevier B.V. All rights reserved.

Journal article
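
The comparison criterion in the abstract, accuracy per unit of computing time, can be made concrete with a toy helper; the numbers below are illustrative placeholders, not results from the paper, and the simple 1/RMSE accuracy measure is an assumption (the paper also uses entropy reduction).

```python
def accuracy_per_cost(rmse_m, runtime_s):
    """Combined cost/accuracy measure in the spirit of the paper's analysis:
    higher is better. 'Accuracy' here is simply 1/RMSE."""
    return (1.0 / rmse_m) / runtime_s

# toy numbers (illustrative only): keyframe bundle adjustment vs filtering
print("Bundle adjustment:", accuracy_per_cost(rmse_m=0.010, runtime_s=0.8))
print("Filtering        :", accuracy_per_cost(rmse_m=0.012, runtime_s=1.5))
```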

Strasdat H, Montiel JMM, Davison AJ, 2012, Visual SLAM: Why filter?, Image and Vision Computing (IVC), Vol: 30, Pages: 65-77, ISSN: 0262-8856

Journal article

