Publications
Li W, Saeedi S, McCormac J, et al., 2018, InteriorNet: Mega-scale Multi-sensor Photo-realistic Indoor Scenes Dataset
Datasets have gained an enormous amount of popularity in the computer vision community, from training and evaluation of Deep Learning-based methods to benchmarking Simultaneous Localization and Mapping (SLAM). Without a doubt, synthetic imagery bears a vast potential due to scalability in terms of amounts of data obtainable without tedious manual ground truth annotations or measurements. Here, we present a dataset with the aim of providing a higher degree of photo-realism, larger scale, more variability as well as serving a wider range of purposes compared to existing datasets. Our dataset leverages the availability of millions of professional interior designs and millions of production-level furniture and object assets -- all coming with fine geometric details and high-resolution texture. We render high-resolution and high frame-rate video sequences following realistic trajectories while supporting various camera types as well as providing inertial measurements. Together with the release of the dataset, we will make the executable program of our interactive simulator software as well as our renderer available at https://interiornetdataset.github.io. To showcase the usability and uniqueness of our dataset, we show benchmarking results of both sparse and dense SLAM algorithms.
Li M, Songur N, Orlov P, et al., 2018, Towards an embodied semantic fovea: Semantic 3D scene reconstruction from ego-centric eye-tracker videos
Incorporating the physical environment is essential for a complete understanding of human behavior in unconstrained every-day tasks. This is especially important in ego-centric tasks where obtaining 3-dimensional information is both limiting and challenging, with current 2D video analysis methods proving insufficient. Here we demonstrate a proof-of-concept system which provides real-time 3D mapping and semantic labeling of the local environment from an ego-centric RGB-D video-stream with 3D gaze point estimation from head-mounted eye tracking glasses. We augment existing work in Semantic Simultaneous Localization And Mapping (Semantic SLAM) with collected gaze vectors. Our system can then find and track objects both inside and outside the user's field of view in 3D from multiple perspectives with reasonable accuracy. We validate our concept by producing a semantic map from images of the NYUv2 dataset while simultaneously estimating gaze position and gaze classes from recorded gaze data of the dataset images.
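To make the idea of combining 3D gaze with a semantic map concrete, here is a minimal hedged sketch (not the authors' implementation; the voxel grid, labels and parameters are invented for illustration): cast the gaze ray from the eye tracker into a labelled 3D map and report the first semantic label it hits.

```python
import numpy as np

def gaze_to_label(origin, direction, voxel_labels, voxel_size=0.05, max_range=5.0):
    """March a gaze ray through a labelled voxel grid and return the first
    semantic label hit (or None). `voxel_labels` maps integer voxel indices
    (i, j, k) to class labels -- a stand-in for the semantic SLAM map."""
    direction = direction / np.linalg.norm(direction)
    step = voxel_size * 0.5
    for t in np.arange(0.0, max_range, step):
        point = origin + t * direction
        idx = tuple(np.floor(point / voxel_size).astype(int))
        if idx in voxel_labels:
            return voxel_labels[idx]
    return None

# Example: a gaze ray from the eye tracker hitting a voxel labelled "cup".
labels = {(10, 0, 20): "cup"}
print(gaze_to_label(np.array([0.0, 0.0, 0.0]), np.array([0.5, 0.0, 1.0]), labels))
```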
Clark R, Bloesch M, Czarnowski J, et al., 2018, LS-Net: Learning to Solve Nonlinear Least Squares for Monocular Stereo, European Conference on Computer Vision
Vespa E, Nikolov N, Grimm M, et al., 2018, Efficient octree-based volumetric SLAM supporting signed-distance and occupancy mapping, IEEE Robotics and Automation Letters, Vol: 3, Pages: 1144-1151, ISSN: 2377-3766
We present a dense volumetric simultaneous localisation and mapping (SLAM) framework that uses an octree representation for efficient fusion and rendering of either a truncated signed distance field (TSDF) or an occupancy map. The primary aim of this letter is to use a single representation of the environment that can be used not only for robot pose tracking and high-resolution mapping, but seamlessly for planning. We show that our highly efficient octree representation of space fits SLAM and planning purposes in a real-time control loop. In a comprehensive evaluation, we demonstrate dense SLAM accuracy and runtime performance on-par with flat hashing approaches when using TSDF-based maps, and considerable speed-ups when using occupancy mapping compared to standard occupancy mapping frameworks. Our SLAM system can run at 10-40 Hz on a modern quadcore CPU, without the need for massive parallelization on a GPU. We furthermore demonstrate probabilistic occupancy mapping as an alternative to TSDF mapping in dense SLAM and show its direct applicability to online motion planning, using the example of informed rapidly-exploring random trees (RRT*).
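For orientation, the two map representations mentioned above rest on standard per-voxel updates. Below is a minimal hedged sketch of both (the weighted running-average TSDF update and the log-odds occupancy update), deliberately ignoring the paper's octree data structure; all parameter values are invented for illustration.

```python
import math

def fuse_tsdf_voxel(tsdf, weight, sdf, trunc=0.1, max_weight=100.0):
    """Standard weighted running-average update of a single TSDF voxel."""
    d = max(-1.0, min(1.0, sdf / trunc))          # truncate and normalise the signed distance
    w_new = min(weight + 1.0, max_weight)
    return (tsdf * weight + d) / (weight + 1.0), w_new

def fuse_occupancy_voxel(log_odds, hit, p_hit=0.7, p_miss=0.4):
    """Standard log-odds occupancy update of a single voxel."""
    p = p_hit if hit else p_miss
    return log_odds + math.log(p / (1.0 - p))

# Toy usage: integrate three observations of the same voxel.
t, w = 0.0, 0.0
for sdf in (0.05, 0.02, -0.01):
    t, w = fuse_tsdf_voxel(t, w, sdf)
print(round(t, 3), w)   # fused TSDF value and accumulated weight
```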
Czarnowski J, Leutenegger S, Davison AJ, 2018, Semantic Texture for Robust Dense Tracking, 16th IEEE International Conference on Computer Vision (ICCV), Publisher: IEEE, Pages: 851-859, ISSN: 2473-9936
McCormac J, Handa A, Leutenegger S, et al., 2017, SceneNet RGB-D: Can 5M synthetic images beat generic ImageNet pre-training on indoor segmentation?, International Conference on Computer Vision 2017, Publisher: IEEE, Pages: 2697-2706, ISSN: 2380-7504
We introduce SceneNet RGB-D, a dataset providing pixel-perfect ground truth for scene understanding problems such as semantic segmentation, instance segmentation, and object detection. It also provides perfect camera poses and depth data, allowing investigation into geometric computer vision problems such as optical flow, camera pose estimation, and 3D scene labelling tasks. Random sampling permits virtually unlimited scene configurations, and here we provide 5M rendered RGB-D images from 16K randomly generated 3D trajectories in synthetic layouts, with random but physically simulated object configurations. We compare the semantic segmentation performance of network weights produced from pretraining on RGB images from our dataset against generic VGG-16 ImageNet weights. After fine-tuning on the SUN RGB-D and NYUv2 real-world datasets we find in both cases that the synthetically pre-trained network outperforms the VGG-16 weights. When synthetic pre-training includes a depth channel (something ImageNet cannot natively provide) the performance is greater still. This suggests that large-scale high-quality synthetic RGB datasets with task-specific labels can be more useful for pretraining than real-world generic pre-training such as ImageNet. We host the dataset at http://robotvault.bitbucket.io/scenenet-rgbd.html.
Platinsky L, Davison AJ, Leutenegger S, 2017, Monocular visual odometry: sparse joint optimisation or dense alternation?, IEEE International Conference on Robotics and Automation (ICRA), 2017, Publisher: IEEE, Pages: 5126-5133
Real-time monocular SLAM is increasingly mature and entering commercial products. However, there is a divide between two techniques providing similar performance. Despite the rise of `dense' and `semi-dense' methods which use large proportions of the pixels in a video stream to estimate motion and structure via alternating estimation, they have not eradicated feature-based methods which use a significantly smaller amount of image information from keypoints and retain a more rigorous joint estimation framework. Dense methods provide more complete scene information, but in this paper we focus on how the amount of information and different optimisation methods affect the accuracy of local motion estimation (monocular visual odometry). This topic becomes particularly relevant after the recent results from a direct sparse system. We propose a new method for fairly comparing the accuracy of SLAM frontends in a common setting. We suggest computational cost models for an overall comparison which indicates that there is relative parity between the approaches at the settings allowed by current serial processors when evaluated under equal conditions.
Lukierski R, Leutenegger S, Davison AJ, 2017, Room layout estimation from rapid omnidirectional exploration, IEEE International Conference on Robotics and Automation (ICRA), 2017, Publisher: IEEE
A new generation of practical, low-cost indoor robots is now using wide-angle cameras to aid navigation, but usually this is limited to position estimation via sparse feature-based SLAM. Such robots usually have little global sense of the dimensions, demarcation or identities of the rooms they are in, information which would be very useful to enable behaviour with much more high level intelligence. In this paper we show that we can augment an omni-directional SLAM pipeline with straightforward dense stereo estimation and simple and robust room model fitting to obtain rapid and reliable estimation of the global shape of typical rooms from short robot motions. We have tested our method extensively in real homes, offices and on synthetic data. We also give examples of how our method can extend to making composite maps of larger rooms, and detecting room transitions.
McCormac J, Handa A, Davison AJ, et al., 2017, SemanticFusion: dense 3D semantic mapping with convolutional neural networks, IEEE International Conference on Robotics and Automation (ICRA), 2017, Publisher: IEEE, Pages: 4628-4635
Ever more robust, accurate and detailed mapping using visual sensing has proven to be an enabling factor for mobile robots across a wide variety of applications. For the next level of robot intelligence and intuitive user interaction, maps need to extend beyond geometry and appearance — they need to contain semantics. We address this challenge by combining Convolutional Neural Networks (CNNs) and a state-of-the-art dense Simultaneous Localization and Mapping (SLAM) system, ElasticFusion, which provides long-term dense correspondences between frames of indoor RGB-D video even during loopy scanning trajectories. These correspondences allow the CNN's semantic predictions from multiple view points to be probabilistically fused into a map. This not only produces a useful semantic 3D map, but we also show on the NYUv2 dataset that fusing multiple predictions leads to an improvement even in the 2D semantic labelling over baseline single frame predictions. We also show that for a smaller reconstruction dataset with larger variation in prediction viewpoint, the improvement over single frame segmentation increases. Our system is efficient enough to allow real-time interactive use at frame-rates of ≈25Hz.
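The probabilistic fusion of per-frame CNN predictions into the map can be pictured as a simple recursive Bayesian label update per surfel. This is a hedged sketch only; the class count and numbers are invented, and the full system includes further refinement steps.

```python
import numpy as np

def fuse_label_distribution(map_probs, cnn_probs):
    """Multiply the surfel's stored class distribution by a new per-pixel CNN
    softmax output and renormalise -- a simple recursive Bayesian update."""
    fused = map_probs * cnn_probs
    return fused / fused.sum()

# Toy usage: two noisy observations of a 3-class surfel (e.g. wall/chair/floor).
p_map = np.full(3, 1.0 / 3.0)                 # uniform prior
for obs in [np.array([0.2, 0.7, 0.1]), np.array([0.3, 0.6, 0.1])]:
    p_map = fuse_label_distribution(p_map, obs)
print(p_map.argmax(), np.round(p_map, 3))     # class 1 dominates after fusion
```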
Laidlow T, Bloesch M, Li W, et al., 2017, Dense RGB-D-Inertial SLAM with Map Deformations, IEEE/RSJ International Conference on Intelligent Robots and Systems, Publisher: IEEE
Oettershagen P, Melzer A, Mantel T, et al., 2017, Design of small hand-launched solar-powered UAVs: From concept study to a multi-day world endurance record flight, Journal of Field Robotics, Vol: 34, Pages: 1352-1377, ISSN: 1556-4967
We present the development process behind AtlantikSolar, a small 6.9 kg hand-launchable low-altitude solar-powered unmanned aerial vehicle (UAV) that recently completed an 81-hour continuous flight and thereby established a new flight endurance world record for all aircraft below 50 kg mass. The goal of our work is to increase the usability of such solar-powered robotic aircraft by maximizing their perpetual flight robustness to meteorological deteriorations such as clouds or winds. We present energetic system models and a design methodology, implement them in our publicly available conceptual design framework for perpetual flight-capable solar-powered UAVs, and finally apply the framework to the AtlantikSolar UAV. We present the detailed AtlantikSolar characteristics as a practical design example. Airframe, avionics, hardware, state estimation, and control method development for autonomous flight operations are described. Flight data are used to validate the conceptual design framework. Flight results from the continuous 81-hour and 2,338 km covered ground distance flight show that AtlantikSolar achieves 39% minimum state-of-charge, 6.8 h excess time and 6.2 h charge margin. These performance metrics are a significant improvement over previous solar-powered UAVs. A performance outlook shows that AtlantikSolar allows perpetual flight in a 6-month window around June 21 at mid-European latitudes, and that multi-day flights with small optical- or infrared-camera payloads are possible for the first time. The demonstrated performance represents the current state-of-the-art in solar-powered low-altitude perpetual flight performance. We conclude with lessons learned from the three-year AtlantikSolar UAV development process and with a sensitivity analysis that identifies the most promising technological areas for future solar-powered UAV performance improvements.
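The perpetual-flight design methodology described above ultimately reduces to a daily energy balance. The following is a deliberately crude, hedged sketch of such a feasibility check; the numbers and the sinusoidal-day approximation are illustrative placeholders, not AtlantikSolar's actual parameters.

```python
def perpetual_flight_feasible(p_level_w, p_solar_peak_w, day_hours, battery_wh):
    """Crude perpetual-flight check: solar energy harvested over the day must
    cover 24 h of level flight, and the battery must bridge the night.
    A rough sketch only -- real design frameworks model efficiencies,
    latitude, season and safety margins explicitly."""
    night_hours = 24.0 - day_hours
    harvested_wh = 0.5 * p_solar_peak_w * day_hours     # rough sinusoidal-day approximation
    daily_need_wh = p_level_w * 24.0
    bridges_night = battery_wh >= p_level_w * night_hours
    return harvested_wh >= daily_need_wh and bridges_night

# Illustrative numbers only.
print(perpetual_flight_feasible(p_level_w=40, p_solar_peak_w=250, day_hours=14, battery_wh=450))
```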
Zienkiewicz J, Tsiotsios C, Davison AJ, et al., 2016, Monocular, Real-Time Surface Reconstruction using Dynamic Level of Detail, International Conference on 3DVision, Publisher: IEEE
We present a scalable, real-time capable method for robust surface reconstruction that explicitly handles multiple scales. As a monocular camera browses a scene, our algorithm processes images as they arrive and incrementally builds a detailed surface model. While most of the existing reconstruction approaches rely on volumetric or point-cloud representations of the environment, we perform depth-map and colour fusion directly into a multi-resolution triangular mesh that can be adaptively tessellated using the concept of Dynamic Level of Detail. Our method relies on least-squares optimisation, which enables a probabilistically sound and principled formulation of the fusion algorithm. We demonstrate that our method is capable of obtaining high quality, close-up reconstruction, as well as capturing overall scene geometry, while being memory and computationally efficient.
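The least-squares fusion mentioned above can be pictured, per mesh vertex, as an information-weighted running mean, which is the closed-form minimiser of the growing quadratic cost. A hedged toy sketch (not the paper's multi-resolution implementation):

```python
def fuse_vertex(height, precision, z_obs, obs_precision):
    """Incremental least-squares fusion of a new depth observation into a
    mesh-vertex estimate: the information-weighted mean of prior and observation."""
    new_precision = precision + obs_precision
    new_height = (precision * height + obs_precision * z_obs) / new_precision
    return new_height, new_precision

# Toy usage: three observations with different confidences.
h, p = 0.0, 1e-6                     # essentially uninformative prior
for z, w in [(1.02, 4.0), (0.98, 4.0), (1.10, 1.0)]:
    h, p = fuse_vertex(h, p, z, w)
print(round(h, 3))                    # information-weighted average of the observations
```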
McCormac J, Handa A, Leutenegger S, et al., 2016, SceneNet RGB-D: 5M Photorealistic Images of Synthetic Indoor Trajectories with Ground Truth
We introduce SceneNet RGB-D, expanding the previous work of SceneNet to enable large scale photorealistic rendering of indoor scene trajectories. It provides pixel-perfect ground truth for scene understanding problems such as semantic segmentation, instance segmentation, and object detection, and also for geometric computer vision problems such as optical flow, depth estimation, camera pose estimation, and 3D reconstruction. Random sampling permits virtually unlimited scene configurations, and here we provide a set of 5M rendered RGB-D images from over 15K trajectories in synthetic layouts with random but physically simulated object poses. Each layout also has random lighting, camera trajectories, and textures. The scale of this dataset is well suited for pre-training data-driven computer vision techniques from scratch with RGB-D inputs, which previously has been limited by relatively small labelled datasets in NYUv2 and SUN RGB-D. It also provides a basis for investigating 3D scene labelling tasks by providing perfect camera poses and depth data as proxy for a SLAM system. We host the dataset at http://robotvault.bitbucket.io/scenenet-rgbd.html.
Johns E, Leutenegger S, Davison AJ, 2016, Pairwise Decomposition of Image Sequences for Active Multi-View Recognition, Computer Vision and Pattern Recognition, Publisher: Computer Vision Foundation (CVF), ISSN: 1063-6919
A multi-view image sequence provides a much richer capacity for object recognition than from a single image. However, most existing solutions to multi-view recognition typically adopt hand-crafted, model-based geometric methods, which do not readily embrace recent trends in deep learning. We propose to bring Convolutional Neural Networks to generic multi-view recognition, by decomposing an image sequence into a set of image pairs, classifying each pair independently, and then learning an object classifier by weighting the contribution of each pair. This allows for recognition over arbitrary camera trajectories, without requiring explicit training over the potentially infinite number of camera paths and lengths. Building these pairwise relationships then naturally extends to the next-best-view problem in an active recognition framework. To achieve this, we train a second Convolutional Neural Network to map directly from an observed image to next viewpoint. Finally, we incorporate this into a trajectory optimisation task, whereby the best recognition confidence is sought for a given trajectory length. We present state-of-the-art results in both guided and unguided multi-view recognition on the ModelNet dataset, and show how our method can be used with depth images, greyscale images, or both.
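As a hedged illustration of the "decompose into pairs, classify, then weight each pair's contribution" idea, a toy combination step might look as follows; the score arrays and weights are invented here, whereas the paper learns them with CNNs.

```python
import numpy as np

def classify_sequence(pair_scores, pair_weights):
    """Combine per-pair class score vectors into a sequence-level prediction
    by a weighted sum of each pair's contribution."""
    pair_scores = np.asarray(pair_scores)           # (num_pairs, num_classes)
    pair_weights = np.asarray(pair_weights)         # (num_pairs,)
    combined = (pair_weights[:, None] * pair_scores).sum(axis=0)
    return combined.argmax(), combined / combined.sum()

# Toy usage: three image pairs voting over four object classes.
scores = [[0.1, 0.6, 0.2, 0.1],
          [0.2, 0.5, 0.2, 0.1],
          [0.4, 0.3, 0.2, 0.1]]
weights = [1.0, 0.8, 0.3]
print(classify_sequence(scores, weights))
```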
Bardow P, Davison AJ, Leutenegger S, 2016, Simultaneous Optical Flow and Intensity Estimation from an Event Camera, Computer Vision and Pattern Recognition 2016, Publisher: Computer Vision Foundation (CVF), ISSN: 1063-6919
Event cameras are bio-inspired vision sensors which mimic retinas to measure per-pixel intensity change rather than outputting an actual intensity image. This proposed paradigm shift away from traditional frame cameras offers significant potential advantages: namely avoiding high data rates, dynamic range limitations and motion blur. Unfortunately, however, established computer vision algorithms cannot in general be applied directly to event cameras. Methods proposed so far to reconstruct images, estimate optical flow, track a camera and reconstruct a scene come with severe restrictions on the environment or on the motion of the camera, e.g. allowing only rotation. Here, we propose, to the best of our knowledge, the first algorithm to simultaneously recover the motion field and brightness image, while the camera undergoes a generic motion through any scene. Our approach employs minimisation of a cost function that contains the asynchronous event data as well as spatial and temporal regularisation within a sliding window time interval. Our implementation relies on GPU optimisation and runs in near real-time. In a series of examples, we demonstrate the successful operation of our framework, including in situations where conventional cameras suffer from dynamic range limitations and motion blur.
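Schematically, the kind of sliding-window variational cost described above could be written as follows; this is a hedged reconstruction with notation chosen here, not the paper's exact functional.

```latex
% u = motion field, L = brightness image, W = set of events in the sliding window.
% First term: per-event data cost; middle terms: spatial regularisers on u and L;
% last term: temporal (brightness-constancy style) coupling.
\begin{equation}
E(u, L) \;=\; \sum_{e \in \mathcal{W}} \rho\big(e;\, u, L\big)
\;+\; \lambda_1 \int_{\Omega} \lVert \nabla u \rVert \, d\mathbf{x}
\;+\; \lambda_2 \int_{\Omega} \lVert \nabla L \rVert \, d\mathbf{x}
\;+\; \lambda_3 \int_{\Omega} \big\lvert \partial_t L + \nabla L \cdot u \big\rvert \, d\mathbf{x}
\end{equation}
```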
Whelan T, Salas-Moreno RF, Glocker B, et al., 2016, ElasticFusion: real-time dense SLAM and light source estimation, International Journal of Robotics Research, Vol: 35, Pages: 1697-1716, ISSN: 1741-3176
We present a novel approach to real-time dense visual SLAM. Our system is capable of capturing comprehensive dense globally consistent surfel-based maps of room scale environments and beyond, explored using an RGB-D camera in an incremental online fashion, without pose graph optimisation or any post-processing steps. This is accomplished by using dense frame-to-model camera tracking and windowed surfel-based fusion coupled with frequent model refinement through non-rigid surface deformations. Our approach applies local model-to-model surface loop closure optimisations as often as possible to stay close to the mode of the map distribution, while utilising global loop closure to recover from arbitrary drift and maintain global consistency. In the spirit of improving map quality as well as tracking accuracy and robustness, we furthermore explore a novel approach to real-time discrete light source detection. This technique is capable of detecting numerous light sources in indoor environments in real-time as a user handheld camera explores the scene. Absolutely no prior information about the scene or number of light sources is required. By making a small set of simple assumptions about the appearance properties of the scene, our method can incrementally estimate both the quantity and location of multiple light sources in the environment in an online fashion. Our results demonstrate that our technique functions well in many different environments and lighting configurations. We show that this enables (a) more realistic augmented reality (AR) rendering; (b) a richer understanding of the scene beyond pure geometry; and (c) more accurate and robust photometric tracking.
Johns E, Leutenegger S, Davison AJ, 2016, Deep learning a grasp function for grasping under gripper pose uncertainty, 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, Publisher: IEEE, Pages: 4461-4468, ISSN: 2153-0866
This paper presents a new method for parallel-jaw grasping of isolated objects from depth images, under large gripper pose uncertainty. Whilst most approaches aim to predict the single best grasp pose from an image, our method first predicts a score for every possible grasp pose, which we denote the grasp function. With this, it is possible to achieve grasping robust to the gripper's pose uncertainty, by smoothing the grasp function with the pose uncertainty function. Therefore, if the single best pose is adjacent to a region of poor grasp quality, that pose will no longer be chosen, and instead a pose will be chosen which is surrounded by a region of high grasp quality. To learn this function, we train a Convolutional Neural Network which takes as input a single depth image of an object, and outputs a score for each grasp pose across the image. Training data for this is generated by use of physics simulation and depth image simulation with 3D object meshes, to enable acquisition of sufficient data without requiring exhaustive real-world experiments. We evaluate with both synthetic and real experiments, and show that the learned grasp score is more robust to gripper pose uncertainty than when this uncertainty is not accounted for.
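The core robustness idea, smoothing the dense grasp score map with the gripper pose uncertainty before picking the best pose, can be sketched in a few lines. This is a hedged illustration assuming a Gaussian pose uncertainty, not the paper's trained network or evaluation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def robust_grasp_pose(grasp_scores, pose_sigma_px=3.0):
    """Smooth a dense grasp-score map with an (assumed Gaussian) gripper pose
    uncertainty and return the most robust grasp location: a pose surrounded
    by high grasp quality rather than the single sharpest peak."""
    expected_quality = gaussian_filter(grasp_scores, sigma=pose_sigma_px)
    return np.unravel_index(np.argmax(expected_quality), expected_quality.shape)

# Toy map: an isolated sharp peak vs. a broad plateau of good grasps.
scores = np.zeros((50, 50))
scores[10, 10] = 1.0                       # sharp but risky peak
scores[30:40, 30:40] = 0.8                 # broad reliable region
print(robust_grasp_pose(scores))           # lands inside the broad region
```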
Zienkiewicz J, Davison AJ, Leutenegger S, 2016, Real-Time Height Map Fusion using Differentiable Rendering, 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, Publisher: IEEE, ISSN: 2153-0866
We present a robust real-time method which performs dense reconstruction of high quality height maps from monocular video. By representing the height map as a triangular mesh, and using an efficient differentiable rendering approach, our method enables rigorous incremental probabilistic fusion of standard locally estimated depth and colour into an immediately usable dense model. We present results for the application of free space and obstacle mapping by a low-cost robot, showing that detailed maps suitable for autonomous navigation can be obtained using only a single forward-looking camera.
Whelan T, Salas Moreno R, Leutenegger S, et al., 2016, Modelling a Three-Dimensional Space, WO2016189274
Kim H, Leutenegger S, Davison AJ, 2016, Real-time 3D reconstruction and 6-DoF tracking with an event camera, ECCV 2016-European Conference on Computer Vision, Publisher: Springer, Pages: 349-364, ISSN: 0302-9743
We propose a method which can perform real-time 3D reconstruction from a single hand-held event camera with no additional sensing, and works in unstructured scenes of which it has no prior knowledge. It is based on three decoupled probabilistic filters, each estimating 6-DoF camera motion, scene logarithmic (log) intensity gradient and scene inverse depth relative to a keyframe, and we build a real-time graph of these to track and model over an extended local workspace. We also upgrade the gradient estimate for each keyframe into an intensity image, allowing us to recover a real-time video-like intensity sequence with spatial and temporal super-resolution from the low bit-rate input event stream. To the best of our knowledge, this is the first algorithm provably able to track a general 6D motion along with reconstruction of arbitrary structure including its intensity and the reconstruction of grayscale video that exclusively relies on event camera data.
Leutenegger S, Hürzeler C, Stowers AK, et al., 2016, Flying Robots, Springer Handbook of Robotics, Editors: Siciliano, Khatib, Publisher: Springer-Verlag Berlin, Pages: 623-669, ISBN: 978-3-319-32550-7
Lukierski R, Leutenegger S, Davison AJ, 2015, Rapid free-space mapping from a single omnidirectional camera, 2015 European Conference on Mobile Robots (ECMR), Publisher: IEEE, Pages: 1-8
Low-cost robots such as floor cleaners generally rely on limited perception and simple algorithms, but some new models now have enough sensing capability and computation power to enable Simultaneous Localisation And Mapping (SLAM) and intelligent guided navigation. In particular, computer vision is now a serious option in low cost robotics, though its use to date has been limited to feature-based mapping for localisation. Dense environment perception such as free space finding has required additional specialised sensors, adding expense and complexity. Here we show that a robot with a single passive omnidirectional camera can perform rapid global free-space reasoning within typical rooms. Upon entering a new room, the robot makes a circular movement to capture a closely-spaced omni image sequence with disparity in all horizontal directions. A feature-based visual SLAM procedure obtains accurate poses for these frames before passing them to a dense matching step, 3D semi-dense reconstruction and visibility reasoning. The result is turned into a 2D occupancy map, which can be improved and extended if necessary through further movement. This rapid, passive technique can capture high quality free space information which gives a robot a global understanding of the space around it. We present results in several scenes, including quantitative comparison with laser-based mapping.
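As a hedged illustration of the final visibility-reasoning step, per-direction obstacle distances can be turned into a 2D free-space grid by marking cells along each ray as free up to the first obstacle. All names and grid parameters below are invented for illustration.

```python
import numpy as np

def free_space_map(ranges, angles, grid_size=100, cell=0.05):
    """Turn per-direction obstacle distances (e.g. from omnidirectional dense
    matching) into a simple 2D grid centred on the robot:
    0 = unknown, 1 = free, 2 = occupied."""
    grid = np.zeros((grid_size, grid_size), dtype=np.uint8)
    c = grid_size // 2
    for r, a in zip(ranges, angles):
        for d in np.arange(0.0, r, cell):
            i = c + int(round(d * np.cos(a) / cell))
            j = c + int(round(d * np.sin(a) / cell))
            if 0 <= i < grid_size and 0 <= j < grid_size:
                grid[i, j] = 1                       # free along the ray
        i = c + int(round(r * np.cos(a) / cell))
        j = c + int(round(r * np.sin(a) / cell))
        if 0 <= i < grid_size and 0 <= j < grid_size:
            grid[i, j] = 2                           # obstacle at the end of the ray
    return grid

# Toy usage: a circular "room" of 2 m radius sampled every 2 degrees.
angles = np.deg2rad(np.arange(0, 360, 2))
grid = free_space_map(np.full(angles.shape, 2.0), angles)
print((grid == 1).sum(), (grid == 2).sum())
```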
Whelan T, Leutenegger S, Salas-Moreno RF, et al., 2015, ElasticFusion: Dense SLAM without a Pose Graph, Robotics: Science and Systems, Publisher: Robotics: Science and Systems, ISSN: 2330-765X
Milford M, Kim H, Mangan M, et al., 2015, Place Recognition with Event-based Cameras and a Neural Implementation of SeqSLAM
Event-based cameras offer much potential to the fields of robotics and computer vision, in part due to their large dynamic range and extremely high "frame rates". These attributes make them, at least in theory, particularly suitable for enabling tasks like navigation and mapping on high speed robotic platforms under challenging lighting conditions, a task which has been particularly challenging for traditional algorithms and camera sensors. Before these tasks become feasible, however, progress must be made towards adapting and innovating current RGB-camera-based algorithms to work with event-based cameras. In this paper we present ongoing research investigating two distinct approaches to incorporating event-based cameras for robotic navigation: the investigation of suitable place recognition / loop closure techniques, and the development of efficient neural implementations of place recognition techniques that enable the possibility of place recognition using event-based cameras at very high frame rates using neuromorphic computing hardware.
Leutenegger S, Lynen S, Bosse M, et al., 2015, Keyframe-based visual–inertial odometry using nonlinear optimization, The International Journal of Robotics Research, Vol: 34, Pages: 314-334, ISSN: 0278-3649
Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advancements in visual estimation suggest that nonlinear optimization offers superior accuracy, while still tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, and real-time operation ensured, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware that accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and monocular version of our algorithm with and without online extrinsics estimation is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach declaredly demands more computation, we show its superior performance in terms of accuracy.
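Schematically, the combined visual-inertial cost described above takes the following form: reprojection error terms for all landmark observations plus inertial error terms between successive keyframes, each weighted by its information matrix. This is a hedged paraphrase with notation chosen here, not copied from the paper.

```latex
% e_r = landmark reprojection errors, e_s = IMU (inertial) error terms,
% W   = the corresponding information (inverse covariance) matrices,
% k   = keyframe index, J(k) = landmarks visible in keyframe k.
\begin{equation}
J(\mathbf{x}) \;=\;
\sum_{k} \sum_{j \in \mathcal{J}(k)}
  \mathbf{e}_r^{j,k\,\top} \, \mathbf{W}_r^{j,k} \, \mathbf{e}_r^{j,k}
\;+\;
\sum_{k}
  \mathbf{e}_s^{k\,\top} \, \mathbf{W}_s^{k} \, \mathbf{e}_s^{k}
\end{equation}
```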
Oettershagen P, Melzer A, Mantel T, et al., 2015, A Solar-Powered Hand-Launchable UAV for Low-Altitude Multi-Day Continuous Flight, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE COMPUTER SOC, Pages: 3986-3993, ISSN: 1050-4729
Leutenegger S, 2014, Unmanned Solar Airplanes: Design and Algorithms for Efficient and Robust Autonomous Operation
Nikolic J, Rehder J, Burri M, et al., 2014, A Synchronized Visual-Inertial Sensor System with FPGA Pre-Processing for Accurate Real-Time SLAM