Advanced Robotics

Module aims

This course addresses topics in advanced robotics, focusing on real-time state estimation and mapping, with applications to drones as well as Augmented and Virtual Reality.

We build on the knowledge acquired in the Robotics course (CO333). We extend the challenges covered there towards 6D motion estimation and control, focusing on the camera as the core sensor. We furthermore discuss fusion with complementary sensors such as Inertial Measurement Units (IMUs), which have become very popular in recent years.

The objective of this course is to provide the understanding, mathematical tools, and practical experience that allow students to implement their own multi-sensor Simultaneous Localisation and Mapping (SLAM) algorithms for deployment on a broad range of mobile robots, such as a multicopter Unmanned Aerial System (UAS), which is exactly what we will do in the practicals.

The practicals lead on to the “Amazin’ Challenge” to be held in the last session: students work on a multicopter UAS that operates autonomously with on-board vision-based state estimation and control, such that a simple delivery task can be accomplished reliably, accurately, and quickly.

Learning outcomes

On successful completion of the module, students should be able to:

  • explain the software components of a typical mobile robot, as well as their interactions with hardware (sensors, motors),
  • explain the components of multi-sensor Simultaneous Localisation and Mapping (SLAM) systems,
  • describe the kinematics and dynamics of wheeled and flying robots mathematically,
  • describe multi-sensor estimators with sparse and dense map representations mathematically,
  • describe different feedback-control approaches for robots mathematically and explain their differences,
  • implement basic estimators as well as feedback controllers that run in real time, using modern C++.

Module syllabus

The course is broken down into the following sub-modules, which consist of lectures as well as related practicals running in parallel.
 
1. Introduction, Problem Formulation and Examples
We provide an introduction to mobile robotics, its applications, SLAM and related estimation problems, and popular approaches.
 
2. Representations and Sensors
We give an overview of the main sensors, such as laser scanners, wheel odometry, IMUs, magnetometers, pressure sensors, GPS and, most importantly, 2D and 3D cameras. Furthermore, robot and sensor-internal state representations are discussed, including orientation parameterisations (rotation matrices, quaternions).
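
To give a concrete flavour of orientation parameterisation, the following minimal C++ sketch converts a unit quaternion into the equivalent rotation matrix. It assumes the Hamilton convention and a (w, x, y, z) ordering, which are illustrative choices rather than prescriptions for the practicals.

  #include <array>
  #include <cmath>
  #include <iostream>

  // Sketch: convert a unit quaternion q = (w, x, y, z) in the Hamilton
  // convention into the equivalent 3x3 rotation matrix.
  using Mat3 = std::array<std::array<double, 3>, 3>;

  Mat3 quaternionToRotationMatrix(double w, double x, double y, double z) {
    return Mat3{{{1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)},
                 {2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)},
                 {2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)}}};
  }

  int main() {
    // Example: rotation by 90 degrees about the z-axis.
    const double angle = std::acos(-1.0) / 2.0;  // pi/2
    const Mat3 R =
        quaternionToRotationMatrix(std::cos(0.5 * angle), 0.0, 0.0, std::sin(0.5 * angle));
    for (const auto& row : R) {
      for (double entry : row) std::cout << entry << ' ';
      std::cout << '\n';
    }
  }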
 
3. Kinematics and Temporal Models
We revisit wheel odometry and discuss IMU kinematics, both in direct and indirect formulations. Continuous-time Ordinary Differential Equations and their discrete-time approximations are revisited. Finally, we also cover physics-based dynamic models, such as 6D rigid-body dynamics.
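
As a minimal illustration of discretising a continuous-time ODE, the sketch below integrates a 1D constant-acceleration model with forward-Euler steps; the state layout, step size and acceleration input are assumptions made purely for illustration.

  #include <iostream>

  // Sketch: forward-Euler discretisation of the continuous-time ODE
  //   d/dt [p, v] = [v, a]
  // i.e. a 1D point mass driven by an acceleration input (e.g. one IMU axis).
  struct State {
    double p;  // position [m]
    double v;  // velocity [m/s]
  };

  State eulerStep(const State& x, double a, double dt) {
    // x_{k+1} = x_k + dt * f(x_k, u_k)
    return State{x.p + dt * x.v, x.v + dt * a};
  }

  int main() {
    State x{0.0, 0.0};
    const double dt = 0.01;          // 100 Hz integration step (assumed)
    const double a = 1.0;            // constant acceleration input [m/s^2]
    for (int k = 0; k < 100; ++k) {  // integrate over 1 second
      x = eulerStep(x, a, dt);
    }
    std::cout << "p = " << x.p << " m, v = " << x.v << " m/s\n";  // roughly 0.5 m and 1 m/s
  }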
 
4. The Extended Kalman Filter in Practice
We introduce the practical aspects of how an Extended Kalman Filter (EKF) operates and relate it to the particle-filter-based localisation covered in CO333. We then look at linearisation and at how to implement the EKF in software, along with examples.
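
The following sketch shows the predict/update structure for the simplest possible case, a scalar random-walk state with a direct noisy measurement, where all Jacobians equal 1 and the EKF reduces to the linear Kalman filter; the noise parameters are illustrative assumptions only.

  #include <iostream>
  #include <random>

  // Sketch: the EKF predict/update cycle for a scalar random-walk state x with
  // a direct noisy measurement z = x + noise. With linear models the Jacobians
  // are 1, so the EKF reduces to the standard Kalman filter.
  struct Filter {
    double x = 0.0;  // state estimate
    double P = 1.0;  // state covariance (a 1x1 "matrix")

    void predict(double Q) {  // Q: process noise variance
      // The random-walk model keeps x unchanged; only the uncertainty grows.
      P += Q;
    }
    void update(double z, double R) {  // z: measurement, R: measurement noise variance
      const double K = P / (P + R);    // Kalman gain
      x += K * (z - x);                // correct the estimate with the innovation
      P *= (1.0 - K);                  // shrink the covariance accordingly
    }
  };

  int main() {
    std::mt19937 rng(42);
    std::normal_distribution<double> noise(0.0, 0.5);
    Filter kf;
    const double truth = 2.0;  // constant true state, assumed for the demo
    for (int k = 0; k < 50; ++k) {
      kf.predict(0.001);
      kf.update(truth + noise(rng), 0.25);
    }
    std::cout << "estimate: " << kf.x << " (truth: " << truth << ")\n";
  }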
 
5. Feedback Control
Controlling motion with feedback from sensing and estimation is a core component of any mobile robot. We revisit the PID control introduced in CO333 in more detail and look at model-based control that exploits the previously introduced kinematic or dynamic temporal models to achieve higher accuracy.
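
As a reminder of the feedback-control baseline, the sketch below implements a textbook discrete-time PID controller and applies it to a 1D double-integrator plant; the gains and step size are illustrative, and a practical implementation would additionally need integrator anti-windup and output saturation.

  #include <iostream>

  // Sketch: a textbook discrete-time PID controller driving a 1D double
  // integrator towards a position reference.
  class Pid {
   public:
    Pid(double kp, double ki, double kd) : kp_(kp), ki_(ki), kd_(kd) {}

    // error = reference - measurement; dt = time since the last call [s].
    double control(double error, double dt) {
      integral_ += error * dt;
      const double derivative = (error - previousError_) / dt;
      previousError_ = error;
      return kp_ * error + ki_ * integral_ + kd_ * derivative;
    }

   private:
    double kp_, ki_, kd_;
    double integral_ = 0.0;
    double previousError_ = 0.0;
  };

  int main() {
    Pid pid(4.0, 0.0, 2.5);   // illustrative gains (effectively PD here)
    double p = 0.0, v = 0.0;  // plant position and velocity
    const double dt = 0.01;
    for (int k = 0; k < 500; ++k) {
      const double u = pid.control(1.0 - p, dt);  // acceleration command towards 1 m
      v += u * dt;
      p += v * dt;
    }
    std::cout << "position after 5 s: " << p << " m\n";  // close to the 1 m reference
  }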
 
6. Nonlinear Least Squares
We establish the relationship between Maximum Likelihood (ML) / Maximum a Posteriori (MAP) estimation and nonlinear least squares problems. We derive the solution to linear least squares and deduce iterative minimisation techniques, such as Gauss-Newton and Levenberg-Marquardt. We apply these algorithms to various examples relevant to real-world robotics, e.g. camera calibration.
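
To make the link between a nonlinear least-squares problem and an iterative solver concrete, the sketch below runs Gauss-Newton on a one-parameter curve-fitting example; the model, data and initial guess are made up for illustration.

  #include <cmath>
  #include <cstddef>
  #include <iostream>
  #include <vector>

  // Sketch: Gauss-Newton on a one-parameter nonlinear least-squares problem.
  // We fit the model y = exp(a * x) to data by minimising
  //   sum_i r_i(a)^2   with   r_i(a) = y_i - exp(a * x_i).
  int main() {
    // Noise-free synthetic data generated with a "true" parameter (assumed).
    const double aTrue = 0.7;
    std::vector<double> xs, ys;
    for (int i = 0; i <= 10; ++i) {
      xs.push_back(0.1 * i);
      ys.push_back(std::exp(aTrue * xs.back()));
    }

    double a = 0.0;  // initial guess
    for (int iteration = 0; iteration < 10; ++iteration) {
      double JtJ = 0.0, Jtr = 0.0;
      for (std::size_t i = 0; i < xs.size(); ++i) {
        const double r = ys[i] - std::exp(a * xs[i]);   // residual
        const double J = -xs[i] * std::exp(a * xs[i]);  // Jacobian d r / d a
        JtJ += J * J;
        Jtr += J * r;
      }
      const double delta = -Jtr / JtJ;  // solve the normal equations (J^T J) delta = -J^T r
      a += delta;
      if (std::abs(delta) < 1e-12) break;  // converged
    }
    std::cout << "estimated a = " << a << " (true: " << aTrue << ")\n";
  }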
 
7. Vision-Based Simultaneous Localisation and Mapping
A real SLAM system does not consist of the estimator core alone, but must address a large range of additional challenges: here, we discuss sparse keypoint detection, description and matching, as well as related outlier rejection and bootstrapping (e.g. RANSAC-based). We then turn to dense tracking and mapping systems with their specific system-level challenges. Finally, we discuss techniques for identifying loop-closure constraints, as well as ways to apply them as part of an estimation problem, in both sparse landmark settings and dense frameworks. These aspects will be related to case studies of state-of-the-art SLAM systems.
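
As an illustration of the RANSAC principle mentioned above, the sketch below robustly fits a line to 2D points despite gross outliers. In a SLAM front-end the same hypothesise-and-verify scheme would use a geometric model, such as a relative pose estimated from keypoint matches, instead of a line; all numbers here are illustrative assumptions.

  #include <cmath>
  #include <cstddef>
  #include <iostream>
  #include <random>
  #include <vector>

  // Sketch: the RANSAC hypothesise-and-verify loop on a toy problem, robustly
  // fitting a line y = m*x + c to 2D points despite gross outliers.
  struct Point {
    double x, y;
  };

  int main() {
    std::mt19937 rng(1);
    std::uniform_real_distribution<double> ux(0.0, 10.0), uy(-20.0, 20.0);
    std::vector<Point> points;
    for (int i = 0; i < 80; ++i) {  // inliers on y = 2x + 1
      const double x = ux(rng);
      points.push_back({x, 2.0 * x + 1.0});
    }
    for (int i = 0; i < 20; ++i) points.push_back({ux(rng), uy(rng)});  // gross outliers

    std::uniform_int_distribution<std::size_t> pick(0, points.size() - 1);
    const double threshold = 0.5;  // inlier distance threshold (assumed)
    int bestInliers = 0;
    double bestM = 0.0, bestC = 0.0;
    for (int iteration = 0; iteration < 200; ++iteration) {
      // 1. Sample a minimal set: two points define a line hypothesis.
      const Point& p1 = points[pick(rng)];
      const Point& p2 = points[pick(rng)];
      if (std::abs(p2.x - p1.x) < 1e-9) continue;  // degenerate sample
      const double m = (p2.y - p1.y) / (p2.x - p1.x);
      const double c = p1.y - m * p1.x;
      // 2. Verify: count the points consistent with the hypothesis.
      int inliers = 0;
      for (const Point& p : points) {
        if (std::abs(p.y - (m * p.x + c)) < threshold) ++inliers;
      }
      // 3. Keep the hypothesis with the largest support.
      if (inliers > bestInliers) {
        bestInliers = inliers;
        bestM = m;
        bestC = c;
      }
    }
    std::cout << "best line: y = " << bestM << "*x + " << bestC << " with "
              << bestInliers << " inliers\n";
  }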

Pre-requisites

Required: Mathematics for Machine Learning (CO496) and Robotics (CO333). For MSc students, CO496 is required, along with a course equivalent to CO333 from their previous studies.

Recommended but optional: Computer Vision (CO316).

Teaching methods

2 hours of lectures and 2 hours of practicals every week.

Lectures 1-5 are geared towards direct applicability in the practicals, whereas lectures 6-8 provide deeper insight and extend these skills, which may optionally be used in the practicals to improve the quality of the developed solution.

The practicals follow the lectures and lead to autonomously operating a drone with visual feedback. This means that students will implement a visual-inertial state estimator that uses visual markers and feed its output into a model-based controller that should track a specified trajectory as accurately as possible. As a platform, the Parrot AR Drone is suggested. To ensure safety, one of the small offices next to the Huxley labs is suggested as a dedicated testing room.

Assessments

  • 3 assessed practicals, counting 10% for MSc and 15% for MEng
  • Exam: answer 3 out of 4 questions

Module leaders

Dr Stefan Leutenegger