Imperial College London

Dr Edward Johns

Faculty of Engineering, Department of Computing

Lecturer
 
 
 

Contact

 

e.johns

 
 

Location

 

365 ACE Extension, South Kensington Campus



 

Publications


Johns E, Liu S, Davison A, 2019, End-To-End Multi-Task Learning With Attention, The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

Conference paper

James S, Davison A, Johns E, 2017, Transferring end-to-end visuomotor control from simulation to real world for a multi-stage task, Conference on Robot Learning, Publisher: PMLR, Pages: 334-343

End-to-end control for robot manipulation and grasping is emerging as an attractive alternative to traditional pipelined approaches. However, end-to-end methods tend to either be slow to train, exhibit little or no generalisability, or lack the ability to accomplish long-horizon or multi-stage tasks. In this paper, we show how two simple techniques can lead to end-to-end (image to velocity) execution of a multi-stage task, which is analogous to a simple tidying routine, without having seen a single real image. This involves locating, reaching for, and grasping a cube, then locating a basket and dropping the cube inside. To achieve this, robot trajectories are computed in a simulator, to collect a series of control velocities which accomplish the task. Then, a CNN is trained to map observed images to velocities, using domain randomisation to enable generalisation to real world images. Results show that we are able to successfully accomplish the task in the real world with the ability to generalise to novel environments, including those with dynamic lighting conditions, distractor objects, and moving objects, including the basket itself. We believe our approach to be simple, highly scalable, and capable of learning long-horizon tasks that have until now not been shown with the state-of-the-art in end-to-end robot control.
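
As a rough illustration of the image-to-velocity mapping described above (the layer sizes, input resolution, and six-dimensional velocity output are assumptions for the sketch, not the paper's architecture), a small convolutional network regressing a velocity command from an RGB observation might look like this in Python:

import torch
import torch.nn as nn

class ImageToVelocity(nn.Module):
    def __init__(self, num_velocity_dims=6):   # assumed 6-DoF velocity command
        super().__init__()
        # Illustrative encoder; the paper's exact layers and sizes differ.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_velocity_dims)

    def forward(self, image):
        features = self.encoder(image).flatten(1)
        return self.head(features)   # predicted velocity command

# Training would regress simulator-generated target velocities from
# domain-randomised renders, e.g. mse_loss(model(images), target_velocities).
model = ImageToVelocity()
velocity = model(torch.rand(1, 3, 128, 128))   # hypothetical input resolution
print(velocity.shape)                           # torch.Size([1, 6])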

Conference paper

Ye M, Johns E, Walter B, Meining A, Yang G-Z et al., 2017, An image retrieval framework for real-time endoscopic image retargeting, International Journal of Computer Assisted Radiology and Surgery, Vol: 12, Pages: 1281-1292, ISSN: 1861-6429

Purpose: Serial endoscopic examinations of a patient are important for early diagnosis of malignancies in the gastrointestinal tract. However, retargeting for optical biopsy is challenging due to extensive tissue variations between examinations, requiring the method to be tolerant to these changes whilst enabling real-time retargeting. Method: This work presents an image retrieval framework for inter-examination retargeting. We propose both a novel image descriptor tolerant of long-term tissue changes and a novel descriptor matching method in real time. The descriptor is based on histograms generated from regional intensity comparisons over multiple scales, offering stability over long-term appearance changes at the higher levels, whilst remaining discriminative at the lower levels. The matching method then learns a hashing function using random forests, to compress the string and allow for fast image comparison by a simple Hamming distance metric. Results: A dataset that contains 13 in vivo gastrointestinal videos was collected from six patients, representing serial examinations of each patient, which includes videos captured with significant time intervals. Precision-recall for retargeting shows that our new descriptor outperforms a number of alternative descriptors, whilst our hashing method outperforms a number of alternative hashing approaches. Conclusion: We have proposed a novel framework for optical biopsy in serial endoscopic examinations. A new descriptor, combined with a novel hashing method, achieves state-of-the-art retargeting, with validation on in vivo videos from six patients. Real-time performance also allows for practical integration without disturbing the existing clinical workflow.
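
A rough sketch of the regional-comparison idea (grid sizes and the binary encoding are illustrative assumptions; the paper builds histograms from such comparisons and then compresses them with random-forest hashing):

import numpy as np

def regional_comparison_descriptor(image, scales=(2, 4, 8)):
    """image: 2D greyscale array; returns a 1D binary descriptor.
    Coarse scales tolerate long-term appearance change; fine scales
    keep the descriptor discriminative."""
    bits = []
    for s in scales:
        h, w = image.shape[0] // s, image.shape[1] // s
        # Mean intensity of each grid region at this scale.
        means = np.array([[image[i*h:(i+1)*h, j*w:(j+1)*w].mean()
                           for j in range(s)] for i in range(s)]).ravel()
        # Compare every pair of regions; 1 if the first is brighter.
        for a in range(len(means)):
            for b in range(a + 1, len(means)):
                bits.append(1 if means[a] > means[b] else 0)
    return np.array(bits, dtype=np.uint8)

img = np.random.rand(64, 64)        # stand-in for an endoscopic frame
print(regional_comparison_descriptor(img).shape)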

Journal article

Saeedi Gharahbolagh S, Nardi L, Johns E, Bodin B, Kelly PHJ, Davison AJ et al., Application-oriented Design Space Exploration for SLAM Algorithms, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE

In visual SLAM, there are many software and hardware parameters, such as algorithmic thresholds and GPU frequency, that need to be tuned; however, this tuning should also take into account the structure and motion of the camera. In this paper, we determine the complexity of the structure and motion with a few parameters calculated using information theory. Depending on this complexity and the desired performance metrics, suitable parameters are explored and determined. Additionally, based on the proposed structure and motion parameters, several applications are presented, including a novel active SLAM approach which guides the camera in such a way that the SLAM algorithm achieves the desired performance metrics. Real-world and simulated experimental results demonstrate the effectiveness of the proposed design space and its applications.
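
A toy sketch of the exploration loop under assumed parameter names and metrics (the actual work drives this with information-theoretic structure-and-motion complexity measures, which are omitted here):

import itertools

def run_slam(config, sequence):
    """Stand-in: a real evaluation would run the SLAM system on the sequence
    and measure trajectory error (metres) and frame rate (FPS)."""
    return 0.03, 45.0   # placeholder numbers

def explore(sequence, max_error_m=0.05, min_fps=30.0):
    design_space = {
        "tracking_threshold": [0.1, 0.2, 0.4],   # assumed software knob
        "gpu_frequency_mhz": [400, 600, 800],    # assumed hardware knob
    }
    feasible = []
    for values in itertools.product(*design_space.values()):
        config = dict(zip(design_space.keys(), values))
        error_m, fps = run_slam(config, sequence)
        if error_m <= max_error_m and fps >= min_fps:
            feasible.append((config, error_m, fps))
    return feasible

print(len(explore(sequence="example_sequence")))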

Conference paper

Johns E, Leutenegger S, Davison AJ, 2016, Pairwise Decomposition of Image Sequences for Active Multi-View Recognition, Computer Vision and Pattern Recognition, Publisher: Computer Vision Foundation (CVF), ISSN: 1063-6919

A multi-view image sequence provides a much richer capacity for object recognition than from a single image. However, most existing solutions to multi-view recognition typically adopt hand-crafted, model-based geometric methods, which do not readily embrace recent trends in deep learning. We propose to bring Convolutional Neural Networks to generic multi-view recognition, by decomposing an image sequence into a set of image pairs, classifying each pair independently, and then learning an object classifier by weighting the contribution of each pair. This allows for recognition over arbitrary camera trajectories, without requiring explicit training over the potentially infinite number of camera paths and lengths. Building these pairwise relationships then naturally extends to the next-best-view problem in an active recognition framework. To achieve this, we train a second Convolutional Neural Network to map directly from an observed image to next viewpoint. Finally, we incorporate this into a trajectory optimisation task, whereby the best recognition confidence is sought for a given trajectory length. We present state-of-the-art results in both guided and unguided multi-view recognition on the ModelNet dataset, and show how our method can be used with depth images, greyscale images, or both.
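
An illustrative sketch of the pairwise decomposition and weighted combination (the pairwise classifier and the per-pair weights below are placeholders; in the paper both are learned):

import numpy as np
from itertools import combinations

def classify_pair(img_a, img_b, num_classes):
    """Placeholder for the pairwise CNN: returns class log-probabilities."""
    rng = np.random.default_rng(0)
    logits = rng.normal(size=num_classes)
    return logits - np.logaddexp.reduce(logits)   # log-softmax

def recognise_sequence(images, num_classes=10):
    pair_scores, pair_weights = [], []
    for a, b in combinations(range(len(images)), 2):
        log_probs = classify_pair(images[a], images[b], num_classes)
        pair_scores.append(log_probs)
        pair_weights.append(1.0)   # learned per-pair weights in the paper
    weights = np.array(pair_weights) / np.sum(pair_weights)
    combined = np.einsum("p,pc->c", weights, np.array(pair_scores))
    return int(np.argmax(combined))

images = [np.zeros((64, 64)) for _ in range(4)]   # stand-in views
print(recognise_sequence(images))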

Conference paper

Johns E, Leutenegger S, Davison AJ, 2016, Deep Learning a Grasp Function for Grasping Under Gripper Pose Uncertainty, 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, Publisher: IEEE, ISSN: 2153-0866

This paper presents a new method for parallel-jaw grasping of isolated objects from depth images, under large gripper pose uncertainty. Whilst most approaches aim to predict the single best grasp pose from an image, our method first predicts a score for every possible grasp pose, which we denote the grasp function. With this, it is possible to achieve grasping robust to the gripper's pose uncertainty, by smoothing the grasp function with the pose uncertainty function. Therefore, if the single best pose is adjacent to a region of poor grasp quality, that pose will no longer be chosen, and instead a pose will be chosen which is surrounded by a region of high grasp quality. To learn this function, we train a Convolutional Neural Network which takes as input a single depth image of an object, and outputs a score for each grasp pose across the image. Training data for this is generated by use of physics simulation and depth image simulation with 3D object meshes, to enable acquisition of sufficient data without requiring exhaustive real-world experiments. We evaluate with both synthetic and real experiments, and show that the learned grasp score is more robust to gripper pose uncertainty than when this uncertainty is not accounted for.
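
A minimal sketch of the smoothing step (a 2D score map and an isotropic Gaussian pose uncertainty are simplifying assumptions; the paper's grasp function also covers gripper orientation):

import numpy as np
from scipy.ndimage import gaussian_filter

def robust_grasp_pose(grasp_scores, pose_sigma_px=3.0):
    """grasp_scores: 2D array of grasp quality over image positions.
    Convolving with the pose uncertainty penalises poses that sit next
    to regions of poor grasp quality."""
    smoothed = gaussian_filter(grasp_scores, sigma=pose_sigma_px)
    return np.unravel_index(np.argmax(smoothed), smoothed.shape)

scores = np.random.rand(100, 100)    # stand-in for the CNN's grasp function
print(robust_grasp_pose(scores))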

Conference paper

Ye M, Johns E, Walter B, Meining A, Yang G-Z et al., 2016, Robust Image Descriptors for Real-Time Inter-Examination Retargeting in Gastrointestinal Endoscopy, International Conference on Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, Publisher: Springer, Pages: 448-456, ISSN: 0302-9743

For early diagnosis of malignancies in the gastrointestinal tract, surveillance endoscopy is increasingly used to monitor abnormal tissue changes in serial examinations of the same patient. Despite successes with optical biopsy for in vivo and in situ tissue characterisation, biopsy retargeting for serial examinations is challenging because tissue may change in appearance between examinations. In this paper, we propose an inter-examination retargeting framework for optical biopsy, based on an image descriptor designed for matching between endoscopic scenes over significant time intervals. Each scene is described by a hierarchy of regional intensity comparisons at various scales, offering tolerance to long-term change in tissue appearance whilst remaining discriminative. Binary coding is then used to compress the descriptor via a novel random forests approach, providing fast comparisons in Hamming space and real-time retargeting. Extensive validation conducted on 13 in vivo gastrointestinal videos, collected from six patients, shows that our approach outperforms state-of-the-art methods.
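
A simple sketch of retargeting by Hamming-space matching (the random-forest hashing that produces the binary codes is omitted; the codes below are assumed to be precomputed):

import numpy as np

def hamming_distance(code_a, code_b):
    """code_a, code_b: 1D uint8 arrays of 0/1 bits."""
    return int(np.count_nonzero(code_a != code_b))

def retrieve(query_code, database_codes):
    """Return the index of the stored scene closest to the query."""
    distances = [hamming_distance(query_code, c) for c in database_codes]
    return int(np.argmin(distances))

rng = np.random.default_rng(1)
database = [rng.integers(0, 2, 128, dtype=np.uint8) for _ in range(50)]
query = database[7].copy()
query[:5] ^= 1                      # perturb a few bits to mimic tissue change
print(retrieve(query, database))    # still matches scene 7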

Conference paper

Johns E, Mac Aodha O, Brostow G, 2015, Becoming the expert - interactive multi-class machine teaching, Conference on Computer Vision and Pattern Recognition 2015, Publisher: Institute of Electrical and Electronics Engineers, ISSN: 1063-6919

Compared to machines, humans are extremely good at classifying images into categories, especially when they possess prior knowledge of the categories at hand. If this prior information is not available, supervision in the form of teaching images is required. To learn categories more quickly, people should see important and representative images first, followed by less important images later – or not at all. However, image-importance is individual-specific, i.e. a teaching image is important to a student if it changes their overall ability to discriminate between classes. Further, students keep learning, so while image-importance depends on their current knowledge, it also varies with time. In this work we propose an Interactive Machine Teaching algorithm that enables a computer to teach challenging visual concepts to a human. Our adaptive algorithm chooses, online, which labeled images from a teaching set should be shown to the student as they learn. We show that a teaching strategy that probabilistically models the student's ability and progress, based on their correct and incorrect answers, produces better 'experts'. We present results using real human participants across several varied and challenging real-world datasets.
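
A toy sketch of the adaptive selection idea (the per-class accuracy estimate below is a stand-in for the paper's probabilistic student model):

import numpy as np

class TeachingSession:
    def __init__(self, num_classes, prior_correct=0.5, prior_strength=2.0):
        # Beta-style pseudo-counts of "answered correctly" per class.
        self.correct = np.full(num_classes, prior_correct * prior_strength)
        self.total = np.full(num_classes, prior_strength)

    def record_answer(self, true_class, was_correct):
        # Update the ability estimate from the student's answer.
        self.correct[true_class] += float(was_correct)
        self.total[true_class] += 1.0

    def next_teaching_class(self):
        # Show next an example from the class the student knows least.
        ability = self.correct / self.total
        return int(np.argmin(ability))

session = TeachingSession(num_classes=5)
session.record_answer(true_class=2, was_correct=False)
print(session.next_teaching_class())   # class 2 now has the lowest estimate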

Conference paper

Ye M, Johns E, Giannarou S, Yang G-Z et al., 2014, Online Scene Association for Endoscopic Navigation, 17th International Conference MICCAI 2014, Publisher: Springer International Publishing, Pages: 316-323, ISSN: 0302-9743

Endoscopic surveillance is a widely used method for monitoring abnormal changes in the gastrointestinal tract such as Barrett's esophagus. Direct visual assessment, however, is both time consuming and error prone, as it involves manual labelling of abnormalities on a large set of images. To assist surveillance, this paper proposes an online scene association scheme to summarise an endoscopic video into scenes, on-the-fly. This provides scene clustering based on visual contents, and also facilitates topological localisation during navigation. The proposed method is based on tracking and detection of visual landmarks on the tissue surface. A generative model is proposed for online learning of pairwise geometrical relationships between landmarks. This enables robust detection of landmarks and scene association under tissue deformation. Detailed experimental comparison and validation have been conducted on in vivo endoscopic videos to demonstrate the practical value of our approach.
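
A toy sketch of learning pairwise landmark geometry online (running means of pairwise distances are a stand-in for the paper's generative model):

import numpy as np

class SceneModel:
    def __init__(self, landmark_positions):
        # landmark_positions: (N, 2) array of initial image positions.
        self.mean_dist = self._pairwise(landmark_positions)
        self.count = 1

    @staticmethod
    def _pairwise(pts):
        diff = pts[:, None, :] - pts[None, :, :]
        return np.linalg.norm(diff, axis=-1)

    def update(self, landmark_positions):
        # Incremental mean of pairwise distances under tissue deformation.
        d = self._pairwise(landmark_positions)
        self.count += 1
        self.mean_dist += (d - self.mean_dist) / self.count

    def association_score(self, landmark_positions):
        # Lower is better: deviation of observed geometry from the model.
        d = self._pairwise(landmark_positions)
        return float(np.mean(np.abs(d - self.mean_dist)))

pts = np.random.rand(6, 2) * 100
scene = SceneModel(pts)
scene.update(pts + np.random.randn(6, 2))   # slight deformation
print(scene.association_score(pts))          # small score: same scene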

Conference paper

Johns E, Yang G-Z, 2014, Generative Methods for Long-Term Place Recognition in Dynamic Scenes, International Journal of Computer Vision, Vol: 106, Pages: 297-314, ISSN: 0920-5691

Journal article


Johns E, Yang G-Z, 2014, Pairwise Probabilistic Voting: Fast Place Recognition without RANSAC, 13th European Conference on Computer Vision (ECCV), Publisher: Springer-Verlag Berlin, Pages: 504-519, ISSN: 0302-9743

Conference paper

Johns E, Yang G-Z, 2013, Dynamic Scene Models for Incremental, Long-Term, Appearance-Based Localisation, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 2731-2736, ISSN: 1050-4729

Conference paper

Johns E, Yang G-Z, 2013, Feature Co-occurrence Maps: Appearance-based Localisation Throughout the Day, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 3212-3218, ISSN: 1050-4729

Conference paper

Liu J, Johns E, Atallah L, Pettitt C, Lo B, Frost G, Yang G-Z et al., 2012, An intelligent food-intake monitoring system using wearable sensors, Pages: 154-160

Conference paper

Johns E, Yang G-Z, 2011, From Images to Scenes: Compressing an Image Cluster into a Single Scene Model for Place Recognition, IEEE International Conference on Computer Vision (ICCV), Publisher: IEEE, Pages: 874-881, ISSN: 1550-5499

Conference paper

Johns E, Yang G-Z, 2011, Place Recognition and Online Learning in Dynamic Scenes with Spatio-Temporal Landmarks, 22nd British Machine Vision Conference, Publisher: BMVA Press

Conference paper

Johns E, Yang G-Z, 2011, Global Localization in a Dense Continuous Topological Map, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 1032-1037, ISSN: 1050-4729

Conference paper

Liu J, Johns E, Yang G-Z, 2011, A scene-associated training method for mobile robot speech recognition in multisource reverberated environments, Pages: 542-549

Conference paper

Johns E, Yang G-Z, 2010, Scene Association for Mobile Robot Navigation, IEEE/RSJ International Conference on Intelligent Robots and Systems, Publisher: IEEE, ISSN: 2153-0858

Conference paper

Ballantyne J, Johns E, Valibeik S, Wong C, Yang G-Z et al., 2010, Autonomous Navigation for Mobile Robots with Human-Robot Interaction, Robot Intelligence, Editors: Liu, Gu, Howlett, Liu, Publisher: Springer London, Pages: 245-268, ISBN: 978-1-84996-328-2

Book chapter

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
