Imperial College London

Professor Christos-Savvas Bouganis

Faculty of Engineering, Department of Electrical and Electronic Engineering

Professor of Intelligent Digital Systems

Contact

 

+44 (0)20 7594 6144 | christos-savvas.bouganis | Website

Location

 

904, Electrical Engineering, South Kensington Campus


Publications


192 results found

Rajagopal A, Bouganis C-S, 2021, perf4sight: A toolflow to model CNN training performance on Edge GPUs, Publisher: IEEE, Pages: 963-971

Conference paper

Montgomerie-Corcoran A, Bouganis C-S, 2021, POMMEL: Exploring Off-Chip Memory Energy & Power Consumption in Convolutional Neural Network Accelerators, 24th Euromicro Conference on Digital System Design (DSD), Publisher: IEEE COMPUTER SOC, Pages: 442-448

Conference paper

Rajagopal A, Bouganis C-S, 2021, perf4sight: A toolflow to model CNN training performance on Edge GPUs, CoRR, Vol: abs/2108.05580

Journal article

Yu Z, Bouganis C-S, 2021, StreamSVD: Low-rank Approximation and Streaming Accelerator Co-design, Publisher: IEEE, Pages: 1-9

Conference paper

Rajagopal A, Bouganis C-S, 2021, perf4sight: A toolflow to model CNN training performance on Edge GPUs, 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2021), Pages: 963-971, ISSN: 2473-9936

Journal article

Vink DA, Rajagopal A, Venieris SI, Bouganis C-S et al., 2020, Caffe barista: brewing caffe with FPGAs in the training loop, Publisher: arXiv

As the complexity of deep learning (DL) models increases, their compute requirements increase accordingly. Deploying a Convolutional Neural Network (CNN) involves two phases: training and inference. With the inference task typically taking place on resource-constrained devices, a lot of research has explored the field of low-power inference on custom hardware accelerators. On the other hand, training is both more compute- and memory-intensive and is primarily performed on power-hungry GPUs in large-scale data centres. CNN training on FPGAs is a nascent field of research. This is primarily due to the lack of tools to easily prototype and deploy various hardware and/or algorithmic techniques for power-efficient CNN training. This work presents Barista, an automated toolflow that provides seamless integration of FPGAs into the training of CNNs within the popular deep learning framework Caffe. To the best of our knowledge, this is the only tool that allows for such versatile and rapid deployment of hardware and algorithms for the FPGA-based training of CNNs, providing the necessary infrastructure for further research and development.

Working paper
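
The abstract does not detail Barista's internals, but the general pattern of pulling an accelerator into a framework's training loop can be sketched as a layer whose forward and backward passes are dispatched to a pluggable backend. Everything below, class and method names included, is an illustrative stand-in, not Barista's actual API:

```python
import numpy as np

class AcceleratedConv:
    """Toy 1x1-conv/fully-connected layer whose compute can be routed to a
    pluggable backend (e.g. an FPGA offload object) or fall back to the CPU.
    Hypothetical names; not Barista's interface."""
    def __init__(self, in_ch, out_ch, backend=None):
        self.W = 0.01 * np.random.randn(out_ch, in_ch).astype(np.float32)
        self.backend = backend

    def forward(self, x):                      # x: (batch, in_ch)
        if self.backend is not None:           # accelerator path
            return self.backend.conv_forward(x, self.W)
        return x @ self.W.T                    # CPU fallback

    def backward(self, x, grad_out, lr=1e-2):
        grad_in = grad_out @ self.W            # gradient w.r.t. input
        self.W -= lr * (grad_out.T @ x)        # SGD update kept on the host
        return grad_in

layer = AcceleratedConv(16, 32)                # backend=None -> CPU path
x = np.random.randn(8, 16).astype(np.float32)
y = layer.forward(x)
layer.backward(x, np.ones_like(y))
```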

Rajagopal A, Vink DA, Venieris SI, Bouganis C-S et al., 2020, Multi-Precision Policy Enforced Training (MuPPET): A precision-switching strategy for quantised fixed-point training of CNNs, Publisher: arXiv

Large-scale convolutional neural networks (CNNs) suffer from very long training times, spanning from hours to weeks, limiting the productivity and experimentation of deep learning practitioners. As networks grow in size and complexity, training time can be reduced through low-precision data representations and computations. However, in doing so the final accuracy suffers due to the problem of vanishing gradients. Existing state-of-the-art methods combat this issue by means of a mixed-precision approach utilising two different precision levels, FP32 (32-bit floating-point) and FP16/FP8 (16-/8-bit floating-point), leveraging the hardware support of recent GPU architectures for FP16 operations to obtain performance gains. This work pushes the boundary of quantised training by employing a multilevel optimisation approach that utilises multiple precisions including low-precision fixed-point representations. The novel training strategy, MuPPET, combines the use of multiple number representation regimes together with a precision-switching mechanism that decides at run time the transition point between precision regimes. Overall, the proposed strategy tailors the training process to the hardware-level capabilities of the target hardware architecture and yields improvements in training time and energy efficiency compared to state-of-the-art approaches. Applying MuPPET on the training of AlexNet, ResNet18 and GoogLeNet on ImageNet (ILSVRC12) and targeting an NVIDIA Turing GPU, MuPPET achieves the same accuracy as standard full-precision training with training-time speedup of up to 1.84× and an average speedup of 1.58× across the networks.

Working paper
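
A minimal sketch of the precision-switching idea described above: training runs in a low-precision regime, and a runtime policy decides when to move to the next one. The plateau-based switching rule used here is a hypothetical stand-in for MuPPET's actual decision mechanism, which the abstract does not specify:

```python
import numpy as np

def quantise(x, frac_bits):
    """Simulated fixed-point rounding with `frac_bits` fractional bits."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

regimes = [8, 12, 16, None]                    # frac bits; None = full precision
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0])        # toy regression target

w, regime, history = np.zeros(4), 0, []
for step in range(2000):
    wq = w if regimes[regime] is None else quantise(w, regimes[regime])
    err = X @ wq - y                           # forward pass in low precision
    loss = float(np.mean(err ** 2))
    w -= 0.02 / len(X) * (X.T @ err)           # master weights stay high precision
    history.append(loss)
    # Hypothetical policy: move to the next regime when the loss plateaus.
    if regime < len(regimes) - 1 and len(history) > 50 \
            and history[-51] - loss < 1e-5:
        regime, history = regime + 1, []
print("finished in regime", regimes[regime], "loss", loss)
```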

Kouris A, Venieris S, Bouganis C-S, 2020, A throughput-latency co-optimised cascade of convolutional neural network classifiers, Design, Automation and Test in Europe Conference (DATE 2020), Publisher: IEEE, Pages: 1656-1661

Convolutional Neural Networks constitute a prominent AI model for classification tasks, serving a broad span of diverse application domains. To enable their efficient deployment in real-world tasks, the inherent redundancy of CNNs is frequently exploited to eliminate unnecessary computational costs. Driven by the fact that not all inputs require the same amount of computation to drive a confident prediction, multi-precision cascade classifiers have been recently introduced. FPGAs comprise a promising platform for the deployment of such input-dependent computation models, due to their enhanced customisation capabilities. Current literature, however, is limited to throughput-optimised cascade implementations, employing large batching at the expense of a substantial latency aggravation that prohibits their deployment in real-time scenarios. In this work, we introduce a novel methodology for throughput-latency co-optimised cascaded CNN classification, deployed on a custom FPGA architecture tailored to the target application and deployment platform, with respect to a set of user-specified requirements on accuracy and performance. Our experiments indicate that the proposed approach achieves throughput gains comparable to related state-of-the-art works, under substantially reduced latency overhead, enabling its deployment in latency-sensitive applications.

Conference paper
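
The input-dependent computation that cascades exploit can be illustrated with a two-stage, confidence-gated classifier: a cheap first stage answers the easy inputs, and only low-confidence samples are escalated to a larger model. The models and threshold below are toy stand-ins, not the paper's FPGA architecture:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cascade_predict(x, cheap_model, big_model, conf_threshold=0.9):
    probs = softmax(cheap_model(x))          # low-cost first stage
    conf = probs.max(axis=-1)
    out = probs.argmax(axis=-1)
    hard = conf < conf_threshold             # uncertain samples only
    if hard.any():                           # escalate to the big model
        out[hard] = softmax(big_model(x[hard])).argmax(axis=-1)
    return out, hard.mean()                  # predictions + escalation rate

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(8, 3)), rng.normal(size=(8, 3))
preds, rate = cascade_predict(rng.normal(size=(32, 8)),
                              lambda x: x @ W1, lambda x: x @ W2)
print(preds, "escalated:", rate)
```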

Rajagopal A, Bouganis C-S, 2020, Now that I can see, I can improve: Enabling data-driven finetuning of CNNs on the edge, Publisher: arXiv

In today's world, a vast amount of data is being generated by edge devices that can be used as valuable training data to improve the performance of machine learning algorithms in terms of the achieved accuracy or to reduce the compute requirements of the model. However, due to user data privacy concerns as well as storage and communication bandwidth limitations, this data cannot be moved from the device to the data centre for further improvement of the model and subsequent deployment. As such, there is a need for increased edge intelligence, where the deployed models can be fine-tuned on the edge, leading to improved accuracy and/or reducing the model's workload as well as its memory and power footprint. In the case of Convolutional Neural Networks (CNNs), both the weights of the network as well as its topology can be tuned to adapt to the data that it processes. This paper provides a first step towards enabling CNN finetuning on an edge device based on structured pruning. It explores the performance gains and costs of doing so and presents an extensible open-source framework that allows the deployment of such approaches on a wide range of network architectures and devices. The results show that, on average, data-aware pruning with retraining can provide 10.2pp increased accuracy over a wide range of subsets, networks and pruning levels, with a maximum improvement of 42.0pp over pruning and retraining in a manner agnostic to the data being processed by the network.

Working paper
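
A sketch of the data-aware structured-pruning step the abstract describes: channel importance is scored on activations gathered from the data the edge device actually sees, and the weakest channels are dropped before retraining. The mean-absolute-activation criterion is a common choice used here for illustration only:

```python
import numpy as np

def prune_channels(W, activations, keep_ratio=0.5):
    """W: (out_ch, in_ch) layer weights; activations: (n, out_ch) responses
    gathered on the edge device's own data."""
    importance = np.abs(activations).mean(axis=0)      # data-driven score
    n_keep = max(1, int(keep_ratio * W.shape[0]))
    keep = np.argsort(importance)[-n_keep:]            # top-scoring channels
    return W[keep], keep                               # retrain afterwards

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))
X = rng.normal(size=(100, 8))                          # on-device data subset
acts = np.maximum(X @ W.T, 0)                          # ReLU responses
W_pruned, kept = prune_channels(W, acts, keep_ratio=0.25)
print(W_pruned.shape, kept)
```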

Kouris A, Venieris SI, Rizakis M, Bouganis C-S et al., 2020, Approximate LSTMs for time-constrained inference: enabling fast reaction in self-driving cars, IEEE Consumer Electronics Magazine, Vol: 9, Pages: 11-26, ISSN: 2162-2248

The need to recognize long-term dependencies in sequential data, such as video streams, has made long short-term memory (LSTM) networks a prominent artificial intelligence model for many emerging applications. However, the high computational and memory demands of LSTMs introduce challenges in their deployment on latency-critical systems such as self-driving cars, which are equipped with limited computational resources on-board. In this article, we introduce a progressive inference computing scheme that combines model pruning and computation restructuring leading to the best possible approximation of the result given the available latency budget of the target application. The proposed methodology enables mission-critical systems to make informed decisions even in early stages of the computation, based on approximate LSTM inference, meeting their specifications on safety and robustness. Our experiments on a state-of-the-art driving model for autonomous vehicle navigation demonstrate that the proposed approach can yield outputs with similar quality of result compared to a faithful LSTM baseline, up to 415× faster (198× on average, 76× geo. mean).

Journal article
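
The anytime flavour of such a scheme, stripped to a single operation, might look like the following: a matrix-vector product is restructured so the most significant contributions are accumulated first, and stopping at any latency budget returns the best approximation so far. This is a generic illustration of progressive inference, not the paper's pruning-plus-restructuring pipeline:

```python
import numpy as np

def anytime_matvec(W, x, budget_fraction=0.3):
    """Accumulate W @ x column by column, most important columns first,
    stopping after a fraction of the work."""
    order = np.argsort(-np.linalg.norm(W, axis=0))   # importance ordering
    n = max(1, int(budget_fraction * W.shape[1]))
    y = np.zeros(W.shape[0])
    for j in order[:n]:
        y += W[:, j] * x[j]          # partial result is usable at any point
    return y

rng = np.random.default_rng(0)
W, x = rng.normal(size=(64, 128)), rng.normal(size=128)
approx, exact = anytime_matvec(W, x, 0.25), W @ x
print("relative error:",
      np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```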

Yu Z, Bouganis C-S, 2020, A parameterisable FPGA-tailored architecture for YOLOv3-Tiny, 16th International Symposium, ARC 2020, Publisher: Springer International Publishing, Pages: 330-344, ISSN: 0302-9743

Object detection is the task of detecting the position of objects in an image or video as well as their corresponding class. The current state of the art approach that achieves the highest performance (i.e. fps) without significant penalty in accuracy of detection is the YOLO framework, and more specifically its latest version YOLOv3. When embedded systems are targeted for deployment, YOLOv3-tiny, a lightweight version of YOLOv3, is usually adopted. The presented work is the first to implement a parameterised FPGA-tailored architecture specifically for YOLOv3-tiny. The architecture is optimised for latency-sensitive applications, and is able to be deployed in low-end devices with stringent resource constraints. Experiments demonstrate that when a low-end FPGA device is targeted, the proposed architecture achieves a 290x improvement in latency, compared to the hard core processor of the device, achieving at the same time a reduction in mAP of 2.5 pp (30.9% vs 33.4%) compared to the original model. The presented work opens the way for low-latency object detection on low-end FPGA devices.

Conference paper

Abdelhadi A, Bouganis C, Constantinides G, 2020, Accelerated approximate nearest neighbors search through hierarchical product quantization, 2019 International Conference on Field-Programmable Technology, Publisher: IEEE, Pages: 1-9

A fundamental recurring task in many machine learning applications is the search for the Nearest Neighbor in high dimensional metric spaces. Towards answering queries in large scale problems, state-of-the-art methods employ Approximate Nearest Neighbors (ANN) search, a search that returns the nearest neighbor with high probability, as well as techniques that compress the dataset. Product-Quantization (PQ) based ANN search methods have demonstrated state-of-the-art performance in several problems, including classification, regression and information retrieval. The dataset is encoded into a Cartesian product of multiple low-dimensional codebooks, enabling faster search and higher compression. Being intrinsically parallel, PQ-based ANN search approaches are amenable to hardware acceleration. This paper proposes a novel Hierarchical PQ (HPQ) based ANN search method, as well as an FPGA-tailored architecture for its implementation, that outperforms current state-of-the-art systems. HPQ gradually refines the search space, reducing the number of data compares and enabling a pipelined search. The mapping of the architecture on a Stratix 10 FPGA device demonstrates over 250× speedups over current state-of-the-art systems, opening the space for addressing larger datasets and/or improving the query times of current systems.

Conference paper
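
Plain (non-hierarchical) product quantisation, which HPQ builds on, fits in a few lines and shows why the method parallelises so well: search reduces to table lookups and additions. The sketch below uses data-sampled codebooks instead of k-means for brevity, and does not reproduce HPQ's hierarchical refinement:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, k, n = 16, 4, 8, 1000            # dim, subspaces, codewords, db size
sub = d // m
db = rng.normal(size=(n, d))

# Codebooks: for brevity, sample codewords from the data instead of k-means.
books = np.stack([db[rng.choice(n, k), i*sub:(i+1)*sub] for i in range(m)])

# Encode: nearest codeword per subspace -> (n, m) integer codes.
codes = np.stack([
    np.argmin(((db[:, i*sub:(i+1)*sub, None] -
                books[i].T[None]) ** 2).sum(axis=1), axis=1)
    for i in range(m)], axis=1)

def pq_search(q, topk=5):
    # Distance lookup tables: query subvector vs every codeword.
    luts = np.stack([((books[i] - q[i*sub:(i+1)*sub]) ** 2).sum(axis=1)
                     for i in range(m)])
    dists = luts[np.arange(m), codes].sum(axis=1)   # asymmetric distances
    return np.argsort(dists)[:topk]

print(pq_search(rng.normal(size=d)))
```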

Kouris A, Kyrkou C, Bouganis C-S, 2020, Informed region selection for efficient UAV-based object detectors: altitude-aware vehicle detection with CyCAR dataset, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 51-58, ISSN: 2153-0858

Conference paper

Rajagopal A, Vink DA, Venieris SI, Bouganis CS et al., 2020, Multi-precision policy enforced training (MuPPET): A precision-switching strategy for quantised fixed-point training of CNNs, Pages: 7899-7908

Large-scale convolutional neural networks (CNNs) suffer from very long training times, spanning from hours to weeks, limiting the productivity and experimentation of deep learning practitioners. As networks grow in size and complexity, training time can be reduced through low-precision data representations and computations; however, in doing so the final accuracy suffers due to the problem of vanishing gradients. Existing state-of-the-art methods combat this issue by means of a mixed-precision approach utilising two different precision levels, FP32 (32-bit floating-point) and FP16/FP8 (16-/8-bit floating-point), leveraging the hardware support of recent GPU architectures for FP16 operations to obtain performance gains. This work pushes the boundary of quantised training by employing a multilevel optimisation approach that utilises multiple precisions, including low-precision fixed-point representations, resulting in a novel training strategy, MuPPET; it combines the use of multiple number representation regimes together with a precision-switching mechanism that decides at run time the transition point between precision regimes. Overall, the proposed strategy tailors the training process to the hardware-level capabilities of the target hardware architecture and yields improvements in training time and energy efficiency compared to state-of-the-art approaches. Applying MuPPET on the training of AlexNet, ResNet18 and GoogLeNet on ImageNet (ILSVRC12) and targeting an NVIDIA Turing GPU, MuPPET achieves the same accuracy as standard full-precision training with training-time speedup of up to 1.84× and an average speedup of 1.58× across the networks.

Conference paper

Vasileiadis M, Bouganis CS, Stavropoulos G, Tzovaras D et al., 2020, Optimising 3D-CNN design towards human pose estimation on low power devices

3D CNN-based architectures have found application in a variety of 3D vision tasks, significantly outperforming earlier approaches. This increase in accuracy, however, has come at the cost of computational complexity, with deep learning models becoming more and more complex, requiring significant computational resources, especially in the case of 3D data. Meanwhile, the growing adoption of low power devices in various technology fields has shifted the research focus towards the implementation of deep learning on systems with limited resources. While plenty of approaches have achieved promising results in terms of reducing the computational complexity in 2D tasks, their applicability in 3D-CNN designs has not been thoroughly researched. The current work aims at filling this void, by investigating a series of efficient CNN design techniques within the scope of 3D-CNNs, in order to produce guidelines for 3D-CNN design that can be applied to already established architectures, reducing their computational complexity. Following these guidelines, a computationally efficient 3D-CNN architecture for human pose estimation from 3D data is proposed, achieving comparable accuracy to the state-of-the-art. The proposed design guidelines are further validated within the scope of 3D object classification, achieving high accuracy results at a low computational cost.

Conference paper

Olaizola J, Bouganis C-S, Argandoña ESD, Iturrospe A, Abete JM et al., 2020, Real-time servo press force estimation based on dual particle filter, IEEE Transactions on Industrial Electronics, Vol: 67, Pages: 4088-4097, ISSN: 0278-0046

The ability to monitor the quality of the metal forming process as well as the machine's condition is of significant importance in modern industrial processes. In the case where a physical device (i.e., sensor) cannot be deployed due to the characteristics of the system, models that rely on the estimation of both the applied force and the dynamic behavior of the machine (i.e., system) are adopted. The development of such models and the corresponding algorithms used to estimate the above-mentioned quantities has attracted the interest of the community. The main contribution of this paper is the estimation of a servo press force by employing a novel dual particle filter based algorithm, achieving a maximum relative error in the force estimation of 3.6%. Moreover, to address real-time performance requirements, this paper proposes a field programmable gate array based accelerator that improves the sampling rate by a factor of 200 compared to a processor-based solution, thus enabling the deployment of the system in many realistic scenarios.

Journal article
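
As a rough illustration of particle-filter-based force estimation, the sketch below runs a single bootstrap filter over a state augmented with the unknown force, rather than the paper's coupled dual-filter structure; the toy dynamics, noise levels and constant force are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, steps, n_p = 0.01, 300, 500
F_true = 5.0                                   # unknown constant force

# Simulate a unit mass driven by F_true; observe noisy position.
x = v = 0.0
obs = []
for _ in range(steps):
    v += dt * F_true
    x += dt * v
    obs.append(x + rng.normal(scale=0.05))

# Particle columns: [position, velocity, force].
p = np.zeros((n_p, 3))
p[:, 2] = rng.normal(0.0, 10.0, n_p)           # broad prior over the force
for z in obs:
    p[:, 1] += dt * p[:, 2]                    # propagate dynamics
    p[:, 0] += dt * p[:, 1]
    p[:, 2] += rng.normal(0.0, 0.05, n_p)      # force random walk
    w = np.exp(-0.5 * ((z - p[:, 0]) / 0.05) ** 2) + 1e-12
    p = p[rng.choice(n_p, n_p, p=w / w.sum())] # weight and resample

print("estimated force:", p[:, 2].mean())      # should approach 5.0
```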

Rajagopal A, Bouganis C-S, 2020, Now that I can see, I can improve: Enabling data-driven finetuning of CNNs on the edge, Publisher: Computer Vision Foundation / IEEE, Pages: 3058-3067

Conference paper

Rajagopal A, Vink DA, Venieris SI, Bouganis C-S et al., 2020, Multi-Precision Policy Enforced Training (MuPPET): A Precision-Switching Strategy for Quantised Fixed-Point Training of CNNs, Publisher: PMLR, Pages: 7943-7952

Conference paper

Ribes S, Trancoso P, Sourdis I, Bouganis C-S et al., 2020, Mapping Multiple LSTM models on FPGAs, 19th International Conference on Field-Programmable Technology (ICFPT), Publisher: IEEE COMPUTER SOC, Pages: 1-9

Conference paper

Vink DA, Rajagopal A, Venieris SI, Bouganis C-S et al., 2020, Caffe Barista: Brewing Caffe with FPGAs in the Training Loop, Publisher: IEEE, Pages: 317-322

Conference paper

Kouris A, Venieris S, Bouganis C-S, 2019, Towards efficient on-board deployment of DNNs on intelligent autonomous systems, 18th IEEE-Computer-Society Annual Symposium on VLSI (ISVLSI), Publisher: IEEE COMPUTER SOC, Pages: 570-575, ISSN: 2159-3469

With their unprecedented performance in major AI tasks, deep neural networks (DNNs) have emerged as a primary building block in modern autonomous systems. Intelligent systems such as drones, mobile robots and driverless cars largely base their perception, planning and application-specific tasks on DNN models. Nevertheless, due to the nature of these applications, such systems require on-board local processing in order to retain their autonomy and meet latency and throughput constraints. In this respect, the large computational and memory demands of DNN workloads pose a significant barrier to their deployment on the resource- and power-constrained compute platforms that are available on-board. This paper presents an overview of recent methods and hardware architectures that address the system-level challenges of modern DNN-enabled autonomous systems at both the algorithmic and hardware design level. Spanning from latency-driven approximate computing techniques to high-throughput mixed-precision cascaded classifiers, the presented set of works paves the way for the on-board deployment of sophisticated DNN models on robots and autonomous systems.

Conference paper

Martorell X, Alvarez C, Bouganis CS, Sourdis I et al., 2019, Preface, Proceedings - 29th International Conference on Field-Programmable Logic and Applications, FPL 2019

Journal article

Vasileiadis M, Bouganis C-S, Tzovaras D, 2019, Multi-person 3D pose estimation from 3D cloud data using 3D convolutional neural networks, Computer Vision and Image Understanding, Vol: 185, Pages: 12-23, ISSN: 1077-3142

Human pose estimation is considered one of the major challenges in the field of Computer Vision, playing an integral role in a large variety of technology domains. While, in the last few years, there has been an increased number of research approaches towards CNN-based 2D human pose estimation from RGB images, respective work on CNN-based 3D human pose estimation from depth/3D data has been rather limited, with current approaches failing to outperform earlier methods, partially due to the utilization of depth maps as simple 2D single-channel images, instead of an actual 3D world representation. In order to overcome this limitation, and taking into consideration recent advances in 3D detection tasks of similar nature, we propose a novel fully-convolutional, detection-based 3D-CNN architecture for 3D human pose estimation from 3D data. The architecture follows the sequential network architecture paradigm, generating per-voxel likelihood maps for each human joint, from a 3D voxel-grid input, and is extended, through a bottom-up approach, towards multi-person 3D pose estimation, allowing the algorithm to simultaneously estimate multiple human poses, without its runtime complexity being affected by the number of people within the scene. The proposed multi-person architecture, which is the first within the scope of 3D human pose estimation, is comparatively evaluated on three single person public datasets, achieving state-of-the-art performance, as well as on a public multi-person dataset achieving high recognition accuracy.

Journal article
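
The detection-based readout the abstract mentions reduces, at its simplest, to taking a per-joint argmax over the predicted voxel likelihood volumes; the shapes below are illustrative only:

```python
import numpy as np

def joints_from_heatmaps(vol):
    """vol: (n_joints, D, H, W) per-voxel likelihoods -> (n_joints, 3)."""
    n, d, h, w = vol.shape
    flat = vol.reshape(n, -1).argmax(axis=1)            # best voxel per joint
    return np.stack(np.unravel_index(flat, (d, h, w)), axis=1)

heat = np.random.rand(15, 32, 32, 32)      # 15 joints on a 32^3 voxel grid
print(joints_from_heatmaps(heat).shape)    # (15, 3)
```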

Ahmadi N, Constandinou TG, Bouganis C-S, 2019, End-to-End Hand Kinematic Decoding from LFPs Using Temporal Convolutional Network, IEEE Biomedical Circuits and Systems Conference (BioCAS), Publisher: IEEE, Pages: 1-4, ISSN: 2163-4025

In recent years, local field potentials (LFPs) have emerged as a promising alternative input signal for brain-machine interfaces (BMIs). Several studies have demonstrated that LFP-based BMIs could provide long-term recording stability and comparable decoding performance to their spike counterparts. Despite the compelling results, however, most LFP-based BMIs still make use of hand-crafted features, which can be time-consuming and suboptimal. In this paper, we propose an end-to-end system approach based on a temporal convolutional network (TCN) to automatically extract features and decode kinematics of hand movements directly from raw LFP signals. We benchmark its decoding performance against a traditional approach incorporating long short-term memory (LSTM) decoders driven by hand-crafted LFP features. Experimental results demonstrate significant performance improvement of the proposed approach compared to the traditional approach. This suggests the suitability of the TCN-based end-to-end system and its potential for providing stable and high decoding performance for LFP-based BMIs.

Conference paper
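
The backbone such an end-to-end decoder implies, a stack of causal dilated 1-D convolutions, can be sketched in a few lines of PyTorch; the channel counts, depth and two-dimensional kinematic output here are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

class CausalConv(nn.Module):
    """Dilated 1-D convolution, left-padded so outputs never see the future."""
    def __init__(self, c_in, c_out, k=3, dilation=1):
        super().__init__()
        self.pad = (k - 1) * dilation
        self.conv = nn.Conv1d(c_in, c_out, k, dilation=dilation)

    def forward(self, x):
        return self.conv(nn.functional.pad(x, (self.pad, 0)))

tcn = nn.Sequential(                           # receptive field grows per layer
    CausalConv(32, 64, dilation=1), nn.ReLU(),
    CausalConv(64, 64, dilation=2), nn.ReLU(),
    CausalConv(64, 64, dilation=4), nn.ReLU(),
    nn.Conv1d(64, 2, 1),                       # 2 kinematic outputs per step
)
lfp = torch.randn(1, 32, 1000)                 # (batch, channels, time)
print(tcn(lfp).shape)                          # torch.Size([1, 2, 1000])
```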

Liu J, Bouganis C, Cheung PYK, 2019, Context-based image acquisition from memory in digital systems, Journal of Real-Time Image Processing, Vol: 16, Pages: 1057-1076, ISSN: 1861-8200

A key consideration in the design of image and video processing systems is the ever increasing spatial resolution of the captured images, which has a major impact on the performance requirements of the memory subsystem. This is further amplified by the fact that the memory bandwidth requirements and energy consumption of accessing the captured images have started to become the bottleneck in the design of high-performance image processing systems. Inspired by the successful application of progressive image sampling techniques in various image processing tasks, this work proposes the concept of Context-based Image Acquisition for hardware systems, which efficiently trades image quality for a reduced cost of the image acquisition process. Based on the proposed framework, a hardware architecture is developed which alters the conventional memory access pattern to progressively and adaptively access pixels from a memory subsystem. The sampled pixels are used to reconstruct an approximation to the ground truth, which is stored in a high-performance image buffer for further processing. An instance of the architecture is prototyped on an FPGA and its performance evaluation shows that savings of up to 85% of memory access time and 33%/45% of image acquisition time/energy are achieved on a set of benchmarks, while maintaining a high PSNR.

Journal article
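
The progressive, content-adaptive access pattern the paper proposes can be mimicked in miniature: fetch a coarse grid of pixels, reconstruct, then spend the remaining access budget where the estimate is least smooth. The refinement rule below is a simple stand-in for the architecture's actual policy:

```python
import numpy as np

rng = np.random.default_rng(0)
img = np.kron(rng.integers(0, 255, (8, 8)), np.ones((8, 8)))  # 64x64 test image

def reconstruct(mask):
    """Nearest-sample reconstruction from the pixels fetched so far."""
    ys, xs = np.nonzero(mask)
    yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
    nearest = np.argmin((yy.ravel()[:, None] - ys) ** 2 +
                        (xx.ravel()[:, None] - xs) ** 2, axis=1)
    return img[ys[nearest], xs[nearest]].reshape(img.shape)

mask = np.zeros(img.shape, dtype=bool)
mask[::8, ::8] = True                          # coarse initial access pattern
for _ in range(3):                             # progressive refinement rounds
    recon = reconstruct(mask)
    grad = np.abs(np.gradient(recon)[0]) + np.abs(np.gradient(recon)[1])
    mask |= grad >= np.quantile(grad, 0.98)    # fetch where estimate is rough

recon = reconstruct(mask)
psnr = 10 * np.log10(255.0 ** 2 / np.mean((recon - img) ** 2))
print(f"fetched {mask.mean():.1%} of pixels, PSNR {psnr:.1f} dB")
```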

Kostavelis I, Vasileiadis M, Skartados E, Kargakos A, Giakoumis D, Bouganis C-S, Tzovaras D et al., 2019, Understanding of human behavior with a robotic agent through daily activity analysis, International Journal of Social Robotics, Vol: 11, Pages: 437-462, ISSN: 1875-4791

Personal assistive robots to be realized in the near future should have the ability to seamlessly coexist with humans in unconstrained environments, with the robot’s capability to understand and interpret the human behavior during human–robot cohabitation significantly contributing towards this end. Still, the understanding of human behavior through a robot is a challenging task as it necessitates a comprehensive representation of the high-level structure of the human’s behavior from the robot’s low-level sensory input. The paper at hand tackles this problem by demonstrating a robotic agent capable of apprehending human daily activities through a method, the Interaction Unit analysis, that enables activities’ decomposition into a sequence of units, each one associated with a behavioral factor. The modelling of human behavior is addressed with a Dynamic Bayesian Network that operates on top of the Interaction Unit, offering quantification of the behavioral factors and the formulation of the human’s behavioral model. In addition, light-weight human action and object manipulation monitoring strategies have been developed, based on RGB-D and laser sensors, tailored for onboard robot operation. As a proof of concept, we used our robot to evaluate the ability of the method to differentiate among the examined human activities, as well as to assess the capability of behavior modeling of people with Mild Cognitive Impairment. Moreover, we deployed our robot in 12 real house environments with real users, showcasing the behavior understanding ability of our method in unconstrained realistic environments. The evaluation process revealed promising performance and demonstrated that human behavior can be automatically modeled through Interaction Unit analysis, directly from robotic agents.

Journal article

Ahmadi N, Cavuto ML, Feng P, Leene LB, Maslik M, Mazza F, Savolainen O, Szostak KM, Bouganis C-S, Ekanayake J, Jackson A, Constandinou TG et al., 2019, Towards a distributed, chronically-implantable neural interface, 9th IEEE/EMBS International Conference on Neural Engineering (NER), Publisher: IEEE, Pages: 719-724, ISSN: 1948-3546

We present a platform technology encompassing a family of innovations that together aim to tackle key challenges with existing implantable brain machine interfaces. The ENGINI (Empowering Next Generation Implantable Neural Interfaces) platform utilizes a 3-tier network (external processor, cranial transponder, intracortical probes) to inductively couple power to, and communicate data from, a distributed array of freely-floating mm-scale probes. Novel features integrated into each probe include: (1) an array of niobium microwires for observing local field potentials (LFPs) along the cortical column; (2) ultra-low power instrumentation for signal acquisition and data reduction; (3) an autonomous, self-calibrating wireless transceiver for receiving power and transmitting data; and (4) a hermetically-sealed micropackage suitable for chronic use. We are additionally engineering a surgical tool, to facilitate manual and robot-assisted insertion, within a streamlined neurosurgical workflow. Ongoing work is focused on system integration and preclinical testing.

Conference paper

Kouris A, Venieris SI, Rizakis M, Bouganis C-S et al., 2019, Approximate LSTMs for time-constrained inference: Enabling fast reaction in self-driving cars, Publisher: arXiv

The need to recognise long-term dependencies in sequential data such as video streams has made LSTMs a prominent AI model for many emerging applications. However, the high computational and memory demands of LSTMs introduce challenges in their deployment on latency-critical systems such as self-driving cars, which are equipped with limited computational resources on-board. In this paper, we introduce an approximate computing scheme combining model pruning and computation restructuring to obtain a high-accuracy approximation of the result in early stages of the computation. Our experiments demonstrate that using the proposed methodology, mission-critical systems responsible for autonomous navigation and collision avoidance are able to make informed decisions based on approximate calculations within the available time budget, meeting their specifications on safety and robustness.

Working paper

De Souza Rosa L, Bouganis C, Bonato V, 2019, Scaling up modulo scheduling for high-level synthesis, Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol: 38, Pages: 912-925, ISSN: 0278-0070

High-Level Synthesis tools have been increasingly used within the hardware design community to bridge the gap between productivity and the need to design large and complex systems. When targeting heterogeneous systems, where the CPU and the FPGA fabric are both available to perform computations, a design space exploration is usually carried out to decide which parts of the initial code should be mapped to the FPGA fabric, such that the overall system's performance is enhanced by accelerating its computation via dedicated processors. As the targeted systems become larger and more complex, leading to a large design space exploration, fast estimation of the possible acceleration that can be obtained by mapping certain functionality to the FPGA fabric is of paramount importance. Loop pipelining, which is responsible for the majority of HLS compilation time, is a key optimisation towards achieving high-performance acceleration kernels. A new modulo scheduling algorithm is proposed, which reformulates the classical modulo scheduling problem and leads to a reduced number of integer linear problems solved, resulting in large computational savings. Moreover, the proposed approach has a controlled trade-off between solution quality and computation time. Results show that scalability improves from quadratic, for the state-of-the-art method, to linear, for the proposed approach, while the optimised loop suffers a 1% (geomean) increment in the total number of cycles.

Journal article
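
For readers unfamiliar with the problem being scaled up: classic iterative modulo scheduling starts at a lower bound on the initiation interval (II) and relaxes it until the loop's operations fit a modulo reservation table. The toy below handles only intra-iteration dependences and a single resource type, sketching the baseline formulation rather than the paper's reformulated algorithm:

```python
from math import ceil

ops = {                 # op name: (latency, predecessors within the iteration)
    "ld":  (2, []),
    "mul": (3, ["ld"]),
    "add": (1, ["mul"]),
    "st":  (1, ["add"]),
}
n_units = 2             # one shared functional-unit type, 2 instances

def modulo_schedule(ops, n_units):
    ii = ceil(len(ops) / n_units)              # resource-constrained MII
    while True:
        table = [0] * ii                       # modulo reservation table
        start, ok = {}, True
        for op, (_, preds) in ops.items():     # ops listed in dependence order
            t0 = max((start[p] + ops[p][0] for p in preds), default=0)
            for t in range(t0, t0 + ii):       # scan one full II window
                if table[t % ii] < n_units:
                    table[t % ii] += 1
                    start[op] = t
                    break
            else:
                ok = False                     # no slot: this II is infeasible
                break
        if ok:
            return ii, start
        ii += 1                                # relax the initiation interval

ii, sched = modulo_schedule(ops, n_units)
print("II =", ii, "start times:", sched)
```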

Boikos K, Bouganis C-S, 2019, A scalable FPGA-based architecture for depth estimation in SLAM, ARC 2019, Publisher: Springer, Pages: 181-196

The current state of the art in Simultaneous Localisation and Mapping, or SLAM, on low-power embedded systems is sparse localisation and mapping at low resolution, in the name of efficiency. Meanwhile, research in this field has provided many advances for information-rich processing and semantic understanding, combined with high computational requirements for real-time processing. This work provides a solution for bridging this gap, in the form of a scalable SLAM-specific architecture for depth estimation for direct semi-dense SLAM. Targeting an off-the-shelf FPGA-SoC, this accelerator architecture achieves a rate of more than 60 mapped frames/sec at a resolution of 640×480, achieving performance on par with a highly-optimised parallel implementation on a high-end desktop CPU with an order of magnitude lower power consumption. Furthermore, the developed architecture is combined with our previous work for the task of tracking, to form the first complete accelerator for semi-dense SLAM on FPGAs, establishing the state of the art in the area of embedded low-power systems.

Conference paper

