Imperial College London

Dr Christos-Savvas Bouganis

Faculty of Engineering, Department of Electrical and Electronic Engineering

Reader in Intelligent Digital Systems

Contact

 

+44 (0)20 7594 6144 · christos-savvas.bouganis · Website

 
 

Location

 

904, Electrical Engineering, South Kensington Campus



Publications



Rosa LDS, Bouganis C-S, Bonato V, 2021, Non-iterative SDC modulo scheduling for high-level synthesis, Microprocessors and Microsystems, Vol: 86, Pages: 1-13, ISSN: 0141-9331

High-level synthesis is a powerful tool for increasing productivity in digital hardware design. However, as digital systems become larger and more complex, designers have to consider an increased number of optimizations and directives offered by high-level synthesis tools to control the hardware generation process. One of the most explored optimizations is loop pipelining due to its impact on hardware throughput and resources. Nevertheless, the modulo scheduling algorithms used in resource-constrained loop pipelining are computationally expensive, and their application across the whole design space is often non-viable. Current state-of-the-art approaches rely on solving multiple optimization problems in polynomial time, or on solving one optimization problem in exponential time. This work proposes a novel data-flow-based approach, where exactly two optimization problems of polynomial time complexity are solved, leading to significant reductions in the computation time for generating a single loop pipeline. Results indicate that, even for complex loops, the proposed method generates high-quality designs, comparable to the ones produced by existing state-of-the-art methods, while reducing the design-space exploration time.

Journal article
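
As background to the scheduling problem the entry above tackles (and not a sketch of the paper's own algorithm), the snippet below shows how the lower bound on the initiation interval (MII) that modulo schedulers start from is typically computed, from resource counts and dependence cycles. The loop body, unit counts and recurrence are hypothetical values chosen for illustration.

    # Minimal background sketch: the minimum initiation interval (MII) used by
    # modulo schedulers. ResMII comes from resource counts, RecMII from
    # loop-carried dependence cycles (passed in explicitly here for brevity).
    from math import ceil
    from collections import Counter

    def res_mii(op_types, units_per_type):
        """ResMII = max over resource types of ceil(#ops / #units)."""
        counts = Counter(op_types)
        return max(ceil(counts[t] / units_per_type[t]) for t in counts)

    def rec_mii(cycles):
        """RecMII = max over dependence cycles of ceil(latency / distance)."""
        return max(ceil(lat / dist) for lat, dist in cycles)

    # Hypothetical loop body: 4 multiplies, 6 adds, 2 loads on 2 multipliers,
    # 2 adders and 1 load port; one recurrence of latency 6 over distance 2.
    ops = ["mul"] * 4 + ["add"] * 6 + ["ld"] * 2
    units = {"mul": 2, "add": 2, "ld": 1}
    print(max(res_mii(ops, units), rec_mii([(6, 2)])))   # MII = max(3, 3) = 3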

Ahmadi N, Constandinou T, Bouganis C, 2021, Inferring entire spiking activity from local field potentials, Scientific Reports, Vol: 11, Pages: 1-13, ISSN: 2045-2322

Extracellular recordings are typically analysed by separating them into two distinct signals: local field potentials (LFPs) and spikes. Previous studies have shown that spikes, in the form of single-unit activity (SUA) or multiunit activity (MUA), can be inferred solely from LFPs with moderately good accuracy. SUA and MUA are typically extracted via a threshold-based technique which may not be reliable when the recordings exhibit a low signal-to-noise ratio (SNR). Another type of spiking activity, referred to as entire spiking activity (ESA), can be extracted by a threshold-less, fast, and automated technique and has led to better performance in several tasks. However, its relationship with the LFPs has not been investigated. In this study, we aim to address this issue by inferring ESA from LFPs intracortically recorded from the motor cortex area of three monkeys performing different tasks. Results from long-term recording sessions and across subjects revealed that ESA can be inferred from LFPs with good accuracy. On average, the inference performance of ESA was consistently and significantly higher than those of SUA and MUA. In addition, local motor potential (LMP) was found to be the most predictive feature. The overall results indicate that LFPs contain substantial information about spiking activity, particularly ESA. This could be useful for understanding the LFP–spike relationship and for the development of LFP-based BMIs.

Journal article
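
A minimal sketch of how an "entire spiking activity" (ESA) style envelope can be obtained without spike thresholding, as described in the entry above: high-pass the wideband signal, full-wave rectify, then low-pass. The cut-off frequencies, filter orders and the synthetic signal are illustrative assumptions, not the exact settings used in the paper.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def esa_envelope(wideband, fs, hp_cut=300.0, lp_cut=12.0):
        b_hp, a_hp = butter(4, hp_cut, btype="highpass", fs=fs)
        spikes_band = filtfilt(b_hp, a_hp, wideband)      # isolate the spiking band
        rectified = np.abs(spikes_band)                   # full-wave rectification
        b_lp, a_lp = butter(4, lp_cut, btype="lowpass", fs=fs)
        return filtfilt(b_lp, a_lp, rectified)            # smooth envelope (ESA)

    fs = 24000.0
    demo = np.random.randn(int(fs))                       # stand-in for one second of raw data
    print(esa_envelope(demo, fs).shape)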

Martorell X, Alvarez C, Bouganis C-S, Sourdis I, et al., 2021, Introduction to the Special Section on FPL 2019, ACM Transactions on Reconfigurable Technology and Systems, Vol: 14, ISSN: 1936-7406

Journal article

Bonato V, Bouganis C-S, 2021, Class-specific early exit design methodology for convolutional neural networks., Applied Soft Computing, Vol: 107, Pages: 1-12, ISSN: 1568-4946

Convolutional Neural Network-based (CNN) inference is a demanding computational task where a long sequence of operations is applied to an input as dictated by the network topology. Optimisations by data quantisation, data reuse, network pruning, and dedicated hardware architectures have a strong impact on reducing both energy consumption and hardware resource requirements, and on improving inference latency. Implementing new applications from established models available from both academic and industrial worlds is common nowadays. Further optimisations that preserve the model architecture have been proposed via early-exiting approaches, where additional exit points are included in order to evaluate classifications of samples that produce feature maps with sufficient evidence to be classified before reaching the final model exit. This paper proposes a methodology for designing early-exit networks from a given baseline model, aiming to improve the average latency for a targeted subset of classes constrained by the original accuracy for all classes. Results demonstrate average time savings in the order of 2.09× to 8.79× for dataset CIFAR10 and 15.00× to 20.71× for CIFAR100, for baseline models ResNet-21, ResNet-110, Inceptionv3-159, and DenseNet-121.

Journal article
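
A minimal sketch of the early-exit idea discussed in the entry above: an auxiliary classifier attached to an intermediate feature map returns early when its softmax confidence clears a threshold, otherwise the rest of the network runs. The backbone, exit placement and threshold are illustrative assumptions, not the paper's design methodology.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EarlyExitNet(nn.Module):
        def __init__(self, num_classes=10, threshold=0.9):
            super().__init__()
            self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                        nn.AdaptiveAvgPool2d(8))
            self.exit1 = nn.Linear(16 * 8 * 8, num_classes)    # early (auxiliary) exit
            self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                        nn.AdaptiveAvgPool2d(1))
            self.exit_final = nn.Linear(32, num_classes)       # final exit
            self.threshold = threshold

        def forward(self, x):
            f1 = self.stage1(x)
            logits1 = self.exit1(f1.flatten(1))
            conf, _ = F.softmax(logits1, dim=1).max(dim=1)
            if not self.training and bool((conf >= self.threshold).all()):
                return logits1                                  # confident: stop early
            return self.exit_final(self.stage2(f1).flatten(1))  # otherwise run the rest

    net = EarlyExitNet().eval()
    print(net(torch.randn(1, 3, 32, 32)).shape)                 # torch.Size([1, 10])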

Ahmadi N, Constandinou TG, Bouganis C-S, 2021, Robust and accurate decoding of hand kinematics from entire spiking activity using deep learning, Journal of Neural Engineering, Vol: 18, Pages: 1-23, ISSN: 1741-2552

Objective. Brain–machine interfaces (BMIs) seek to restore lost motor functions in individuals with neurological disorders by enabling them to control external devices directly with their thoughts. This work aims to improve robustness and decoding accuracy, which are currently major challenges in the clinical translation of intracortical BMIs. Approach. We propose entire spiking activity (ESA)—an envelope of spiking activity that can be extracted by a simple, threshold-less, and automated technique—as the input signal. We couple ESA with a deep learning-based decoding algorithm that uses a quasi-recurrent neural network (QRNN) architecture. We comprehensively evaluate the performance of the ESA-driven QRNN decoder for decoding hand kinematics from neural signals chronically recorded from the primary motor cortex area of three non-human primates performing different tasks. Main results. Our proposed method yields consistently higher decoding performance than any other combination of input signal and decoding algorithm previously reported across long-term recording sessions. It can sustain high decoding performance even when removing spikes from the raw signals, when using different numbers of channels, and when using a smaller amount of training data. Significance. Overall, the results demonstrate exceptionally high decoding accuracy and chronic robustness, which is highly desirable given that this is an unresolved challenge in BMIs.

Journal article

Ahmadi N, Constandinou T, Bouganis C-S, 2021, Impact of referencing scheme on decoding performance of LFP-based brain-machine interface, Journal of Neural Engineering, Vol: 18, ISSN: 1741-2552

OBJECTIVE: There has recently been an increasing interest in local field potential (LFP) for brain-machine interface (BMI) applications due to its desirable properties (signal stability and low bandwidth). LFP is typically recorded with respect to a single unipolar reference which is susceptible to common noise. Several referencing schemes have been proposed to eliminate the common noise, such as bipolar reference, current source density (CSD), and common average reference (CAR). However, to date, there have not been any studies to investigate the impact of these referencing schemes on decoding performance of LFP-based BMIs. APPROACH: To address this issue, we comprehensively examined the impact of different referencing schemes and LFP features on the performance of hand kinematic decoding using a deep learning method. We used LFPs chronically recorded from the motor cortex area of a monkey while performing reaching tasks. MAIN RESULTS: Experimental results revealed that local motor potential (LMP) emerged as the most informative feature regardless of the referencing schemes. Using LMP as the feature, CAR was found to yield consistently better decoding performance than other referencing schemes over long-term recording sessions. SIGNIFICANCE: Overall, our results suggest the potential use of LMP coupled with CAR for enhancing the decoding performance of LFP-based BMIs.

Journal article
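
A minimal sketch of two ingredients discussed in the entry above: common average referencing (subtracting the instantaneous mean across channels) and a local motor potential (LMP) feature, here taken simply as a moving average of each re-referenced channel. The window length and array shapes are illustrative assumptions.

    import numpy as np

    def common_average_reference(lfp):
        """lfp: (channels, samples). Subtract the across-channel mean at each sample."""
        return lfp - lfp.mean(axis=0, keepdims=True)

    def lmp_feature(lfp, win=256):
        """Moving-average (boxcar) of each channel as a simple LMP estimate."""
        kernel = np.ones(win) / win
        return np.vstack([np.convolve(ch, kernel, mode="same") for ch in lfp])

    lfp = np.random.randn(32, 10000)          # 32 channels, stand-in recording
    features = lmp_feature(common_average_reference(lfp))
    print(features.shape)                      # (32, 10000)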

Boroumand S, Bouganis C, Constantinides G, 2021, Learning Boolean circuits from examples for approximate logic synthesis, 26th Asia and South Pacific Design Automation Conference - ASP-DAC 2021, Publisher: ACM

Many computing applications are inherently error resilient. Thus, it is possible to decrease computing accuracy to achieve greater efficiency in area, performance, and/or energy consumption. In recent years, a slew of automatic techniques for approximate computing has been proposed; however, most of these techniques require full knowledge of an exact, or ‘golden’, circuit description. In contrast, there has been significant recent interest in synthesizing computation from examples, a form of supervised learning. In this paper, we explore the relationship between supervised learning of Boolean circuits and existing work on synthesizing incompletely-specified functions. We show that when considered through a machine learning lens, the latter work provides a good training accuracy but poor test accuracy. We contrast this with prior work from the 1990s which uses mutual information to steer the search process, aiming for good generalization. By combining this early work with a recent approach to learning logic functions, we are able to achieve a scalable and efficient machine learning approach for Boolean circuits in terms of area/delay/test-error trade-off.

Conference paper
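
A minimal sketch of the mutual-information idea referenced in the entry above: score each input variable of a Boolean function by how much information it carries about the output over a set of labelled examples, as a guide for which variable to branch on. The hidden target function and example counts are hypothetical; this is not the paper's synthesis flow.

    import numpy as np

    def entropy(p):
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def mutual_information(x, y):
        """x, y: binary arrays of equal length. I(X;Y) = H(Y) - H(Y|X)."""
        h_y = entropy(np.bincount(y, minlength=2) / y.size)
        h_y_given_x = 0.0
        for v in (0, 1):
            mask = x == v
            if mask.any():
                cond = np.bincount(y[mask], minlength=2) / mask.sum()
                h_y_given_x += mask.mean() * entropy(cond)
        return h_y - h_y_given_x

    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(1000, 4))          # examples of a 4-input function
    y = (X[:, 0] & X[:, 1]) | X[:, 2]               # hidden target function
    scores = [mutual_information(X[:, i], y) for i in range(X.shape[1])]
    print(np.argsort(scores)[::-1])                  # x2 should rank highest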

Miliadis P, Bouganis C-S, Pnevmatikatos D, 2021, Performance landscape of resource-constrained platforms targeting DNNs

Over the recent years, a significant number of complex, deep neural networks have been developed for a variety of applications including speech and face recognition, computer vision in the areas of health-care, automatic translation, image classification, etc. Moreover, there is an increasing demand for deploying these networks in resource-constrained edge devices. As the computational demands of these models keep increasing, pushing the targeted devices to their limits, the constant development of new hardware systems tailored to those workloads has been observed. Since programmability of these diverse and complex platforms -- compounded by the rapid development of new DNN models -- is a major challenge, platform vendors have developed Machine Learning-tailored SDKs to maximize the platform's performance. This work investigates the performance achieved on a number of modern commodity embedded platforms, coupled with the vendors' provided software support, when state-of-the-art DNN models from image classification, object detection and image segmentation are targeted. The work quantifies the relative latency gains of the particular embedded platforms and provides insights on the relationship between the required minimum batch size for achieving maximum throughput, concluding that modern embedded systems reach their maximum performance even for modest batch sizes when a modern state-of-the-art DNN model is targeted. Overall, the presented results provide a guide for the expected performance for a number of state-of-the-art DNNs on popular embedded platforms across the image classification, detection and segmentation domains.

Journal article
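
A minimal sketch of the kind of measurement behind the study above: sweep the batch size for a model and record latency and throughput to see where throughput saturates. The model choice, batch sizes and iteration counts are illustrative; on a real embedded platform the vendor's runtime would replace plain PyTorch here.

    import time
    import torch
    import torchvision.models as models

    model = models.resnet18(weights=None).eval()

    with torch.no_grad():
        for batch in (1, 2, 4, 8, 16):
            x = torch.randn(batch, 3, 224, 224)
            model(x)                                   # warm-up run
            start = time.perf_counter()
            for _ in range(5):
                model(x)
            latency = (time.perf_counter() - start) / 5
            print(f"batch={batch:2d}  latency={latency * 1e3:7.1f} ms  "
                  f"throughput={batch / latency:6.1f} img/s")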

Montgomerie-Corcoran A, Bouganis C-S, 2021, DEF: Differential Encoding of Featuremaps for Low Power Convolutional Neural Network Accelerators., Publisher: ACM, Pages: 703-708

Conference paper

Vink DA, Rajagopal A, Venieris SI, Bouganis C-S, et al., 2020, Caffe barista: brewing caffe with FPGAs in the training loop, Publisher: arXiv

As the complexity of deep learning (DL) models increases, their compute requirements increase accordingly. Deploying a Convolutional Neural Network (CNN) involves two phases: training and inference. With the inference task typically taking place on resource-constrained devices, a lot of research has explored the field of low-power inference on custom hardware accelerators. On the other hand, training is both more compute- and memory-intensive and is primarily performed on power-hungry GPUs in large-scale data centres. CNN training on FPGAs is a nascent field of research. This is primarily due to the lack of tools to easily prototype and deploy various hardware and/or algorithmic techniques for power-efficient CNN training. This work presents Barista, an automated toolflow that provides seamless integration of FPGAs into the training of CNNs within the popular deep learning framework Caffe. To the best of our knowledge, this is the only tool that allows for such versatile and rapid deployment of hardware and algorithms for the FPGA-based training of CNNs, providing the necessary infrastructure for further research and development.

Working paper

Rajagopal A, Vink DA, Venieris SI, Bouganis C-S, et al., 2020, Multi-Precision Policy Enforced Training (MuPPET): A precision-switching strategy for quantised fixed-point training of CNNs, Publisher: arXiv

Large-scale convolutional neural networks (CNNs) suffer from very long training times, spanning from hours to weeks, limiting the productivity and experimentation of deep learning practitioners. As networks grow in size and complexity, training time can be reduced through low-precision data representations and computations. However, in doing so the final accuracy suffers due to the problem of vanishing gradients. Existing state-of-the-art methods combat this issue by means of a mixed-precision approach utilising two different precision levels, FP32 (32-bit floating-point) and FP16/FP8 (16-/8-bit floating-point), leveraging the hardware support of recent GPU architectures for FP16 operations to obtain performance gains. This work pushes the boundary of quantised training by employing a multilevel optimisation approach that utilises multiple precisions including low-precision fixed-point representations. The novel training strategy, MuPPET, combines the use of multiple number representation regimes together with a precision-switching mechanism that decides at run time the transition point between precision regimes. Overall, the proposed strategy tailors the training process to the hardware-level capabilities of the target hardware architecture and yields improvements in training time and energy efficiency compared to state-of-the-art approaches. Applying MuPPET on the training of AlexNet, ResNet18 and GoogLeNet on ImageNet (ILSVRC12) and targeting an NVIDIA Turing GPU, MuPPET achieves the same accuracy as standard full-precision training with a training-time speedup of up to 1.84× and an average speedup of 1.58× across the networks.

Working paper
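
A minimal sketch of the general precision-switching idea described in the entry above: train with simulated ("fake") fixed-point quantisation of the weights and move to a higher bit-width when the loss stops improving. The switching criterion, bit-width schedule, model and data here are illustrative assumptions, not the MuPPET policy.

    import torch
    import torch.nn as nn

    def fake_quantise(w, bits, scale=1.0):
        # Round weights onto a symmetric fixed-point grid of the given bit-width.
        step = scale / (2 ** (bits - 1))
        return (w / step).round().clamp(-2 ** (bits - 1), 2 ** (bits - 1) - 1) * step

    model = nn.Linear(16, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.05)
    x, y = torch.randn(256, 16), torch.randn(256, 1)

    schedule, level, best = [8, 12, 16], 0, float("inf")
    for epoch in range(30):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
        with torch.no_grad():                      # quantise weights to the current regime
            for p in model.parameters():
                p.copy_(fake_quantise(p, schedule[level]))
        if loss.item() > best - 1e-3 and level < len(schedule) - 1:
            level += 1                             # plateau detected: raise precision
            print(f"epoch {epoch}: switching to {schedule[level]}-bit weights")
        best = min(best, loss.item())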

Kouris A, Venieris S, Bouganis C-S, 2020, A throughput-latency co-optimised cascade of convolutional neural network classifiers, Design, Automation and Test in Europe Conference (DATE 2020), Publisher: IEEE, Pages: 1656-1661

Convolutional Neural Networks constitute a prominent AI model for classification tasks, serving a broad span of diverse application domains. To enable their efficient deployment in real-world tasks, the inherent redundancy of CNNs is frequently exploited to eliminate unnecessary computational costs. Driven by the fact that not all inputs require the same amount of computation to drive a confident prediction, multi-precision cascade classifiers have been recently introduced. FPGAs comprise a promising platform for the deployment of such input-dependent computation models, due to their enhanced customisation capabilities. Current literature, however, is limited to throughput-optimised cascade implementations, employing large batching at the expense of a substantial latency aggravation prohibiting their deployment on real-time scenarios. In this work, we introduce a novel methodology for throughput-latency co-optimised cascaded CNN classification, deployed on a custom FPGA architecture tailored to the target application and deployment platform, with respect to a set of user-specified requirements on accuracy and performance. Our experiments indicate that the proposed approach achieves comparable throughput gains with related state-of-the-art works, under substantially reduced overhead in latency, enabling its deployment on latency-sensitive applications.

Conference paper
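
A minimal sketch of the cascade idea in the entry above: run every input through a small, cheap classifier first and re-run only the low-confidence samples on a larger model. Both models and the confidence threshold are illustrative assumptions; the paper's FPGA architecture and co-optimisation are not shown.

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    small = models.mobilenet_v2(weights=None).eval()
    large = models.resnet50(weights=None).eval()
    threshold = 0.8

    with torch.no_grad():
        batch = torch.randn(16, 3, 224, 224)
        probs = F.softmax(small(batch), dim=1)
        conf, preds = probs.max(dim=1)
        unsure = conf < threshold                        # samples escalated to stage 2
        if unsure.any():
            preds[unsure] = large(batch[unsure]).argmax(dim=1)
        print(f"escalated {int(unsure.sum())}/{batch.size(0)} samples")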

Rajagopal A, Bouganis C-S, 2020, Now that I can see, I can improve: Enabling data-driven finetuning of CNNs on the edge, Publisher: arXiv

In today's world, a vast amount of data is being generated by edge devices that can be used as valuable training data to improve the performance of machine learning algorithms in terms of the achieved accuracy or to reduce the compute requirements of the model. However, due to user data privacy concerns as well as storage and communication bandwidth limitations, this data cannot be moved from the device to the data centre for further improvement of the model and subsequent deployment. As such there is a need for increased edge intelligence, where the deployed models can be fine-tuned on the edge, leading to improved accuracy and/or reducing the model's workload as well as its memory and power footprint. In the case of Convolutional Neural Networks (CNNs), both the weights of the network as well as its topology can be tuned to adapt to the data that it processes. This paper provides a first step towards enabling CNN finetuning on an edge device based on structured pruning. It explores the performance gains and costs of doing so and presents an extensible open-source framework that allows the deployment of such approaches on a wide range of network architectures and devices. The results show that on average, data-aware pruning with retraining can provide 10.2pp increased accuracy over a wide range of subsets, networks and pruning levels, with a maximum improvement of 42.0pp over pruning and retraining in a manner agnostic to the data being processed by the network.

Working paper
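
A minimal sketch of structured (channel) pruning, the mechanism the entry above builds on: rank a convolution's output channels by the L1 norm of their filters and zero out the lowest-ranked fraction. The layer and pruning ratio are illustrative; the data-aware ranking and on-device retraining described in the paper are not shown.

    import torch
    import torch.nn as nn

    def prune_channels(conv: nn.Conv2d, ratio: float = 0.5):
        with torch.no_grad():
            scores = conv.weight.abs().sum(dim=(1, 2, 3))    # L1 norm per output channel
            n_prune = int(ratio * scores.numel())
            drop = torch.argsort(scores)[:n_prune]            # weakest channels
            conv.weight[drop] = 0.0
            if conv.bias is not None:
                conv.bias[drop] = 0.0
        return drop

    conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
    removed = prune_channels(conv, ratio=0.25)
    print(f"zeroed {removed.numel()} of {conv.out_channels} channels")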

Kouris A, Venieris SI, Rizakis M, Bouganis C-S, et al., 2020, Approximate LSTMs for time-constrained inference: enabling fast reaction in self-driving cars., IEEE Consumer Electronics Magazine, Vol: 9, Pages: 11-26, ISSN: 2162-2248

The need to recognize long-term dependencies in sequential data, such as video streams, has made long short-term memory (LSTM) networks a prominent artificial intelligence model for many emerging applications. However, the high computational and memory demands of LSTMs introduce challenges in their deployment on latency-critical systems such as self-driving cars, which are equipped with limited computational resources on-board. In this article, we introduce a progressive inference computing scheme that combines model pruning and computation restructuring leading to the best possible approximation of the result given the available latency budget of the target application. The proposed methodology enables mission-critical systems to make informed decisions even in early stages of the computation, based on approximate LSTM inference, meeting their specifications on safety and robustness. Our experiments on a state-of-the-art driving model for autonomous vehicle navigation demonstrate that the proposed approach can yield outputs with similar quality of result compared to a faithful LSTM baseline, up to 415× faster (198× on average, 76× geo. mean).

Journal article
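
A minimal sketch of the "best approximation within the time budget" idea from the entry above, shown on a single matrix-vector product rather than a full LSTM: weight columns are pre-sorted by importance (here, column norm) and accumulated most-important first, so the partial result available at the deadline is the best approximation so far. The ranking, chunking and budget are illustrative assumptions, not the paper's computation restructuring.

    import time
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((256, 1024))
    x = rng.standard_normal(1024)

    order = np.argsort(-np.linalg.norm(W, axis=0))        # most important columns first
    budget_s = 0.5e-3                                     # latency budget for this step
    y = np.zeros(W.shape[0])
    start = time.perf_counter()
    for cols in np.array_split(order, 32):                # process columns in chunks
        y += W[:, cols] @ x[cols]
        if time.perf_counter() - start > budget_s:
            break                                         # deadline hit: return partial y
    print("approximation error:", np.linalg.norm(y - W @ x) / np.linalg.norm(W @ x))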

Kouris A, Kyrkou C, Bouganis C-S, 2020, Informed Region Selection for Efficient UAV-based Object Detectors: Altitude-aware Vehicle Detection with CyCAR Dataset, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 51-58, ISSN: 2153-0858

Conference paper

Ribes S, Trancoso P, Sourdis I, Bouganis C-S, et al., 2020, Mapping Multiple LSTM models on FPGAs, 19th International Conference on Field-Programmable Technology (ICFPT), Publisher: IEEE COMPUTER SOC, Pages: 1-9

Conference paper

Vink DA, Rajagopal A, Venieris SI, Bouganis C-S, et al., 2020, Caffe Barista: Brewing Caffe with FPGAs in the Training Loop., Pages: 317-322

Conference paper

Rajagopal A, Bouganis C-S, 2020, Now that I can see, I can improve: Enabling data-driven finetuning of CNNs on the edge., Publisher: IEEE, Pages: 3058-3067

Conference paper

Rajagopal A, Vink DA, Venieris SI, Bouganis C-S, et al., 2020, Multi-Precision Policy Enforced Training (MuPPET): A Precision-Switching Strategy for Quantised Fixed-Point Training of CNNs., Publisher: PMLR, Pages: 7943-7952

Conference paper

Yu Z, Bouganis C-S, 2020, A Parameterisable FPGA-Tailored Architecture for YOLOv3-Tiny., Publisher: Springer, Pages: 330-344

Conference paper

Olaizola J, Bouganis C-S, Argandoña ESD, Iturrospe A, Abete JM, et al., 2020, Real-time servo press force estimation based on dual particle filter., IEEE Transactions on Industrial Electronics, Vol: 67, Pages: 4088-4097, ISSN: 0278-0046

The ability to monitor the quality of the metal forming process as well as the machine's condition is of significant importance in modern industrial processes. In the case where a physical device (i.e., sensor) cannot be deployed due to the characteristics of the system, models that rely on the estimation of both the applied force and the dynamic behavior of the machine (i.e., system) are adopted. The development of such models and the corresponding algorithms used to estimate the above-mentioned quantities has attracted the interest of the community. The main contribution of this paper is the estimation of a servo press force by employing a novel dual particle filter based algorithm, achieving a maximum relative error in the force estimation of 3.6%. Moreover, to address real-time performance requirements, this paper proposes a field programmable gate array based accelerator that improves the sampling rate by a factor of 200 compared to a processor-based solution, thus enabling the deployment of the system in many realistic scenarios.

Journal article
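
As background to the entry above, a minimal sketch of a basic (single) particle filter: propagate particles through a process model, weight them by the measurement likelihood, estimate, and resample. The dynamics, noise levels and particle count are illustrative assumptions; the paper's dual formulation, which also estimates model parameters, is not shown.

    import numpy as np

    rng = np.random.default_rng(1)
    n_particles, steps = 500, 50
    true_state, particles = 0.0, rng.normal(0.0, 1.0, n_particles)

    for _ in range(steps):
        true_state = 0.95 * true_state + rng.normal(0.0, 0.1)              # hidden dynamics
        measurement = true_state + rng.normal(0.0, 0.3)                    # noisy observation
        particles = 0.95 * particles + rng.normal(0.0, 0.1, n_particles)   # predict
        weights = np.exp(-0.5 * ((measurement - particles) / 0.3) ** 2)    # likelihood
        weights /= weights.sum()
        estimate = np.sum(weights * particles)                             # weighted estimate
        particles = rng.choice(particles, size=n_particles, p=weights)     # resample

    print(f"final estimate {estimate:+.3f} vs true state {true_state:+.3f}")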

Kouris A, Venieris S, Bouganis C-S, 2019, Towards efficient on-board deployment of DNNs on intelligent autonomous systems, 18th IEEE-Computer-Society Annual Symposium on VLSI (ISVLSI), Publisher: IEEE COMPUTER SOC, Pages: 570-575, ISSN: 2159-3469

With their unprecedented performance in major AI tasks, deep neural networks (DNNs) have emerged as a primary building block in modern autonomous systems. Intelligent systems such as drones, mobile robots and driverless cars largely base their perception, planning and application-specific tasks on DNN models. Nevertheless, due to the nature of these applications, such systems require on-board local processing in order to retain their autonomy and meet latency and throughput constraints. In this respect, the large computational and memory demands of DNN workloads pose a significant barrier on their deployment on the resource- and power-constrained compute platforms that are available on-board. This paper presents an overview of recent methods and hardware architectures that address the system-level challenges of modern DNN-enabled autonomous systems at both the algorithmic and hardware design level. Spanning from latency-driven approximate computing techniques to high-throughput mixed-precision cascaded classifiers, the presented set of works paves the way for the on-board deployment of sophisticated DNN models on robots and autonomous systems.

Conference paper

Ahmadi N, Constandinou TG, Bouganis C-S, 2019, End-to-End Hand Kinematic Decoding from LFPs Using Temporal Convolutional Network, IEEE Biomedical Circuits and Systems Conference (BioCAS), Publisher: IEEE, Pages: 1-4, ISSN: 2163-4025

In recent years, local field potentials (LFPs) have emerged as a promising alternative input signal for brain-machine interfaces (BMIs). Several studies have demonstrated that LFP-based BMIs could provide long-term recording stability and comparable decoding performance to their spike counterparts. Despite the compelling results, however, most LFP-based BMIs still make use of hand-crafted features, which can be time-consuming and suboptimal. In this paper, we propose an end-to-end system approach based on a temporal convolutional network (TCN) to automatically extract features and decode kinematics of hand movements directly from raw LFP signals. We benchmark its decoding performance against a traditional approach incorporating long short-term memory (LSTM) decoders driven by hand-crafted LFP features. Experimental results demonstrate significant performance improvement of the proposed approach compared to the traditional approach. This suggests the suitability of the TCN-based end-to-end system and its potential for providing stable and high decoding performance for LFP-based BMIs.

Conference paper
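
A minimal sketch of a temporal convolutional network (TCN) block of the kind the entry above builds on: causal, dilated 1-D convolutions with a residual connection, mapping a raw multi-channel sequence to kinematic outputs. Channel counts, depth and output dimensionality are illustrative assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    class CausalBlock(nn.Module):
        def __init__(self, channels, kernel_size=3, dilation=1):
            super().__init__()
            self.pad = (kernel_size - 1) * dilation            # left-pad only => causal
            self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
            self.relu = nn.ReLU()

        def forward(self, x):
            y = self.conv(nn.functional.pad(x, (self.pad, 0)))
            return self.relu(y) + x                             # residual connection

    tcn = nn.Sequential(
        nn.Conv1d(32, 64, kernel_size=1),                       # 32 LFP channels in
        CausalBlock(64, dilation=1),
        CausalBlock(64, dilation=2),
        CausalBlock(64, dilation=4),
        nn.Conv1d(64, 2, kernel_size=1),                        # e.g. x/y hand velocity out
    )
    lfp = torch.randn(1, 32, 500)                               # (batch, channels, time)
    print(tcn(lfp).shape)                                       # torch.Size([1, 2, 500])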

Vasileiadis M, Bouganis C-S, Tzovaras D, 2019, Multi-person 3D pose estimation from 3D cloud data using 3D convolutional neural networks, Computer Vision and Image Understanding, Vol: 185, Pages: 12-23, ISSN: 1077-3142

Human pose estimation is considered one of the major challenges in the field of Computer Vision, playing an integral role in a large variety of technology domains. While, in the last few years, there has been an increased number of research approaches towards CNN-based 2D human pose estimation from RGB images, respective work on CNN-based 3D human pose estimation from depth/3D data has been rather limited, with current approaches failing to outperform earlier methods, partially due to the utilization of depth maps as simple 2D single-channel images, instead of an actual 3D world representation. In order to overcome this limitation, and taking into consideration recent advances in 3D detection tasks of similar nature, we propose a novel fully-convolutional, detection-based 3D-CNN architecture for 3D human pose estimation from 3D data. The architecture follows the sequential network architecture paradigm, generating per-voxel likelihood maps for each human joint, from a 3D voxel-grid input, and is extended, through a bottom-up approach, towards multi-person 3D pose estimation, allowing the algorithm to simultaneously estimate multiple human poses, without its runtime complexity being affected by the number of people within the scene. The proposed multi-person architecture, which is the first within the scope of 3D human pose estimation, is comparatively evaluated on three single person public datasets, achieving state-of-the-art performance, as well as on a public multi-person dataset achieving high recognition accuracy.

Journal article
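
A minimal sketch of the detection-based formulation described in the entry above: a small fully-convolutional 3D CNN turns a voxel-grid input into one likelihood volume per joint, and each joint location is read out as the argmax voxel. Grid size, network depth and joint count are illustrative assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    num_joints = 15
    net = nn.Sequential(
        nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv3d(32, num_joints, kernel_size=1),            # per-voxel joint likelihoods
    )

    voxels = torch.randn(1, 1, 32, 32, 32)                   # occupancy grid of the scene
    heatmaps = net(voxels)                                   # (1, 15, 32, 32, 32)
    idx = heatmaps.flatten(2).argmax(dim=2)[0]               # argmax voxel per joint
    z, rem = idx // (32 * 32), idx % (32 * 32)
    coords = torch.stack((z, rem // 32, rem % 32), dim=1)    # (joint, [z, y, x])
    print(coords.shape)                                      # torch.Size([15, 3])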

Liu J, Bouganis C, Cheung PYK, 2019, Context-based image acquisition from memory in digital systems, Journal of Real-Time Image Processing, Vol: 16, Pages: 1057-1076, ISSN: 1861-8200

A key consideration in the design of image and video processing systems is the ever-increasing spatial resolution of the captured images, which has a major impact on the performance requirements of the memory subsystem. This is further amplified by the fact that the memory bandwidth requirements and energy consumption of accessing the captured images have started to become the bottlenecks in the design of high-performance image processing systems. Inspired by the successful application of progressive image sampling techniques in various image processing tasks, this work proposes the concept of Context-based Image Acquisition for hardware systems that efficiently trades image quality for reduced cost of the image acquisition process. Based on the proposed framework, a hardware architecture is developed which alters the conventional memory access pattern, to progressively and adaptively access pixels from a memory subsystem. The sampled pixels are used to reconstruct an approximation to the ground truth, which is stored in a high-performance image buffer for further processing. An instance of the architecture is prototyped on an FPGA and its performance evaluation shows that a saving of up to 85% of memory accessing time and 33%/45% of image acquisition time/energy are achieved on a set of benchmarks while maintaining a high PSNR.

Journal article
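
A minimal software sketch of the progressive, adaptive sampling idea behind the entry above (not the hardware architecture itself): read a coarse grid of pixels, reconstruct by interpolation, then fetch extra pixels only where the local estimate varies strongly. The step size and refinement rule are illustrative assumptions.

    import numpy as np
    from scipy.interpolate import griddata
    from scipy.ndimage import sobel

    rng = np.random.default_rng(0)
    image = rng.random((128, 128))                          # stand-in for the frame in memory

    ys, xs = np.mgrid[0:128:8, 0:128:8]                     # coarse pass: every 8th pixel
    pts = np.column_stack([ys.ravel(), xs.ravel()])
    vals = image[pts[:, 0], pts[:, 1]]
    grid_y, grid_x = np.mgrid[0:128, 0:128]
    approx = griddata(pts, vals, (grid_y, grid_x), method="nearest")

    edges = np.hypot(sobel(approx, axis=0), sobel(approx, axis=1))
    refine = np.argwhere(edges > np.percentile(edges, 90))  # refine the busiest 10% of pixels
    approx[refine[:, 0], refine[:, 1]] = image[refine[:, 0], refine[:, 1]]

    sampled = pts.shape[0] + refine.shape[0]
    print(f"accessed {sampled} of {image.size} pixels ({100 * sampled / image.size:.1f}%)")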

Kostavelis I, Vasileiadis M, Skartados E, Kargakos A, Giakoumis D, Bouganis C-S, Tzovaras D, et al., 2019, Understanding of human behavior with a robotic agent through daily activity analysis, International Journal of Social Robotics, Vol: 11, Pages: 437-462, ISSN: 1875-4791

Personal assistive robots to be realized in the near future should have the ability to seamlessly coexist with humans in unconstrained environments, with the robot’s capability to understand and interpret the human behavior during human–robot cohabitation significantly contributing towards this end. Still, the understanding of human behavior through a robot is a challenging task as it necessitates a comprehensive representation of the high-level structure of the human’s behavior from the robot’s low-level sensory input. The paper at hand tackles this problem by demonstrating a robotic agent capable of apprehending human daily activities through a method, the Interaction Unit analysis, that enables activities’ decomposition into a sequence of units, each one associated with a behavioral factor. The modelling of human behavior is addressed with a Dynamic Bayesian Network that operates on top of the Interaction Unit, offering quantification of the behavioral factors and the formulation of the human’s behavioral model. In addition, light-weight human action and object manipulation monitoring strategies have been developed, based on RGB-D and laser sensors, tailored for onboard robot operation. As a proof of concept, we used our robot to evaluate the ability of the method to differentiate among the examined human activities, as well as to assess the capability of behavior modeling of people with Mild Cognitive Impairment. Moreover, we deployed our robot in 12 real house environments with real users, showcasing the behavior understanding ability of our method in unconstrained realistic environments. The evaluation process revealed promising performance and demonstrated that human behavior can be automatically modeled through Interaction Unit analysis, directly from robotic agents.

Journal article

Ahmadi N, Cavuto ML, Feng P, Leene LB, Maslik M, Mazza F, Savolainen O, Szostak KM, Bouganis C-S, Ekanayake J, Jackson A, Constandinou TG, et al., 2019, Towards a distributed, chronically-implantable neural interface, 9th IEEE/EMBS International Conference on Neural Engineering (NER), Publisher: IEEE, Pages: 719-724, ISSN: 1948-3546

We present a platform technology encompassing a family of innovations that together aim to tackle key challenges with existing implantable brain machine interfaces. The ENGINI (Empowering Next Generation Implantable Neural Interfaces) platform utilizes a 3-tier network (external processor, cranial transponder, intracortical probes) to inductively couple power to, and communicate data from, a distributed array of freely-floating mm-scale probes. Novel features integrated into each probe include: (1) an array of niobium microwires for observing local field potentials (LFPs) along the cortical column; (2) ultra-low power instrumentation for signal acquisition and data reduction; (3) an autonomous, self-calibrating wireless transceiver for receiving power and transmitting data; and (4) a hermetically-sealed micropackage suitable for chronic use. We are additionally engineering a surgical tool, to facilitate manual and robot-assisted insertion, within a streamlined neurosurgical workflow. Ongoing work is focused on system integration and preclinical testing.

Conference paper

Kouris A, Venieris SI, Rizakis M, Bouganis C-S, et al., 2019, Approximate LSTMs for time-constrained inference: Enabling fast reaction in self-driving cars, Publisher: arXiv

The need to recognise long-term dependencies in sequential data such as video streams has made LSTMs a prominent AI model for many emerging applications. However, the high computational and memory demands of LSTMs introduce challenges in their deployment on latency-critical systems such as self-driving cars which are equipped with limited computational resources on-board. In this paper, we introduce an approximate computing scheme combining model pruning and computation restructuring to obtain a high-accuracy approximation of the result in early stages of the computation. Our experiments demonstrate that using the proposed methodology, mission-critical systems responsible for autonomous navigation and collision avoidance are able to make informed decisions based on approximate calculations within the available time budget, meeting their specifications on safety and robustness.

Working paper

De Souza Rosa L, Bouganis C, Bonato V, 2019, Scaling up modulo scheduling for high-level synthesis, Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol: 38, Pages: 912-925, ISSN: 0278-0070

High-Level Synthesis tools have been increasingly used within the hardware design community to bridge the gap between productivity and the need to design large and complex systems. When targeting heterogeneous systems, where the CPU and the FPGA fabric are both available to perform computations, a design space exploration is usually carried out to decide which parts of the initial code should be mapped to the FPGA fabric such that the overall system's performance is enhanced by accelerating its computation via dedicated processors. As the targeted systems become more complex and larger, leading to a large design space exploration, a fast estimate of the possible acceleration that can be obtained by mapping certain functionality into the FPGA fabric is of paramount importance. Loop pipelining, which is responsible for the majority of HLS compilation time, is a key optimization towards achieving high-performance acceleration kernels. A new modulo scheduling algorithm is proposed, which reformulates the classical modulo scheduling problem and leads to a reduced number of integer linear problems solved, resulting in large computational savings. Moreover, the proposed approach has a controlled trade-off between solution quality and computation time. Results show that scalability is improved from quadratic, for the state-of-the-art method, to linear, for the proposed approach, while the optimized loop suffers a 1% (geomean) increment in the total number of cycles.

Journal article

Boikos K, Bouganis C-S, 2019, A scalable FPGA-based architecture for depth estimation in SLAM, ARC 2019, Publisher: Springer, Pages: 181-196

The current state of the art of Simultaneous Localisation and Mapping, or SLAM, on low-power embedded systems is about sparse localisation and mapping with low-resolution results in the name of efficiency. Meanwhile, research in this field has provided many advances for information-rich processing and semantic understanding, combined with high computational requirements for real-time processing. This work provides a solution to bridging this gap, in the form of a scalable SLAM-specific architecture for depth estimation for direct semi-dense SLAM. Targeting an off-the-shelf FPGA-SoC, this accelerator architecture achieves a rate of more than 60 mapped frames/sec at a resolution of 640×480, achieving performance on par with a highly-optimised parallel implementation on a high-end desktop CPU with an order of magnitude improved power consumption. Furthermore, the developed architecture is combined with our previous work for the task of tracking, to form the first complete accelerator for semi-dense SLAM on FPGAs, establishing the state of the art in the area of embedded low-power systems.

Conference paper

