Imperial College London

Dr Panagiota (Tania) Stathaki

Faculty of Engineering, Department of Electrical and Electronic Engineering

Reader in Signal Processing
 
 
 

Contact

 

+44 (0)20 7594 6229 | t.stathaki | Website

 
 

Assistant

 

Miss Vanessa Rodriguez-Gonzalez +44 (0)20 7594 6267

 

Location

 

812, Electrical Engineering, South Kensington Campus



Publications


185 results found

Barmpoutis P, Kastridis A, Stathaki T, Yuan J, Shi M, Grammalidis N et al., 2023, Suburban Forest Fire Risk Assessment and Forest Surveillance Using 360-Degree Cameras and a Multiscale Deformable Transformer, Remote Sensing, Vol: 15

In the current context of climate change and demographic expansion, one of the phenomena that humanity faces is suburban wildfires. To prevent the occurrence of suburban forest fires, fire risk assessment and early fire detection approaches need to be applied. Forest fire risk mapping depends on various factors and contributes to the identification and monitoring of vulnerable zones where risk factors are most severe. Therefore, watchtowers, sensors, and base stations of autonomous unmanned aerial vehicles need to be placed carefully in order to ensure adequate visibility or battery autonomy. In this study, fire risk assessment of an urban forest was performed and the recently introduced 360-degree data were used for early fire detection. Furthermore, a single-step approach that integrates a multiscale vision transformer was introduced for accurate fire detection. The study area includes the suburban pine forest of Thessaloniki city (Greece) named Seich Sou, which is prone to wildfires. For the evaluation of the performance of the proposed workflow, real and synthetic 360-degree images were used. Experimental results demonstrate the great potential of the proposed system, which achieved an F-score of 91.6% for real fire event detection. This indicates that the proposed method could significantly contribute to the monitoring, protection, and early fire detection of the suburban forest of Thessaloniki.

Journal article
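As a point of reference for the F-score quoted in the abstract above, the following minimal Python sketch shows how an F1 score is computed from detection counts; the counts used here are illustrative placeholders, not figures from the paper.

def f_score(true_positives, false_positives, false_negatives):
    # F1 is the harmonic mean of precision and recall.
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# Hypothetical confusion counts for a fire / no-fire detector.
print(f"F1 = {f_score(110, 12, 8):.3f}")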

Ren G, Xie Y, Dai T, Stathaki T et al., 2022, Progressive multi-scale fusion network for RGB-D salient object detection, COMPUTER VISION AND IMAGE UNDERSTANDING, Vol: 223, ISSN: 1077-3142

Journal article

Ren G, Yu Y, Liu H, Stathaki T et al., 2022, Dynamic Knowledge Distillation with Noise Elimination for RGB-D Salient Object Detection, SENSORS, Vol: 22

Journal article

Lazarou M, Stathaki T, Avrithis Y, 2022, Tensor feature hallucination for few-shot learning, 22nd IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Publisher: IEEE COMPUTER SOC, Pages: 2050-2060, ISSN: 2472-6737

Few-shot learning addresses the challenge of learning how to address novel tasks given not just limited supervision but limited data as well. An attractive solution is synthetic data generation. However, most such methods are overly sophisticated, focusing on high-quality, realistic data in the input space. It is unclear whether adapting them to the few-shot regime and using them for the downstream task of classification is the right approach. Previous works on synthetic data generation for few-shot classification focus on exploiting complex models, e.g. a Wasserstein GAN with multiple regularizers or a network that transfers latent diversities from known to novel classes. We follow a different approach and investigate how a simple and straightforward synthetic data generation method can be used effectively. We make two contributions, namely we show that: (1) using a simple loss function is more than enough for training a feature generator in the few-shot setting; and (2) learning to generate tensor features instead of vector features is superior. Extensive experiments on miniImagenet, CUB and CIFAR-FS datasets show that our method sets a new state of the art, outperforming more sophisticated few-shot data augmentation methods. The source code can be found at https://github.com/MichalisLazarou/TFH_fewshot.

Conference paper
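To make the idea above concrete, here is a minimal PyTorch sketch of a conditional generator that produces tensor-shaped features (channels x height x width) rather than flat vectors and is trained with a plain reconstruction-style loss. It is an illustration of the general approach only, not the authors' implementation (which is available at the GitHub link in the abstract); the layer sizes, the prototype conditioning and the MSE loss are assumptions made for the example.

import torch
import torch.nn as nn

class TensorFeatureGenerator(nn.Module):
    def __init__(self, noise_dim=64, channels=64, spatial=5):
        super().__init__()
        self.channels, self.spatial = channels, spatial
        self.net = nn.Sequential(
            nn.Linear(noise_dim + channels, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, channels * spatial * spatial),
        )

    def forward(self, noise, class_prototype):
        # Condition on a class prototype (e.g. the mean support feature) and
        # reshape the output into a tensor feature map instead of a flat vector.
        x = self.net(torch.cat([noise, class_prototype], dim=1))
        return x.view(-1, self.channels, self.spatial, self.spatial)

generator = TensorFeatureGenerator()
noise = torch.randn(8, 64)
prototype = torch.randn(8, 64)            # hypothetical class prototype
real_features = torch.randn(8, 64, 5, 5)  # hypothetical backbone features
loss = nn.functional.mse_loss(generator(noise, prototype), real_features)  # "simple loss"
loss.backward()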

Pellegrino E, Stathaki T, 2022, Automatic Crack Detection with Calculus of Variations, 21st International Conference on Intelligent Systems Design and Applications (ISDA), Publisher: SPRINGER INTERNATIONAL PUBLISHING AG, Pages: 432-440, ISSN: 2367-3370

Conference paper

Ren G, Dai T, Stathaki T, 2022, ADAPTIVE INTRA-GROUP AGGREGATION FOR CO-SALIENCY DETECTION, 47th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Publisher: IEEE, Pages: 2520-2524, ISSN: 1520-6149

Conference paper

Barmpoutis P, Yuan J, Waddingham W, Ross C, Hamzeh K, Stathaki T, Alexander DC, Jansen M et al., 2022, Multi-scale Deformable Transformer for the Classification of Gastric Glands: The IMGL Dataset, 1st International Workshop on Cancer Prevention Through Early Detection (CaPTion), Publisher: SPRINGER INTERNATIONAL PUBLISHING AG, Pages: 24-33, ISSN: 0302-9743

Conference paper

Pellegrino E, Stathaki T, 2022, An Automated Procedure for Crack Detection, 1st IFToMM for Sustainable Development Goals workshop (I4SDG), Publisher: SPRINGER INTERNATIONAL PUBLISHING AG, Pages: 250-254, ISSN: 2211-0984

Conference paper

Barmpoutis P, Waddingham W, Yuan J, Ross C, Kayhanian H, Stathaki T, Alexander DC, Jansen M et al., 2022, A digital pathology workflow for the segmentation and classification of gastric glands: Study of gastric atrophy and intestinal metaplasia cases., PLoS One, Vol: 17

Gastric cancer is one of the most frequent causes of cancer-related deaths worldwide. Gastric atrophy (GA) and gastric intestinal metaplasia (IM) of the mucosa of the stomach have been found to increase the risk of gastric cancer and are considered precancerous lesions. Therefore, the early detection of GA and IM may have a valuable role in histopathological risk assessment. However, GA and IM are difficult to confirm endoscopically and, following the Sydney protocol, their diagnosis depends on the analysis of glandular morphology and on the identification of at least one well-defined goblet cell in a set of hematoxylin and eosin (H&E)-stained biopsy samples. To this end, the precise segmentation and classification of glands from the histological images play an important role in the diagnostic confirmation of GA and IM. In this paper, we propose a digital pathology end-to-end workflow for gastric gland segmentation and classification for the analysis of gastric tissues. The proposed GAGL-VTNet initially extracts both global and local features, combining multi-scale feature maps for the segmentation of glands, and subsequently adopts a vision transformer that exploits the visual dependencies of the segmented glands towards their classification. For the analysis of gastric tissues, segmentation of mucosa is performed through an unsupervised model combining energy minimization and a U-Net model. Then, features of the segmented glands and mucosa are extracted and analyzed. To evaluate the efficiency of the proposed methodology we created the GAGL dataset consisting of 85 WSI, collected from 20 patients. The results demonstrate significant differences in the extracted features between normal, GA and IM cases. The proposed approach for gland and mucosa segmentation achieves an object dice score equal to 0.908 and 0.967 respectively, while for the classification of glands it achieves an F1 score equal to 0.94 showing great potential for the autom

Journal article
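For reference, the Dice scores quoted in the abstract above measure mask overlap; the short sketch below computes the standard Dice coefficient for one pair of binary masks. The paper reports an object-level Dice, so this is only an illustration of the underlying metric, and the masks here are synthetic placeholders rather than gland annotations.

import numpy as np

def dice_score(pred, target, eps=1e-8):
    # Dice = 2 * |A intersect B| / (|A| + |B|) for binary masks.
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

# Illustrative masks, not real data.
prediction = np.zeros((64, 64), dtype=bool)
prediction[10:40, 10:40] = True
ground_truth = np.zeros((64, 64), dtype=bool)
ground_truth[12:42, 12:42] = True
print(f"Dice = {dice_score(prediction, ground_truth):.3f}")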

Lazarou M, Li B, Stathaki T, 2021, A novel shape matching descriptor for real-time static hand gesture recognition, COMPUTER VISION AND IMAGE UNDERSTANDING, Vol: 210, ISSN: 1077-3142

Journal article

Lazarou M, Stathaki T, Avrithis Y, 2021, Iterative label cleaning for transductive and semi-supervised few-shot learning, 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), Pages: 8731-8740

Conference paper

Liu T, Luo W, Ma L, Huang J-J, Stathaki T, Dai T et al., 2020, Coupled network for robust pedestrian detection with gated multi-layer feature extraction and deformable occlusion handling, IEEE Transactions on Image Processing, Vol: 30, Pages: 754-766, ISSN: 1057-7149

Pedestrian detection methods have been significantly improved with the development of deep convolutional neural networks. Nevertheless, detecting small-scale pedestrians and occluded pedestrians remains a challenging problem. In this paper, we propose a pedestrian detection method with a coupled network to simultaneously address these two issues. One of the sub-networks, the gated multi-layer feature extraction sub-network, aims to adaptively generate discriminative features for pedestrian candidates in order to robustly detect pedestrians with large variations in scale. The second sub-network handles the occlusion problem of pedestrian detection by using deformable region of interest (RoI) pooling. We investigate two different gate units for the gated sub-network, namely the channel-wise gate unit and the spatial-wise gate unit, which enhance the representation ability of the regional convolutional features across the channel dimensions or across the spatial domain, respectively. Ablation studies have validated the effectiveness of both the proposed gated multi-layer feature extraction sub-network and the deformable occlusion handling sub-network. With the coupled framework, our proposed pedestrian detector achieves promising results on two pedestrian datasets, especially on detecting small or occluded pedestrians. On the CityPersons dataset, the proposed detector achieves the lowest miss rates (i.e. 40.78% and 34.60%) on detecting small and occluded pedestrians, surpassing the second-best comparison method by 6.0% and 5.87%, respectively.

Journal article
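The abstract above mentions channel-wise and spatial-wise gate units that re-weight regional convolutional features. The PyTorch sketch below shows a generic channel-wise gate in that spirit (global pooling followed by a small bottleneck producing per-channel weights); it is an illustration only, not the authors' exact gate design, and the feature sizes and reduction ratio are invented for the example.

import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Global average pool -> per-channel gate in [0, 1] -> rescale channels.
        b, c, _, _ = x.shape
        gate = self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)
        return x * gate

features = torch.randn(2, 256, 32, 32)   # hypothetical RoI feature map
print(ChannelGate(256)(features).shape)  # torch.Size([2, 256, 32, 32])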

Ren G, Dai T, Barmpoutis P, Stathaki T et al., 2020, Salient Object Detection Combining a Self-Attention Module and a Feature Pyramid Network, ELECTRONICS, Vol: 9

Journal article

Barmpoutis P, Stathaki T, Dimitropoulos K, Grammalidis N et al., 2020, Early Fire Detection Based on Aerial 360-Degree Sensors, Deep Convolution Neural Networks and Exploitation of Fire Dynamic Textures, REMOTE SENSING, Vol: 12

Journal article

Liu T, Huang J, Dai T, Stathaki T, Ren G et al., 2020, Gated Multi-layer Convolutional Feature Extraction Network for Robust Pedestrian Detection, ICASSP 2020, Publisher: IEEE

Conference paper

Barmpoutis P, Kamperidou V, Stathaki T, 2020, Estimation of Extent of Trees' and Biomass' Infestation of the Suburban Forest of Thessaloniki (Seich Sou) using UAV Imagery and Combining R-CNNs and Multichannel Texture Analysis, 12th International Conference on Machine Vision (ICMV), Publisher: SPIE-INT SOC OPTICAL ENGINEERING, ISSN: 0277-786X

Conference paper

Ulku I, Barmpoutis P, Stathaki T, Akagunduz Eet al., 2020, Comparison of Single Channel Indices for U-Net Based Segmentation of Vegetation in Satellite Images, 12th International Conference on Machine Vision (ICMV), Publisher: SPIE-INT SOC OPTICAL ENGINEERING, ISSN: 0277-786X

Conference paper

Protopapadakis E, Voulodimos A, Doulamis A, Doulamis N, Stathaki T et al., 2019, Automatic crack detection for tunnel inspection using deep learning and heuristic image post-processing, APPLIED INTELLIGENCE, Vol: 49, Pages: 2793-2806, ISSN: 0924-669X

Journal article

Konstantinidis D, Stathaki T, Argyriou V, 2019, Phase amplified correlation for improved sub-pixel motion estimation, IEEE Transactions on Image Processing, Vol: 28, Pages: 3089-3101, ISSN: 1057-7149

Phase correlation (PC) is widely employed by several sub-pixel motion estimation techniques in an attempt to accurately and robustly detect the displacement between two images. To achieve sub-pixel accuracy, these techniques employ interpolation methods and function-fitting approaches on the cross-correlation function derived from the PC core. However, such motion estimation techniques still present a lower bound of accuracy that cannot be overcome. To allow room for further improvements, we propose in this paper the enhancement of the sub-pixel accuracy of motion estimation techniques by employing a completely different approach: the concept of motion magnification. To this end, we propose the novel phase amplified correlation (PAC) that integrates motion magnification between two compared images inside the phase correlation part of frequency-based motion estimation algorithms and thus directly substitutes the PC core. The experimentation on magnetic resonance (MR) images and real video sequences demonstrates the ability of the proposed PAC core to make subtle motions highly distinguishable and improve the sub-pixel accuracy of frequency-based motion estimation techniques.

Journal article
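For context, the sketch below implements the classic phase-correlation core that the paper builds on, using NumPy FFTs: the normalized cross-power spectrum keeps only the phase difference, and the peak of its inverse transform gives the integer translation. The paper's contribution, amplifying the phase difference to magnify subtle motions before estimating the shift, is not reproduced here.

import numpy as np

def phase_correlation_shift(img_a, img_b):
    # Returns the integer translation that maps img_b onto img_a.
    F_a, F_b = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross_power = F_a * np.conj(F_b)
    cross_power /= np.abs(cross_power) + 1e-12       # keep phase only
    correlation = np.real(np.fft.ifft2(cross_power))
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Wrap peak coordinates into signed shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape))

# Illustrative example: shift a random image by (3, -5) and recover the shift.
rng = np.random.default_rng(0)
reference = rng.random((128, 128))
moved = np.roll(reference, shift=(3, -5), axis=(0, 1))
print(phase_correlation_shift(moved, reference))     # expected: (3, -5)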

Barmpoutis P, Stathaki T, Camarinopoulos S, 2019, Skeleton-Based Human Action Recognition through Third-Order Tensor Representation and Spatio-Temporal Analysis, INVENTIONS, Vol: 4

Journal article

Barmpoutis P, Stathaki T, Gonzalez MI, 2019, A Region-based Fusion Scheme for Human Detection in Autonomous Navigation Applications, 45th Annual Conference of the IEEE Industrial Electronics Society (IECON), Publisher: IEEE, Pages: 5566-5571, ISSN: 1553-572X

Conference paper

Barmpoutis P, Stathaki T, Kamperidou V, 2019, MONITORING OF TREES' HEALTH CONDITION USING A UAV EQUIPPED WITH LOW-COST DIGITAL CAMERA, 44th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Publisher: IEEE, Pages: 8291-8295, ISSN: 1520-6149

Conference paper

Liu T, Stathaki T, 2018, Faster R-CNN for Robust Pedestrian Detection using Semantic Segmentation Network, Frontiers in Neurorobotics

Journal article

Voulodimos A, Doulamis N, Bebis G, Stathaki T et al., 2018, Recent Developments in Deep Learning for Engineering Applications, COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE, Vol: 2018, ISSN: 1687-5265

Journal article

Alexiadis D, Mitianoudis N, Stathaki T, 2018, Frequency-Domain Joint Motion and Disparity Estimation Using Steerable Filters, INVENTIONS, Vol: 3

Journal article

Alexiadis DS, Mitianoudis N, Stathaki T, 2018, Multidimensional directional steerable filters - Theory and application to 3D flow estimation, IMAGE AND VISION COMPUTING, Vol: 71, Pages: 38-67, ISSN: 0262-8856

Journal article

Stathaki P, ElMikaty M, 2018, Car detection in aerial images of dense urban areas, IEEE Transactions on Aerospace and Electronic Systems, Vol: 54, Pages: 51-63, ISSN: 0018-9251

With the ever-increasing demand for the analysis and understanding of aerial images in order to remotely recognise targets, this paper introduces a robust system for the detection and localisation of cars in images captured by air vehicles and satellites. The system adopts a sliding-window approach and comprises a window-evaluation and a window-classification sub-system. The performance of the proposed framework was evaluated on the Vaihingen dataset. Results demonstrate its superiority to the state of the art.

Journal article

Liu T, Stathaki T, 2017, Enhanced pedestrian detection using deep learning based semantic image segmentation, Digital Signal Processing (DSP) 2017, Publisher: IEEE

Pedestrian detection and semantic segmentation are highly correlated tasks which can be jointly used for better performance. In this paper, we propose a pedestrian detection method making use of semantic labeling to improve pedestrian detection results. A deep learning based semantic segmentation method is used to pixel-wise label images into 11 common classes. Semantic segmentation results, which encode a high-level image representation, are used as additional feature channels to be integrated with the low-level HOG+LUV features. Some false positives, such as falsely detected pedestrians located on a tree, can be more easily eliminated by making use of the semantic cues. A boosted forest is used for training the integrated feature channels in a cascaded manner for hard negative mining. Experiments on the Caltech-USA pedestrian dataset show improvements in detection accuracy from using the additional semantic cues.

Conference paper
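As a toy illustration of the feature-channel integration described above, the sketch below concatenates stand-in low-level (HOG+LUV) features with per-window semantic-class scores and trains a gradient-boosted classifier from scikit-learn. The random features, the window-level representation and the use of GradientBoostingClassifier in place of the paper's cascaded boosted forest are all assumptions made for the example.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n_windows = 200

# Hypothetical per-window feature vectors (placeholders, not real channels).
hog_luv = rng.random((n_windows, 64))        # low-level HOG + LUV channels
semantic = rng.random((n_windows, 11))       # scores for 11 semantic classes
features = np.hstack([hog_luv, semantic])    # integrated feature channels
labels = rng.integers(0, 2, size=n_windows)  # pedestrian / non-pedestrian

classifier = GradientBoostingClassifier(n_estimators=50).fit(features, labels)
print(classifier.predict_proba(features[:3]))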

ElMikaty M, Stathaki P, 2017, Detection of cars in high-resolution aerial images of complex urban environments, IEEE Transactions on Geoscience and Remote Sensing, Vol: 55, Pages: 5913-5924, ISSN: 0196-2892

Detection of small targets, more specifically cars, in aerial images of urban scenes, has various applications in several domains, such as surveillance, military, remote sensing, and others. This is a tremendously challenging problem, mainly because of the significant interclass similarity among objects in urban environments, e.g., cars and certain types of nontarget objects, such as buildings' roofs and windows. These nontarget objects often possess very similar visual appearance to that of cars making it hard to separate the car and the noncar classes. Accordingly, most past works experienced low precision rates at high recall rates. In this paper, a novel framework is introduced that achieves a higher precision rate at a given recall than the state of the art. The proposed framework adopts a sliding-window approach and it consists of four stages, namely, window evaluation, extraction and encoding of features, classification, and postprocessing. This paper introduces a new way to derive descriptors that encode the local distributions of gradients, colors, and texture. Image descriptors characterize the aforementioned cues using adaptive cell distributions, wherein the distribution of cells within a detection window is a function of its dominant orientation, and hence, neither the rotation of the patch under examination nor the computation of descriptors at different orientations is required. The performance of the proposed framework has been evaluated on the challenging Vaihingen and Overhead Imagery Research data sets. Results demonstrate the superiority of the proposed framework to the state of the art.

Journal article
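The four-stage sliding-window pipeline outlined above (window evaluation, extraction and encoding of features, classification, post-processing) can be summarised schematically as below. Every stage function is a hypothetical placeholder written for illustration; the paper's actual descriptors, classifier and post-processing are not reproduced.

import numpy as np

def sliding_windows(image, window=48, stride=16):
    # Yield top-left corners and patches of a fixed-size detection window.
    height, width = image.shape[:2]
    for y in range(0, height - window + 1, stride):
        for x in range(0, width - window + 1, stride):
            yield (x, y), image[y:y + window, x:x + window]

def is_candidate(patch):
    # Stage 1: cheap window evaluation, e.g. discard near-uniform windows.
    return patch.std() > 0.05

def encode(patch):
    # Stage 2: placeholder for gradient / colour / texture descriptors.
    return patch.reshape(-1)

def classify(feature):
    # Stage 3: placeholder car / non-car score.
    return float(feature.mean())

image = np.random.default_rng(0).random((256, 256))
detections = [(x, y, classify(encode(patch)))
              for (x, y), patch in sliding_windows(image)
              if is_candidate(patch)]
# Stage 4 (post-processing, e.g. non-maximum suppression) would merge
# overlapping detections here.
print(len(detections))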

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
