Search results

  • Conference paper
Li L, Yang G, Wu F, Wong T, Mohiaddin R, Firmin D, Keegan J, Xu L, Zhuang X et al., 2019,

    Atrial Scar Segmentation via Potential Learning in the Graph-Cut Framework

    , Pages: 152-160, ISSN: 0302-9743

Late Gadolinium Enhancement Magnetic Resonance Imaging (LGE MRI) is emerging as a routine scan for patients with atrial fibrillation (AF). However, due to the low image quality, automating the quantification and analysis of atrial scars is challenging. In this study, we propose a fully automated method based on the graph-cut framework, where the potential of the graph is learned on a surface mesh of the left atrium (LA) using an equidistant projection and a deep neural network (DNN). For validation, we employed 100 datasets with manual delineation. The results showed that the performance of the proposed method improved and converged as the size of the training patches increased, since larger patches provide important structural and texture information for the DNN to learn. The segmentation could be further improved when the contributions from the t-link and n-link are balanced, thanks to the inter-relationship learned by the DNN for the graph-cut algorithm. Compared with existing methods, which mostly require an initialization from manual delineation of the LA or LA wall, our method is fully automated and has demonstrated great potential in tackling this task. The accuracy of quantifying the LA scars using the proposed method was 0.822, and the Dice score was 0.566. These promising results suggest the method can be useful in the diagnosis and prognosis of AF.

  • Conference paper
Dong S, Gao Z, Sun S, Wang X, Li M, Zhang H, Yang G, Liu H, Li S et al., 2019,

    Holistic and deep feature pyramids for saliency detection

Saliency detection has gained increasing research interest in recent years, since many computer vision applications need to derive object attention from images as a first step. Multi-scale awareness is essential for a saliency detector to find thin and small attention regions while keeping high-level semantics. In this paper, we propose a novel holistic and deep feature pyramid neural network architecture that leverages multi-scale semantics in both the feature encoding stage and the saliency region prediction (decoding) stage. In the encoding stage, we exploit a multi-scale, pyramidal hierarchy of feature maps via a densely connected network with variable-size dilated convolutions and pyramid pooling. In the decoding stage, we fuse multi-level feature maps via up-sampling and convolution. In addition, we apply multi-level deep supervision by plugging in loss functions at every feature fusion level. Multi-loss supervision regularizes the weight search space across the different tasks, minimizing over-fitting, and enhances the gradient signal during backpropagation, which enables us to train the network from scratch. This architecture builds inherent multi-level semantic pyramidal feature maps at different scales and enhances the model's capability in the saliency detection task. We validated our approach on six benchmark datasets and compared it with eleven state-of-the-art methods. The results demonstrated the effectiveness of the design, and our approach outperformed the compared methods.

  • Conference paper
Li M, Dong S, Zhang K, Gao Z, Wu X, Zhang H, Yang G, Li S et al., 2019,

    Deep Learning intra-image and inter-images features for Co-saliency detection

In this paper, we propose a novel deep end-to-end co-saliency detection approach to extract common salient objects from an image group. Existing approaches rely heavily on manually designed metrics to characterize co-saliency; however, such metrics are subjective and inflexible, which leads to poor generalization. Furthermore, most approaches separate the extraction of single-image features from that of group-image features, ignoring the correlation between the two that could improve model performance. The proposed approach addresses both problems with a multistage representation that extracts features using a high-spatial-resolution CNN. In addition, we use a modified convolutional autoencoder (CAE) to explore the learnable consistency. Finally, the intra-image contrast and the inter-image consistency are fused through multistage learning to automatically generate the final co-saliency maps for the image group. Experimental results demonstrate the effectiveness and superiority of our approach over state-of-the-art methods.

  • Conference paper
Li M, Zhang W, Yang G, Wang C, Zhang H, Liu H, Zheng W, Li S et al., 2019,

    Recurrent Aggregation Learning for Multi-view Echocardiographic Sequences Segmentation

    , 10th International Workshop on Machine Learning in Medical Imaging (MLMI) / 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Publisher: SPRINGER INTERNATIONAL PUBLISHING AG, Pages: 678-686, ISSN: 0302-9743
  • Journal article
Raschke F, Barrick TR, Jones TL, Yang G, Ye X, Howe FA et al., 2019,

    Tissue-type mapping of gliomas

    , NEUROIMAGE-CLINICAL, Vol: 21, ISSN: 2213-1582
  • Conference paper
Ali A-R, Li J, O'Shea SJ, Yang G, Trappenberg T, Ye X et al., 2019,

    A Deep Learning Based Approach to Skin Lesion Border Extraction With a Novel Edge Detector in Dermoscopy Images

    , International Joint Conference on Neural Networks (IJCNN), Publisher: IEEE, ISSN: 2161-4393
  • Conference paper
Zhang D, Yang G, Zhao S, Zhang Y, Zhang H, Li S et al., 2019,

    Direct Quantification for Coronary Artery Stenosis Using Multiview Learning

    , 10th International Workshop on Machine Learning in Medical Imaging (MLMI) / 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Publisher: SPRINGER INTERNATIONAL PUBLISHING AG, Pages: 449-457, ISSN: 0302-9743

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.

