Publications from our Researchers

Several of our current PhD candidates and fellow researchers at the Data Science Institute have published, or are in the process of publishing, papers presenting their research.


  • Conference paper
    Darlington J, Guo Y, 1994, Constraint Logic Programming in the Sequent Calculus, Proceedings of the 5th International Conference on Logic Programming and Automated Reasoning (LPAR 94), Publisher: Springer-Verlag
  • Journal article
    Darlington J, Guo Y, Köhler M, 1993, Functional programming languages with logical variables: a linear logic view, Programming Language Implementation and Logic Programming, Pages: 201-219
  • Journal article
    Guo Y, 1993, FALCON: Functional and Logic Language with Constraints, A draft prepared for the, Vol: 6
  • Conference paper
    Darlington J, Guo Y, Pull H, et al., 1992, A new perspective on integrating functions and logic languages, Pages: 682-693
  • Journal article
    Darlington J, Guo Y, 1991, Lecture Notes in Computer Science, Vol: 516, Pages: 424-435, ISSN: 0302-9743
  • Conference paper
    Darlington J, Guo Y, Pull H, 1991, Introducing constraint functional logic programming, Publisher: Springer-Verlag, Pages: 20-34
  • Journal article
    Darlington J, Guo Y, Wu Q, 1991, A General Computational Scheme for Constraint Logic Programming, ALPUK, Pages: 56-77
  • Conference paper
    Darlington J, Guo Y, Pull H, 1991, A Design Space for Integrating Declarative Languages, London, UK, Publisher: Springer-Verlag, Pages: 3-19
  • Conference paper
    Darlington J, Guo Y, 1989, The unification of functional and logic languages: towards constraint functional programming, Pages: 162-168
  • Journal article
    Darlington J, Guo YK, 1989, Narrowing and unification in functional programming: an evaluation mechanism for absolute set abstraction, Rewriting Techniques and Applications, Pages: 92-108
  • Journal article
    Curcin V, Guo Y, Gilardoni F, Scientific Workflow Applied to Nano- and Material Sciences
  • Journal article
    Mo Y, Wang S, Dai C, Zhou R, Teng Z, Bai W, Guo Y, et al., Efficient Deep Representation Learning by Adaptive Latent Space Sampling

    Supervised deep learning requires a large amount of training samples with annotations (e.g. label class for classification tasks, pixel- or voxel-wise label maps for segmentation tasks), which are expensive and time-consuming to obtain. During the training of a deep neural network, the annotated samples are fed into the network in a mini-batch way, where they are often regarded as of equal importance. However, some of the samples may become less informative during training, as the magnitude of the gradient starts to vanish for these samples. In the meantime, other samples of higher utility or hardness may be more demanded for the training process to proceed and require more exploitation. To address the challenges of expensive annotations and loss of sample informativeness, here we propose a novel training framework which adaptively selects informative samples that are fed to the training process. The adaptive selection or sampling is performed based on a hardness-aware strategy in the latent space constructed by a generative model. To evaluate the proposed training framework, we perform experiments on three different datasets, including MNIST and CIFAR-10 for the image classification task and a medical image dataset, IVUS, for a biophysical simulation task. On all three datasets, the proposed framework outperforms a random sampling method, which demonstrates the effectiveness of the proposed framework.

  • Journal article
    Amador J, Oehmichen A, Molina-Solana M, Characterizing Political Fake News in Twitter by its Meta-Data

    This article presents a preliminary approach towards characterizing political fake news on Twitter through the analysis of their meta-data. In particular, we focus on more than 1.5M tweets collected on the day of the election of Donald Trump as 45th president of the United States of America. We use the meta-data embedded within those tweets in order to look for differences between tweets containing fake news and tweets not containing them. Specifically, we perform our analysis only on tweets that went viral, by studying proxies for users' exposure to the tweets, by characterizing accounts spreading fake news, and by looking at their polarization. We found significant differences on the distribution of followers, the number of URLs on tweets, and the verification of the users.

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
