Publications from our Researchers

Several of our current PhD candidates and fellow researchers at the Data Science Institute have published, or are in the process of publishing, papers presenting their research.

  • Conference paper
    Darlington J, Guo Y, 1994, Reduction as Deduction, Proceedings of the 6th International Workshop on Implementation of Functional Languages, Norwich, UK
  • Conference paper
    Darlington J, Guo Y, 1994, Formalising Actors in Linear Logic, Proceedings of the International Conference on Object-Oriented Information Systems (OOIS'94), Publisher: Springer-Verlag
  • Journal article
    Guo Y, 1993, FALCON: Functional and Logic Language with Constraints, A draft prepared for the, Vol: 6
  • Journal article
    Darlington J, Guo Y, Köhler M, 1993, Functional programming languages with logical variables: a linear logic view, Programming Language Implementation and Logic Programming, Pages: 201-219
  • Conference paper
    Darlington J, Guo Y, Pull H, et al., 1992, A new perspective on integrating functions and logic languages, Pages: 682-693
  • Journal article
    Darlington J, Guo Y, 1991, Constrained Equational Deduction, Lecture Notes in Computer Science, Vol: 516, Pages: 424-435, ISSN: 0302-9743
  • Journal article
    Darlington J, Guo Y, Wu Q, 1991, A General Computational Scheme for Constraint Logic Programming, ALPUK, Pages: 56-77
  • Conference paper
    Darlington J, Guo Y, Pull H, 1991, A Design Space for Integrating Declarative Languages, London, UK, Publisher: Springer-Verlag, Pages: 3-19
  • Conference paper
    Darlington J, Guo Y, Pull H, 1991, Introducing constraint functional logic programming, Publisher: Springer-Verlag, Pages: 20-34
  • Conference paper
    Darlington J, Guo Y, 1989, The unification of functional and logic languages - towards constraint functional programming, Pages: 162-168
  • Journal article
    Darlington J, Guo YK, 1989, Narrowing and unification in functional programming - an evaluation mechanism for absolute set abstraction, Rewriting Techniques and Applications, Pages: 92-108
  • Journal article
    Curcin V, Guo Y, Gilardoni F, Scientific Workflow Applied to Nano- and Material Sciences
  • Journal article
    Amador J, Oehmichen A, Molina-Solana M, Characterizing Political Fake News in Twitter by its Meta-Data
    This article presents a preliminary approach towards characterizing political fake news on Twitter through the analysis of their meta-data. In particular, we focus on more than 1.5M tweets collected on the day of the election of Donald Trump as 45th president of the United States of America. We use the meta-data embedded within those tweets in order to look for differences between tweets containing fake news and tweets not containing them. Specifically, we perform our analysis only on tweets that went viral, by studying proxies for users' exposure to the tweets, by characterizing accounts spreading fake news, and by looking at their polarization. We found significant differences on the distribution of followers, the number of URLs on tweets, and the verification of the users.
    (A sketch of this kind of meta-data comparison appears after this list.)

  • Journal article
    Creswell A, Bharath AA, Task Specific Adversarial Cost Function
    The cost function used to train a generative model should fit the purpose of the model. If the model is intended for tasks such as generating perceptually correct samples, it is beneficial to maximise the likelihood of a sample drawn from the model, Q, coming from the same distribution as the training data, P. This is equivalent to minimising the Kullback-Leibler (KL) distance, KL[Q||P]. However, if the model is intended for tasks such as retrieval or classification, it is beneficial to maximise the likelihood that a sample drawn from the training data is captured by the model, equivalent to minimising KL[P||Q]. The cost function used in adversarial training optimises the Jensen-Shannon entropy, which can be seen as an even interpolation between KL[Q||P] and KL[P||Q]. Here, we propose an alternative adversarial cost function which allows easy tuning of the model for either task. Our task specific cost function is evaluated on a dataset of hand-written characters in the following tasks: generation, retrieval and one-shot learning.
    (The Jensen-Shannon divergence referred to here is written out after this list.)
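A minimal illustration relating to the fake-news study above: its abstract describes comparing meta-data distributions (for example, follower counts) between tweets containing fake news and tweets that do not. The sketch below shows one way such a comparison could look. The column names (followers_count, contains_fake_news), the toy data, and the choice of a Mann-Whitney U test are assumptions for illustration, not the authors' actual pipeline.

    # Hypothetical sketch: test whether follower counts differ between
    # tweets flagged as containing fake news and the remaining tweets.
    # Column names and the choice of test are illustrative assumptions.
    import pandas as pd
    from scipy.stats import mannwhitneyu

    def compare_follower_distributions(tweets: pd.DataFrame) -> float:
        """Return the p-value of a two-sided Mann-Whitney U test comparing
        follower counts of fake-news tweets against the rest."""
        fake = tweets.loc[tweets["contains_fake_news"], "followers_count"]
        rest = tweets.loc[~tweets["contains_fake_news"], "followers_count"]
        _, p_value = mannwhitneyu(fake, rest, alternative="two-sided")
        return p_value

    # Toy data standing in for a real tweet collection.
    tweets = pd.DataFrame({
        "followers_count": [120, 45, 3000, 88, 15000, 230, 9, 512],
        "contains_fake_news": [True, False, True, False, True, False, False, True],
    })
    print(compare_follower_distributions(tweets))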
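For the adversarial cost function paper, the quantity its abstract refers to has the standard textbook definition below. Its symmetry in P and Q, via the equal mixture M, is what the abstract informally describes as an even interpolation between KL[Q||P] and KL[P||Q]:

    \mathrm{JSD}(P \,\|\, Q) = \tfrac{1}{2}\,\mathrm{KL}\big[P \,\|\, M\big] + \tfrac{1}{2}\,\mathrm{KL}\big[Q \,\|\, M\big],
    \qquad M = \tfrac{1}{2}(P + Q),
    \qquad \mathrm{KL}\big[P \,\|\, Q\big] = \sum_{x} P(x)\,\log\frac{P(x)}{Q(x)}.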

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
