Imperial College London

Emeritus Professor John Darlington

Faculty of Engineering, Department of Computing

Emeritus Professor of Computing
 
 
 

Contact

 

+44 (0)20 7594 8361 | j.darlington | Website

 
 

Location

 

213 William Penney Laboratory, South Kensington Campus



 

Publications


179 results found

Yadav P, Charalampidis I, Cohen J, Darlington J, Grey F et al., 2018, A Collaborative Citizen Science Platform for Real-Time Volunteer Computing and Games, IEEE Transactions on Computational Social Systems, Vol: 5, Pages: 9-19, ISSN: 2329-924X

Journal article

Darlington J, Field A, Hakim L, 2016, Tackling complexity in high performance computing applications, International Journal of Parallel Programming, Vol: 45, Pages: 402-420, ISSN: 1573-7640

We present a software framework that supports the specification of user-definable configuration options in HPC applications independently of the application code itself. Such options include model parameter values, the selection of numerical algorithm, target platform etc., together with additional constraints that prevent invalid combinations of options from being made. Such constraints, which are capable of describing complex cross-domain dependencies, are often crucial to the correct functioning of the application and are typically either completely absent from the code or hard to recover from it. The framework uses a combination of functional workflows and constraint solvers. Application workflows are built from a combination of functional components: higher-order co-ordination forms and first-order data processing components which can be either concrete or abstract, i.e. without a specified implementation at the outset. A repository provides alternative implementations for these abstract components. A constraint solver, written in Prolog, guides a user in making valid choices of parameters, implementations, machines etc. for any given context. Partial designs can be stored and shared, providing a systematic means of handling application use and maintenance. We describe our methodology and illustrate its application in two classes of application: a data-intensive commercial video transcoding example and a numerically intensive incompressible Navier–Stokes solver.
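As an editorial illustration only (not taken from the paper), the short Python sketch below shows the general idea of constraint-guided configuration: candidate option values are enumerated and cross-domain constraints prune invalid combinations, much as the paper's Prolog solver guides valid choices. All option names and constraints here are hypothetical.

# Minimal sketch, assuming hypothetical options and constraints;
# the paper's framework uses a Prolog constraint solver rather than this enumeration.
from itertools import product

options = {
    "solver":    ["jacobi", "multigrid"],
    "precision": ["single", "double"],
    "platform":  ["cpu_cluster", "gpu_node"],
}

# Hypothetical cross-domain constraints: each returns True for a valid combination.
constraints = [
    lambda c: not (c["solver"] == "multigrid" and c["precision"] == "single"),
    lambda c: not (c["platform"] == "gpu_node" and c["solver"] == "jacobi"),
]

def valid_configurations():
    keys = list(options)
    for values in product(*(options[k] for k in keys)):
        candidate = dict(zip(keys, values))
        if all(check(candidate) for check in constraints):
            yield candidate

for config in valid_configurations():
    print(config)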

Journal article

Rayna T, Striukova L, Darlington J, 2015, Co-creation and user innovation: The role of online 3D printing platforms, Journal of Engineering and Technology Management, Vol: 37, Pages: 90-102, ISSN: 0923-4748

The aim of this article is to investigate the changes brought about by online 3D printing platforms in co-creation and user innovation. As doing so requires a thorough understanding of the level of user involvement in productive processes and a clear view of the nature of co-creative processes, this article provides a ‘prosumption’ framework and a typology of co-creation activities. Then, based on case studies of 22 online 3D printing platforms, a service-based taxonomy of these platforms is constructed. The taxonomy and typology are then matched to investigate the role played by online 3D platforms in regard to the various types of co-creation activities and, consequently, how this impacts user innovation.

Journal article

Rayna T, Darlington J, Striukova L, 2015, Pricing music using personal data: mutually advantageous first-degree price discrimination, Electronic Markets, Vol: 25, Pages: 139-154, ISSN: 1019-6781

Journal article

Cohen J, Filippis I, Woodbridge M, Bauer D, Hong NC, Jackson M, Butcher S, Colling D, Darlington J, Fuchs B, Harvey M et al., 2013, RAPPORT: running scientific high-performance computing applications on the cloud, Philos Transact A Math Phys Eng Sci, Vol: 371, ISSN: 1364-503X

Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.

Journal article

Cohen J, Moxey D, Cantwell C, Burovskiy P, Darlington J, Sherwin SJ et al., 2013, Nekkloud: A Software Environment for High-order Finite Element Analysis on Clusters and Clouds, 2013 IEEE International Conference on Cluster Computing (CLUSTER), ISSN: 1552-5244

Conference paper

Danger R, Joy RC, Darlington J, Curcin V et al., 2012, Access Control for OPM Provenance Graphs, Provenance and Annotation of Data and Processes (IPAW 2012), Vol: 7525, Pages: 233-235, ISSN: 0302-9743

Journal article

Cohen J, North R, Wilkins S, Darlington J, Guo Y, Hoose N, Ma Y, Polak J, Suresh V, Watson P, Bell M, Blythe P, Neasham J, Calleja M, Hayes M, Beresford A, Jones R, Mead I et al., 2009, Creating the message infrastructure, Traffic Engineering and Control, Vol: 50, Pages: 480-483, ISSN: 0041-0683

MESSAGE set out to make use of the ever-increasing power of computer infrastructure to support the capture, processing, archiving, analysis and visualisation of pollution data. This paper describes how the MESSAGE e-Science architecture was developed to facilitate the whole process, from the capture of data through to its use. This covers the difficult process of taking data from sensor nodes, pre-processing it where necessary, and then storing it in an infrastructure capable of making it available for use in a wide range of different applications and processes.

Journal article

Cohen J, Richardson C, Harder U, Martinez Ortuno F, Darlington J et al., 2009, Node-level Architecture Design and Simulation of the MAGOG Grid Middleware, Seventh Australasian Symposium on Grid Computing and e-Research (AusGrid 2009), Publisher: Australian Computer Society, Pages: 57-66, ISSN: 1445-1336

The Middleware for Activating the Global Open Grid (MAGOG) provides a novel solution to the problem of discovering remote resources in a globally interconnected environment such as the Internet, in situations where users want to gain access to such resources to carry out remote computation. While existing Grid middleware enables the building of Grid infrastructures within closed environments where all users are known to each other, or where there is some preexisting relationship between resource providers and users, the true Grid model should enable any users at any location to access remote resources without any prior relationship with the provider. MAGOG is a peer-to-peer based architecture that provides the means to enable discovery of resources in such an environment and to enable the agreement of pricing and Service Level Agreements (SLAs) for the use of these resources. This paper provides a high-level overview of the design of MAGOG and early simulation work that has been carried out to verify this design. It then focuses on the initial design for the middleware client that players in the market will need to deploy in order to become a node in the environment.

Conference paper

Cohen J, Darlington J, 2009, High Performance Utility Resource Deployment and Brokering for Scientific Applications, ASME International Design Engineering Technical Conferences/Computers and Information in Engineering Conference, Publisher: American Society of Mechanical Engineers, Pages: 1115-1124

Conference paper

Guo L, Darlington J, Fuchs B, 2009, Towards an Open, Self-Adaptive and P2P Based e-Market Infrastructure, IEEE International Conference on e-Business Engineering, Publisher: IEEE COMPUTER SOC, Pages: 67-74

Conference paper

Cohen J, Darlington J, 2008, High performance utility resource deployment and brokering for scientific applications, Proceedings of the ASME Design Engineering Technical Conference, Vol: 3, Pages: 1115-1124

As computing power continues to grow and high performance computing use increases, ever bigger scientific experiments and tasks can be carried out. However, the management of the computing power necessary to support these ever growing tasks is getting more and more difficult. Increased power consumption, heat generation and space costs for the larger numbers of resources that are required can make local hosting of resources too expensive. Emergence of utility computing platforms offers a solution. We present our recent work to develop an update to our computational markets environment for support of application deployment and brokering across multiple utility computing environments. We develop a prototype to demonstrate the potential benefits of such an environment and look at the longer term changes in the use of computing that might be enabled by such developments. Copyright © 2008 by ASME.

Conference paper

Curcin V, Ghanem M, Guo Y, Darlington J et al., 2008, Mining adverse drug reactions with e-science workflows, Proceedings of the 4th Cairo International Biomedical Engineering Conference (CIBEC 2008)

Conference paper

Barton G, Abbott J, Chiba N, Huang DW, Huang Y, Krznaric M, Mack Smith J, Saleem A, Sherman BT, Tiwari B, Tomlinson CD, Aitman T, Darlington J, Game L, Sternberg MJE, Butcher S et al., 2008, EMAAS: An extensible grid-based Rich Internet Application for microarray data analysis and management, BMC Bioinformatics, Vol: 9, ISSN: 1471-2105

Background: Microarray experimentation requires the application of complex analysis methods as well as the use of non-trivial computer technologies to manage the resultant large data sets. This, together with the proliferation of tools and techniques for microarray data analysis, makes it very challenging for a laboratory scientist to keep up-to-date with the latest developments in this field. Our aim was to develop a distributed e-support system for microarray data analysis and management. Results: EMAAS (Extensible MicroArray Analysis System) is a multi-user rich internet application (RIA) providing simple, robust access to up-to-date resources for microarray data storage and analysis, combined with integrated tools to optimise real time user support and training. The system leverages the power of distributed computing to perform microarray analyses, and provides seamless access to resources located at various remote facilities. The EMAAS framework allows users to import microarray data from several sources to an underlying database, to pre-process, quality assess and analyse the data, to perform functional analyses, and to track data analysis steps, all through a single easy to use web portal. This interface offers distance support to users both in the form of video tutorials and via live screen feeds using the web conferencing tool EVO. A number of analysis packages, including R-Bioconductor and Affymetrix Power Tools, have been integrated on the server side and are available programmatically through the Postgres-PLR library or on grid compute clusters. Integrated distributed resources include the functional annotation tool DAVID, GeneCards and the microarray data repositories GEO, CELSIUS and MiMiR. EMAAS currently supports analysis of Affymetrix 3' and Exon expression arrays, and the system is extensible to cater for other microarray and transcriptomic platforms. Conclusion: EMAAS enables users to track and perform microarray data management and analysis tasks through…

Journal article

Afzal A, McGough AS, Darlington J, 2008, Capacity planning and scheduling in Grid computing environments, 7th IEEE/ACM International Conference on Grid Computing, Publisher: ELSEVIER, Pages: 404-414, ISSN: 0167-739X

Conference paper

Cohen J, Darlington J, Lee W, 2008, Payment and negotiation for the next generation Grid and Web, 4th UK e-Science All Hands Meeting (AHM 2005), Pages: 239-251

We present a proposal for a next-generation Internet based on chargeable Web Services and Utility Computing realized by a series of open but interacting markets. We demonstrate through the U.K. e-Science project 'A Market for Computational Services' the development of some of the fundamental building blocks for such a Grid computational marketplace. This paper describes the motivation behind this restructuring of the Internet and Web-based activities as a series of markets and how Grid Computing technologies can contribute towards this goal. The paper details the work undertaken at the London e-Science Centre to build a framework to create and support negotiable and chargeable Web Services. Copyright (c) 2007 John Wiley & Sons, Ltd.

Conference paper

Curcin V, Ghanem M, Molokhia M, Guo Y, Darlington J et al., 2008, Mining Adverse Drug Reactions with e-Science Workflows, Cairo International Biomedical Engineering Conference, Publisher: IEEE, Pages: 326-+, ISSN: 2156-6097

Conference paper

McGough S, Lee W, Cohen J, Katsiri E, Darlington J et al., 2007, ICENI, Workflows for e-Science: Scientific Workflows for Grids, Publisher: Springer, ISBN: 978-1-84628-519-6

Performing large-scale science is becoming increasingly complex. Scientists have resorted to the use of computing tools to enable and automate their experimental process. As acceptance of the technology grows, it will become commonplace that computational experiments will involve larger data sets, more computational resources, and scientists (often referred to as e-Scientists) distributed across geographical and organizational boundaries. We see the Grid paradigm as an abstraction to a large collection of distributed heterogeneous resources, including computational, storage, and instrument elements, controlled and shared by different organizations. Grid computing should facilitate the e-Scientist's ability to run applications in a transparent manner.

Book chapter

Afzal A, Darlington J, McGough AS, 2007, QoS-constrained stochastic workflow scheduling in enterprise and scientific grids, The 7th IEEE/ACM International Conference on Grid Computing, Publisher: IEEE Computer Society Press, Pages: 1-8

Conference paper

Patel Y, Darlington J, 2007, Novel stochastic profitable techniques for brokers in a web-service based grid market, IEEE/WIC/ACM International Conference on Web Intelligence, Publisher: IEEE COMPUTER SOC, Pages: 132-140

Conference paper

Darlington J, Afzal A, McGough S, 2007, Capacity Planning and Scheduling in Grid Computing, Future Generation Computer Systems

Journal article

Patel Y, Darlington J, 2007, Average-based workload allocation strategy for QoS-constrained workflow-based jobs in a web service-oriented Grid, International Conference on Advanced Computing and Communications, Publisher: IEEE, Pages: 647-+

Conference paper

Altmann J, Courcoubetis C, Darlington J, Cohen Jet al., 2007, GridEcon - The economic-enhanced next-generation internet, 4th International Workshop on Grid Economics and Business Models (GECON 2007), Publisher: SPRINGER-VERLAG BERLIN, Pages: 188-+, ISSN: 0302-9743

Conference paper

Patel Y, Darlington J, 2006, Average-based workload allocation strategy for QoS-constrained workflow-based jobs in a web service-oriented grid, Proceedings - 2006 14th International Conference on Advanced Computing and Communications, ADCOM 2006, Pages: 664-669

The success of web services has influenced the way in which Grid applications are being written. Web services are increasingly used as a means to realize service-oriented distributed computing. Grid users often submit their applications in the form of workflows with certain Quality of Service (QoS) requirements imposed on the workflows. These workflows detail the composition of web services and the level of service required from the Grid. This paper addresses workload allocation techniques for Grid workflows. We model a web service as a G/G/1 queue and minimize failures (QoS requirement violations) of jobs by solving a mixed-integer non-linear program (MINLP). The novel approach is evaluated through an experimental simulation and the results confirm that the proposed workload allocation strategy performs considerably better in terms of satisfying QoS requirements of Grid workflows than scheduling algorithms that do not employ such workload allocation techniques.
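As a purely illustrative sketch (not the paper's MINLP formulation), the Python fragment below simulates a single web service as a G/G/1 queue and estimates the fraction of jobs whose response time exceeds a QoS deadline; the arrival and service distributions, rates and deadline are assumed values chosen for the example. A workload allocation scheme would then try to keep such violation estimates low across services, which is the role the MINLP plays in the paper.

# Minimal sketch, assuming exponential arrivals/service and an invented deadline;
# it only illustrates how QoS violations can be estimated for one service.
import random

def gg1_violation_rate(arrival_rate, service_rate, deadline, n_jobs=100000, seed=0):
    rng = random.Random(seed)
    t_arrival = 0.0   # arrival time of the current job
    t_free = 0.0      # time at which the server next becomes free
    violations = 0
    for _ in range(n_jobs):
        t_arrival += rng.expovariate(arrival_rate)   # inter-arrival time (exponential here)
        service = rng.expovariate(service_rate)      # service time (exponential here)
        start = max(t_arrival, t_free)               # wait if the server is busy
        t_free = start + service
        if t_free - t_arrival > deadline:            # response time exceeds the QoS deadline
            violations += 1
    return violations / n_jobs

# Example: a half-loaded service checked against a deadline of 2.0 time units.
print(gg1_violation_rate(arrival_rate=0.5, service_rate=1.0, deadline=2.0))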

Conference paper

McGough S, Afzal A, Darlington J, 2006, Capacity Planning and Stochastic Scheduling in Large-Scale Grids, Proceedings of the Second IEEE International Conference on e-Science and Grid Computing

Conference paper

McBride D, Krznaric M, van der Aa O, Aggarwal M, Colling D, Darlington Jet al., 2006, Running a Production Grid Site at the London e-Science Centre, Second IEEE International Conference on e-Science and Grid Computing (e-Science'06), Publisher: IEEE, Pages: 153-153

This paper describes how the London e-Science Centre cluster MARS, a production 400+ Opteron CPU cluster, was integrated into the production Large Hadron Collider Compute Grid. It describes the practical issues that we encountered when deploying and maintaining this system, and details the techniques that were applied to resolve them. Finally, we provide a set of recommendations based on our experiences for grid software development in general that we believe would make the technology more accessible.

Conference paper

Patel Y, McGough S, Darlington J, 2006, Grid Workflow Scheduling in WOSE, UK e-Science All Hands Meeting 2006, Pages: 566-573

The success of web services has influenced the way in which grid applications are being written. Grid users seek to use combinations of web services to perform the overall task they need to achieve. In general this can be seen as a set of services with a workflow document describing how these services should be combined. The user may also have certain constraints on the workflow operations, such as execution time or cost to the user, specified in the form of a Quality of Service (QoS) document. These workflows need to be mapped to a subset of the Grid services taking the QoS and state of the Grid into account: service availability and performance. We propose in this paper an approach for generating constraint equations describing the workflow, the QoS requirements and the state of the Grid. This set of equations may be solved using Integer Linear Programming (ILP), which is the traditional method. We further develop a 2-stage stochastic ILP which is capable of dealing with the volatile nature of the Grid and adapting the selection of the services during the life of the workflow. We present experimental results comparing our approaches, showing that the 2-stage stochastic programming approach performs consistently better than other traditional approaches. This work forms the workflow scheduling service within WOSE (Workflow Optimisation Services for e-Science Applications), which is a collaborative work between Imperial College, Cardiff University and Daresbury Laboratory.
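As a rough, hypothetical illustration of the kind of assignment problem such constraint equations describe (the tasks, services, costs, times and deadline below are invented, and a brute-force search stands in for the ILP or two-stage stochastic solver used in WOSE):

# Minimal sketch, assuming a sequential three-task workflow and made-up candidate services;
# it picks the cheapest task-to-service mapping that meets an overall QoS deadline.
from itertools import product

# Candidate services per workflow task: (service name, cost, expected execution time).
candidates = {
    "stage_in":  [("svcA", 5.0, 3.0), ("svcB", 2.0, 6.0)],
    "compute":   [("svcC", 1.0, 2.0), ("svcD", 4.0, 1.0)],
    "stage_out": [("svcE", 3.0, 4.0)],
}
DEADLINE = 10.0   # QoS constraint on total (sequential) execution time

best = None
for choice in product(*candidates.values()):
    total_cost = sum(cost for _, cost, _ in choice)
    total_time = sum(time for _, _, time in choice)
    if total_time <= DEADLINE and (best is None or total_cost < best[0]):
        best = (total_cost, {task: name for task, (name, _, _) in zip(candidates, choice)})

print(best)   # (minimum cost, {task: chosen service}) or None if the deadline cannot be met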

Conference paper

Darlington J, Cohen J, Lee WH, 2006, An architecture for a next-generation Internet based on web services and utility computing, 15th IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE '06), Pages: 169-174, ISSN: 1524-4547

Conference paper

Patel Y, Darlington J, 2006, A novel approach to workload allocation of QoS-constrained workflow-based jobs in a utility grid, e-Science 2006 - Second IEEE International Conference on e-Science and Grid Computing

The Grid can be seen as a collection of services each of which performs some functionality. Grid users often submit their applications in the form of workflows with certain Quality of Service (QoS) requirements imposed on the workflows. These workflows detail the composition of Grid services and the level of service required from the Grid. This paper addresses workload allocation techniques for Grid workflows. We model a Grid service as a G/G/1 queue and minimise failures (QoS requirement violations) of jobs by solving a mixed-integer non-linear program (MINLP). The novel approach is evaluated through an experimental simulation and the results confirm that the proposed workload allocation strategy performs considerably better in terms of satisfying QoS requirements of Grid workflows than scheduling algorithms that do not employ such workload allocation techniques. © 2006 IEEE.

Conference paper

Patel Y, Darlington J, 2006, A novel stochastic algorithm for scheduling workflows with QoS guarantees in a web service-oriented grid, 2nd IASTED International Conference on Computational Intelligence, Publisher: ACTA PRESS ANAHEIM, Pages: 428-+

Conference paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
