Type family applications in Haskell must be fully saturated. This means that all type-level functions have to be first-order, leading to code that is both messy and long-winded. In this paper we detail an extension to GHC that removes this restriction. We augment Haskell's existing type arrow, →, with an unmatchable arrow, ↠, that supports partial application of type families without compromising soundness. A soundness proof is provided. We show how the techniques described can lead to substantial code-size reduction (circa 80%) in the type-level logic of commonly used type-level libraries whilst simultaneously improving code quality and readability.
Kiss C, Field T, Eisenbach S, et al., 2019, Fork of GHC implementing -XUnsaturatedTypeFamilies for the paper 'Higher-Order Type-Level Programming in Haskell', Publisher: Association for Computing Machinery (ACM)
Stawinoga N, Field AJ, 2018, Predictable thread coarsening, ACM Transactions on Architecture and Code Optimization, Vol: 15, ISSN: 1544-3973
Thread coarsening on GPUs combines the work of several threads into one. We show how thread coarsening can be implemented as a fully automated compile-time optimisation which estimates the optimal coarsening factor based on a low-cost, approximate static analysis of cache line re-use and an occupancy prediction model. We evaluate two coarsening strategies on three different NVidia GPU architectures. For NVidia reduction kernels we achieve a maximum speedup of 5.08x and for the Rodinia benchmarks we achieve a mean speedup of 1.30x over 8 of 19 kernels that were determined safe to coarsen.
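The transformation described in the abstract can be illustrated with a minimal sketch (not the paper's implementation, and in plain Python rather than GPU code): each "thread" of a coarsened kernel takes over the work of `c` adjacent threads of the original.

```python
# Illustrative sketch of thread coarsening: merging the work of c adjacent
# threads into one reduces the number of threads launched while preserving
# the result. The kernel bodies here are hypothetical placeholders.

def kernel(tid, data, out):
    # original kernel: one element per thread
    out[tid] = data[tid] * 2

def coarsened_kernel(tid, data, out, c):
    # coarsened kernel: each surviving thread does the work of c originals
    for i in range(c):
        idx = tid * c + i
        out[idx] = data[idx] * 2

def launch(n, data, c=1):
    out = [0] * n
    if c == 1:
        for tid in range(n):       # n threads
            kernel(tid, data, out)
    else:
        for tid in range(n // c):  # n/c threads after coarsening by factor c
            coarsened_kernel(tid, data, out, c)
    return out

data = list(range(8))
assert launch(8, data) == launch(8, data, c=4)  # same result, fewer "threads"
```

On a real GPU the pay-off comes from reduced scheduling overhead and increased data re-use per thread, at the cost of lower occupancy; the paper's contribution is choosing `c` statically.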
Chatley R, Field AJ, 2017, Lean learning: applying lean techniques to improve software engineering education, ICSE-SEET, Publisher: IEEE
Building a programme of education that reflects and keeps pace with industrial practice is difficult. We often hear of a skills shortage in the software industry, and the gap between what people are taught in university and the "real world". This paper is a case study showing how we have developed a programme at Imperial College London that bridges this gap, providing students with relevant skills for industrial software engineering careers. We give details of the structure and evolution of the programme, which is centred on the tools, techniques and issues that feature in the everyday life of a professional developer working in a modern team. We also show how aligning our teaching methods with the principles of lean software delivery has enabled us to provide sustained high quality learning experiences. The contributions of this paper take the form of lessons learnt, which may be seen as recommendations for others looking to evolve their own teaching structures and methods.
Darlington J, Field A, Hakim L, 2016, Tackling complexity in high performance computing applications, International Journal of Parallel Programming, Vol: 45, Pages: 402-420, ISSN: 1573-7640
We present a software framework that supports the specification of user-definable configuration options in HPC applications independently of the application code itself. Such options include model parameter values, the selection of numerical algorithm, target platform etc., and additional constraints that prevent invalid combinations of options from being made. Such constraints, which are capable of describing complex cross-domain dependencies, are often crucial to the correct functioning of the application and are typically either completely absent from the code or hard to recover from it. The framework uses a combination of functional workflows and constraint solvers. Application workflows are built from a combination of functional components: higher-order co-ordination forms and first-order data processing components which can be either concrete or abstract, i.e. without a specified implementation at the outset. A repository provides alternative implementations for these abstract components. A constraint solver, written in Prolog, guides a user in making valid choices of parameters, implementations, machines etc. for any given context. Partial designs can be stored and shared, providing a systematic means of handling application use and maintenance. We describe our methodology and illustrate its application in two classes of application: a data-intensive commercial video transcoding example and a numerically intensive incompressible Navier–Stokes solver.
Amaral JN, Field AJ, 2014, A special issue from the international conference on performance engineering 2013, CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, Vol: 26, Pages: 1947-1948, ISSN: 1532-0626
Field AJ, 2014, The January Haskell Tests, Publisher: Imperial College London
The “January Tests” are a series of Haskell programming tests taken by first-year Computing and Joint Maths and Computing undergraduate students at Imperial College London.
Kreft J-U, Plugge CM, Grimm V, et al., 2013, Mighty small: Observing and modeling individual microbes becomes big science, PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, Vol: 110, Pages: 18027-18028, ISSN: 0027-8424
Birch D, Liang H, Ko J, et al., 2013, Multidisciplinary Engineering Models: Methodology and Case Study in Spreadsheet Analytics, European Spreadsheet Risks Interest Group 14th Annual Conference (EuSpRIG 2013), Publisher: EuSpRIG, Pages: 1-12
Jones GL, Harrison PG, Field AJ, et al., 2012, Fluid Queue Models of Renewable Energy Storage, 'VALUETOOLS', IEEE, 2012, Publisher: IEEE, Pages: 224-225
In this extended abstract we introduce an approximation algorithm for the evaluation of networks of fluid queues. Such models can be used to describe the generation and storage of renewable energy. We discuss how our algorithm would be applied to an example where the approximation performs very well, and note a modification to the model which would result in a poorer approximation.
Baltas N, Field AJ, 2012, Continuous Performance Testing in Virtual Time, Ninth International Conference on Quantitative Evaluation of Systems (QEST 2012), Publisher: IEEE Computer Society, Pages: 13-22
In this paper we show how program code and performance models can be made to cooperate seamlessly to support continuous software performance testing throughout the development lifecycle. We achieve this by extending our existing VEX tool for executing programs in virtual time so that events that occur during normal execution and those that occur during the simulation of a performance model can be scheduled on a single global virtual time line. The execution time of an incomplete component of an application is thus estimated by a performance model, whilst that of existing code is measured by instrumentation that is added dynamically at program load time. A key challenge is to be able to map some or all of the resources in a performance model to the real resources of the host platform on which the application is running. We outline a continuous performance engineering methodology that exploits our unified framework and illustrate the principles involved by way of a simple Java application development case study.
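The "single global virtual time line" idea above can be sketched with an ordinary priority queue (a hypothetical toy, not the VEX implementation, and in Python rather than Java): measured durations from instrumented code and sampled durations from a performance model are merged and consumed in release order against one shared virtual clock.

```python
import heapq

def simulate(events):
    # events: (release_vt, duration, source) tuples; `source` records whether
    # the duration came from instrumentation ("measured") or from a
    # performance model ("modelled") -- both advance the same virtual clock.
    heapq.heapify(events)
    clock = 0.0
    order = []
    while events:
        release, duration, source = heapq.heappop(events)
        clock = max(clock, release) + duration  # event cannot start before release
        order.append(source)
    return clock, order

vt, order = simulate([(0.0, 2.0, "measured"), (1.0, 3.0, "modelled")])
# vt == 5.0: the modelled event starts at virtual time 2.0 and runs for 3.0
```

The point of the unified queue is that modelled and measured events interleave transparently, so an unfinished component can be swapped for real code without changing the rest of the test harness.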
Lange M, Field T, 2012, Accelerating agent-based ecosystem models using the cell broadband engine, Berlin, Heidelberg, Publisher: Springer-Verlag, Pages: 1-12
Jones W, Field T, Allwood T, 2012, Deconstraining DSLs, New York, NY, USA, ICFP 2012, Publisher: ACM, Pages: 299-310
Sinerchia M, Field AJ, Woods JD, et al., 2011, Using an individual-based model with four trophic levels to model the effect of predation and competition on squid recruitment, ICES Journal of Marine Science
The Lagrangian Ensemble recruitment model (LERM) is the first prognostic model of fisheries recruitment based upon individuals. It incorporates five functional groups: phytoplankton (diatoms), herbivorous zooplankton (copepods), carnivorous zooplankton (squid paralarvae), and two top predators. Physiology and behaviour are described by equations derived from the literature and based on reproducible laboratory experiments. LERM is built using the Lagrangian Ensemble metamodel, in which the demography and biofeedback of each dynamic population are diagnostic properties, emerging from the life histories of individuals. The response of the plankton ecosystem and squid recruitment to different scenarios of exogenous forcing is investigated. Simulations were run at 41°N 27°W (Azores) under a stationary annual cycle of atmospheric forcing. The ecosystem adjusts to a stable attractor for each scenario. The emergent properties of each attractor are investigated, with a focus on predation, competition for food, and spawning magnitude. Annual recruitment is a complex emergent property dependent on several factors, including food availability, predation, competition, and post-hatching growth rate, consistent with Hjort's critical period theory, which relates recruitment to predation mortality, itself dependent on growth rate and hence food availability. The model provides a useful step towards linking the small-scale processes governing the life histories of larvae to fisheries on the large scale.
Baltas N, Field AJ, 2011, Software Performance Prediction with a Time Scaling Scheduling Profiler, 2011 IEEE 19th International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems (MASCOTS'11), Pages: 107-116, ISSN: 1526-7539
Cheadle AM, Field AJ, Nistrom-Persson J, 2011, Non-stop Java, Editors: Cai, Publisher: Concept Press
We investigate how a power-save mode affects the battery life of a device subject to stochastically determined charging and discharging periods. We use a multi-regime fluid queue, imposing a threshold at some level. When the power level falls below the threshold, a power-save mode is entered and the rate of discharge decreased. An expression for the Laplace transform of the probability density function of the battery life is found and inverted numerically in particular instances. We show that battery life can be significantly improved by the introduction of the power-saving threshold.
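The qualitative effect of the threshold can be checked with a small Monte Carlo sketch of a two-regime fluid queue. All numerical values below (rates, threshold, time step) are hypothetical and chosen only for illustration; the paper itself works analytically via the Laplace transform rather than by simulation.

```python
import random

def battery_life(threshold, seed=1, dt=0.01, level0=10.0,
                 charge_rate=1.0, discharge_rate=2.0,
                 slow_factor=0.5, horizon=10_000.0):
    # Discrete-time sketch of the fluid model: the device alternates between
    # exponentially distributed charging and discharging periods; below
    # `threshold` the power-save mode scales the discharge rate by
    # `slow_factor`. Returns the (approximate) time at which the level
    # first hits zero on this sample path.
    rng = random.Random(seed)
    level, t = level0, 0.0
    charging = False
    next_switch = rng.expovariate(1.0)
    while level > 0 and t < horizon:
        if t >= next_switch:
            charging = not charging
            next_switch = t + rng.expovariate(1.0)
        if charging:
            level = min(level0, level + charge_rate * dt)
        else:
            rate = discharge_rate * (slow_factor if level < threshold else 1.0)
            level -= rate * dt
        t += dt
    return t

# On a common sample path (same seed), the power-save threshold can only
# extend battery life, since the level never drains faster than without it.
assert battery_life(threshold=5.0) >= battery_life(threshold=0.0) > 0
```

The coupling argument in the final comment is what makes the comparison fair: both runs see identical regime-switch times, so any difference in lifetime is due solely to the threshold.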
Field AJ, Harrison PG, 2010, BUSY PERIODS IN FLUID QUEUES WITH MULTIPLE EMPTYING INPUT STATES, JOURNAL OF APPLIED PROBABILITY, Vol: 47, Pages: 474-497, ISSN: 0021-9002
2009, Proceedings of MASCOTS 2009, the 17th Annual IEEE/ACM International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems, Imperial College London, Publisher: IEEE Computer Society Press
Message from the Programme Committee Chairs

On behalf of the Organising and Programme Committee, it is our pleasure to present to you the proceedings of MASCOTS 2009, the IEEE Computer Society's 17th International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunications Systems, which will be held in London. Our society is ever more dependent on the reliable and high-performance operation of complex combinations of computer and communication technologies. As existing technologies evolve and new ones emerge, it remains critical to understand, predict and enhance system reliability and performance using stochastic models, simulation and analytical methods. Experimental studies are also needed to parameterise, calibrate and validate models against real-world observations. These are precisely the themes of the MASCOTS conference series. We are very pleased that this year's conference attracted 162 submissions from all over the world, many of which were of the highest quality. Such a large number of submissions implied a correspondingly high reviewing load, and we are very grateful to the Programme Committee members and many external reviewers who provided between three and six reviews for each submission.

Based on the critical reviews of the reviewers and discussions in the Programme Committee, we accepted 32 extended papers of the highest quality, 22 high-quality regular papers and 21 posters. The accepted submissions were from 25 countries spanning five continents, included submissions with industrial co-authors from 7 different companies, and covered a diverse set of research areas (e.g. workload modelling, load management and scheduling, performance optimisation and reliability/availability modelling) and diverse application contexts (e.g. parallel and multicore systems, wireless networks and storage systems). The conference programme has been organised broadly to reflect these themes and includes invited keynote talks.
Alpay E, Cutler PS, Eisenbach S, et al., 2009, Changing the Marks Based Culture of Learning through Peer Assisted Tutorials, Washington, DC, 2009 ASEE Annual Conference, Publisher: Taylor and Francis Group, Pages: 17-32
We describe and evaluate an approach to student learning that aims to instil a culture of formative assessment based on peer-assisted learning. The idea is for suitably qualified undergraduates to assist in the running of weekly first-year tutorials. They mark submitted work, provide written and verbal feedback and lead problem solving discussions during tutorials. However, contrary to normal practice, the marks they award do not contribute to the students' year total; all tutorial work becomes essentially voluntary. We report results from a pilot implementation of the scheme over a 12-month period in an engineering department at a leading academic institution. The scheme was such that a comparative and triangulated assessment was possible amongst the students and tutor team. Results show no discernible degradation in student attendance, submission rates and performance in either the weekly exercises or end of year examinations. Important benefits to the peer tutors are also found.
Alpay E, Cutler PS, Eisenbach S, et al., 2009, Changing the Marks Based Culture of Learning through Peer Assisted Tutorials, 2009 ASEE Annual Conference & Exposition, Publisher: American Society for Engineering Education, Pages: 1-24
We describe and evaluate an approach to student learning that aims to instil a culture of formative assessment based on peer-assisted self learning, instead of a marks-based culture in which learning effort is rewarded with marks that contribute to the student's degree. The idea is for suitably qualified third- and fourth-year undergraduates to assist in the running of weekly first-year tutorials. They mark submitted work, provide written and verbal feedback on the students' performance and lead problem solving discussions during tutorials. However, contrary to normal practice, the marks they award do not contribute to the students' year total; all tutorial work becomes essentially voluntary. We report results from a pilot implementation of the scheme over a 12-month period in an engineering department at a leading academic institution. The set-up of the scheme was such that a comparative and triangulated assessment was possible amongst the students and tutor team. There was no discernible degradation in student attendance, submission rates and performance in either the weekly exercises or end of year examinations. Further analysis demonstrates that this type of peer-assisted learning improves some key aspects of student learning, and provides important benefits to the senior peers in terms of their own personal development. We conclude that the scheme provides an excellent alternative to traditional learning methods whilst substantially reducing the investment in academic staff time.
Howes L, Lokhmotov A, Kelly P, et al., 2008, Optimising component composition using indexed dependence metadata, First International Workshop on New Frontiers in High-performance and Hardware-aware Computing (HipHaC), Publisher: Karlsruhe University Press (KIT Scientific Publishing), Pages: 39-46
This paper explores the use of dependence metadata for optimising composition in component-based parallel programs. The idea is for each component to carry additional information about how points in its iteration space map to memory locations associated with its input and output data structures. When two components are composed this information can be used to implement optimisations that would otherwise require expensive analysis of the components' code at the time of composition. This dependence metadata facilitates a number of cross-component optimisations -- in this paper we focus on loop fusion and array contraction. We describe a prototype framework, based on the CLooG loop generator tool, that embodies these ideas and report experimental performance results for three non-trivial parallel benchmarks. Our results show execution time reductions of up to 50% using the proposed framework on an eight-core Intel Xeon system.
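The two cross-component optimisations named above can be shown in a minimal sketch (plain Python for illustration, not the paper's CLooG-based framework): fusing two componentwise loops into one, then contracting the intermediate array to a scalar temporary.

```python
# Two hypothetical components composed in sequence, before and after
# loop fusion and array contraction. The element-wise operations are
# placeholders for arbitrary component bodies.

def unfused(xs):
    # two separate loops communicating via a full intermediate array
    tmp = [x * 2 for x in xs]        # component 1 writes tmp
    return [t + 1 for t in tmp]      # component 2 reads tmp

def fused_contracted(xs):
    # loop fusion merges the two loops into one; array contraction then
    # replaces the intermediate array with a per-iteration scalar
    out = []
    for x in xs:
        t = x * 2                    # contracted: a scalar, not an array
        out.append(t + 1)
    return out

assert unfused([1, 2, 3]) == fused_contracted([1, 2, 3]) == [3, 5, 7]
```

The dependence metadata in the paper is what licenses this rewrite without analysing the component bodies: it establishes that iteration `i` of the second loop reads only what iteration `i` of the first loop wrote.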
Cheadle AM, Field AJ, Nystroem-Persson J, 2008, A Method Specialisation and Virtualised Execution Environment for Java, 4th International Conference on Virtual Execution Environments, Publisher: ASSOC COMPUTING MACHINERY, Pages: 51-60
Field T, Harrison P, 2007, Approximate Analysis of a Network of Fluid Queues, Workshop on Mathematical Performance Modeling and Analysis (MAMA), June 2007, Publisher: ACM
Fluid models have for some time been used to approximate stochastic networks with discrete state. These range from traditional 'heavy traffic' approximations to the recent advances in bio-chemical system models. Here we use an approximate compositional method to analyse a simple feedforward network of fluid queues which comprises both probabilistic branching and superposition. This extends our earlier work that showed the approximation to yield excellent results for a linear chain of fluid queues. The results are compared with those from a simulation model of the same system. The compositional approach is shown to yield good approximations, deteriorating for nodes with high load when there is correlation between their immediate inputs. This correlation arises when a common set of external sources feeds more than one queue, directly or indirectly.
Harrison P, Field T, 2007, An Approximate Compositional Approach to the Analysis of Fluid Queue Networks, To appear, IFIP WG 7.3 International Symposium on Computer Performance, Modeling, Measurements, and Evaluation, Publisher: Elsevier, Pages: 1137-1152, ISSN: 0166-5316
Fluid models have for some time been used to approximate stochastic networks with discrete state. These range from traditional 'heavy traffic' approximations to the recent advances in bio-chemical system models. Here we present a simple approximate compositional method for analysing a network of fluid queues with Markov-modulated input processes at equilibrium. The idea is to approximate the on/off process at the output of a queue by an $n$-state Markov chain that modulates its rate. This chain is parameterised by matching the moments of the resulting process with those of the busy period distribution of the queue. This process is then used, in turn, as a separate Markov-modulated on/off process that feeds downstream queue(s). The moments of the busy period are derived from an exact analytical model. Approximations using two- and three-state intermediate Markov processes are validated with respect to an exact model of a tandem pair of fluid queues --- a generalisation of the single queue model. The analytical models used are rather simpler and more accessible, albeit less general, than previously published models, and are also included. The approximation method is applied to various fluid queue networks and the results are validated with respect to simulation. The results show the three-state model to yield excellent approximations for mean fluid levels, even under high load.
Gulpinar N, Harder U, Harrison P, et al., 2007, Mean-variance performance optimization of response time in a tandem router network with batch arrivals, Publisher: Springer Verlag, Pages: 203-216, ISSN: 1386-7857
The end-to-end performance of a simple wireless router network with batch arrivals is optimized in an M/G/1 queue-based, analytical model. The optimization minimizes both the mean and variance of the transmission delay (or 'response time'), subject to an upper limit on the rate of losses and finite capacity queueing and recovery buffers. Losses may be due to either full buffers or corrupted data. The queueing model is also extended to higher order moments beyond the mean and variance of the response time. The trade-off between mean and variance of response time is assessed and the optimal ratio of arrival-buffer size to recovery-buffer size is determined, which is a critical quantity, affecting both loss rate and transmission time. Graphs illustrate performance in the near-optimal region of the critical parameters. Losses at a full buffer are inferred by a time-out whereas corrupted data is detected immediately on receipt of a packet at a router, causing an N-ACK to be sent upstream. Recovery buffers hold successfully transmitted packets so that on receiving an N-ACK, the packet, if present, can be retransmitted, avoiding an expensive resend from source. The impact of the retransmission probability is investigated similarly: too high a value leads to congestion and so higher response times, too low and packets are lost forever.
Hinsley W, Field T, Woods J, 2007, Creating Individual Based Models of the Plankton Ecosystem, International Conference on Computational Science, Publisher: Springer-Verlag, LNCS, Pages: 111-118, ISSN: 0302-9743
Falconer H, Kelly PHJ, et al., 2007, A declarative framework for analysis and optimization, Compiler Construction (CC07), Publisher: Springer LNCS