Imperial College London

Mr Lloyd Kamara

Faculty of Engineering, Department of Computing

Computing Support Manager
 
 
 
Contact

 

+44 (0)20 7594 8400 | l.kamara | Website

 
 

Location

 

305 Huxley Building, South Kensington Campus


Publications


102 results found

Yavlinsky A, Rüger S, 2006, Efficient re-indexing of automatically annotated image collections using keyword combination, Departmental Technical Report: 06/9, Publisher: Department of Computing, Imperial College London

This report presents a framework for improving the image index obtained by automated image annotation. Within this framework, the technique of keyword combination is used for fast image re-indexing based on initial automated annotations. It aims to tackle the challenges of limited vocabulary size and low annotation accuracies resulting from differences between training and test collections. It is useful for situations when these two problems are not anticipated at the time of annotation. We show that, based on example images from the automatically annotated collection, it is often possible to find multiple keyword queries that can retrieve new image concepts which are not present in the training vocabulary, and improve retrieval results of those that are already present. We demonstrate that this can be done at a very small computational cost and at an acceptable performance tradeoff, compared to traditional annotation models. We present a simple, robust, and computationally efficient approach for finding an appropriate set of keywords for a given target concept. We report results on TRECVID 2005, Getty Image Archive, and Web image datasets, the last two of which were specifically constructed to support realistic retrieval scenarios.
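As a sketch of the keyword-combination idea (illustrative only: the scoring rule and data layout below are assumptions, not the authors' model), re-indexing amounts to ranking already-annotated images against a small set of proxy keywords:

from typing import Dict, List

def combined_score(keyword_scores: Dict[str, float], combo: List[str]) -> float:
    """Score an image for a target concept as the mean annotation
    confidence over the combined keyword set."""
    return sum(keyword_scores.get(k, 0.0) for k in combo) / len(combo)

def reindex(collection: Dict[str, Dict[str, float]], combo: List[str]) -> List[str]:
    """Rank image ids by combined score, best first; no re-annotation of
    the collection is needed, hence the low computational cost."""
    return sorted(collection, key=lambda img: combined_score(collection[img], combo),
                  reverse=True)

# "beach" is absent from the training vocabulary, but existing annotations
# can act as a proxy query for it.
images = {
    "img1": {"sand": 0.8, "sea": 0.7, "sky": 0.9},
    "img2": {"grass": 0.9, "sky": 0.6},
}
print(reindex(images, ["sand", "sea", "sky"]))  # ['img1', 'img2']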

Report

Aziz B, 2006, A static analysis of the applied pi calculus, Departmental Technical Report: 06/15, Publisher: Department of Computing, Imperial College London

We present in this technical report a non-uniform static analysis for detecting the term-substitution property in systems specified in the language of the applied pi calculus. The analysis implements a denotational framework that has previously introduced analyses for the pi calculus and the spi calculus. The main novelty of this analysis is its ability to deal with systems specified in languages with non-free term algebras, like the applied pi calculus, where non-identity equations may relate different terms of the language. We demonstrate the applicability of the analysis to one famous security protocol, which uses non-identity equations, namely the Diffie-Hellman protocol.
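The Diffie-Hellman example turns on exactly such a non-identity equation. In the standard applied pi calculus modelling (a textbook formulation, not quoted from the report), exponentiation is a binary function symbol exp and the equational theory contains

\[
\mathsf{exp}(\mathsf{exp}(g, x), y) \,=\, \mathsf{exp}(\mathsf{exp}(g, y), x),
\]

so the two participants' syntactically different key computations are identified as the same term.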

Report

Aziz B, Hamilton G, 2006, Modelling and analysis of PKI-based systems using process calculi, Departmental Technical Report: 06/14

In this technical report, we present a process algebra aimed at modelling PKI-based systems. The new language, SPIKY, extends the spi-calculus by adding primitives for the retrieval of certified/uncertified public keys as well as private keys belonging to users of the PKI-based system. SPIKY also formalises the notion of process ownership by PKI users, which is necessary in controlling the semantics of the key retrieval capabilities. We also construct a static analysis for SPIKY that captures the property of term substitutions resulting from message-passing and PKI/cryptographic operations. This analysis is shown to be safe and computable. Finally, we use the analysis to define the term secrecy and peer participation properties for a couple of examples of authentication protocols.

Report

Charlton N, 2006, Program verification with interacting analysis plugins, Departmental Technical Report: 06/11, Publisher: Department of Computing, Imperial College London

In this paper we propose and argue for a modular framework for interprocedural program analysis, where multiple program analysis tools are combined in order to exploit the particular advantages of each. This allows for “plugging together” such tools as required by each verification task and makes it easy to integrate new analyses. Our framework automates the sharing of information between plugins using a first-order logic with transitive closure, in a way inspired by the open product of Cortesi et al. We describe a prototype implementation of our framework, which performs static assertion checking on a simple language for heap-manipulating programs. This implementation includes plugins for three existing approaches — predicate abstraction, 3-valued shape analysis and a decidable pointer analysis — and for a simple type system. We demonstrate through a detailed example the increase in precision that our approach can provide. Finally we discuss the design decisions we have taken, in particular the tradeoffs involved in the choice of language by which the plugins communicate, and identify some future directions for our work.
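A minimal sketch of the combination loop this suggests (interfaces and fact encoding are invented for illustration; the paper's plugins actually exchange information in a first-order logic with transitive closure):

from typing import Callable, FrozenSet, List

Plugin = Callable[[FrozenSet[str]], FrozenSet[str]]

def combine(plugins: List[Plugin], facts: FrozenSet[str]) -> FrozenSet[str]:
    """Run every plugin over the shared fact set, feeding each one the
    others' conclusions, until no plugin can derive anything new."""
    while True:
        new = facts
        for plugin in plugins:
            new = new | plugin(new)
        if new == facts:
            return facts
        facts = new

def alloc(facts):   # toy plugin: an allocation implies non-nullness
    return frozenset({"x != null"}) if "x = new C()" in facts else frozenset()

def deref(facts):   # toy plugin: non-nullness justifies a field access
    return frozenset({"x.f is safe"}) if "x != null" in facts else frozenset()

print(combine([alloc, deref], frozenset({"x = new C()"})))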

Report

Amadio R, Phillips I, 2006, 13th international workshop on expressiveness in concurrency, Departmental Technical Report: 06/10, Publisher: Department of Computing, Imperial College London

Report

Thornley D, 2006, Anisotropic multidimensional Savitzky–Golay kernels for smoothing, differentiation and reconstruction, Departmental Technical Report: 06/8, Publisher: Department of Computing, Imperial College London

The archetypal Savitzky–Golay convolutional filter matches a polynomial to even-spaced data and uses this to measure smoothed derivatives. We synthesize a scheme in which heterogeneous, anisotropic linearly separable basis functions combine to provide a general smoothing, derivative measurement and reconstruction function for point clouds in multiple dimensions using a linear operator in the form of a convolution kernel. We use a matrix pseudo-inverse for examples, but note that QR factorization is more stable when free weighting is introduced.
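For reference, the classical one-dimensional construction being generalised fits a polynomial to a window of evenly spaced samples by least squares and reads smoothed values or derivatives from the fitted coefficients. This sketch uses the matrix pseudo-inverse, as in the report's examples; the window size and degree are arbitrary choices:

import numpy as np
from math import factorial

def savgol_kernel(window: int, degree: int, deriv: int = 0) -> np.ndarray:
    """Dot-product weights giving the smoothed `deriv`-th derivative at the
    centre of an odd-length window of evenly spaced samples."""
    half = window // 2
    x = np.arange(-half, half + 1)
    A = np.vander(x, degree + 1, increasing=True)  # columns: 1, x, x^2, ...
    # Row `deriv` of pinv(A) extracts the coefficient of x**deriv from the
    # least-squares fit; factorial(deriv) converts it to a derivative value.
    return np.linalg.pinv(A)[deriv] * factorial(deriv)

print(savgol_kernel(5, 2))  # classic quadratic smoother: [-3, 12, 17, 12, -3] / 35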

Report

Maros I, 2006, Computational study of the GDPO dual phase-1 algorithm, Departmental Technical Report: 06/6, Publisher: Department of Computing, Imperial College London

Maros's GDPO algorithm for phase-1 of the dual simplex method possesses some theoretical features that have potentially huge computational advantages. This paper gives an account of a computational analysis of GDPO. Experience from a systematic study involving 48 problems shows that the predicted performance advantages can materialize to a large extent, making GDPO an indispensable tool for dual phase-1.

Report

Thornley DJ, 2006, Trace modelling for abduction basecalling, Departmental Technical Report: 06/7

DNA sequencing using the fluorescence-based Sanger method comprises interpretation of a sequence of signal peaks of varying size whose colour indicates the presence of a base. We have established that the ability to predict the variations effectively makes available novel error correction information which will improve sequencing efficacy. Our experiments so far have used basic models of the Sanger reaction chemistry and machine learning techniques. These have enabled us to make base calls just using context information, specifically ignoring the peak data at the base calling position. The 80% success rate of our blind experiments is striking, and will be improved by a more accurate model of trace behaviour. To this end, and to integrate the information into mainstream basecalling, we wish to develop an enzyme kinetics model susceptible to calibration of its component rates such that trace data can be accurately predicted. We describe DNA sequencing trace data, outline the trace prediction problem requirements on the model, and discuss model construction and calibration issues.

Report

Pitt JV, Kamara LD, Sergot MJ, Artikis A et al., 2005, Voting in online deliberative assemblies, International Conference on Artificial Intelligence and Law (ICAIL 05), 6-11 June 2005, Spain, Pages: 195-204

Conference paper

Pitt JV, Kamara LD, Sergot MJ, Artikis A et al., 2005, Formalization of a voting protocol for virtual organizations, 4th international joint conference on Autonomous agents and multiagent systems (AAMAS 2005), 25-29 July 2005, The Netherlands, Publisher: ACM Press, Pages: 373-380

Conference paper

Artikis A, Kamara L, Pitt J, Sergot M et al., 2005, A protocol for resource sharing in norm-governed ad hoc networks, 2nd International Workshop on Declarative Agent Languages and Technologies, Publisher: Springer-Verlag Berlin, Pages: 221-238, ISSN: 0302-9743

Conference paper

Artikis A, Kamara L, Pitt J, Sergot M et al., 2004, A protocol for resource sharing in ad hoc networks, Pages: 221-238

Conference paper

Gaertner D, 2004, Natural algorithms for optimisation problems (final year project report), Departmental Technical Report: 04/9, Publisher: Department of Computing, Imperial College London

Many computational techniques borrow ideas from nature in one way or another. Neural networks imitate the structure of the human brain, genetic algorithms simulate evolution and swarms of insects inspired algorithms for stochastic combinatorial optimisation. These techniques are characterised by inherent parallelism, adaptivity, positive feedback and some element of randomness. This report details the investigations into certain features of one such technique: the Ant System meta-heuristic proposed by Dorigo. The algorithm is applied to several variations of the well-known Travelling Salesman Problem, including asymmetric and dynamic ones. Furthermore, different approaches to parallelise the algorithm are investigated. Good parameters are needed to instantiate the generic algorithm efficiently, and research to find optimal parameter settings has been conducted in the course of this project. An ontology to classify instances of the TSP is introduced and a thorough analysis of the dependency between parameters and classes of problems is presented. During this analysis, optimal parameters have been found for a given problem instance that led to the discovery of a new shortest tour for this instance. A novel algorithm, the Genetically Modified Ant System (GMAS), is then proposed that employs ideas from the field of genetic algorithms to eliminate the need to set parameters for a given problem explicitly. An implementation is described and the new system is evaluated with respect to the standard Ant System and also compared to other more specialised heuristics.
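For orientation, the core of Dorigo's Ant System that the report builds on can be stated in a few lines (a generic sketch with illustrative parameter values, not the report's implementation):

import random

def next_city(i, unvisited, tau, dist, alpha=1.0, beta=2.0):
    """Choose the next city for an ant at city i; `unvisited` is a list of
    city indices, tau and dist are pheromone and distance matrices."""
    weights = [(tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta)
               for j in unvisited]
    return random.choices(unvisited, weights=weights, k=1)[0]

def update_pheromone(tau, tour, length, rho=0.5, q=100.0):
    """Evaporate pheromone on every edge, then reinforce the edges of a
    completed tour in proportion to its quality (shorter => more)."""
    n = len(tau)
    for a in range(n):
        for b in range(n):
            tau[a][b] *= 1.0 - rho
    for a, b in zip(tour, tour[1:] + tour[:1]):
        tau[a][b] += q / length
        tau[b][a] = tau[a][b]  # symmetric TSP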

Report

Ezechukwu OC, Maros I, 2004, OSCP: Optimization Service Connectivity Protocol, Departmental Technical Report: 04/5, Publisher: Department of Computing, Imperial College London

Optimization software, e.g. solvers and modelling systems, exposes software- and vendor-specific interfacing mechanisms to client applications, e.g. decision support systems, thus introducing close coupling. The ‘Optimization Service Connectivity Protocol’ (OSCP) is an abstraction of the interfaces to optimization software, which is aimed primarily at simplifying the process of integrating optimization systems into software solutions by providing an abstracted, uniform and easy to use interface to such systems, regardless of system or vendor specific requirements. This paper presents a high-level overview of OSCP including descriptions of its main interfaces, and illustrates its use via examples.
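A hypothetical sketch of the pattern this describes, with invented class and method names (OSCP's actual interface definitions are not reproduced here): client code depends on one abstract interface, and vendor-specific adapters hide each solver's native API.

from abc import ABC, abstractmethod

class OptimizationService(ABC):
    """Uniform facade over heterogeneous optimization back-ends."""

    @abstractmethod
    def submit(self, model: str) -> str:
        """Send a model (in some agreed format) and return a job id."""

    @abstractmethod
    def result(self, job_id: str) -> dict:
        """Fetch solution status and values for a completed job."""

class AcmeSolverAdapter(OptimizationService):
    """Adapter wrapping a fictional vendor API behind the uniform interface."""
    def submit(self, model: str) -> str:
        return "job-1"  # would call the vendor's native submit routine here
    def result(self, job_id: str) -> dict:
        return {"status": "optimal", "objective": 42.0}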

Report

Kamara LD, Pitt J, Sergot M, 2004, Norm-aware agents for ad hoc networks, Proceedings AAMAS Workshop on Agents and Ubiquitous Computing, 20 July 2004, New York, Pages: 1-8

Conference paper

Ezechukwu OC, Maros I, 2003, AML: Algebraic Markup Language, Departmental Technical Report: 03/12, Publisher: Department of Computing, Imperial College London

There exists a variety of formats for representing optimisation models, each with varying degrees of support on optimisation platforms. Existing formats are also unsuitable for Internet distributed computing technologies, due not only to their syntax but also to limited degrees of portability. This paper describes an XML based algebraic mark-up language for representing optimisation models. The principal aim of the language is to provide an abstracted representation format with sufficient generality to capture the structure of optimisation models, and which can easily be ported to existing formats, and is usable in an Internet computing environment.
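A hypothetical illustration of the general approach, sketched with Python's standard library; the element and attribute names below are invented for this example and are not AML's actual schema:

import xml.etree.ElementTree as ET

# Build a toy algebraic model description: one variable, an objective and
# a constraint, each stored as structured XML rather than solver syntax.
model = ET.Element("model", name="toy")
ET.SubElement(model, "variable", name="x", type="continuous", lower="0")
objective = ET.SubElement(model, "objective", sense="minimize")
objective.text = "2*x"
constraint = ET.SubElement(model, "constraint", name="c1")
constraint.text = "x >= 3"
print(ET.tostring(model, encoding="unicode"))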

Report

Ezechukwu OC, Maros I, 2003, ORML: Optimization Reporting Markup Language, Departmental Technical Report: 03/13, Publisher: Department of Computing, Imperial College London

This paper presents a reporting mark-up language that enables the encoding of optimization reporting data in XML format. The language eases the process of integrating optimization software into decision support applications by shielding upstream applications from native representation formats. It also serves as an enabling technology for web services based optimization because it provides a response encoding mechanism which is compatible with Internet standards, protocols and technologies.

Report

Nieminen K, Ruuth S, Maros I, 2003, Genetic algorithm for finding a good first integer solution for MILP, Departmental Technical Report: 03/4, Publisher: Department of Computing, Imperial College London

The paper proposes a genetic algorithm based method for finding a good first integer solution to mixed integer programming problems (MILP). The objective value corresponding to this solution can be used to efficiently prune the search tree in branch and bound type algorithms for MILP. Some preliminary computational results are also presented which support the view that this approach deserves some attention.
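The following is a generic sketch of the idea (the toy problem, penalty weighting and GA operators are all illustrative assumptions, not the paper's algorithm): evolve integer assignments scored by objective value plus a penalty for constraint violation, and hand the best one to branch and bound as an initial incumbent.

import random

def fitness(x, c, constraints, penalty=1000.0):
    """Objective value plus a heavy penalty for violated constraints,
    where each constraint g satisfies g(x) <= 0 when feasible."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    return sum(ci * xi for ci, xi in zip(c, x)) + penalty * violation

def evolve(c, constraints, n_vars, bounds=(0, 10), pop=30, gens=200):
    population = [[random.randint(*bounds) for _ in range(n_vars)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda x: fitness(x, c, constraints))
        parents = population[: pop // 2]          # keep the fitter half
        children = []
        while len(parents) + len(children) < pop:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_vars)     # one-point crossover (n_vars >= 2)
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:             # occasional mutation
                child[random.randrange(n_vars)] = random.randint(*bounds)
            children.append(child)
        population = parents + children
    return min(population, key=lambda x: fitness(x, c, constraints))

# minimize x0 + 2*x1 subject to x0 + x1 >= 5, written as 5 - x0 - x1 <= 0
best = evolve(c=[1, 2], constraints=[lambda x: 5 - x[0] - x[1]], n_vars=2)
print(best)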

Report

Ray O, 2003, HAIL: Hybrid Abductive-Inductive Learning, Departmental Technical Report: 03/6, Publisher: Department of Computing, Imperial College London

The learning system Progol5 and the inference method of Bottom Generalisation are firmly established within Inductive Logic Programming (ILP). But despite their success, these approaches are known to be incomplete, and are restricted to finding hypotheses within the semantics of Plotkin's relative subsumption. This paper reveals a previously unsuspected incompleteness of Progol5 with respect to Bottom Generalisation and proposes a new approach that is shown to overcome this particular incompleteness and to further generalise Progol. This new approach is called Hybrid Abductive Inductive Learning (HAIL) because it integrates the ILP principles of Progol5 with Abductive Logic Programming (ALP). A proof procedure is described that, unlike Progol5, is able to hypothesise multiple clauses in response to a single positive example and finds hypotheses outside Plotkin's relative subsumption. A semantics is presented which extends that of Bottom Generalisation and includes the hypotheses constructed by HAIL.

Report

Ezechukwu OC, Maros I, 2003, OOF: Open Optimization Framework, Departmental Technical Report: 03/7, Publisher: Department of Computing, Imperial College London

There is currently a plethora of formats for representing optimization models and instances. The varying degrees of support for these formats, coupled with their well-documented deficiencies, complicate the task of developing and integrating optimization models and software. This has led to initiatives to develop new forms of model representation, which address these deficiencies and exploit advances in software technology, particularly the Internet. This paper describes a framework which not only comprehensively addresses the issue of model and instance representation but also provides in-built support for distributed optimization, particularly in the area of service delivery over the Internet.

Report

Kamara L, Artikis A, Neville B, Pitt J et al., 2003, Simulating computational societies, 3rd international workshop on engineering societies in the agents world, Madrid, Spain, 16-17 September 2002, Publisher: Springer-Verlag Berlin, Pages: 53-67

Conference paper

Ray O, 2002, Nimble: a hybrid abductive-inductive learning algorithm, Departmental Technical Report: 02/15a, Publisher: Department of Computing, Imperial College London

Report

Flieger JC, 2002, Relevance and minimality in systems of defeasible argumentation, Departmental Technical Report: 02/17, Publisher: Department of Computing, Imperial College London

We present a metalogical characterisation of relevance for systems of defeasible argumentation and use it to define the notion of a relevant argument system. We employ a variant of the idea (influential in linguistics and philosophy) that communication and cognition are governed by a trade-off between opposing demands of informational sufficiency and economy of means; the notion of informational sufficiency is modelled in terms of satisfying a query associated with a topic of argumentation, while the notion of economy is based on proof-theoretic minimality. The resulting system of relevant argumentation is able to handle fallacies of relevance, such as the paradoxes of material implication, even when the underlying deductive system is a classical rather than a relevance logic.

Report

Maros I, 2002, An enhanced piecewise linear dual phase-1 algorithm for the simplex method, Departmental Technical Report: 02/15b, Publisher: Department of Computing, Imperial College London

A dual phase-1 algorithm for the simplex method that handles all types of variables is presented. In each iteration it maximizes a piecewise linear function of dual infeasibilities in order to make the largest possible step towards dual feasibility with a selected outgoing variable. The algorithm can be viewed as a generalization of traditional phase-1 procedures. It is based on the multiple use of the expensively computed pivot row. With a small amount of extra work per iteration, the progress it can make is equivalent to many iterations of the traditional method. While this is its most important feature, it possesses some additional favorable properties, namely, it can be efficient in coping with degeneracy and numerical difficulties. Both theoretical and computational issues are addressed. Some computational experience is also reported which shows that the potentials of the method can materialize on real world problems. This paper is based on IC Departmental Technical Report 2000/13 and contains an enhancement of the main algorithm.

Report

Maros I, 2002, An enhanced piecewise linear dual phase-1 algorithm for the simplex method, Departmental Technical Report: 02/15, Publisher: Department of Computing, Imperial College London

A dual phase-1 algorithm for the simplex method that handles all types of variables is presented. In each iteration it maximizes a piecewise linear function of dual infeasibilities in order to make the largest possible step towards dual feasibility with a selected outgoing variable. The algorithm can be viewed as a generalization of traditional phase-1 procedures. It is based on the multiple use of the expensively computed pivot row. With a small amount of extra work per iteration, the progress it can make is equivalent to many iterations of the traditional method. While this is its most important feature, it possesses some additional favorable properties, namely, it can be efficient in coping with degeneracy and numerical difficulties. Both theoretical and computational issues are addressed. Some computational experience is also reported which shows that the potentials of the method can materialize on real world problems. This paper is based on IC Departmental Technical Report 2000/13 and contains an enhancement of the main algorithm.
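Schematically (standard dual simplex notation, not the report's own): once the outgoing variable is chosen, let \(\alpha_j\) denote the entries of its expensively computed pivot row and \(d_j\) the reduced costs. The total dual infeasibility after a dual step of length \(t\) is a piecewise linear function of \(t\) whose slope changes at the breakpoints

\[
t_j = \frac{d_j}{\alpha_j},
\]

and while the traditional phase-1 ratio test stops at the first breakpoint, the piecewise linear method keeps moving through successive breakpoints for as long as the infeasibility function still decreases, so one iteration can achieve the progress of many traditional ones.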

Report

Pappas A, 2002, The accuracy of a Bayesian Network, Departmental Technical Report: 02/3, Publisher: Department of Computing, Imperial College London

A Bayesian network is a construct that represents a joint probability distribution, and can be used in order to model a given joint probability distribution.

A principal characteristic of a Bayesian network is the degree to which it models the given joint probability distribution accurately: the accuracy of a Bayesian network. Although the accuracy of a Bayesian network can be well defined in theory, it is rarely possible to determine the accuracy of a Bayesian network in practice for real-world applications. Instead, alternative characteristics of a Bayesian network, which relate to and reflect the accuracy, are used to model the accuracy of a Bayesian network, and appropriate measures are devised.

A popular formalism that adopts such methods to study the accuracy of a Bayesian network is the Minimum Description Length (MDL) formalism, which models the accuracy of a Bayesian network as the probability of the Bayesian network given the data set that describes the joint probability distribution the Bayesian network models. However, in the context of Bayesian Networks, the MDL formalism is flawed, exhibiting several shortcomings, and is thus inappropriate for examining the accuracy of a Bayesian network.

An alternative framework for Bayesian Networks is proposed, which models the accuracy of a Bayesian network as the accuracy of the conditional independencies implied by the structure of the Bayesian network, and specifies an appropriate measure called the Network Conditional Independencies Mutual Information (NCIMI) measure. The proposed framework is inspired by the principles governing the field of Bayesian Networks, and is based on formal theoretical foundations.

Experiments have been conducted, using real-world problems, that evaluate both the MDL formalism and the proposed framework for Bayesian Networks. The experimental results support the theoretical claims, and confirm the significance of the proposed framework.
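For reference, the MDL score being criticised is usually written as follows (a standard formulation from the Bayesian network learning literature, not quoted from the report): for a network \(B\) with \(\lvert B \rvert\) free parameters and a data set \(D\) of \(N\) cases,

\[
\mathrm{MDL}(B \mid D) \;=\; \frac{\log N}{2}\,\lvert B \rvert \;-\; LL(B \mid D),
\]

where \(LL(B \mid D)\) is the log-likelihood of the data under the network; the first term charges for model complexity and the second rewards fit, with lower scores preferred.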

Report

d'Avila Garcez AS, Lamb LC, Gabbay DM, 2002, A connectionist inductive learning system for modal logic programming, Departmental Technical Report: 02/6, Publisher: Department of Computing, Imperial College London

Neural-Symbolic integration has become a very active research area in the last decade. In this paper, we present a new massively parallel model for modal logic. We do so by extending the language of Modal Prolog [32, 37] to allow modal operators in the head of the clauses. We then use an ensemble of C-IL2P neural networks [14, 15] to encode the extended modal theory (and its relations), and show that the ensemble computes a fixpoint semantics of the extended theory. An immediate result of our approach is the ability to perform learning from examples efficiently using each network of the ensemble. Therefore, one can adapt the extended C-IL2P system by training possible world representations.

Keywords: Neural-Symbolic Integration, Artificial Neural Networks, Modal Logic, Change of Representation, Learning from Structured Data.

Report

Maros I, 2001, A general pricing scheme for the simplex method, Departmental Technical Report: 01/3, Publisher: Department of Computing, Imperial College London

Pricing is a term in the simplex method for linear programming used to refer to the step of checking the reduced costs of nonbasic variables. If they are all of the ‘right sign’, the current basis (and solution) is optimal; if not, this procedure selects a candidate vector that looks profitable for inclusion in the basis. While theoretically the choice of any profitable vector will lead to a finite termination (provided degeneracy is handled properly), the number of iterations until termination depends very heavily on the actual choice (which is defined by the selection rule applied). Pricing has long been an area of heuristics to help make better selections. As a result, many different and sophisticated pricing strategies have been developed, implemented and tested. So far none of them is known to dominate all others in all cases. Therefore, advanced simplex solvers need to be equipped with many strategies so that the most suitable one can be activated for each individual problem instance. In this paper we present a general pricing scheme. It creates a large flexibility in pricing. It is controlled by three parameters. With different settings of the parameters, many of the known strategies can be reproduced as special cases. At the same time, the framework makes it possible to define new strategies or variants of them. The scheme is equally applicable to general and network simplex algorithms.
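For reference, the quantity inspected during pricing is the reduced cost (textbook notation, not specific to this paper): for basis matrix \(B\), basic cost vector \(c_B\) and a nonbasic variable \(x_j\) with column \(a_j\),

\[
d_j \;=\; c_j \;-\; c_B^{\top} B^{-1} a_j ,
\]

and in a minimization problem any \(x_j\) at its lower bound with \(d_j < 0\) is a profitable candidate; the pricing strategy decides which of the profitable candidates enters the basis.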

Report

Maros I, 2001, A generalized dual phase-2 simplex algorithm, Departmental Technical Report: 01/2, Publisher: Department of Computing, Imperial College London

Recently, the dual simplex method has attracted considerable interest. This is mostly due to its important role in solving mixed integer linear programming problems where dual feasible bases are readily available. Even more than that, it is a realistic alternative to the primal simplex method in many cases. Real world linear programming problems include all types of variables and constraints. This necessitates a version of the dual simplex algorithm that can handle all types of variables efficiently. The paper presents increasingly more capable dual algorithms that evolve into one which is based on the piecewise linear nature of the dual objective function. The distinguishing features of this method are: (i) in one iteration it can make progress equivalent to many traditional dual iterations, (ii) using proper data structures it can be implemented very efficiently so that an iteration requires hardly more work than the traditional pivot method, (iii) its effectiveness just increases if more upper bounded variables are present, (iv) it has inherently better numerical stability because it can create a large flexibility in finding a pivot element, (v) it can excel in coping with degeneracy as it can bypass dual degenerate vertices more easily than the traditional pivot procedures. The power of the method is demonstrated through examples. The performance on some real world problems is also presented.

Report

Lehman MM, Ramil JF, Kahen G, 2001, A paradigm for the behavioural modelling of software processes using system dynamics, Departmental Technical Report: 01/8, Publisher: Department of Computing, Imperial College London

Industrial software evolution processes are, in general, complex feedback systems. Recognition of this opens up opportunities to achieve sustained process improvement. The system dynamics approach was used in the FEAST projects to model behaviour of key process and product attributes, yielding foundations for a paradigm to achieve dynamic models of software evolution processes. It is based on top-down model refinement of a high-level abstracted view of the process. Resulting models are relatively simple when compared to others in the literature. Their simplicity, however, facilitates understanding, progressive refinement, subsequent validation, and industrial take-up.

The paper illustrates the paradigm with a model of a process highlighting a particular issue. The results of executing this model indicate how one may analyse and eventually optimise division of resources between progressive activity, such as functional enhancement, and anti-regressive activity, such as complexity control. The approach and model are based on empirical observations and interpretation of the evolution phenomena. When appropriately refined and validated to reflect a particular process, a model such as this can more generally be used for release planning and process improvement. The approach, together with its models, provides the basis for management technology and tools to achieve sustainable long-term evolution of complex software and systems.
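A toy sketch of the modelling style (stocks integrated over discrete time under a chosen split of effort; every rate constant below is invented for illustration and this is not the FEAST model):

def simulate(split, steps=100, dt=1.0):
    """split = fraction of effort spent on progressive work (enhancement);
    the remainder goes to anti-regressive work (complexity control)."""
    functionality, complexity = 0.0, 0.0
    for _ in range(steps):
        progress = split / (1.0 + complexity)      # complexity slows growth
        cleanup = 0.1 * (1.0 - split) * complexity # anti-regressive effect
        functionality += progress * dt
        complexity += (0.05 * progress - cleanup) * dt
        complexity = max(complexity, 0.0)
    return functionality

# Sweep the effort split to expose the trade-off the paper analyses.
for s in (0.5, 0.7, 0.9, 1.0):
    print(s, round(simulate(s), 2))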

Report

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
