Publications
Nahabedian L, Braberman V, D'Ippolito N, et al., 2022, Assured automatic dynamic reconfiguration of business processes, Information Systems, Vol: 104, Pages: 1-19, ISSN: 0306-4379
In order to manage evolving organisational practice and maintain compliance with changes in policies and regulations, businesses must be capable of dynamically reconfiguring their business processes. However, such dynamic reconfiguration is a complex, human-intensive and error-prone task. Not only must new business process rules be devised but also, crucially, the transition between the old and new rules must be managed. In this paper we present a fully automated technique based on formal specifications and discrete event controller synthesis to produce correct-by-construction reconfiguration strategies. These strategies satisfy user-specified transition requirements, be they domain independent - such as delayed and immediate change - or domain specific. To achieve this, we provide a discrete-event control theoretic approach to operationalise declarative business process specifications, and show how this can be extended to resolve reconfiguration problems. In this way, given the old and the new business process rules described as Dynamic Condition Response Graphs, and given the transition requirements described with linear temporal logic, the technique produces a control strategy that guides the organisation through a business process reconfiguration, ensuring that all transition requirements and process rules are satisfied. The technique outputs a reconfiguration DCR whose traces reproduce the controller’s reconfiguration strategy. We illustrate and validate the approach using realistic cases and examples from the BPM Academic Initiative.
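To make the declarative semantics mentioned above concrete, the following is a minimal sketch of Dynamic Condition Response (DCR) execution (conditions and responses only; inclusion/exclusion relations are omitted, and the event names and toy graph are illustrative, not taken from the paper or its tooling):

```python
# Minimal sketch of DCR-graph execution semantics (conditions and responses only).
# The loan-approval events below are hypothetical, for illustration.

class DCRGraph:
    def __init__(self, events, conditions, responses):
        self.events = set(events)
        self.conditions = conditions      # {event: set of events that must have executed first}
        self.responses = responses        # {event: set of events it makes pending (obligations)}
        self.executed, self.pending = set(), set()

    def enabled(self, e):
        # An event is enabled once all of its condition events have been executed.
        return e in self.events and self.conditions.get(e, set()) <= self.executed

    def execute(self, e):
        assert self.enabled(e), f"{e} is not enabled"
        self.executed.add(e)
        self.pending.discard(e)
        self.pending |= self.responses.get(e, set())

    def accepting(self):
        # A finite run is accepting when no response obligation is outstanding.
        return not self.pending

# Toy process: 'assess' requires 'receive'; 'receive' obliges an eventual 'decide'.
g = DCRGraph(
    events={"receive", "assess", "decide"},
    conditions={"assess": {"receive"}, "decide": {"assess"}},
    responses={"receive": {"decide"}},
)
g.execute("receive"); g.execute("assess"); g.execute("decide")
print(g.accepting())   # True: all obligations discharged
```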
Nahabedian L, Braberman V, D'Ippolito N, et al., 2020, Dynamic update of discrete event controllers, IEEE Transactions on Software Engineering, Vol: 46, Pages: 1220-1240, ISSN: 0098-5589
Discrete event controllers are at the heart of many software systems that require continuous operation. Changing these controllers at runtime to cope with changes in their execution environment or in system requirements is a challenging open problem. In this paper we address the problem of dynamic update of controllers in reactive systems. We present a general approach to specifying correctness criteria for dynamic update and a technique for automatically computing a controller that handles the transition from the old to the new specification, assuring that the system will reach a state in which such a transition can correctly occur and in which the underlying system architecture can reconfigure. Our solution uses discrete event controller synthesis to automatically build a controller that guarantees both progress towards update and safe update.
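As a rough illustration of the "progress towards update" property described above, here is a minimal sketch (not the paper's synthesis algorithm) that checks, over a toy transition system, whether every reachable state of the running controller can still reach some state where the swap to the new specification is safe; all state names and the safe-state set are hypothetical:

```python
# Illustrative reachability check: from every state reachable during normal
# operation, some state in which the update may safely be applied must remain reachable.
from collections import deque

def reachable(transitions, start):
    seen, frontier = {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        for _, t in transitions.get(s, []):
            if t not in seen:
                seen.add(t); frontier.append(t)
    return seen

def update_always_reachable(transitions, start, safe_update_states):
    return all(reachable(transitions, s) & safe_update_states
               for s in reachable(transitions, start))

# Toy example: the 'idle' state 2 is the only safe update point.
ts = {0: [("req", 1)], 1: [("done", 2)], 2: [("req", 1)]}
print(update_always_reachable(ts, 0, {2}))   # True
```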
Kramer J, 2020, RE @ runtime: the challenge of change (RE'20 Conference Keynote), 28th IEEE International Requirements Engineering Conference (RE), Publisher: IEEE, Pages: 4-6, ISSN: 2332-6441
Providing rigorous techniques and tools to support RE so that it could potentially be performed online, at runtime, is certainly challenging. However, the rewards could be great. Performing RE at runtime has the potential not only to enable software self-adaptation, but also to provide support for change in general by suggesting possible remedies to engineers. This talk will present our motivation and vision, and suggest possible approaches using techniques such as model checking, learning and synthesis to try to support RE change and adaptation.
Uchitel S, Braberman V, Kramer J, et al., 2019, Dynamic reconfiguration of business processes, 17th International Conference on Business Process Management (BPM 2019), Publisher: Springer Verlag, Pages: 35-51, ISSN: 0302-9743
Organisations require that their business processes reflect their evolving practices by maintaining compliance with their policies, strategies and regulations. Designing workflows which satisfy these requirements is complex and error-prone. Business process reconfiguration is even more challenging, as not only must a new workflow be devised but also an understanding of how the transition between the old and new workflow is to be managed. Transition requirements can include both domain-independent requirements, such as delayed and immediate change, and user-defined domain-specific requirements. In this paper we present a fully automated technique which uses control synthesis not only to produce correct-by-construction workflows from business process requirements but also to compute a reconfiguration process that guarantees the evolution from an old workflow to a new one while satisfying any user-defined transition requirements. The approach is validated using three examples from the BPM Academic Initiative, described as Dynamic Condition Response Graphs, which we reconfigured for a variety of transition requirements.
Braberman V, D'Ippolito N, Kramer J, et al., 2017, An extended description of MORPH: a reference architecture for configuration and behaviour self-adaptation, SEFSAS, Publisher: Springer Verlag, Pages: 377-408, ISSN: 0302-9743
An architectural approach to self-adaptive systems involves runtime change of system configuration (i.e., the system’s components, their bindings and operational parameters) and behaviour update (i.e., component orchestration). The architecture should allow for both configuration and behaviour changes selected from pre-computed change strategies and for synthesised change strategies at run-time to satisfy changes in the environment, changes in the specified goals of the system or in response to failures or degradation in quality attributes, such as performance, of the system itself. Although controlling configuration and behaviour at runtime has been discussed and applied to architectural adaptation, architectures for self-adaptive systems often compound these two aspects reducing the potential for adaptability. In this work we provide an extended description of our proposal for a reference architecture that allows for coordinated yet transparent and independent adaptation of system configuration and behaviour.
Hazzan O, Kramer J, 2016, Assessing abstraction skills, Communications of the ACM, Vol: 59, Pages: 43-45, ISSN: 0001-0782
Alrajeh D, Russo A, Uchitel S, et al., 2016, Logic-based learning in software engineering, 38th IEEE/ACM International Conference on Software Engineering Companion (ICSE), Publisher: IEEE, Pages: 892-893
In recent years, research efforts have been directed towards the use of Machine Learning (ML) techniques to support and automate activities such as program repair, specification mining and risk assessment. The focus has largely been on techniques for classification, clustering and regression. Although beneficial, these do not produce a declarative, interpretable representation of the learned information. Hence, they cannot readily be used to inform, revise and elaborate software models. On the other hand, recent advances in ML have witnessed the emergence of new logic-based learning approaches that differ from traditional ML in that their output is represented in a declarative, rule-based manner, making them well-suited for many software engineering tasks. In this technical briefing, we will introduce the audience to the latest advances in logic-based learning, give an overview of how logic-based learning systems can successfully provide automated support to a variety of software engineering tasks, demonstrate the application to two real case studies from the domain of requirements engineering and software design and highlight future challenges and directions.
Nahabedian L, Braberman V, D'Ippolito N, et al., 2016, Assured and correct dynamic update of controllers, 11th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS '16), Publisher: ACM, Pages: 96-107
In many application domains, continuous operation is a desirable attribute for software-intensive systems. As the environment or system requirements change, so the system should change and adapt without stopping or unduly disturbing its operation. There is, therefore, a need for sound engineering techniques that can cope with dynamic change. In this paper we address the problem of dynamic update of controllers in reactive systems when the specification (environment assumptions, requirements and interface) of the current system changes. We present a general approach to specifying correctness criteria for dynamic update and a technique for automatically computing a controller that handles the transition from the old to the new specification, assuring that the system will reach a state in which such a transition can correctly occur. Indeed, using controller synthesis we show how to automatically build a controller that guarantees both progress towards update and safe update. Seven case studies have been implemented to validate the approach.
Alrajeh D, van Lamsweerde A, Kramer J, et al., 2016, Risk-driven revision of requirements models, 38th International Conference on Software Engineering (ICSE '16), Publisher: Association for Computing Machinery, Pages: 855-865, ISSN: 0270-5257
Requirements incompleteness is often the result of unanticipated adverse conditions which prevent the software and its environment from behaving as expected. These conditions represent risks that can cause severe software failures. The identification and resolution of such risks is therefore a crucial step towards requirements completeness. Obstacle analysis is a goal-driven form of risk analysis that aims at detecting missing conditions that can obstruct goals from being satisfied in a given domain, and resolving them. This paper proposes an approach for automatically revising goals that may be under-specified or (partially) wrong to resolve obstructions in a given domain. The approach deploys a learning-based revision methodology in which obstructed goals in a goal model are iteratively revised from traces exemplifying obstruction and non-obstruction occurrences. Our revision methodology computes domain-consistent, obstruction-free revisions that are automatically propagated to other goals in the model in order to preserve the correctness of goal models whilst guaranteeing minimal change to the original model. We present the formal foundations of our learning-based approach, and show that it preserves the properties of our formal framework. We validate it against the benchmarking case study of the London Ambulance Service.
Braberman V, D'Ippolito N, Kramer J, et al., 2015, MORPH: a reference architecture for configuration and behaviour self-adaptation, 10th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering, Publisher: Association for Computing Machinery, Pages: 9-16
An architectural approach to self-adaptive systems involves runtime change of system configuration (i.e., the system's components, their bindings and operational parameters) and behaviour update (i.e., component orchestration). Thus, dynamic reconfiguration and discrete event control theory are at the heart of architectural adaptation. Although controlling configuration and behaviour at runtime has been discussed and applied to architectural adaptation, architectures for self-adaptive systems often compound these two aspects reducing the potential for adaptability. In this paper we propose a reference architecture that allows for coordinated yet transparent and independent adaptation of system configuration and behaviour.
Duarte LM, Kramer J, Uchitel S, 2015, Using contexts to extract models from code, Software & Systems Modeling, Vol: 16, Pages: 523-557, ISSN: 1619-1366
Behaviour models facilitate the understanding and analysis of software systems by providing an abstract view of their behaviours and also by enabling the use of validation and verification techniques to detect errors. However, depending on the size and complexity of these systems, constructing models may not be a trivial task, even for experienced developers. Model extraction techniques can automatically obtain models from existing code, thus reducing the effort and expertise required of engineers and helping avoid errors often present in manually constructed models. Existing approaches for model extraction often fail to produce faithful models, either because they only consider static information, which may include infeasible behaviours, or because they are based only on dynamic information, thus relying on observed executions, which usually results in incomplete models. This paper describes a model extraction approach based on the concept of contexts, which are abstractions of concrete states of a program, combining static and dynamic information. Contexts merge some of the advantages of using either type of information and, by their combination, can overcome some of their problems. The approach is partially implemented by a tool called LTS Extractor, which translates information collected from execution traces produced by instrumented Java code to labelled transition systems (LTS), which can be analysed in an existing verification tool. Results from case studies are presented and discussed, showing that, considering a certain level of abstraction and a set of execution traces, the produced models are correct descriptions of the programs from which they were extracted. Thus, they can be used for a variety of analyses, such as program understanding, validation, verification, and evolution.
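As a rough illustration of the context-based extraction idea summarised above, here is a minimal sketch (not the LTS Extractor tool itself): concrete program states observed in execution traces are abstracted into "contexts", and the LTS is built over those contexts; the abstraction function, the connection flag and the traces are all hypothetical examples:

```python
# Minimal sketch: build an LTS whose states are context abstractions of
# concrete states seen in execution traces.
def build_lts(traces, abstract):
    """traces: list of [(state, action, next_state), ...]; abstract: state -> context."""
    states, transitions = set(), set()
    for trace in traces:
        for s, a, t in trace:
            cs, ct = abstract(s), abstract(t)
            states.update({cs, ct})
            transitions.add((cs, a, ct))
    return states, transitions

# Toy abstraction: keep only whether a (hypothetical) connection flag is set.
abstract = lambda state: "connected" if state.get("conn") else "disconnected"
traces = [[({"conn": False}, "open",  {"conn": True}),
           ({"conn": True},  "send",  {"conn": True}),
           ({"conn": True},  "close", {"conn": False})]]
print(build_lts(traces, abstract))
```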
Lupu EC, Rodrigues P, Kramer J, 2015, Compositional Reliability Analysis for Probabilistic Component Automata, 37th International Workshop on Modeling in Software Engineering (ICSE 15), Publisher: Association for Computing Machinery/IEEE
In this paper we propose a modelling formalism, Probabilistic Component Automata (PCA), as a probabilistic extension to Interface Automata to represent the probabilistic behaviour of component-based systems. The aim is to support composition of component-based models for both behaviour and non-functional properties such as reliability. We show how additional primitives for modelling failure scenarios, failure handling and failure propagation, as well as other algebraic operators, can be combined with models of the system architecture to automatically construct a system model by composing models of its subcomponents. The approach is supported by the tool LTSA-PCA, an extension of LTSA, which generates a composite DTMC model. The reliability of a particular system configuration can then be automatically analysed based on the corresponding composite model using the PRISM model checker. This approach facilitates configurability and adaptation in which the software configuration of components and the associated composition of component models are changed at run time.
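To make the final analysis step concrete, here is a minimal sketch of the kind of reliability query run on a composed DTMC (the sort of property PRISM would check): the probability of eventually reaching a failure state, computed by simple fixed-point iteration. The three-state model and its probabilities are illustrative, not from the paper:

```python
# Probability of eventually reaching a target (failure) state in a DTMC,
# computed by fixed-point iteration over the reachability equations.
def prob_reach(dtmc, targets, iters=1000):
    p = {s: (1.0 if s in targets else 0.0) for s in dtmc}
    for _ in range(iters):
        for s in dtmc:
            if s not in targets:
                p[s] = sum(pr * p[t] for t, pr in dtmc[s].items())
    return p

# Hypothetical composed model of a small component assembly.
dtmc = {
    "init":    {"working": 0.99, "failed": 0.01},
    "working": {"working": 0.95, "done": 0.04, "failed": 0.01},
    "done":    {"done": 1.0},
    "failed":  {"failed": 1.0},
}
print(round(prob_reach(dtmc, {"failed"})["init"], 4))  # P(eventually failed) from init
```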
Lupu EC, Rodrigues P, Kramer J, 2015, On Re-Assembling Self-Managed Components, 2015 IFIP/IEEE International Symposium on Integrated Network Management (IM 2015), Publisher: IEEE
Self-managed systems need to adapt to changes in requirements and in operational conditions. New components or services may become available, others may become unreliable or fail. Non-functional aspects, such as reliability or other quality-of-service parameters, usually drive the selection of new architectural configurations. However, in existing approaches, the link between non-functional aspects and software models is established through manual annotations that require human intervention on each re-configuration, and adaptation is enacted through fixed rules that require anticipation of all possible changes. We propose here a methodology to automatically re-assemble services and component-based applications to preserve their reliability. To achieve this we define architectural and behavioural models that are composable, account for non-functional aspects and correspond closely to the implementation. Our approach enables autonomous components to locally adapt and control their internal configuration whilst exposing interface models to upstream components.
Kramer J, 2015, Adventures in Adaptation: A Software Engineering Playground! (Keynote), IEEE/ACM 10th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, Publisher: IEEE, Pages: 1-1
Alrajeh D, Kramer J, Russo A, et al., 2015, Automated Support for Diagnosis and Repair, Communications of the ACM, Vol: 58, Pages: 65-72, ISSN: 0001-0782
D'Ippolito N, Braberman V, Kramer J, et al., 2014, Hope for the best, prepare for the worst: multi-tier control for adaptive systems, 36th International Conference on Software Engineering, Publisher: ACM, Pages: 688-699
Most approaches for adaptive systems rely on models, particularly behaviour or architecture models, which describe the system and the environment in which it operates. One of the difficulties in creating such models is uncertainty about the accuracy and completeness of the models. Engineers therefore make assumptions which may prove to be invalid at runtime. In this paper we introduce a rigorous, tiered framework for combining behaviour models, each with different associated assumptions and risks. These models are used to generate operational strategies, through techniques such as controller synthesis, which are then executed concurrently at runtime. We show that our framework can be used to adapt the functional behaviour of the system: through graceful degradation when the assumptions of a higher level model are broken, and through progressive enhancement when those assumptions are satisfied or restored.
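The tiering idea summarised above can be pictured with a minimal sketch (not the paper's framework): each tier pairs an environment assumption with a pre-computed strategy, and at each step the system runs the highest tier whose assumption still holds, degrading gracefully and re-enhancing when assumptions are restored. The assumptions and strategy names below are hypothetical:

```python
# Illustrative tier selection: strongest-assumption tier first, weakest last.
def select_tier(tiers, observations):
    """tiers: list of (assumption, strategy), ordered strongest assumption first."""
    for assumption, strategy in tiers:
        if assumption(observations):
            return strategy
    raise RuntimeError("no tier applicable: even the weakest assumption is broken")

tiers = [
    (lambda obs: obs["gps"] and obs["camera"], "full-autonomy strategy"),
    (lambda obs: obs["gps"],                   "degraded navigation strategy"),
    (lambda obs: True,                         "safe-stop strategy"),
]
print(select_tier(tiers, {"gps": True, "camera": False}))  # degraded navigation strategy
```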
Rodrigues P, Lupu E, Kramer J, 2014, Compositional reliability analysis using probabilistic component automata, Departmental Technical Report: 14/9, Publisher: Department of Computing, Imperial College London
Compositionality is a key property in the development and analysis of component-based systems. In non-probabilistic formalisms such as Labelled Transition Systems (LTS) the functional behaviour of a system can be readily constructed from the behaviours of its parts. However, this is not true for probabilistic extensions of LTS, which are necessary to analyse non-functional properties such as reliability. We propose Probabilistic Component Automata (PCA) as a probabilistic extension to Interface Automata to automatically construct a system model by composing models of its sub-components. In particular, we focus on modelling failure scenarios, failure handling and failure propagation. Additionally, we propose a novel algorithm based on Compositional Reachability Analysis to mitigate the well-known state-explosion problem associated with composable models. Both Probabilistic Component Automata and the reduction algorithm have been implemented in the LTSA tool.
Rodrigues P, Lupu EC, Kramer J, 2014, LTSA-PCA: Tool Support for Compositional Reliability Analysis, ICSE Companion 2014, Publisher: ACM, Pages: 548-551
Uchitel S, Alrajeh D, Ben-David S, et al., 2013, Supporting incremental behaviour model elaboration, Computer Science - Research and Development, Vol: 28, Pages: 279-293, ISSN: 1865-2034
Behaviour model construction remains a difficult and labour-intensive task which hinders the adoption of model-based methods by practitioners. We believe one reason for this is the mismatch between traditional approaches and current software development process best practices, which include iterative development, adoption of use-case and scenario-based techniques and viewpoint- or stakeholder-based analysis; practices which require modelling and analysis in the presence of partial information about system behaviour. Our objective is to address the limitations of behaviour modelling and analysis by shifting the focus from traditional behaviour models and verification techniques that require full behaviour information to partial behaviour models and analysis techniques that drive model elaboration rather than asserting adequacy. We aim to develop sound theory, techniques and tools that facilitate the construction of partial behaviour models through model synthesis, enable partial behaviour model analysis and provide feedback that prompts incremental elaboration of partial models. In this paper we present how the different research threads that we have developed and are currently developing help pursue this vision as part of the "Partial Behaviour Modelling - Foundations for Iterative Model Based Software Engineering" Starting Grant funded by the ERC. We cover partial behaviour modelling theory and construction, controller synthesis, automated diagnosis and refinement, and behaviour validation. © 2012 Springer-Verlag Berlin Heidelberg.
Sykes D, Corapi D, Magee J, et al., 2013, Learning Revised Models For Planning In Adaptive Systems, 35th IEEE/ACM International Conference on Software Engineering, Publisher: IEEE/ACM, Pages: 63-71
Sibay G, Braberman V, Uchitel S, et al., 2013, Synthesising Modal Transition Systems from Triggered Scenarios, IEEE Transactions on Software Engineering, ISSN: 0098-5589
Sibay GE, Uchitel S, Braberman V, et al., 2012, Distribution of modal transition systems, Pages: 403-417, ISSN: 0302-9743
In order to capture all permissible implementations, partial models of component-based systems are given at the system level. However, iterative refinement by engineers is often more convenient at the component level. In this paper, we address the problem of decomposing partial behaviour models from a single monolithic model to a component-wise model. Specifically, given a Modal Transition System (MTS) M and component interfaces (the set of actions each component can control/monitor), can MTSs M1, ..., Mn matching the component interfaces be produced such that independent refinement of each Mi will lead to a component Labelled Transition System (LTS) Ii such that composing the Ii's results in a system LTS that is a refinement of M? We show that a sound and complete distribution can be built when the MTS to be distributed is deterministic, transition modalities are consistent and the LTS determined by its possible transitions is distributable. © 2012 Springer-Verlag.
Alrajeh D, Russo A, Uchitel S, et al., 2012, Integrating model checking and inductive logic programming, Pages: 45-60, ISSN: 0302-9743
Inductive Logic Programming can be used to provide automated support to help correct the errors identified by model checking, which in turn provides the relevant context for learning hypotheses that are meaningful within the domain of interest. Model checking and Inductive Logic Programming can thus be seen as two complementary approaches with much to gain from their integration. In this paper we present a general framework for such an integration, discuss its main characteristics and present an overview of its application. © 2012 Springer-Verlag Berlin Heidelberg.
Alrajeh D, Kramer J, van Lamsweerde A, et al., 2012, Generating obstacle conditions for requirements completeness, 34th International Conference on Software Engineering, Publisher: IEEE, Pages: 705-715, ISSN: 1558-1225
Missing requirements are known to be among the major causes of software failure. They often result from a natural inclination to conceive over-ideal systems where the software-to-be and its environment always behave as expected. Obstacle analysis is a goal-anchored form of risk analysis whereby exceptional conditions that may obstruct system goals are identified, assessed and resolved to produce complete requirements. Various techniques have been proposed for identifying obstacle conditions systematically. Among these, the formal ones have limited applicability or are costly to automate. This paper describes a tool-supported technique for generating a set of obstacle conditions guaranteed to be complete and consistent with respect to the known domain properties. The approach relies on a novel combination of model checking and learning technologies. Obstacles are iteratively learned from counterexample and witness traces produced by model checking against a goal and converted into positive and negative examples, respectively. A comparative evaluation is provided with respect to published results on the manual derivation of obstacles in a real safety-critical system for which failures have been reported.
Kramer J, 2012, Whither Software Architecture? (Keynote), 34th International Conference on Software Engineering (ICSE), Publisher: IEEE, Pages: 963-963, ISSN: 0270-5257
Alrajeh D, Kramer J, Russo A, et al., 2012, Learning from Vacuously Satisfiable Scenario-Based Specifications, 15th International Conference on Fundamental Approaches to Software Engineering (FASE), Publisher: Springer-Verlag Berlin, Pages: 377-393, ISSN: 0302-9743
Alrajeh D, Kramer J, Russo A, et al., 2012, Elaborating Requirements using Model Checking and Inductive Learning, IEEE Transactions on Software Engineering, ISSN: 0098-5589
Alrajeh D, Kramer J, Russo A, et al., 2011, An Inductive approach for modal transition system refinement, Pages: 106-116, ISSN: 1868-8969
Modal Transition Systems (MTSs) provide an appropriate framework for modelling software behaviour when only a partial specification is available. A key characteristic of an MTS is that it explicitly models events that a system is required to provide and is proscribed from exhibiting, and those for which no specification is available, called maybe events. Incremental elaboration of maybe events into either required or proscribed events can be seen as a process of MTS refinement, resulting from extending a given partial specification with more information about the system behaviour. This paper focuses on providing automated support for computing strong refinements of an MTS with respect to event traces that describe required and proscribed behaviours using a non-monotonic inductive logic programming technique. A real case study is used to illustrate the practical application of the approach.
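The required/maybe distinction described above can be illustrated with a minimal sketch (not the paper's inductive logic programming technique): refinement promotes a maybe transition to required when a trace shows the behaviour must be allowed, or removes it when a trace shows it is proscribed. The states, events and example traces are hypothetical:

```python
# Minimal sketch of an MTS with required and "maybe" transitions, and of the
# two elementary refinement steps driven by example traces.
class MTS:
    def __init__(self, required, maybe):
        self.required = set(required)   # {(state, event, state')} the system must provide
        self.maybe = set(maybe)         # behaviour not yet required nor proscribed

    def promote(self, transition):      # evidence: this behaviour is required
        self.maybe.discard(transition)
        self.required.add(transition)

    def proscribe(self, transition):    # evidence: this behaviour must not occur
        self.maybe.discard(transition)

# Toy model: behaviour after a failure is initially unspecified ("maybe").
m = MTS(required={(0, "send", 1), (1, "fail", 2)},
        maybe={(2, "retry", 0), (2, "abort", 3)})
m.promote((2, "retry", 0))     # a required trace exercised 'retry'
m.proscribe((2, "abort", 3))   # a proscribed trace ruled out 'abort'
print(m.required, m.maybe)
```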
Sykes D, Magee J, Kramer J, 2011, FlashMob: Distributed adaptive self-assembly, Pages: 100-109, ISSN: 0270-5257
Autonomous systems need to support dynamic software adaptation in order to handle the complexity and unpredictability of the execution environment, and the changing needs of the end user. Although a number of approaches have been proposed, few address a key issue: that of distribution. In this paper we seek to overcome the limitations of centralised approaches. We build on our previous work on adaptive self-assembly within the three-layer model for autonomous systems to provide a decentralised technique for self-assembly. To achieve this in a fault-tolerant and scalable manner, we use a gossip protocol as a basis. While no central or leader node is aware of the full space of solutions, gossip ensures that agreement on a particular solution - in this case a component configuration - is reached in a logarithmic number of steps with respect to the size of the network. © 2011 ACM.
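As a rough illustration of the gossip-style agreement the work builds on (not the FlashMob algorithm itself), the sketch below has each node repeatedly exchange its best-known configuration with a random peer, so that all nodes converge on the best value in roughly log2(n) rounds with no central coordinator; the node count and the utility scores are invented for the example:

```python
# Gossip-based agreement on the best-scoring configuration (illustrative only).
import math, random

def gossip_rounds(values):
    nodes, rounds = list(values), 0
    while len(set(nodes)) > 1:
        for i in range(len(nodes)):
            j = random.randrange(len(nodes))   # pick a random peer
            best = max(nodes[i], nodes[j])     # both adopt the better configuration
            nodes[i] = nodes[j] = best
        rounds += 1
    return rounds

n = 1024
scores = [random.random() for _ in range(n)]   # utility of each node's local configuration
print(gossip_rounds(scores), "rounds vs log2(n) =", round(math.log2(n), 1))
```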
McVeigh A, Kramer J, Magee J, 2011, Evolve: Tool Support for Architecture Evolution, 33rd International Conference on Software Engineering (ICSE), Publisher: IEEE, Pages: 1040-1042