Imperial College London

Dr Peter McBrien

Faculty of Engineering, Department of Computing

Senior Lecturer - Computing
 
 
 

Contact

 

+44 (0)20 7594 8202
p.mcbrien
Website

 
 

Location

 

555 Huxley Building, South Kensington Campus


Summary

 

Publications


72 results found

McBrien P, Poulovassilis A, 2019, Conceptual modelling approach to visualising linked data, OTM 2019, Publisher: Elsevier, Pages: 227-245

Increasing numbers of Linked Open Datasets are being published, and many possible data visualisations may be appropriate for a user’s given exploration or analysis task over a dataset. Users may therefore find it difficult to identify visualisations that meet their data exploration or analysis needs. We propose an approach that creates conceptual models of groups of commonly used data visualisations, which can be used to analyse the data and users’ queries so as to automatically generate recommendations of possible visualisations. To our knowledge, this is the first work to propose a conceptual modelling approach to recommending visualisations for Linked Data.

Conference paper

McBrien P, Poulovassilis A, 2018, Towards data visualisation based on conceptual modelling, Conceptual Modeling 37th International Conference, ER 2018, Publisher: Springer, Pages: 91-99, ISSN: 0302-9743

Selecting data, transformations and visual encodings in current data visualisation tools is undertaken at a relatively low level of abstraction - namely, on tables of data - and ignores the conceptual model of the data. Domain experts, who are likely to be familiar with the conceptual model of their data, may find it hard to understand tabular data representations, and hence hard to select appropriate data transformations and visualisations to meet their exploration or question-answering needs. We propose an approach that addresses these problems by defining a set of visualisation schema patterns that each characterise a group of commonly-used data visualisations, and by using knowledge of the conceptual schema of the underlying data source to create mappings between it and the visualisation schema patterns. To our knowledge, this is the first work to propose a conceptual modelling approach to matching data and visualisations.

Conference paper

McBrien P, Liu Y, 2017, SPOWL: Spark-based OWL 2 Reasoning Materialisation, BeyondMR 2017, Publisher: ACM

This paper presents SPOWL, which uses Spark to perform OWL reasoning over large ontologies. SPOWL acts as a compiler, which maps axioms in the T-Box of an ontology to Spark programmes, which are executed iteratively to compute and materialise a closure of the reasoning results entailed by the ontology. This closure is then available to queries which retrieve information from the ontology. Compared to MapReduce, adopting Spark enables SPOWL to cache data in distributed memory, to reduce the amount of I/O used, and to parallelise jobs in a more flexible manner. We further analyse the dependencies among the Spark programmes, and propose an optimised order following the T-Box hierarchy, which makes the materialisation process terminate with a minimum number of iterations. Moreover, SPOWL uses a tableaux reasoner to classify the T-Box, and the classified axioms are compiled into Spark programmes which are directly related to the ontological data under reasoning. This not only makes the reasoning by SPOWL more complete, but also avoids processing unnecessary rules, as compared to evaluating the fixed rulesets adopted by most state-of-the-art reasoners. Finally, since SPOWL materialises the reasoning closure for large ontologies, it processes queries retrieving ontology information faster than computing the query answers in real time.
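The core idea of materialising a reasoning closure by iterating until a fixpoint can be sketched as follows. This is an illustrative sketch, not SPOWL's implementation: plain Python sets stand in for Spark's distributed datasets, and the class names are invented for the example.

```python
# Illustrative sketch of fixpoint materialisation: derive all rdf:type
# facts entailed by rdfs:subClassOf axioms, iterating until no new facts
# appear (the "closure" a reasoner like SPOWL would materialise).

def materialise(sub_class_of, types):
    """sub_class_of: set of (sub, sup) T-Box axioms.
    types: set of (individual, cls) A-Box assertions.
    Returns the closure of `types` under the subclass axioms."""
    closure = set(types)
    changed = True
    while changed:  # each pass corresponds to one iterative job
        changed = False
        derived = {(ind, sup)
                   for (ind, cls) in closure
                   for (sub, sup) in sub_class_of
                   if sub == cls}
        new = derived - closure
        if new:
            closure |= new
            changed = True
    return closure

# Hypothetical example: Student ⊑ Person ⊑ Agent, with alice : Student.
tbox = {("Student", "Person"), ("Person", "Agent")}
abox = {("alice", "Student")}
print(materialise(tbox, abox))
# closure contains ("alice", "Person") and ("alice", "Agent")
```

Ordering the passes by the T-Box hierarchy, as the paper proposes, bounds how many such iterations are needed before the fixpoint is reached.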

Conference paper

Liu Y, McBrien P, 2017, Transactional and incremental type inference from data updates, Computer Journal, Vol: 60, Pages: 347-368, ISSN: 0010-4620

A distinctive property of relational database systems is the ability to perform data updates and queries in atomic blocks called transactions, with the well-known ACID properties. To date, the ability of systems performing reasoning to maintain the ACID properties, even over data held within a relational database, has been largely ignored. This article studies an approach to reasoning over data from OWL 2 RL ontologies held in a relational database, where the ACID properties of transactions are maintained. Taking an incremental approach to maintaining materialised views of the result of reasoning, the approach is demonstrated to support a query and reasoning performance comparable to or better than other OWL reasoning systems, yet adding the important benefit of supporting transactions.

Journal article

Al Khuzayem L, McBrien P, 2016, Extracting OWL ontologies from relational databases using data analysis and machine learning, 12th International Baltic Conference on Databases and Information Systems (DB and IS), Publisher: IOS PRESS, Pages: 43-56, ISSN: 0922-6389

Extracting OWL ontologies from relational databases is extremely helpful for realising the Semantic Web vision. However, most of the approaches in this context often drop many of the expressive features of OWL. This is because highly expressive axioms cannot be detected from the database schema alone, but instead require a combined analysis of the database schema and data. In this paper, we present an approach that transforms a relational schema to a basic OWL schema, and then enhances it with rich OWL 2 constructs using schema and data analysis techniques. We then rely on the user for the verification of these features. Furthermore, we apply machine learning algorithms to help in ranking the resulting features based on user-supplied relevance scores. Testing our tool on a number of databases demonstrates that our proposed approach is feasible and effective.

Conference paper

McBrien P, Al Khuzayem L, 2016, OWLRel: learning rich ontologies from relational databases, Baltic Journal of Modern Computing, Vol: 4, Pages: 466-482, ISSN: 2255-8942

Mapping between ontologies and relational databases is a necessity for realising the Semantic Web vision. Most of the work concerning this topic has either (1) extracted an OWL schema, using a limited range of OWL modelling constructs, from a relational schema, or (2) extracted a relational schema from an OWL schema, that represents as much as possible of the OWL schema. By contrast, we propose a general framework that maps between relational databases and schemas expressed in OWL 2. In particular, we regard the transformation from databases to ontologies as a two-phase process. Firstly, convert the relational schema into an OWL schema, and secondly, enrich the OWL schema with highly expressive axioms based on analysing the schema and the data in the database. Testing our data analysis heuristics on a number of databases shows that they produce an OWL schema that includes more semantic information than found in the relational schema.

Journal article

Liu Y, McBrien P, 2015, Transactional and Incremental Type Inference from Data Updates, 30th British International Conference on Databases (BICOD), Publisher: SPRINGER-VERLAG BERLIN, Pages: 206-219, ISSN: 0302-9743

Conference paper

Liu Y, McBrien P, 2013, SQOWL2: Transactional type inference for OWL 2 DL in an RDBMS, Pages: 779-790, ISSN: 1613-0073

SQOWL2 is a compiler which allows an RDBMS to support sound reasoning in the SROIQ(D) description logic, by implementing ontologies expressed in the OWL 2 DL language as a combination of tables and triggers in the RDBMS. The reasoning process is divided into two phases: classification of the T-Box and type inference over the A-Box. SQOWL2 establishes a relational schema based on classification completed using the Pellet reasoner, and performs type inference by using SQL triggers. SQOWL2 supports type inference over all OWL 2 DL constructs, and supports more conventional relational schemas, rather than naively mapping OWL classes and properties to relational tables with one and two columns. Moreover, SQOWL2 is a transactional reasoning system (with full ACID properties), since the results of reasoning are available within the same transaction as that in which the base data of the reasoning was inserted.
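The trigger-based, transactional flavour of type inference can be sketched in a few lines. This is an illustrative sketch, not SQOWL2 itself: it uses Python's sqlite3 module, and the tables, class names and trigger name are invented for the example.

```python
# Illustrative sketch of trigger-based type inference in an RDBMS:
# the axiom Student ⊑ Person is compiled to an AFTER INSERT trigger,
# so the inferred fact becomes visible in the same transaction as the
# base insert (the transactional property described in the abstract).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE student (id TEXT PRIMARY KEY);
CREATE TABLE person  (id TEXT PRIMARY KEY);

-- Every newly inserted student is also inferred to be a person.
CREATE TRIGGER infer_person AFTER INSERT ON student
BEGIN
    INSERT OR IGNORE INTO person(id) VALUES (NEW.id);
END;
""")

with conn:  # one transaction: base fact and inferred fact commit together
    conn.execute("INSERT INTO student(id) VALUES ('alice')")

rows = conn.execute("SELECT id FROM person").fetchall()
print(rows)  # [('alice',)] - derived without any reasoner outside the DBMS
```

A real system would generate one such trigger per compiled axiom, over the full set of OWL 2 DL constructs.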

Conference paper

Khuzayem LA, McBrien P, 2012, Knowledge transformation using a hypergraph data model, Pages: 1-7

In the Semantic Web, knowledge integration is frequently performed between heterogeneous knowledge bases. Such knowledge integration often requires that the schema expressed in one knowledge modelling language be translated into an equivalent schema in another knowledge modelling language. This paper defines how schemas expressed in OWL-DL (the Web Ontology Language using Description Logic) can be translated into equivalent schemas in the Hypergraph Data Model (HDM). The HDM is used in the AutoMed data integration (DI) system. It allows constraints found in data modelling languages to be represented by a small set of primitive constraint operators. By mapping into the AutoMed HDM language, we are then able to further map the OWL-DL schemas into any of the existing modelling languages supported by AutoMed. We show how previously defined transformation rules between relational and HDM schemas, and our newly defined rules between OWL-DL and HDM schemas, can be composed to give a bidirectional mapping between OWL-DL and relational schemas through the use of the both-as-view approach in AutoMed.

Conference paper

McBrien PJ, Rizopoulos N, Smith AC, 2012, Type inference methods and performance for data in an RDBMS, SWIM '12 Proceedings of the 4th International Workshop on Semantic Web Information Management, Publisher: ACM

In this paper we survey and measure the performance of methods for reasoning using OWL-DL rules over data stored in an RDBMS. OWL-DL reasoning may be broken down into two processes of classification and type inference. In the context of databases, classification is the process of deriving additional schema constructs from existing schema constructs in a database, while type inference is the process of inferring values for tables/columns from values in other tables/columns. Thus it is the process of type inference that is the focus of this paper, since as data values are inserted into a database, there is the need to use the inserted data to derive new facts. The contribution of this paper is that we place the existing methods for type inference over relational data into a new general framework, and classify the methods into three different types: Application Based Reasoning uses reasoners outside of the DBMS to perform type inference, View Based Reasoning uses DBMS views to perform type inference, and Trigger Based Reasoning uses DBMS active rules to perform type inference. We discuss the advantages of each of the three methods, and identify a list of properties that each method might be expected to meet. One key property we identify is transactional reasoning, where the result of reasoning is made available within a database transaction, and we show that most reasoners today fail to have this property. We also present the results of experimental analysis of representative implementations of each of the three methods, and use the results of the experiments to justify conclusions as to when each of the methods discussed is best deployed for particular classes of application.
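To make the contrast concrete, View Based Reasoning can be sketched as follows. This is an illustrative sketch, not code from the paper: it uses Python's sqlite3 module, and the table, view and class names are invented for the example.

```python
# Illustrative sketch of View Based Reasoning: the subclass entailment
# Student ⊑ Person is encoded as a DBMS view, so inferred facts are
# computed at query time rather than materialised by triggers.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE student (id TEXT PRIMARY KEY);
CREATE TABLE person_base (id TEXT PRIMARY KEY);

-- Persons are the explicitly asserted ones plus every student,
-- derived afresh on each query against the view.
CREATE VIEW person AS
    SELECT id FROM person_base
    UNION
    SELECT id FROM student;
""")

conn.execute("INSERT INTO student(id) VALUES ('alice')")
conn.execute("INSERT INTO person_base(id) VALUES ('bob')")

rows = sorted(conn.execute("SELECT id FROM person"))
print(rows)  # [('alice',), ('bob',)]
```

Compared with a trigger-based scheme, nothing is stored for the derived facts: the trade-off the paper's experiments measure is query-time computation (views) against insert-time materialisation (triggers).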

Conference paper

McBrien PJ, Rizopoulos N, Smith AC, 2010, SQOWL: Type Inference in an RDBMS, 29th International Conference on Conceptual Modeling, Publisher: SPRINGER-VERLAG BERLIN, Pages: 362-376, ISSN: 0302-9743

Conference paper

Rizopoulos N, McBrien P, 2009, Schema Merging Based on Semantic Mappings, 26th British National Conference on Databases, Publisher: SPRINGER-VERLAG BERLIN, Pages: 193-198, ISSN: 0302-9743

Conference paper

Smith A, McBrien P, 2008, AutoModelGen: A generic data level implementation of modelgen, Pages: 65-68, ISSN: 1613-0073

The model management operator ModelGen translates a schema expressed in one modelling language into an equivalent schema expressed in another modelling language, and in addition produces a mapping between those two schemas. AutoModelGen is a generic data level implementation of ModelGen that meets these desiderata. Our approach is distinctive in that (i) it takes a generic approach that can be applied to any modelling language, and (ii) it does not rely on knowing the modelling language in which the source schema is expressed.

Conference paper

McBrien P, 2008, Translating schemas between data modelling languages, Information Systems Engineering: From Data Analysis to Process Networks, Pages: 1-15, ISBN: 9781599045672

Data held in information systems is modelled using a variety of languages, where the choice of language may be decided by functional concerns as well as non-technical concerns. This chapter focuses on data modelling languages, and the challenges faced in mapping schemas in one data modelling language into another data modelling language. We review the ER, relational and UML modelling languages (the latter being representative of object oriented programming languages), highlighting aspects of each modelling language that are not representable in the others. We describe how a nested hypergraph data model may be used as an underlying representation of data models, and hence present the differences between the modelling languages in a more precise manner. Finally, we propose a platform for the future building of an automated procedure for translating schemas from one modelling language to another.

Book chapter

Le DM, Smith AC, McBrien P, 2008, Robust data exchange for unreliable P2P networks, 19th International Conference on Database and Expert Systems Applications, Publisher: IEEE COMPUTER SOC, Pages: 352-356, ISSN: 1529-4188

Conference paper

Smith A, McBrien P, 2008, A generic data level implementation of ModelGen, 25th British National Conference on Databases, Publisher: SPRINGER-VERLAG BERLIN, Pages: 63-74, ISSN: 0302-9743

Conference paper

McBrien P, 2008, Temporal Constraints in Non-temporal Data Modelling Languages, ER 2008 Workshops held in Conjunction with the 27th International Conference on Conceptual Modeling, Publisher: SPRINGER-VERLAG BERLIN, Pages: 412-425, ISSN: 0302-9743

Conference paper

Smith A, Rizopoulos N, McBrien P, 2008, AutoMed Model Management, ER 2008 Workshops held in Conjunction with the 27th International Conference on Conceptual Modeling, Publisher: SPRINGER-VERLAG BERLIN, Pages: 542-543, ISSN: 0302-9743

Conference paper

McBrien P, Poulovassilis A, 2007, P2P query reformulation over both-as-view data transformation rules, 4th International Workshop on Databases, Information Systems, and Peer-to-Peer Computing, Publisher: SPRINGER-VERLAG BERLIN, Pages: 310-+, ISSN: 0302-9743

Conference paper

McBrien P, Smith A, 2006, Comparing and Transforming Between Data Models via an Intermediate Hypergraph Data Model, International Workshop Data Integration and the Semantic Web, Luxembourg, Publisher: Presses Universitaires de Namur, Pages: 307-321

Data exchange between heterogeneous schemas is a difficult problem that becomes more acute if the source and target schemas are from different data models. The data type of the objects to be exchanged can be useful information that should be exploited to help the data exchange process. So far little has been done to take advantage of this in inter model data exchange. Using a common data model has been shown to be effective in data exchange in general. This work aims to show how the common data model approach can be useful specifically in exchanging type information by use of a common type hierarchy.

Conference paper

McBrien P, Rizopoulos N, Lazanitis C, Bellahsène Z et al., 2006, iXPeer: Implementing layers of abstraction in P2P Schema Mapping using AutoMed, 2nd Workshop on Innovations in Web Infrastructure

The task of model based data integration becomes more complicated when the data sources to be integrated are distributed, heterogeneous, and high in number. One recent solution to the issues of distribution and scale is to perform data integration using peer-to-peer (P2P) networks. Current P2P data integration architectures have mostly been flat, only specifying mappings directly between peers. Some do form the schemas into hierarchies, but none provide any abstraction of the schemas. This paper describes a set of general purpose P2P meta-data and data exchange primitives provided by an extended version of the AutoMed toolkit, and uses the primitives to implement a new architecture called iXPeer. iXPeer deals with integration on several levels of abstraction, where the lower levels define precise mappings between data source schemas, but the higher levels are looser associations based on keywords.

Conference paper

Bellahsene Z, Lazanitis C, McBrien P, Rizopoulos N et al., 2006, Querying Distributed Data in a Super-Peer Based Architecture, IWI2006

Conference paper

McBrien P, 2006, Inter Model Data Exchange of Type Information via a Common Type Hierarchy, DISWeb06, Publisher: Presses Universitaires de Namur, Pages: 307-321

Conference paper

Kittivoravitkul S, McBrien P, 2005, Integrating unnormalised semi-structured data sources, 17th International Conference on Advanced Information Systems Engineering, Publisher: SPRINGER-VERLAG BERLIN, Pages: 460-474, ISSN: 0302-9743

Conference paper

Magnani M, Rizopoulos N, McBrien P, Montesi D et al., 2005, Schema integration based on uncertain semantic mappings, Berlin, 24th International Conference on Conceptual Modeling, 24 - 28 October 2005, Klagenfurt, Austria, Publisher: Springer-Verlag, Pages: 31-46

Conference paper

Boyd M, McBrien PJ, 2005, Comparing and transforming between data models via an intermediate hypergraph data model, Journal on Data Semantics, Vol: 4, Pages: 69-109, ISSN: 0302-9743

Journal article

Rizopoulos N, Magnani M, McBrien P, Montesi D et al., 2005, Uncertainty in Semantic Schema Integration, 22nd British National Conference on Databases (BNCOD) 2005, Publisher: Univ. Sunderland Press, Pages: 13-16

In this paper we present a new method of semantic schema integration, based on uncertain semantic mappings. The purpose of semantic schema integration is to produce a unified representation of multiple data sources. First, schema matching is performed to identify the semantic mappings between the schema objects. Then, an integrated schema is produced during the schema merging process based on the identified mappings. If all semantic mappings are known, schema merging can be performed (semi-)automatically.

Conference paper

Rizopoulos N, McBrien P, 2005, A general approach to the generation of conceptual model transformations, 17th International Conference on Advanced Information Systems Engineering, Publisher: SPRINGER-VERLAG BERLIN, Pages: 326-341, ISSN: 0302-9743

Conference paper

Jasper E, Tong N, McBrien P, Poulovassilis A et al., 2005, Generating and optimising views from both-as-view data integration rules, Amsterdam, 6th international baltic conference on databases and information systems, Riga, LATVIA, 6 - 9 June 2004, Publisher: I O S Press, Pages: 3-19

Conference paper

Boyd M, Kittivoravitkul S, Lazanitis C, McBrien P, Rizopoulos N et al., 2004, AutoMed: a BAV data integration system for heterogeneous data sources, Berlin, 16th international conference on advanced information systems engineering, Fac Comp Sci & Informat Technol, Riga, LATVIA, Publisher: Springer-Verlag, Pages: 82-97

Conference paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
