Lucke M, Chioua M, Grimholt C, et al., 2020, Integration of alarm design in fault detection and diagnosis through alarm-range normalization, Control Engineering Practice, Vol: 98, Pages: 1-12, ISSN: 0967-0661
Alarm systems designed according to engineering and safety considerations provide the primary source of information for operators when it comes to abnormal situations. Still, alarm systems have rarely been exploited for fault detection and diagnosis. Recent work has demonstrated the benefits of alarm logs for fault detection and diagnosis. However, alarm settings conceived during the alarm design stage can also be integrated into fault detection and diagnosis methods. This paper suggests the use of those alarm settings in the preprocessing of the process measurements, proposing a normalization based on the alarm thresholds of each process variable. Normalization is needed to render process measurements dimensionless for multivariate analysis. While common normalization approaches such as standardization depend on the historical process measurements available, the proposed alarm-range normalization is based on acceptable variations of the process measurements. An industrial case study of an offshore oil and gas separation plant is used to demonstrate that the alarm-range normalization improves the robustness of popular methods for fault detection, fault isolation, and fault identification.
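The alarm-range normalization described here lends itself to a short sketch. The mapping below — alarm band centred at 0 with the low/high thresholds at -1/+1 — and all numbers are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def alarm_range_normalize(x, low_alarm, high_alarm):
    """Scale a process variable using its alarm thresholds instead of its
    historical mean and standard deviation. In this hypothetical reading,
    the midpoint of the alarm band maps to 0 and the low/high alarm
    limits map to -1/+1, so values beyond +/-1 have crossed an alarm."""
    center = 0.5 * (high_alarm + low_alarm)
    half_range = 0.5 * (high_alarm - low_alarm)
    return (x - center) / half_range

# Illustrative separator pressure readings (bar) with alarms at 8 and 12 bar
pressure = np.array([9.0, 10.0, 11.5, 12.5])
print(alarm_range_normalize(pressure, 8.0, 12.0))  # [-0.5, 0.0, 0.75, 1.25]
```

Unlike standardization, this scaling does not change as new historical data is collected, which is the robustness argument made in the abstract.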
Tan R, Cong T, Ottewill JR, et al., An on-line framework for monitoring nonlinear processes with multiple operating modes, Journal of Process Control, ISSN: 0959-1524
A multivariate statistical process monitoring scheme should be able to describe multimodal data. Multimodality typically arises in process data due to varying production regimes. Moreover, multimodality may influence how easy it is for process operators to interpret the monitoring results. To address these challenges, this paper proposes an on-line monitoring framework for anomaly detection, where an anomaly may indicate either a fault occurring and developing in the process or the process moving to a new operating mode. The framework incorporates the Dirichlet process, which is an unsupervised clustering method, and kernel principal component analysis with a new kernel specialized for multimode data. A monitoring model is trained using the data obtained from several healthy operating modes. When on-line, if a new healthy operating mode is confirmed by an operator, the monitoring model is updated using data collected in the new mode. Implementation issues of this framework, including parameter tuning for the kernel and the selection of anomaly indicators, are also discussed. A bivariate numerical simulation is used to demonstrate the anomaly detection performance of the monitoring model. The ability of this framework to update the model and to detect anomalies in new operating modes is shown on data from an industrial-scale process using the PRONTO benchmark dataset. The examples also demonstrate the industrial applicability of the proposed framework.
Lucke M, Stief A, Chioua M, et al., 2020, Fault detection and identification combining process measurements and statistical alarms, Control Engineering Practice, Vol: 94, Pages: 1-12, ISSN: 0967-0661
Classification-based methods for fault detection and identification can be difficult to implement in industrial systems where process measurements are subject to noise and to variability from one fault occurrence to another. This paper uses statistical alarms generated from process measurements to improve the robustness of the fault detection and identification on an industrial process. Two levels of alarms are defined according to the position of the alarm threshold: level-1 alarms (low severity threshold) and level-2 alarms (high severity threshold). Relevant variables are selected using the minimal-Redundancy-Maximal-Relevance criterion of level-2 alarms to only retain variables with large variations relative to the level of noise. The classification-based fault detection and identification fuses the results of a discrete Bayesian classifier on level-1 alarms and of a continuous Bayesian classifier on process measurements. The discrete classifier offers a practical way to deal with noise during the development of the fault, and the continuous classifier ensures a correct classification during later stages of the fault. The method is demonstrated on a multiphase flow facility.
Tan R, Ottewill JR, Thornhill NF, 2019, Non-stationary discrete convolution kernel for multimodal process monitoring, IEEE Transactions on Neural Networks and Learning Systems, ISSN: 1045-9227
Data-driven process monitoring has benefited from the development and application of kernel transformations, especially when various types of nonlinearity exist in the data. However, when dealing with the multimodality behavior which is frequently observed in process operations, the most widely used Radial Basis Function kernel has limitations in describing process data collected from multiple normal operating modes. In this paper, we highlight this limitation via a synthesized example. In order to account for the multimodality behavior and improve fault detection performance accordingly, we propose a novel Non-stationary Discrete Convolution kernel, which derives from the convolution kernel structure, as an alternative to the RBF kernel. By assuming the training samples to be the support of the discrete convolution, this new kernel can properly address these training samples from different operating modes with diverse properties, and therefore can improve the data description and fault detection performance. Its performance is compared with RBF kernels under a standard kernel PCA framework and with other methods proposed for multimode process monitoring via numerical examples. Moreover, a benchmark data set collected from a pilot-scale multiphase flow facility is used to demonstrate the advantages of the new kernel when applied to an experimental data set.
Bauer M, Auret L, Bacci di Capaci R, et al., 2019, Industrial PID control loop data repository and comparison of fault detection methods, Industrial & Engineering Chemistry Research, Vol: 58, Pages: 11430-11439, ISSN: 0888-5885
This paper presents control loop data from industrial controllers that have recently been made available online. All of the data is confirmed, and some of it has previously been published to develop fault detection and diagnosis methods. Methods to detect faults that occur during the operation of an industrial process are important and have attracted attention previously, but are not always widely used in industry. One of the reasons is that any method needs to be robust and fully automated. The purpose of the data repository is to provide data for testing methods so that false positives and negatives are reduced to an insignificant number. Three previously published methods – oscillation detection based on the autocorrelation function, the idle index and a method for quantization detection – together with a simple, novel saturation detection method and one new detection method are applied to all industrial data. The results are discussed, along with ways to improve the robustness and automation potential of these methods.
Zhou B, Chioua M, Bauer M, et al., 2019, Improving root cause analysis by detecting and removing transient changes in oscillatory time series with application to a 1,3-butadiene process, Industrial & Engineering Chemistry Research, Vol: 58, Pages: 11234-11250, ISSN: 0888-5885
Oscillations occurring in industrial process plants often reflect the presence of severe disturbances affecting process operations. Accurate detection and root-cause analysis of oscillations are of great interest for the economic viability of the process operation. Standard oscillation detection and root-cause analysis methods require a sufficiently large number of data samples. Unrelated transient changes superimposed on the oscillation pattern reduce the number of useful data samples. The present paper proposes simple heuristic methods to effectively detect and remove two types of transient changes from oscillatory signals, namely step changes and spikes. The proposed methods are used to pre-process oscillatory time series. The accuracy gained when using the autocorrelation function method for oscillation detection and the transfer entropy method for oscillation propagation analysis is experimentally evaluated. The methods are demonstrated on a 1,3-butadiene production process where several measurements showed an established oscillation occurring after a production level change.
Zagorowska M, Ditlefsen A-M, Thornhill NF, et al., 2019, Turbomachinery degradation monitoring using adaptive trend analysis, 12th International-Federation-of-Automatic-Control (IFAC) Symposium on Dynamics and Control of Process Systems including Biosystems (DYCOPS), Publisher: International Federation of Automatic Control (IFAC), Pages: 679-684, ISSN: 1474-6670
Performance deterioration in turbomachinery is an unwanted phenomenon that changes the behaviour of the system. It can be described by a degradation indicator based on deviations from expected values of process variables. Existing models assume that the degradation is strictly increasing with fixed convexity and that there are no additional changes during the considered operating period. This work proposes the use of an exponential trend approximation with shape adaptation and applies it in a moving-window framework. The suggested method of adjustment makes it possible for the model to follow the evolution of the indicator over time. The approximation method is then applied for monitoring purposes, to predict future degradation. The influence of the tuning parameters on the accuracy of the algorithm is investigated and recommendations for their values are derived. Finally, directions for further work are proposed.
Lucke M, Mei X, Stief A, et al., 2019, Variable selection for fault detection and identification based on mutual information of alarm series, 12th International-Federation-of-Automatic-Control (IFAC) Symposium on Dynamics and Control of Process Systems including Biosystems (DYCOPS), Publisher: International Federation of Automatic Control (IFAC), Pages: 673-678, ISSN: 1474-6670
Reducing the dimensionality of a fault detection and identification problem is often a necessity, and variable selection is a practical way to do it. Methods based on mutual information have been successful in that regard, but their applicability to industrial processes is limited by characteristics of the process variables, such as their variability across fault occurrences. The paper introduces a new strategy for estimating mutual information criteria using alarm series to improve the robustness of the variable selection. The minimal-redundancy-maximal-relevance criterion on alarm series is suggested as a new reference criterion, and the results are validated on a multiphase flow facility.
Tan R, Cong T, Thornhill NF, et al., 2019, Statistical monitoring of processes with multiple operating modes, 12th International-Federation-of-Automatic-Control (IFAC) Symposium on Dynamics and Control of Process Systems including Biosystems (DYCOPS), Publisher: IFAC Secretariat, Pages: 635-642, ISSN: 1474-6670
Varying production regimes and loading conditions on equipment often result in multiple operating modes in process operations. The data recorded from such processes will typically be multimodal in nature leading to challenges in applying standard data-driven process monitoring approaches. Moreover, even if a monitoring approach is able to account for the variability present in a training set comprised of historical process data, in order to be robust and reliable the method will need to account for any new operating modes which might emerge during production. Therefore, it is desirable to have a monitoring algorithm that can both handle data multimodality in off-line training and, when implemented on-line, can actively update in order to incorporate new operating modes. This paper proposes a monitoring framework which combines an unsupervised clustering approach with a kernel-based Multivariate Statistical Process Monitoring (MSPM) algorithm. A monitoring model is trained off-line and is subsequently used to detect anomalies on-line. An anomaly might be indicative of either a developing fault or a change in the process to a new operating mode. In the latter case, the monitoring model can be updated to account for the new mode whilst still being able to detect faults under this framework. The advantages of the off-line training procedure relative to a standard kernel-based method are demonstrated via a numerical simulation. Additionally, the monitoring performance in the presence of faults and the capability of updating the model in the presence of new operating modes are demonstrated using a benchmark data set from an experimental pilot plant.
Stief A, Tan R, Cao Y, et al., 2019, A heterogeneous benchmark dataset for data analytics: Multiphase flow facility case study, Journal of Process Control, Vol: 79, Pages: 41-55, ISSN: 0959-1524
Improvements in sensing, connectivity and computing technologies mean that industrial processes now generate data from a variety of disparate sources. Data may take a number of forms, from time-domain signals, sampled at various rates using a variety of sensors, to alarm and event logs. Novel techniques need to be developed to tackle the challenges of heterogeneous data. Testing such algorithms requires benchmark datasets that allow direct comparison of the performance of the methods. This work presents the PRONTO heterogeneous benchmark dataset. Experiments were conducted on a multiphase flow facility under various operational conditions with and without induced faults. Data were collected from heterogeneous sources, including process measurements, alarm records, high frequency ultrasonic flow and pressure measurements. The presented dataset is suitable for developing and validating algorithms for fault detection and diagnosis and data fusion concepts.
Lucke M, Chioua M, Grimholt C, et al., 2019, Advances in alarm data analysis with a practical application to online alarm flood classification, Journal of Process Control, Vol: 79, Pages: 56-71, ISSN: 0959-1524
During an alarm flood, the alarm rate is greater than the operator can effectively manage. Many alarm data analysis methods have been proposed in the literature to mitigate the impact of alarm floods. This paper gives a review of the state of the art in alarm data analysis and aims at structuring the field. A distinction between sequence mining methods that apply to alarm sequences and time series analysis methods that apply to alarm series is suggested. The review highlights that online applications to help operators during alarm flood episodes have so far only been treated as a sequence mining problem in the literature. To address this gap, the paper also presents a binary series approach to classify ongoing alarm floods based on a set of historical alarm floods. The motivation for a binary series approach is demonstrated through an industrial case study of a gas-oil separation plant, and the performance of the presented method is compared with the performance of an established sequence alignment method.
Borghesan F, Chioua M, Thornhill NF, 2019, Forecasting of process disturbances using k-nearest neighbours, with an application in process control, Computers and Chemical Engineering, Vol: 128, Pages: 188-200, ISSN: 1873-4375
This paper examines the prediction of disturbances based on their past measurements using k-nearest neighbours. The aim is to provide a prediction of a measured disturbance to a controller, in order to improve the feed-forward action. The prediction method works in an unsupervised way, is robust against changes in the characteristics of the disturbance, and is simple and transparent in operation. The method is tested on data from industrial process plants and compared with predictions from an autoregressive model. Qualitative and quantitative methods for analysing the predictability of the time series are provided. As an example, the method is implemented in an MPC framework to control a simple benchmark model.
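A minimal k-nearest-neighbours forecast in this spirit matches the most recent window of the signal against history and averages the continuations of the closest matches. The window length, Euclidean distance, and sinusoidal test signal below are assumptions for the sketch, not the paper's settings.

```python
import numpy as np

def knn_forecast(series, window, horizon, k=3):
    """Forecast the next `horizon` samples by averaging the continuations
    of the k historical windows closest to the most recent window."""
    recent = series[-window:]
    candidates = []
    # slide over history, keeping only windows whose continuation is fully known
    for start in range(len(series) - window - horizon + 1):
        dist = np.linalg.norm(series[start:start + window] - recent)
        candidates.append((dist, start))
    candidates.sort(key=lambda c: c[0])
    continuations = [series[s + window:s + window + horizon]
                     for _, s in candidates[:k]]
    return np.mean(continuations, axis=0)

# Illustrative periodic disturbance: the forecast should continue the cycle
t = np.arange(400)
disturbance = np.sin(2 * np.pi * t / 50)
forecast = knn_forecast(disturbance, window=25, horizon=10)
print(forecast.shape)  # (10,)
```

Because the method only reuses observed continuations, it needs no disturbance model, which matches the unsupervised, model-free character claimed in the abstract.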
Cai L, Thornhill NF, Kuenzel S, et al., 2019, A test model of a power grid with battery energy storage and wide-area monitoring, IEEE Transactions on Power Systems, Vol: 34, Pages: 380-390, ISSN: 0885-8950
This paper presents a test model for investigating how to coordinate a power grid and Energy Storage Systems (ESSs) by Wide-Area Monitoring (WAM). It consists of three parts: (1) a model of a power grid containing different types of generators, loads and a transmission network; (2) a model of lithium-ion battery ESSs; (3) a model of WAM based on multivariate statistical analysis, built to capture grid information for guiding the operation of the ESSs. Simulation studies using a reduced equivalent model specifically built for a UK power grid enhanced with lithium-ion battery ESSs and WAM illustrate the way in which WAM can coordinate a power grid and ESSs, and also demonstrate the benefit of ESSs to a power grid.
Stief A, Ottewill J, Thornhill NF, 2018, Digital innovation driven by university collaboration, Publisher: ABB
Zagorowska M, Thornhill N, Haugen T, et al., 2018, Load-sharing strategy taking account of compressor degradation, Conference on Control Technology and Applications (CCTA), Publisher: IEEE, Pages: 489-495
The objective of designing a control structure that takes the degradation of the system into account is to preserve its performance and mitigate further damage. This problem is often encountered in the process industries, e.g. in gas processing plants, where the question arises of how to distribute the control effort among multiple actuators based on their degradation. The main focus of this work is to investigate how to assign the loads in a two-compressor system taking the degradation, i.e. the loss of available performance, into consideration. In contrast to other approaches, such as methods based on distance to surge or predictive control, the algorithm proposed in this work does not require a reconfiguration of the control structure while still explicitly taking the degradation into account. The simulation results confirm that this approach mitigates further loss of performance, in particular for compressors with significantly different degradation rates.
Cai L, Thornhill NF, Kuenzel S, et al., 2018, Wide-area monitoring of power systems using principal component analysis and k-nearest neighbor analysis, IEEE Transactions on Power Systems, Vol: 33, Pages: 4913-4923, ISSN: 0885-8950
Wide-area monitoring of power systems is important for system security and stability. It involves the detection and localization of power system disturbances. However, the oscillatory trends and noise in electrical measurements often mask disturbances, making wide-area monitoring a challenging task. This paper presents a wide-area monitoring method to detect and locate power system disturbances by combining multivariate analysis known as Principal Component Analysis (PCA) and time series analysis known as k-Nearest Neighbor (kNN) analysis. Advantages of this method are that it can not only analyze a large number of wide-area variables in real time but can also reduce the masking effect of the oscillatory trends and noise on disturbances. Case studies conducted on data from a four-variable numerical model and the New England power system model demonstrate the effectiveness of this method.
Lucke M, Chioua M, Grimholt C, et al., 2018, On improving fault detection and diagnosis using alarm-range normalisation, IFAC-PapersOnLine, Vol: 51, Pages: 1227-1232, ISSN: 2405-8963
Alarm systems based on engineering and safety considerations are the prime source of information for operators when it comes to abnormal situations. In contrast, the presence of fault detection and diagnosis algorithms in process plants is still limited in comparison with other process control technologies. This work presents a simple way to integrate the information contained in alarm systems into a fault detection and diagnosis algorithm. A normalisation of the process measurements based on the alarm thresholds is proposed, improving the robustness of the algorithm with regard to the variability of the measurements across fault occurrences in industrial systems.
Borghesan F, Chioua M, Thornhill NF, 2018, An MPC with disturbance forecasting for the control of the level of a tank with limited buffer capacity, Mediterranean Conference on Control and Automation (MED), Publisher: IEEE, Pages: 727-734, ISSN: 2473-3504
The paper deals with the behavior of an MPC for the control of the level of a tank whose inflow is subject to persistent plantwide disturbances. It is shown that the response of an industrial MPC can be aggressive and oscillatory in such situations, with the result that the disturbance propagates further. The reason is the assumption made by industrial MPCs regarding the future evolution of the disturbance. To improve the response in the presence of plantwide disturbances, an MPC with disturbance forecasting is proposed. Such an MPC is able to handle tight constraints and still reduce the movement of the outflow of the tank, thereby reducing the disturbance propagation. To compare the MPC with disturbance forecasting against two other strategies used in industrial practice to handle measured disturbances, the paper uses sinusoidal disturbances and real disturbances from a refinery.
Lucke M, Chioua M, Grimholt C, et al., 2018, Online alarm flood classification using alarm coactivations, IFAC-PapersOnLine, Vol: 51, Pages: 345-350, ISSN: 2405-8963
Alarms indicate abnormal operation of the process plants and alarm floods constitute specific abnormal episodes that cannot be handled safely by the operators. In that regard, online alarm flood classification based on a bank of past historical episodes provides support on how to handle ongoing alarm sequences. This paper introduces a new approach based on alarm coactivations that is appropriate for the analysis of ongoing sequences. The method shows improvements when compared to an established sequence alignment approach for abnormal episode analysis of a gas oil separation plant.
Xuan YY, Pretlove J, Thornhill N, 2018, Assessment of flexible operation in an LNG plant, 3rd IFAC Workshop on Automatic Control in Offshore Oil and Gas Production (OOGP), Publisher: IFAC Secretariat, Pages: 158-163, ISSN: 2405-8963
Process industries are becoming increasingly reliant on electrical power for reasons of efficiency and sustainability. A large industrial site typically has its own power management system to distribute electricity to the process and to manage electrical contingencies such as partial loss of supply. Recent work has illustrated more flexible alternatives to load shedding whereby an industrial process plant can continue to operate at a lower level making use of available electrical power. This paper presents a way of achieving such flexibility in a Liquefied Natural Gas (LNG) plant. It analyzes the consequences for production of varying the consumed power, and assesses the maximum flexibility within the feasible operating envelope of the process. The study has been conducted by modeling and simulation of an LNG plant using the Linde process with three refrigeration cycles. The results show the relationships between electrical power consumption and production in terms of production rate and product characteristics. They also show that the vapour-liquid equilibrium plays a crucial role in establishing the operating points and setting the boundaries within which the process has to work. Thus, through the assessment and simulation of an LNG plant, this work demonstrates that flexible operation has benefits over the alternatives: it achieves more operating points and therefore adds more flexibility.
Zagorowska M, Thornhill NF, Skourup C, 2018, Dynamic modelling and control of a compressor using Chebyshev polynomial approximation, ASME Turbo Expo 2018
The aim of this study is to apply a Chebyshev polynomial approximation of the compressor map for dynamic modelling and control of centrifugal compressors. The results are compared to those from an approximation based on third order polynomials and a compressor map derived from first principles. In the analysis of centrifugal compressors, a combination of dynamic conservation laws and a static compressor map provides an insight into the surge phenomenon, whose avoidance remains one of the objectives of compressor control. Compressor maps based on physical laws provide accurate results, but require detailed knowledge about the properties of the system, such as the geometry of the compressor and the gas quality. Third order polynomials are usually used as an approximation for the compressor map, providing simplified models at the expense of accuracy. Chebyshev polynomial approximation provides a trade-off between the accuracy of physical modelling and the ease of use of third order polynomial approximation.
Spuntrup FS, Londono JG, Skourup C, et al., 2018, Reliability improvement of compressors based on asset fleet reliability data, IFAC-PapersOnLine, Vol: 51, Pages: 217-224, ISSN: 2405-8963
Physical assets of the process industries include compressors, pumps, heat exchangers, batch reactors and many more. A large company that operates over many sites typically manages such assets in a coordinated way as an asset fleet. Strategic planning of maintenance and scheduling requires information about the reliability, availability and maintainability of the assets in an asset fleet. The work presented in this paper assesses the reliability of centrifugal compressors based on the data collected in OREDA (Offshore and onshore REliability DAta project). The fault tree (a top-down approach to illustrate all subsystems in a system) has been modeled by focusing on the six main subsystems of the compressor (power transmission, compressor, control and monitoring, lubrication system, shaft seal system, and miscellaneous). All the maintainable items described in ISO 14224 are considered. Based on the failure rates collected in OREDA, the most prevalent failures have been identified via a Pareto analysis. The article gives recommendations on which subsystems should be prioritized for maintenance and which types of faults are likely to occur. The main contribution of this paper is an industry-based statistical analysis of the failure mechanisms in centrifugal compressor systems. It is expected to improve the reliability of centrifugal compressor systems and can be implemented in industrial settings with a documentation system similar to OREDA.
Borghesan F, Chioua M, Thornhill NF, 2018, Forecast of persistent disturbances using k-nearest neighbour methods, Computer Aided Chemical Engineering, Pages: 631-636
This paper focuses on the prediction of persistent disturbances based on their past measurements using two versions of the k-nearest neighbours method: an unweighted and a weighted version. Results of tests on data from a refinery show that the two methods can predict the future trend of a disturbance. They also show that the weighted version is more robust against the choice of the number of nearest neighbours used. The method opens up the possibility of model-free feedforward control without the constraint of causality based on the whole history of a measurement.
Zagorowska M, Thornhill NF, 2017, Compressor map approximation using Chebyshev polynomials, IEEE 2017 25th Mediterranean Conference on Control and Automation (MED 2017), Publisher: IEEE, Pages: 864-869, ISSN: 2473-3504
Compressor maps are one of the main elements describing the behaviour of centrifugal compressors. Although the compressor map is often provided by the manufacturer, it may change during the lifetime of the compressor due to refurbishments or wear. Since compressor maps are often used in real-time optimization problems, there is a need for simple approximation methods. This paper focuses on approximating physical models using Chebyshev polynomials instead of third order polynomials, which are unable to capture some aspects of the compressor behaviour. Chebyshev polynomials capture the characteristics better than third order polynomials and provide a flexible tool for compressor map approximation and analysis.
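As a rough illustration of the idea (not the paper's formulation), NumPy's Chebyshev module can fit a single speed line of a compressor map; the (flow, pressure-ratio) samples below are invented for the sketch.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Invented samples along one speed line: a quadratic bump plus a mild ripple
flow = np.linspace(0.4, 1.0, 20)                      # normalized mass flow
pressure_ratio = 1.8 - 0.9 * (flow - 0.55) ** 2 + 0.05 * np.sin(8 * flow)

# Degree-5 Chebyshev least-squares fit of the speed line
coeffs = C.chebfit(flow, pressure_ratio, deg=5)
approx = C.chebval(flow, coeffs)

print(f"max fit error: {np.max(np.abs(approx - pressure_ratio)):.5f}")
```

A third order polynomial would reproduce the quadratic bump but not the ripple; raising the Chebyshev degree captures such features while keeping evaluation cheap, which is the trade-off the abstract refers to.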
Yang Y, Velayudhan A, Thornhill NF, et al., 2017, Multi-criteria manufacturability indices for ranking high-concentration monoclonal antibody formulations, Biotechnology and Bioengineering, Vol: 114, Pages: 2043-2056, ISSN: 1097-0290
The need for high-concentration formulations for subcutaneous delivery of therapeutic monoclonal antibodies (mAbs) can present manufacturability challenges for the final ultrafiltration/diafiltration (UF/DF) step. Viscosity levels and the propensity to aggregate are key considerations for high-concentration formulations. This work presents novel frameworks for deriving a set of manufacturability indices related to viscosity and thermostability to rank high-concentration mAb formulation conditions in terms of their ease of manufacture. This is illustrated by analysing published high-throughput biophysical screening data that explores the influence of different formulation conditions (pH, ions and excipients) on the solution viscosity and product thermostability. A decision tree classification method, CART (Classification and Regression Tree) is used to identify the critical formulation conditions that influence the viscosity and thermostability. In this work, three different multi-criteria data analysis frameworks were investigated to derive manufacturability indices from analysis of the stress maps and the process conditions experienced in the final UF/DF step. Polynomial regression techniques were used to transform the experimental data into a set of stress maps that show viscosity and thermostability as functions of the formulation conditions. A mathematical filtrate flux model was used to capture the time profiles of protein concentration and flux decay behaviour during UF/DF. Multi-criteria decision-making analysis was used to identify the optimal formulation conditions that minimize the potential for both viscosity and aggregation issues during UF/DF.
Cai L, Thornhill NF, Kuenzel S, et al., 2017, Real-time detection of power system disturbances based on k-nearest neighbor analysis, IEEE Access, Vol: 5, Pages: 5631-5639, ISSN: 2169-3536
Efficient disturbance detection is important for power system security and stability. In this paper, a new detection method is proposed based on a time series analysis technique known as k-nearest neighbor (kNN) analysis. Advantages of this method are that it can deal with electrical measurements with oscillatory trends and can be implemented in real time. The method consists of two stages: off-line modelling and on-line detection. The off-line stage calculates a sequence of anomaly index values by applying kNN to historical ambient data and then determines the detection threshold. The on-line stage then calculates the anomaly index value of the presently measured data by reapplying kNN and compares it with the established threshold to detect disturbances. To meet the real-time requirement, strategies for recursively calculating the distance metrics of kNN and for rapidly picking out the kth smallest metric are developed. Case studies conducted on simulation data from the reduced equivalent model of the Great Britain power system and on measurements from an actual power system in Europe demonstrate the effectiveness of the proposed method.
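The two-stage scheme can be sketched as below, taking the anomaly index to be the distance to the k-th nearest historical window and the threshold to be a percentile of the off-line indices. The data, window construction, and 99th-percentile rule are illustrative assumptions, not the paper's choices.

```python
import numpy as np

def anomaly_index(window, history, k=5):
    """kNN anomaly index: distance to the k-th nearest historical window."""
    dists = np.linalg.norm(history - window, axis=1)
    return np.sort(dists)[k - 1]

rng = np.random.default_rng(0)

# Off-line stage: windows of ambient (healthy) data; threshold from the
# leave-one-out anomaly indices of the ambient windows themselves
ambient = rng.normal(0.0, 0.1, size=(500, 20))
offline = np.array([anomaly_index(w, np.delete(ambient, i, axis=0))
                    for i, w in enumerate(ambient)])
threshold = np.percentile(offline, 99)

# On-line stage: flag a new window whose index exceeds the threshold
disturbed = rng.normal(0.0, 0.1, size=20) + 1.0  # step disturbance
print(anomaly_index(disturbed, ambient) > threshold)  # True
```

The recursive distance updates and fast k-th-smallest selection that make this real-time capable are the paper's contribution and are not reproduced here.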
Cai L, Thornhill NF, Pal BC, 2017, Multivariate detection of power system disturbances based on fourth order moment and singular value decomposition, IEEE Transactions on Power Systems, Vol: 32, Pages: 4289-4297, ISSN: 1558-0679
This paper presents a new method to detect power system disturbances in a multivariate context, based on the Fourth Order Moment (FOM) and multivariate analysis implemented as Singular Value Decomposition (SVD). The motivation for this development is that power systems are increasingly affected by various disturbances, creating a requirement for the analysis of measurements to detect them. Results from application to measurements of an actual power system in Europe illustrate that the proposed multivariate detection method achieves enhanced detection reliability and sensitivity.
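One simple way to combine the two ingredients named in the abstract is sketched below: SVD of the centred multichannel data extracts a dominant joint component, and a sliding fourth-order moment of that component highlights short non-Gaussian transients. This pairing is an illustrative assumption, not the paper's exact algorithm; the window size and synthetic disturbance are likewise assumed.

```python
# Illustrative sketch combining SVD and a fourth-order moment (FOM)
# statistic for multivariate disturbance detection. Not the paper's
# exact method; all parameters and data here are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, channels = 600, 4
X = rng.normal(0, 1, (n, channels))
X[300:305, :] += 8.0                      # short multivariate disturbance

# SVD of the centred data gives the dominant joint direction
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
proj = Xc @ Vt[0]                         # leading multivariate component

# Sliding fourth-order moment of the projected signal
w = 50
fom = np.array([np.mean((proj[t:t + w] - proj[t:t + w].mean()) ** 4)
                for t in range(n - w)])
print("FOM peaks near the disturbance:", 249 < int(fom.argmax()) < 306)
```

Because the fourth power amplifies heavy-tailed deviations far more than the variance does, the FOM statistic spikes sharply in the windows containing the transient.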
Xenos DP, Kahrs O, Leira FM, et al., 2017, Challenges of the application of data-driven models for the real-time optimization of an industrial air separation plant, 2016 European Control Conference (ECC), Publisher: IEEE Conference Publications, Pages: 1025-1030
The optimization of the operation of chemical plants may require the development of mathematical models of the process units of a plant. These mathematical models can be either first-principles or data-driven. First-principles models may be too complex for use in optimization, especially in online applications such as real-time optimization, whereas data-driven models can be developed from available measured process data. Although data-driven models offer several benefits for online applications, their development poses significant challenges in a practical industrial implementation. This paper discusses the important aspects of building data-driven models and demonstrates the effects of these types of models on the optimization results. The current work demonstrates the application of a real-time optimization framework to an industrial air compressor station of an air separation plant, where the models are based on operating data.
Xenos DP, Mohd Noor I, Matloubi M, et al., 2016, Demand-side management and optimal operation of industrial electricity consumers: An example of an energy-intensive chemical plant, Applied Energy, Vol: 182, Pages: 418-433, ISSN: 0306-2619
Concerns about the reliability of electricity supplies have motivated researchers to investigate the possibility of electricity consumers taking a more active role in the operation of the power system. The work in this paper looks into the potential of an industrial chemical plant to provide support to the electricity grid by means of demand-side response (DR) programs. To this end, the paper proposes a method to assess the flexibility of the plant to provide electrical power reserves while ensuring that the production demand is satisfied, together with an economic analysis of the plant operations incorporating DR programs to quantify the incentives the plant should receive in order to participate in them. The study therefore presents a novel optimization framework which integrates production scheduling with DR programs, with the aim of determining optimal operating conditions within the plant while safely providing services to the electricity grid.
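The flavour of an optimization framework that integrates production scheduling with reserve provision can be sketched as a toy linear program: choose hourly power consumption and an offered load-reduction reserve so that energy cost minus reserve revenue is minimized, subject to a total production requirement and a minimum load. All numbers, and the assumption that production is proportional to power, are illustrative; the paper's actual formulation is far richer.

```python
# Toy demand-response scheduling LP (all parameters are assumptions).
# x = [p_0..p_5, r_0..r_5]: hourly power p_t and offered reserve r_t.
import numpy as np
from scipy.optimize import linprog

T = 6
price = np.array([30., 35., 60., 80., 45., 25.])   # $/MWh energy price
res_price = np.full(T, 10.)                        # $/MW reserve payment
p_min, p_max, demand = 2.0, 10.0, 40.0             # MW, MW, MWh

c = np.concatenate([price, -res_price])            # cost minus reserve revenue
A_ub = np.zeros((T + 1, 2 * T))
b_ub = np.zeros(T + 1)
A_ub[0, :T] = -1.0; b_ub[0] = -demand              # sum(p) >= demand
for t in range(T):                                 # p_t - r_t >= p_min
    A_ub[1 + t, t] = -1.0
    A_ub[1 + t, T + t] = 1.0
    b_ub[1 + t] = -p_min
bounds = [(p_min, p_max)] * T + [(0.0, None)] * T
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("net cost:", round(res.fun, 1))
print("power schedule:", res.x[:T].round(1))
```

The reserve payment effectively discounts each hour's marginal energy price, so the optimizer concentrates production in cheap hours and offers the largest reserve it can shed without violating the minimum load.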
Cecílio IM, Ottewill JR, Fretheim H, et al., 2016, Removal of transient disturbances from oscillating measurements using nearest neighbors imputation, Journal of Process Control, Vol: 44, Pages: 68-78, ISSN: 0959-1524
Transient disturbances in process measurements compromise the accuracy of some methods for plant-wide oscillation analysis. This paper presents a method to remove such transients while maintaining the dynamic features of the original measurement. The method is based on a nearest neighbors imputation technique. It replaces the removed transient with an estimate which is based on the time series of the whole measurement. The method is demonstrated on experimental and industrial case studies. The results demonstrate the efficacy of the method and recommended parameters. Furthermore, inconsistency indices are proposed which facilitate the automation of the method.
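The core idea of nearest-neighbors imputation can be sketched as follows: take the window immediately preceding the transient, find the most similar window elsewhere in the series, and splice in that window's continuation. The window length, the indices of the synthetic transient, and the exclusion margins are assumptions; the paper's inconsistency indices are omitted.

```python
# Minimal sketch of nearest-neighbour imputation of a transient in an
# oscillating measurement (parameters and test signal are assumptions).
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(1000)
x = np.sin(2 * np.pi * t / 50) + 0.05 * rng.normal(size=t.size)
x_faulty = x.copy()
x_faulty[600:650] += 5.0 * np.exp(-np.arange(50) / 10)   # decaying transient

def nn_impute(sig, start, stop, w=50):
    """Replace sig[start:stop] with the continuation of the window
    elsewhere in the series most similar to sig[start-w:start]."""
    query = sig[start - w:start]
    best, best_d = None, np.inf
    for s in range(w, len(sig) - (stop - start)):
        if start - 2 * w < s < stop + w:   # skip windows touching the transient
            continue
        d = np.linalg.norm(sig[s - w:s] - query)
        if d < best_d:
            best, best_d = s, d
    out = sig.copy()
    out[start:stop] = sig[best:best + (stop - start)]
    return out

x_clean = nn_impute(x_faulty, 600, 650)
print("max residual after imputation:",
      float(np.abs(x_clean - x)[600:650].max().round(2)))
```

Because the matched window is phase-aligned with the oscillation, the spliced-in segment preserves the dynamic features of the measurement rather than flattening the gap the way interpolation would.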