Imperial College London


Faculty of Engineering, Department of Civil and Environmental Engineering

Research Associate







Huxley Building, South Kensington Campus






4 results found

Nerantzis D, Pecci F, Stoianov I, 2020, Optimal control of water distribution networks without storage, European Journal of Operational Research, Vol: 284, Pages: 345-354, ISSN: 0377-2217

The paper investigates the problem of optimal control of water distribution networks without storage capacity. Using mathematical optimization, we formulate and solve the problem as a non-convex nonlinear program (NLP) to obtain optimal control curves for both variable speed pumps and pressure reducing valves of the network, and thus propose a methodology for the automated control of real operational networks. We consider both single-objective and multi-objective problems with average zonal pressure, pump energy consumption and water treatment cost as objectives. Furthermore, we investigate global optimality bounds for the calculated solutions using global optimization techniques. The proposed approach is shown to outperform state-of-the-art global optimization solvers. The described procedure is demonstrated in a case study using a large operational network.

Journal article
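To give a feel for the kind of problem the abstract describes, the following is a deliberately tiny sketch (not the paper's model): a single pump speed `s` and valve setting `v` are chosen to trade off pump energy against deviation from a target pressure. All coefficients and the penalty form are invented for illustration only.

```python
# Toy non-convex control sketch: one pump, one valve, made-up coefficients.
# This only illustrates the *shape* of the problem class, not the paper's NLP.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    s, v = x
    energy = 5.0 * s**3                     # pump power grows roughly with speed cubed
    pressure = 40.0 * s**2 - 8.0 * v        # crude head gain minus valve throttling
    target = 30.0
    return energy + 0.5 * (pressure - target) ** 2   # penalise pressure deviation

# Bounded local solve (the paper additionally certifies global optimality bounds).
res = minimize(objective, x0=[1.0, 1.0], bounds=[(0.5, 1.5), (0.0, 3.0)])
best_speed, best_valve = res.x
```

In the real problem the decision variables are time-varying curves for many pumps and valves, the hydraulic equations appear as non-convex constraints, and a local solve like the above gives no global guarantee, which is why the paper also computes global optimality bounds.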

Nerantzis D, Adjiman C, 2019, Tighter αBB relaxations through a refinement scheme for the scaled Gerschgorin theorem, Journal of Global Optimization, Vol: 73, Pages: 467-483, ISSN: 0925-5001

Of central importance to the αBB algorithm is the calculation of the α values that guarantee the convexity of the underestimator. Improvement (reduction) of these values can result in tighter underestimators and thus increase the performance of the algorithm. For instance, it was shown by Wechsung et al. (J Glob Optim 58(3):429-438, 2014) that the emergence of the cluster effect can depend on the magnitude of the α values. Motivated by this, we present a refinement method that can improve (reduce) the magnitude of α values given by the scaled Gerschgorin method and thus create tighter convex underestimators for the αBB algorithm. We apply the new method and compare it with the scaled Gerschgorin on randomly generated interval symmetric matrices as well as interval Hessians taken from test functions. As a measure of comparison, we use the maximal separation distance between the original function and the underestimator. Based on the results obtained, we conclude that the proposed refinement method can significantly reduce the maximal separation distance when compared to the scaled Gerschgorin method. This approach therefore has the potential to improve the performance of the αBB algorithm.

Journal article
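As a point of reference for the quantity being refined, here is a minimal sketch of the standard scaled Gerschgorin α calculation used in αBB (the paper's refinement scheme itself is not reproduced). `H_lo`/`H_hi` are the elementwise bounds of an interval Hessian and `d` the usual scaling vector of box widths; the example matrices are invented.

```python
# Scaled Gerschgorin alpha values for aBB convex underestimators:
#   alpha_i = max(0, -0.5 * (H_lo[i,i] - sum_{j != i} max(|H_lo|,|H_hi|)[i,j] * d[j]/d[i]))
# The underestimator is then f(x) + sum_i alpha_i * (xL_i - x_i) * (xU_i - x_i).
import numpy as np

def scaled_gerschgorin_alpha(H_lo, H_hi, d):
    n = H_lo.shape[0]
    absmax = np.maximum(np.abs(H_lo), np.abs(H_hi))   # worst-case off-diagonal magnitudes
    alpha = np.zeros(n)
    for i in range(n):
        off = sum(absmax[i, j] * d[j] / d[i] for j in range(n) if j != i)
        alpha[i] = max(0.0, -0.5 * (H_lo[i, i] - off))
    return alpha

# Example interval Hessian over a box with unit widths (numbers are illustrative):
H_lo = np.array([[-2.0, -1.0], [-1.0, 4.0]])
H_hi = np.array([[ 1.0,  3.0], [ 3.0, 6.0]])
d = np.array([1.0, 1.0])
alphas = scaled_gerschgorin_alpha(H_lo, H_hi, d)   # -> [2.5, 0.0]
```

Smaller α values give a tighter underestimator, which is exactly the lever the paper's refinement scheme targets.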

Nerantzis D, Adjiman CSJ, 2016, An interval-matrix branch-and-bound algorithm for bounding eigenvalues, Optimization Methods & Software, Vol: 32, Pages: 872-891, ISSN: 1055-6788

We present and explore the behaviour of a branch-and-bound algorithm for calculating valid bounds on the k-th largest eigenvalue of a symmetric interval matrix. Branching on the interval elements of the matrix takes place in conjunction with the application of Rohn's method (an interval extension of Weyl's theorem) in order to obtain valid outer bounds on the eigenvalues. Inner bounds are obtained with the use of two local search methods. The algorithm has the theoretical property that it provides bounds to any arbitrary precision ε > 0 (assuming infinite precision arithmetic) within finite time. In contrast with existing methods, bounds for each individual eigenvalue can be obtained even if its range overlaps with the ranges of other eigenvalues. Performance analysis is carried out through nine examples. In the first example, a comparison of the efficiency of the two local search methods is reported using 4,000 randomly generated matrices. The eigenvalue bounding algorithm is then applied to five randomly generated matrices with overlapping eigenvalue ranges. Valid and sharp bounds are indeed identified given a sufficient number of iterations. Furthermore, most of the range reduction takes place in the first few steps of the algorithm so that significant benefits can be derived without full convergence. Finally, in the last three examples, the potential of the algorithm for use in algorithms to identify index-1 saddle points of nonlinear functions is demonstrated.

Journal article
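The Rohn-type outer bound mentioned in the abstract has a compact statement: writing the symmetric interval matrix as [Ac − Δ, Ac + Δ], every k-th eigenvalue lies in [λk(Ac) − ρ(Δ), λk(Ac) + ρ(Δ)], where ρ is the spectral radius. A short sketch of that single bounding step (the paper's branch-and-bound then tightens these enclosures):

```python
# Rohn-type outer bounds from an interval extension of Weyl's theorem:
# each eigenvalue of A in [Ac - D, Ac + D] lies within rho(D) of the
# corresponding eigenvalue of the midpoint matrix Ac.
import numpy as np

def rohn_outer_bounds(A_center, A_radius):
    lams = np.sort(np.linalg.eigvalsh(A_center))       # eigenvalues of midpoint matrix
    rho = np.max(np.abs(np.linalg.eigvalsh(A_radius))) # spectral radius of radius matrix
    return lams - rho, lams + rho

# Illustrative interval matrix (both matrices symmetric, D elementwise >= 0):
Ac = np.array([[2.0, 0.0], [0.0, 5.0]])
D  = np.array([[0.5, 0.5], [0.5, 0.5]])
lo, hi = rohn_outer_bounds(Ac, D)   # enclosures [1, 3] and [4, 6]
```

Note the two enclosures here are disjoint; the paper's contribution is precisely that it can still isolate individual eigenvalues when such enclosures overlap.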

Nerantzis D, Adjiman CS, 2016, Enclosure of all index-1 saddle points of general nonlinear functions, Journal of Global Optimization, Vol: 67, Pages: 451-474, ISSN: 1573-2916

Transition states (index-1 saddle points) play a crucial role in determining the rates of chemical transformations but their reliable identification remains challenging in many applications. Deterministic global optimization methods have previously been employed for the location of transition states (TSs) by initially finding all stationary points and then identifying the TSs among the set of solutions. We propose several regional tests, applicable to general nonlinear, twice continuously differentiable functions, to accelerate the convergence of such approaches by identifying areas that do not contain any TS or that may contain a unique TS. The tests are based on the application of the interval extension of theorems from linear algebra to an interval Hessian matrix. They can be used within the framework of global optimization methods with the potential of reducing the computational time for TS location. We present the theory behind the tests, discuss their algorithmic complexity and show via a few examples that significant gains in computational time can be achieved by using these tests.

Journal article
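One simple test in the spirit of the abstract can be sketched as follows: if an interval Gerschgorin bound proves that every Hessian realisation over a box is positive definite, the box contains no index-1 saddle point and can be discarded. This is only an illustrative instance of applying a linear-algebra theorem to an interval Hessian; the paper's actual tests are more refined.

```python
# Hedged sketch of a regional exclusion test: Gerschgorin discs computed from
# the interval Hessian bounds [H_lo, H_hi]. If every disc stays strictly
# positive, no H in the interval can have a negative eigenvalue, so the box
# cannot contain an index-1 saddle point.
import numpy as np

def no_index1_saddle(H_lo, H_hi):
    n = H_lo.shape[0]
    absmax = np.maximum(np.abs(H_lo), np.abs(H_hi))   # worst-case entry magnitudes
    for i in range(n):
        radius = absmax[i].sum() - absmax[i, i]       # off-diagonal Gerschgorin radius
        if H_lo[i, i] - radius <= 0.0:                # disc may touch the nonpositive axis
            return False                              # test inconclusive for this box
    return True                                       # verified positive definite: discard box

# A box whose interval Hessian is strictly diagonally dominant and positive:
H_lo = np.array([[4.0, -0.5], [-0.5, 3.0]])
H_hi = np.array([[6.0,  0.5], [ 0.5, 5.0]])
can_discard = no_index1_saddle(H_lo, H_hi)   # -> True
```

Returning `False` only means the cheap test is inconclusive, which is why such tests are used to prune regions inside a global search rather than as a standalone method.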

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
