Imperial College London

Dr Paul A. Bilokon

Faculty of Natural Sciences, Department of Mathematics

Casual - Academic Professional
 
 
 

Contact

 

+44 (0)20 7594 8241
paul.bilokon01
Website | CV

 
 

Location

 

Huxley Building, South Kensington Campus


Summary

 


Biography

CEO and Founder of Thalesians Ltd. Previously served as Director and Head of global credit and core e-trading quants at Deutsche Bank, teams that he helped set up with Jason Batt and Martin Zinkin. He has also worked at Morgan Stanley (in Andrew Hausler's and Nicholas Zinn's prime brokerage risk), Lehman Brothers (in Anne Sanciaume's FX research and Ronan Dowling's FX quants), Nomura (in FX e-trading, under Martin Zinkin, Abid Zaidi, and Mark Gardner), and Citigroup (first in FX quants under Sebastian del Bano Rollin, Nigel Khakoo, and Roger Vernon, then in electronic credit and rates trading), where he pioneered electronic trading in credit with Rob Smith and William Osborn.

Paul graduated from Christ Church, University of Oxford, with a distinction and the Best Overall Performance prize. At Oxford he wrote a distinguished project, Bayesian methods for solving estimation and forecasting problems in the high-frequency trading environment, supervised by Daniel Jones. He also graduated twice from Imperial College London. His MSci thesis, Visualising the Invisible: Detecting Objects in Quantum Noise Limited Images, supervised by Duncan Fyfe Gillies and Marin van Heel, won him the university's Donald Davis Prize and the British Computing Society SET Award for Student Making Best Use of IT.

Paul lectures at Imperial College London in machine learning for MSc students in mathematics and finance, and his course consistently achieves top rankings among the students.

Paul has made contributions to mathematical logic, domain theory, and stochastic filtering theory and, with Abbas Edalat, has published a prestigious LICS paper. He has co-authored several books: Machine Learning and Big Data with kdb+/q (with Jan Novotny, Aris Galiotos, and Frédéric Délèze, published by Wiley) and Machine Learning in Finance: From Theory to Practice (with Matthew F. Dixon and Igor Halperin, published by Springer). He is currently working on Python, Data Science, and Machine Learning (to be published by World Scientific).

Dr Bilokon is a Member of the British Computer Society, the Institution of Engineering and Technology, and the European Complex Systems Society.

Paul is a frequent speaker at premier conferences, such as Global Derivatives/QuantMinds, the WBS QuanTech and AI & Quantitative Finance conferences, alphascope, LICS, and Domains.

Teaching


MATH97112 - Computing in C++



The module gives an introduction to object-oriented programming in C++. In contrast to structured programming, where a programming task is simply split into smaller parts that are then coded separately, the essence of object-oriented programming is to decompose a problem into related subgroups, each of which is self-contained and contains its own instructions as well as the data that relates to it. Starting from the simple concept of a class that contains both data and the methods relating to that data, the module covers all the major features of object-oriented programming, e.g. encapsulation, inheritance, and polymorphism. To this end, the module addresses operator overloading, virtual functions, and templates.
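The module itself is taught in C++; purely as a compact, language-neutral illustration of the concepts listed above (the shape classes below are invented for the example), encapsulation, inheritance, and polymorphism can be sketched in Python:

```python
import math

class Shape:
    """Base class: encapsulates a name and exposes a polymorphic area() method."""
    def __init__(self, name):
        self._name = name  # leading underscore: encapsulated by convention

    def area(self):
        raise NotImplementedError  # analogous to a pure virtual function in C++

    def describe(self):
        return f"{self._name}: area {self.area():.2f}"

class Circle(Shape):
    def __init__(self, radius):
        super().__init__("circle")
        self._radius = radius

    def area(self):  # overrides the base method (virtual dispatch)
        return math.pi * self._radius ** 2

class Square(Shape):
    def __init__(self, side):
        super().__init__("square")
        self._side = side

    def area(self):
        return self._side ** 2

# Polymorphism: one loop handles heterogeneous shapes via the common interface.
shapes = [Circle(1.0), Square(2.0)]
descriptions = [s.describe() for s in shapes]
```

In C++ the same design would use a base class with a pure virtual `area()` and derived classes overriding it; the dispatch mechanism is the same idea.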

MATH97119 - Advances in Machine Learning


The module introduces the latest advances in machine learning. We start with reinforcement learning and demonstrate how it can be combined with neural networks in deep reinforcement learning, which has achieved spectacular results in recent years, such as outplaying the human champion at Go. We also demonstrate how advanced neural networks and tree-based methods, such as decision trees and random forests, can be used for forecasting financial time series and generating alpha. We explain how these advances are related to Bayesian methods, such as particle filtering and Markov chain Monte Carlo. We apply these methods to set up a profitable algorithmic trading venture in cryptocurrencies, using Python and kdb+/q (a top technology for electronic trading) along the way.
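As a minimal illustration of the reinforcement-learning building block the module starts from (the toy chain environment below is invented for the example; deep reinforcement learning replaces the Q table with a neural network), tabular Q-learning can be sketched as:

```python
import random

random.seed(0)

# Toy deterministic chain MDP: states 0..3, actions 0 (left) and 1 (right);
# reaching state 3 pays reward 1 and ends the episode.
N_STATES, N_ACTIONS, GOAL = 4, 2, 3

def step(state, action):
    next_state = min(state + 1, GOAL) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for _ in range(500):                     # training episodes
    s, done = 0, False
    while not done:
        if random.random() < epsilon:    # epsilon-greedy exploration
            a = random.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda act: Q[s][act])
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s, a) towards the bootstrapped target.
        target = r + (0.0 if done else gamma * max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

# The learned greedy policy should move right from every non-terminal state.
policy = [max(range(N_ACTIONS), key=lambda act: Q[s][act]) for s in range(GOAL)]
```

Deep Q-learning keeps the same update rule but approximates Q with a network, which is what makes results such as the Atari and Go agents possible.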

Students


2014-2015


  • Raymond Lee (supervised jointly with Abbas Edalat): Approximating Algorithm for the Law of Brownian Motion

2019-2020


Projects



Graph neural networks in finance


The field of drug discovery within the life sciences is being revolutionized by graph neural networks [WPCLZY].

In finance, the data are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. Graph neural networks are a natural candidate for such problems.

In this project we explore the potential of graph neural networks within finance.
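As a minimal illustration of the message-passing idea underlying graph neural networks (the toy graph, features, and weight below are all invented for the example), a single mean-aggregation layer can be sketched as: each node's new feature is an average over itself and its neighbours, followed by a linear transform.

```python
# One mean-aggregation message-passing layer on a toy 4-node chain graph,
# using plain Python lists (no ML framework).
edges = [(0, 1), (1, 2), (2, 3)]          # undirected chain graph
n = 4
features = [[1.0], [2.0], [3.0], [4.0]]   # one scalar feature per node
weight = 0.5                               # the layer's single trainable weight

neighbours = {i: {i} for i in range(n)}    # self-loops: include own feature
for u, v in edges:
    neighbours[u].add(v)
    neighbours[v].add(u)

def message_passing_layer(feats):
    out = []
    for i in range(n):
        agg = sum(feats[j][0] for j in neighbours[i]) / len(neighbours[i])
        out.append([weight * agg])         # linear transform of the aggregate
    return out

h1 = message_passing_layer(features)
```

Stacking such layers lets information propagate over longer graph distances, which is the mechanism surveyed in [WPCLZY].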

[WPCLZY] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, Philip S. Yu. A Comprehensive Survey of Graph Neural Networks. arXiv, 2019: https://arxiv.org/abs/1901.00596

Deep Reinforcement Learning and Electronic Market Making


The often lucrative business of electronic market making [G] poses many mathematical challenges.

It can be viewed as a complex optimization problem: one seeks to maximize returns, minimize risks, and take advantage of a suite of "alpha" signals, while minimizing adverse selection and slippage.

It is natural to consider this problem through the prism of deep reinforcement learning methodology [BS, DHB].

Our goal is to utilize deep Q-learning [MKSGAWR] and variants thereof to learn market-making strategies for specific asset classes and trading venues.
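As a stylized illustration of the trade-off such an agent must learn (all dynamics and constants below are invented), consider a market maker quoting a fixed half-spread around a random-walk mid-price: wider quotes earn more per fill but fill less often, and a quadratic inventory penalty stands in for the risk term.

```python
import random

random.seed(1)

def simulate(half_spread, steps=10_000, fill_prob_scale=0.5, inv_penalty=0.001):
    """Stylized market-making P&L for a given half-spread (toy model)."""
    mid, cash, inventory, penalty = 100.0, 0.0, 0, 0.0
    for _ in range(steps):
        mid += random.gauss(0.0, 0.05)                    # mid-price random walk
        p_fill = max(0.0, fill_prob_scale - half_spread)  # wider quote, fewer fills
        if random.random() < p_fill:                      # buy at the bid
            cash -= mid - half_spread
            inventory += 1
        if random.random() < p_fill:                      # sell at the ask
            cash += mid + half_spread
            inventory -= 1
        penalty += inv_penalty * inventory ** 2           # running inventory risk
    return cash + inventory * mid - penalty               # mark-to-market P&L

# A reinforcement-learning agent would, in effect, search this trade-off space.
pnls = {hs: simulate(hs) for hs in (0.01, 0.10, 0.45)}
```

A deep Q-learning agent would replace the fixed half-spread with a state-dependent quoting policy learned from such simulated (or historical) dynamics.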

[BS] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction, second edition. MIT Press, 2018.

[DHB] Matthew F. Dixon, Igor Halperin, Paul Bilokon. Machine Learning in Finance: From Theory to Practice. Springer, 2020.

[G] Olivier Guéant. The Financial Mathematics of Market Liquidity: From Optimal Execution to Market Making. Chapman & Hall/CRC, 2016.

[MKSGAWR] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller. Playing Atari with Deep Reinforcement Learning. arXiv, 2013.

Deep Reinforcement Learning in Order Execution


In [AC], Almgren and Chriss have proposed a theoretical framework for the execution of portfolio transactions with the aim of minimizing a combination of volatility risk and transaction costs arising from permanent and temporary market impact.

Since then, Almgren himself has moved away from this framework, saying that market nuances are critical in optimal execution [D].

While it is difficult to incorporate these nuances into the original, stochastic dynamical control formulation of the problem, it is natural to consider it through the prism of deep reinforcement learning methodology [BS, DHB].

Our goal is to utilise deep Q-learning [MKSGAWR] and variants thereof to learn the optimal order execution strategy for specific asset classes and trading venues.
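The classical mean-variance objective of [AC] that such an agent would implicitly be improving upon can be sketched in a stylized discrete-time form (all parameter values below are invented for illustration):

```python
# Stylized discrete Almgren-Chriss cost model: liquidate X shares over N periods
# of length tau. Expected cost combines permanent and temporary market impact;
# variance comes from price volatility on the remaining position.

def ac_cost(trades, tau=1.0, sigma=0.3, gamma=2.5e-7, eta=2.5e-6):
    X = sum(trades)
    remaining = [X - sum(trades[: k + 1]) for k in range(len(trades))]
    expected = 0.5 * gamma * X ** 2 + (eta / tau) * sum(n ** 2 for n in trades)
    variance = sigma ** 2 * tau * sum(x ** 2 for x in remaining)
    return expected, variance

def objective(trades, risk_aversion=1e-6):
    e, v = ac_cost(trades)
    return e + risk_aversion * v  # mean-variance trade-off to be minimized

# Front-loaded selling lowers variance but raises temporary-impact cost:
uniform = [25_000] * 4
front_loaded = [55_000, 25_000, 12_000, 8_000]
```

A reinforcement-learning formulation replaces this closed-form objective with a reward signal, which is what allows market nuances absent from the original model to be incorporated.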

[AC] Robert Almgren and Neil Chriss. Optimal Execution of Portfolio Transactions. Journal of Risk, 2000.

[BS] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction, second edition. MIT Press, 2018.

[D] Sebastian Day. Why Robert Almgren no longer trades using Almgren-Chriss. Risk.net, 2017.

[DHB] Matthew F. Dixon, Igor Halperin, Paul Bilokon. Machine Learning in Finance: From Theory to Practice. Springer, 2020.

[MKSGAWR] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller. Playing Atari with Deep Reinforcement Learning. arXiv, 2013.

Econometrics, linear regression, ensemble methods, and neural networks for financial time series forecasting

Classical econometrics [H], linear regression, ensemble methods, and neural networks [L, N] have all been employed for financial time series forecasting, known among practitioners, particularly on the buy-side, as "alpha generation". As markets mature, alpha generation becomes more and more challenging.

While the theory of classical econometrics is academically the most developed, regression methods combined with careful feature selection and various "tricks of the trade" are favoured by practitioners.

While neural networks theoretically subsume the linear regression family of methods, in practice their calibration on financial time series is challenging.

The goal of this project is to compare the relative efficacy of these approaches and come up with a recommended set of algorithms for alpha generation.
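A minimal member of the regression family, and a natural baseline for such a comparison, is a first-order autoregressive model fit by ordinary least squares; a sketch on synthetic data (the AR(1) coefficient below is invented):

```python
import random

random.seed(2)

# Synthetic AR(1) series: y_t = 0.8 * y_{t-1} + noise (coefficient invented).
y = [0.0]
for _ in range(2_000):
    y.append(0.8 * y[-1] + random.gauss(0.0, 1.0))

# Fit y_t ~ beta * y_{t-1} by ordinary least squares (closed form, no intercept).
x, t = y[:-1], y[1:]
beta = sum(a * b for a, b in zip(x, t)) / sum(a * a for a in x)

forecast = beta * y[-1]  # one-step-ahead forecast
```

Ensemble methods and neural networks would be evaluated against exactly this kind of baseline, on out-of-sample forecast error.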

[H] James D. Hamilton. Time Series Analysis. Princeton University Press, 1994.

[L] Francesca Lazzeri. Time Series Forecasting: An Applied Machine Learning Approach. O'Reilly, 2020.

[N] Aileen Nielsen. Practical Time Series Analysis: Prediction with Statistics & Machine Learning. O'Reilly, 2019.

Simulated Annealing (SA) versus Quantum Annealing (QA) versus Backpropagation for Neural Networks

The advent of deep neural networks has to a large extent been driven by the backpropagation algorithm [RHW], which relies on a gradient descent method (or modifications thereof) for finding the weights in a feedforward network. Gradient descent and similar methods are subject to all of the problems of any hill-climbing procedure, including the problem of local optima.

Simulated annealing (SA) was introduced in [KGV] as a general method for solving optimization problems. The idea is to use thermal fluctuations to allow the system to escape from local optima of the cost function, so that the system may reach the global optimum under an appropriate annealing schedule (the rate of decrease of temperature). If the temperature is decreased too quickly, the system may become trapped in a local optimum. Annealing too slowly, on the other hand, is impractical, although such a process would certainly bring the system to the global optimum.
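A minimal sketch of SA with a geometric cooling schedule (the cost function and all constants below are invented for the illustration):

```python
import math
import random

random.seed(3)

# Simulated annealing on a 1-D multimodal cost function.
def cost(x):
    return x ** 2 + 10.0 * math.sin(3.0 * x)  # many local minima, one global

x = 4.0                      # deliberately start far from the global minimum
best_x, best_c = x, cost(x)
temperature = 10.0

while temperature > 1e-3:
    candidate = x + random.gauss(0.0, 0.5)       # thermal fluctuation
    delta = cost(candidate) - cost(x)
    # Metropolis rule: always accept downhill moves; accept uphill moves
    # with probability exp(-delta / T), which shrinks as T decreases.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
        if cost(x) < best_c:
            best_x, best_c = x, cost(x)
    temperature *= 0.999                         # slow geometric cooling
```

Calibrating a neural network by SA would use the training loss as `cost` and the weight vector as the state; QA replaces the thermal fluctuation with quantum tunnelling.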

Quantum annealing (QA) was introduced by [KN] and [FGSSD]. In QA, quantum tunneling effects cause transitions between states in the optimization problem, in contrast to the usual thermal transitions in SA.

The advent of quantum computers, including those optimized for solving QA problems, may eventually make it practicable to calibrate large/deep neural networks quickly and optimally, finding global, rather than local, optima in the search space.

Our goal is to develop efficient algorithms for calibrating neural networks on quantum computers using the QA ideas.

[RHW] David E. Rumelhart, Geoffrey E. Hinton, Ronald J. Williams. Learning representations by back-propagating errors. Nature, 1986.

[KGV] Scott Kirkpatrick, C.D. Gelatt, Mario P. Vecchi. Optimization by simulated annealing. Science, 1983.

[KN] Tadashi Kadowaki and Hidetoshi Nishimori. Quantum annealing in the transverse Ising model. Physical Review E, 1998.

[FGSSD] A.B. Finnila, M.A. Gomez, C. Sebenik, C. Stenson, J.D. Doll. Quantum annealing: A new method for minimizing multidimensional functions. Chemical Physics Letters, 1994.

Functional Reactive Programming for Real-Time Systems



Functional Reactive Programming (FRP) is a programming paradigm for reactive programming (asynchronous dataflow programming) using the building blocks of functional programming (e.g. map, reduce, filter). FRP has been used for programming graphical user interfaces (GUIs), robotics, games, and music, aiming to simplify these problems by explicitly modelling time.
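A minimal sketch of these building blocks (the `Stream` class and its combinators below are invented for the illustration, not a real FRP library):

```python
# Minimal event-stream sketch of the FRP building blocks: a Stream to which
# callbacks subscribe, with map and filter combinators.
class Stream:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def emit(self, value):
        for callback in self._subscribers:
            callback(value)

    def map(self, f):
        out = Stream()
        self.subscribe(lambda v: out.emit(f(v)))
        return out

    def filter(self, predicate):
        out = Stream()
        self.subscribe(lambda v: out.emit(v) if predicate(v) else None)
        return out

# Example: a stream of trade prices; react only to doubled prices on round numbers.
prices = Stream()
seen = []
prices.map(lambda p: p * 2).filter(lambda p: p % 10 == 0).subscribe(seen.append)
for p in [3, 5, 12, 20]:
    prices.emit(p)
```

In a real-time trading system, `prices` would be fed by market data, and the combinator graph is exactly the DAG that the open questions below refer to.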

Many financial applications, such as electronic trading platforms, are real-time systems.

Many open questions remain: for example, should the system be modelled explicitly as a directed acyclic graph (DAG)? How can the impact of FRP overhead on the system's latency be minimized?

The goal of this project is to come up with a reference open source implementation of a real-time financial system.

[E] Conal Elliott, Paul Hudak. Functional Reactive Animation. ICFP '97: http://conal.net/papers/icfp97/

[B] Stephen Blackheath, Anthony Jones. Functional Reactive Programming. Manning, 2016.

Low-latency Framework for High-Frequency Trading (HFT)

Very little [L, C, C1, A, S, R, G, S1] has been published in the literature on low-latency programming in C++. Yet this is a foundation of numerous high-frequency trading (HFT) businesses, such as Virtu Financial, Citadel Securities, Two Sigma Securities, Tower Research Capital, Jump Trading, DRW, Hudson River Trading, Quantlab Financial, XTX Markets, GTS, Tradebot Systems, Flow Traders, IMC Financial, Optiver, and XR Trading [A].

The goal of this project is to come up with a reference open-source implementation of a high-frequency trading system.
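One representative low-latency technique is avoiding heap allocation on the hot path. A sketch of the idea (in Python purely for illustration; a real HFT system would implement this in C++ with cache-aligned memory) is a fixed-capacity ring buffer that is allocated once and then only overwritten:

```python
class RingBuffer:
    """Fixed-capacity buffer: one upfront allocation, allocation-free writes."""
    def __init__(self, capacity):
        self._slots = [None] * capacity   # single upfront allocation
        self._capacity = capacity
        self._head = 0                    # next write position
        self._count = 0

    def push(self, item):
        """O(1) write; overwrites the oldest item when full."""
        self._slots[self._head] = item
        self._head = (self._head + 1) % self._capacity
        self._count = min(self._count + 1, self._capacity)

    def latest(self, n):
        """Return up to the n most recent items, oldest first."""
        n = min(n, self._count)
        start = (self._head - n) % self._capacity
        return [self._slots[(start + i) % self._capacity] for i in range(n)]

ticks = RingBuffer(4)
for price in [100, 101, 102, 103, 104, 105]:
    ticks.push(price)
recent = ticks.latest(3)
```

The same structure underlies lock-free queues such as the LMAX Disruptor discussed in the references below: bounded memory, predictable latency, no allocator on the critical path.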

[A] Evan Akutagawa. 15 Well-Known High Frequency Trading Firms. Medium, 2018: https://medium.com/automation-generation/15-well-known-high-frequency-trading-firms-f45292c56d05

[L] John Lockwood. A Low-Latency Library in FPGA Hardware for High-Frequency Trading. InsideHPC Report, 2012: https://www.youtube.com/watch?v=nXFcM1pGOIE

[C] Carl Cook. When a Microsecond Is an Eternity: High Performance Trading Systems in C++. CppCon, 2017: https://www.youtube.com/watch?v=NH1Tta7purM

[C1] Carl Cook. Low Latency C++ for Fun and Profit. Pacific++, 2017: https://www.youtube.com/watch?v=BxfT9fiUsZ4&t=167s

[A1] Sam Adams. Low Latency Architecture at LMAX Exchange. QCon London, March 2017: https://www.infoq.com/presentations/lmax-trading-architecture/

[S] Ariel Salihan. What I've Learned after Coding for HFT and Low Latency Systems. Medium, 29 November, 2018: https://medium.com/@ariel.silahian/what-ive-learned-after-coding-for-hft-and-low-latency-systems-b86d9ad07742

[R] Alexander Radchenko. Benchmarking C++: From Video Games to Algorithmic Trading. Meeting C++, 2018: https://www.youtube.com/watch?v=7YVMC5v4qCA

[G] Kevin A. Goldstein R. In-Memory Techniques: Low-Latency Trading. In-Memory Computing Summit, North America, 2018: https://www.youtube.com/watch?v=yBNpSqOOoRk

[S1] Nimrod Sapir. High Frequency Trading and Ultra Low Latency Development Techniques. Core C++, 2019: https://www.youtube.com/watch?v=_0aU8S-hFQI