Imperial College London

Professor Danilo Mandic

Faculty of Engineering, Department of Electrical and Electronic Engineering

Professor of Signal Processing
 
 
 

Contact

 

+44 (0)20 7594 6271 | d.mandic | Website

 
 

Assistant

 

Miss Vanessa Rodriguez-Gonzalez +44 (0)20 7594 6267

 

Location

 

813, Electrical Engineering, South Kensington Campus


Summary

 

Publications


618 results found

Chambers JA, Sherliker W, Mandic DP, 2000, A normalized gradient algorithm for an adaptive recurrent perceptron, IEEE International Conference on Acoustics, Speech, and Signal Processing, Publisher: IEEE, Pages: 396-399, ISSN: 1520-6149

Conference paper

Krcmar IR, Bozic MM, Mandic DP, 2000, Global asymptotic stability for RNNs with a bipolar activation function, Pages: 33-36

© 2000 IEEE. Conditions for global asymptotic stability of a nonlinear relaxation process realized by a recurrent neural network with a hyperbolic tangent activation function are provided. This analysis is based upon the contraction mapping theorem and the corresponding fixed point iteration. The derived results find application in the broad area of neural networks for optimization and signal processing.

Conference paper
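
As a rough illustration of the contraction-mapping argument in the abstract above, the sketch below checks a standard sufficient condition, ||W||_2 < 1 for a unit-slope tanh, and runs the fixed point iteration from two different initial states. The matrix, sizes, and constants are arbitrary choices for illustration, not taken from the paper.

```python
import numpy as np

# Minimal sketch (values are arbitrary): since |tanh'(z)| <= 1, the map
# x -> tanh(W x + b) is a contraction whenever ||W||_2 < 1, and by the
# contraction mapping theorem the fixed point iteration then converges
# to a unique equilibrium from any initial state.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
W *= 0.8 / np.linalg.norm(W, 2)           # enforce ||W||_2 = 0.8 < 1
b = rng.standard_normal(4)

def relax(x, iters=200):
    for _ in range(iters):
        x = np.tanh(W @ x + b)            # fixed point iteration (FPI)
    return x

x1 = relax(rng.standard_normal(4))
x2 = relax(100 * rng.standard_normal(4))  # very different starting point
print(np.allclose(x1, x2))                # True: unique global attractor
```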

Mandic DP, Chambers JA, Bozic MM, 2000, On global asymptotic stability of fully connected recurrent neural networks, Pages: 3406-3409, ISSN: 1520-6149

© 2000 IEEE. Conditions for global asymptotic stability (GAS) of a nonlinear relaxation process realized by a recurrent neural network (RNN) are provided. Existence, convergence, and robustness of such a process are analyzed. This is undertaken based upon the contraction mapping theorem (CMT) and the corresponding fixed point iteration (FPI). Upper bounds for such a process are shown to be the conditions of convergence for a commonly analyzed RNN with a linear state dependence.

Conference paper
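
The final claim, that the convergence conditions of the linear relaxation also bound the nonlinear one, can be sanity-checked numerically. The sketch below runs both relaxations under an arbitrary weight matrix with ||W||_2 < 1; all values are illustrative, and this is not the paper's derivation.

```python
import numpy as np

# Hedged illustration: the convergence condition for the linear relaxation
# y <- W y + b (spectral norm of W below one) also guarantees convergence
# of the nonlinear relaxation x <- tanh(W x + b), since tanh has unit
# maximum slope. The matrix W below is arbitrary.
rng = np.random.default_rng(1)
W = rng.standard_normal((5, 5))
W *= 0.7 / np.linalg.norm(W, 2)           # enforce ||W||_2 = 0.7 < 1
b = rng.standard_normal(5)

x = y = rng.standard_normal(5)
for _ in range(300):
    x = np.tanh(W @ x + b)                # nonlinear relaxation
    y = W @ y + b                         # linear counterpart

print("nonlinear residual:", np.linalg.norm(x - np.tanh(W @ x + b)))
print("linear residual:   ", np.linalg.norm(y - (W @ y + b)))
```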

Harvey R, Mandic DP, Kolonic DH, 2000, Some potential pitfalls with s to z-plane mappings, Pages: 3530-3533, ISSN: 1520-6149

© 2000 IEEE. Design of digital infinite impulse response (IIR) filters is a compulsory topic in most signal processing courses. Most often, it is taught by using the bilinear transform to map an analogue counterpart into the corresponding digital filter. The usual approach is to define a mapping between the complex variables s and z, and hence, by substitution, derive a mapping between ω, analogue frequency, and θ, sampled frequency. This is rather elliptical, since the real aim is to establish the correspondence between the frequency response of a prototype analogue system H(jω), and H(ejθ), the response of the sampled system. Here we provide a rigorous analysis for the mutual invertibility between the analogue frequency ω, and the digital frequency θ for this case. Based upon the definition of the tan and arctan functions, conditions of existence, uniqueness and continuity of such a mutually inverse mapping are derived. Based upon these results, simple proofs for the mutually inverse mappings ω→θ and θ→ω are given. This is supported by appropriate diagrams. This problem arose as a student question while teaching DSP.

Conference paper
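
For reference, on the frequency axes the bilinear transform s = (2/T)(z-1)/(z+1) reduces to the mutually inverse mappings theta = 2*arctan(omega*T/2) and omega = (2/T)*tan(theta/2). The snippet below checks the round trip numerically; the sampling period T and the frequency grid are arbitrary choices.

```python
import numpy as np

# Frequency warping induced by the bilinear transform: the two mappings
# below are mutually inverse for theta in (-pi, pi), which is the
# invertibility result the paper establishes rigorously.
T = 1e-3                                  # sampling period (arbitrary)
w = np.logspace(0, 5, 7)                  # analogue frequencies (rad/s)

theta = 2 * np.arctan(w * T / 2)          # analogue -> digital frequency
w_back = (2 / T) * np.tan(theta / 2)      # digital -> analogue frequency

print(np.allclose(w, w_back))             # True: the mapping round-trips
```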

Mandic DP, Chambers JA, 2000, On robust stability of time-variant discrete-time nonlinear systems with bounded parameter perturbations, IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, Vol: 47, Pages: 185-188, ISSN: 1057-7122

The upper and lower bounds for asymptotic stability (AS) of a time-variant discrete-time nonlinear system with bounded parameter perturbations are provided. The analysis is undertaken for a class of nonlinear relaxation systems with the saturation nonlinearity of sigmoid type. Based upon the theory of convex polytopes and underlying linear relaxation equation, the bounds of the stability region for such a nonlinear system are derived for every time instant.

Journal article
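
A toy scalar illustration of the polytope idea (everything here, including the scalar system and the perturbation bounds, is an assumption for illustration, not the paper's construction): when the contraction condition depends convexly on the perturbed parameter, it suffices to verify it at the vertices of the parameter set, and stability then holds for every admissible time-variant trajectory.

```python
import numpy as np

# Toy scalar analogue (illustrative values): the relaxation
# x <- tanh(a(n) * x) with a(n) confined to [a_lo, a_hi].
a_lo, a_hi = -0.9, 0.8                    # bounded parameter perturbations
vertices = [a_lo, a_hi]

# |tanh'| <= 1, so |a| < 1 is a sufficient contraction condition; it is
# convex in a, hence checking the interval's vertices suffices.
print("stable for all admissible a(n):", all(abs(a) < 1 for a in vertices))

rng = np.random.default_rng(4)
x = 2.0
for n in range(100):
    a = rng.uniform(a_lo, a_hi)           # arbitrary time-variant parameter
    x = np.tanh(a * x)                    # state contracts at every step
print("final state:", x)                  # settles near the origin
```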

Mandic DP, Krcmar IR, 2000, On training with slope adaptation for feedforward NNs, Pages: 42-45

© 2000 IEEE. Relationships between the learning rate η and the slopes β in the tanh activation function for a feedforward neural network (NN) are provided. The analysis establishes the equivalence in the static and dynamic sense between a referent and an arbitrary feedforward NN which helps to reduce the number of degrees of freedom in learning algorithms for NNs.

Conference paper
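
A single-neuron sketch of one commonly cited form of this equivalence (a hedged illustration with arbitrary numbers, not the paper's general feedforward result): a neuron y = tanh(beta * w.x) trained with rate eta evolves identically to a unit-slope neuron with weights beta*w trained with rate beta**2 * eta.

```python
import numpy as np

# Numerical check of the static/dynamic equivalence sketched above.
rng = np.random.default_rng(2)
x, d = rng.standard_normal(3), 0.5        # one training sample and target
w, beta, eta = rng.standard_normal(3), 2.0, 0.1

# one gradient-descent step on the slope-beta neuron
y = np.tanh(beta * w @ x)
w_new = w + eta * (d - y) * beta * (1 - y**2) * x

# one step on the equivalent unit-slope neuron with scaled weights
v = beta * w
y2 = np.tanh(v @ x)                       # identical output to y (static)
v_new = v + (beta**2 * eta) * (d - y2) * (1 - y2**2) * x

print(np.allclose(v_new, beta * w_new))   # True: the nets stay equivalent
```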

Mandic DP, Chambers JA, 1999, Exploiting inherent relationships in RNN architectures, NEURAL NETWORKS, Vol: 12, Pages: 1341-1345, ISSN: 0893-6080

Journal article

Mandic DP, Chambers JA, 1999, A posteriori error learning in nonlinear adaptive filters, IEE PROCEEDINGS-VISION IMAGE AND SIGNAL PROCESSING, Vol: 146, Pages: 293-296, ISSN: 1350-245X

Journal article


Mandic DP, Chambers JA, 1999, Toward an optimal PRNN-based nonlinear predictor, IEEE TRANSACTIONS ON NEURAL NETWORKS, Vol: 10, Pages: 1435-1442, ISSN: 1045-9227

Journal article

Mandic DP, Chambers JA, 1999, Relating the slope of the activation function and the learning rate within a recurrent neural network, NEURAL COMPUTATION, Vol: 11, Pages: 1069-1077, ISSN: 0899-7667

Journal article

Mandic DP, Chambers JA, 1999, A nonlinear adaptive predictor realised via recurrent neural networks with annealing, Pages: 7-12, ISSN: 0963-3308

A Minimum Mean Square Error (MMSE) nonlinear predictor based on the Nonlinear Autoregressive Moving Average (NARMA) model is developed for nonlinear and nonstationary signals. This is achieved through modular, nested Recurrent Neural Networks (RNNs). A Pipelined Recurrent Neural Network (PRNN), consisting of a number of simple, small-scale RNN modules with low computational complexity, is introduced, offering an improved nonlinear processing capability within the MMSE prediction framework. Since the modules of the PRNN operate simultaneously in a pipelined, parallel manner, the total computational efficiency of such a NARMA predictor is significantly improved. However, some difficulties encountered when training these networks with the Real Time Recurrent Learning (RTRL) algorithm in this context may be attributed to the learning rate being held constant throughout the computation. To overcome this, we introduce a learning-rate annealing schedule for the PRNN. This search-then-converge scheme combines the desirable features of the standard RTRL algorithm and traditional stochastic approximation algorithms. Simulation results for the nonlinear prediction of speech, a nonlinear and nonstationary signal, support the approach.

Conference paper
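
For concreteness, here is a sketch of one classic search-then-converge schedule. The specific form eta(n) = eta0 / (1 + n/tau) (due to Darken and Moody) and the constants are assumptions, not necessarily the exact schedule used in the paper.

```python
def annealed_rate(n, eta0=0.1, tau=500):
    """Search-then-converge learning rate: roughly constant for n << tau
    (search phase), decaying like 1/n for n >> tau (converge phase)."""
    return eta0 / (1.0 + n / tau)

for n in (0, 100, 1000, 10000):
    print(n, annealed_rate(n))            # 0.1, 0.083, 0.033, 0.0048
```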

Mandic DP, Chambers JA, 1999, Global asymptotic convergence of nonlinear relaxation equations realised through a recurrent perceptron, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 99), Publisher: IEEE, Pages: 1037-1040, ISSN: 1520-6149

Conference paper


Mandic DP, Chambers JA, 1998, A posteriori real-time recurrent learning schemes for a recurrent neural network based nonlinear predictor, IEE PROCEEDINGS-VISION IMAGE AND SIGNAL PROCESSING, Vol: 145, Pages: 365-370, ISSN: 1350-245X

Journal article

Mandic DP, Chambers JA, 1998, Towards a PRNN-based a posteriori nonlinear predictor, IEEE Transactions on Signal Processing, Vol: 46, Pages: 1779-1780, ISSN: 1053-587X

For prediction of nonlinear and nonstationary signals, as well as in nonlinear system identification, artificial neural network (ANN) based techniques have been found to be particularly attractive. The pipelined recurrent neural network (PRNN) based nonlinear predictor is an important example of the ANN approach. Here, we address the way the PRNN calculates its weights with respect to the particular time instant at which the signal is available within the network, and show that the performance of the PRNN-based nonlinear predictor for a given architecture and corresponding learning algorithm can be significantly improved by a careful time-management policy. The concept of an a posteriori PRNN-based nonlinear predictor is introduced, and the algorithms for obtaining such an improved prediction scheme are provided. © 1998 IEEE.

Journal article
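
The a priori versus a posteriori distinction is easiest to see on a plain LMS update, sketched below as an analogy (the paper applies the idea to the PRNN): the a posteriori error re-evaluates the same sample with the freshly updated weights, and satisfies e_post = (1 - mu*||x||^2) * e_prior.

```python
import numpy as np

# LMS analogy with illustrative values: compute the error, update the
# weights, then re-evaluate the error at the same time instant.
rng = np.random.default_rng(3)
w, mu = np.zeros(4), 0.05
x, d = rng.standard_normal(4), 1.0        # one input vector and target

e_apriori = d - w @ x                     # error before the update
w = w + mu * e_apriori * x                # LMS weight update
e_aposteriori = d - w @ x                 # error after the update

print(e_apriori, e_aposteriori)           # a posteriori error is smaller
print(np.isclose(e_aposteriori, (1 - mu * x @ x) * e_apriori))  # True
```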

Mandic DP, Chambers JA, 1998, Advanced PRNN based nonlinear prediction/system identification, ISSN: 0963-3308

Insight into the core of the Pipelined Recurrent Neural Network (PRNN) in prediction applications is provided. It is shown that modules of the PRNN contribute to the final predicted value at the output of the PRNN in two ways, namely through the process of nesting and through the process of learning. A measure of the influence of the output of a distant module on the amplitude at the output of the PRNN is found analytically, and an upper bound for it is derived. Furthermore, an analysis of the influence of the forgetting factor in the cost function of the PRNN on the process of learning is undertaken, and it is found that for the PRNN the forgetting factor can even exceed unity in order to obtain the best predictor. Simulations on three speech signals support this approach and outperform other stochastic-gradient-based schemes.

Conference paper
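
A sketch of an exponentially weighted cost over the PRNN's M nested modules, E(n) = sum_i lambda**(i-1) * e_i(n)**2; the module indexing convention and all numbers are assumptions for illustration. A forgetting factor above one places more weight on the later terms, which is the regime the abstract reports can yield the best predictor.

```python
import numpy as np

# Illustrative weighted cost across PRNN module errors at one time instant.
def prnn_cost(errors, forgetting=1.2):
    errors = np.asarray(errors)
    weights = forgetting ** np.arange(len(errors))   # lambda**(i-1)
    return np.sum(weights * errors**2)

module_errors = [0.3, 0.25, 0.4, 0.1]     # hypothetical per-module errors
print(prnn_cost(module_errors, forgetting=0.9))   # discounts later modules
print(prnn_cost(module_errors, forgetting=1.2))   # emphasises later modules
```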

Mandic DP, Chambers JA, 1998, From an a priori RNN to an a posteriori PRNN nonlinear predictor, 8th IEEE Workshop on Neural Networks for Signal Processing, Publisher: IEEE, Pages: 174-183

Conference paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
