EE Department Intranet - intranet.ee.ic.ac.uk

ELEC97002 (EE4-13) Adaptive Signal Processing and Machine Intelligence


Lecturer(s): Prof Danilo Mandic

Aims

Modern applications in engineering, science, internet-enabled technologies, biomedicine and the economy typically produce: (i) exceedingly large quantities of data and/or (ii) real-time streaming data. For these scenarios, classical off-line and block-based data analysis techniques are either not applicable or are computationally prohibitive.
This calls for statistical learning theories and algorithms which operate sequentially and in real time, cater for the non-stationary nature of the data, and run at an affordable computational complexity.
The aim of this course is to introduce students to: (i) adaptive latent component extraction from real-world data, (ii) algorithms and applications of adaptive signal processing to real-time streaming data, (iii) adaptive machine intelligence techniques such as neural networks, recurrent neural networks, and deep neural networks, (iv) corresponding dimensionality reduction techniques and ways to handle Big Data through tensor decompositions.
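
As a flavour of aim (iv), the sketch below shows how a truncated higher-order SVD (HOSVD), a "multi-way" analogue of the matrix SVD, compresses a multi-way array. It is a minimal illustration in Python (the coursework itself is in MATLAB); the synthetic tensor, its shape, and the ranks are arbitrary assumptions, not coursework material.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic low-rank tensor: a sum of 3 outer products, so a truncated
# decomposition with multilinear ranks (3, 3, 3) is (numerically) exact.
A, B, C = (rng.standard_normal((d, 3)) for d in (10, 12, 14))
T = np.einsum('ir,jr,kr->ijk', A, B, C)

def unfold(T, mode):
    """Mode-n unfolding: mode-n fibres become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# Factor matrices: leading left singular vectors of each mode-n unfolding
ranks = (3, 3, 3)
factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
           for m, r in enumerate(ranks)]

# Core tensor: project T onto the three factor subspaces
core = T
for mode, U in enumerate(factors):
    core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)

# Reconstruct and check: far fewer numbers are stored, with negligible error
recon = core
for mode, U in enumerate(factors):
    recon = np.moveaxis(np.tensordot(U, np.moveaxis(recon, mode, 0), axes=1), 0, mode)

stored = core.size + sum(U.size for U in factors)
print("stored", stored, "numbers instead of", T.size)
print("relative error:", np.linalg.norm(recon - T) / np.linalg.norm(T))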

Students will gain hands-on experience through structured MATLAB assignments based upon adaptive acoustic interference cancellation, high-resolution latent component estimation from students' own physiological recordings (ECG), component/symbol tracking in communications and smart grid applications, and universal function approximation through recurrent and deep architectures.
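
For a flavour of the first of these case studies, here is a minimal adaptive interference cancellation sketch, again in Python rather than the MATLAB used in the coursework. The signal model, 3-tap channel, filter order, and step size are all synthetic assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(0)

# Primary sensor: clean signal plus interference; reference sensor: noise
# correlated with the interference through an (assumed) unknown 3-tap channel.
n = 5000
t = np.arange(n)
clean = np.sin(0.03 * np.pi * t)
reference = rng.standard_normal(n)
interference = np.convolve(reference, [0.8, -0.4, 0.2])[:n]
noisy = clean + interference

# LMS adaptive filter: w learns to predict the interference from the
# reference, so the error e = noisy - y is the cleaned signal.
order, mu = 5, 0.01
w = np.zeros(order)
e = np.zeros(n)
for k in range(order, n):
    x = reference[k - order + 1:k + 1][::-1]   # tap-input (regressor) vector
    y = w @ x                                  # adaptive filter output
    e[k] = noisy[k] - y                        # error = interference-free estimate
    w += mu * e[k] * x                         # LMS weight update

print("output error power:", np.mean((e[n // 2:] - clean[n // 2:]) ** 2))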

Learning Outcomes

At the end of the course students should be able to:
- Understand the fundamental statistical properties of complex-valued and multidimensional signals from a modern perspective.
- Employ the concept of complex non-circularity and its descriptor, the pseudo-covariance, to gain more degrees of freedom when processing unbalanced and correlated data.
- Employ the notion of a basis function for signal representation, perform dimensionality reduction in a recursive way (robust PCA), and become familiar with the trade-off between dimensionality reduction and accuracy.
- Derive and analyse statistical properties of the conventional spectral estimators, from the perspective of dimensionality reduction and change of signal basis.
- Formulate modern, parametric, spectral estimators (MUSIC, Pisarenko, subspace methods) based on dimensionality reduction and detail their statistical properties.
- Gain an in-depth understanding of adaptive signal processing algorithms, such as the steepest descent, Least Mean Square (LMS), and Recursive Least Squares (RLS) online learning algorithms.
- Examine statistical properties and performance of these algorithms in real-world applications.
- Familiarise themselves with the concept of adaptive processing of non-stationary signals, and the links between stochastic gradient algorithms and the Kalman filter.
- Analyse the convergence of stochastic gradient algorithms, and derive real-time algorithms with optimal convergence properties.
- Describe complex-valued real-world data in terms of their complex noncircularity and the associated augmented complex statistics, and derive the corresponding class of widely linear complex adaptive filters (a minimal sketch follows this list).
- Become familiar with learning algorithms for multidimensional signals, and learn how latent signal components can be found through the exploitation of complex noncircularity.
- Understand the need for unsupervised (blind) signal processing and its applications in the separation of independent sources from their mixtures.
- Analyse nonlinear and neural adaptive processing methods, and understand the commonalities between nonlinear adaptive filters and neural architectures.
- Understand the concept of memory in neural architectures, and the role of memory in recurrent and deep architectures.
- Derive online updates for the artificial neuron (dynamical perceptron) model and networks of interconnected neurons, and understand how these can be used to process nonstationary and non-zero-mean data.
- Derive the backpropagation algorithm for a multilayer perceptron (MLP) from a time-series perspective and understand how to exploit the duality between the MLP and "localised" kernel and support vector methods (radial basis functions).
- Extend the derivation to deep neural networks and understand the concepts of expressivity, explainability, and curse of dimensionality.
- Mitigate the black-box nature of neural networks through Recurrent Neural Networks (RNNs).
- Familiarise themselves with tensor-based Big Data approaches for dimensionality reduction, and understand how tensors can be used to mitigate the "curse of dimensionality" associated with large and streaming data.
- Establish a link between flat-view matrix dimensionality reduction (based on, e.g., the SVD and PCA) and "multi-way" tensor-based dimensionality reduction.
- Understand the links between linear algebra (matrices) and multilinear algebra (tensors).
- Establish links between tensor decompositions and deep neural networks, and understand how tensor decompositions may be used to optimise deep learning architectures and dramatically reduce their dimensionality.
- Apply adaptive learning algorithms, for both linear and neural architectures, over the real-world case studies outlined in the coursework.
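
As referenced in the outcome on augmented complex statistics above, the following is a minimal Python sketch of a widely linear (augmented) complex LMS filter, which adapts a second, conjugate set of weights to capture the non-zero pseudo-covariance of a noncircular signal. The signal model, filter order, and step size are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Noncircular (improper) driving noise: unequal real/imaginary powers give
# a non-zero pseudo-covariance E[z z].
n = 5000
z = rng.standard_normal(n) + 0.3j * rng.standard_normal(n)

# First-order widely linear process: d[k] depends on both d[k-1] and its
# conjugate, so a strictly linear filter cannot model it fully.
d = np.empty(n, dtype=complex)
d[0] = z[0]
for k in range(1, n):
    d[k] = 0.5 * d[k - 1] + 0.4 * np.conj(d[k - 1]) + z[k]

order, mu = 2, 0.005
h = np.zeros(order, dtype=complex)   # standard (strictly linear) weights
g = np.zeros(order, dtype=complex)   # conjugate weights: extra degrees of freedom
mse = np.zeros(n)
for k in range(order, n):
    x = d[k - order:k][::-1]                         # past samples, most recent first
    y = np.conj(h) @ x + np.conj(g) @ np.conj(x)     # widely linear prediction
    e = d[k] - y
    h += mu * np.conj(e) * x                         # augmented CLMS updates
    g += mu * np.conj(e) * np.conj(x)
    mse[k] = np.abs(e) ** 2

print("pseudo-covariance of d:", np.mean(d * d))
print("steady-state prediction MSE:", mse[-1000:].mean())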

Syllabus

Aspects of estimation theory: bias, variance, maximum likelihood and efficiency. Augmented complex statistics. Finding latent components in data, recursive computation of PCA. Application to frequency estimation; the MUSIC and Pisarenko methods as dimensionality reduction schemes (a minimal MUSIC sketch follows the case studies below). Time-frequency and time-scale methods, order selection. Block and sequential parameter estimation techniques. Stochastic gradient-type algorithms: least mean square and recursive least squares. Intuition behind Kalman filtering and its links with LMS and RLS. Complex and multidimensional adaptive filters, widely linear estimators, dealing with unbalanced data and signal noncircularity. Blind equalisation and source separation. Nonlinear online learning algorithms and architectures for neural networks, such as dynamical perceptrons, feedforward and recurrent neural networks, and their connection with general neural networks and deep learning. Hebbian learning, unsupervised neural networks, and autoencoders. Tensors for Big Data applications, multilinear algebra, the curse of dimensionality, computation of tensor decompositions.
Case studies.
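
As referenced in the syllabus entry on subspace methods, below is a minimal Python sketch of the MUSIC pseudospectrum for line-spectra estimation; the signal frequencies, noise level, and correlation-matrix order are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Two complex exponentials in white noise
n, m, p = 1024, 12, 2          # samples, correlation order, number of sources
t = np.arange(n)
x = (np.exp(2j * np.pi * 0.12 * t) + np.exp(2j * np.pi * 0.17 * t)
     + 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))

# Sample autocorrelation matrix from overlapping length-m snapshots
X = np.array([x[k:k + m] for k in range(n - m)])
R = (X.T @ X.conj()) / (n - m)

# Eigendecomposition (ascending eigenvalues): the m - p smallest eigenvalues
# span the noise subspace, orthogonal to the signal steering vectors --
# this is the dimensionality-reduction step.
_, vecs = np.linalg.eigh(R)
En = vecs[:, :m - p]

# Pseudospectrum: large where a candidate steering vector is (nearly)
# orthogonal to the noise subspace
freqs = np.linspace(0, 0.5, 2001)
a = np.exp(2j * np.pi * np.outer(np.arange(m), freqs))
pseudo = 1.0 / np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)

# Pick the p largest local maxima as the frequency estimates
peaks = np.where((pseudo[1:-1] > pseudo[:-2]) & (pseudo[1:-1] > pseudo[2:]))[0] + 1
top = peaks[np.argsort(pseudo[peaks])[-p:]]
print("estimated frequencies:", np.sort(freqs[top]))   # expect ~0.12 and ~0.17
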
Assessment

Exam Duration: N/A
Coursework contribution: 100%

Term: Spring

Closed or Open Book (end of year exam): N/A

Coursework Requirement: Nil

Oral Exam Required (as final assessment): N/A

Prerequisite module(s): None required

Course Homepage: unavailable

Book List: