ELEC97002 (EE413) Adaptive Signal Processing and Machine Intelligence
Lecturer(s): Prof Danilo Mandic

Aims
Modern applications in engineering, science, internet-enabled technologies, biomedicine and economics typically produce: (i) exceedingly large quantities of data and/or (ii) real-time streaming data. For these scenarios, classical offline and block-based data analysis techniques are either not applicable or computationally prohibitive.
This all requires statistical learning theories and algorithms which operate sequentially and in real time, cater for non-stationary data, and run at an affordable computational complexity. The aim of this course is to introduce students to: (i) adaptive latent component extraction from real-world data; (ii) algorithms and applications of adaptive signal processing for real-time streaming data; (iii) adaptive machine intelligence techniques such as neural networks, recurrent neural networks, and deep neural networks; (iv) the corresponding dimensionality reduction techniques and ways to handle Big Data through tensor decompositions. Students will gain hands-on experience through structured MATLAB assignments based upon adaptive acoustic interference cancellation, high-resolution latent component estimation from students' own physiological recordings (ECG), component/symbol tracking in communications and smart-grid applications, and universal function approximation through recurrent and deep architectures.

Learning Outcomes
At the end of the course students should be able to:
- Understand the fundamental statistical properties of complex-valued and multidimensional signals from a modern perspective.
- Employ the concept of complex noncircularity and its descriptors (pseudocovariance) to gain more degrees of freedom when processing unbalanced and correlated data.
- Employ the notion of a basis function for signal representation, and perform dimensionality reduction in a recursive way (robust PCA), together with familiarising themselves with the trade-off between dimensionality reduction and accuracy.
- Derive and analyse statistical properties of the conventional spectral estimators, from the perspective of dimensionality reduction and change of signal basis.
- Formulate modern parametric spectral estimators (MUSIC, Pisarenko, subspace methods) based on dimensionality reduction, and detail their statistical properties.
- Gain in-depth understanding of adaptive signal processing algorithms, such as those based on steepest descent, Least Mean Square (LMS), and Recursive Least Squares (RLS) online learning.
- Examine the statistical properties and performance of these algorithms in real-world applications.
- Familiarise themselves with the concept of adaptive processing of non-stationary signals, and the links between stochastic gradient algorithms and the Kalman filter.
- Analyse the convergence of stochastic gradient algorithms, and derive real-time algorithms with optimal convergence properties.
- Describe complex-valued real-world data in terms of their complex noncircularity and the corresponding augmented complex statistics, and derive the corresponding class of widely linear complex adaptive filters.
- Become familiar with learning algorithms for multidimensional signals, and learn how latent signal components can be found through the exploitation of complex noncircularity.
- Understand the need for unsupervised (blind) signal processing and its applications in the separation of independent sources from their mixtures.
- Analyse nonlinear and neural adaptive processing methods, and understand the commonalities between nonlinear adaptive filters and neural architectures.
- Understand the concept of memory in neural architectures, and the role of memory in recurrent and deep architectures.
- Derive online updates for the artificial neuron (dynamical perceptron) model and for networks of interconnected neurons, and understand how these can be used to process non-stationary and non-zero-mean data.
- Derive the backpropagation algorithm for a multilayer perceptron (MLP) from a time-series perspective, and understand how to exploit the duality between the MLP and "localised" kernel and support vector methods (radial basis functions).
- Extend the derivation to deep neural networks, and understand the concepts of expressivity, explainability, and the curse of dimensionality.
- Mitigate the black-box nature of neural networks through Recurrent Neural Networks (RNNs).
- Familiarise themselves with tensor-based Big Data approaches to dimensionality reduction, and understand how tensors can be used to mitigate the "curse of dimensionality" associated with large and streaming data.
- Establish a link between flat-view matrix dimensionality reduction (based on e.g. the SVD and PCA) and "multiway" tensor-based dimensionality reduction.
- Understand the links between linear algebra (matrices) and multilinear algebra (tensors).
- Establish links between tensor decompositions and deep neural networks, and understand how tensor decompositions may be used to optimise deep learning architectures and dramatically reduce their dimensionality.
- Apply adaptive learning algorithms, for both linear and neural architectures, to the real-world case studies outlined in the coursework.

Syllabus
Aspects of estimation theory: bias, variance, maximum likelihood, and efficiency. Augmented complex statistics. Finding latent components in data; recursive computation of PCA. Application to frequency estimation, with (MUSIC, Pisarenko) subspace methods viewed as dimensionality reduction schemes. Time-frequency and time-scale methods; order selection. Block and sequential parameter estimation techniques. Stochastic gradient algorithms: least mean square (LMS) and recursive least squares (RLS). Intuition behind Kalman filtering and its links with LMS and RLS. Complex and multidimensional adaptive filters; widely linear estimators; dealing with unbalanced data and signal noncircularity. Blind equalisation and source separation. Nonlinear online learning algorithms and architectures for neural networks, such as dynamical perceptrons and feedforward and recurrent neural networks, and their connection with general neural networks and deep learning. Hebbian learning, unsupervised neural networks, and autoencoders. Tensors for Big Data applications: multilinear algebra, the curse of dimensionality, and the computation of tensor decompositions.
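To give a flavour of the stochastic gradient material above, the LMS filter fits in a few lines. This is a minimal sketch in Python rather than the MATLAB used in the coursework, applied to a system-identification task; the 3-tap channel, step size, and signal lengths below are illustrative assumptions, not part of the course material.

```python
import numpy as np

def lms_filter(x, d, order, mu):
    """Least Mean Square (LMS) adaptive filter -- a minimal sketch.

    x: input signal, d: desired (reference) signal,
    order: number of filter taps, mu: step size.
    Returns the final tap weights and the a-priori error signal.
    """
    w = np.zeros(order)                      # tap weights
    e = np.zeros(len(x))                     # a-priori error
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]     # regressor: newest sample first
        y = w @ u                            # filter output
        e[n] = d[n] - y                      # instantaneous error
        w = w + mu * e[n] * u                # stochastic-gradient update
    return w, e

# System-identification demo: recover a hypothetical "unknown" 3-tap channel
# from its input and output; the weights converge towards the true taps.
rng = np.random.default_rng(0)
h_true = np.array([0.5, -0.3, 0.2])          # assumed unknown system
x = rng.standard_normal(5000)                # white-noise excitation
d = np.convolve(x, h_true)[:len(x)]          # desired = channel output
w, e = lms_filter(x, d, order=3, mu=0.01)
```

With a small enough step size and persistently exciting input, the weight error decays geometrically, which is exactly the convergence behaviour analysed in the lectures.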
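The role of the pseudocovariance as a descriptor of complex noncircularity can also be illustrated numerically. The sketch below (an illustration, not course material; the signal powers are arbitrary assumptions) estimates both second-order moments of a circular and a noncircular complex signal: the pseudocovariance vanishes for the former but not for the latter, which is precisely the extra degree of freedom exploited by widely linear filters.

```python
import numpy as np

def covariance(z):
    """Sample covariance C = E[z z*] of a zero-mean complex signal."""
    return np.mean(z * np.conj(z))

def pseudocovariance(z):
    """Sample pseudocovariance P = E[z z]; zero iff z is second-order circular."""
    return np.mean(z * z)

rng = np.random.default_rng(1)
n = 100_000
# Circular white noise: independent real/imaginary parts of equal power.
z_circ = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
# Noncircular ("improper") signal: unbalanced real/imaginary powers.
z_nonc = 1.4 * rng.standard_normal(n) + 1j * 0.2 * rng.standard_normal(n)

# For z_circ, |P| is near zero; for z_nonc, |P| is comparable to C,
# so a widely linear model (using both z and z*) captures more structure.
```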
Case studies.

Exam Duration: N/A
Coursework contribution: 100%
Term: Spring
Closed or Open Book (end of year exam): N/A
Coursework Requirement: Nil
Oral Exam Required (as final assessment): N/A
Prerequisite module(s): None required
Course Homepage: unavailable
Book List:
