Protim Roy MSc Seminar

Date and Time

Location

Online via Zoom (Contact Joe Sawada for access to the event)

Details

Mini-Batch Alopex-B: An Even Faster Correlation-Based
Learning Algorithm for Feed-Forward and Recurrent Neural Networks


Advisor: Dr. Stefan Kremer
Advisory Committee:
Dr. Graham Taylor

Abstract

The large number of information technology systems in use across all sectors of the global economy creates a need to process the diverse types of data they generate. This has driven the renaissance of deep learning and, with it, renewed interest in optimisation for neural networks.

Alopex lagopus is the taxonomic synonym for the Arctic fox. Here, however, ALOPEX is an acronym for ALgorithm Of Pattern EXtraction, a general-purpose problem solver (Harth & Tzanakou, 1974). Its adaptations ALOPEX-94 (Unnikrishnan & Venugopal, 1994) and ALOPEX-B (Bia, 2001) have been used to train feed-forward and recurrent neural networks. Instead of an error gradient, ALOPEX-94/B uses local correlations between the change in each individual weight and the change in the global error measure to drive optimisation. ALOPEX-94/B is independent of the network architecture and does not require the error or transfer functions to be differentiable. Because it uses only local computations it has a high potential for parallelism, and its stochastic nature helps it escape local minima and traverse flat regions of the error surface.
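
For intuition, the following is a minimal NumPy sketch of a correlation-based update in the spirit of the classic ALOPEX rule. The function name, step size, temperature, and the exact acceptance probability below are illustrative assumptions, not the precise ALOPEX-94 or ALOPEX-B formulations from the cited papers.

import numpy as np

rng = np.random.default_rng(0)

def alopex_style_step(w, delta_w_prev, delta_err_prev, step=0.01, temperature=0.1):
    """One correlation-based update (illustrative sketch, not the published rule)."""
    # Local correlation for every weight: did this weight's previous change
    # coincide with an increase or a decrease in the single global error?
    corr = delta_w_prev * delta_err_prev
    # Probability of taking a positive step.  Directions that recently lowered
    # the error are more likely to be repeated; the temperature keeps the rule
    # stochastic, which is what lets the search escape local minima.
    p_up = 1.0 / (1.0 + np.exp(np.clip(corr / temperature, -50.0, 50.0)))
    delta_w = np.where(rng.random(w.shape) < p_up, step, -step)
    return w + delta_w, delta_w

# Toy usage: fit a linear model by driving down the mean-squared error.
X = rng.normal(size=(64, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + 0.1 * rng.normal(size=64)

w = rng.normal(size=5)
delta_w = np.full(5, 0.01)                        # arbitrary initial perturbation
prev_err = np.mean((X @ (w - delta_w) - y) ** 2)  # error before that perturbation
for _ in range(5000):
    err = np.mean((X @ w - y) ** 2)               # single global error measure
    w, delta_w = alopex_style_step(w, delta_w, err - prev_err)
    prev_err = err

Note that the update never differentiates the error or the model: it only needs the scalar error before and after each perturbation, which is why the method places no differentiability requirements on the error or transfer functions.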

We implement ALOPEX-B in Python 3 and TensorFlow 2 for learning in logistic regressors and multi-layer perceptrons, comparing results with gradient descent (Rumelhart, Hinton & Williams, 1986), mini-batch stochastic gradient descent (SGD) (Bottou, 1998), and Adam (Kingma & Ba, 2015). To increase the rate of convergence, we apply mini-batch learning and also incorporate a gradient. We show that both versions improve performance, and we propose experiments with recurrent and long short-term memory networks to tackle problems with long-term dependencies.
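
As a rough illustration of the mini-batch idea only (the batch size and sampling scheme are assumptions, and the seminar's Mini-Batch ALOPEX-B and gradient-assisted variants are not reproduced here), the global error that drives the correlations can be estimated on a small random batch at each iteration instead of on the full training set, reusing the alopex_style_step sketch above.

# Sketch: estimate the error signal on a random mini-batch each iteration.
batch_size = 8
w = rng.normal(size=5)
delta_w = np.full(5, 0.01)
prev_err = None
for _ in range(5000):
    idx = rng.choice(len(X), size=batch_size, replace=False)
    err = np.mean((X[idx] @ w - y[idx]) ** 2)     # global error on the batch only
    if prev_err is not None:
        w, delta_w = alopex_style_step(w, delta_w, err - prev_err)
    prev_err = err

The per-iteration error evaluation is now much cheaper than a full-dataset pass, at the cost of a noisier error difference between consecutive iterations; balancing these two effects is presumably where the speed-up of the mini-batch variant comes from.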
