Speaker: Joshua Arnold
Host: Prof Janet Wiles

Seminar Type: PhD Thesis Review


Perception, cognition and action are not instantaneous processes; information for each must be integrated over time frames as short as tens of milliseconds. Processing and integrating information at these timescales allows humans to react to complex situations within hundreds of milliseconds. For example, humans can recognise a face in less than a tenth of a second, and during speech often respond within half a second. Rapidly integrating information at these sub-second timescales for complex visual and language tasks is a desirable capability that is currently difficult to replicate in artificial systems.

Computation in animal brains relies on cells called neurons, which communicate information by transmitting pulses of charge known as spikes. Relative patterns of spikes can represent information, with individual spike timing having millisecond precision. However, spikes can take tens of milliseconds to travel between neurons, and these conduction delays are thought to serve an active role in integrating information. The computational advantages of spike conduction delays are not widely appreciated in neural network models. In particular, the role of adjustable (plastic) conduction delays in modelling precise temporal structure is not well understood.
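The integrative role of conduction delays can be illustrated with a minimal sketch (illustrative only, not a model from the thesis): spikes emitted at different times can be made to arrive at a downstream neuron simultaneously when each connection's delay is tuned appropriately.

```python
# Hypothetical sketch: per-connection conduction delays can align
# spikes emitted at different times so they arrive together.
# All names and values here are illustrative, not from the thesis.

def arrival_times(spike_times_ms, delays_ms):
    """Each presynaptic spike arrives after its connection's delay."""
    return [t + d for t, d in zip(spike_times_ms, delays_ms)]

# Three presynaptic neurons fire at different times...
spikes = [0.0, 5.0, 12.0]
# ...but conduction delays of the right length make them coincide,
# turning a temporal pattern into a single coincident volley.
delays = [15.0, 10.0, 3.0]

print(arrival_times(spikes, delays))  # all arrive at t = 15 ms
```

A coincidence-detecting neuron downstream would then fire only for this specific input pattern, which is one way plastic delays could represent millisecond-precise temporal structure.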

To understand the role of conduction delay plasticity (CDP) in modelling fine temporal precision, a simulator capable of delay learning was designed. A CDP rule, Synaptic Delay Variance Learning (SDVL), was implemented and its behaviour characterised under different noise conditions. An analysis of SDVL's behaviour led to proposed simplifications that capture the core delay-learning features in a simpler set of rules. The simplification was characterised on the same noise tasks and its performance compared with SDVL, and then also compared with a popular standard approach for training spiking neurons called spike-timing-dependent plasticity (STDP). A key finding is that while both rules depend heavily on the time constant of the postsynaptic neuron, STDP generalises across temporal differences whereas SDVL is much more selective in learning fine-grained temporal structure.
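For context, the textbook pair-based form of STDP adjusts synaptic weights according to the relative timing of pre- and postsynaptic spikes. The sketch below shows that standard form only (with illustrative constants); it is not the specific implementation or parameters used in the thesis.

```python
import math

# Minimal pair-based STDP sketch (textbook exponential window).
# Constants are illustrative, not taken from the thesis.
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # window time constants (ms)

def stdp_dw(dt_ms):
    """Weight change for one spike pair; dt_ms = t_post - t_pre."""
    if dt_ms >= 0:   # pre before post: potentiate
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    else:            # post before pre: depress
        return -A_MINUS * math.exp(dt_ms / TAU_MINUS)

print(stdp_dw(5.0), stdp_dw(-5.0))
```

The exponential window is broad relative to millisecond spike timing, which is one intuition for why STDP generalises across a range of temporal differences, whereas a delay-learning rule such as SDVL can converge on one specific arrival time.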

The outcomes of this work include a simulator with analysis and visualisation tools, customisable network inputs, separate model definition files, and tools for running experiments on high-performance computing clusters. Additionally, the characterisation of the conduction delay learning rule SDVL and its simplification gives insight into the possible role of adjustable delays in neural systems for representing temporal information with millisecond precision. The comparison between CDP and the more standard STDP documents the similarities and differences between the two rules. While CDP is not an implementation of any specific biological mechanism, it offers a method through which millisecond-precise information can be integrated rapidly. Rapid integration of precise temporal information is essential for both artificial and biological systems to respond and act in complex situations.


Joshua Arnold received his Bachelor of Engineering (Software) from The University of Queensland in 2016. He is currently a PhD student under the supervision of Prof. Janet Wiles and Dr Peter Stratton. Joshua's research interests include biologically plausible learning rules for spiking networks and their computational implementations.