Reservoir computing is an emerging recurrent neural network approach that requires only simple training. It consists of a reservoir layer, which can be built from a time-varying dynamical system, and a readout layer, which is the only part that is trained. Memristors, particularly volatile memristors, are well suited for constructing the reservoir because their natural conductance decay and nonlinear response let them retain recent input information for a short time and act as leaky integrators. In this talk, I will show how volatile memristors can be used to build reservoirs for two tasks: image recognition and time-series prediction. I will explain how the input must be encoded differently for each task to obtain richer reservoir states and improve performance. I will also discuss the main design trade-offs involved in these systems, and analyse how key memristor properties such as decay rate, quantization, and device variability affect accuracy, robustness, and hardware cost. By comparing the two tasks within the same reservoir computing framework, I will highlight practical insights into how memristive reservoir systems can be designed and adapted for different applications.
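To make the setup concrete, the following is a minimal software sketch of the idea described above (not the speaker's actual system): each reservoir node is modeled as a leaky integrator whose state decays toward zero, loosely mimicking a volatile memristor's conductance relaxation, and only a linear readout is trained. All sizes, rates, and the toy sine-prediction task are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_RES = 1, 50   # input channels, reservoir size (arbitrary choices)
DECAY = 0.3           # leak/decay rate, analogous to device volatility

# Random, fixed weights: these are never trained.
W_in = rng.uniform(-1, 1, (N_RES, N_IN))
W_res = rng.normal(0, 1, (N_RES, N_RES))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # spectral radius < 1

def run_reservoir(u_seq):
    """Collect reservoir states for an input sequence of shape (T, N_IN)."""
    x = np.zeros(N_RES)
    states = []
    for u in u_seq:
        pre = W_in @ u + W_res @ x
        # Leaky-integrator update: state decays and responds nonlinearly.
        x = (1 - DECAY) * x + DECAY * np.tanh(pre)
        states.append(x.copy())
    return np.asarray(states)

# Train only the readout (ridge regression) to predict the next sample of a
# toy time series; the reservoir itself is left untouched.
T = 500
u = np.sin(0.2 * np.arange(T + 1))[:, None]
X = run_reservoir(u[:-1])   # reservoir states
y = u[1:, 0]                # one-step-ahead targets
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N_RES), X.T @ y)

pred = X @ W_out
nrmse = np.sqrt(np.mean((pred - y) ** 2)) / np.std(y)
print("train NRMSE:", nrmse)
```

The same skeleton covers both tasks in the talk at a high level: only the input encoding (how images or time series are mapped to `u_seq`) and the readout targets change, while the reservoir dynamics stay fixed.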
MSc seminar. Supervisor: Prof. Shahar Kvatinsky