RNN-Based Online Learning: An Efficient First-Order Optimization Algorithm with a Convergence Guarantee
N. M. Vural, S. F. Yilmaz, F. Ilhan and S. S. Kozat, “RNN-Based Online Learning: An Efficient First-Order Optimization Algorithm with a Convergence Guarantee”, IEEE Transactions on Signal Processing, 2020.
Abstract
We investigate online nonlinear regression with continually running recurrent neural networks (RNNs), i.e., RNN-based online learning. For RNN-based online learning, we introduce an efficient first-order training algorithm that is theoretically guaranteed to converge to the optimum network parameters. Our algorithm is truly online in that it makes no assumptions on the learning environment to guarantee convergence. Through numerical simulations, we verify our theoretical results and illustrate significant performance improvements achieved by our algorithm with respect to the state-of-the-art RNN training methods.
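To make the setting concrete, the following is a minimal sketch of RNN-based online nonlinear regression with a generic first-order (SGD-style) update. The abstract does not detail the paper's specific algorithm, so the network sizes, synthetic data stream, one-step gradient truncation, and learning rate below are illustrative assumptions, not the authors' method.

```python
import numpy as np

# Minimal sketch of online regression with a continually running RNN,
# trained sample-by-sample with a first-order update (one-step gradient
# truncation). All dimensions and hyperparameters are assumed for
# illustration; this is not the paper's algorithm.
rng = np.random.default_rng(0)
n_h, n_x = 8, 3                       # hidden and input sizes (assumed)
W_h = rng.normal(scale=0.1, size=(n_h, n_h))
W_x = rng.normal(scale=0.1, size=(n_h, n_x))
w_o = rng.normal(scale=0.1, size=n_h)
lr = 0.05                             # learning rate (assumed)

h = np.zeros(n_h)
losses = []
for t in range(500):
    x_t = rng.normal(size=n_x)        # streaming input (synthetic)
    d_t = np.tanh(x_t.sum())          # desired output (synthetic)
    h_prev = h
    h = np.tanh(W_h @ h_prev + W_x @ x_t)  # RNN state runs continually
    y_t = w_o @ h                     # scalar regression estimate
    e_t = y_t - d_t
    losses.append(e_t ** 2)           # instantaneous squared loss
    # First-order update: backpropagate through one time step only
    grad_pre = e_t * w_o * (1.0 - h ** 2)   # through the tanh
    w_o -= lr * e_t * h
    W_x -= lr * np.outer(grad_pre, x_t)
    W_h -= lr * np.outer(grad_pre, h_prev)
```

Each incoming sample is used once for an immediate parameter update, so the loop runs in constant time per step; the one-step truncation is what keeps the update first-order and cheap, at the cost of ignoring longer-range gradient terms.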