The Orb-Weaving Spider Algorithm for Training of Recurrent Neural Networks

Abstract
The quality of neural networks in application problems is largely determined by how successfully they are trained. Training a neural network is a complex optimization problem, and traditional training algorithms have a number of disadvantages, such as getting stuck in local minima and a low convergence rate. Modern approaches therefore adjust the weights of neural networks using metaheuristic algorithms, which makes the selection of an optimal set of algorithm parameter values important for solving application problems with symmetry properties. This paper studies the application of a new metaheuristic optimization algorithm for weight adjustment, the orb-weaving spider algorithm, developed by the authors of this article. The proposed approach is validated by adjusting the weights of recurrent neural networks used to solve the time series forecasting problem on three different datasets. The results are compared with those of neural networks trained by the backpropagation algorithm, as well as by three other metaheuristic algorithms: particle swarm optimization, the bat algorithm, and differential evolution. Descriptive statistics of the prediction quality metrics and the number of objective function evaluations are used as performance criteria for comparing the global optimization algorithms. On the studied datasets, adjusting the neural network weights with the orb-weaving spider algorithm yielded MSE values of 1.32, 25.48, and 8.34 and MAE values of 0.38, 2.18, and 1.36, respectively. Compared to the backpropagation algorithm, the orb-weaving spider algorithm reduced the values of the error metrics. Based on the results of the study, it is concluded that the developed algorithm shows strong results and is not inferior in performance to the existing algorithms.
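
To make the comparison setting concrete, the sketch below illustrates the general framing described in the abstract: RNN weight adjustment treated as black-box minimization of MSE over a flattened weight vector, which is the role filled by the orb-weaving spider algorithm, PSO, the bat algorithm, and differential evolution. This is a minimal illustrative example, not the authors' implementation; the Elman-style network size, the toy sine-wave data, and the placeholder population-based random-search loop are assumptions introduced here for clarity.

```python
# Minimal sketch (assumed setup, not the paper's code): RNN weight tuning
# as black-box MSE minimization, the setting in which metaheuristics
# such as the orb-weaving spider algorithm are compared.
import numpy as np

rng = np.random.default_rng(0)

# Toy one-step-ahead forecasting data: predict x_{t+1} from x_t on a sine wave.
series = np.sin(np.linspace(0, 8 * np.pi, 201))
X, y = series[:-1], series[1:]

H = 8  # hidden units of a simple Elman-style RNN

def unpack(w):
    """Split a flat parameter vector into the RNN weight matrices."""
    i = 0
    W_in = w[i:i + H]; i += H
    W_rec = w[i:i + H * H].reshape(H, H); i += H * H
    b_h = w[i:i + H]; i += H
    W_out = w[i:i + H]; i += H
    b_out = w[i]
    return W_in, W_rec, b_h, W_out, b_out

n_params = H + H * H + H + H + 1

def mse(w):
    """Objective function: mean squared one-step forecasting error."""
    W_in, W_rec, b_h, W_out, b_out = unpack(w)
    h = np.zeros(H)
    err = 0.0
    for x_t, y_t in zip(X, y):
        h = np.tanh(W_in * x_t + W_rec @ h + b_h)  # recurrent state update
        y_hat = W_out @ h + b_out                   # linear readout
        err += (y_hat - y_t) ** 2
    return err / len(X)

# Placeholder population-based loop: sample candidates around the current
# best solution; a real metaheuristic would use its own update rules.
best = rng.normal(0.0, 0.5, n_params)
best_f = mse(best)
for _ in range(200):
    pop = best + rng.normal(0.0, 0.1, (20, n_params))
    f = np.array([mse(w) for w in pop])
    if f.min() < best_f:
        best, best_f = pop[f.argmin()], f.min()

print(f"final MSE: {best_f:.4f}")
```

The number of objective function evaluations used as a comparison criterion in the paper corresponds here to the number of calls to `mse` made by the optimization loop.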
Funding Information
  • Strategic academic leadership program of the Russian Federation “Priority-2030” (PRIOR/SN/NU/22/SP5/16)