Journal of Bionic Engineering ›› 2024, Vol. 21 ›› Issue (5): 2460-2496. doi: 10.1007/s42235-024-00555-x


Improved Runge Kutta Optimization Using Compound Mutation Strategy in Reinforcement Learning Decision Making for Feature Selection

Jinpeng Huang1 · Yi Chen1 · Ali Asghar Heidari2 · Lei Liu3 · Huiling Chen1 · Guoxi Liang4

  1. Department of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou 325035, China
  2. School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, Tehran 1417935840, Iran
  3. College of Computer Science, Sichuan University, Chengdu 610065, China
  4. Department of Artificial Intelligence, Wenzhou Polytechnic, Wenzhou 325035, China
  • Online: 2024-09-25; Published: 2024-10-11
  • Contact: Yi Chen; Huiling Chen; Guoxi Liang; Jinpeng Huang; Ali Asghar Heidari; Lei Liu. E-mail: kenyoncy2016@wzu.edu.cn; chenhuiling_jsj@wzu.edu.cn; guoxiliang@wzpt.edu.cn; huangjinpeng0907@163.com; as_heidari@ut.ac.ir; liulei.cx@gmail.com

Abstract: Runge Kutta Optimization (RUN) is a widely used metaheuristic algorithm. However, it suffers from an imbalance between exploration and exploitation and a tendency to fall into local optima when solving real-world optimization problems. To address these issues, this study endows each individual in the population with a degree of intelligence, allowing it to make autonomous decisions about its next optimization behavior. By incorporating Reinforcement Learning (RL) and the Compound Mutation Strategy (CMS), each individual performs additional self-improvement steps after completing the original algorithmic phases; the resulting algorithm is referred to as RLRUN. Specifically, each individual in the RUN population is trained via RL to independently choose among the three differential mutation strategies in the CMS when solving different problems. To validate the competitiveness of RLRUN, comprehensive empirical tests were conducted on the IEEE CEC 2017 benchmark suite, including extensive comparisons with 13 conventional algorithms and 10 advanced algorithms. The experimental results demonstrate that RLRUN excels in convergence accuracy and speed, surpassing even some champion algorithms. Additionally, this study introduces a binary version of RLRUN, named bRLRUN, which was applied to the feature selection problem. Across 24 high-dimensional datasets, encompassing UCI datasets and SBCB machine learning library microarray datasets, bRLRUN ranked first among the compared algorithms in classification accuracy and in the size of the selected feature subsets. In conclusion, the proposed algorithm exhibits a strong competitive advantage in high-dimensional feature selection on complex datasets.
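As a rough illustration of the RL-driven strategy selection described in the abstract, the sketch below shows a tabular Q-learning agent letting each individual pick one of three DE-style mutation operators after the base update. This is not the authors' implementation: the state/reward design, the epsilon-greedy policy, the specific operators (rand/1, best/1, current-to-best/1), and all names such as StrategyAgent and rl_refinement_step are assumptions made for the example.

```python
import numpy as np

# Hypothetical sketch of an RL-guided compound mutation pass.
# Assumptions: one Q-table row per individual, reward +1/-1 on
# fitness improvement, and three DE-style operators standing in
# for the paper's CMS choices.

N_STRATEGIES = 3                  # three candidate mutation strategies
EPSILON, ALPHA, GAMMA = 0.1, 0.5, 0.9

def mutate(pop, i, best, F=0.5):
    """Return the three candidate mutants for individual i."""
    r = np.random.choice([j for j in range(len(pop)) if j != i], 3, replace=False)
    a, b, c = pop[r[0]], pop[r[1]], pop[r[2]]
    return [
        a + F * (b - c),                             # strategy 0: DE/rand/1
        best + F * (a - b),                          # strategy 1: DE/best/1
        pop[i] + F * (best - pop[i]) + F * (a - b),  # strategy 2: DE/current-to-best/1
    ]

class StrategyAgent:
    """Tabular Q-learning: Q[i, s] = value of strategy s for individual i."""
    def __init__(self, pop_size):
        self.q = np.zeros((pop_size, N_STRATEGIES))

    def choose(self, i):
        if np.random.rand() < EPSILON:               # epsilon-greedy exploration
            return np.random.randint(N_STRATEGIES)
        return int(np.argmax(self.q[i]))

    def update(self, i, s, reward):
        # One-step Q-learning update on a single-state MDP.
        self.q[i, s] += ALPHA * (reward + GAMMA * self.q[i].max() - self.q[i, s])

def rl_refinement_step(pop, fitness, fobj, agent, lb, ub):
    """Self-improvement pass applied after the base RUN update (sketch)."""
    best = pop[np.argmin(fitness)]
    for i in range(len(pop)):
        s = agent.choose(i)
        trial = np.clip(mutate(pop, i, best)[s], lb, ub)
        f_trial = fobj(trial)
        improved = f_trial < fitness[i]
        agent.update(i, s, reward=1.0 if improved else -1.0)
        if improved:                                 # greedy selection keeps improvements
            pop[i], fitness[i] = trial, f_trial
    return pop, fitness
```

In a full optimizer this pass would run once per RUN iteration, so the Q-values accumulate across generations; for a binary variant like bRLRUN, a transfer function (e.g., a sigmoid applied componentwise) would map each updated position to a 0/1 feature mask before evaluating classification accuracy.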

Key words: Runge Kutta optimization; Metaheuristic algorithm; Feature selection; Reinforcement learning