MetaPerceptron: A standardized framework for metaheuristic-driven multi-layer perceptron optimization

Bibliographic Details
Published in Computer Standards and Interfaces, Vol. 93, p. 103977
Main Authors Thieu, Nguyen Van, Mirjalili, Seyedali, Garg, Harish, Hoang, Nguyen Thanh
Format Journal Article
Language English
Published Elsevier B.V., 01.04.2025
ISSN 0920-5489
DOI 10.1016/j.csi.2025.103977

Summary:
• MetaPerceptron: a user-friendly and comprehensive metaheuristic-based MLP framework.
• Supports regression and classification tasks with more than 200 metaheuristic algorithms.
• Comprehensive resources: examples, documentation, and test cases for users.
• Its user-friendly interface allows non-coders to solve problems with minimal code.
• Flexible framework: users can easily modify algorithms and code and replace datasets.

The multi-layer perceptron (MLP) remains a foundational neural network architecture, widely recognized for its ability to model complex, non-linear relationships between inputs and outputs. Despite this success, MLP training often suffers from susceptibility to local optima and overfitting when it relies on traditional gradient-descent optimization. Metaheuristic algorithms (MHAs) have recently emerged as robust alternatives for optimizing MLP training, yet no existing package offers a comprehensive, standardized framework for MHA-MLP hybrid models. This paper introduces MetaPerceptron, a standardized open-source Python framework designed to integrate MHAs with MLPs seamlessly, supporting both regression and classification tasks. MetaPerceptron is built on top of PyTorch, Scikit-Learn, and Mealpy. Through this design, MetaPerceptron promotes standardization in MLP optimization and incorporates essential machine learning utilities such as model forecasting, feature selection, hyperparameter tuning, and pipeline creation. By offering over 200 MHAs, MetaPerceptron empowers users to experiment across a broad array of metaheuristic optimization techniques without reimplementation. The framework significantly enhances accessibility, adaptability, and consistency in metaheuristic-trained neural network research and applications, positioning it as a valuable resource for machine learning, data science, and computational optimization. The entire source code is freely available on GitHub: https://github.com/thieu1995/MetaPerceptron
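
The abstract describes a Scikit-Learn-compatible estimator API built on Mealpy but includes no usage example. The following is a minimal sketch of how such an MHA-trained classifier might be invoked; the class name MhaMlpClassifier and its parameters (hidden_layers, optim, optim_params) are illustrative assumptions inferred from the abstract, not confirmed API of the package.

```python
# Minimal usage sketch (hypothetical API): the class name MhaMlpClassifier and
# its parameters are assumptions based on the abstract's description of a
# Scikit-Learn-compatible, Mealpy-driven estimator, not verbatim package API.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

from metaperceptron import MhaMlpClassifier  # hypothetical import

# Load a small benchmark dataset and hold out a test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train an MLP whose weights are searched by a metaheuristic (here an assumed
# Mealpy genetic-algorithm identifier) instead of gradient descent.
model = MhaMlpClassifier(
    hidden_layers=(10,),                          # assumed: one hidden layer of 10 units
    optim="BaseGA",                               # assumed: Mealpy optimizer name
    optim_params={"epoch": 100, "pop_size": 30},  # assumed: metaheuristic settings
)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # Scikit-Learn-style accuracy on the test split
```

Because the estimator follows the Scikit-Learn interface, it should compose directly with standard utilities such as sklearn.pipeline.Pipeline and GridSearchCV, matching the abstract's mention of hyperparameter tuning and pipeline creation.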