Effects of hyperparameter, transformation function and optimization algorithm on prediction of entire pyrolysis process of double-base propellant employing artificial neural network
| Published in | Journal of Thermal Analysis and Calorimetry, Vol. 150, no. 15, pp. 11977-11994 |
|---|---|
| Main Authors | , , , , |
| Format | Journal Article |
| Language | English |
| Published | Dordrecht: Springer Nature B.V, 01.08.2025 |
| Subjects | |
| ISSN | 1388-6150; 1588-2926 |
| DOI | 10.1007/s10973-025-14130-x |
| Summary: | The prediction of the entire thermal decomposition process of energetic materials, which has received little attention to date, can provide valuable guidance for safe-storage design and combustion control. This study proposes an artificial neural network (ANN) framework that incorporates data interpolation, normalization, and transformation. The effects of the hyperparameters (network topology, learning rate, activation function, and training function), the transformation function, and the optimization algorithm on prediction performance and training time are systematically explored. The results indicate that training time increases with the number of hidden layers and neurons. Prediction performance with two hidden layers is better than with one, and the optimal number of neurons per layer ranges from 2 to 4. The learning rate has minimal impact on prediction performance and training time, whereas the activation and training functions have a more significant influence on the prediction performance of ANN models with less favorable topologies than on models with favorable ones. Adding one input dataset through a transformation function can improve prediction performance. The optimal ANN framework configuration is as follows: a double-hidden-layer topology (3-2-4-1), a learning rate of 0.01, softmax as the best input activation function, purelin as the best output activation function, trainbr as the best training function, 1/((1 + x1)(1 + x2)) as the best transformation function, and a genetic algorithm (GA) as the best optimization algorithm. |
|---|---|
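The optimal configuration reported in the summary can be illustrated with a minimal forward-pass sketch. Note that softmax, purelin, and trainbr are MATLAB Deep Learning Toolbox names; the NumPy code below only mirrors the described 3-2-4-1 topology and transformation function, and the random weights stand in for parameters the authors would obtain via trainbr (Bayesian regularization) tuned by a genetic algorithm. It is not the authors' implementation.

```python
import numpy as np

def softmax(z):
    # softmax activation (reported best input-side activation)
    e = np.exp(z - z.max())
    return e / e.sum()

def purelin(z):
    # purelin = identity (reported best output activation)
    return z

def transform(x1, x2):
    # reported best transformation function, 1/((1+x1)(1+x2)),
    # supplied as a third input alongside x1 and x2
    return 1.0 / ((1.0 + x1) * (1.0 + x2))

def forward(x1, x2, weights, biases):
    """Forward pass through the 3-2-4-1 topology:
    3 inputs -> 2 hidden neurons -> 4 hidden neurons -> 1 output."""
    a = np.array([x1, x2, transform(x1, x2)])
    W1, W2, W3 = weights
    b1, b2, b3 = biases
    h1 = softmax(W1 @ a + b1)    # first hidden layer (2 neurons)
    h2 = softmax(W2 @ h1 + b2)   # second hidden layer (4 neurons)
    return purelin(W3 @ h2 + b3)[0]  # scalar mass-loss prediction

# Illustrative random parameters; a real run would fit these to the
# interpolated, normalized pyrolysis data as described in the summary.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(2, 3)),
           rng.normal(size=(4, 2)),
           rng.normal(size=(1, 4))]
biases = [rng.normal(size=2), rng.normal(size=4), rng.normal(size=1)]
y = forward(0.5, 0.3, weights, biases)
```

The transformation step shows why the extra input helps: the network receives a precomputed nonlinear combination of the two raw inputs rather than having to learn it from scratch.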