Benchmarking Machine Learning Solutions in Production
Published in | 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA) pp. 626 - 633 |
---|---|
Format | Conference Proceeding |
Language | English |
Published | IEEE, 01.12.2020 |
DOI | 10.1109/ICMLA51294.2020.00104 |
Summary: | Machine learning (ML) is becoming critical to many businesses. Keeping an ML solution online and responsive is therefore a necessity, and is part of the MLOps (machine learning operationalization) movement. One aspect of this process is monitoring not only prediction quality, but also system resources. This is important for correctly provisioning the necessary infrastructure, whether on a fully managed cloud platform or a local solution. Monitoring itself is not a difficult task, as many tools are available, but it requires some planning and knowledge about what to monitor. Moreover, many ML professionals are not experts in system operations and may not have the skills to easily set up a monitoring and benchmarking environment. In the spirit of MLOps, this paper presents an approach, based on a simple API and a set of tools, to monitor ML solutions. The approach was tested with 9 different solutions. The results indicate that it can deliver useful information to support decision making, proper resource provisioning, and operation of ML systems. |
---|---|
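The record above only summarizes the paper; its actual API and tool set are not reproduced here. As a rough illustration of the kind of system-resource monitoring the abstract describes, the minimal sketch below samples latency, CPU utilization, and memory around a single prediction using the psutil library. The names `monitored_predict`, `model`, and `batch` are hypothetical placeholders and are not taken from the paper.

```python
# A minimal sketch, not the paper's API: sampling latency, CPU, and memory
# around a single model prediction. `model` and `batch` are hypothetical
# placeholders for any estimator with a predict() method and its input.
import time
import psutil

def monitored_predict(model, batch):
    """Run model.predict(batch) and return the prediction plus resource metrics."""
    process = psutil.Process()                 # handle to the current process
    rss_before = process.memory_info().rss     # resident memory before inference
    psutil.cpu_percent(interval=None)          # prime the CPU utilization counter
    start = time.perf_counter()

    prediction = model.predict(batch)

    metrics = {
        "latency_s": time.perf_counter() - start,
        "cpu_percent": psutil.cpu_percent(interval=None),   # utilization since priming
        "rss_delta_bytes": process.memory_info().rss - rss_before,
    }
    return prediction, metrics
```

In a production setting these metrics would typically be pushed to a monitoring backend rather than returned to the caller; returning the dictionary here only keeps the example self-contained.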