Ensemble Methods


Bibliographic Details
Published in: Machine Learning with Spark and Python, pp. 1-2
Main Author: Bowles, Michael
Format: Book Chapter
Language: English
Published: United States: John Wiley & Sons, 2020
Edition: 2nd Edition
ISBN: 1119561930; 9781119561934
DOI: 10.1002/9781119562023.ch6


More Information
Summary: This chapter discusses the mechanics of the most popular ensemble methods. It explains that ensemble methods consist of a hierarchy of two algorithms. Ensemble methods train hundreds or thousands of low-level algorithms called base learners. The higher-level algorithm controls the training of the base learners so that their models turn out somewhat independent of one another, which means combining them reduces the variance of the combination. Several upper-level algorithms are widely used: bagging, boosting, and random forest. The chapter shows how gradient boosting operates and demonstrates how to control its behavior to get the best performance. The packages available for doing gradient boosting in Python permit you to use random forest base learners with gradient boosting. The chapter explains each of the high-level algorithms and shows a facsimile of the random forest's base learners. It discusses how to measure and regulate overfitting with binary trees.
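The gradient-boosting mechanics the summary describes can be sketched in Python with scikit-learn (an illustrative library choice, not the chapter's own code): shallow binary trees serve as base learners, and the upper-level boosting algorithm adds them one at a time. The dataset and hyperparameters below are assumed values for demonstration only.

```python
# Sketch of gradient boosting with tree base learners, assuming scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real problem.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Shallow trees (max_depth) and a small learning_rate are the usual knobs
# for regulating overfitting in a boosted ensemble.
gbm = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05,
                                 max_depth=3, random_state=0)
gbm.fit(X_tr, y_tr)

# staged_predict yields predictions after each boosting round, so test-set
# accuracy can be tracked as the ensemble grows and overfitting measured.
staged_acc = [(pred == y_te).mean() for pred in gbm.staged_predict(X_te)]
print(len(staged_acc), round(max(staged_acc), 2))
```

Plotting `staged_acc` against the round number shows where added trees stop helping, which is the overfitting diagnostic the chapter discusses for binary-tree ensembles.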
ISBN:1119561930
9781119561934
DOI:10.1002/9781119562023.ch6