Evaluating k-NN in the Classification of Data Streams with Concept Drift
| Format | Journal Article |
|---|---|
| Language | English |
| Published | 05.10.2022 |
| DOI | 10.48550/arxiv.2210.03119 |
| Summary | Data streams are often defined as large amounts of data flowing continuously at high speed. Moreover, these data are likely subject to changes in data distribution, known as concept drift. For these reasons, learning from streams is often online and under restrictions of memory consumption and run-time. Although many classification algorithms exist, most of the works published in the area use Naive Bayes (NB) and Hoeffding Trees (HT) as base learners in their experiments. This article proposes an in-depth evaluation of k-Nearest Neighbors (k-NN) as a candidate for classifying data streams subject to concept drift. It also analyses the time complexity and the two main parameters of k-NN, i.e., the number of nearest neighbors used for predictions (k) and the window size (w). We compare different parameter values for k-NN and contrast it with NB and HT, both with and without a drift detector (RDDM), on many datasets. We formulated and answered 10 research questions, which led to the conclusion that k-NN is a worthy candidate for data stream classification, especially when the run-time constraint is not too restrictive. |
|---|---|
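As a rough illustration of the setup the summary describes, the sketch below implements a sliding-window k-NN learner that keeps only the w most recent labelled instances and predicts by majority vote among the k nearest ones, evaluated prequentially (test-then-train). This is a minimal sketch for intuition only: the class name, parameter defaults, distance metric, and the synthetic drifting stream are assumptions made for the example, not the implementation or the evaluation protocol used in the article.

```python
# Minimal sketch (not the authors' code) of a sliding-window k-NN stream classifier:
# keep the w most recent labelled instances and predict by majority vote among the
# k nearest ones, evaluated test-then-train (prequential).
from collections import Counter, deque

import numpy as np


class SlidingWindowKNN:
    """k-NN over a fixed-size window of the most recent (x, y) pairs."""

    def __init__(self, k=11, w=1000):
        self.k = k                      # number of neighbours used for prediction
        self.window = deque(maxlen=w)   # oldest instances are discarded automatically

    def predict(self, x):
        if not self.window:
            return None                 # nothing observed yet
        X = np.array([xi for xi, _ in self.window])
        y = [yi for _, yi in self.window]
        dist = np.linalg.norm(X - np.asarray(x, dtype=float), axis=1)
        nearest = np.argsort(dist)[: self.k]
        return Counter(y[i] for i in nearest).most_common(1)[0][0]

    def learn_one(self, x, y):
        self.window.append((np.asarray(x, dtype=float), y))


# Prequential (test-then-train) run on a synthetic stream with one abrupt drift.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model, correct, n = SlidingWindowKNN(k=5, w=200), 0, 2000
    for t in range(n):
        x = rng.normal(size=2)
        # The concept changes halfway through the stream (abrupt drift).
        y = int(x[0] + x[1] > 0) if t < n // 2 else int(x[0] - x[1] > 0)
        y_hat = model.predict(x)        # test first ...
        correct += int(y_hat == y)
        model.learn_one(x, y)           # ... then train
    print(f"prequential accuracy: {correct / n:.3f}")
```

In a sketch like this, w bounds both memory use and the per-prediction cost (distances to at most w stored instances), which is why the run-time behaviour of k-NN on streams is governed largely by the window size, while k mainly affects the smoothness of the decision and the reaction to drift.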