Discretisation in Lazy Learning Algorithms

Bibliographic Details
Published in: Artificial Intelligence Review, Vol. 11, No. 1–5, pp. 157–174
Main Author: Ting, Kai Ming
Format: Journal Article
Language: English
Published: Dordrecht: Springer Nature B.V., 01.02.1997
ISSN: 0269-2821; 1573-7462
DOI: 10.1023/A:1006504622008

More Information
Summary: This paper adopts the idea of discretising continuous attributes (Fayyad and Irani 1993) and applies it to lazy learning algorithms (Aha 1990; Aha, Kibler and Albert 1991). This approach converts continuous attributes into nominal attributes at the outset. We investigate the effects of this approach on the performance of lazy learning algorithms, examining it empirically on both real-world and artificial data to characterise the benefits of discretisation in lazy learning. Specifically, we show that discretisation achieves an effect of noise reduction and increases lazy learning algorithms' tolerance for irrelevant continuous attributes. The proposed approach constrains the representation space in lazy learning algorithms to hyper-rectangular regions that are orthogonal to the attribute axes. Our generally better results, obtained using a more restricted representation language, indicate that employing a powerful representation language in a learning algorithm is not always the best choice, as it can lead to a loss of accuracy.
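As a concrete illustration of the approach the abstract describes, below is a minimal Python sketch (not from the paper): each continuous attribute is replaced by a binary nominal attribute via an entropy-minimising cut, and a nearest-neighbour classifier then matches on the discretised values using the overlap (Hamming) metric. It uses a single binary split per attribute rather than the full recursive MDL criterion of Fayyad and Irani (1993), and all function names are illustrative.

```python
import numpy as np

def entropy(labels):
    # Shannon entropy of a class-label array.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_cut(values, labels):
    # Binary cut point minimising class-information entropy; a
    # single-split simplification of Fayyad and Irani's recursive method.
    order = np.argsort(values)
    v, y = values[order], labels[order]
    n = len(v)
    best, best_e = v[0], np.inf
    for i in range(1, n):
        if v[i] == v[i - 1]:
            continue  # candidate cuts lie between distinct adjacent values
        e = (i / n) * entropy(y[:i]) + ((n - i) / n) * entropy(y[i:])
        if e < best_e:
            best_e, best = e, (v[i - 1] + v[i]) / 2
    return best

def discretise(X, y):
    # Replace each continuous attribute with a binary nominal attribute.
    cuts = np.array([best_cut(X[:, j], y) for j in range(X.shape[1])])
    return (X > cuts).astype(int), cuts

def nn_predict(Xd, y, cuts, query):
    # 1-NN with the overlap (Hamming) metric on discretised attributes.
    q = (query > cuts).astype(int)
    return y[np.argmin((Xd != q).sum(axis=1))]

# Toy usage: two continuous attributes, two classes.
X = np.array([[1.2, 30.0], [0.9, 25.0], [3.1, 80.0], [2.8, 75.0]])
y = np.array([0, 0, 1, 1])
Xd, cuts = discretise(X, y)
print(nn_predict(Xd, y, cuts, np.array([3.0, 70.0])))  # -> 1
```

Because a query's neighbours are then determined solely by which side of each attribute's cut it falls on, the induced decision regions are exactly the hyper-rectangles orthogonal to the attribute axes mentioned in the abstract.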