Scaling up support vector data description by using core-sets

Bibliographic Details
Published in: 2004 IEEE International Joint Conference on Neural Networks, Vol. 1, pp. 425-430
Main Authors: Chu, C.S., Tsang, I.W., Kwok, J.T.
Format: Conference Proceeding
Language: English
Published: Piscataway, NJ: IEEE, 2004
ISBN: 0780383591; 9780780383593
ISSN: 1098-7576
DOI: 10.1109/IJCNN.2004.1379943


More Information
Summary: Support vector data description (SVDD) is a powerful kernel method that has been commonly used for novelty detection. While its quadratic programming formulation has the important computational advantage of avoiding the problem of local minima, it has a runtime complexity of O(N³), where N is the number of training patterns. It thus becomes prohibitive when the data set is large. Inspired by the use of core-sets in approximating the minimum enclosing ball problem in computational geometry, we propose an approximation method that allows SVDD to scale better to larger data sets. Most importantly, the proposed method has a running time that is only linear in N. Experimental results on two large real-world data sets demonstrate that the proposed method can handle data sets that are much larger than those that can be handled by standard SVDD packages, while its approximate solution still attains equally good, or sometimes even better, novelty detection performance.
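The core-set idea the summary refers to can be illustrated with the classic Badoiu-Clarkson iteration for a (1+eps)-approximate minimum enclosing ball: maintain a small core-set, repeatedly add the point farthest from the current center, and nudge the center toward it. The sketch below illustrates that general technique in plain Euclidean space; it is not the paper's kernelized SVDD algorithm, and the function name and parameters are ours. Each iteration scans the data once and the number of iterations depends only on eps, so for fixed eps the total cost is linear in N.

```python
import numpy as np

def approx_min_enclosing_ball(X, eps=0.1):
    """(1+eps)-approximate minimum enclosing ball via a core-set
    (Badoiu-Clarkson style iteration, Euclidean case).

    X   : (N, d) array of points
    eps : approximation parameter; roughly 1/eps**2 iterations suffice
    Returns (center, radius, core_set_indices).
    """
    c = X[0].astype(float).copy()          # start the center at an arbitrary point
    core_set = {0}
    iters = int(np.ceil(1.0 / eps**2))
    for k in range(1, iters + 1):
        d = np.linalg.norm(X - c, axis=1)  # one linear scan over the data
        far = int(np.argmax(d))            # farthest point from current center
        core_set.add(far)
        c = c + (X[far] - c) / (k + 1)     # move the center toward it
    radius = np.linalg.norm(X - c, axis=1).max()
    return c, radius, sorted(core_set)
```

The returned ball encloses all points by construction, and its radius is guaranteed to be within a (1+eps) factor of the optimal radius. The kernel trick turns this same geometric iteration into the SVDD setting, since SVDD's enclosing hypersphere in feature space is itself a minimum enclosing ball problem.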