A walk from 2-norm SVM to 1-norm SVM
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
Title of host publication: Proceedings of the 9th IEEE International Conference on Data Mining, ICDM 2009, Miami, Florida, USA, 6-9 December 2009
Publication status: Published - 2009
Publication type: A4 Article in a conference publication
This paper studies how well the standard 2-norm regularized SVM can approximate the 1-norm SVM problem. To this end, we examine a general method based on iteratively re-weighting the features and solving a 2-norm optimization problem. The convergence rate of this method is unknown, and previous work indicates that it may require an excessive number of iterations. We therefore study how well we can do with just a small number of iterations. In theory, convergence is fast except at coordinates of the current solution that are close to zero, and our empirical experiments confirm this. In many problems with irrelevant features, a single iteration is often enough to produce accuracy as good as or better than that of the 1-norm SVM. Hence, in such problems there is no need to converge all the way to the 1-norm SVM solution at near-zero coordinates. The benefit of this approach is that an approximate 1-norm regularized solver can be built on top of any 2-norm regularized solver. This is quick to implement, and the resulting solution inherits the good qualities of the underlying solver, such as scalability and stability.
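The re-weighting scheme the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: it assumes scikit-learn's `LinearSVC` as the off-the-shelf 2-norm solver, and a square-root re-weighting rule (a common choice in iteratively re-weighted schemes) that shrinks near-zero coordinates between iterations. The function name, the re-weighting rule, and the synthetic data are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def reweighted_l2_svm(X, y, n_iter=2, C=1.0, eps=1e-6):
    """Approximate a 1-norm SVM by repeatedly re-scaling the features
    and solving a standard 2-norm SVM on the scaled data (a sketch of
    the iterative re-weighting idea; details are assumptions)."""
    d = X.shape[1]
    scale = np.ones(d)  # per-feature scaling factors
    w = np.zeros(d)
    for _ in range(n_iter):
        clf = LinearSVC(C=C, dual=False)
        clf.fit(X * scale, y)            # solve the 2-norm problem
        w = clf.coef_.ravel() * scale    # weights in original coordinates
        # Re-weight: features with small |w_j| get scaled down further,
        # pushing irrelevant coordinates toward zero, as in a 1-norm SVM.
        scale = np.sqrt(np.abs(w)) + eps
    return w

# Usage on synthetic data where only the first two features are relevant.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])
w = reweighted_l2_svm(X, y, n_iter=2)
```

Even with `n_iter=1` or `2`, the weights on the eight irrelevant features are typically driven close to zero, which matches the abstract's observation that one iteration is often enough in problems with many irrelevant features.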