Support Vector Machines

Support vector machines (SVMs) evolved in the 1990s from pattern recognition research at Bell Labs. They can be used for either classification or regression, and they are especially useful with high-dimensional data, that is, when the number of potential predictors is very large.

The SVM algorithm depends on kernels: functions that implicitly map the input data into a high-dimensional feature space. Kernel functions can be linear or nonlinear. After mapping the input data, the SVM algorithm constructs one or more maximum-margin hyperplanes that separate the data into homogeneous subgroups.
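The mapping idea can be sketched with a toy example. The feature map and data points below are invented for illustration; a real SVM never computes the map explicitly, since the kernel function supplies the needed inner products directly.

```python
# Illustrative sketch (not a real SVM): a feature map that makes
# radially separated 2-D data linearly separable in 3-D.

def feature_map(x, y):
    """Map a 2-D point into 3-D: (x, y, x^2 + y^2)."""
    return (x, y, x * x + y * y)

# Two classes that no straight line in the plane can separate:
inner = [(0.1, 0.2), (-0.3, 0.1), (0.2, -0.2)]   # class -1, near the origin
outer = [(1.5, 0.0), (0.0, -1.4), (-1.2, 1.1)]   # class +1, far from the origin

# After mapping, the flat plane z = 1 separates the classes perfectly.
separable = (all(feature_map(x, y)[2] < 1 for x, y in inner) and
             all(feature_map(x, y)[2] > 1 for x, y in outer))
print(separable)  # True
```

Once the data is linearly separable in the mapped space, finding a separating hyperplane reduces to the linear case, which is what makes nonlinear kernels so powerful.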

Given its robustness with high-dimensional data, SVM is well suited to applications such as handwriting recognition, text categorization, and image tagging. In medical science, researchers have successfully applied SVM to the detection of tumors in breast images and the classification of complex proteins.

Commercial software packages that support SVM include Alpine Data Labs’ Alpine, IBM SPSS Modeler, Oracle Data Mining, SAS Enterprise Miner, and Statistica Data Miner. Open source options include Apache Spark MLlib, JKernelMachines, LIBSVM, and Vowpal Wabbit. For R users, there are a number of packages, including kernlab, SVMMaj, gcdnet, obliqueRF, MVpower, svcR, and rasclass; for Python users, SVM capabilities are included in scikit-learn and PyML.
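As a minimal sketch of one of the Python options named above, the snippet below fits a scikit-learn SVM classifier on a small invented data set; the points, labels, and parameter choices are assumptions for illustration only.

```python
# Minimal scikit-learn SVM sketch: fit a classifier on toy data.
from sklearn import svm

# Invented training set: two well-separated clusters in the plane.
X = [[0, 0], [0, 1], [1, 0], [3, 3], [3, 4], [4, 3]]
y = [0, 0, 0, 1, 1, 1]

# The kernel parameter selects the transformation:
# "linear", "poly", "rbf" (the default), and others.
clf = svm.SVC(kernel="linear", C=1.0)
clf.fit(X, y)

predictions = clf.predict([[0.5, 0.5], [3.5, 3.5]])
```

Swapping `kernel="linear"` for `kernel="rbf"` changes only the constructor argument; the fit/predict workflow stays the same, which makes it easy to compare kernels on the same data.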
