Support Vector Machines
Understanding Support Vector Machines:
- It was developed by the Russian mathematician Vladimir N. Vapnik for linear classification, and later extended to nonlinear classification by introducing the kernel trick, proposed by Bernhard Boser, Isabelle Guyon, and Vladimir Vapnik in 1992.
- Initially developed for classification, it was extended to regression by Harris Drucker, Christopher J. C. Burges, Linda Kaufman, Alexander J. Smola, and Vladimir N. Vapnik in 1996.
- It is one of the most powerful supervised machine learning algorithms. It works well with both linearly separable and nonlinearly separable data sets, and there is a substantial amount of mathematics behind it.
- We can use this algorithm for regression, classification, and outlier detection (anomaly detection).
- The algorithm maps all training data points into a feature space. In this space we can usually find more than one hyperplane that separates the points into the predefined classes, but we want the maximum-margin hyperplane (decision boundary), because new data points then have the least chance of being misclassified.
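The maximum-margin idea above can be sketched with a small example. This is a minimal sketch using scikit-learn's `SVC` (the notes do not name a library, so this choice and the toy data are illustrative assumptions). With a linear kernel, the learned weight vector `w` determines the hyperplane, and the geometric width of the margin is `2 / ||w||`:

```python
# Minimal sketch of a maximum-margin linear SVM (illustrative toy data).
import numpy as np
from sklearn.svm import SVC

# Two linearly separable classes in 2-D.
X = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 3.0], [4.0, 3.0]])
y = np.array([-1, -1, 1, 1])

# A large C approximates a hard-margin SVM on separable data.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

w = clf.coef_[0]                   # normal vector of the separating hyperplane
margin = 2.0 / np.linalg.norm(w)   # geometric width of the margin
print("margin width:", margin)
print("training accuracy:", clf.score(X, y))
```

The same `sklearn.svm` module also provides `SVR` for regression and `OneClassSVM` for outlier detection, matching the three uses listed above.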

