Preface

Support vector machines (SVMs), introduced by Vapnik in the early 1990s, have proved to be effective and promising techniques for data mining. Recent years have seen breakthroughs in both their theoretical study and the implementation of their algorithms. They have been successfully applied in many fields, such as text categorization, speech recognition, remote sensing image analysis, time series forecasting, and information security. Rooted in statistical learning theory (SLT) and optimization methods, SVMs have become powerful tools for solving machine learning problems with finite training points and for overcoming traditional difficulties such as the "curse of dimensionality" and "over-fitting". Their theoretical foundation and implementation techniques have been established, and SVMs are gaining rapid development and popularity owing to many attractive features: elegant mathematical representations, geometrical explanations, good generalization ability, and promising empirical performance.

Several SVM monographs, including sophisticated ones such as Cristianini & Shawe-Taylor [39] and Schölkopf & Smola [124], have been published. Since 2004 we have published two books on SVMs with Science Press of China [42, 43], which attracted widespread attention and received favorable comments. After several years of research and teaching, we decided to rewrite those books and add our new research achievements. The starting point and focus of this book is optimization theory, which distinguishes it from other books on SVMs. Optimization is one of the pillars on which SVMs are built, so it makes a lot of sense to consider them from this point of view.