64,19 €*
Free shipping via Post / DHL
Currently unavailable
1. Linear algebra and its applications: The chapters focus on the basics of linear algebra together with their common applications to singular value decomposition, matrix factorization, similarity matrices (kernel methods), and graph analysis. Numerous machine learning applications have been used as examples, such as spectral clustering, kernel-based classification, and outlier detection. The tight integration of linear algebra methods with examples from machine learning differentiates this book from generic volumes on linear algebra. The focus is clearly on the aspects of linear algebra most relevant to machine learning and on teaching readers how to apply these concepts.
2. Optimization and its applications: Much of machine learning is posed as an optimization problem in which we try to maximize the accuracy of regression and classification models. The "parent problem" of optimization-centric machine learning is least-squares regression. Interestingly, this problem arises in both linear algebra and optimization, and is one of the key connecting problems of the two fields. Least-squares regression is also the starting point for support vector machines, logistic regression, and recommender systems. Furthermore, the methods for dimensionality reduction and matrix factorization also require the development of optimization methods. A general view of optimization in computational graphs is discussed together with its applications to backpropagation in neural networks.
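To make the "parent problem" above concrete, here is a minimal NumPy sketch (not taken from the book) that solves the same least-squares problem along both routes the description mentions: once with linear algebra via the normal equations, and once with optimization via gradient descent. The synthetic data, learning rate, and iteration count are illustrative assumptions.

```python
# Minimal sketch: least-squares regression as the problem connecting
# linear algebra and optimization. Data and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # design matrix: 100 samples, 3 features
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)    # noisy targets

# Linear-algebra route: minimize ||Xw - y||^2 by solving the normal equations
# X^T X w = X^T y (np.linalg.lstsq computes this solution stably via the SVD).
w_linalg, *_ = np.linalg.lstsq(X, y, rcond=None)

# Optimization route: gradient descent on the same squared loss,
# using the gradient (2/n) X^T (Xw - y).
w_gd = np.zeros(3)
learning_rate = 0.1
for _ in range(500):
    grad = 2 * X.T @ (X @ w_gd - y) / len(y)
    w_gd -= learning_rate * grad

# Both routes converge to the same coefficients.
print(np.allclose(w_linalg, w_gd, atol=1e-4))  # True
```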
A frequent challenge faced by beginners in machine learning is the extensive background required in linear algebra and optimization. One problem is that the existing linear algebra and optimization courses are not specific to machine learning; therefore, one would typically have to complete more course material than is necessary to pick up machine learning. Furthermore, certain types of ideas and tricks from optimization and linear algebra recur more frequently in machine learning than in other application-centric settings. Therefore, there is significant value in developing a view of linear algebra and optimization that is better suited to the specific perspective of machine learning.
Charu C. Aggarwal is a Distinguished Research Staff Member (DRSM) at the IBM T. J. Watson Research Center in Yorktown Heights, New York. He completed his undergraduate degree in Computer Science from the Indian Institute of Technology at Kanpur in 1993 and his Ph.D. in Operations Research from the Massachusetts Institute of Technology in 1996. He has published more than 400 papers in refereed conferences and journals and has applied for or been granted more than 80 patents. He is the author or editor of 19 books, including textbooks on data mining, neural networks, machine learning (for text), recommender systems, and outlier analysis. Because of the commercial value of his patents, he has thrice been designated a Master Inventor at IBM. He has received several internal and external awards, including the EDBT Test-of-Time Award (2014), the IEEE ICDM Research Contributions Award (2015), and the ACM SIGKDD Innovation Award (2019). He has served as editor-in-chief of ACM SIGKDD Explorations and is currently editor-in-chief of ACM Transactions on Knowledge Discovery from Data. He is a fellow of SIAM, ACM, and IEEE, cited for "contributions to knowledge discovery and data mining algorithms."
First textbook to provide an integrated treatment of linear algebra and optimization with a special focus on machine learning issues
Includes many examples to simplify the exposition and facilitate learning
Complemented by examples and exercises throughout the book; a solution manual for the end-of-chapter exercises is available to instructors
Includes supplementary material: [...]
Preface.- 1 Linear Algebra and Optimization: An Introduction.- 2 Linear Transformations and Linear Systems.- 3 Eigenvectors and Diagonalizable Matrices.- 4 Optimization Basics: A Machine Learning View.- 5 Advanced Optimization Solutions.- 6 Constrained Optimization and Duality.- 7 Singular Value Decomposition.- 8 Matrix Factorization.- 9 The Linear Algebra of Similarity.- 10 The Linear Algebra of Graphs.- 11 Optimization in Computational Graphs.- Index.
| Year of publication | 2020 |
| --- | --- |
| Genre | Computer Science, Mathematics, Medicine, Natural Sciences, Technology |
| Category | Natural Sciences & Technology |
| Medium | Book |
| Contents | xxi + 495 pages, 93 illustrations (67 b/w, 26 in color) |
| ISBN-13 | 9783030403430 |
| ISBN-10 | 3030403432 |
| Language | English |
| Finish / extras | Hardcover, rounded spine, laminated |
| Binding | Hardcover |
| Author | Aggarwal, Charu C. |
| Edition | 1st ed. 2020 |
| Publisher | Springer International Publishing (Springer International Publishing AG) |
| Dimensions | 260 x 183 x 34 mm |
| By | Charu C. Aggarwal |
| Publication date | 13 May 2020 |
| Weight | 1.165 kg |