Preface xv
About the Companion Website xviii

Part I Mathematical Review 1

1 Methods of Proof and Some Notation 3
1.1 Methods of Proof 3
1.2 Notation 5
Exercises 5

2 Vector Spaces and Matrices 7
2.1 Vector and Matrix 7
2.2 Rank of a Matrix 11
2.3 Linear Equations 16
2.4 Inner Products and Norms 18
Exercises 20

3 Transformations 23
3.1 Linear Transformations 23
3.2 Eigenvalues and Eigenvectors 24
3.3 Orthogonal Projections 26
3.4 Quadratic Forms 27
3.5 Matrix Norms 32
Exercises 35

4 Concepts from Geometry 39
4.1 Line Segments 39
4.2 Hyperplanes and Linear Varieties 39
4.3 Convex Sets 41
4.4 Neighborhoods 43
4.5 Polytopes and Polyhedra 44
Exercises 45

5 Elements of Calculus 47
5.1 Sequences and Limits 47
5.2 Differentiability 52
5.3 The Derivative Matrix 54
5.4 Differentiation Rules 57
5.5 Level Sets and Gradients 58
5.6 Taylor Series 61
Exercises 65

Part II Unconstrained Optimization 67

6 Basics of Set-Constrained and Unconstrained Optimization 69
6.1 Introduction 69
6.2 Conditions for Local Minimizers 70
Exercises 78

7 One-Dimensional Search Methods 87
7.1 Introduction 87
7.2 Golden Section Search 87
7.3 Fibonacci Method 91
7.4 Bisection Method 97
7.5 Newton's Method 98
7.6 Secant Method 101
7.7 Bracketing 103
7.8 Line Search in Multidimensional Optimization 103
Exercises 105

8 Gradient Methods 109
8.1 Introduction 109
8.2 Steepest Descent Method 110
8.3 Analysis of Gradient Methods 117
Exercises 126

9 Newton's Method 133
9.1 Introduction 133
9.2 Analysis of Newton's Method 135
9.3 Levenberg-Marquardt Modification 138
9.4 Newton's Method for Nonlinear Least Squares 139
Exercises 142

10 Conjugate Direction Methods 145
10.1 Introduction 145
10.2 Conjugate Direction Algorithm 146
10.2.1 Basic Conjugate Direction Algorithm 146
10.3 Conjugate Gradient Algorithm 151
10.4 Conjugate Gradient Algorithm for Nonquadratic Problems 154
Exercises 156

11 Quasi-Newton Methods 159
11.1 Introduction 159
11.2 Approximating the Inverse Hessian 160
11.3 Rank One Correction Formula 162
11.4 DFP Algorithm 166
11.5 BFGS Algorithm 170
Exercises 173

12 Solving Linear Equations 179
12.1 Least-Squares Analysis 179
12.2 Recursive Least-Squares Algorithm 187
12.3 Solution to a Linear Equation with Minimum Norm 190
12.4 Kaczmarz's Algorithm 191
12.5 Solving Linear Equations in General 194
Exercises 201

13 Unconstrained Optimization and Neural Networks 209
13.1 Introduction 209
13.2 Single-Neuron Training 211
13.3 Backpropagation Algorithm 213
Exercises 222

14 Global Search Algorithms 225
14.1 Introduction 225
14.2 Nelder-Mead Simplex Algorithm 225
14.3 Simulated Annealing 229
14.3.1 Randomized Search 229
14.3.2 Simulated Annealing Algorithm 229
14.4 Particle Swarm Optimization 231
14.4.1 Basic PSO Algorithm 232
14.4.2 Variations 233
14.5 Genetic Algorithms 233
14.5.1 Basic Description 233
14.5.1.1 Chromosomes and Representation Schemes 234
14.5.1.2 Selection and Evolution 234
14.5.2 Analysis of Genetic Algorithms 238
14.5.3 Real-Number Genetic Algorithms 243
Exercises 244

Part III Linear Programming 247

15 Introduction to Linear Programming 249
15.1 Brief History of Linear Programming 249
15.2 Simple Examples of Linear Programs 250
15.3 Two-Dimensional Linear Programs 256
15.4 Convex Polyhedra and Linear Programming 258
15.5 Standard Form Linear Programs 260
15.6 Basic Solutions 264
15.7 Properties of Basic Solutions 267
15.8 Geometric View of Linear Programs 269
Exercises 273

16 Simplex Method 277
16.1 Solving Linear Equations Using Row Operations 277
16.2 The Canonical Augmented Matrix 283
16.3 Updating the Augmented Matrix 284
16.4 The Simplex Algorithm 285
16.5 Matrix Form of the Simplex Method 291
16.6 Two-Phase Simplex Method 294
16.7 Revised Simplex Method 297
Exercises 301

17 Duality 309
17.1 Dual Linear Programs 309
17.2 Properties of Dual Problems 316
17.3 Matrix Games 321
Exercises 324

18 Nonsimplex Methods 331
18.1 Introduction 331
18.2 Khachiyan's Method 332
18.3 Affine Scaling Method 334
18.3.1 Basic Algorithm 334
18.3.2 Two-Phase Method 337
18.4 Karmarkar's Method 339
18.4.1 Basic Ideas 339
18.4.2 Karmarkar's Canonical Form 339
18.4.3 Karmarkar's Restricted Problem 341
18.4.4 From General Form to Karmarkar's Canonical Form 342
18.4.5 The Algorithm 345
Exercises 349

19 Integer Linear Programming 351
19.1 Introduction 351
19.2 Unimodular Matrices 351
19.3 The Gomory Cutting-Plane Method 358
Exercises 366

Part IV Nonlinear Constrained Optimization 369

20 Problems with Equality Constraints 371
20.1 Introduction 371
20.2 Problem Formulation 373
20.3 Tangent and Normal Spaces 374
20.4 Lagrange Condition 379
20.5 Second-Order Conditions 387
20.6 Minimizing Quadratics Subject to Linear Constraints 390
Exercises 394

21 Problems with Inequality Constraints 399
21.1 Karush-Kuhn-Tucker Condition 399
21.2 Second-Order Conditions 406
Exercises 410

22 Convex Optimization Problems 417
22.1 Introduction 417
22.2 Convex Functions 419
22.3 Convex Optimization Problems 426
22.4 Semidefinite Programming 431
22.4.1 Linear Matrix Inequalities and Their Properties 431
22.4.2 LMI Solvers 435
22.4.2.1 Finding a Feasible Solution Under LMI Constraints 436
22.4.2.2 Minimizing a Linear Objective Under LMI Constraints 438
22.4.2.3 Minimizing a Generalized Eigenvalue Under LMI Constraints 440
Exercises 442

23 Lagrangian Duality 449
23.1 Overview 449
23.2 Notation 449
23.3 Primal-Dual Pair 450
23.4 General Duality Properties 451
23.4.1 Convexity of Dual Problem 451
23.4.2 Primal Objective in Terms of Lagrangian 451
23.4.3 Minimax Inequality Chain 452
23.4.4 Optimality of Saddle Point 452
23.4.5 Weak Duality 453
23.4.6 Duality Gap 453
23.5 Strong Duality 454
23.5.1 Strong Duality <==> Minimax Equals Maximin 454
23.5.2 Strong Duality ==> Primal Unconstrained Minimization 455
23.5.3 Strong Duality ==> Optimality 455
23.5.4 Strong Duality ==> KKT (Including Complementary Slackness) 455
23.5.5 Strong Duality ==> Saddle Point 456
23.6 Convex Case 456
23.6.1 Convex Case: KKT ==> Strong Duality 456
23.6.2 Convex Case: Regular Optimal Primal ==> Strong Duality 457
23.6.3 Convex Case: Slater's Condition ==> Strong Duality 457
23.7 Summary of Key Results 457
Exercises 458

24 Algorithms for Constrained Optimization 459
24.1 Introduction 459
24.2 Projections 459
24.3 Projected Gradient Methods with Linear Constraints 462
24.4 Convergence of Projected Gradient Algorithms 465
24.4.1 Fixed Points and First-Order Necessary Conditions 466
24.4.2 Convergence with Fixed Step Size 468
24.4.3 Some Properties of Projections 469
24.4.4 Armijo Condition 470
24.4.5 Accumulation Points 471
24.4.6 Projections in the Convex Case 472
24.4.7 Armijo Condition in the Convex Case 474
24.4.8 Convergence in the Convex Case 480
24.4.9 Convergence Rate with Line-Search Step Size 481
24.5 Lagrangian Algorithms 483
24.5.1 Lagrangian Algorithm for Equality Constraints 484
24.5.2 Lagrangian Algorithm for Inequality Constraints 486
24.6 Penalty Methods 489
Exercises 495

25 Multiobjective Optimization 499
25.1 Introduction 499
25.2 Pareto Solutions 499
25.3 Computing the Pareto Front 501
25.4 From Multiobjective to Single-Objective Optimization 505
25.5 Uncertain Linear Programming Problems 508
25.5.1 Uncertain Constraints 508
25.5.2 Uncertain Objective Function Coefficients 511
25.5.3 Uncertain Constraint Coefficients 513
25.5.4 General Uncertainties 513
Exercises 513

Part V Optimization in Machine Learning 517

26 Machine Learning Problems and Feature Engineering 519
26.1 Machine Learning Problems 519
26.1.1 Data with Labels and Supervised Learning 519
26.1.2 Data Without Labels and Unsupervised Learning 521
26.2 Data Normalization 522
26.3 Histogram of Oriented Gradients 524
26.4 Principal Component Analysis and Linear Autoencoder 526
26.4.1 Singular Value Decomposition 526
26.4.2 Principal Axes and Principal Components of a Data Set 527
26.4.3 Linear Autoencoder 529
Exercises 530

27 Stochastic Gradient Descent Algorithms 537
27.1 Stochastic Gradient Descent Algorithm 537
27.2 Stochastic Variance Reduced Gradient Algorithm 540
27.3 Distributed Stochastic Variance Reduced Gradient 542
27.3.1 Distributed Learning Environment 542
27.3.2 SVRG in Distributed Optimization 543
27.3.3 Communication Versus Computation 545
27.3.4 Data Security 545
Exercises 546

28 Linear Regression and Its Variants 553
28.1 Least-Squares Linear Regression 553
28.1.1 A Linear Model for Prediction 553
28.1.2 Training the Model 554
28.1.3 Computing Optimal w 554
28.1.4 Optimal Predictor and Performance Evaluation 555
28.1.5 Least-Squares Linear Regression for Data Sets with Vector Labels 556
28.2 Model Selection by Cross-Validation 559
28.3 Model Selection by Regularization 562
Exercises 564

29 Logistic Regression for Classification 569
29.1 Logistic Regression for Binary Classification 569
29.1.1 Least-Squares Linear Regression for Binary Classification 569
29.1.2 Logistic Regression for Binary Classification 570
29.1.3 Interpreting Logistic Regression by Log Error 572
29.1.4 Confusion Matrix for Binary Classification 573
29.2 Nonlinear Decision Boundary via Linear Regression 575
29.2.1 Least-Squares Linear Regression with Nonlinear Transformation 576
29.2.2 Logistic Regression with Nonlinear Transformation 578
29.3 Multicategory Classification 580
29.3.1 One-Versus-All Multicategory Classification 580
29.3.2 Softmax Regression for Multicategory Classification 581
Exercises 584

30 Support Vector Machines 589
30.1 Hinge-Loss Functions 589
30.1.1 Geometric Interpretation of the Linear Model 589
30.1.2 Hinge Loss for Binary Data Sets 590
30.1.3 Hinge Loss for Multicategory Data Sets 592
30.2 Classification by Minimizing Hinge Loss 593
30.2.1 Binary Classification by Minimizing Average Hinge Loss 593
30.2.2 Multicategory Classification by Minimizing E_hww or E_hcs 594
30.3 Support Vector Machines for Binary Classification 596
30.3.1 Hard-Margin Support Vector Machines 596
30.3.2 Support Vectors 598
30.3.3 Soft-Margin Support Vector Machines 599
30.3.4 Connection to Hinge-Loss Minimization 602
30.4 Support Vector Machines for Multicategory Classification 602
30.5 Kernel Trick 603
30.5.1 Kernels 603
30.5.2 Kernel Trick 604
30.5.3 Learning with Kernels 605
30.5.3.1 Regularized Logistic Regression with Nonlinear Transformation for Binary Classification 605
30.5.3.2 Regularized Hinge-Loss Minimization for Binary Classification 606
Exercises 607

31 K-Means Clustering 611
31.1 K-Means Clustering 611
31.2 K-Means++ for Center Initialization 615
31.3 Variants of K-Means Clustering 617
31.3.1 K-Means Clustering Based on 1-Norm Regularization 617
31.3.2 PCA-Guided K-Means Clustering 619
31.4 Image Compression by Vector Quantization and K-Means Clustering 622
Exercises 623

References 627
Index 635
Edwin K. P. Chong, PhD, is Professor and Head of Electrical and Computer Engineering and Professor of Mathematics at Colorado State University. He is a Fellow of the IEEE and AAAS, and was Senior Editor of the IEEE Transactions on Automatic Control.

Wu-Sheng Lu, PhD, is Professor Emeritus of Electrical and Computer Engineering at the University of Victoria, Canada. He is a Fellow of the IEEE and a former Associate Editor of the IEEE Transactions on Circuits and Systems.

Stanislaw H. Żak, PhD, is Professor in the School of Electrical and Computer Engineering at Purdue University. He is a former Associate Editor of Dynamics and Control and the IEEE Transactions on Neural Networks.