ISBN-13: 9781119546368 / English / Hardcover / 2019 / 416 pp.
Table of Contents

Introduction xi
Acknowledgments xv

1 Stationary Processes and Time Series 1
  1.1 Introduction 1
  1.2 The Prediction Problem 1
  1.3 Random Variable 4
  1.4 Random Vector 5
    1.4.1 Covariance Coefficient 7
  1.5 Stationary Process 9
  1.6 White Process 11
  1.7 MA Process 12
  1.8 AR Process 16
    1.8.1 Study of the AR(1) Process 16
  1.9 Yule-Walker Equations 20
    1.9.1 Yule-Walker Equations for the AR(1) Process 20
    1.9.2 Yule-Walker Equations for the AR(2) and AR(n) Process 21
  1.10 ARMA Process 23
  1.11 Spectrum of a Stationary Process 24
    1.11.1 Spectrum Properties 24
    1.11.2 Spectral Diagram 25
    1.11.3 Maximum Frequency in Discrete Time 25
    1.11.4 White Noise Spectrum 25
    1.11.5 Complex Spectrum 26
  1.12 ARMA Model: Stability Test and Variance Computation 26
    1.12.1 Ruzicka Stability Criterion 28
    1.12.2 Variance of an ARMA Process 32
  1.13 Fundamental Theorem of Spectral Analysis 35
  1.14 Spectrum Drawing 38
  1.15 Proof of the Fundamental Theorem of Spectral Analysis 43
  1.16 Representations of a Stationary Process 45

2 Estimation of Process Characteristics 47
  2.1 Introduction 47
  2.2 General Properties of the Covariance Function 47
  2.3 Covariance Function of ARMA Processes 49
  2.4 Estimation of the Mean 50
  2.5 Estimation of the Covariance Function 53
  2.6 Estimation of the Spectrum 55
  2.7 Whiteness Test 57

3 Prediction 61
  3.1 Introduction 61
  3.2 Fake Predictor 62
    3.2.1 Practical Determination of the Fake Predictor 64
  3.3 Spectral Factorization 66
  3.4 Whitening Filter 70
  3.5 Optimal Predictor from Data 71
  3.6 Prediction of an ARMA Process 76
  3.7 ARMAX Process 77
  3.8 Prediction of an ARMAX Process 78

4 Model Identification 81
  4.1 Introduction 81
  4.2 Setting the Identification Problem 82
    4.2.1 Learning from Maxwell 82
    4.2.2 A General Identification Problem 84
  4.3 Static Modeling 85
    4.3.1 Learning from Gauss 85
    4.3.2 Least Squares Made Simple 86
      4.3.2.1 Trend Search 86
      4.3.2.2 Seasonality Search 86
      4.3.2.3 Linear Regression 87
    4.3.3 Estimating the Expansion of the Universe 90
  4.4 Dynamic Modeling 92
  4.5 External Representation Models 92
    4.5.1 Box and Jenkins Model 92
    4.5.2 ARX and AR Models 93
    4.5.3 ARMAX and ARMA Models 94
    4.5.4 Multivariable Models 96
  4.6 Internal Representation Models 96
  4.7 The Model Identification Process 100
  4.8 The Predictive Approach 101
  4.9 Models in Predictive Form 102
    4.9.1 Box and Jenkins Model 103
    4.9.2 ARX and AR Models 103
    4.9.3 ARMAX and ARMA Models 104

5 Identification of Input-Output Models 107
  5.1 Introduction 107
  5.2 Estimating AR and ARX Models: The Least Squares Method 107
  5.3 Identifiability 110
    5.3.1 The R Matrix for the ARX(1, 1) Model 111
    5.3.2 The R Matrix for a General ARX Model 112
  5.4 Estimating ARMA and ARMAX Models 115
    5.4.1 Computing the Gradient and the Hessian from Data 117
  5.5 Asymptotic Analysis 123
    5.5.1 Data Generation System Within the Class of Models 125
    5.5.2 Data Generation System Outside the Class of Models 127
      5.5.2.1 Simulation Trial 132
    5.5.3 General Considerations on the Asymptotics of Predictive Identification 132
    5.5.4 Estimating the Uncertainty in Parameter Estimation 132
      5.5.4.1 Deduction of the Formula of the Estimation Covariance 134
  5.6 Recursive Identification 138
    5.6.1 Recursive Least Squares 138
    5.6.2 Recursive Maximum Likelihood 143
    5.6.3 Extended Least Squares 145
  5.7 Robustness of Identification Methods 147
    5.7.1 Prediction Error and Model Error 147
    5.7.2 Frequency Domain Interpretation 148
    5.7.3 Prefiltering 149
  5.8 Parameter Tracking 149

6 Model Complexity Selection 155
  6.1 Introduction 155
  6.2 Cross-validation 157
  6.3 FPE Criterion 157
    6.3.1 FPE Concept 157
    6.3.2 FPE Determination 158
  6.4 AIC Criterion 160
    6.4.1 AIC Versus FPE 161
  6.5 MDL Criterion 161
    6.5.1 MDL Versus AIC 162
  6.6 Durbin-Levinson Algorithm 164
    6.6.1 Yule-Walker Equations for Autoregressive Models of Orders 1 and 2 165
    6.6.2 Durbin-Levinson Recursion: From AR(1) to AR(2) 166
    6.6.3 Durbin-Levinson Recursion for Models of Any Order 169
    6.6.4 Partial Covariance Function 171

7 Identification of State Space Models 173
  7.1 Introduction 173
  7.2 Hankel Matrix 175
  7.3 Order Determination 176
  7.4 Determination of Matrices G and H 177
  7.5 Determination of Matrix F 178
  7.6 Mid Summary: An Ideal Procedure 179
  7.7 Order Determination with SVD 179
  7.8 Reliable Identification of a State Space Model 181

8 Predictive Control 187
  8.1 Introduction 187
  8.2 Minimum Variance Control 188
    8.2.1 Determination of the MV Control Law 190
    8.2.2 Analysis of the MV Control System 192
      8.2.2.1 Structure 193
      8.2.2.2 Stability 193
  8.3 Generalized Minimum Variance Control 196
    8.3.1 Model Reference Control 198
    8.3.2 Penalized Control Design 200
      8.3.2.1 Choice A for Q(z) 201
      8.3.2.2 Choice B for Q(z) 203
  8.4 Model-Based Predictive Control 204
  8.5 Data-Driven Control Synthesis 205

9 Kalman Filtering and Prediction 209
  9.1 Introduction 209
  9.2 Kalman Approach to Prediction and Filtering Problems 210
  9.3 The Bayes Estimation Problem 212
    9.3.1 Bayes Problem - Scalar Case 213
    9.3.2 Bayes Problem - Vector Case 215
    9.3.3 Recursive Bayes Formula - Scalar Case 215
    9.3.4 Innovation 217
    9.3.5 Recursive Bayes Formula - Vector Case 219
    9.3.6 Geometric Interpretation of Bayes Estimation 220
      9.3.6.1 Geometric Interpretation of the Bayes Batch Formula 220
      9.3.6.2 Geometric Interpretation of the Recursive Bayes Formula 222
  9.4 One-step-ahead Kalman Predictor 223
    9.4.1 The Innovation in the State Prediction Problem 224
    9.4.2 The State Prediction Error 224
    9.4.3 Optimal One-Step-Ahead Prediction of the Output 225
    9.4.4 Optimal One-Step-Ahead Prediction of the State 226
    9.4.5 Riccati Equation 228
    9.4.6 Initialization 231
    9.4.7 One-step-ahead Optimal Predictor Summary 232
    9.4.8 Generalizations 236
      9.4.8.1 System 236
      9.4.8.2 Predictor 236
  9.5 Multistep Optimal Predictor 237
  9.6 Optimal Filter 239
  9.7 Steady-State Predictor 240
    9.7.1 Gain Convergence 241
    9.7.2 Convergence of the Riccati Equation Solution 244
      9.7.2.1 Convergence Under Stability 244
      9.7.2.2 Convergence Without Stability 246
      9.7.2.3 Observability 250
      9.7.2.4 Reachability 251
      9.7.2.5 General Convergence Result 256
  9.8 Innovation Representation 265
  9.9 Innovation Representation Versus Canonical Representation 266
  9.10 K-Theory Versus K-W Theory 267
  9.11 Extended Kalman Filter - EKF 271
  9.12 The Robust Approach to Filtering 273
    9.12.1 Norm of a Dynamic System 274
    9.12.2 Robust Filtering 276

10 Parameter Identification in a Given Model 281
  10.1 Introduction 281
  10.2 Kalman Filter-Based Approaches 281
  10.3 Two-Stage Method 284
    10.3.1 First Stage - Data Generation and Compression 285
    10.3.2 Second Stage - Compressed Data Fitting 287

11 Case Studies 291
  11.1 Introduction 291
  11.2 Kobe Earthquake Data Analysis 291
    11.2.1 Modeling the Normal Seismic Activity Data 294
    11.2.2 Model Validation 296
    11.2.3 Analysis of the Transition Phase via Detection Techniques 299
    11.2.4 Conclusions 300
  11.3 Estimation of a Sinusoid in Noise 300
    11.3.1 Frequency Estimation by Notch Filter Design 301
    11.3.2 Frequency Estimation with EKF 305

Appendix A Linear Dynamical Systems 309
  A.1 State Space and Input-Output Models 309
    A.1.1 Characteristic Polynomial and Eigenvalues 309
    A.1.2 Operator Representation 310
    A.1.3 Transfer Function 310
    A.1.4 Zeros, Poles, and Eigenvalues 310
    A.1.5 Relative Degree 311
    A.1.6 Equilibrium Point and System Gain 311
  A.2 Lagrange Formula 312
  A.3 Stability 312
  A.4 Impulse Response 313
    A.4.1 Impulse Response from a State Space Model 314
    A.4.2 Impulse Response from an Input-Output Model 314
    A.4.3 Quadratic Summability of the Impulse Response 315
  A.5 Frequency Response 315
  A.6 Multiplicity of State Space Models 316
    A.6.1 Change of Basis 316
    A.6.2 Redundancy in the System Order 317
  A.7 Reachability and Observability 318
    A.7.1 Reachability 318
    A.7.2 Observability 320
    A.7.3 PBH Test of Reachability and Observability 321
  A.8 System Decomposition 323
    A.8.1 Reachability and Observability Decompositions 323
    A.8.2 Canonical Decomposition 324
  A.9 Stabilizability and Detectability 328

Appendix B Matrices 331
  B.1 Basics 331
  B.2 Eigenvalues 335
  B.3 Determinant and Inverse 337
  B.4 Rank 340
  B.5 Annihilating Polynomial 342
  B.6 Algebraic and Geometric Multiplicity 345
  B.7 Range and Null Space 345
  B.8 Quadratic Forms 346
  B.9 Derivative of a Scalar Function with Respect to a Vector 349
  B.10 Matrix Diagonalization via Similarity 350
  B.11 Matrix Diagonalization via Singular Value Decomposition 351
  B.12 Matrix Norm and Condition Number 353

Appendix C Problems and Solutions 357

Bibliography 391
Index 397
SERGIO BITTANTI is Emeritus Professor of Model Identification and Data Analysis (MIDA) at the Politecnico di Milano, Milan, Italy, where his intense activity of research and teaching has attracted the attention of many young researchers. He began teaching the MIDA course years ago with just a few students; today the course is offered in several sections with about one thousand students. He has organized a number of workshops and conferences and has served on the program committees of more than 70 international congresses. He has been associated for many years with the National Research Council (CNR) of Italy and is a member of the Academy of Science and Literature of Milan (Istituto Lombardo - Accademia di Scienze e Lettere). He has received many awards, in particular the title of Ambassador of the city of Milan and the medal of the President of the Italian Republic for the IFAC World Congress held in Milan in 2011, which drew a record number of attendees from 73 countries. Website: http://home.deib.polimi.it/bittanti/