ISBN-13: 9781119417385 / English / Hardcover / 2021 / 560 pp.
Preface xvii

1. Introduction to Big Dependent Data 1
1.1 Examples of Dependent Data 2
1.2 Stochastic Processes 9
1.2.1 Scalar Processes 9
1.2.1.1 Stationarity 10
1.2.1.2 White Noise Process 12
1.2.1.3 Conditional Distribution 12
1.2.2 Vector Processes 12
1.2.2.1 Vector White Noises 15
1.2.2.2 Invertibility 15
1.3 Sample Moments of Stationary Vector Process 15
1.3.1 Sample Mean 16
1.3.2 Sample Covariance and Correlation Matrices 17
1.4 Nonstationary Processes 21
1.5 Principal Component Analysis 23
1.5.1 Discussion 26
1.5.2 Properties of the PCs 27
1.6 Effects of Serial Dependence 31
Appendix 1.A: Some Matrix Theory 34
Exercises 35
References 36

2. Linear Univariate Time Series 37
2.1 Visualizing a Large Set of Time Series 39
2.1.1 Dynamic Plots 39
2.1.2 Static Plots 44
2.2 Stationary ARMA Models 49
2.2.1 The Autoregressive Process 50
2.2.1.1 Autocorrelation Functions 51
2.2.2 The Moving Average Process 52
2.2.3 The ARMA Process 54
2.2.4 Linear Combinations of ARMA Processes 55
2.3 Spectral Analysis of Stationary Processes 58
2.3.1 Fitting Harmonic Functions to a Time Series 58
2.3.2 The Periodogram 59
2.3.3 The Spectral Density Function and Its Estimation 61
2.4 Integrated Processes 64
2.4.1 The Random Walk Process 64
2.4.2 ARIMA Models 65
2.4.3 Seasonal ARIMA Models 67
2.4.3.1 The Airline Model 69
2.5 Structural and State Space Models 71
2.5.1 Structural Time Series Models 71
2.5.2 State-Space Models 72
2.5.3 The Kalman Filter 76
2.6 Forecasting with Linear Models 78
2.6.1 Computing Optimal Predictors 78
2.6.2 Variances of the Predictions 80
2.6.3 Measuring Predictability 81
2.7 Modeling a Set of Time Series 82
2.7.1 Data Transformation 83
2.7.2 Testing for White Noise 85
2.7.3 Determination of the Difference Order 85
2.7.4 Model Identification 87
2.8 Estimation and Information Criteria 87
2.8.1 Conditional Likelihood 87
2.8.2 On-line Estimation 88
2.8.3 Maximum Likelihood (ML) Estimation 90
2.8.4 Model Selection 91
2.8.4.1 The Akaike Information Criterion (AIC) 91
2.8.4.2 The Bayesian Information Criterion (BIC) 92
2.8.4.3 Other Criteria 92
2.8.4.4 Cross-Validation 93
2.9 Diagnostic Checking 95
2.9.1 Residual Plot 96
2.9.2 Portmanteau Test for Residual Serial Correlations 96
2.9.3 Homoscedastic Tests 97
2.9.4 Normality Tests 98
2.9.5 Checking for Deterministic Components 98
2.10 Forecasting 100
2.10.1 Out-of-Sample Forecasts 100
2.10.2 Forecasting with Model Averaging 100
2.10.3 Forecasting with Shrinkage Estimators 102
Appendix 2.A: Difference Equations 103
Exercises 108
References 108

3. Analysis of Multivariate Time Series 111
3.1 Transfer Function Models 112
3.1.1 Single Input and Single Output 112
3.1.2 Multiple Inputs and Multiple Outputs 118
3.2 Vector AR Models 118
3.2.1 Impulse Response Function 120
3.2.2 Some Special Cases 121
3.2.3 Estimation 122
3.2.4 Model Building 123
3.2.5 Prediction 125
3.2.6 Forecast Error Variance Decomposition 127
3.3 Vector Moving-Average Models 135
3.3.1 Properties of VMA Models 136
3.3.2 VMA Modeling 136
3.4 Stationary VARMA Models 140
3.4.1 Are VAR Models Sufficient? 140
3.4.2 Properties of VARMA Models 141
3.4.3 Modeling VARMA Process 141
3.4.4 Use of VARMA Models 142
3.5 Unit Roots and Co-Integration 147
3.5.1 Spurious Regression 148
3.5.2 Linear Combinations of a Vector Process 148
3.5.3 Co-integration 149
3.5.4 Over-Differencing 150
3.6 Error-Correction Models 151
3.6.1 Co-integration Test 152
Exercises 157
References 157

4. Handling Heterogeneity in Many Time Series 161
4.1 Intervention Analysis 162
4.1.1 Intervention with Indicator Variables 163
4.1.2 Intervention with Step Functions 165
4.1.3 Intervention with General Exogenous Variables 166
4.1.4 Building an Intervention Model 166
4.2 Estimation of Missing Values 170
4.2.1 Univariate Interpolation 170
4.2.2 Multivariate Interpolation 172
4.3 Outliers in Vector Time Series 174
4.3.1 Multivariate Additive Outliers 175
4.3.1.1 Effects on Residuals and Estimation 176
4.3.2 Multivariate Level Shift or Structural Break 177
4.3.2.1 Effects on Residuals and Estimation 177
4.3.3 Other Types of Outliers 178
4.3.3.1 Multivariate Innovative Outliers 178
4.3.3.2 Transitory Change 179
4.3.3.3 Ramp Shift 179
4.3.4 Masking and Swamping 180
4.4 Univariate Outlier Detection 180
4.4.1 Other Procedures for Univariate Outlier Detection 183
4.4.2 New Approaches to Outlier Detection 184
4.5 Multivariate Outlier Detection 189
4.5.1 VARMA Outlier Detection 189
4.5.2 Outlier Detection by Projections 190
4.5.3 A Projection Algorithm for Outlier Detection 192
4.5.4 The Nonstationary Case 193
4.6 Robust Estimation 196
4.7 Heterogeneity for Parameter Changes 199
4.7.1 Parameter Changes in Univariate Time Series 199
4.7.2 Covariance Changes in Multivariate Time Series 200
4.7.2.1 Detecting Multiple Covariance Changes 202
4.7.2.2 LR Test 202
Appendix 4.A: Cusum Algorithms 204
4.A.1 Detecting Univariate LS 204
4.A.2 Detecting Multivariate Level Shift 204
4.A.3 Detecting Multiple Covariance Changes 206
Exercises 206
References 207

5. Clustering and Classification of Time Series 211
5.1 Distances and Dissimilarities 212
5.1.1 Distance Between Univariate Time Series 212
5.1.2 Dissimilarities Between Univariate Series 215
5.1.3 Dissimilarities Based on Cross-Linear Dependency 222
5.2 Hierarchical Clustering of Time Series 228
5.2.1 Criteria for Defining Distances Between Groups 228
5.2.2 The Dendrogram 229
5.2.3 Selecting the Number of Groups 229
5.2.3.1 The Height and Step Plots 229
5.2.3.2 Silhouette Statistic 230
5.2.3.3 The Gap Statistic 233
5.3 Clustering by Variables 243
5.3.1 The k-means Algorithm 244
5.3.1.1 Number of Groups 246
5.3.2 k-Medoids 250
5.3.3 Model-Based Clustering by Variables 252
5.3.3.1 Maximum Likelihood (ML) Estimation of the AR Mixture Model 253
5.3.3.2 The EM Algorithm 254
5.3.3.3 Estimation of Mixture of Multivariate Normals 256
5.3.3.4 Bayesian Estimation 257
5.3.3.5 Clustering with Structural Breaks 258
5.3.4 Clustering by Projections 259
5.4 Classification with Time Series 264
5.4.1 Classification Among a Set of Models 264
5.4.2 Checking the Classification Rule 267
5.5 Classification with Features 267
5.5.1 Linear Discriminant Function 268
5.5.2 Quadratic Classification and Admissible Functions 269
5.5.3 Logistic Regression 270
5.6 Nonparametric Classification 277
5.6.1 Nearest Neighbors 277
5.6.2 Support Vector Machines 278
5.6.2.1 Linearly Separable Problems 279
5.6.2.2 Nonlinearly Separable Problems 282
5.6.3 Density Estimation 284
5.7 Other Classification Problems and Methods 286
Exercises 287
References 288

6. Dynamic Factor Models 291
6.1 The DFM for Stationary Series 293
6.1.1 Properties of the Covariance Matrices 295
6.1.1.1 The Exact DFM 295
6.1.1.2 The Approximate DFM 297
6.1.2 Dynamic Factor and VARMA Models 299
6.2 Fitting a Stationary DFM to Data 301
6.2.1 Principal Components (PC) Estimation 301
6.2.2 Pooled PC Estimator 303
6.2.3 Generalized PC Estimator 303
6.2.4 ML Estimation 304
6.2.5 Selecting the Number of Factors 305
6.2.5.1 Rank Testing via Canonical Correlation 306
6.2.5.2 Testing a Jump in Eigenvalues 307
6.2.5.3 Using Information Criteria 307
6.2.6 Forecasting with DFM 308
6.2.7 Alternative Formulations of the DFM 314
6.3 Generalized DFM (GDFM) for Stationary Series 315
6.3.1 Some Properties of the GDFM 316
6.3.2 GDFM and VARMA Models 317
6.4 Dynamic Principal Components 317
6.4.1 Dynamic Principal Components for Optimal Reconstruction 317
6.4.2 One-Sided DPCs 318
6.4.3 Model Selection and Forecasting 320
6.4.4 One-Sided DPC and GDFM Estimation 321
6.5 DFM for Nonstationary Series 324
6.5.1 Cointegration and DFM 329
6.6 GDFM for Nonstationary Series 330
6.6.1 Estimation by Generalized DPC 330
6.7 Outliers in DFMs 333
6.7.1 Factor and Idiosyncratic Outliers 333
6.7.2 A Procedure to Find Outliers in DFM 335
6.8 DFM with Cluster Structure 336
6.8.1 Fitting DFMCS 337
6.9 Some Extensions of DFM 344
6.10 High-Dimensional Case 345
6.10.1 Sparse PCs 345
6.10.2 A Structural-FM Approach 347
6.10.3 Estimation 348
6.10.4 Selecting the Number of Common Factors 349
6.10.5 Asymptotic Properties of Loading Estimates 351
Appendix 6.A: Some R Commands 352
Exercises 353
References 354

7. Forecasting with Big Dependent Data 359
7.1 Regularized Linear Models 360
7.1.1 Properties of Lasso Estimator 362
7.1.2 Some Extensions of Lasso Regression 366
7.1.2.1 Adaptive Lasso 367
7.1.2.2 Group Lasso 367
7.1.2.3 Elastic Net 368
7.1.2.4 Fused Lasso 368
7.1.2.5 SCAD Penalty 368
7.2 Impacts of Dynamic Dependence on Lasso 377
7.3 Lasso for Dependent Data 383
7.4 Principal Component Regression and Diffusion Index 388
7.5 Partial Least Squares 392
7.6 Boosting 397
7.6.1 ℓ2 Boosting 399
7.6.2 Choices of Weak Learner 399
7.6.3 Boosting for Classification 403
7.7 Mixed-Frequency Data and Nowcasting 404
7.7.1 MIDAS Regression 405
7.7.2 Nowcasting 406
7.8 Strong Serial Dependence 413
Exercises 414
References 414

8. Machine Learning of Big Dependent Data 419
8.1 Regression Trees and Random Forests 420
8.1.1 Growing Tree 420
8.1.2 Pruning 422
8.1.3 Classification Trees 422
8.1.4 Random Forests 424
8.2 Neural Networks 427
8.2.1 Network Training 429
8.3 Deep Learning 436
8.3.1 Types of Deep Networks 436
8.3.2 Recurrent NN 437
8.3.3 Activation Functions for Deep Learning 439
8.3.4 Training Deep Networks 440
8.3.4.1 Long Short-Term Memory Model 440
8.3.4.2 Training Algorithm 441
8.4 Some Applications 442
8.4.1 The Package: keras 442
8.4.2 Dropout Layer 449
8.4.3 Application of Convolution Networks 450
8.4.4 Application of LSTM 457
8.5 Deep Generative Models 466
8.6 Reinforcement Learning 466
Exercises 467
References 468

9. Spatio-Temporal Dependent Data 471
9.1 Examples and Visualization of Spatio-Temporal Data 472
9.2 Spatial Processes and Data Analysis 477
9.3 Geostatistical Processes 479
9.3.1 Stationary Variogram 480
9.3.2 Examples of Semivariogram 480
9.3.3 Stationary Covariance Function 482
9.3.4 Estimation of Variogram 483
9.3.5 Testing Spatial Dependence 483
9.3.6 Kriging 484
9.3.6.1 Simple Kriging 484
9.3.6.2 Ordinary Kriging 486
9.3.6.3 Universal Kriging 487
9.4 Lattice Processes 488
9.4.1 Markov-Type Models 488
9.5 Spatial Point Processes 491
9.5.1 Second-Order Intensity 492
9.6 S-T Processes and Analysis 495
9.6.1 Basic Properties 496
9.6.2 Some Nonseparable Covariance Functions 498
9.6.3 S-T Variogram 499
9.6.4 S-T Kriging 500
9.7 Descriptive S-T Models 504
9.7.1 Random Effects with S-T Basis Functions 505
9.7.2 Random Effects with Spatial Basis Functions 506
9.7.3 Fixed Rank Kriging 507
9.7.4 Spatial Principal Component Analysis 510
9.7.5 Random Effects with Temporal Basis Functions 514
9.8 Dynamic S-T Models 519
9.8.1 Space-Time Autoregressive Moving-Average Models 520
9.8.2 S-T Component Models 521
9.8.3 S-T Factor Models 521
9.8.4 S-T HMs 522
Appendix 9.A: Some R Packages and Commands 523
Exercises 525
References 525

Index 529
Daniel Peña, PhD, is Professor of Statistics at Universidad Carlos III de Madrid, Spain. He received his PhD from Universidad Politécnica de Madrid in 1976 and has taught at the University of Wisconsin-Madison, the University of Chicago, and Universidad Carlos III de Madrid, where he was Rector from 2007 to 2015.

Ruey S. Tsay, PhD, is the H.G.B. Alexander Professor of Econometrics & Statistics at the Booth School of Business, University of Chicago, United States. He received his PhD in 1982 from the University of Wisconsin-Madison. His research focuses on business and economic forecasting, financial econometrics, risk management, and the analysis of big dependent data.