"The monograph restores the emphasis on observable quantities. Considering model-free and model-based prediction, the monograph emphasises on model-free approach but also shows the close relation between these two approaches. The book is of interest for both academics and practitioners in the field of data analysis." (Pavel Stoynov, zbMATH 1397.62008, 2018)
"This self-contained and fascinating book, intended for advanced graduate and Ph.D. students, teachers, researchers and practitioners, is useful, well written, and directly oriented toward real data applications." (Gilles Teyssière, Mathematical Reviews, February, 2017)
Prediction: some heuristic notions
The Model-free Prediction Principle
Model-based prediction in regression
Model-free prediction in regression
Model-free vs. model-based confidence intervals
Linear time series and optimal linear prediction
Model-based prediction in autoregression
Model-free inference for Markov processes
Predictive inference for locally stationary time series
Model-free vs. model-based volatility prediction
Dimitris N. Politis is Professor of Mathematics and Adjunct Professor of Economics at the University of California, San Diego. His research interests include Time Series Analysis, Resampling and Subsampling, Nonparametric Function Estimation, and Model-free Prediction. He has served as Editor of the IMS Bulletin (2010-2013), Co-Editor of the Journal of Time Series Analysis (2013-present), Co-Editor of the Journal of Nonparametric Statistics (2008-2011), and as Associate Editor for several journals including Bernoulli, the Journal of the American Statistical Association, and the Journal of the Royal Statistical Society, Series B. He is a fellow of the Institute of Mathematical Statistics (IMS) and the American Statistical Association, former fellow of the John Simon Guggenheim Memorial Foundation, and co-founder (with M. Akritas and S.N. Lahiri) of the International Society for NonParametric Statistics.
The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset to one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resort to unrealistic assumptions such as normality.
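To make the transformation idea concrete, here is a minimal numerical sketch in Python of that workflow in a simple regression setting: the responses are mapped to approximately i.i.d. residuals, those residuals are resampled, and a frequentist prediction interval is read off without any normality assumption. The simulated data, the kernel_smooth helper, and the bandwidth are illustrative assumptions, not the book's actual algorithm or notation.

```python
# Minimal sketch of the transformation idea behind the Model-Free Prediction
# Principle (illustrative only, not the book's exact algorithm).
# Assumed setting: y_i = mu(x_i) + noise, with i.i.d. noise.
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: smooth trend plus i.i.d. noise (an assumption for the demo).
n = 200
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)

def kernel_smooth(x_grid, x, y, h=0.1):
    """Nadaraya-Watson estimate of the trend mu(.) with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

# Step 1: transform the data to (approximately) i.i.d. residuals.
mu_hat = kernel_smooth(x, x, y)
resid = y - mu_hat
resid = resid - resid.mean()          # centre so they resemble i.i.d. noise

# Step 2: resample the i.i.d.-looking residuals and map them back through the
# inverse transformation to generate plausible future Y values at x = 0.25.
x_f = np.array([0.25])
mu_f = kernel_smooth(x_f, x, y)[0]
B = 2000
y_future = mu_f + rng.choice(resid, size=B, replace=True)

# Step 3: read off a frequentist prediction interval without normality.
lo, hi = np.quantile(y_future, [0.025, 0.975])
print(f"95% prediction interval at x={x_f[0]}: [{lo:.3f}, {hi:.3f}]")
```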
Prediction has traditionally been approached via a model-based paradigm, i.e., (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, computer-intensive methods such as the bootstrap and cross-validation freed practitioners from the limitations of parametric models, and paved the way towards the "big data" era of the 21st century. Nonetheless, there is a further step one may take, i.e., going beyond even nonparametric models; this is where the Model-Free Prediction Principle is useful.
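For contrast, the classical two-step model-based paradigm described above amounts to the following toy recipe; the straight-line parametric model and the simulated data are hypothetical choices made purely for illustration.

```python
# Toy illustration of the classical model-based paradigm:
# (a) fit a parametric model, (b) use it to predict future data.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x + 0.2 * rng.standard_normal(50)   # assumed linear truth

# (a) fit a parametric (straight-line) model to the data at hand ...
beta1, beta0 = np.polyfit(x, y, deg=1)               # slope, intercept

# (b) ... and use the fitted model to extrapolate/predict a future observation.
x_future = 1.2
y_hat = beta0 + beta1 * x_future
print(f"model-based point prediction at x={x_future}: {y_hat:.3f}")
```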
Interestingly, being able to predict a response variable Y associated with a regressor variable X taking on any possible value seems to inadvertently also achieve the main goal of modeling, i.e., describing how Y depends on X. Hence, just as prediction has traditionally been treated as a by-product of model-fitting, key estimation problems can conversely be addressed as a by-product of being able to perform prediction. In other words, a practitioner can use Model-Free Prediction ideas to additionally obtain point estimates and confidence intervals for relevant parameters, leading to an alternative, transformation-based approach to statistical inference.
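The following hedged sketch illustrates that point: the same machinery that produces a point prediction of Y at a design point also yields a point estimate of the regression function there, and resampling the approximately i.i.d. residuals yields a bootstrap confidence interval for it. The data, the kernel_smooth helper, and the bandwidth are again illustrative assumptions rather than the book's procedure.

```python
# Sketch: estimation and inference as by-products of prediction.
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)

def kernel_smooth(x_grid, x, y, h=0.1):
    """Nadaraya-Watson point predictor of Y at the points in x_grid."""
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

x0 = np.array([0.25])
fitted = kernel_smooth(x, x, y)
resid = y - fitted

# The point prediction of Y at x0 doubles as a point estimate of mu(x0) ...
mu_hat = kernel_smooth(x0, x, y)[0]

# ... and resampling the (roughly i.i.d.) residuals turns the same machinery
# into a bootstrap confidence interval for mu(x0).
B = 1000
boot = np.empty(B)
for b in range(B):
    y_star = fitted + rng.choice(resid, size=n, replace=True)
    boot[b] = kernel_smooth(x0, x, y_star)[0]
lo, hi = np.quantile(boot, [0.025, 0.975])
print(f"estimate of mu(0.25): {mu_hat:.3f}, 95% CI: [{lo:.3f}, {hi:.3f}]")
```

Note how the interval here is a confidence interval for the regression function, in contrast with the prediction interval for a future observation sketched earlier; keeping the two distinct is one of the recurring themes of the book.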