Advanced Kalman Filtering, Least-Squares and Modeling: A Practical Handbook

ISBN-13: 9780470529706 / English / Hardcover / 2011 / 632 pp.

Bruce P. Gibbs
Price: 762.75 PLN
(net: 726.43 PLN, VAT: 5%)

Lowest price from the last 30 days: 753.44 PLN
Order fulfillment time: approx. 30 business days.

Free delivery!

This book is intended primarily as a handbook for engineers who must design practical systems. Its primary goal is to discuss model development in sufficient detail so that the reader may design an estimator that meets all application requirements and is robust to modeling assumptions. Since it is sometimes difficult to determine the best model structure a priori, the use of exploratory data analysis to define model structure is discussed, and methods for deciding on the "best" model are presented. A second goal is to present little-known extensions of least-squares estimation and Kalman filtering that provide guidance on model structure and parameters, or that make the estimator more robust to changes in real-world behavior. A third goal is to discuss implementation issues that make the estimator more accurate or efficient, or that make it flexible so that model alternatives can be easily compared. A fourth goal is to provide the designer/analyst with guidance in evaluating estimator performance and in determining and correcting problems. The final goal is to provide a subroutine library that simplifies implementation, together with flexible, general-purpose high-level drivers that allow both easy analysis of alternative models and access to extensions of the basic filtering.

Supplemental materials and up-to-date errata are downloadable at http://booksupport.wiley.com.
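For orientation, a minimal sketch of the discrete-time Kalman filter predict/update cycle that the handbook builds on (Chapter 8) might look like the following; the constant-velocity model and all numeric values here are illustrative assumptions, not material from the book or its subroutine library.

```python
import numpy as np

def kf_step(x, P, z, phi, Q, H, R):
    """One predict/update cycle for x_k+1 = phi x_k + w_k, z_k = H x_k + v_k."""
    # Time update (prediction)
    x_pred = phi @ x
    P_pred = phi @ P @ phi.T + Q
    # Measurement update
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Hypothetical constant-velocity example: state = [position, velocity],
# one noisy position measurement per step.
dt = 1.0
phi = np.array([[1.0, dt], [0.0, 1.0]])
Q = np.diag([1e-4, 1e-3])                  # assumed process-noise covariance
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])                     # assumed measurement variance
x, P = np.zeros(2), np.eye(2)
x, P = kf_step(x, P, z=np.array([1.1]), phi=phi, Q=Q, H=H, R=R)
```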

Categories:
Technology
BISAC categories:
Technology & Engineering > Electronics - Circuits - General
Technology & Engineering > Engineering (General)
Publisher:
John Wiley & Sons
Language:
English
ISBN-13:
9780470529706
Publication year:
2011
Number of pages:
632
Weight:
1.38 kg
Dimensions:
25.8 x 18.2 x 3.9 cm
Binding:
Hardcover
Volumes:
01
Additional information:
Bibliography
Illustrated edition

PREFACE xv

1 INTRODUCTION 1

1.1 The Forward and Inverse Modeling Problem 2

1.2 A Brief History of Estimation 4

1.3 Filtering, Smoothing, and Prediction 8

1.4 Prerequisites 9

1.5 Notation 9

1.6 Summary 11

2 SYSTEM DYNAMICS AND MODELS 13

2.1 Discrete–Time Models 14

2.2 Continuous–Time Dynamic Models 17

2.2.1 State Transition and Process Noise Covariance Matrices 19

2.2.2 Dynamic Models Using Basis Function Expansions 22

2.2.3 Dynamic Models Derived from First Principles 25

2.2.4 Stochastic (Random) Process Models 31

2.2.5 Linear Regression Models 42

2.2.6 Reduced–Order Modeling 44

2.3 Computation of State Transition and Process Noise Matrices 45

2.3.1 Numeric Computation of Φ 45

2.3.2 Numeric Computation of QD 57

2.4 Measurement Models 58

2.5 Simulating Stochastic Systems 60

2.6 Common Modeling Errors and System Biases 62

2.7 Summary 65

3 MODELING EXAMPLES 67

3.1 Angle–Only Tracking of Linear Target Motion 67

3.2 Maneuvering Vehicle Tracking 69

3.2.1 Maneuvering Tank Tracking Using Multiple Models 69

3.2.2 Aircraft Tracking 73

3.3 Strapdown Inertial Navigation System (INS) Error Model 74

3.4 Spacecraft Orbit Determination (OD) 80

3.4.1 Geopotential Forces 83

3.4.2 Other Gravitational Attractions 86

3.4.3 Solar Radiation Pressure 87

3.4.4 Aerodynamic Drag 88

3.4.5 Thrust Forces 89

3.4.6 Earth Motion 89

3.4.7 Numerical Integration and Computation of Φ 90

3.4.8 Measurements 92

3.4.9 GOES I–P Satellites 96

3.4.10 Global Positioning System (GPS) 97

3.5 Fossil–Fueled Power Plant 99

3.6 Summary 99

4 LINEAR LEAST–SQUARES ESTIMATION: FUNDAMENTALS 101

4.1 Least–Squares Data Fitting 101

4.2 Weighted Least Squares 108

4.3 Bayesian Estimation 115

4.3.1 Bayesian Least Squares 115

4.3.2 Bayes Theorem 117

4.3.3 Minimum Variance or Minimum Mean–Squared Error (MMSE) 121

4.3.4 Orthogonal Projections 124

4.4 Probabilistic Approaches: Maximum Likelihood and Maximum A Posteriori 125

4.4.1 Gaussian Random Variables 126

4.4.2 Maximum Likelihood Estimation 128

4.4.3 Maximum A Posteriori 133

4.5 Summary of Linear Estimation Approaches 137

5 LINEAR LEAST–SQUARES ESTIMATION: SOLUTION TECHNIQUES 139

5.1 Matrix Norms, Condition Number, Observability, and the Pseudo–Inverse 139

5.1.1 Vector–Matrix Norms 139

5.1.2 Matrix Pseudo–Inverse 141

5.1.3 Condition Number 141

5.1.4 Observability 145

5.2 Normal Equation Formation and Solution 145

5.2.1 Computation of the Normal Equations 145

5.2.2 Cholesky Decomposition of the Normal Equations 149

5.3 Orthogonal Transformations and the QR Method 156

5.3.1 Givens Rotations 158

5.3.2 Householder Transformations 159

5.3.3 Modified Gram–Schmidt (MGS) Orthogonalization 162

5.3.4 QR Numerical Accuracy 165

5.4 Least–Squares Solution Using the SVD 165

5.5 Iterative Techniques 167

5.5.1 Sparse Array Storage 167

5.5.2 Linear Iteration 168

5.5.3 Least–Squares Solution for Large Sparse Problems Using Krylov Space Methods 169

5.6 Comparison of Methods 175

5.6.1 Solution Accuracy for Polynomial Problem 175

5.6.2 Algorithm Timing 181

5.7 Solution Uniqueness, Observability, and Condition Number 183

5.8 Pseudo–Inverses and the Singular Value Decomposition (SVD) 185

5.9 Summary 190

6 LEAST–SQUARES ESTIMATION: MODEL ERRORS AND MODEL ORDER 193

6.1 Assessing the Validity of the Solution 194

6.1.1 Residual Sum–of–Squares (SOS) 194

6.1.2 Residual Patterns 195

6.1.3 Subsets of Residuals 196

6.1.4 Measurement Prediction 196

6.1.5 Estimate Comparison 197

6.2 Solution Error Analysis 208

6.2.1 State Error Covariance and Confidence Bounds 208

6.2.2 Model Error Analysis 212

6.3 Regression Analysis for Weighted Least Squares 237

6.3.1 Analysis of Variance 238

6.3.2 Stepwise Regression 239

6.3.3 Prediction and Optimal Data Span 244

6.4 Summary 245

7 LEAST–SQUARES ESTIMATION: CONSTRAINTS, NONLINEAR MODELS, AND ROBUST TECHNIQUES 249

7.1 Constrained Estimates 249

7.1.1 Least–Squares with Linear Equality Constraints (Problem LSE) 249

7.1.2 Least–Squares with Linear Inequality Constraints (Problem LSI) 256

7.2 Recursive Least Squares 257

7.3 Nonlinear Least Squares 259

7.3.1 1–D Nonlinear Least–Squares Solutions 263

7.3.2 Optimization for Multidimensional Unconstrained Nonlinear Least Squares 264

7.3.3 Stopping Criteria and Convergence Tests 269

7.4 Robust Estimation 282

7.4.1 De–Weighting Large Residuals 282

7.4.2 Data Editing 283

7.5 Measurement Preprocessing 285

7.6 Summary 286

8 KALMAN FILTERING 289

8.1 Discrete–Time Kalman Filter 290

8.1.1 Truth Model 290

8.1.2 Discrete–Time Kalman Filter Algorithm 291

8.2 Extensions of the Discrete Filter 303

8.2.1 Correlation between Measurement and Process Noise 303

8.2.2 Time–Correlated (Colored) Measurement Noise 305

8.2.3 Innovations, Model Validation, and Editing 311

8.3 Continuous–Time Kalman–Bucy Filter 314

8.4 Modifications of the Discrete Kalman Filter 321

8.4.1 Friedland Bias–Free/Bias–Restoring Filter 321

8.4.2 Kalman–Schmidt Consider Filter 325

8.5 Steady–State Solution 328

8.6 Wiener Filter 332

8.6.1 Wiener–Hopf Equation 333

8.6.2 Solution for the Optimal Weighting Function 335

8.6.3 Filter Input Covariances 336

8.6.4 Equivalence of Wiener and Steady–State Kalman–Bucy Filters 337

8.7 Summary 341

9 FILTERING FOR NONLINEAR SYSTEMS, SMOOTHING, ERROR ANALYSIS/MODEL DESIGN, AND MEASUREMENT PREPROCESSING 343

9.1 Nonlinear Filtering 344

9.1.1 Linearized and Extended Kalman Filters 344

9.1.2 Iterated Extended Kalman Filter 349

9.2 Smoothing 352

9.2.1 Fixed–Point Smoother 353

9.2.2 Fixed–Lag Smoother 356

9.2.3 Fixed–Interval Smoother 357

9.3 Filter Error Analysis and Reduced–Order Modeling 370

9.3.1 Linear Analysis of Independent Error Sources 372

9.3.2 Error Analysis for ROM Defined as a Transformed Detailed Model 380

9.3.3 Error Analysis for Different Truth and Filter Models 382

9.4 Measurement Preprocessing 385

9.5 Summary 385

10 FACTORED (SQUARE–ROOT) FILTERING 389

10.1 Filter Numerical Accuracy 390

10.2 U–D Filter 392

10.2.1 U–D Filter Measurement Update 394

10.2.2 U–D Filter Time Update 396

10.2.3 RTS Smoother for U–D Filter 401

10.2.4 U–D Error Analysis 403

10.3 Square Root Information Filter (SRIF) 404

10.3.1 SRIF Time Update 405

10.3.2 SRIF Measurement Update 407

10.3.3 Square Root Information Smoother (SRIS) 408

10.3.4 Dyer–McReynolds Covariance Smoother (DMCS) 410

10.3.5 SRIF Error Analysis 410

10.4 Inertial Navigation System (INS) Example Using Factored Filters 412

10.5 Large Sparse Systems and the SRIF 417

10.6 Spatial Continuity Constraints and the SRIF Data Equation 419

10.6.1 Flow Model 421

10.6.2 Log Conductivity Spatial Continuity Model 422

10.6.3 Measurement Models 424

10.6.4 SRIF Processing 424

10.6.5 Steady–State Flow Constrained Iterative Solution 425

10.7 Summary 427

11 ADVANCED FILTERING TOPICS 431

11.1 Maximum Likelihood Parameter Estimation 432

11.1.1 Calculation of the State Transition Partial Derivatives 434

11.1.2 Derivatives of the Filter Time Update 438

11.1.3 Derivatives of the Filter Measurement Update 439

11.1.4 Partial Derivatives for Initial Condition Errors 440

11.1.5 Computation of the Log Likelihood and Scoring Step 441

11.2 Adaptive Filtering 449

11.3 Jump Detection and Estimation 450

11.3.1 Jump–Free Filter Equations 452

11.3.2 Stepwise Regression 454

11.3.3 Correction of Jump–Free Filter State 455

11.3.4 Real–Time Jump Detection Using Stepwise Regression 456

11.4 Adaptive Target Tracking Using Multiple Model Hypotheses 461

11.4.1 Weighted Sum of Filter Estimates 462

11.4.2 Maximum Likelihood Filter Selection 463

11.4.3 Dynamic and Interactive Multiple Models 464

11.5 Constrained Estimation 471

11.6 Robust Estimation: H–Infinity Filters 471

11.7 Unscented Kalman Filter (UKF) 474

11.7.1 Unscented Transform 475

11.7.2 UKF Algorithm 478

11.8 Particle Filters 485

11.9 Summary 490

12 EMPIRICAL MODELING 493

12.1 Exploratory Time Series Analysis and System Identification 494

12.2 Spectral Analysis Based on the Fourier Transform 495

12.2.1 Fourier Series for Periodic Functions 497

12.2.2 Fourier Transform of Continuous Energy Signals 498

12.2.3 Fourier Transform of Power Signals 502

12.2.4 Power Spectrum of Stochastic Signals 504

12.2.5 Time–Limiting Window Functions 506

12.2.6 Discrete Fourier Transform 509

12.2.7 Periodogram Computation of Power Spectra 512

12.2.8 Blackman–Tukey (Correlogram) Computation of Power Spectra 514

12.3 Autoregressive Modeling 522

12.3.1 Maximum Entropy Method (MEM) 524

12.3.2 Burg MEM 525

12.3.3 Final Prediction Error (FPE) and Akaike Information Criteria (AIC) 526

12.3.4 Marple AR Spectral Analysis 528

12.3.5 Summary of MEM Modeling Approaches 529

12.4 ARMA Modeling 531

12.4.1 ARMA Parameter Estimation 532

12.5 Canonical Variate Analysis 534

12.5.1 CVA Derivation and Overview 536

12.5.2 Summary of CVA Steps 539

12.5.3 Sample Correlation Matrices 540

12.5.4 Order Selection Using the AIC 541

12.5.5 State–Space Model 543

12.5.6 Measurement Power Spectrum Using the State–Space Model 544

12.6 Conversion from Discrete to Continuous Models 548

12.7 Summary 551

APPENDIX A SUMMARY OF VECTOR/MATRIX OPERATIONS 555

A.1 Definition 555

A.1.1 Vectors 555

A.1.2 Matrices 555

A.2 Elementary Vector/Matrix Operations 557

A.2.1 Transpose 557

A.2.2 Addition 557

A.2.3 Inner (Dot) Product of Vectors 557

A.2.4 Outer Product of Vectors 558

A.2.5 Multiplication 558

A.3 Matrix Functions 558

A.3.1 Matrix Inverse 558

A.3.2 Partitioned Matrix Inversion 559

A.3.3 Matrix Inversion Identity 560

A.3.4 Determinant 561

A.3.5 Matrix Trace 562

A.3.6 Derivatives of Matrix Functions 563

A.3.7 Norms 564

A.4 Matrix Transformations and Factorization 565

A.4.1 LU Decomposition 565

A.4.2 Cholesky Factorization 565

A.4.3 Similarity Transformation 566

A.4.4 Eigen Decomposition 566

A.4.5 Singular Value Decomposition (SVD) 566

A.4.6 Pseudo–Inverse 567

A.4.7 Condition Number 568

APPENDIX B PROBABILITY AND RANDOM VARIABLES 569

B.1 Probability 569

B.1.1 Definitions 569

B.1.2 Joint and Conditional Probability, and Independence 570

B.2 Random Variable 571

B.2.1 Distribution and Density Functions 571

B.2.2 Bayes Theorem for Density Functions 572

B.2.3 Moments of Random Variables 573

B.2.4 Gaussian Distribution 574

B.2.5 Chi–Squared Distribution 574

B.3 Stochastic Processes 575

B.3.1 Wiener or Brownian Motion Process 576

B.3.2 Markov Process 576

B.3.3 Differential and Integral Equations with White Noise Inputs 577

BIBLIOGRAPHY 579

INDEX 599

BRUCE P. GIBBS has forty–one years of experience applying estimation and control theory to applications for NASA, the Department of Defense, the Department of Energy, the National Science Foundation, and private industry. He is currently a consulting scientist at Carr Astronautics, where he designs image navigation software for the GOES–R geosynchronous weather satellite. Gibbs previously developed similar systems for the GOES–NOP weather satellites and GPS.

The only book to cover least–squares estimation, Kalman filtering, and model development

This book provides a complete explanation of estimation theory and application, modeling approaches, and model evaluation. Each topic starts with a clear explanation of the theory (often including historical context), followed by application issues that should be considered in the design. Different implementations designed to address specific problems are presented, and numerous examples of varying complexity are used to demonstrate the concepts.

It focuses on practical methods for developing and implementing least–squares estimators, Kalman filters, and newer filtering techniques. Since model development is critical to a successful implementation, the book discusses first–principle approaches, basis function expansions, stochastic models, and ARMA–type structures. Computation of empirical models and determination of "best" model structures and order are also discussed. The text is written to help the reader design an estimator that meets all application requirements.
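As a rough illustration of the model/estimator pairing described above, the sketch below fits a simple polynomial (basis-function) model by weighted least squares via the normal equations; the data, weights, and basis are assumptions made for this example and are not taken from the book.

```python
import numpy as np

def weighted_least_squares(H, z, w):
    """Solve min_x (z - H x)^T W (z - H x) with W = diag(w) via the normal equations."""
    W = np.diag(w)
    info = H.T @ W @ H                     # information (normal-equation) matrix
    x_hat = np.linalg.solve(info, H.T @ W @ z)
    P = np.linalg.inv(info)                # error covariance when w_i = 1/sigma_i^2
    return x_hat, P

# Hypothetical data: fit y = x0 + x1*t + x2*t^2 to noisy samples.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)
z = 1.0 + 0.5 * t - 0.02 * t**2 + 0.1 * rng.standard_normal(t.size)
H = np.vander(t, N=3, increasing=True)     # basis functions [1, t, t^2]
w = np.full(t.size, 1.0 / 0.1**2)          # weight = 1 / measurement variance
x_hat, P = weighted_least_squares(H, z, w)
```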

Specifically addressed are methods for developing models that meet estimation goals, procedures for making the estimator robust to modeling and numerical errors, extensions of the basic methods for handling non–ideal systems, and techniques for evaluating performance and analyzing accuracy problems. Including many real–world examples, the book:

  • Presents little–known extensions of least–squares estimation and Kalman filtering that provide guidance on model structure and parameters
  • Explains numerical accuracy, computational burden, and modeling tradeoffs for real–world applications
  • Discusses implementation issues that make the estimator more accurate or efficient, or that make it flexible so that model alternatives can be easily compared
  • Offers guidance in evaluating estimator performance and in determining/correcting problems (a simple consistency-check sketch follows this list)
  • A related Web site provides a subroutine library that simplifies implementation, as well as general purpose high–level drivers that allow for the easy analysis of alternative models and access to extensions of the basic Kalman filtering
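One simple performance check of the kind mentioned in the list above is to test whether normalized filter innovations stay within their chi-square bound. The sketch below assumes precomputed innovation vectors and innovation covariances; it is only an illustration of the idea, not the book's specific procedure.

```python
import numpy as np
from scipy.stats import chi2

def innovation_consistency(innovations, S_list, alpha=0.05):
    """Fraction of normalized innovation squares within the (1 - alpha) chi-square bound."""
    m = innovations[0].size                       # measurement dimension
    bound = chi2.ppf(1.0 - alpha, df=m)
    nis = [v @ np.linalg.solve(S, v) for v, S in zip(innovations, S_list)]
    # For a consistent filter, roughly (1 - alpha) of the samples should pass.
    return float(np.mean(np.array(nis) <= bound))
```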

Drawing from four decades of the author's experience with the material, Advanced Kalman Filtering, Least–Squares and Modeling is a comprehensive and detailed explanation of these topics. Practicing engineers, designers, analysts, and students using estimation theory to develop practical systems will find this a very useful reference.


