ISBN-13: 9781118056998 / English / Hardcover / 2011 / 728 pp.
The innovative performance and scalability features introduced with each new edition of the Oracle database system can present challenges for users. This book teaches software developers and students how to deal effectively with Oracle performance and scalability issues throughout the entire life cycle of developing Oracle-based applications. Using real-world case studies to deliver key theories and concepts, it introduces dependable, ready-to-apply performance and scalability optimization techniques, augmented with a Top 10 Oracle Performance and Scalability Features chapter and a supplementary support website.
PREFACE xxv
Why This Book xxv
Who This Book is For xxvi
How This Book is Organized xxvii
Software and Hardware xxviii
How to Use This Book xxix
How to Reach The Author xxxi
ACKNOWLEDGMENTS xxxiii
INTRODUCTION 1
Features of Oracle 2
Objectives 4
Conventions 5
Performance versus Scalability 6
PART 1 GETTING STARTED WITH ORACLE 7
1 Basic Concepts 9
1.1 Standard versus Flavored SQLs 10
1.2 Relational versus Object–Oriented Databases 11
1.3 An Instance versus a Database 11
1.4 Summary 12
Recommended Reading 12
Exercises 12
2 Installing Oracle Software 14
2.1 Installing Oracle 11g Server Software 15
2.2 Configuring a Listener 18
2.3 Creating an Oracle Database 18
2.4 Installing Oracle 11g Client Software 28
2.5 Oracle Grid Control versus DB Control 31
2.6 Summary 33
Recommended Reading 33
Exercises 33
3 Options for Accessing an Oracle Server 34
3.1 A Command Line Interface (CLI) versus a GUI–Based Console 35
3.2 The Oracle Enterprise Manager Java Console (OEMJC) 37
3.3 Using the SQL*Plus Tool 40
3.4 Oracle Enterprise Manager DBConsole 42
3.5 Other Tools for Developers 43
3.6 Case Study: Creating ER Diagrams with Visio via ODBC 44
3.7 Case Study: Accessing Oracle in Java via JDBC 47
3.8 Summary 49
Recommended Reading 50
Exercises 50
4 A Quick Tour of an Oracle Server 52
4.1 New Oracle Schemas Beyond "Scott" 53
4.2 Oracle Users versus Schemas 54
4.3 Tablespaces, Segments, Extents, and Data Blocks 56
4.4 Tables, Indexes and Index Types for Structured Data 57
4.5 Domain and LOB Index Types for Unstructured Data 65
4.6 Views, Materialized Views, and Synonyms 68
4.7 Stored Procedures, Functions, and Triggers 68
4.8 Referential Integrity with Foreign Keys 71
4.9 Summary 73
Recommended Reading 73
Exercises 74
PART 2 ORACLE ARCHITECTURE FROM PERFORMANCE AND SCALABILITY PERSPECTIVES 75
5 Understanding Oracle Architecture 79
5.1 The Version History of Oracle 80
5.2 Oracle Processes 82
5.3 Oracle Memory Areas 87
5.4 Dedicated versus Shared Oracle Server Architecture 89
5.5 Performance Sensitive Initialization Parameters 91
5.6 Oracle Static Data Dictionary Views 94
5.7 Oracle Dynamic Performance (V$) Views 95
5.8 Summary 98
Recommended Reading 98
Exercises 99
6 Oracle 10g Memory Management 101
6.1 SGA Sub–Areas 102
6.2 SGA Sizing: Automatic Shared Memory Management (ASMM) 104
6.3 PGA Sizing: PGA_AGGREGATE_TARGET 106
6.4 Summary 108
Recommended Reading 109
Exercises 110
7 Oracle 11g Memory Management 111
7.1 Automatic Memory Management (AMM) 112
7.2 Memory Sizing Options Configurable at Database Creation Time 112
7.3 Checking Memory Management and Usage Distribution at Run Time 113
7.4 Summary 115
Recommended Reading 115
Exercises 115
8 Oracle Storage Structure 116
8.1 Overview 117
8.2 Managing Tablespaces 119
8.3 Managing Data Files 122
8.4 Managing Redo Logs 124
8.5 Summary 125
Recommended Reading 125
Exercises 126
9 Oracle Wait Interface (OWI) 127
9.1 Ratio–based versus OWI–based Oracle Performance Tuning Methodologies 128
9.2 Wait Event: The Core Concept of OWI 130
9.3 Classification of Wait Events from OWI 131
9.4 The Other Part (CPU Time) of the Equation: Elapsed Time = CPU Time + Wait Time 134
9.5 AWR as a Compass to Tuning Oracle Performance and Scalability 136
9.6 Summary 137
Recommended Reading 137
Exercises 138
10 Oracle Data Consistency and Concurrency 139
10.1 Select ... for Update Statement 140
10.2 ACID Properties of Transactions 141
10.3 Read Phenomena and Data Inconsistencies 143
10.4 Oracle Isolation Levels 145
10.5 Multi–Version Concurrency Control (MVCC) and Read Consistency 145
10.6 Oracle Locks 146
10.7 Lock Escalations versus Conversions 149
10.8 Oracle Latches 149
10.9 Oracle Enqueues 150
10.10 Deadlocks 150
10.11 Taking Advantage of Oracle's Scalable Concurrency Model 151
10.12 Case Study: A JDBC Example 152
10.13 Summary 158
Recommended Reading 159
Exercises 159
11 Anatomy of an Oracle Automatic Workload Repository (AWR) Report 161
11.1 Importance of Performance Statistics 162
11.2 AWR Report Header 165
11.3 Report Summary 166
11.3.1 Cache Sizes 166
11.3.2 Load Profile 167
11.3.3 Instance Efficiency Percentages (Target 100%) 169
11.3.4 Shared Pool Statistics 170
11.3.5 Top Five Timed Events 170
11.4 Main Report 171
11.5 Wait Events Statistics 172
11.5.1 Time Model Statistics 173
11.5.2 Wait Class 174
11.5.3 Wait Events 174
11.5.4 Background Wait Events 176
11.5.5 Operating System Statistics 176
11.5.6 Service Statistics 177
11.5.7 Service Wait Class Stats 178
11.6 SQL Statistics 178
11.6.1 SQL ordered by Elapsed Time 179
11.6.2 SQL ordered by CPU Time 180
11.6.3 SQL ordered by Gets 180
11.6.4 SQL ordered by Reads 181
11.6.5 SQL ordered by Executions 182
11.6.6 SQL ordered by Parse Calls 183
11.6.7 SQL ordered by Sharable Memory 183
11.6.8 SQL ordered by Version Count 183
11.6.9 Complete List of SQL Text 184
11.7 Instance Activity Statistics 185
11.7.1 Instance Activity Stats 185
11.7.2 Instance Activity Stats - Absolute Values 196
11.7.3 Instance Activity Stats - Thread Activity 197
11.8 IO Stats 197
11.8.1 Tablespace IO Stats 198
11.8.2 File IO Stats 198
11.9 Buffer Pool Statistics 199
11.10 Advisory Statistics 199
11.10.1 Instance Recovery Stats 200
11.10.2 Buffer Pool Advisory 200
11.10.3 PGA Aggr Summary 201
11.10.4 PGA Aggr Target Stats 202
11.10.5 PGA Aggr Target Histogram 202
11.10.6 PGA Memory Advisory 203
11.10.7 Shared Pool Advisory 204
11.10.8 SGA Target Advisory 204
11.10.9 Streams Pool Advisory 205
11.10.10 Java Pool Advisory 205
11.11 Wait Statistics 206
11.12 Undo Statistics 207
11.13 Latch Statistics 208
11.13.1 Latch Activity 208
11.13.2 Latch Sleep Breakdown 213
11.13.3 Latch Miss Sources 214
11.13.4 Parent and Child Latch Statistics 215
11.14 Segment Statistics 215
11.14.1 Segments by Logical Reads 215
11.14.2 Segments by Physical Reads 216
11.14.3 Segments by Row Lock Waits 217
11.14.4 Segments by ITL Waits 217
11.14.5 Segments by Buffer Busy Waits 217
11.15 Dictionary Cache Stats 218
11.16 Library Cache Activity 219
11.17 Memory Statistics 219
11.17.1 Process Memory Summary 219
11.17.2 SGA Memory Summary 220
11.17.3 SGA Breakdown Difference 221
11.18 Streams Statistics 222
11.19 Resource Limit Stats 224
11.20 init.ora Parameters 224
11.21 Summary 225
Recommended Reading 225
Exercises 226
12 Oracle Advanced Features and Options 227
12.1 Oracle 8i New Features 227
12.1.1 Java 228
12.1.2 Oracle interMedia, Spatial, Time Series, and Visual Image Retrieval 229
12.1.3 Oracle Parallel Server 230
12.1.4 Optimizer Plan Stability 230
12.1.5 Locally Managed Tablespaces 230
12.1.6 Online Index Creation and Rebuild 231
12.1.7 Online Read–Only Tablespaces 231
12.1.8 Temporary Tables 231
12.1.9 Non–Blocking OCI (Oracle Call Interface) 231
12.1.10 Function–Based Indexes 232
12.1.11 Logical ROWIDs 232
12.1.12 Enhanced Partitioning 232
12.1.13 Connection Load Balancing 233
12.1.14 Client Load Balancing 233
12.1.15 Oracle Enterprise Manager 233
12.2 Oracle 9i New Features 233
12.2.1 Real Application Clusters (RAC) 234
12.2.2 Data Guard 236
12.2.3 Performance Tuning Intelligent Advisors 239
12.2.4 Actual Operation–Level Query Statistics 239
12.2.5 Dynamic Sampling of Optimizer Statistics 239
12.2.6 Cloning Production Database with Oracle Enterprise Manager 240
12.2.7 Renaming Columns and Constraints 241
12.2.8 Dynamic Memory Pools 241
12.2.9 Flashback Query 241
12.2.10 List Partitioning 241
12.3 Oracle 10g New Features 241
12.3.1 Automatic Storage Management (ASM) 242
12.3.2 Asynchronous Commit 244
12.3.3 Database Replay 244
12.3.4 Read Performance Statistics Directly from the SGA 245
12.3.5 Automatic Workload Repository (AWR) 245
12.3.6 Automatic Database Diagnostic Monitor (ADDM) 245
12.3.7 Automatic Shared Memory Tuning 245
12.3.8 Automatic Optimizer Statistics Gathering 245
12.3.9 SQL Tuning Features 247
12.3.10 Grid Computing 247
12.4 Oracle 11g New Features 248
12.4.1 Automatic Memory Management 249
12.4.2 Intelligent Cursor Sharing 249
12.4.3 Database Resident Connection Pool (DRCP) 249
12.4.4 Server Result Cache 250
12.4.5 Database Smart Flash Cache 251
12.4.6 Database Replay SQL Performance Analyzer (SPA) Integration 252
12.4.7 I/O Calibration 252
12.4.8 Partitioning Enhancements 252
12.4.9 SQL Plan Management 253
12.4.10 Zero–Size Unusable Indexes and Index Partitions 254
12.4.11 Invisible Indexes 254
12.4.12 Virtual Columns 254
12.5 Summary 255
Recommended Reading 255
Exercises 255
13 Top 10 Oracle Performance and Scalability Features 257
13.1 Real Application Clustering (RAC) 258
13.2 Dedicated versus Shared Server Models 260
13.3 Proven Transaction and Concurrency Models 260
13.4 A Highly Efficient SQL Optimization Engine 261
13.5 Efficient Parallel Processing with Modern Multi–Core CPUs 261
13.6 Partitioning 262
13.7 An All-Encompassing, Powerful Performance and Scalability Troubleshooting Tool: AWR 262
13.8 The Most Comprehensive Set of Internal Performance Metrics 263
13.9 Database Resident Connection Pool 263
13.10 In–Memory Database Cache (IMDB) 263
13.11 Summary 263
Recommended Reading 264
Exercises 264
14 Oracle–Based Application Performance and Scalability by Design 266
14.1 Rapid Development Methodologies 268
14.2 Planning 269
14.2.1 Vision 269
14.2.2 Objectives 270
14.2.3 ROI Analysis 270
14.2.4 Feasibility Study 271
14.2.5 Project Team Formation 271
14.3 Requirements Gathering 272
14.3.1 Use Cases 273
14.3.2 User Views 274
14.3.3 Business Processes, Entities, and Business Rules 274
14.4 Conceptual Design via Data Modeling 275
14.4.1 Entity–Relationship Diagramming 276
14.4.2 The Information Engineering (IE) Format for ERDs 278
14.4.3 UML Format for ERDs 279
14.4.4 Relational Format for ERDs 279
14.5 Logical Design via Normalization 280
14.5.1 Operational Anomalies 281
14.5.2 Review of Relation Theory 282
14.5.3 Functional Dependencies and Lossless–Join Decompositions 285
14.5.4 First Normal Form (1NF): Avoiding Multi–Valued Columns 287
14.5.5 Second Normal Form (2NF): Eliminating Partial Dependencies 288
14.5.6 Third Normal Form (3NF): Eliminating Transitive Dependencies 288
14.5.7 Boyce–Codd Normal Form (BCNF): Eliminating Key Non–Key Dependencies 289
14.5.8 Fourth Normal Form (4NF): Trivializing or Keying Multi–Valued Dependencies 290
14.5.9 Fifth Normal Form (5NF): Trivializing or Keying Join Dependencies 292
14.5.10 Which Level of Normalization to Settle Down? 294
14.5.11 Denormalization? 294
14.6 Physical Design 295
14.6.1 Naming Conventions 297
14.6.2 Creating Tablespaces 298
14.6.3 Creating a Schema User with Proper Privileges 299
14.6.4 Creating Application Schema Objects 299
14.6.5 Changing Schema Objects 308
14.6.6 Enforcing Business Rules and Data Integrity 309
14.6.7 Adding Views 312
14.6.8 Creating Sequences and Synonyms 312
14.6.9 Adding Indexes 313
14.6.10 Security 314
14.7 Implementation 315
14.7.1 Choosing an Effective and Efficient Coding Path 315
14.7.2 Leveraging Proven Oracle Database Design Principles 316
14.7.3 Leveraging Proven Application Design Patterns 318
14.7.4 Enforcing with an Effective and Efficient Testing Process 319
14.8 Release To Market (RTM) 322
14.9 Continuous Improvements 322
14.10 Summary 323
Recommended Reading 324
Exercises 325
15 Project: SOBA, a Secure Online Banking Application on Oracle 326
15.1 Getting SOBA Up and Running 328
15.1.1 Prerequisite Software 328
15.1.2 Initial Software Stack Setup 329
15.1.3 Creating SOBA Database on Oracle 330
15.1.4 Installing SOBA on Eclipse IDE 330
15.1.5 Configuring SOBA to Work with Oracle 331
15.1.6 Configuring SOBA to Work with Hibernate 333
15.1.7 Building SOBA and Deploying SOBA with Ant to Run on Tomcat 333
15.2 Overview of Spring Framework 333
15.2.1 Background 333
15.2.2 Spring for Building Flexible Applications Faster 334
15.2.3 Spring Inversion of Control (IoC) and Dependency Injection 335
15.2.4 Features of Spring 3.0 336
15.3 MVC Architecture 337
15.3.1 MVC Architecture in General 338
15.3.2 Spring MVC in Action with SOBA 340
15.4 Spring MVC Framework Applied to SOBA 342
15.4.1 Spring DispatcherServlet and WebApplicationContext 343
15.4.2 Logic Flow of SOBA Defined in Spring MVC Framework 347
15.4.3 A Web Entry Point Defined in a Spring MVC Web Form 348
15.4.4 Handler Mapping 350
15.4.5 Implementing Spring Controllers 353
15.4.6 A Typical View Defined in a Spring MVC Web Form 358
15.4.7 A Typical Form Success Controller and its Resultant View 362
15.4.8 POJOs Referenced in the CreateCustomerFormController 364
15.5 Hibernate Object–Relational Mapping (ORM) Applied to SOBA 368
15.5.1 Benefits of Using Hibernate 369
15.5.2 Metadata Mapping with Hibernate 370
15.5.3 Configuring Hibernate to Work with Oracle 371
15.5.4 Hibernate DAO 373
15.6 RESTful Web Services Applied to SOBA 376
15.6.1 Introduction to RESTful Web Services 376
15.6.2 RESTful Constraints 377
15.6.3 RESTful Interface Design Principles 378
15.6.4 Spring's Support for RESTful Web Services 379
15.6.5 Server Code 380
15.6.6 Client Code 383
15.7 Spring Security Applied to SOBA 386
15.7.1 Basic Concepts 387
15.7.2 Security Configured in web.xml 387
15.7.3 Security Configured in soba-security.xml 388
15.7.4 Implementing Spring Security in Views 394
15.8 Spring ACL Applied to SOBA 394
15.8.1 Creating ACL Tables in Oracle 395
15.8.2 Configuring Spring ACL 395
15.8.3 Maintaining ACLs for SOBA Domain Objects 398
15.8.4 Applying ACLs to Business Operations 404
15.8.5 Testing ACLs with SOBA 406
15.9 Summary 413
Recommended Reading 414
Exercises 414
PART 3 OPTIMIZING ORACLE PERFORMANCE AND SCALABILITY 415
16 Logistics of the Oracle Cost–Based Optimizer (CBO) 417
16.1 Life of a SQL Statement in Oracle 418
16.2 Oracle SQL Optimizer: Rule–Based versus Cost–Based 420
16.3 CBO Statistics 421
16.4 Pivot Role of Gathering Database Statistics to CBO 422
16.5 Methods of Gathering CBO Statistics 424
16.6 Locking and Unlocking CBO Statistics 425
16.7 Explain Plan: A Handle to CBO 425
16.8 Data Access Methods: CBO's Footprints 426
16.9 Looking Up CBO's Plan Hidden in V$SQL_PLAN 427
16.10 When CBO may Generate Suboptimum Execution Plans 428
16.11 Summary 429
Recommended Reading 429
Exercises 430
17 Oracle SQL Tuning 431
17.1 Tuning Joins 432
17.2 Tuning Subqueries 437
17.3 Case Study: Performance of SUBQUERY versus JOIN 439
17.4 Case Study: Performance of IN versus EXISTS 443
17.5 Case Study: A SQL Tuning Yielded a 12x Performance Gain 444
17.6 Summary 447
Recommended Reading 447
Exercises 448
18 Oracle Indexing 449
18.1 Rules of Thumb on Indexing 450
18.2 Creating and Using Ubiquitous b–Tree Indexes 451
18.3 Advanced Indexing Scheme I: Covering Indexes versus Index–Organized Tables 452
18.4 Advanced Indexing Scheme II: Function–Based Indexes (FBIs) 453
18.5 Unusual Indexing Scheme I: BITMAP Indexes 454
18.6 Unusual Indexing Scheme II: Reverse Key Indexes 455
18.7 Unusual Indexing Scheme III: Compressed Composite Indexes 455
18.8 How To Create Oracle Indexes 456
18.9 Summary 457
Recommended Reading 458
Exercises 458
19 Auto-Tune Features 459
19.1 Oracle Automatic Database Diagnostic Monitor (ADDM) 460
19.2 Automatic Undo Management 462
19.3 Data Recovery Advisor 462
19.4 Memory Advisors 462
19.5 MTTR Advisor 466
19.6 Segment Advisor 466
19.7 SQL Advisors 467
19.8 SQL Performance Analyzer 469
19.9 Summary 470
Recommended Reading 471
Exercises 471
PART 4 CASE STUDIES: ORACLE MEETING REAL WORLD PERFORMANCE AND SCALABILITY CHALLENGES 473
20 Case Study: Achieving High Throughput with Array Processing 477
20.1 Context 478
20.2 Performance Model 479
20.3 Tests 480
20.4 Solution 480
20.5 Effects of Array Processing 482
20.6 Summary 484
Recommended Reading 484
Exercises 484
21 Case Study: Performance Comparison of Heap–Organized versus Index–Organized Tables 485
21.1 Context 486
21.2 Conversion from Heap–Organized to Index–Organized 487
21.3 Creating Indexes 487
21.4 Creating Constraints 488
21.5 EXPLAIN PLANs 488
21.6 Oracle SQL Traces 489
21.7 Summary 490
Recommended Reading 491
Exercises 491
22 Case Study: SQL Tuning: "IN" versus "OR" versus Global Temporary Table 492
22.1 Context 493
22.2 Test Program 494
22.3 Observation 1: IN_CreateStatement is the Best Performer 495
22.4 Observation 2: Batch Insert Saves Time 497
22.5 Temptable Performed Better without an Index Hint than with an Index Hint 498
22.6 Effects of APPEND Hint for Populating Temptable 499
22.7 Effects of Number of Iterations 499
22.8 OR and IN without the Index Hint 499
22.9 Limitation on the Number of Literal Values and the Size of OR Statement 501
22.10 Dealing with More Than 1000 Literal Values for an IN Based SQL Query 501
22.11 A Recommendation for Dealing with 1000 Literal Value Limit in an IN Statement 501
22.12 Summary 502
Recommended Reading 503
Exercises 503
23 Case Study: Data Access Paths (Double Buffering) 504
23.1 Data Access Paths in General 505
23.1.1 Data Buffering 507
23.1.2 Inode Locking 509
23.1.3 Write–Sync Daemon 510
23.2 Test Environments 511
23.2.1 Solaris on Veritas 511
23.2.2 Solaris on UFS 511
23.2.3 Windows on NTFS 512
23.3 Test Results with Solaris on Veritas 514
23.3.1 Test Run #1: 145 ms Average Read Time 514
23.3.2 Test Run #2: 401 ms Average Read Time 516
23.3.3 Test Run #3: 261 ms Average Read Time 518
23.3.4 Test Run #4: 0.98 ms Average Read Time 519
23.3.5 Analysis 521
23.4 Test Results with Solaris on UFS 522
23.4.1 Test Run #1: 447 ms Average Read Time 522
23.4.2 Test Run #2: 10 ms Average Read Time 524
23.4.3 Analysis 525
23.5 Test Results with Windows on NTFS 526
23.5.1 Test Run: 8 ms Average Read Time 526
23.5.2 Analysis 528
23.6 Moral of the Case Study 528
Recommended Reading 529
Exercises 530
24 Case Study: Covering Index 531
24.1 Getting to Know the Application Architecture 533
24.2 Quantifying the Problems 533
24.3 Analyzing Bottlenecks 533
24.4 Applying Optimizations/Tunings 535
24.5 Verifying the Fixes 535
24.5.1 Report Summary 537
24.5.2 Wait Events Statistics 538
24.5.3 SQL Statistics 541
24.5.4 IO Stats 544
24.5.5 Buffer Pool Statistics 544
24.5.6 Wait Statistics 544
24.5.7 init.ora Parameters 545
24.6 Moral of the Case Study 545
Recommended Reading 546
Exercises 546
25 Case Study: CURSOR_SHARING 547
25.1 The Concept of a Bind Variable 548
25.2 Oracle CURSOR_SHARING Parameter 549
25.3 Getting to Know the Application Architecture 550
25.4 Quantifying Problems 550
25.5 Analyzing Bottlenecks 551
25.5.1 Report Summary 552
25.5.2 SQL Statistics 556
25.5.3 IO Stats 557
25.5.4 Wait Statistics 558
25.5.5 init.ora Parameters 558
25.6 Applying Tuning: CURSOR_SHARING = FORCE 560
25.6.1 Report Summary 561
25.6.2 Wait Events Statistics 563
25.7 Applying Tuning: CURSOR_SHARING = SIMILAR 564
25.7.1 Report Summary 564
25.7.2 Wait Events Statistics 566
25.8 Moral of the Case Study 569
Recommended Reading 569
Exercises 570
26 Case Study: Bulk Transactions 571
26.1 Application Architecture 572
26.2 Quantifying Problems 572
26.3 Identifying Performance and Scalability Optimization Opportunities 573
26.3.1 Report Summary 573
26.3.2 Wait Events Statistics 575
26.3.3 SQL Statistics 577
26.3.4 Wait Statistics 579
26.4 Effects of Bulk Transactions on Performance 581
26.4.1 Report Summary 581
26.4.2 Wait Events Statistics 583
26.4.3 SQL Statistics 585
26.4.4 Wait Statistics 587
26.5 Moral of the Case Study 592
Recommended Reading 593
Exercises 593
27 Case Study: Missing Statistics 594
27.1 Decaying Performance due to Missing Statistics 595
27.2 First Run with no Statistics 597
27.2.1 Report Summary 598
27.2.2 Wait Events Statistics 599
27.2.3 SQL Statistics 601
27.2.4 IO Stats 602
27.2.5 Wait Statistics 602
27.2.6 init.ora Parameters 603
27.3 Second Run with Missing Statistics 604
27.3.1 Report Summary 605
27.3.2 Wait Events Statistics 606
27.3.3 SQL Statistics 607
27.3.4 IO Stats 609
27.3.5 Wait Statistics 609
27.4 Third Run with Updated Statistics 611
27.4.1 Report Summary 611
27.4.2 Wait Events Statistics 613
27.4.3 Operating System Statistics 614
27.4.4 SQL Statistics 614
27.4.5 Wait Statistics 616
27.5 Moral of the Case Study 618
Recommended Reading 618
Exercises 618
28 Case Study: Misconfigured SAN Storage 620
28.1 Architecture of Apple's Xserve RAID 621
28.2 Problem Analysis 622
28.2.1 Report Summary 622
28.2.2 Wait Events Statistics 624
28.2.3 IO Stats 625
28.2.4 init.ora Parameters 625
28.3 Reconfiguring the RAID and Verifying 626
28.3.1 Report Summary 626
28.3.2 Wait Events Statistics 628
28.3.3 IO Stats 629
28.4 Moral of the Case Study 629
Recommended Reading 630
Exercises 630
APPENDIX A ORACLE PRODUCT DOCUMENTATION 633
A.1 Oracle Database Concepts 633
A.2 Oracle Database Administrator's Guide 633
A.3 Oracle Database Reference 634
A.4 Oracle Database Performance Tuning Guide 634
A.5 Oracle Database 2 Day + Performance Tuning Guide 634
A.6 Oracle Database 2 Day DBA 634
A.7 Oracle Database SQL Language Reference 634
A.8 Oracle Database Sample Schemas 635
A.9 Oracle Database PL/SQL Packages and Types Reference 635
A.10 Oracle Database PL/SQL Language Reference 635
A.11 Oracle Database JDBC Developer's Guide and References 635
APPENDIX B USING SQL*PLUS WITH ORACLE 636
B.1 Installation 636
B.2 SQL*Plus and tnsnames.ora File 637
B.3 Basics of SQL*Plus 638
B.4 Common SQL*Plus Commands 638
B.5 Using SQL*Plus to Execute SQL Statements 639
B.6 Using SQL*Plus to Execute PL/SQL Blocks 640
B.7 Using SQL*Plus Autotrace to Obtain EXECUTION PLANs and Optimizer Statistics 640
B.8 Using SQL*Plus Timing Command 641
B.9 Exporting/Importing Oracle Databases with SQL*Plus 642
B.10 Creating AWR Reports with SQL*Plus 643
B.11 Checking Tablespace Usage with SQL*Plus 644
B.12 Creating EM DBConsole with SQL*Plus 646
APPENDIX C A COMPLETE LIST OF ALL WAIT EVENTS IN ORACLE 11g 648
APPENDIX D A COMPLETE LIST OF ALL METRICS WITH THE V$STATNAME VIEW 656
APPENDIX E A COMPLETE LIST OF ALL STATISTICS WITH THE V$SYSSTAT VIEW 667
INDEX 681
Henry H. Liu, PhD, is a Software Developer at BMC Software, Inc., with a background in physics from his prior career. His primary responsibilities at BMC include helping build performance and scalability into BMC's cloud computing and enterprise service management software products. Dr. Liu is the author of the highly acclaimed Software Performance and Scalability: A Quantitative Approach (Wiley).
A data-driven, fact-based, quantitative text on Oracle performance and scalability
With database concepts and theories clearly explained in Oracle's context, readers quickly learn how to fully leverage Oracle's performance and scalability capabilities at every stage of designing and developing an Oracle-based enterprise application. The book is based on the author's more than ten years of experience working with Oracle, and is filled with dependable, tested, and proven performance optimization techniques.
Oracle Database Performance and Scalability is divided into four parts that enable readers to progressively learn and apply new skills: Getting Started with Oracle; Oracle Architecture from Performance and Scalability Perspectives; Optimizing Oracle Performance and Scalability; and Case Studies: Oracle Meeting Real World Performance and Scalability Challenges.
In addition to case studies, this book offers a detailed secure online banking application as an example, making it possible for readers to fully explore Oracle's performance and scalability features on their own. Oracle Database Performance and Scalability is an ideal text for pre-conditioning college students and training software developers.