ExpressDBA
Sharing cases, tips, knowledge-base articles, and real-life experience about database administration and related stuff.
Use PowerShell to Discover, Diagnose, and Document SQL Server
Statistics Parser for SQL Server
A handy online tool for analyzing the statistics output of SQL Server commands:
1) Set STATISTICS IO on
2) Run the command
3) Paste the statistics into the site to convert them
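The steps above can be sketched in T-SQL (the query and table names below are hypothetical placeholders):

```sql
-- 1) Enable I/O statistics for the current session
SET STATISTICS IO ON;
GO

-- 2) Run the command you want to analyze (hypothetical example query)
SELECT c.CustomerID, COUNT(*) AS OrderCount
FROM Sales.Orders AS o
JOIN Sales.Customers AS c ON c.CustomerID = o.CustomerID
GROUP BY c.CustomerID;
GO

-- 3) The Messages tab now shows lines such as:
--    Table 'Orders'. Scan count 1, logical reads 345, physical reads 0, ...
--    Copy that output and paste it into the parser site for a formatted summary.

SET STATISTICS IO OFF;
GO
```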
SQL Server Setup Checklist - a nice document
Example DBA Parameters To Help Tune the Performance of Data Pump
Note 376969.1 | Type: HOWTO | Last Revision Date: 10-OCT-2006 | Status: PUBLISHED
In this Document
Goal
Solution
Applies to:
HCM Common Architecture - Version: 11.5.10
Information in this document applies to any platform.
Goal
To help tune the performance of Data Pump.
Solution
To achieve a steady rate of between 50 and 60 records per second, set the parameters as follows:
_spin_count = 4000
session_cached_cursors = 800
cursor_space_for_time = true
db_cache_advice = OFF
Note: the session_cached_cursors parameter plays a major role in these performance issues.
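As a sketch, the parameters from the note above could be written to the spfile so they take effect at the next restart. The values are the ones quoted in the note; hidden parameters such as _spin_count should only be changed under Oracle Support guidance:

```sql
-- Written to the spfile; most of these require an instance restart
ALTER SYSTEM SET "_spin_count" = 4000 SCOPE = SPFILE;
ALTER SYSTEM SET session_cached_cursors = 800 SCOPE = SPFILE;
ALTER SYSTEM SET cursor_space_for_time = TRUE SCOPE = SPFILE;
ALTER SYSTEM SET db_cache_advice = OFF SCOPE = SPFILE;
```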
Data Pump Performance
The Data Pump utilities are designed especially for very large databases. If your site has very large quantities of data versus metadata, you should experience a dramatic increase in performance compared to the original Export and Import utilities. This chapter briefly discusses why the performance is better and also suggests specific steps you can take to enhance performance of export and import operations.
This chapter contains the following sections:
· Data Performance Improvements for Data Pump Export and Import
· Initialization Parameters That Affect Data Pump Performance
Performance of metadata extraction and database object creation in Data Pump Export and Import remains essentially equivalent to that of the original Export and Import utilities.
Data Performance Improvements for Data Pump Export and Import
The improved performance of the Data Pump Export and Import utilities is attributable to several factors, including the following:
· Multiple worker processes can perform intertable and interpartition parallelism to load and unload tables in multiple, parallel, direct-path streams.
· For very large tables and partitions, single worker processes can choose intrapartition parallelism through multiple parallel queries and parallel DML I/O server processes when the external tables method is used to access data.
· Data Pump uses parallelism to build indexes and load package bodies.
· Dump files are read and written directly by the server and, therefore, do not require any data movement to the client.
· The dump file storage format is the internal stream format of the direct path API. This format is very similar to the format stored in Oracle database datafiles inside of tablespaces. Therefore, no client-side conversion to INSERT statement bind variables is performed.
· The supported data access methods, direct path and external tables, are faster than conventional SQL. The direct path API provides the fastest single-stream performance. The external tables feature makes efficient use of the parallel queries and parallel DML capabilities of the Oracle database.
· Metadata and data extraction can be overlapped during export.
Tuning Performance
Data Pump technology fully uses all available resources to maximize throughput and minimize elapsed job time. For this to happen, a system must be well-balanced across CPU, memory, and I/O. In addition, standard performance tuning principles apply. For example, for maximum performance you should ensure that the files that are members of a dump file set reside on separate disks, because the dump files will be written and read in parallel. Also, the disks should not be the same ones on which the source or target tablespaces reside.
Any performance tuning activity involves making trade-offs between performance and resource consumption.
Controlling Resource Consumption
The Data Pump Export and Import utilities enable you to dynamically increase and decrease resource consumption for each job. This is done using the PARALLEL parameter to specify a degree of parallelism for the job. (The PARALLEL parameter is the only tuning parameter that is specific to Data Pump.) For maximum throughput, do not set PARALLEL to much more than twice the number of CPUs (two workers for each CPU).
See Also:
· PARALLEL for more information about the Export PARALLEL parameter
· PARALLEL for more information about the Import PARALLEL parameter
As you increase the degree of parallelism, CPU usage, memory consumption, and I/O bandwidth usage also increase. You must ensure that adequate amounts of these resources are available. If necessary, you can distribute files across different disk devices or channels to get the needed I/O bandwidth.
To maximize parallelism, you must supply at least one file for each degree of parallelism. The simplest way of doing this is to use wildcards in your file names (for example, file%u.dmp). However, depending upon your disk setup (for example, simple, non-striped disks), you might not want to put all dump files on one device. In this case, it is best to specify multiple wildcarded file names, with each in a separate directory resolving to a separate disk. Even with fast CPUs and fast disks, the path between the CPU and the disk may be the constraining factor in the amount of parallelism that can be sustained.
The PARALLEL parameter is valid only in the Enterprise Edition of Oracle Database 10g.
Initialization Parameters That Affect Data Pump Performance
The settings for certain initialization parameters can affect the performance of Data Pump Export and Import. In particular, you can try using the following settings to improve performance, although the effect may not be the same on all platforms.
· DISK_ASYNCH_IO=TRUE
· DB_BLOCK_CHECKING=FALSE
· DB_BLOCK_CHECKSUM=FALSE
The following initialization parameters must have values set high enough to allow for maximum parallelism:
· PROCESSES
· SESSIONS
· PARALLEL_MAX_SERVERS
Additionally, the SHARED_POOL_SIZE and UNDO_TABLESPACE initialization parameters should be generously sized. The exact values will depend upon the size of your database.
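The settings above can be sketched as ALTER SYSTEM statements. The PROCESSES/SESSIONS/PARALLEL_MAX_SERVERS values below are purely illustrative, not recommendations — size them for your own workload:

```sql
-- Settings that can speed up Data Pump jobs (effect varies by platform)
ALTER SYSTEM SET disk_asynch_io = TRUE SCOPE = SPFILE;
ALTER SYSTEM SET db_block_checking = FALSE;
ALTER SYSTEM SET db_block_checksum = FALSE;

-- Raise limits so the requested degree of parallelism can be reached
-- (illustrative values only; PROCESSES and SESSIONS need a restart)
ALTER SYSTEM SET processes = 500 SCOPE = SPFILE;
ALTER SYSTEM SET sessions = 600 SCOPE = SPFILE;
ALTER SYSTEM SET parallel_max_servers = 64;
```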
Setting the Size Of the Buffer Cache In a Streams Environment
As of Oracle Database 10g release 10.2, if Streams functionality is used but the STREAMS_POOL_SIZE initialization parameter is not defined, then the size of the streams pool automatically defaults to 10% of the size of the shared pool.
When the streams pool is created, the required SGA memory is taken from memory allocated to the buffer cache, reducing the size of the cache to less than what was specified by the DB_CACHE_SIZE initialization parameter. This means that if the buffer cache was configured with only the minimal required SGA, then Data Pump operations may not work properly.
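One way to avoid that silent shrink of the buffer cache is to size the streams pool explicitly up front (the 256M figure is an illustrative assumption, not a recommendation):

```sql
-- Define the streams pool explicitly so its memory is not carved out
-- of the buffer cache when the pool is created (size is illustrative)
ALTER SYSTEM SET streams_pool_size = 256M SCOPE = BOTH;
```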
Additionally:
- don't forget to separate UNDO, DATA, and LOG, spreading the I/O as much as possible.
Important trace flags every DBA should know
Trace flag 610: used to minimize logging during large data loads
Trace flag 834: enables the use of large pages (requires the Lock Pages in Memory privilege)
Trace flag 835: enables Lock Pages in Memory support on Standard Edition
Trace flag 1204: writes deadlock information to the errorlog
Trace flag 2528: disables parallelism in DBCC CHECKDB
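A quick sketch of turning one of these flags on and checking it, using trace flag 1204 as the example:

```sql
-- Enable the trace flag globally (-1) for the life of the instance
DBCC TRACEON (1204, -1);

-- List all trace flags currently active on the instance
DBCC TRACESTATUS (-1);

-- Disable it again when no longer needed
DBCC TRACEOFF (1204, -1);
```

To make a flag survive restarts, add it as a -T startup parameter (e.g. -T1204) via SQL Server Configuration Manager instead of DBCC TRACEON.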