Data Files |
Tablespace | Datafile | Status | Enabled | Size (kB) | Free (kB) | Used (%) | Phys.Reads | Phys.Writes | Avg.I/O Time |
PCB | /opt/oracle/oraData/oradb/vol_01/esn.dbf | ONLINE | READ WRITE | 47,104.00 | 64.00 | 99.86 | 482651 | 44185 | 1588670751 |
PCB | /opt/oracle/oraData/oradb/vol_01/esn.dbf | ONLINE | READ WRITE | 47,104.00 | 128.00 | 99.73 | 482651 | 44185 | 1588670751 |
PCB | /opt/oracle/oraData/oradb/vol_01/esn.dbf | ONLINE | READ WRITE | 47,104.00 | 256.00 | 99.46 | 482651 | 44185 | 1588670751 |
PCB | /opt/oracle/oraData/oradb/vol_01/esn.dbf | ONLINE | READ WRITE | 47,104.00 | 320.00 | 99.32 | 482651 | 44185 | 1588670751 |
PCB | /opt/oracle/oraData/oradb/vol_01/esn.dbf | ONLINE | READ WRITE | 47,104.00 | 512.00 | 98.91 | 482651 | 44185 | 1588670751 |
PCB | /opt/oracle/oraData/oradb/vol_01/esn.dbf | ONLINE | READ WRITE | 47,104.00 | 640.00 | 98.64 | 482651 | 44185 | 1588670751 |
PCB | /opt/oracle/oraData/oradb/vol_01/esn.dbf | ONLINE | READ WRITE | 47,104.00 | 960.00 | 97.96 | 482651 | 44185 | 1588670751 |
SYSTEM | /opt/oracle/oraData/oradb/vol_01/system01.dbf | SYSTEM | READ WRITE | 332,800.00 | 32.00 | 99.99 | 30800 | 45477 | 875644029 |
SYSTEM | /opt/oracle/oraData/oradb/vol_01/system01.dbf | SYSTEM | READ WRITE | 332,800.00 | 48.00 | 99.99 | 30800 | 45477 | 875644029 |
SYSTEM | /opt/oracle/oraData/oradb/vol_01/system01.dbf | SYSTEM | READ WRITE | 332,800.00 | 96.00 | 99.97 | 30800 | 45477 | 875644029 |
SYSTEM | /opt/oracle/oraData/oradb/vol_01/system01.dbf | SYSTEM | READ WRITE | 332,800.00 | 224.00 | 99.93 | 30800 | 45477 | 875644029 |
SYSTEM | /opt/oracle/oraData/oradb/vol_01/system01.dbf | SYSTEM | READ WRITE | 332,800.00 | 392.00 | 99.88 | 30800 | 45477 | 875644029 |
SYSTEM | /opt/oracle/oraData/oradb/vol_01/system01.dbf | SYSTEM | READ WRITE | 332,800.00 | 129,760.00 | 61.01 | 30800 | 45477 | 875644029 |
UNDOTBS | /opt/oracle/oraData/oradb/vol_01/undotbs01.dbf | ONLINE | READ WRITE | 235,520.00 | 64.00 | 99.97 | 37 | 185896 | -283356465 |
UNDOTBS | /opt/oracle/oraData/oradb/vol_01/undotbs01.dbf | ONLINE | READ WRITE | 235,520.00 | 192.00 | 99.92 | 37 | 185896 | -283356465 |
UNDOTBS | /opt/oracle/oraData/oradb/vol_01/undotbs01.dbf | ONLINE | READ WRITE | 235,520.00 | 256.00 | 99.89 | 37 | 185896 | -283356465 |
UNDOTBS | /opt/oracle/oraData/oradb/vol_01/undotbs01.dbf | ONLINE | READ WRITE | 235,520.00 | 233,344.00 | 0.92 | 37 | 185896 | -283356465 |
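Figures like the ones above can presumably be gathered from Oracle's dictionary and dynamic performance views. A minimal sketch (joins and column choices are assumptions, not the exact query the report used; run as a DBA user):

```sql
-- Sketch: per-datafile size, free space, and physical I/O counters,
-- assembled from dba_data_files, dba_free_space, v$datafile, v$filestat.
SELECT d.tablespace_name,
       d.file_name,
       v.status,
       v.enabled,
       d.bytes / 1024                                              AS size_kb,
       NVL(f.free_kb, 0)                                           AS free_kb,
       ROUND(100 * (1 - NVL(f.free_kb, 0) / (d.bytes / 1024)), 2)  AS used_pct,
       s.phyrds                                                    AS phys_reads,
       s.phywrts                                                   AS phys_writes
  FROM dba_data_files d
  JOIN v$datafile v ON v.file# = d.file_id
  JOIN v$filestat s ON s.file# = d.file_id
  LEFT JOIN (SELECT file_id, SUM(bytes) / 1024 AS free_kb
               FROM dba_free_space
              GROUP BY file_id) f
         ON f.file_id = d.file_id
 ORDER BY d.tablespace_name, d.file_name;
```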
SYSSTAT Info |
Name | Value | Description |
DiskSorts / MemorySorts | 0.00 | Higher values indicate that you may want to increase SORT_AREA_SIZE |
summed dirty queue length / write requests | 5,820.00 | If this value is > 100, the DBWR is too lazy, so you may want to decrease DB_BLOCK_MAX_DIRTY_TARGET |
free buffer inspected / free buffer requested | 0.00 | Increase your buffer cache if this value is too high |
redo buffer allocation retries / redo blocks written | 0.00 | Should be less than 0.01 |
redo log space requests | 9.00 | How often the log file was full and Oracle had to wait for a new file to become available |
table fetch continued row | 3,538.00 | How many migrated rows were encountered during this instance's lifetime? If the number is significant, you may have to analyze your tables:
ANALYZE TABLE tablename COMPUTE STATISTICS;
SELECT num_rows, chain_cnt FROM dba_tables WHERE table_name = 'TABLENAME';
The script utlchain.sql may then help you eliminate migration automatically (correct PCTFREE before running it!). |
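For pinpointing the migrated rows themselves, rather than only counting them, Oracle's chained-rows mechanism can be used. A sketch, assuming the CHAINED_ROWS table has already been created by running $ORACLE_HOME/rdbms/admin/utlchain.sql, and with MYTABLE as a placeholder table name:

```sql
-- Sketch: record each migrated/chained row's rowid into CHAINED_ROWS,
-- then inspect the result. MYTABLE is a hypothetical placeholder.
ANALYZE TABLE mytable LIST CHAINED ROWS INTO chained_rows;

SELECT owner_name, table_name, head_rowid
  FROM chained_rows
 WHERE table_name = 'MYTABLE';
```

The rowids collected this way can be used to copy the affected rows out, delete them, and re-insert them after raising PCTFREE.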
Selected Wait Events |
Name | Value | Description |
v$system_event: db file sequential read | 272206 | Indicator for I/O problems on index accesses |
v$system_event: db file scattered read | 240182 | Indicator for I/O problems on full table scans |
v$system_event: latch free | 15330 | |
v$system_event: LGWR wait for redo copy | 626 | |
v$system_event: log file switch (checkpoint incomplete) | 0 | |
v$system_event: log file switch completion | 7 | You may consider increasing the number of logfile groups. |
v$session_wait: log buffer wait | 0 | If this value is too high, log buffers are filling faster than they are being emptied. You should then consider increasing the number of logfile groups or using larger log files. |
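The system-wide wait counters above presumably come from V$SYSTEM_EVENT; a query along these lines lists the same events (event names as they appear in the view):

```sql
-- Sketch: pull the selected wait events from v$system_event.
SELECT event, total_waits, time_waited
  FROM v$system_event
 WHERE event IN ('db file sequential read',
                 'db file scattered read',
                 'latch free',
                 'LGWR wait for redo copy',
                 'log file switch (checkpoint incomplete)',
                 'log file switch completion')
 ORDER BY total_waits DESC;
```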
Miscellaneous |
Name | Value |
If we have many full table scans, we may have to tune DB_FILE_MULTIBLOCK_READ_COUNT. Besides the statistics below, we need the block count of the largest table to find the best value. |
table scans (short tables) | 2640269 |
table scans (long tables) | 73895 |
table scans (rowid ranges) | 37 |
table scans (cache partitions) | 0 |
table scans (direct read) | 21 |
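The block count of the largest tables, needed to bound DB_FILE_MULTIBLOCK_READ_COUNT, can be read from the dictionary. A sketch (dba_segments is standard; the top-5 cap via ROWNUM on an ordered subquery is just one way to do it):

```sql
-- Sketch: block counts of the largest tables in the database.
SELECT owner, segment_name, blocks
  FROM (SELECT owner, segment_name, blocks
          FROM dba_segments
         WHERE segment_type = 'TABLE'
         ORDER BY blocks DESC)
 WHERE ROWNUM <= 5;
```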
If there are tables that will certainly need more extents shortly, we can reduce I/O overhead by allocating some extents for them in advance, using
"ALTER TABLE tablename ALLOCATE EXTENT". Here are some candidates, having less than 10 percent of their free blocks left: |
PCB_REP.GDORDER | 0.00% |
PCB_REP.GDSETTL | 0.00% |
PCB_REP.GDTRADE | 0.00% |
PCB_REP.HGDORDER | 0.00% |
PCB_REP.HGDSETTL | 0.00% |
PCB_REP.HGDTRADE | 0.00% |
PCB_REP.PDENUM | 0.00% |
PCB_REP.SDAPART | 0.00% |
PCB_REP.SDBOERSE | 0.00% |
PCB_REP.SDBOERSEGEB | 0.00% |
PCB_REP.SDFTAG | 0.00% |
PCB_REP.SDKONTH | 0.00% |
PCB_REP.SDKONTHGEB | 0.00% |
PCB_REP.SDSTAND | 0.00% |
PCB_REP.ZDKONTHBOERSE | 0.00% |
SYSTEM.VDK_OBJECT | 0.00% |
SYSTEM.VDK_REP_CONTROL | 0.00% |
SYSTEM.VDK_SEGMENT | 0.00% |
SYSTEM.VDK_SESSION | 0.00% |
SYSTEM.VDK_SERVICE | 0.00% |
SYSTEM.VDK_TAB_PARTITIONS | 0.00% |
SYSTEM.VDK_TABLE | 0.00% |
SYSTEM.VDK_TABLESPACE | 0.00% |
SYSTEM.VDK_USER_RULE | 0.00% |
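For the candidates listed above, pre-allocation is a single DDL statement per table. A sketch using one of the reported tables; the explicit SIZE clause is optional, and 10M is an assumed figure to adjust to your growth rate:

```sql
-- Sketch: pre-allocate an extent for a candidate table to avoid
-- dynamic extent allocation during peak load. Without a SIZE clause,
-- Oracle uses the segment's NEXT extent size.
ALTER TABLE pcb_rep.gdorder ALLOCATE EXTENT (SIZE 10M);
```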