Although archiver I/O is not quite as performance-critical as log file I/O, because it is performed in the background by ARCn, each log file must nevertheless be archived comfortably within the time it takes LGWR to fill the remaining log files. Otherwise, redo generation is suspended, which is disastrous for performance.
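The constraint above can be checked with simple arithmetic. The sketch below is illustrative only; the function name and all of the figures are hypothetical, not taken from the article.

```python
# Back-of-envelope check (hypothetical figures): with n log groups, once a
# log switch occurs, ARCn has the time it takes LGWR to fill the remaining
# n - 1 logs in which to archive the switched-out log.

def archiver_keeps_pace(log_mb, n_groups, redo_mb_per_s, archive_mb_per_s):
    """Return True if one log can be archived before LGWR wraps around."""
    time_to_fill_rest = (n_groups - 1) * log_mb / redo_mb_per_s  # seconds
    time_to_archive_one = log_mb / archive_mb_per_s              # seconds
    return time_to_archive_one <= time_to_fill_rest

# Example: 100 MB logs, 4 groups, LGWR writing 20 MB/s, ARCn copying 8 MB/s.
print(archiver_keeps_pace(100, 4, 20, 8))  # → True (12.5 s to archive, 15 s of slack)
```

Note that the archiver throughput here should be the effective copy rate, since ARCn must both read the log from disk and write the archive copy.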
The twin tuning strategies to prevent archival backlogs are to ensure that enough ARCn processes are available, and to configure the archive destination file systems for fast writes.
Archiver writes are logically sequential. However, unless redo generation is light, it is unwise to rely on archiver writes being physically sequential as well, because LGWR can write faster than a single ARCn process can archive, and so multiple ARCn processes must often write into the archive destination file systems at the same time. This makes the logically sequential I/O appear pseudo-random at the disk level.
LGWR has an unfair advantage over ARCn for several reasons. LGWR reads from memory, whereas ARCn must read from disk. LGWR can write to raw log files, whereas ARCn must write to a file system, which often means that LGWR can perform asynchronous writes but ARCn cannot. Further, LGWR overwrites an existing file, whereas ARCn must continually extend the file it is writing, and the resulting file system space management overheads can more than double the time taken to write the file.
So, if the rate of redo generation is moderate or heavy, it is easy for LGWR to write faster than ARCn can copy, even if ARCn is just copying the log files to a single archive destination. ARCn's workload is greater still if the log files must be archived to multiple destinations, because each log file is copied to all the archive destinations by a single ARCn process, even if multiple ARCn processes are available.
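The points above correspond to two initialization parameters. The fragment below is a hedged illustration only: the parameter names are from the Oracle 8i/9i LOG_ARCHIVE_DEST_n syntax, and the paths and service name are hypothetical.

```
# Allow several ARCn processes, so that archiving can proceed concurrently
# when LGWR switches logs faster than one ARCn process can copy.
log_archive_max_processes = 4

# Each log is copied to every destination by a single ARCn process, so each
# additional destination adds to that one process's workload.
log_archive_dest_1 = 'LOCATION=/arch1/SID MANDATORY'
log_archive_dest_2 = 'SERVICE=standby_db OPTIONAL'
```

Raising log_archive_max_processes does not speed up the archival of any single log file; it only allows more log files to be archived concurrently.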
The key to configuring the archive destination file systems for fast archival is to use an extent-based file system mounted for direct I/O on dedicated striped and mirrored disks. The file systems must be extent-based and mounted for direct I/O to ensure that large writes are not normally split, and to minimize the impact of the file system space management overheads. Alternatively, if a block-based file system must be used, the largest possible file system block size should be used.
If redo generation is light or moderate, a small stripe element size can be used to maximize the transfer rate of individual archive writes. This is based on the assumption that there will only rarely be more than one ARCn process active. In this case, the archive destinations should also be on dedicated disks, to preserve the sequential nature of the I/O when possible. If redo generation is heavy, such that there will commonly be more than one ARCn process active, then the stripe element size must be large relative to the size of archive writes (_log_archive_buffer_size) to maximize concurrency.
Where possible, data protection should be provided by hardware mirroring in order to minimize its performance impact. Multiple archive destinations may be necessary if the files are being transmitted to a hot standby database. However, multiple archive destinations should not be preferred merely because they facilitate taking duplicate tape backups of the archived log files. With proper configuration, all modern backup solutions can take multiple tape backups of the archived log files from a single mirrored source.
Copyright © Ixora Pty Ltd