The datafile size should be chosen in proportion to the total size of the database, with an allowance for growth. If the datafiles are too large, then free space is unnecessarily fragmented between tablespaces. However, a proliferation of small datafiles should also be avoided, because most datafile headers must be updated at most checkpoints, and each Oracle process's memory usage is partly proportional to the number of datafiles that it has open. Further, the number of datafiles also affects the rate at which DBWn attempts to write, and thus a proliferation of small datafiles increases the risk of write complete waits.
When choosing a datafile size for a database that will be created on raw logical volumes, remember that an allowance needs to be made for the logical volume control block, if any, and a single datafile header block. That is, the SIZE specified in the filespec clause of the CREATE TABLESPACE command must be smaller than the logical volume in which the datafile is created by at least that much. An allowance of 128K is adequate in all cases.
Also, remember that locally managed tablespaces have a 64K bitmap immediately after the datafile header block. This means that the SIZE of a datafile in a locally managed tablespace with a uniform extent allocation policy should be a multiple of the extent size plus 64K; otherwise the final "extent" will be too small to be used.
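The sizing arithmetic above can be sketched as follows. This is an illustrative calculation only, not an Oracle utility: the function name and units are assumptions, and it simply applies the 128K raw-volume allowance and the uniform-extent-plus-64K rule from the text.

```python
KB = 1024  # kilobytes, the unit used throughout

def datafile_size_kb(lv_size_kb, extent_size_kb):
    """Largest sensible SIZE (in KB) for a datafile on a raw logical
    volume, for a locally managed tablespace with uniform extents.
    Hypothetical helper; illustrates the rules in the text above."""
    # Reserve 128K for the logical volume control block (if any)
    # and the datafile header block.
    usable = lv_size_kb - 128
    # The 64K extent-allocation bitmap follows the header, so SIZE
    # should be a whole number of extents plus 64K; any remainder
    # would be an unusably small final "extent".
    extents = (usable - 64) // extent_size_kb
    return extents * extent_size_kb + 64

# e.g. a 1G raw volume with 8M uniform extents
print(datafile_size_kb(1024 * KB, 8 * KB))  # → 1040448 (127 extents + 64K)
```

For this example, the datafile holds 127 full 8M extents, and just under one extent of the volume is sacrificed to the allowance, the bitmap, and rounding.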
Using a uniform stripe breadth within each data protection class gives you maximum flexibility in disk load balancing, and using a modest stripe breadth enables you to maintain the required I/O separation.
It is sometimes objected that some database segments require higher concurrency, and thus broader striping, than the moderate stripe breadth proposed above for all datafiles. In most cases, this can be catered for by ensuring that such segments have extents in a number of datafiles residing on different sets of disks. If, however, a single extent will contain a hot spot that requires higher concurrency, then separate provision should be made for that tablespace alone.
The naming conventions adopted should not be overly verbose, particularly those used for directory names. Long pathname components prevent all subordinate pathnames from being cached in the DNLC (name cache), which is used by the operating system for pathname to inode translations. On many operating systems, a long pathname component is anything longer than 14 bytes. Further, because directories are searched linearly, frequently opened files (such as subdirectories) should be created first in their directories. It is also better to have a deeper directory structure with a low branching factor than a shallower directory structure with many files in each directory. However, the absolute pathname to all datafiles (and raw logical volumes) should be limited to 59 bytes if possible.
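A naming convention can be checked against the two limits above mechanically. The sketch below is illustrative only: the function name, limits as defaults, and return format are assumptions, not part of any Oracle or OS tool.

```python
def check_datafile_path(path, max_component=14, max_total=59):
    """Report pathname-length problems per the guidelines above:
    components over max_component bytes defeat DNLC caching, and
    total length should not exceed max_total bytes.
    Hypothetical helper for illustration."""
    problems = []
    if len(path.encode()) > max_total:
        problems.append("total pathname exceeds %d bytes" % max_total)
    for part in path.strip("/").split("/"):
        if len(part.encode()) > max_component:
            problems.append("component %r exceeds %d bytes" % (part, max_component))
    return problems

print(check_datafile_path("/u01/oradata/PROD/system01.dbf"))  # → []
print(check_datafile_path("/u01/oracle_database_files/PROD/system01.dbf"))
```

The second call flags the 21-byte `oracle_database_files` component, which would prevent everything beneath it from being cached in the DNLC.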
Also, beware of allowing large numbers of files to accumulate in the archive, audit and dump destination directories. If this does occur, the directory concerned should be entirely removed and then recreated, because on most filesystems deleting the files alone does not shrink the directory itself, and so does not restore search performance.
If you have uniformly sized datafiles with clearly differentiated I/O characteristics, and a moderate number of tablespaces with well-differentiated I/O requirements, then you will be ideally equipped to perform disk load balancing.
Copyright © Ixora Pty Ltd