Oracle Performance Tuning Tips

Use a large database block size

Because the database block size is fixed at database creation, this is one decision that is important to get right the first time. For file system based datafiles without direct I/O, the database block size must match the file system buffer size exactly; otherwise writes smaller than a file system buffer force the operating system into read-modify-write cycles. However, if your database is on raw devices, or if direct I/O is available, then you have the luxury of choosing a larger database block size. There are many advantages to using a large block size, but the most significant is the saving in I/O for index based access paths.
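The block size is set by the DB_BLOCK_SIZE initialization parameter when the database is created and cannot be changed thereafter without rebuilding the database. As a minimal sketch, the current setting can be checked as follows (assuming you have access to v$parameter):

    SELECT name, value
    FROM   v$parameter
    WHERE  name = 'db_block_size';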

How does a large block size save I/O?

A large database block size reduces the number of reads required to probe an index and scan a range of values from its leaf blocks.

[Figure: An index range scan using an 8K database block size]

The figure above shows a hypothetical index in a database with an 8K block size. The figure below shows the equivalent index after the database has been rebuilt with a 16K block size. The most obvious difference is that the number of leaf blocks has been halved. As a direct result, the number of pointers to leaf blocks has also been halved. Because there are only half as many leaf block pointers, and because each branch block can now hold twice as many of them, the number of blocks in the last level of branch blocks has been reduced to one quarter. In general, for large indexes the space saving at each level of the B*-tree increases exponentially with the distance of that level from the leaf blocks. Thus in some cases the height of the tree is reduced by one level.

[Figure: An index range scan using a 16K database block size]

The shaded blocks in the figures illustrate the dual saving in logical reads due to a larger database block size. If the height of the tree is reduced, fewer block gets are required to traverse the tree from the root block to a leaf block. And in an index range scan, only half as many single block reads are required to scan the leaf blocks.
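These structural differences can be measured directly. Here is a minimal sketch against a hypothetical index called ORDERS_IX; note that ANALYZE ... VALIDATE STRUCTURE blocks DML against the underlying table while it runs, and that INDEX_STATS contains just one row, visible only to the session that performed the validation:

    -- Validate the index and populate INDEX_STATS.
    ANALYZE INDEX orders_ix VALIDATE STRUCTURE;

    -- HEIGHT is the number of B*-tree levels; BR_BLKS and LF_BLKS
    -- are the branch and leaf block counts respectively.
    SELECT height, br_blks, lf_blks
    FROM   index_stats;

After rebuilding the database with a larger block size, LF_BLKS should be roughly halved, and HEIGHT may be smaller.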

Of course, these are savings in logical reads. But are there equivalent savings in physical reads as well? That depends on which blocks are cached. Because the index branch blocks tend to be well cached anyway, doubling the database block size benefits index range scans more than it benefits index probes.
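Both kinds of reads are easy to observe from SQL*Plus. The sketch below uses a hypothetical ORDERS table with a range predicate on its indexed ORDER_DATE column; in the statistics that autotrace reports, 'consistent gets' corresponds to logical reads and 'physical reads' to blocks actually read from disk:

    SET AUTOTRACE TRACEONLY STATISTICS

    SELECT order_total
    FROM   orders
    WHERE  order_date >= TO_DATE('2001-01-01', 'YYYY-MM-DD')
    AND    order_date <  TO_DATE('2001-04-01', 'YYYY-MM-DD');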

What about the impact on the buffer cache?

We must also consider whether these improvements might be negated by other factors. In particular, if the buffer cache is not enlarged, doubling the block size halves the number of buffers in the cache, and this might have a negative impact on the cache hit rate. Fortunately, this need not be the case if you take care to adjust the sizes of the various buffer pools appropriately.

[Figure: The buffer pools using an 8K database block size]

Although circumstances vary, the reduction in the size of the DEFAULT buffer pool normally offsets the growth in the RECYCLE and KEEP buffer pools. This is illustrated in the figures above and below. So, contrary to popular belief, rebuilding a database with a larger database block size can be memory neutral and need not impair data caching. However, due care must be taken in the sizing of the buffer pools.

[Figure: The buffer pools using a 16K database block size]
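The arithmetic is straightforward: at a constant memory allocation, each pool simply holds half as many 16K buffers as it did 8K buffers, and what matters is that each pool can still hold its working set of blocks. A sketch of the relevant Oracle9i parameters follows; the sizes shown are purely illustrative and must be derived from your own working sets:

    # init.ora sketch -- the sizes are hypothetical examples only
    db_block_size         = 16384
    db_cache_size         = 800M    # DEFAULT pool
    db_keep_cache_size    = 150M    # KEEP pool
    db_recycle_cache_size = 50M     # RECYCLE pool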

In view of this, you should normally use a large database block size for unbuffered databases (those on raw devices or using direct I/O), particularly if your application is heavily dependent on the performance of index based access paths, especially index range scans.

What other factors are there?

The database block size has many other impacts, but they all support the use of a large database block size. For example, larger blocks lose proportionally less space to block header overhead, and long rows are less likely to chain.

What about using multiple block sizes?

Under Oracle9i individual tablespaces can have a block size that differs from the database block size. So it is now possible to use a smaller block size for tablespaces containing tables that are never accessed sequentially. This can reduce the amount of memory required to cache a working set of blocks from these tables. It can also reduce the cost of index based access to these tables, because each physical read against a table block is slightly faster. Of course, this should not be considered if the database is using buffered I/O, because in that case the database block size must match the file system buffer size exactly. Also, because using a smaller block size for tables reduces the data density, increases the risk of chaining, and makes sequential processing less efficient, there are many cases in which it is better to follow the general principle of using a large block size.
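A sketch of how a non-standard block size tablespace is configured under Oracle9i follows; the tablespace name, datafile path, and sizes are hypothetical, and a buffer cache for the non-standard block size must be configured before the tablespace can be created:

    -- A cache for 4K blocks must exist first.
    ALTER SYSTEM SET db_4k_cache_size = 32M;

    -- Create a 4K tablespace for tables accessed only by single-row lookups.
    CREATE TABLESPACE random_access_data
        DATAFILE '/u02/oradata/prod/random01.dbf' SIZE 500M
        BLOCKSIZE 4K;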

