Sin-onyms - Are synonyms really bad?
Not so fast - Incremental checkpointing can defeat commit cleanouts
For this test we created 960 test tables in the PROD schema, named T1 to T960, each with just a single NUMBER column. We then created two users, USER1 and USER2, both with SELECT ANY TABLE privilege. USER1 had private synonyms for each of the test tables, whereas USER2 used public synonyms to access them. The main test script contained 960 insert statements to insert a single row into each test table. (If you want to reproduce this test, the number of tables used should be enough to make the performance differences measurable, but not so many that either the library cache or buffer cache will be saturated.)
After bouncing the instance, we ran the test script once to insert rows containing the literal value 1 into each test table. This populated the library cache with the metadata for each test table, and the buffer cache with the required database blocks. We then ran the test script again to measure the hard parse performance, this time inserting rows containing the value 0 into each test table. We then repeated that as another user to measure the soft parse performance. Here are the results from a small desktop system running release 8.1.6.
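The setup described above can be sketched as follows. This is an illustrative reconstruction, not the original test scripts: the PL/SQL loop, the spool file name, and the use of ALL_OBJECTS as a row source are assumptions, and the privilege grants described in the text are omitted. Note that USER1's private synonyms take precedence over the public synonyms during name resolution, so both sets can coexist.

```sql
-- As a DBA: create the 960 single-column test tables in the PROD
-- schema, plus private synonyms for USER1 and public synonyms
-- (which USER2 will fall through to).
BEGIN
  FOR i IN 1 .. 960 LOOP
    EXECUTE IMMEDIATE 'CREATE TABLE prod.t' || i || ' (n NUMBER)';
    EXECUTE IMMEDIATE 'CREATE SYNONYM user1.t' || i || ' FOR prod.t' || i;
    EXECUTE IMMEDIATE 'CREATE PUBLIC SYNONYM t' || i || ' FOR prod.t' || i;
  END LOOP;
END;
/

-- Generate the 960-statement test script (use the literal 1 for the
-- priming run, 0 for the timed runs).
SET HEADING OFF FEEDBACK OFF PAGESIZE 0
SPOOL test_script.sql
SELECT 'insert into t' || ROWNUM || ' values (0);'
  FROM all_objects
 WHERE ROWNUM <= 960;
SPOOL OFF
```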
[Results table: hard parse performance vs. soft parse performance]
The difference in CPU usage is very clear. In this case the use of synonyms more than doubled the CPU cost of both hard and soft parses. Of course, if the test table definitions and SQL statements had been more complex, the percentage difference would not have been so great, but the difference would remain important because of the latching required. This was a single user test with almost no potential for latch contention. In a multi-user production environment where numerous concurrent soft parses and occasional hard parses can be expected, the use of synonyms approximately halves the scalability of the application. In the absence of latch contention, the additional CPU cost of using synonyms during parsing may not be noticeable, but if the number of users and thus the parse rate is high enough to introduce latch contention, then the performance impact of using synonyms will be dramatic.
There are several items on the Ixora web site that explain the reasons for the performance difference that we have observed here, and also a script that may be used to detect public synonym usage in a production environment.
Incremental checkpointing makes use of the fact that blocks are placed on a checkpoint queue in the buffer cache as they are dirtied. DBWn implicitly advances the thread checkpoint incrementally as it writes dirty blocks, and CKPT updates the checkpoint progress record in the controlfile every 3 seconds to reflect the RBA prior to the first change RBA of the block on the tail of the checkpoint queue.
DBWn is also able to explicitly advance the thread checkpoint incrementally by controlling the way that its write batches are composed. By choosing to write some blocks from the tail of the checkpoint queue, rather than just from the four write lists, DBWn can ensure that the thread checkpoint is advanced incrementally.
There are several ways to control the speed of incremental checkpointing. The db_block_max_dirty_target parameter places a threshold on the intended length of the checkpoint queue, although this parameter is deprecated under Oracle8i. The log_checkpoint_interval parameter places a threshold on the intended lag between the sync RBA and the checkpoint RBA, measured in log blocks. The log_checkpoint_timeout parameter places a threshold on the intended thread checkpoint lag, measured in seconds. The length of the checkpoint queue and the size of the redo lag can be combined to estimate the number of I/O operations that would be needed for crash recovery. Accordingly, the fast_start_io_target parameter places a threshold on the intended number of recovery I/O operations. All of these parameters may be set together. Which one constrains the speed of incremental checkpointing can vary from moment to moment, and the V$INSTANCE_RECOVERY view can be used to monitor the relative impact of the last three constraints.
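As a sketch of that monitoring, a query such as the following shows which constraint is currently binding. The column names are those of the 8i V$INSTANCE_RECOVERY view; check your release, as the view's columns have changed over versions.

```sql
SELECT recovery_estimated_ios,         -- estimated I/Os for crash recovery
       actual_redo_blks,               -- current checkpoint lag in redo blocks
       target_redo_blks,               -- the binding target (least of the three below)
       log_file_size_redo_blks,        -- implied by the smallest online log
       log_chkpt_timeout_redo_blks,    -- implied by log_checkpoint_timeout
       log_chkpt_interval_redo_blks    -- implied by log_checkpoint_interval
  FROM v$instance_recovery;
```

Whichever of the last three columns matches TARGET_REDO_BLKS is the constraint that is pacing incremental checkpointing at that moment.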
Because incremental checkpointing does not have the heavy performance impact that interval checkpointing used to have, some DBAs are enthusiastically using this facility to place a firm limit on crash recovery times, whereas they would otherwise have been content with much longer crash recovery times. However, rapid incremental checkpointing has an unwanted side effect that should also be taken into account: it has the potential to defeat the commit cleanout mechanism.
When a transaction commits, the set of blocks that it has dirtied most recently (up to 10% of the cache) is revisited and cleaned out if possible. Commit cleanouts do not generate redo, and they avoid the additional I/O and rollback segment header access associated with delayed block cleanouts. However, commit cleanouts are not written to disk for blocks that have already been written out by overzealous incremental checkpointing. (Note: this is no longer true in 9.0.1, and possibly earlier.) Our recommendation is therefore to set log_checkpoint_timeout to 3 or 4 times the expected duration of your longest "normal" transaction, and to disable all other constraints on the speed of incremental checkpointing.
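That recommendation might be expressed as an init.ora fragment like the one below. The 900-second transaction duration is an illustrative assumption; substitute the duration of your own longest "normal" transaction.

```
# Assume the longest "normal" transaction runs for about 900 seconds;
# 4 x 900 = 3600 seconds.
log_checkpoint_timeout  = 3600

# Disable the other constraints on incremental checkpointing speed.
log_checkpoint_interval = 0     # 0 disables the redo-lag constraint
fast_start_io_target    = 0     # 0 disables the recovery I/O constraint

# db_block_max_dirty_target is deprecated under Oracle8i; leave it at
# its (effectively unlimited) default rather than setting it.
```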
Copyright © Ixora Pty Ltd