LRU and dirty lists
|The Concepts manual says that dirty buffers are moved from the LRU list to the dirty list by server processes searching for free buffers. Under what circumstances, and by which processes, are the buffers moved back from the dirty list to the LRU list?|
|When DBWn has written a batch of blocks from the write list it moves the buffers back to the LRU end of the replacement list (in reverse order).|
|Which is better, db_writer_processes or dbwr_io_slaves?|
|I'll assume that asynchronous I/O is not available; otherwise you would not consider I/O slaves. In that case, you need to ensure that each write batch is processed as quickly as possible to reduce the risk of write complete waits. You can have up to 50 slaves, but only 10 DBWn processes. Slaves sustain two extra IPC latencies for each block written and so use CPU less efficiently per write, whereas multiple DBWn processes share the cache cleaning workload and thus consume more CPU in total. So slaves would be better if high concurrency and a large write batch are needed, or if CPU capacity is constrained. Otherwise, multiple DBWn processes without slaves would be better.|
Make free request thresholds
|I need to know what parameter sets the number of dirty buffers that a server process searches in the LRU list before it forces a flush by DBWR.|
You may be thinking of _db_block_max_scan_cnt, which defaults to floor(db_block_buffers / 4), applied pro rata to each LRU latch set of buffers. However, the buffers inspected need not be dirty; they are more likely to be pinned. There is another threshold, set by _db_large_dirty_queue, that would trigger a make free request to DBWn long before the default max scan count could be reached, if a large number of dirty buffers were inspected during an LRU scan. This parameter defaults to 1/16 of the internal write batch size (see documentation), rounded to the nearest buffer.
Note that the above applies up to Oracle 8.0 only. The DBWn algorithms and parameters have changed dramatically in 8.1.
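As a rough illustration of the two thresholds just described (valid for Oracle 8.0 and earlier only; the function names are mine, and the per-latch-set pro rata split is taken from the description above, not from any published API):

```python
# Sketch of the make free request thresholds described above.
# Applies to Oracle 8.0 and earlier only; names are illustrative.

def max_scan_cnt(db_block_buffers, lru_latch_sets=1):
    """Default _db_block_max_scan_cnt: floor(db_block_buffers / 4),
    applied pro rata to each LRU latch set of buffers."""
    return (db_block_buffers // 4) // lru_latch_sets

def large_dirty_queue(write_batch_size):
    """Default _db_large_dirty_queue: 1/16 of the internal write
    batch size, rounded to the nearest buffer."""
    return round(write_batch_size / 16)

# Example: a 10000-buffer cache with 2 LRU latch sets,
# and a 400-block internal write batch.
print(max_scan_cnt(10000, 2))   # 1250 buffers may be scanned per latch set
print(large_dirty_queue(400))   # a queue of 25 dirty buffers triggers DBWn
```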
dirty buffers inspected
|Is the statistic dirty buffers inspected a symptom of a lazy DBWR? If so, could this be improved by increasing checkpoints, reducing log file sync waits, adding an additional DBWR process, or, as the Oracle manual suggests, by simply increasing the db_block_buffers value?|
|Yes, dirty buffers inspected is symptomatic of a lazy DBWn, however, fixing it is non-trivial.|
DBWR summed scan depth
As far as I can see from X$KVIT, the highest value of DBWR blocks to scan equals db_block_buffers / 4.
Why does TOAD's statistics analysis suggest that db_files may be too high?
DBWR avg scan depth 1506.824 # DB_FILES too high?
|The DBWn scan depth is both a function of the scan depth boundaries and the write batch size. Further, if I recall correctly, the scan depth boundaries are primarily based on the write batch size too, and the db_block_buffers limit only applies if the write batch is very large. Of course, the default write batch size is a function of db_files and thus the comment (I assume).|
free buffer waits
|I'd like your insight as to how I should tune DBWR to combat free buffer waits. Or is this just a symptom of the session needing too many free buffers?|
|No, free buffer waits should be regarded as a DBWn issue. It is not enough to just investigate what the session waiting on this event was trying to do. Yes, it might be the "query from hell", but DBWn's job is to make free buffers available, no matter what. To tune DBWn, you need to be aware of the write bandwidth available at the hardware, at the operating system level, and at the Oracle level. If the bandwidth is inadequate at any point, you will see free buffer waits. If the bandwidth is lower at the operating system or hardware level than at the Oracle level, then you will see substantial write complete waits. There are many tuning strategies to consider, and many factors to weigh together.|
|I was under the impression that if you had a DBWn problem, then there would always be write complete waits.|
|No, free buffer waits and write complete waits are opposite types of DBWn problem. Generally, the first indicates that the write bandwidth is too small, and the second indicates that the Oracle internal write batch size (_db_block_write_batch) is too big.|
|What is this parameter, and how should it be set?|
|The documentation calls this the "internal write batch size". Under Oracle7, it normally defaults to half the product of db_file_simultaneous_writes and db_files, but it is limited to one quarter of db_block_buffers, or the operating system limit on the maximum number of concurrent asynchronous I/O operations, whichever is the lesser. You can get the current setting from X$KVII, or using our instance_variables.sql script. Because this is a hidden parameter, it is best not to set it directly. Rather, you should tweak the value by tuning db_file_simultaneous_writes and db_files. Tuning is non-trivial, but in general you may need to increase it if you have free buffer waits and reduce it if you have write complete waits.|
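The Oracle7 default just described can be sketched as follows (the operating system async I/O limit is platform specific, so it is passed in here as an assumed parameter; the function name is illustrative, not an Oracle API):

```python
def internal_write_batch(db_file_simultaneous_writes, db_files,
                         db_block_buffers, os_max_async_writes):
    """Oracle7 default for _db_block_write_batch as described above:
    half of (db_file_simultaneous_writes * db_files), capped at the
    lesser of db_block_buffers / 4 and the OS async I/O limit."""
    default = (db_file_simultaneous_writes * db_files) // 2
    cap = min(db_block_buffers // 4, os_max_async_writes)
    return min(default, cap)

# Example: db_file_simultaneous_writes = 4 and db_files = 200
# (as in the AIX case below) give a 400-block write batch,
# well under a cap of min(32000 / 4, 4096) = 4096 blocks.
print(internal_write_batch(4, 200, 32000, 4096))  # 400
```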
|We are running Oracle 7.3.4 on AIX. Our checkpointing is taking too long, and LGWR has to wait to reuse the log files. We have increased the number of redo log groups (almost doubling the space available), and have also increased the initialization parameter db_block_checkpoint_batch so as to allow for faster checkpointing. Yet the problem persists. Is there any good documentation on how the checkpoint process works, and why it takes so long?|
This is a DBWR bottleneck.
First we need to check that AIO is available.
To find out, run the following as root:

    # smit aio

and then select "Change/Show Characteristics of Asynchronous I/O". Also, what version of AIX is it? There have been AIO bugs in some versions. Also check the value of the use_async_io parameter, and if it is FALSE, the number of db_writers.
The next thing to check is the write bandwidth. How many physical disks (spindles) do you have underlying the data files (not counting log files)? To check Oracle's internal write bandwidth, we need to know how many datafiles you have, and the values of the following parameters: db_files, db_file_simultaneous_writes, _db_block_write_batch (if set) and db_block_checkpoint_batch.
Also, just in case you are checkpointing too intensively for a checkpoint to finish before the next one starts, we need to know the size of the log files, and the values of the log_checkpoint_interval and log_checkpoint_timeout parameters. The size of the cache (db_block_buffers * db_block_size) is also significant as an indicator of the amount of work that DBWR may have to do for each checkpoint.
|It is AIX Version 4.3. AIO is available and use_async_io is TRUE. There are 3 physical disks (each mirrored), 24 datafiles, db_files is 200, db_file_simultaneous_writes is 4, _db_block_write_batch is not set, and db_block_checkpoint_batch is 64. There are 10 log files of 50M each, log_checkpoint_interval is 100000, log_checkpoint_timeout is 0, and the cache is 250M. Why are jobs writing so much to the redo logs? Are the rollback segments being logged?|
If by 50M you mean 50,000K, then your log_checkpoint_interval exactly matches the log file size, because the interval is measured in 512-byte blocks on AIX.
The other parameter settings seem OK too. Your internal write batch size works out at 400 blocks, so the checkpoint batch size is reasonable at 64. However, you only have 3 disks, so these numbers are larger than ideal and introduce a risk of write complete waits. It would be better to set both db_files and db_block_checkpoint_batch to 30. However, that is not the problem we are facing here. There is also a school of thought that multiple db_writers works better than AIO. I think it is true, but once again, I don't think that is your problem.
The problem is that you have too few physical disks to support that cache size. You need to either spread the datafiles over more physical disks to increase the write bandwidth, or reduce the size of the cache.
As to why your redo generation is so intensive, I cannot say. It depends on what you are doing. There are some tricks to reduce redo generation, but it is something that most people just live with. And, yes, rollback segment changes have to be logged to allow for crash recovery.
Copyright © Ixora Pty Ltd