ZFS Device I/O Queue Size (I/O Concurrency)

ZFS controls the I/O queue depth for a given LUN with the zfs_vdev_max_pending parameter. The default is 35, which allows common SCSI and SATA disks to reach their maximum throughput under ZFS. However, keeping 35 concurrent I/Os outstanding per device can inflate individual service times. For NVRAM-based storage, this 35-deep queue is not expected to be reached or to play a significant role, so tuning this parameter is expected to be ineffective. For JBOD-type storage, lowering this parameter is expected to improve response times at the expense of raw streaming throughput.
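
The current value can be inspected on a live system with mdb before deciding whether to tune it. This is a read-only check, a minimal sketch assuming the zfs_vdev_max_pending kernel symbol is present, as it is on the releases listed below:

echo zfs_vdev_max_pending/D | mdb -k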

The Solaris Nevada release now offers the option of storing the ZIL on separate devices from the main pool. Using separate intent log devices can alleviate the need to tune this parameter for synchronous write-intensive loads.
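
As a sketch, a separate intent log device can be added to an existing pool with zpool add; the pool name tank and the device c4t0d0 below are placeholders for your own configuration:

zpool add tank log c4t0d0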

If you tune this parameter, please reference this URL in your shell script or in an /etc/system comment.

http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#MAXPEND

Solaris 10 8/07 and Solaris Nevada (snv_53 to snv_69) Releases

Set dynamically:

echo zfs_vdev_max_pending/W0t10 | mdb -kw

Revert to default:

echo zfs_vdev_max_pending/W0t35 | mdb -kw

To make the change persistent across reboots, set the following parameter in the /etc/system file:

set zfs:zfs_vdev_max_pending = 10
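
Following the request above to reference this guide, a hypothetical /etc/system entry (the comment wording and the value 10 are examples only) might look like:

* http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#MAXPEND
* Reduce ZFS per-vdev queue depth from the default of 35 to favor response times
set zfs:zfs_vdev_max_pending = 10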

For earlier Solaris releases, see:

http://blogs.sun.com/roch/entry/tuning_the_knobs
RFEs

* 6471212 need reserved I/O scheduler slots to improve I/O latency of critical ops

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6471212
Further Reading

http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on
