ZFS does device-level prefetching in addition to file-level prefetching. When ZFS reads a block from disk, it inflates the I/O size, hoping to pull in other interesting data or metadata nearby. Prior to the Solaris Nevada (snv_70) release, this code caused problems for systems with many disks because the extra prefetched data could congest the channel between the storage and the host. Tuning down the prefetching has been effective for OLTP-type loads in the past. However, as of the snv_70 release, the code prefetches only metadata, and this is not expected to require any tuning.
No tuning is required for snv_70 and after.
Solaris 10 8/07 and Nevada (snv_53 to snv_69) Releases
Set the following parameter in the /etc/system file:
set zfs:zfs_vdev_cache_bshift = 13
Setting zfs_vdev_cache_bshift with mdb crashes a system, so use the /etc/system setting above.

zfs_vdev_cache_bshift is the base 2 logarithm of the size used to read disks. The default value of 16 means reads are issued in sizes of 1 << 16 = 64K. A value of 13 means disk reads are padded to 8K.

For earlier releases, see: http://blogs.sun.com/roch/entry/tuning_the_knobs

RFEs
* vdev_cache wises up: increase DB performance by 16% (integrated in snv_70)
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6437054

Further Reading
http://blogs.sun.com/erickustarz/entry/vdev_cache_improvements_to_help
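The shift arithmetic behind zfs_vdev_cache_bshift can be sketched as follows. This is only an illustration of how a small read gets padded to an aligned 1 << bshift window, not the actual kernel code; the function name and the exact alignment behavior are assumptions for the example.

```python
def vdev_cache_read(offset, size, bshift):
    """Hypothetical sketch: pad a device read to an aligned 1 << bshift window."""
    span = 1 << bshift                                # inflated read size in bytes
    start = offset & ~(span - 1)                      # round offset down to a span boundary
    end = (offset + size + span - 1) & ~(span - 1)    # round the end up to a span boundary
    return start, end - start

# Default bshift of 16: a 512-byte read at offset 70000 is inflated to 64K.
print(vdev_cache_read(70000, 512, 16))  # -> (65536, 65536)
# Tuned bshift of 13: the same read is padded to only 8K.
print(vdev_cache_read(70000, 512, 13))  # -> (65536, 8192)
```

With many disks issuing such inflated reads concurrently, the difference between 64K and 8K per read is what drove the channel congestion described above.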