Device I/O Queue Size (I/O Concurrency)
ZFS controls the I/O queue depth for each LUN. The default is 35, which allows common SCSI and SATA disks to reach their maximum throughput under ZFS. However, with 35 I/Os outstanding per device, individual service times can be inflated. For NVRAM-based storage, the 35-deep queue is not expected to be reached or to play a significant role, so tuning this parameter there is expected to be ineffective. For JBOD-type storage, tuning this parameter is expected to improve response times at the expense of raw streaming throughput.
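The latency effect can be sketched with a back-of-envelope calculation; all numbers below are assumptions for illustration, not figures from this guide:

```shell
#!/bin/sh
# Back-of-envelope sketch (assumed numbers): with a FIFO-like device queue,
# the last of N queued I/Os waits roughly N times the average per-I/O
# service time of the disk.
service_ms=5    # assumed average service time for a rotational disk
for depth in 35 10; do
    echo "queue depth $depth: worst-case wait ~$((depth * service_ms)) ms"
done
# prints:
# queue depth 35: worst-case wait ~175 ms
# queue depth 10: worst-case wait ~50 ms
```

This is why a smaller queue depth can help response times on JBOD storage while costing some streaming throughput: fewer I/Os queue up behind each request, but the device has less work to reorder and coalesce.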
The Solaris Nevada release now has the option of storing the ZIL on devices separate from the main pool. Using separate intent log devices can alleviate the need to tune this parameter for synchronously write-intensive loads.
If you tune this parameter, please reference this URL in a shell script comment or in an /etc/system comment.
Solaris 10 8/07 and Solaris Nevada (snv_53 to snv_69) Releases
To change the value dynamically on a running system:
echo zfs_vdev_max_pending/W0t10 | mdb -kw
Revert to default:
echo zfs_vdev_max_pending/W0t35 | mdb -kw
To make the change persist across reboots, set the following parameter in the /etc/system file:
set zfs:zfs_vdev_max_pending = 10
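Following the earlier note about documenting the change, the /etc/system entry can carry a comment pointing back to this page. This is only a suggested form; the URL placeholder is deliberate, since the page's address is not reproduced here:

```
* zfs_vdev_max_pending tuning -- see <URL of this guide page>
* Limits per-device I/O queue depth to improve latency on JBOD storage.
set zfs:zfs_vdev_max_pending = 10
```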
For earlier Solaris releases, see the following bug report:
* 6471212 need reserved I/O scheduler slots to improve I/O latency of critical ops