ZFS Limiting the ARC Cache

The ARC is where ZFS caches data from all active storage pools. The ARC grows and consumes memory on the principle that no need exists to return data to the system while there is still plenty of free memory. When the ARC has grown and outside memory pressure exists, for example, when a new application starts up, then the ARC releases its hold on memory. ZFS is not designed to steal memory from applications. A few bumps appeared along the way, but the established mechanism works reasonably well for many situations and does not commonly warrant tuning.

However, a few situations stand out.

* If a future memory requirement is significantly large and well defined, then it can be advantageous to prevent ZFS from growing the ARC into it. So, if we know that a future application requires 20% of memory, it makes sense to cap the ARC such that it does not consume more than the remaining 80% of memory.

* If the application is a known consumer of large memory pages, then again limiting the ARC prevents ZFS from breaking up the pages and fragmenting the memory. Limiting the ARC preserves the availability of large pages.

* If dynamic reconfiguration of a memory board is needed (supported on certain platforms), then it is a requirement to prevent the ARC (and thus the kernel cage) from growing onto all boards.
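As a sketch of the arithmetic in the first bullet, the cap that leaves 20% of memory free can be computed directly in shell. The 64-Gbyte total below is an assumed figure for illustration, not from the original example:

```shell
# Derive an ARC cap that leaves 20% of memory free for a known future
# application. The 64-Gbyte total is an assumed illustration figure.
total_bytes=$((64 * 1024 * 1024 * 1024))
cap_bytes=$((total_bytes * 80 / 100))
# Emit the value in the hex form that /etc/system expects
printf 'set zfs:zfs_arc_max = 0x%x\n' "$cap_bytes"
```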

For these cases, it can be desirable to limit the ARC. This will, of course, also limit the amount of cached data, which can have adverse effects on performance. No easy way exists to foretell if limiting the ARC degrades performance.

If you tune this parameter, please reference this URL in a shell script or in an /etc/system comment.

Solaris 10 8/07 and Solaris Nevada (snv_51) Releases

For example, if an application needs 5 Gbytes of memory on a system with 36 Gbytes of memory, you could set the ARC maximum to 30 Gbytes (0x780000000).

Set the following parameter in the /etc/system file:

set zfs:zfs_arc_max = 0x780000000
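The hex value follows from simple arithmetic; here is a quick shell check (nothing in it is Solaris-specific):

```shell
# 30 Gbytes expressed in bytes, then in the hex form used in /etc/system
arc_max=$((30 * 1024 * 1024 * 1024))
printf 'set zfs:zfs_arc_max = 0x%x\n' "$arc_max"   # prints 0x780000000
```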

Earlier Solaris Releases

You can only change the ARC maximum size by using the mdb command. Because the system is already booted, the ARC init routine has already executed and other ARC size parameters have already been set based on the default c_max size. Therefore, you should tune the arc.c and arc.p values, along with arc.c_max, using the formula:

arc.c = arc.c_max

arc.p = arc.c / 2
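Worked numerically in shell arithmetic, using a 512-Mbyte c_max as the target (the same figure used in the mdb example that follows):

```shell
# Apply the formula for a target c_max of 512 Mbytes
c_max=$((512 * 1024 * 1024))
c=$c_max            # arc.c = arc.c_max
p=$((c / 2))        # arc.p = arc.c / 2
printf 'c_max=0x%x c=0x%x p=0x%x\n' "$c_max" "$c" "$p"
```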

For example, to set the ARC parameters to small values, such as arc.c_max to 512 Mbytes, and complying with the formula above (arc.c to 512 Mbytes, and arc.p to 256 Mbytes), use the following syntax:

# mdb -kw
> arc::print -a p c c_max
ffffffffc00b3260 p = 0xb75e46ff
ffffffffc00b3268 c = 0x11f51f570
ffffffffc00b3278 c_max = 0x3bb708000

> ffffffffc00b3260/Z 0x10000000
ffffffffc00b3260: 0xb75e46ff = 0x10000000
> ffffffffc00b3268/Z 0x20000000
ffffffffc00b3268: 0x11f51f570 = 0x20000000
> ffffffffc00b3278/Z 0x20000000
ffffffffc00b3278: 0x3bb708000 = 0x20000000

You should verify the values have been set correctly by examining them again in mdb (using the same print command as in the example). You can also monitor the actual size of the ARC to ensure it has not exceeded the limit:

# echo "arc::print -d size" | mdb -k

The above command displays the current ARC size in decimal.
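A simple watchdog built on that command might look like the following sketch. The mdb sampling step is shown only as a comment (the exact output parsing is an assumption), and a stubbed sample value is used so the comparison logic stands on its own:

```shell
# Sketch: flag an ARC that has grown past the configured cap.
# On a live system you would sample the size with something like:
#   arc_size=$(echo "arc::print -d size" | mdb -k | awk '{print $NF}')
# (hypothetical parsing; adjust to the actual mdb output format)
arc_cap=$((30 * 1024 * 1024 * 1024))
arc_size=536870912          # stubbed sample value for illustration
if [ "$arc_size" -gt "$arc_cap" ]; then
    status="ARC size $arc_size exceeds cap $arc_cap"
else
    status="ARC size within cap"
fi
echo "$status"
```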

Here is a Perl script, callable from an init script, that configures the ARC at boot according to the above guidelines:


use strict;
use IPC::Open2;

my $arc_max = shift @ARGV;
if ( !defined($arc_max) ) {
    print STDERR "usage: arc_tune <arc max in bytes>\n";
    exit(-1);
}
$| = 1;
my %syms;
my $mdb = "/usr/bin/mdb";
open2(*READ, *WRITE, "$mdb -kw") || die "cannot execute mdb";
print WRITE "arc::print -a\n";
while (<READ>) {
    my $line = $_;
    if ( $line =~ /^ +([a-f0-9]+) (.*) =/ ) {
        # record the address of each ARC structure member
        $syms{$2} = $1;
    } elsif ( $line =~ /^\}/ ) {
        # end of the structure dump: set c & c_max to our max; set p to max/2
        printf WRITE "%s/Z 0x%x\n", $syms{"p"}, ( $arc_max / 2 );
        print scalar <READ>;
        printf WRITE "%s/Z 0x%x\n", $syms{"c"}, $arc_max;
        print scalar <READ>;
        printf WRITE "%s/Z 0x%x\n", $syms{"c_max"}, $arc_max;
        print scalar <READ>;
        last;
    }
}
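A hypothetical invocation from an init script might look like this. Note that the script expects the maximum in decimal bytes, and the install path shown is an assumption:

```shell
# Cap the ARC at 512 Mbytes at boot (decimal bytes, as the script expects)
ARC_MAX=$((512 * 1024 * 1024))
echo "$ARC_MAX"
# /usr/local/bin/arc_tune "$ARC_MAX"   # hypothetical install path
```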


A few of those bumps along the way:

* ZFS should avoid growing the ARC into trouble

* The ARC allocates memory inside the kernel cage, preventing DR

* ZFS/ARC should clean up more after itself

* Each zpool needs to monitor its throughput and throttle heavy writers

