Adding swap with zfs

To check the size of the existing swap volume in the rpool:

# zfs get volsize rpool/swap
NAME        PROPERTY  VALUE  SOURCE
rpool/swap  volsize   1.50G  local

To increase it, boot into single-user mode and run:

# zfs set volsize=40G rpool/swap

If you can't bring the system down to single-user mode,
then create a second swap volume instead.

For SPARC:

# zfs create -V 2G -b 8k rpool/swap2

For x86:

# zfs create -V 2G -b 4k rpool/swap2

Then just add it:

# swap -a /dev/zvol/dsk/rpool/swap2

You can also add swap from a pool other than the rpool:

# zfs create -V 40G -b 4k db1/swap
# swap -a /dev/zvol/dsk/db1/swap
# swap -l
swapfile                  dev    swaplo    blocks      free
/dev/zvol/dsk/rpool/swap  181,1       8   4194296   4194296
/dev/zvol/dsk/db1/swap    181,3       8  83886072  83886072
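
The -b block-size choice above tracks the kernel page size (8K on SPARC, 4K on x86). Here is a small sketch that picks the right value from the machine architecture; the pool and volume names are just the examples used above, and the commands are printed rather than executed so the sketch is harmless to run anywhere:

```shell
#!/bin/sh
# Pick the swap zvol block size to match the kernel page size:
# 8K pages on SPARC, 4K pages on x86.
swap_blocksize() {
    case "$1" in
        sparc*) echo 8k ;;
        *)      echo 4k ;;
    esac
}

bs=$(swap_blocksize "$(uname -p)")

# Print the commands you would run on a real Solaris box.
echo "zfs create -V 2G -b $bs rpool/swap2"
echo "swap -a /dev/zvol/dsk/rpool/swap2"
```

The two echoed lines are the ones you would actually run; drop the echo on a live system.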

Resize (grow) a mirror by swapping to larger disks with ZFS

I have a non-root mirror made up of two 72G drives.
Can I replace them with two 146G drives without a backup
and without loss of data?

I knew you could grow a mirror under ZFS the long way: break the mirror,
insert a larger disk in place of the broken-out smaller disk,
create a new pool on the larger disk,
clone the old pool to the new pool,
destroy the old pool,
insert a larger disk in place of the old (now single-drive) pool,
attach the second larger drive to the new pool,
and finally mount the new pool where the old pool lived.
But this is wild.


Luckily, with ZFS you can treat files like drives for test purposes:

=========================
Let’s make 4 test devices
=========================

bash-3.00# cd /export/home/
bash-3.00# ls
lost+found
bash-3.00# mkfile 72m 0 1
bash-3.00# mkfile 156m 2 3

=============================================================
0 and 1 become our 72M drives, 2 and 3 become our 156M drives
=============================================================

bash-3.00# ls -la
total 934452
drwxr-xr-x 3 root root 512 Mar 13 00:31 .
drwxr-xr-x 3 root sys 512 Mar 12 21:55 ..
drwxr-xr-x   3 root  root        512 Mar 13 00:31 .
drwxr-xr-x   3 root  sys         512 Mar 12 21:55 ..
-rw------T   1 root  root   75497472 Mar 13 00:31 0
-rw------T   1 root  root   75497472 Mar 13 00:31 1
-rw------T   1 root  root  163577856 Mar 13 00:31 2
-rw------T   1 root  root  163577856 Mar 13 00:31 3
drwx------   2 root  root       8192 Mar 12 21:55 lost+found

===================================
let’s create a mirror with the 72’s
===================================

bash-3.00# zpool create test mirror /export/home/0 /export/home/1
bash-3.00# zpool status
pool: test
state: ONLINE
scrub: none requested
config:

        NAME                STATE     READ WRITE CKSUM
        test                ONLINE       0     0     0
          mirror            ONLINE       0     0     0
            /export/home/0  ONLINE       0     0     0
            /export/home/1  ONLINE       0     0     0

errors: No known data errors
bash-3.00# zpool list
NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
test  67.5M  92.5K  67.4M   0%  ONLINE  -

=================================
let's put some data on the mirror
=================================

bash-3.00# cd /test
bash-3.00# ls
bash-3.00# pwd
/test
bash-3.00# mkfile 1m q w e r t y
bash-3.00# ls -la
total 11043
drwxr-xr-x 2 root root 8 Mar 13 00:33 .
drwxr-xr-x 27 root root 512 Mar 13 00:32 ..
-rw------T   1 root  root  1048576 Mar 13 00:33 e
-rw------T   1 root  root  1048576 Mar 13 00:33 q
-rw------T   1 root  root  1048576 Mar 13 00:33 r
-rw------T   1 root  root  1048576 Mar 13 00:33 t
-rw------T   1 root  root  1048576 Mar 13 00:33 w
-rw------T   1 root  root  1048576 Mar 13 00:33 y
bash-3.00# cd /

=======================================
let’s remove the backside of the mirror
=======================================

bash-3.00# zpool detach test /export/home/1
bash-3.00# zpool status
pool: test
state: ONLINE
scrub: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        test              ONLINE       0     0     0
          /export/home/0  ONLINE       0     0     0

errors: No known data errors
bash-3.00# zpool list
NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
test  67.5M  6.16M  61.3M   9%  ONLINE  -

=========================================
Now let's replace it with a larger device
=========================================

bash-3.00# zpool attach test /export/home/0 /export/home/2
bash-3.00# zpool list
NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
test  67.5M  6.15M  61.4M   9%  ONLINE  -
bash-3.00# zpool status
pool: test
state: ONLINE
scrub: resilver completed after 0h0m with 0 errors on Fri Mar 13 00:36:05 2009
config:

        NAME                STATE     READ WRITE CKSUM
        test                ONLINE       0     0     0
          mirror            ONLINE       0     0     0
            /export/home/0  ONLINE       0     0     0
            /export/home/2  ONLINE       0     0     0

errors: No known data errors

============================================
Now let's detach the frontside of the mirror
============================================

bash-3.00# zpool detach test /export/home/0
bash-3.00# zpool status
pool: test
state: ONLINE
scrub: resilver completed after 0h0m with 0 errors on Fri Mar 13 00:36:05 2009
config:

        NAME              STATE     READ WRITE CKSUM
        test              ONLINE       0     0     0
          /export/home/2  ONLINE       0     0     0

errors: No known data errors

=========================================
Now let’s replace it with a larger device
=========================================

bash-3.00# zpool attach test /export/home/2 /export/home/3

============
Is it there?
============

bash-3.00# zpool status
pool: test
state: ONLINE
scrub: resilver completed after 0h0m with 0 errors on Fri Mar 13 00:37:38 2009
config:

        NAME                STATE     READ WRITE CKSUM
        test                ONLINE       0     0     0
          mirror            ONLINE       0     0     0
            /export/home/2  ONLINE       0     0     0
            /export/home/3  ONLINE       0     0     0

errors: No known data errors

=======================================================
Note: the resilver will take much longer with more data
=======================================================

=============
Is it bigger?
=============

bash-3.00# zpool list
NAME  SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
test  152M  6.22M   145M   4%  ONLINE  -
bash-3.00#

=========
Yes it is
=========

========================
Is our data still there?
========================

bash-3.00# ls /test/
e q r t w y
bash-3.00# ls -la /test/
total 12323
drwxr-xr-x 2 root root 8 Mar 13 00:33 .
drwxr-xr-x 27 root root 512 Mar 13 00:32 ..
-rw------T   1 root  root  1048576 Mar 13 00:33 e
-rw------T   1 root  root  1048576 Mar 13 00:33 q
-rw------T   1 root  root  1048576 Mar 13 00:33 r
-rw------T   1 root  root  1048576 Mar 13 00:33 t
-rw------T   1 root  root  1048576 Mar 13 00:33 w
-rw------T   1 root  root  1048576 Mar 13 00:33 y

=========
Yes it is
=========
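
The whole walkthrough boils down to four zpool operations. Here is a sketch that prints the sequence for a given pool and device set (device names are the test files from above; any real pool would use its own). With ECHO cleared it would run the real commands, in which case you must wait for each resilver to finish (watch zpool status) before the following detach:

```shell
#!/bin/sh
# Print (or, with ECHO cleared, run) the detach/attach dance that grows
# a two-way mirror onto two larger disks.
ECHO=echo

grow_mirror() {
    pool=$1 old1=$2 old2=$3 new1=$4 new2=$5
    $ECHO zpool detach "$pool" "$old2"          # drop one small disk
    $ECHO zpool attach "$pool" "$old1" "$new1"  # resilver onto big disk 1
    $ECHO zpool detach "$pool" "$old1"          # drop the other small disk
    $ECHO zpool attach "$pool" "$new1" "$new2"  # resilver onto big disk 2
}

grow_mirror test /export/home/0 /export/home/1 /export/home/2 /export/home/3
```

Each attach starts a resilver immediately; the pool only reports the larger size once both small disks are gone.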

zero to 4 terabytes in 60 seconds with ZFS

Note: a 5300 storage array with 14 400G drives, using raidz2 (RAID 6, aka double parity).

root@foobar # iostat -En | grep c7 | awk '{print $1}'
c7t220D000A330680AFd0
c7t220D000A330675DBd0
c7t220D000A33068F42d0
c7t220D000A33068D45d0
c7t220D000A33068D49d0
c7t220D000A330688F6d0
c7t220D000A33068D71d0
c7t220D000A3306956Ad0
c7t220D000A33066E31d0
c7t220D000A33068E53d0
c7t220D000A33068D75d0
c7t220D000A330680A4d0
c7t220D000A33069574d0
c7t220D000A33066C81d0

zpool create -f mypool raidz2 c7t220D000A330680AFd0 c7t220D000A330675DBd0 \
c7t220D000A33068F42d0 c7t220D000A33068D45d0 c7t220D000A33068D49d0 \
c7t220D000A330688F6d0 c7t220D000A33068D71d0 c7t220D000A3306956Ad0 \
c7t220D000A33066E31d0 c7t220D000A33068E53d0 c7t220D000A33068D75d0 \
c7t220D000A330680A4d0 spare c7t220D000A33069574d0 c7t220D000A33066C81d0

root@foobar # zpool list
NAME     SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
mypool  4.34T  192K  4.34T   0%  ONLINE  -

root@foobar # zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
mypool  152K  3.54T  60.9K  /mypool

root@foobar # zpool status
pool: mypool
state: ONLINE
scrub: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        mypool                     ONLINE       0     0     0
          raidz2                   ONLINE       0     0     0
            c7t220D000A330680AFd0  ONLINE       0     0     0
            c7t220D000A330675DBd0  ONLINE       0     0     0
            c7t220D000A33068F42d0  ONLINE       0     0     0
            c7t220D000A33068D45d0  ONLINE       0     0     0
            c7t220D000A33068D49d0  ONLINE       0     0     0
            c7t220D000A330688F6d0  ONLINE       0     0     0
            c7t220D000A33068D71d0  ONLINE       0     0     0
            c7t220D000A3306956Ad0  ONLINE       0     0     0
            c7t220D000A33066E31d0  ONLINE       0     0     0
            c7t220D000A33068E53d0  ONLINE       0     0     0
            c7t220D000A33068D75d0  ONLINE       0     0     0
            c7t220D000A330680A4d0  ONLINE       0     0     0
        spares
          c7t220D000A33069574d0    AVAIL
          c7t220D000A33066C81d0    AVAIL

errors: No known data errors

root@foobar # df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c3t0d0s0 15G 7.7G 7.0G 53% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 17G 716K 17G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
/usr/lib/libc/libc_hwcap2.so.1
15G 7.7G 7.0G 53% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
swap 17G 36K 17G 1% /tmp
swap 17G 24K 17G 1% /var/run
mypool 3.5T 61K 3.5T 1% /mypool

root@foobar # zfs set sharenfs=on mypool

root@foobar # cat /etc/dfs/sharetab
/mypool - n
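
Typing 14 WWN-style device names by hand is error-prone, so it helps to build the zpool create line from the iostat output. A sketch that takes all but the last two devices as raidz2 data disks and the last two as hot spares, matching the layout above (the short device names here are placeholders):

```shell
#!/bin/sh
# Build a "zpool create ... raidz2 ... spare ..." command line from a
# device list, treating the last two devices as hot spares.
build_raidz2_cmd() {
    pool=$1; shift
    total=$#
    data="" spares="" i=0
    for dev in "$@"; do
        i=$((i + 1))
        if [ "$i" -le $((total - 2)) ]; then
            data="$data $dev"
        else
            spares="$spares $dev"
        fi
    done
    echo "zpool create -f $pool raidz2$data spare$spares"
}

# On the real box the device list would come from:
#   iostat -En | grep c7 | awk '{print $1}'
build_raidz2_cmd mypool d0 d1 d2 d3 d4 d5
```

Here the sketch prints "zpool create -f mypool raidz2 d0 d1 d2 d3 spare d4 d5"; review the line before actually running it.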

Recursive rolling snapshots

With build 63 (Solaris 10 10/08, aka u6), ZFS now supports recursive snapshot operations.

My storage pool looks like this:

mypool
mypool/home
mypool/home/john
mypool/home/joe

Here is a script to run nightly from cron (note we only snapshot from the top-level dataset; the -r flags make each operation recursive):
#!/bin/sh
#: jcore
zfs destroy -r mypool@7daysago > /dev/null 2>&1
zfs rename -r mypool@6daysago 7daysago > /dev/null 2>&1
zfs rename -r mypool@5daysago 6daysago > /dev/null 2>&1
zfs rename -r mypool@4daysago 5daysago > /dev/null 2>&1
zfs rename -r mypool@3daysago 4daysago > /dev/null 2>&1
zfs rename -r mypool@2daysago 3daysago > /dev/null 2>&1
zfs rename -r mypool@yesterday 2daysago > /dev/null 2>&1
zfs snapshot -r mypool@yesterday
exit
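
The same rotation can be written as a loop so the retention depth is a single variable. This is a sketch rather than the script from the post; RUN=echo makes it print the zfs commands instead of executing them:

```shell
#!/bin/sh
# Loop-based rolling-snapshot rotation. Depth is one variable;
# RUN=echo turns the whole thing into a dry run.
rotate_snapshots() {
    pool=$1 days=$2
    $RUN zfs destroy -r "$pool@${days}daysago" 2>/dev/null
    i=$days
    while [ "$i" -gt 2 ]; do
        prev=$((i - 1))
        $RUN zfs rename -r "$pool@${prev}daysago" "${i}daysago" 2>/dev/null
        i=$prev
    done
    $RUN zfs rename -r "$pool@yesterday" 2daysago 2>/dev/null
    $RUN zfs snapshot -r "$pool@yesterday"
}

RUN=echo                 # dry run; clear RUN to execute for real
rotate_snapshots mypool 7
```

With RUN=echo this prints the same eight commands as the hand-written script above, oldest first.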

After running this 7 times, this is what I get:

# uname -a
SunOS serv10g-dxc1 5.10 Generic_137138-09 i86pc i386 i86pc

# zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
mypool                      9.00G  3.53T  9.00G  /mypool
mypool@7daysago                 0      -  9.00G  -
mypool@6daysago                 0      -  9.00G  -
mypool@5daysago                 0      -  9.00G  -
mypool@4daysago                 0      -  9.00G  -
mypool@3daysago                 0      -  9.00G  -
mypool@2daysago                 0      -  9.00G  -
mypool@yesterday                0      -  9.00G  -
mypool/home                  139K  3.53T  49.7K  /mypool/home
mypool/home@7daysago            0      -  49.7K  -
mypool/home@6daysago            0      -  49.7K  -
mypool/home@5daysago            0      -  49.7K  -
mypool/home@4daysago            0      -  49.7K  -
mypool/home@3daysago            0      -  49.7K  -
mypool/home@2daysago            0      -  49.7K  -
mypool/home@yesterday           0      -  49.7K  -
mypool/home/joe             44.7K  3.53T  44.7K  /mypool/home/joe
mypool/home/joe@7daysago        0      -  44.7K  -
mypool/home/joe@6daysago        0      -  44.7K  -
mypool/home/joe@5daysago        0      -  44.7K  -
mypool/home/joe@4daysago        0      -  44.7K  -
mypool/home/joe@3daysago        0      -  44.7K  -
mypool/home/joe@2daysago        0      -  44.7K  -
mypool/home/joe@yesterday       0      -  44.7K  -
mypool/home/john            44.7K  3.53T  44.7K  /mypool/home/john
mypool/home/john@7daysago       0      -  44.7K  -
mypool/home/john@6daysago       0      -  44.7K  -
mypool/home/john@5daysago       0      -  44.7K  -
mypool/home/john@4daysago       0      -  44.7K  -
mypool/home/john@3daysago       0      -  44.7K  -
mypool/home/john@2daysago       0      -  44.7K  -
mypool/home/john@yesterday      0      -  44.7K  -
#

Creating a three way mirror in ZFS

You actually can create a three-way mirror in ZFS, if that is what you REALLY want to do:

zpool create -f poolname mirror ctag ctag ctag

All three disks belong to a single mirror vdev, so each one carries a full copy of the data. (This is not the same as "mirror ctag ctag mirror ctag ctag mirror ctag ctag", which stripes data across three separate two-way mirrors.)

Note: not to be confused with the three disk mirror which you CAN do on a T3 storage array. See Dracko article #542, T3 - Three Disk Mirror (WTF?).

ZFS Cache Flushes

ZFS is designed to work with storage devices that manage a disk-level cache. ZFS commonly asks the storage device to ensure that data is safely placed on stable storage by requesting a cache flush. For JBOD storage, this works as designed and without problems. For many NVRAM-based storage arrays, a problem might come up if the array takes the cache flush request and actually does something rather than ignoring it. Some storage will flush their caches despite the fact that the NVRAM protection makes those caches as good as stable storage.

ZFS issues infrequent flushes (every 5 seconds or so) after the uberblock updates. The problem here is fairly inconsequential. No tuning is warranted here.

ZFS also issues a flush every time an application requests a synchronous write (O_DSYNC, fsync, NFS commit, and so on). The completion of this type of flush is waited upon by the application and impacts performance. Greatly so, in fact. From a performance standpoint, this neutralizes the benefits of having an NVRAM-based storage.

The upcoming fix is that the flush request semantic will be qualified to instruct storage devices to ignore the requests if they have the proper protection. This change requires a fix to our disk drivers and for the storage to support the updated semantics.

Since ZFS is not aware of the nature of the storage, or whether NVRAM is present, the best way to fix this issue is to tell the storage to ignore the requests. For more information, see:

http://blogs.digitar.com/jjww/?itemid=44.

Please check with your storage vendor for ways to achieve the same thing.

As a last resort, when all LUNs exposed to ZFS come from NVRAM-protected storage array and procedures ensure that no unprotected LUNs will be added in the future, ZFS can be tuned to not issue the flush requests. If some LUNs exposed to ZFS are not protected by NVRAM, then this tuning can lead to data loss, application level corruption, or even pool corruption.

NOTE: Cache flushing is commonly done as part of the ZIL operations. While disabling cache flushing can, at times, make sense, disabling the ZIL does not.

Solaris 10 11/06 and Solaris Nevada (snv_52) Releases

Set dynamically:

echo zfs_nocacheflush/W0t1 | mdb -kw

Revert to default:

echo zfs_nocacheflush/W0t0 | mdb -kw

Set the following parameter in the /etc/system file:

set zfs:zfs_nocacheflush = 1

Risk: Some storage might revert to working like a JBOD disk when their battery is low, for instance. Disabling the caches can have adverse effects here. Check with your storage vendor.

Earlier Solaris Releases

Set the following parameter in the /etc/system file:

set zfs:zil_noflush = 1

Set dynamically:

echo zil_noflush/W0t1 | mdb -kw

Revert to default:

echo zil_noflush/W0t0 | mdb -kw

Risk: Some storage might revert to working like a JBOD disk when their battery is low, for instance. Disabling the caches can have adverse effects here. Check with your storage vendor.

RFEs

* sd driver should set SYNC_NV bit when issuing SYNCHRONIZE CACHE to SBC-2 devices

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6462690

* zil shouldn’t send write-cache-flush command …

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6460889

ZFS Disabling the ZIL

Disabling the ZIL (Don’t)

ZIL stands for ZFS Intent Log. It is used during synchronous write operations. The ZIL is an essential part of ZFS and should never be disabled. Significant performance gains can be achieved by not having the ZIL, but that would be at the expense of data integrity. One can be infinitely fast, if correctness is not required.

One reason to disable the ZIL is to check if a given workload is significantly impacted by it. A little while ago, a workload that was a heavy consumer of ZIL operations was shown to not be impacted by disabling the ZIL. It convinced us to look elsewhere for improvements. If the ZIL is shown to be a factor in the performance of a workload, more investigation is necessary to see if the ZIL can be improved.

The Solaris Nevada release now has the option of storing the ZIL on devices separate from the main pool. Using separate, possibly low-latency, devices for the intent log is a great way to improve ZIL-sensitive loads.

Caution: Disabling the ZIL on an NFS server will lead to client side corruption. The ZFS pool integrity itself is not compromised by this tuning.

Current Solaris Releases

If you must, then:

echo zil_disable/W0t1 | mdb -kw

Revert to default:

echo zil_disable/W0t0 | mdb -kw
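
To make the setting persist across reboots, the matching /etc/system line can be used, following the same pattern as the cache-flush tunable above (verify the tunable name on your release before relying on it):

```
set zfs:zil_disable = 1
```

Remove the line and reboot to revert.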

RFEs

* zil synchronicity

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6280630

Further Reading

http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on
http://blogs.sun.com/erickustarz/entry/zil_disable
http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine

ZFS Disabling Metadata Compression

Disabling Metadata Compression

Caution: This tuning needs to be researched, as it's now apparent that the tunable applies only to indirect blocks, leaving a lot of metadata compressed anyway.

With ZFS, compression of data blocks is under the control of the file system administrator and can be turned on or off by using the command "zfs set compression …".

On the other hand, ZFS internal metadata is always compressed on disk, by default. For metadata intensive loads, this default is expected to gain some amount of space (a few percent) at the expense of a little extra CPU computation. However, a bigger motivation exists for keeping metadata compression on. For directories that grow to millions of objects and then shrink to just a few, metadata compression saves large amounts of space (>>10X).

In general, metadata compression can be left as is. If your workload is CPU intensive (say > 80% load), kernel profiling shows metadata compression is a significant contributor, and you do not expect to create and shrink huge directories, then disabling metadata compression can be attempted with the goal of providing more CPU to handle the workload.

Solaris 10 11/06 and Solaris Nevada (snv_52) Releases

Set dynamically:

echo zfs_mdcomp_disable/W0t1 | mdb -kw

Revert to default:

echo zfs_mdcomp_disable/W0t0 | mdb -kw

Set the following parameter in the /etc/system file:

set zfs:zfs_mdcomp_disable = 1

Earlier Solaris Releases

Not tunable.

RFEs

* 6391873 metadata compression should be turned back on (Integrated in NEVADA snv_36)

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6391873

ZFS and heavily cached disk arrays like StorageTek/Engenio

ZFS really has some interesting quirks. One of them is that it is truly designed to deal with dumb-as-a-rock storage. If you have a box of SATA disks with firmware flakier than Paris Hilton on a coke binge, then ZFS has truly been designed for you.

As a result, ZFS doesn't trust that anything it writes to the ZFS Intent Log (ZIL) made it to your storage, until it flushes the storage cache. After every write to the ZIL, ZFS issues a cache-flush command (SCSI SYNCHRONIZE CACHE) to instruct the storage to flush its write cache to the disk. In fact, ZFS won't return on a write operation until the ZIL write and flush have completed. If the devices making up your zpool are individual hard drives, particularly SATA ones, this is a great behavior. If the power goes kaput during a write, you don't have the problem that the write made it to drive cache but never to the disk.

The major problem with this strategy only occurs when you try to layer ZFS over an intelligent storage array with a decent battery-backed cache.

Most of these arrays have sizable 2GB-or-greater caches with 72-hour batteries. The cache gives a huge performance boost, particularly on writes. Since cache is so much faster than disk, the array can tell the writer really quickly, "I've got it from here, you can go back to what you were doing". Essentially, as fast as the data goes into the cache, the array can release the writer. Unlike the drive-based caches, the array cache has a 72-hour battery attached to it. So, if the array loses power and dies, you don't lose the writes in the cache. When the array boots back up, it flushes the writes in the cache to the disk.

However, ZFS doesn't know that it's talking to an array, so it assumes that the cache isn't trustworthy, and still issues a cache flush after every ZIL write. So every time a ZIL write occurs, the write goes into the array write cache, and then the array is immediately instructed to flush the cache contents to the disk. This means ZFS doesn't get the benefit of a quick return from the array; instead it has to wait the amount of time it takes to flush the write cache to the slow disks. If the array is under heavy load and the disks are thrashing away, your write return time (latency) can be awful with ZFS. Even when the array is idle, your latency with flushing is typically higher than the latency under heavy load with no flushing. With our array honoring ZFS ZIL flushes, we saw idle latencies of 54ms, and heavy-load latencies of 224ms.

You have two options to rid yourself of the bane of existence known as write cache flushing:

* Disable the ZIL. The ZIL is the way ZFS maintains consistency until it can get the blocks written to their final place on the disk. That’s why the ZIL flushes the cache. If you don’t have the ZIL and a power outage occurs, your blocks may go poof in your server’s RAM…’cause they never made it to the disk Kemosabe. See Dracko article #570 on how to disable ZIL

* Tell your array to ignore ZFS’ flush commands. This is pretty safe, and massively beneficial.

The former option is really a no-go, because it opens you up to losing data. The second option works really well and is darn safe. It ends up being safe because if ZFS is waiting for the write to complete, that means the write made it to the array, and if it's in the array cache you're golden. Whether famine or flood or a loose power cable come, your array will get that write to the disk eventually. So it's OK to have the array lie to ZFS and release ZFS almost immediately after the ZIL flush command executes.

So how do you get your array to ignore SCSI flush commands from ZFS? That differs depending on the array, but I can tell you how to do it on an Engenio array. If you've got any of the following arrays, it's made by Engenio and this may work for you:

* Sun StorageTek FlexLine 200/300 series
* Sun StorEdge 6130
* Sun StorageTek 6140/6540
* IBM DS4x00
* many SGI InfiniteStorage arrays (you’ll need to check to make sure your array is actually OEM’d from Engenio)

On a StorageTek FLX210 with SANtricity 9.15, the following command script will instruct the array to ignore flush commands issued by Solaris hosts:

//Show Solaris ICS option
show controller[a] HostNVSRAMbyte[0x2, 0x21];
show controller[b] HostNVSRAMbyte[0x2, 0x21];

//Enable ICS
set controller[a] HostNVSRAMbyte[0x2, 0x21]=0x01;
set controller[b] HostNVSRAMbyte[0x2, 0x21]=0x01;

// Make changes effective
// Rebooting controllers
show "Rebooting A controller.";
reset controller[a];

show "Rebooting B controller.";
reset controller[b];

If you noticed carefully, I said the script will cause the array to ignore flush commands from Solaris hosts. So all Solaris hosts attached to the array will have their flush commands ignored; you can't turn this behavior on and off on a per-host basis. To run this script, cut and paste it into the script editor of the "Enterprise Management Window" of the SANtricity management GUI. That's it!

A key note here is that you should definitely have your server shut down, or at minimum your ZFS zpool exported, before you run this. Otherwise, when your array reboots, ZFS will kernel panic the server. In our experience, this will happen even if you only reboot one controller at a time, waiting for one controller to come back online before rebooting the other. For whatever reason, MPXIO, which normally works beautifully to keep a LUN available when losing a controller, fails miserably in this situation. It's probably the array's fault, but whatever the issue, that's the reality. Plan for downtime when you do this.

Attaching and Detaching Devices in a ZFS Storage Pool

In addition to the zpool add command, you can use the zpool attach command to add a new device to an existing mirrored or non-mirrored device. For example:

# zpool attach zeepool c1t1d0 c2t1d0

If the existing device is part of a two-way mirror, attaching the new device creates a three-way mirror, and so on. In either case, the new device begins to resilver immediately.

In this example, zeepool is an existing two-way mirror that is transformed into a three-way mirror by attaching c2t1d0, the new device, to the existing device, c1t1d0.

You can use the zpool detach command to detach a device from a pool. For example:

# zpool detach zeepool c2t1d0

However, this operation is refused if there are no other valid replicas of the data. For example:

# zpool detach newpool c1t2d0
cannot detach c1t2d0: only applicable to mirror and replacing vdevs