Thursday, April 30, 2009

zfs-fuse broken after dist-upgrade to jaunty

juliusr@rainforest:~$ cat /etc/issue
Ubuntu 9.04 \n \l
juliusr@rainforest:~$ dpkg -l | grep -i zfs-fuse
ii zfs-fuse 0.5.1-1ubuntu5

I have two 320GB SATA disks connected to a PCI RAID controller:
juliusr@rainforest:~$ lspci | grep -i sata
00:08.0 RAID bus controller: Silicon Image, Inc. SiI 3512
[SATALink/SATARaid] Serial ATA Controller (rev 01)

After a dist-upgrade to jaunty, my mirrored zpool 'zfspool' broke.

juliusr@rainforest:~$ sudo zpool status
pool: zfspool
state: UNAVAIL
status: One or more devices could not be opened. There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
see: http://www.sun.com/msg/ZFS-8000-3C
scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zfspool     UNAVAIL      0     0     0  insufficient replicas
          mirror    UNAVAIL      0     0     0  insufficient replicas
            sdb     FAULTED      0     0     0  corrupted data
            sdc     UNAVAIL      0     0     0  cannot open

I think what has happened is that the drive labels sda and sdc have somehow been swapped around, and zfs-fuse got confused. IIRC I used to boot off sda, but now it looks like I'm booting off sdc.

juliusr@rainforest:~$ sudo lshw | grep -iE '/dev/sd|size'
logical name: /dev/sdc
size: 18GiB (20GB)
logical name: /dev/sdc1
size: 17GiB
logical name: /dev/sdc2
size: 839MiB
logical name: /dev/sdc5
logical name: /dev/sda
size: 298GiB (320GB)
logical name: /dev/sdb
size: 298GiB (320GB)
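
If you want to double-check which physical disk ended up with which sdX name, the persistent udev symlinks are one way to do it. This is just a rough sketch, not output from my box:

# list the stable, serial-number based names and see which sdX each points at
ls -l /dev/disk/by-id/
# see what filesystem (if any) lives on each candidate device
sudo blkid /dev/sda /dev/sdb /dev/sdc1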

This is my fstab, but I suspect the commented device names (# /dev/sdaX) are now wrong:

juliusr@rainforest:~$ cat /etc/fstab
# /etc/fstab: static file system information.
#
proc /proc proc defaults 0 0

# /dev/sda1
UUID=d4a5ebb6-52ec-4b6f-bc8e-5052dca81ec6 / ext3 relatime,errors=remount-ro 0 1
# /dev/sda5
UUID=5e55071d-0ebf-4741-ba1a-4a9d70b70c78 none swap sw 0 0
/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0
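
Since the root filesystem is mounted by UUID, the mount itself keeps working even if the kernel renames the disks; only the comments go stale. A quick sanity check (a sketch, assuming the UUID from the fstab entry above):

# confirm which partition actually carries the root UUID from fstab
sudo blkid | grep d4a5ebb6-52ec-4b6f-bc8e-5052dca81ec6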

How do I get back to a working zpool? I asked the zfs-discuss@opensolaris.org list and received the following reply from Fajar A. Nugraha (thanks mate!):

1) stop zfs-fuse service:
juliusr@rainforest:~$ sudo /etc/init.d/zfs-fuse stop

2) delete (or move) /etc/zfs/zpool.cache
juliusr@rainforest:~$ sudo rm /etc/zfs/zpool.cache

3) start zfs-fuse
juliusr@rainforest:~$ sudo /etc/init.d/zfs-fuse start

4) zpool import
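
Put together, the whole recovery is just four commands. This is a sketch of the sequence above (moving the cache aside instead of deleting it, so it can be restored if anything goes wrong):

sudo /etc/init.d/zfs-fuse stop                          # 1) stop the zfs-fuse daemon
sudo mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak   # 2) move the stale cache aside
sudo /etc/init.d/zfs-fuse start                         # 3) start zfs-fuse again
sudo zpool import zfspool                               # 4) re-import the pool by name, if needed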

As it happened, I didn't even need to do the zpool import; it all came up fine:

juliusr@rainforest:~$ sudo zpool status
pool: zfspool
state: ONLINE
scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zfspool     ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
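
To keep a future device reshuffle from breaking the pool again, it should be possible to re-import it using the persistent by-id names instead of sdX. I haven't done this on my box yet; it's just the standard export/import approach:

sudo zpool export zfspool
sudo zpool import -d /dev/disk/by-id zfspool   # record the disks by stable ID in the pool config
sudo zpool status                              # devices should now show up under their by-id names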
