Playing with BTRFS

btrfs will hopefully become the next standard filesystem for Linux (A short history of btrfs). Feature-wise it mimics ZFS, although technically the two don't have much in common.

I set up an Arch Linux system with kernel 2.6.35 to play around with btrfs and created six 8 GB disks. First, I put four of them into a RAID10 (the minimum number of devices for that profile):

[root@myhost ~]# mkfs.btrfs -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde

WARNING! - Btrfs Btrfs v0.19 IS EXPERIMENTAL 
WARNING! - see http://btrfs.wiki.kernel.org before using

adding device /dev/sdc id 2 
adding device /dev/sdd id 3 
adding device /dev/sde id 4 
fs created label (null) on /dev/sdb 
        nodesize 4096 leafsize 4096 sectorsize 4096 size 32.00GB 
Btrfs Btrfs v0.19

Then the testing could begin. My tests included the following:

  • Adding devices to the RAID
  • Replacing intact devices
  • Completely wiping drives while fs was mounted
  • Randomly flipping bits on the disk
  • Pulling the plug

Creating test files

This creates nine test files of roughly 1 GB each; the slightly different sizes (1010 MB through 1090 MB) ensure they all have different checksums:

[root@myhost ~]# for i in {1..9}; do dd if=/dev/zero of=/mnt/data/foo$i bs=10M count=10$i; done

Then, create md5sums and store them somewhere else (i.e. not on the btrfs volume), so we can later check if our data is still intact:

[root@myhost ~]# for i in /mnt/data/foo*; do md5sum $i >> /root/md5sums; done

To check if your files are still OK, just repeat the md5sum creation process (writing to a different file) and compare both files with the diff utility.
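The whole round-trip can be sketched as a self-contained demo using small throwaway files (same pattern, just not 1 GB each; on the real system you would run the two md5sum passes over /mnt/data/foo* and keep the baseline off the btrfs volume):

```shell
# Demo of the checksum round-trip on throwaway files in a temp dir.
dir=$(mktemp -d)
for i in 1 2 3; do
    dd if=/dev/zero of="$dir/foo$i" bs=1k count="$i" 2>/dev/null
done
(cd "$dir" && md5sum foo*) > "$dir/md5sums"       # baseline, kept "off-volume"
(cd "$dir" && md5sum foo*) > "$dir/md5sums.new"   # the later re-check
if diff "$dir/md5sums" "$dir/md5sums.new" >/dev/null; then
    result="data intact"
else
    result="data changed"
fi
echo "$result"
rm -rf "$dir"
```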

Findings

  • Adding and removing devices is a breeze.
  • BTRFS sometimes marked a disk I had completely wiped as intact. To work around this, frequently issue the 'btrfsctl -a' command to make it scan for btrfs devices, especially before mounting devices and while removing missing disks. My .bashrc now contains the following line:
alias mount='btrfsctl -a && mount'
  • Always unmount a damaged filesystem, then remount it with "-o degraded".
  • In any case, if you change the disk layout, frequently running 'btrfsctl -a' and remounting seems to be a good idea.
  • Pulling the plug seemed to have no impact whatsoever. No matter what operations were in progress during power loss, the data was always in a consistent state.
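Put together, the recovery dance from the points above looks roughly like this (a sketch; /dev/sdb and /mnt/data are the device and mount point names from my setup, so adjust to yours):

```shell
# Recovering from a missing or damaged member disk.
umount /mnt/data                        # always unmount the damaged fs first
btrfsctl -a                             # rescan block devices for btrfs members
mount -o degraded /dev/sdb /mnt/data    # remount without the missing disk
```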

While I could produce kernel panics and destroy data by being generally stupid and writing lots of garbage directly onto the disks without proper recovery, I found that, used correctly, btrfs is very stable. I randomly zeroed 1 MB chunks on some disks, and btrfs corrected the errors.
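The chunk-zeroing itself is a one-liner with dd. Here is a sketch against a scratch image file rather than a live member disk (pointing of= at e.g. /dev/sdc instead reproduces the destructive version; don't do that on disks you care about):

```shell
# Zero a random 1 MB chunk of an 8 MB scratch image, keeping its size intact.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=8 2>/dev/null    # the scratch "disk"
offset=$((RANDOM % 8))                                 # random 1 MB-aligned offset
dd if=/dev/zero of="$img" bs=1M count=1 seek="$offset" conv=notrunc 2>/dev/null
size=$(wc -c < "$img")                                 # conv=notrunc keeps the size
echo "zeroed 1 MB at ${offset} MB, image still ${size} bytes"
rm -f "$img"
```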

The "-o compress" mount option worked very well (unsurprisingly, given test files full of zeros), but at one point the system wouldn't let me write to the disk anymore, although the btrfs tools showed plenty of available space.

I am seriously considering switching my file server to a btrfs RAID10.

Update: I just learned the hard way that you should not run btrfs-debug-tree on a mounted filesystem. I added the following to .bashrc:

alias btrfs-debug-tree='echo && echo !!! DO NOT RUN btrfs-debug-tree ON A MOUNTED FILE SYSTEM !!! && echo && btrfs-debug-tree'
