dirk at proactive
Apr 4, 2012, 3:15 AM
First of all: I posted a similar thread on the OCFS2 mailing list, but
didn't receive much response. This list seems to be busier, so maybe
I'll have more luck over here...
I'm having trouble backing up an OCFS2 file system. I'm using rsync, and
I find it way, way slower than rsyncing a 'traditional' file system.
The OCFS2 filesystem lives on a dual-primary DRBD setup. DRBD runs on
hardware RAID6 with dedicated bonded gigabit NICs, and I get a 160 Mb/s
syncer speed. Read and write speeds on the file system are OK.
My OCFS2 filesystem is 3.7 TB in size, of which 200 GB is used; it holds
about 1.5 million files in 95 directories. About 3,000 new files are
added each day, and few files are changed.
Rsyncing this filesystem (directly to the rsync daemon, no ssh shell
overhead) over a Gb connection takes 70 minutes:
Number of files: 1495981
Number of files transferred: 2944
Total file size: 201701039047 bytes
Total transferred file size: 613318155 bytes
Literal data: 613292255 bytes
Matched data: 25900 bytes
File list size: 24705311
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 118692
Total bytes received: 638195567

sent 118692 bytes  received 638195567 bytes  154163.57 bytes/sec
total size is 201701039047  speedup is 315.99
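Since only ~600 MB of data actually moves, I assume most of the 70 minutes goes into walking the tree and stat()ing the 1.5 million inodes. One way to check that assumption is to time a bare metadata walk on the mount, with no data transfer at all (the mount point below is a placeholder):

```shell
# Time a pure metadata traversal: find stat()s every entry to
# print its size and mtime, but transfers no file data.
# /mnt/ocfs2 is a placeholder for the actual OCFS2 mount point.
time find /mnt/ocfs2 -printf '%s %T@\n' > /dev/null
```

If this alone takes tens of minutes, the bottleneck is per-inode cluster locking on OCFS2 rather than rsync itself.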
For comparison, I have a similar system (the old, non-HA box doing
exactly the same job) with an ext3 filesystem. That one holds 6.5
million files and 500 GB, with about 10,000 new files a day. A backup
with rsync over ssh on a 100 Mbit line takes 400 seconds.
I'd like to know whether anybody has encountered similar problems and
maybe has some tips or insights for me?
drbd-user mailing list
drbd-user [at] lists