Mailing List Archive: Gentoo: Dev

preserve_old_lib and I'm even more lazy

 

 



phajdan.jr at gentoo

Feb 24, 2012, 9:56 AM

Post #1 of 27
preserve_old_lib and I'm even more lazy

Currently preserve_old_lib functions generate two commands per preserved
lib:

# revdep-rebuild --library '/usr/lib/libv8.so.3.9.4'
# rm '/usr/lib/libv8.so.3.9.4'

I'd like to modify eutils.eclass to only generate one command:

# revdep-rebuild --library '/usr/lib/libv8.so.3.9.4' && \
rm '/usr/lib/libv8.so.3.9.4'

What do you think?


aballier at gentoo

Feb 24, 2012, 10:43 AM

Post #2 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

On Fri, 24 Feb 2012 18:56:44 +0100
""Paweł Hajdan, Jr."" <phajdan.jr [at] gentoo> wrote:

> Currently preserve_old_lib functions generate two commands per
> preserved lib:
>
> # revdep-rebuild --library '/usr/lib/libv8.so.3.9.4'
> # rm '/usr/lib/libv8.so.3.9.4'
>
> I'd like to modify eutils.eclass to only generate one command:
>
> # revdep-rebuild --library '/usr/lib/libv8.so.3.9.4' && \
> rm '/usr/lib/libv8.so.3.9.4'
>
> What do you think?
>

+1

Moreover, the && won't delete the lib if revdep-rebuild failed, I think,
so it should be even safer to copy/paste :)
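
(A quick shell illustration of the short-circuit behaviour relied on here;
the echo stands in for the rm, nothing Gentoo-specific:)

# false && echo "removing old lib"   # prints nothing: the rm would be skipped
# true && echo "removing old lib"    # prints the message: the rm would run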

A.


kentfredric at gmail

Feb 24, 2012, 10:47 AM

Post #3 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

> # revdep-rebuild --library '/usr/lib/libv8.so.3.9.4' && \
>        rm '/usr/lib/libv8.so.3.9.4'


Might even be worth patching revdep-rebuild:

revdep-rebuild --library /usr/lib/libv8.so.3.9.4 --autoclean
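
(No such --autoclean option exists in revdep-rebuild today; purely as an
illustrative sketch, the suggested behaviour could be prototyped as a tiny
wrapper script -- the script name and argument handling here are made up:)

#!/bin/sh
# revdep-rebuild-autoclean: rebuild consumers of a library, then remove it
# only if the rebuild succeeded.
lib="$1"
revdep-rebuild --library "${lib}" && rm -- "${lib}"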


--
Kent


rich0 at gentoo

Feb 24, 2012, 10:47 AM

Post #4 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

On Fri, Feb 24, 2012 at 1:43 PM, Alexis Ballier <aballier [at] gentoo> wrote:
> Moreover, the && won't delete the lib if revdep-rebuild failed, I think,
> so it should be even safer to copy/paste :)

Am I the only paranoid person who moves them rather than unlinking
them? Oh, if only btrfs were stable...
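
(For the similarly paranoid, a move variant of the copy/paste would look
roughly like this -- the destination directory is just an example:)

# mkdir -p /var/tmp/preserved-libs
# revdep-rebuild --library '/usr/lib/libv8.so.3.9.4' && \
        mv '/usr/lib/libv8.so.3.9.4' /var/tmp/preserved-libs/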

Rich


pacho at gentoo

Feb 24, 2012, 11:12 AM

Post #5 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

On Fri, 24-02-2012 at 18:56 +0100, "Paweł Hajdan, Jr." wrote:
> Currently preserve_old_lib functions generate two commands per preserved
> lib:
>
> # revdep-rebuild --library '/usr/lib/libv8.so.3.9.4'
> # rm '/usr/lib/libv8.so.3.9.4'
>
> I'd like to modify eutils.eclass to only generate one command:
>
> # revdep-rebuild --library '/usr/lib/libv8.so.3.9.4' && \
> rm '/usr/lib/libv8.so.3.9.4'
>
> What do you think?
>

Great, I already run both commands that way manually ;)


jamesbroadhead at gmail

Feb 24, 2012, 12:20 PM

Post #6 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

On 24 February 2012 17:56, "Paweł Hajdan, Jr." <phajdan.jr [at] gentoo> wrote:
> Currently preserve_old_lib functions generate two commands per preserved
> lib:
>
> # revdep-rebuild --library '/usr/lib/libv8.so.3.9.4'
> # rm '/usr/lib/libv8.so.3.9.4'
>
> I'd like to modify eutils.eclass to only generate one command:
>
> # revdep-rebuild --library '/usr/lib/libv8.so.3.9.4' && \
>        rm '/usr/lib/libv8.so.3.9.4'
>
> What do you think?

Definitely a good idea, but FYI it's only been possible since last week :P
https://bugs.gentoo.org/show_bug.cgi?id=326923


ryao at cs

Feb 24, 2012, 4:31 PM

Post #7 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

> Am I the only paranoid person who moves them rather than unlinking
> them? Oh, if only btrfs were stable...

Is this a reference to snapshots? You can use ZFS for those. The
kernel modules are only available in the form of 9999 ebuilds right
now, but your data should be safe unless you go out of your way
to break things (e.g. putting the ZIL/SLOG on a tmpfs). Alternatively,
there is XFS, which I believe also supports snapshots.


floppym at gentoo

Feb 24, 2012, 7:44 PM

Post #8 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

On Fri, Feb 24, 2012 at 7:31 PM, Richard Yao <ryao [at] cs> wrote:
>> Am I the only paranoid person who moves them rather than unlinking
>> them?  Oh, if only btrfs were stable...
>
> Is this a reference to snapshots? You can use ZFS for those. The
> kernel modules are only available in the form of 9999 ebuilds right
> now, but your data should be safe unless you go out of your way
> to break things (e.g. putting the ZIL/SLOG on a tmpfs). Alternatively,
> there is XFS, which I believe also supports snapshots.
>

I've been using btrfs exclusively for about 6 months, and I don't
*think* I've lost anything... :)


rich0 at gentoo

Feb 24, 2012, 7:53 PM

Post #9 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

On Fri, Feb 24, 2012 at 10:44 PM, Mike Gilbert <floppym [at] gentoo> wrote:
>
> I've been using btrfs exclusively for about 6 months, and I don't
> *think* I've lost anything... :)
>

From what I've seen as long as you keep things simple, and don't have
heavy loads, you're at least reasonably likely to get by unscathed.
I'd definitely keep good backups though. Just read the mailing lists,
or for kicks run xfstests on your server. xfstests doesn't do any
direct disk access or anything like that - it is no different than
running bazillions of cat's, mv's, rm's, cp's, etc. It most likely
will panic your system if you try it on btrfs - on ext4 it will
probably load the living daylights out of it but you should be fine.
The issues with btrfs at this point are the ones that aren't so easy
to spot, like race conditions, issues when you use more unusual
configurations, and so on.

Oh, and go ahead and try filling up your disk some time. If your
kernel is recent enough it might not panic when you get down to a few
GB left.

I'm eager for the rise of btrfs - it IS the filesystem of the future.
However, that cuts both ways right now.

Rich


billk at iinet

Feb 24, 2012, 8:10 PM

Post #10 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

On Fri, 2012-02-24 at 22:44 -0500, Mike Gilbert wrote:
> On Fri, Feb 24, 2012 at 7:31 PM, Richard Yao <ryao [at] cs> wrote:
> >> Am I the only paranoid person who moves them rather than unlinking
> >> them? Oh, if only btrfs were stable...
> >
> > Is this a reference to snapshots? You can use ZFS for those. The
> > kernel modules are only available in the form of 9999 ebuilds right
> > now, but your data should be safe unless you go out of your way
> > to break things (e.g. putting the ZIL/SLOG on a tmpfs). Alternatively,
> > there is XFS, which I believe also supports snapshots.
> >
>
> I've been using btrfs exclusively for about 6 months, and I don't
> *think* I've lost anything... :)
>

I did ... I tried it out and found it "tougher" to break than reiserfs,
which is saying something considering how flaky ext2/3 proved for the
same task.

The problem was, once it broke you couldn't fix it :(

Also, there are some things that don't work; one was that a few
packages would always fail to emerge when using btrfs for temp storage
(I think one was libreoffice).

So I deleted the btrfs partitions and put reiserfs back ...

BillK


zmedico at gentoo

Feb 24, 2012, 8:35 PM

Post #11 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

On 02/24/2012 08:10 PM, William Kenworthy wrote:
> Also, there are some things that don't work; one was that a few
> packages would always fail to emerge when using btrfs for temp storage
> (I think one was libreoffice).

I've been using btrfs for temp storage for more than a year, and
haven't noticed any problems with specific packages (libreoffice builds
fine).

The only problems I've experienced are:

1) Intermittent ENOSPC when unpacking lots of files. Maybe this is
related to having compression enabled. I haven't experienced it lately,
so maybe it's fixed in recent kernels.

2) Bug 353907 [1] which is fixed in recent kernels and coreutils.

[1] https://bugs.gentoo.org/show_bug.cgi?id=353907
--
Thanks,
Zac


1i5t5.duncan at cox

Feb 24, 2012, 10:37 PM

Post #12 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

Rich Freeman posted on Fri, 24 Feb 2012 22:53:50 -0500 as excerpted:

> From what I've seen as long as you keep things simple, and don't have
> heavy loads, you're at least reasonably likely to get by unscathed. I'd
> definitely keep good backups though. Just read the mailing lists,
> or for kicks run xfstests

> Oh, and go ahead and try filling up your disk some time. If your kernel
> is recent enough it might not panic when you get down to a few GB left.
>
> I'm eager for the rise of btrfs - it IS the filesystem of the future.
> However, that cuts both ways right now.

That's about right... along with the caveat that if something /does/ go
wrong on your not-too-corner-case, generally normal, lightly loaded
system, while there are recovery tools for /some/ situations, the normally
distributed btrfsck is read-only. The freshly sort-of-available, but
still rather hidden in the DANGER, DON'T EVER USE branch, error-correcting
btrfsck is still under very heavy stress testing internally by Oracle
QA. (As a result of those tests, there's a load of fixes headed to Linus
for inclusion, discovered just since 3.3-rc1. As a result of /that/, 3.3
should be the most stable btrfs yet, but that's still far from saying
it's stable!)

And yes, "filesystem of the future" DOES cut both ways, ATM. It's an apt
description and I too am seriously looking forward to btrfs. But it's
definitely NOT the "filesystem of now", for sure! =:^)

--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman


1i5t5.duncan at cox

Feb 24, 2012, 11:04 PM

Post #13 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

Zac Medico posted on Fri, 24 Feb 2012 20:35:24 -0800 as excerpted:

> I've been using btrfs for temp storage, for more than a year

> The only problems I've experienced are:
>
> 1) Intermittent ENOSPC when unpacking lots of files. Maybe this is
> related to having compression enabled. I haven't experienced it lately,
> so maybe it's fixed in recent kernels.

This is one of those "many bugs, same result" bugs. The way btrfs
allocates space is /extremely/ complicated, and based on what I read on-
list they've been fixing bugs in it, gradually reducing the ENOSPC
triggers, for quite some time.

Last I read, the biggest remaining known one was indeed related to
compression, apparently due to a race condition of some sort, with one bit of
code reaching the ENOSPC conclusion because it finished before the
actual processing code did.

However, apparently the same bug could be triggered on uncompressed btrfs
if it was stressed enough (rsyncing several gigs was a common way to
reproduce it).

Last I read they hadn't fully traced that one down in btrfs itself yet,
but they had worked around the problem by throttling things further up
the stack, in the kernel VFS code I believe. The reasoning was that if a
device was so overwhelmed it clearly couldn't keep up, regardless of the
filesystem, throttling requests at the vfs level would put less pressure
on the filesystem code, allowing things to work more smoothly. It MAY (my own
thought here) have been another application of the buffer-bloat work --
simply increasing buffer size and filling it even more doesn't help, when
the bottleneck is further down the stack, rather the reverse!

AFAIK that's the present status for 3.3. At least that one spurious
ENOSPC trigger remains, but they've worked around it for now with the
throttling, so it shouldn't hit anyone but those deliberately disabling
the throttling in order to further test it, now.

But with luck, the stress-testing that Oracle QA's doing ATM will have
found the root bug and it's fixed now too. I hope...

> 2) Bug 353907 [1] which is fixed in recent kernels and coreutils.
>
> [1] https://bugs.gentoo.org/show_bug.cgi?id=353907

That one could be another head of the same race-related root bug. In
fact, reading it and seeing that ext4 was affected as well, I'm wondering
if that's what triggered the introduction of the throttling at the VFS
level.

(NB: Interesting that I wasn't the only one to see that as an invitation
to discuss btrfs. At least my subthread has the subject changed so
people who want to can ignore it, though. I wish that had happened here too,
but I guess it's kind of late to try and change it with this post, so...)

--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman


cardoe at gentoo

Feb 25, 2012, 7:02 AM

Post #14 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

On Fri, Feb 24, 2012 at 6:31 PM, Richard Yao <ryao [at] cs> wrote:
>> Am I the only paranoid person who moves them rather than unlinking
>> them?  Oh, if only btrfs were stable...
>
> Is this a reference to snapshots? You can use ZFS for those. The
> kernel modules are only available in the form of 9999 ebuilds right
> > now, but your data should be safe unless you go out of your way
> to break things (e.g. putting the ZIL/SLOG on a tmpfs). Alternatively,
> there is XFS, which I believe also supports snapshots.
>

FWIW, I'll second the ZFS > btrfs suggestion. I understand people want
to go btrfs because it's the Linux way, but in real-world usage its
performance is abysmal. We've had over a dozen developers in my group
switch to btrfs on their various environments (OpenSUSE, Fedora, own-
rolled distros) and they've all gone back to their previous filesystem
of choice.

The simplest test I can suggest btrfs users attempt is the following:

dd if=/dev/zero of=/mnt/btrfs/file bs=4k count=100 oflag=direct
dd if=/dev/zero of=/mnt/ext4/file bs=4k count=100 oflag=direct

It emulates an operation similar to an fdatasync().

--
Doug Goldstein


rich0 at gentoo

Feb 25, 2012, 7:26 AM

Post #15 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

On Sat, Feb 25, 2012 at 10:02 AM, Doug Goldstein <cardoe [at] gentoo> wrote:
> FWIW, I'll second the ZFS > btrfs suggestion.

Oh, if you need a safe COW filesystem today I'd definitely recommend
ZFS over btrfs for sure, although I suspect the people who are most
likely to take this sort of advice are also the sort of people who are
least likely to be running Gentoo. There are a bazillion problems
with btrfs as it stands.

However, fundamentally there is no reason to think that ZFS will
remain better in the future once the bugs in btrfs are worked out.
They're still focusing on keeping btrfs from hosing your data - tuning
performance is not a priority yet - but the b-tree design of btrfs
should scale very well.

Rich


ryao at cs

Feb 25, 2012, 12:52 PM

Post #16 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

> Oh, if you need a safe COW filesystem today I'd definitely recommend
> ZFS over btrfs for sure, although I suspect the people who are most
> likely to take this sort of advice are also the sort of people who are
> most likely to not be running Gentoo. There are a bazillion problems
> with btrfs as it stands.

There is significant interest in ZFS in the Gentoo community,
especially on freenode. Several veteran users are evaluating it and
others have already begun to switch from other filesystems, volume
managers and RAID solutions.

> However, fundamentally there is no reason to think that ZFS will
> remain better in the future, once the bugs are worked out. They're
> still focusing on keeping btrfs from hosing your data - tuning
> performance is not a priority yet. However, the b-tree design of
> btrfs should scale very well once the bugs are worked out.

ZFSOnLinux performance tuning is not a priority either, but there have
been a few patches and the performance is good. btrfs might one day
outperform ZFS in terms of single disk performance, assuming that it
does not already, but I question the usefulness of single disk
performance as a performance metric. If I add an SSD to a ZFS pool
machine to complement the disk, system performance will increase
many-fold. As far as I can tell, that will never be possible with
btrfs without external solutions like flashcache, which killed an
OCZ Vertex 3 within 16 days about a month ago; Wyatt in #gentoo-chat
on freenode had to replace it. I imagine that its death could have
been delayed through write rate limiting, which is what ZFS uses for
L2ARC, but until you can replace the Linux page replacement algorithm
with either ARC or something comparable, flashcache will be inferior
to ZFS L2ARC. You can read more about this topic at the following
link:

http://linux-mm.org/AdvancedPageReplacement

ZFS at its core is a transactional object store, and everything that
enables its use as a filesystem is implemented on top of that. ZFS
supports raidz3, zvols, L2ARC, SLOG/ZIL and endian independence, which,
as far as I can tell, are things that btrfs will never support. ZFS
also has either first-party or third-party support on Solaris,
FreeBSD, Linux, Mac OS X and Windows, while btrfs appears to have no
future outside of Linux.

Lastly, ZFS' performance scaling exceeds that of any block device
based filesystem I have seen (which excludes comparisons with
tmpfs/ramfs and lustre/gpfs). The following benchmark is of a SAN
device using ZFS:

http://www.anandtech.com/show/3963/zfs-building-testing-and-benchmarking/2

While ZFS performance in that benchmark is impressive, ZFS can scale
far higher with additional disks and more SSDs. SuperMicro has a
hotswappable 72-disk enclosure that should enable ZFS to far exceed
the performance of the system that Anandtech benchmarked, provided
that it is configured with a large ARC cache and multiple vdevs each
with multiple disks, some SSDs for L2ARC and a SLC SSD-based SLOG/ZIL.
I would not be surprised if ZFS performance were to exceed 1 million
IOPS on such hardware. Nothing that I have seen planned for btrfs can
perform comparably, in any configuration.


rich0 at gentoo

Feb 25, 2012, 1:02 PM

Post #17 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

On Sat, Feb 25, 2012 at 3:52 PM, Richard Yao <ryao [at] cs> wrote:
> ZFSOnLinux performance tuning is not a priority either, but there have
> been a few patches and the performance is good. btrfs might one day
> outperform ZFS in terms of single disk performance, assuming that it
> does not already, but I question the usefulness of single disk
> performance as a performance metric.

Why would btrfs be inferior to ZFS on multiple disks? I can't see how
its architecture would do any worse, and the planned features are
superior to ZFS (which isn't to say that ZFS can't improve either).

Beyond the licensing issues ZFS also does not support reshaping of
raid-z, which is the only n+1 redundancy solution it offers. Btrfs of
course does not yet support n+1 at all aside from some experimental
patches floating around, but it plans to support reshaping at some
point in time. Of course, there is no reason you couldn't implement
reshaping for ZFS, it just hasn't happened yet. Right now the
competition for me is with ext4+lvm+mdraid. While I really would like
to have COW soon, I doubt I'll implement anything that doesn't support
reshaping as mdraid+lvm does.

I do realize that you can add multiple raid-zs to a zpool, but that
isn't quite enough. If I have 4x1TB disks I'd like to be able to add
a single 1TB disk and end up with 5TB of space. I'd rather not have
to find 3 more 1TB hard drives to hold the data on while I redo my
raid and then try to somehow sell them again.

Rich


ryao at cs

Feb 25, 2012, 1:56 PM

Post #18 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

> Why would btrfs be inferior to ZFS on multiple disks? I can't see how
> its architecture would do any worse, and the planned features are
> superior to ZFS (which isn't to say that ZFS can't improve either).

ZFS uses ARC as its page replacement algorithm, which is superior to
the LRU page replacement algorithm used by btrfs. ZFS also has L2ARC and
a SLOG. L2ARC lets data that would not have been evicted from ARC, had
ARC been bigger, be kept in a level-2 cache instead. The SLOG permits
synchronous writes to be logged to a fast dedicated device before they
are committed to the main disks. This provides the benefits of write
sequentialization and protection against data inconsistency in the
event of a kernel panic. Furthermore, data is striped across vdevs, so
the more vdevs you have, the higher your performance goes.
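
(For concreteness, attaching such devices to an existing pool looks roughly
like the following; "tank" and the device names are placeholders:)

# zpool add tank cache /dev/sdX    # an SSD as L2ARC (second-level read cache)
# zpool add tank log /dev/sdY      # a fast device as SLOG (separate intent log)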

These features let ZFS performance reach impressive heights, and the
btrfs developers show no intention of following, as far as I have seen.

> Beyond the licensing issues ZFS also does not support reshaping of
> raid-z, which is the only n+1 redundancy solution it offers. Btrfs of
> course does not yet support n+1 at all aside from some experimental
> patches floating around, but it plans to support reshaping at some
> point in time. Of course, there is no reason you couldn't implement
> reshaping for ZFS, it just hasn't happened yet. Right now the
> competition for me is with ext4+lvm+mdraid. While I really would like
> to have COW soon, I doubt I'll implement anything that doesn't support
> reshaping as mdraid+lvm does.

raidz has 3 varieties, which are single parity, double parity and
triple parity. As for reshaping, ZFS is a logical volume manager. You
can set and resize limits on ZFS datasets as you please.
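
(As an example of such per-dataset limits -- the pool and dataset names are
illustrative:)

# zfs create tank/home
# zfs set quota=100G tank/home         # cap how much space the dataset may use
# zfs set reservation=20G tank/home    # guarantee it a minimum amount of space
# zfs set quota=none tank/home         # limits can be changed or removed later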

As for competing with ext4+lvm+mdraid, I recently migrated a server
from that exact configuration. It had 6 disks using RAID 6. I had a
VM on it running Gentoo Hardened, in which I ran a benchmark using dd
to write zeroes to the disk. Nothing I could do with ext4+lvm+mdraid
could get performance above 20MB/sec. After switching to ZFS,
performance went to 205MB/sec; the worst performance I observed was
92MB/sec. This used 6 Samsung HD204UI hard drives.
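
(The exact dd invocation isn't given above; a generic sequential-write test
of this kind might look like the following, with the output path being a
placeholder:)

# dd if=/dev/zero of=/path/to/testfile bs=1M count=4096 conv=fdatasync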

> I do realize that you can add multiple raid-zs to a zpool, but that
> isn't quite enough. If I have 4x1TB disks I'd like to be able to add
> a single 1TB disk and end up with 5TB of space. I'd rather not have
> to find 3 more 1TB hard drives to hold the data on while I redo my
> raid and then try to somehow sell them again.

You would probably be better served by making your additional drive
into a hot spare, but if you insist on using it, you can make it a
separate vdev, which should provide more space. To be honest, anyone
who wants to upgrade such a configuration is probably better off
getting 4x2TB disks, doing a scrub, and then replacing the disks in the
pool one at a time, resilvering the vdev after each replacement.
After you have finished this process, you will have doubled the amount
of space in the pool.
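
(For illustration, the replace-and-resilver cycle described above would look
roughly like this; the pool and disk names are placeholders, and depending on
the ZFS version the extra capacity may also need a "zpool online -e" to become
visible:)

# zpool scrub tank
# zpool replace tank sda sde    # swap one old disk for a new 2TB one
# zpool status tank             # wait for the resilver to finish, then repeat
# zpool set autoexpand=on tank  # let the pool grow once every disk is replaced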


rich0 at gentoo

Feb 25, 2012, 2:15 PM

Post #19 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

On Sat, Feb 25, 2012 at 4:56 PM, Richard Yao <ryao [at] cs> wrote:
> raidz has 3 varieties, which are single parity, double parity and
> triple parity. As for reshaping, ZFS is a logical volume manager. You
> can set and resize limits on ZFS datasets as you please.

That isn't my understanding as far as raidz reshaping goes. You can
create raidz's and add them to a zpool. You can add individual
drives/partitions to zpools. You can remove any of these from a zpool
at any time and have it move data into other storage areas. However,
you can't reshape a raidz.

Suppose I have a system with 5x1TB hard drives. They're merged into a
single raidz with single-parity, so I have 4TB of space. I want to
add one 1TB drive to the array and have 5TB of single-parity storage.
As far as I'm aware you can't do that with raidz. What you could do
is set up some other 4TB storage area (raidz or otherwise), remove the
original raidz, recycle those drives into the new raidz, and then move
the data back onto it. However, doing this requires 4TB of storage
space. With mdadm you could do this online without the need for
additional space as a holding area.
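
(For comparison, the mdadm online grow referred to here is roughly the
following; the md device and partition names are placeholders:)

# mdadm --add /dev/md0 /dev/sde1            # add the new disk (initially a spare)
# mdadm --grow /dev/md0 --raid-devices=5    # reshape the array from 4 to 5 members
# pvresize /dev/md0                         # then grow the PV, LVs and filesystems on top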

ZFS is obviously a capable filesystem, but unless Oracle re-licenses
it we'll never see it take off on Linux. For good or bad everybody
seems to like the monolithic kernel. Btrfs obviously has a ways to go
before it is a viable replacement, but I doubt Oracle would be sinking
so much money into it if they intended to ever re-license ZFS.

Rich


ryao at cs

Feb 25, 2012, 2:47 PM

Post #20 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

> That isn't my understanding as far as raidz reshaping goes. You can
> create raidz's and add them to a zpool. You can add individual
> drives/partitions to zpools. You can remove any of these from a zpool
> at any time and have it move data into other storage areas. However,
> you can't reshape a raidz.

ZFS is organized into pools, which are transactional object stores.
Various things can go into these transactional object stores, such as
ZFS datasets and zvols. A ZFS dataset is what you would consider to
be a filesystem. A zvol is a block device on which other filesystems
can be installed. Data in a pool is stored in vdevs, which can be
files masquerading as block devices, single disks, mirrored disks or a
raidz level.
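
(A minimal sketch of how those pieces fit together on the command line; the
pool, dataset and device names are made up:)

# zpool create tank raidz2 sdb sdc sdd sde    # a pool with one double-parity raidz vdev
# zfs create tank/data                        # a dataset, i.e. a mountable filesystem
# zfs create -V 10G tank/vm0                  # a zvol, exposed as a 10G block device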

ZFS is designed to put data integrity first. I question how many other
volume managers are capable of recovering from a crash during a
reshape without some sort of catastrophic data loss. With that said, I
do not see the point of discussing this. There are things you
can use your extra disk for, but as far as storage requirements go,
a single disk does not go very far. You are better off replacing
hardware if your storage requirements grow beyond the ability of your
current disks to handle them.

> Suppose I have a system with 5x1TB hard drives. They're merged into a
> single raidz with single-parity, so I have 4TB of space. I want to
> add one 1TB drive to the array and have 5TB of single-parity storage.
> As far as I'm aware you can't do that with raidz. What you could do
> is set up some other 4TB storage area (raidz or otherwise), remove the
> original raidz, recycle those drives into the new raidz, and then move
> the data back onto it. However, doing this requires 4TB of storage
> space. With mdadm you could do this online without the need for
> additional space as a holding area.

If you have proper backups, you should be able to destroy the pool,
make a new one and restore the backup. If you do not have backups,
then I think there are more important things to consider than your
ability to do this without them.

> ZFS is obviously a capable filesystem, but unless Oracle re-licenses
> it we'll never see it take off on Linux. For good or bad everybody
> seems to like the monolithic kernel. Btrfs obviously has a ways to go
> before it is a viable replacement, but I doubt Oracle would be sinking
> so much money into it if they intended to ever re-license ZFS.

I heard a statement in IRC that Oracle owns all of the next generation
filesystems, which enables them to position btrfs for the low-end and
use ZFS at the high-end. I have no way of substantiating this, but I
can say that this does appear to be the case.

With that said, ebuilds are in the portage tree and support has been
integrated into genkernel. I have a physical system booting off ZFS
(no ext4 et al) and genkernel makes kernel upgrades incredibly easy,
even when configuring my own kernel through --menuconfig. Gentoo users
in IRC are quite interested in this and they do not seem to care that
the modules are out-of-tree or that the licensing is different. As far
as I can tell, there is no need for them to care.

You might want to look at Gentoo/FreeBSD, which also supports ZFS with
a monolithic kernel design, but has no licensing issues. There is
nothing forcing any of us to use Linux and if the licensing is a
problem for you, then perhaps it would be a good idea to switch.

Also, to avoid any confusion, a proper bootloader for ZFS does not
exist in portage at this time. I hacked the boot process to enable the
system to boot off ZFS using GRUB and it will require some more work
before this is ready for inclusion into portage. I made an
announcement to the ZFSOnLinux mailing list not that long ago
explaining what I did. I was waiting until ZFS support in Gentoo
reached a few milestones before I made an announcement about it here,
although most of the stuff you need is already in-tree:

http://groups.google.com/a/zfsonlinux.org/group/zfs-discuss/browse_thread/thread/d94f597f8f4e3c88


rich0 at gentoo

Feb 25, 2012, 3:03 PM

Post #21 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

On Sat, Feb 25, 2012 at 5:47 PM, Richard Yao <ryao [at] cs> wrote:
> If you have proper backups, you should be able to destroy the pool,
> make a new one and restore the backup. If you do not have backups,
> then I think there are more important things to consider than your
> ability to do this without them.

I wouldn't have pointed it out if the solution were this simple in my
case. Not everything is worth backing up - I'd rather take a 2%
chance of losing everything but maybe the 0.1% of my storage that I
back up, than wipe the drive and have a 100% chance of losing
everything but the 0.1% of my storage that I back up. My data isn't
worth the cost of a proper backup solution, but it isn't worthless
either so if I can have my cake and eat it too so much the better.

That said, it is true that reshaping often isn't practical for other
reasons, such as having 4 1TB drives, and by the time you want to add
another one the best price point is on 500TB drives.

Thanks for your comments just the same - they are informative. My
licensing concern is more of wanting to promote GPL software than
being compliant, so FreeBSD isn't much of a help. You may be right
about Oracle wanting to keep btrfs for the low end, although where
they are aiming is already high enough for me, and once btrfs
becomes mainstream nobody is really going to be able to hold it back -
it isn't like Oracle actually has any control over it beyond
contributing the most code.

Rich


1i5t5.duncan at cox

Feb 26, 2012, 1:21 AM

Post #22 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

Richard Yao posted on Sat, 25 Feb 2012 17:47:06 -0500 as excerpted:

> Also, to avoid any confusion, a proper bootloader for ZFS does not exist
> in portage at this time. I hacked the boot process to enable the system
> to boot off ZFS using GRUB and it will require some more work before
> this is ready for inclusion into portage.

AFAIK grub2 has a zfs module and therefore zfs support. I was just
reading the thread announcing the freeze for grub 2.0, and there was a
bit of discussion there about it, so whatever licensing issues they had
with it previously appear to have been worked out, as it's supposed to
be part of grub 2.0.

So at least the grub-9999 build should have zfs support. 1.99 probably
has it in some form, but I don't know what its status is. Those are both
in the gentoo main tree, though both are masked.

--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman


phajdan.jr at gentoo

Feb 27, 2012, 7:06 AM

Post #23 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

On 2/24/12 6:56 PM, "Paweł Hajdan, Jr." wrote:
> I'd like to modify eutils.eclass to only generate one command:
>
> # revdep-rebuild --library '/usr/lib/libv8.so.3.9.4' && \
> rm '/usr/lib/libv8.so.3.9.4'

Given the supporting comments in this thread (and the totally off-topic
zfs/btrfs discussion), I'd like to commit the patch below in 24 hours.

> --- eutils.eclass.orig 2012-02-26 10:02:24.000000000 +0100
> +++ eutils.eclass 2012-02-26 10:03:17.000000000 +0100
> @@ -1276,16 +1276,8 @@
> fi
> # temp hack for #348634 #357225
> [[ ${PN} == "mpfr" ]] && lib=${lib##*/}
> - ewarn " # revdep-rebuild --library '${lib}'"
> + ewarn " # revdep-rebuild --library '${lib}' && rm '${lib}'"
> done
> - if [[ ${notice} -eq 1 ]] ; then
> - ewarn
> - ewarn "Once you've finished running revdep-rebuild, it should be safe to"
> - ewarn "delete the old libraries. Here is a copy & paste for the lazy:"
> - for lib in "$@" ; do
> - ewarn " # rm '${lib}'"
> - done
> - fi
> }
>
> # @FUNCTION: built_with_use


pacho at gentoo

Feb 27, 2012, 11:29 AM

Post #24 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

On Mon, 27-02-2012 at 16:06 +0100, "Paweł Hajdan, Jr." wrote:
> On 2/24/12 6:56 PM, "Paweł Hajdan, Jr." wrote:
> > I'd like to modify eutils.eclass to only generate one command:
> >
> > # revdep-rebuild --library '/usr/lib/libv8.so.3.9.4' && \
> > rm '/usr/lib/libv8.so.3.9.4'
>
> Given supporting comments to this thread (and totally off-topic
> zfs/btrfs discussion), I'd like to commit the patch below in 24 hours.
>
> > --- eutils.eclass.orig 2012-02-26 10:02:24.000000000 +0100
> > +++ eutils.eclass 2012-02-26 10:03:17.000000000 +0100
> > @@ -1276,16 +1276,8 @@
> > fi
> > # temp hack for #348634 #357225
> > [[ ${PN} == "mpfr" ]] && lib=${lib##*/}
> > - ewarn " # revdep-rebuild --library '${lib}'"
> > + ewarn " # revdep-rebuild --library '${lib}' && rm '${lib}'"
> > done
> > - if [[ ${notice} -eq 1 ]] ; then
> > - ewarn
> > - ewarn "Once you've finished running revdep-rebuild, it should be safe to"
> > - ewarn "delete the old libraries. Here is a copy & paste for the lazy:"
> > - for lib in "$@" ; do
> > - ewarn " # rm '${lib}'"
> > - done
> > - fi
> > }
> >
> > # @FUNCTION: built_with_use
>

I think somebody pointed out that some "revdep-rebuild" versions were
exiting with a successful code even when they failed. Has the fixed
version been stabilized?


dolsen at gentoo

Feb 27, 2012, 1:37 PM

Post #25 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

On Mon, 2012-02-27 at 20:29 +0100, Pacho Ramos wrote:
> On Mon, 27-02-2012 at 16:06 +0100, "Paweł Hajdan, Jr." wrote:
> > On 2/24/12 6:56 PM, "Paweł Hajdan, Jr." wrote:
> > > I'd like to modify eutils.eclass to only generate one command:
> > >
> > > # revdep-rebuild --library '/usr/lib/libv8.so.3.9.4' && \
> > > rm '/usr/lib/libv8.so.3.9.4'
> >
> > Given supporting comments to this thread (and totally off-topic
> > zfs/btrfs discussion), I'd like to commit the patch below in 24 hours.
> >
> > > --- eutils.eclass.orig 2012-02-26 10:02:24.000000000 +0100
> > > +++ eutils.eclass 2012-02-26 10:03:17.000000000 +0100
> > > @@ -1276,16 +1276,8 @@
> > > fi
> > > # temp hack for #348634 #357225
> > > [[ ${PN} == "mpfr" ]] && lib=${lib##*/}
> > > - ewarn " # revdep-rebuild --library '${lib}'"
> > > + ewarn " # revdep-rebuild --library '${lib}' && rm '${lib}'"
> > > done
> > > - if [[ ${notice} -eq 1 ]] ; then
> > > - ewarn
> > > - ewarn "Once you've finished running revdep-rebuild, it should be safe to"
> > > - ewarn "delete the old libraries. Here is a copy & paste for the lazy:"
> > > - for lib in "$@" ; do
> > > - ewarn " # rm '${lib}'"
> > > - done
> > > - fi
> > > }
> > >
> > > # @FUNCTION: built_with_use
> >
>
> I think somebody pointed out that some "revdep-rebuild" versions were
> exiting with a successful code even when they failed. Has the fixed
> version been stabilized?

No, it is only in -9999 so far. It has not been released in a -0.3*
ebuild yet.

The last patch to revdep-rebuild fixing return codes is:

http://git.overlays.gentoo.org/gitweb/?p=proj/gentoolkit.git;a=commit;h=3e51df74595c535656ef9f38bf7a577a4f64d0f5

from 11 days ago.

--
Brian Dolbec <dolsen [at] gentoo>


phajdan.jr at gentoo

Feb 29, 2012, 12:45 AM

Post #26 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

On 2/27/12 10:37 PM, Brian Dolbec wrote:
>> I think somebody pointed out that some "revdep-rebuild" versions were
>> exiting with a successful code even when they failed. Has the fixed
>> version been stabilized?
>
> No, it is only in -9999 so far. It has not been released in a -0.3*
> ebuild yet.
>
> The last patch to revdep-rebuild fixing return codes is:
>
> http://git.overlays.gentoo.org/gitweb/?p=proj/gentoolkit.git;a=commit;h=3e51df74595c535656ef9f38bf7a577a4f64d0f5
>
> from 11 days ago.

If the maintainers of the package in question do not consider it
important enough to do a release (not even mentioning stabilization), I
don't think this is blocking.

Any further objections? (I'm going to listen)


fuzzyray at gentoo

Feb 29, 2012, 7:25 AM

Post #27 of 27
Re: preserve_old_lib and I'm even more lazy [In reply to]

On Wed, 2012-02-29 at 09:45 +0100, "Paweł Hajdan, Jr." wrote:
> On 2/27/12 10:37 PM, Brian Dolbec wrote:
> >> I think somebody pointed out that some "revdep-rebuild" versions were
> >> exiting with a successful code even when they failed. Has the fixed
> >> version been stabilized?
> >
> > No, it is only in -9999 so far. It has not been released in a -0.3*
> > ebuild yet.
> >
> > The last patch to revdep-rebuild fixing return codes is:
> >
> > http://git.overlays.gentoo.org/gitweb/?p=proj/gentoolkit.git;a=commit;h=3e51df74595c535656ef9f38bf7a577a4f64d0f5
> >
> > from 11 days ago.
>
> If the maintainers of the package in question do not consider it
> important enough to do a release (not even mentioning stabilization), I
> don't think this is blocking.
>
> Any further objections? (I'm going to listen)
>

Yes, you are going to break systems if you make this change. If you
really want to do this before we have a fixed gentoolkit to support it,
then put yourself in the tools-portage herd and handle all of the bugs
that arise out of the change.

I just did a new release of gentoolkit-0.3.0.5 with the fixes in it, so
that you can make this change once it goes stable.

Regards,
Paul
