
Mailing List Archive: Gentoo: Dev

btrfs status and/was: preserve_old_lib

 

 



1i5t5.duncan at cox

Feb 24, 2012, 3:26 PM

Post #1 of 4
btrfs status and/was: preserve_old_lib

Rich Freeman posted on Fri, 24 Feb 2012 13:47:45 -0500 as excerpted:

> On Fri, Feb 24, 2012 at 1:43 PM, Alexis Ballier <aballier [at] gentoo>
> wrote:
>> moreover the && wont delete the lib if revdep-rebuild failed i think,
>> so it should be even safer to copy/paste :)

FWIW, this is the preserve-libs feature/bug I ran into in early
testing, and what convinced me to turn it off. Running revdep-rebuild
manually was far safer anyway, since at least then I /knew/ the status
of the various libs: they weren't preserved on the first run, then
arbitrarily deleted on the second, even when deleting them still broke
the remaining apps depending on them.
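As a toy illustration of the `&&` guard in the quoted snippet (the stand-in function and filename below are hypothetical, not real portage behavior), a failed rebuild step short-circuits the removal:

```shell
# Toy demonstration: `&&` only runs the removal when the preceding
# command succeeds. fake_revdep_rebuild stands in for revdep-rebuild.
tmpdir=$(mktemp -d)
touch "$tmpdir/libfoo.so.1.old"        # hypothetical preserved copy

fake_revdep_rebuild() { return 1; }    # simulate a failed rebuild

fake_revdep_rebuild && rm -f "$tmpdir/libfoo.so.1.old" \
  || echo "rebuild failed; keeping preserved lib"

ls "$tmpdir"                           # libfoo.so.1.old is still there
```

Because the simulated rebuild exits non-zero, the preserved copy survives, which is the "safer to copy/paste" property being discussed.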

So if that was reliably fixed, I'd be FAR happier about enabling
FEATURES=preserve-libs. I'm not sure I actually would, as I like a bit
more direct knowledge of stale libs on the system than the automated
handling gives me, but at least I wouldn't have to worry about the
so-called "preserved" libs STILL disappearing and leaving broken
packages if I DID enable it!

So definitely ++ on this! =:^)

> Am I the only paranoid person who moves them rather than unlinking them?
> Oh, if only btrfs were stable...

FWIW, in the rare event it breaks revdep-rebuild or the underlying
rebuilding itself, I rely on my long-set FEATURES=buildpkg and emerge
-K. In the even rarer event that too is broken, there's always
manually untarring the missing lib from the binpkg. (I've had to do
that once, when gcc itself was broken due to an ill-advised emerge -C
that I knew might break things given the depclean warning, but also
knew I could fix with an untar if it came to it, which it did.) And if
it really comes to that, there's booting to backup and using ROOT= to
emerge -K back to a working system.
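The untar-a-lib-from-the-binpkg trick can be sketched as follows. Real binpkgs are bzip2-compressed .tbz2 files with portage metadata appended; here a plain stand-in tarball (all paths hypothetical) demonstrates the same selective-extract invocation:

```shell
# Simulate rescuing a single missing lib out of a binary package.
tmp=$(mktemp -d)

# Build a stand-in "binpkg" containing one library file.
mkdir -p "$tmp/image/usr/lib"
touch "$tmp/image/usr/lib/libstdc++.so.6"
tar cf "$tmp/gcc-bin.tar" -C "$tmp/image" usr/lib/libstdc++.so.6

# Extract just the missing lib into a recovery root, the way you would
# untar it from a real binpkg when emerge itself is broken.
mkdir -p "$tmp/recovery"
tar xf "$tmp/gcc-bin.tar" -C "$tmp/recovery" usr/lib/libstdc++.so.6

ls "$tmp/recovery/usr/lib"
```

Naming the member on the tar command line extracts only that file, so you can drop a single lib back into place without re-merging anything.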


[btrfs status discussion, skip if uninterested.]

I'm not sure if that's a reference to the btrfs snapshots allowing
rollbacks feature, or a hint that you're running it and worried about its
stability underneath you...

If it's the latter, you probably already know this, but if it's the
former, and for others interested...

I recently set the btrfs kernel options and merged btrfs-progs, then read
up on the wiki and joined the btrfs list, with the plan being to get
familiar with it and perhaps install it.

From all the reports about it now being an install option for various
distros, and all the constant improvement reports, I had /thought/ that
the biggest stability issue was the lack of an error-correcting (not
just error-detecting) fsck.btrfs, and that the restore tool announced
late last year, which allows pulling data off of unmountable btrfs
volumes, was a reasonable workaround.

What I found, even allowing for the fact that such lists get the bad
reports and not the good ones, and thus paint a rather worse picture of
the situation than actually exists for most users, is that...

BTRFS still has a rather longer way to go than I had thought. It's
still FAR from stable, even for someone like myself who often runs
betas and was prepared to keep (and use, if necessary) TESTED backups,
etc. Maybe by Q4 this year, but also very possibly not until next
year. I'd definitely NOT recommend that anyone run it now, unless you
are SPECIFICALLY running it for testing and bug-reporting purposes with
"garbage" data (IOW, data that you're NOT depending on, at the btrfs
level, at all) that you are not only PREPARED to lose, but EXPECT to
lose, perhaps repeatedly, during your testing.

IOW, there's still known untraced and unfixed active data corruption bugs
remaining. Don't put your data on btrfs at this point unless you EXPECT
to have it corrupted, and want to actively help in tracing and patching
the problems!

Additionally, for anyone who has been interested in the btrfs RAID
capabilities: striped/raid0 it handles, but its raid1 and raid10 modes
are misnamed. At present it's strictly two-way-mirror ONLY; there's no
way to do N-way (N>2) mirroring at all, aside from layering on top of,
say, mdraid, and of course layering on top of mdraid loses the data
integrity guarantees at that level, since btrfs still has just the one
additional copy it can fall back on. This SERIOUSLY limits btrfs data
integrity possibilities in a 2+ drive failure scenario.
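To put numbers on that: since btrfs "raid1" always keeps exactly two copies no matter how many devices are in the pool, the arithmetic is fixed (a sketch, assuming equal-sized devices):

```shell
# btrfs "raid1" keeps exactly 2 copies of each extent, regardless of
# device count: usable space is total/2 and only a single drive
# failure is guaranteed survivable.
devices=4
size_gb=500
total=$((devices * size_gb))
usable=$((total / 2))
echo "total: ${total}G  usable: ${usable}G  guaranteed survivable drive failures: 1"
```

So adding a fifth or sixth device buys capacity, not extra redundancy, which is exactly the limitation complained about above.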

btrfs raid5/6 isn't available yet, but the current roadmap says kernels
3.4 or 3.5. Multi-way mirroring is supposed to be built on that code,
though the mentions of it I've seen are specifically triple-mirror, so
it's unclear whether arbitrary N-way (N>3) mirroring, as in true raid1,
will be possible even then. But whether triple-way specifically or
N-way (N>=3), given that it sits on top of the raid5/6 code due in
3.4/3.5, multi-way mirroring thus appears to be 3.5/3.6 at the
earliest.

So while I had gotten the picture that btrfs was stabilizing and it was
mostly over-cautiousness keeping that experimental label around, that's
definitely NOT the case. Nobody should really plan on /relying/ on it,
even with backups, until at least late this year, and very possibly
looking at 2013 now.

So btrfs is still a ways out. =:^(

Meanwhile, for anyone still interested in it at this point, note that
the wiki currently listed as the homepage in the btrfs-progs package is
a stale copy on kernel.org, still read-only after the kernel.org
break-in. The "temporary" but looking more and more permanent location
is:

http://btrfs.ipv5.de/index.php?title=Main_Page

Also, regarding the gentoo btrfs-progs package, see my recently filed:

https://bugs.gentoo.org/show_bug.cgi?id=405519

--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman


ryao at cs

Feb 24, 2012, 5:06 PM

Post #2 of 4
Re: btrfs status and/was: preserve_old_lib


Have you tried ZFS? The kernel modules are in the portage tree, and I
am maintaining a FAQ on the status of Gentoo ZFS support on GitHub:

https://github.com/gentoofan/zfs-overlay/wiki/FAQ

Data stored on ZFS is generally safe unless you go out of your way to
lose it (e.g. put the ZIL/SLOG on a tmpfs).
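For anyone curious what that looks like in practice, getting a pool up on Gentoo is roughly the following. Treat it strictly as a sketch: the device names and pool/dataset names are my assumptions, and the FAQ above has the authoritative steps.

```shell
# Sketch only: install the out-of-tree modules, then create a mirrored
# pool. Requires root and two spare disks; do NOT run on devices with
# data you care about.
emerge -av sys-fs/zfs          # pulls in the kernel modules + tools
modprobe zfs
zpool create tank mirror /dev/sdb /dev/sdc
zfs create tank/home           # datasets take the place of partitions
zpool status tank              # verify both mirror halves are ONLINE
```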

On 02/24/12 18:26, Duncan wrote:
> [snip: full quote of Duncan's btrfs status message, above]



rich0 at gentoo

Feb 24, 2012, 5:54 PM

Post #3 of 4
Re: btrfs status and/was: preserve_old_lib

On Fri, Feb 24, 2012 at 8:06 PM, Richard Yao <ryao [at] cs> wrote:
> Have you tried ZFS?

Yes, but I'm not terribly interested in doing that on Linux. I do
appreciate that it can be done, but ZFS still lacks raid-z reshaping,
which means it isn't quite flexible enough for me.

> On 02/24/12 18:26, Duncan wrote:
>> FWIW, in the rare event it breaks revdep-rebuild or the underlying
>>  rebuilding itself, I rely on my long set FEATURES=buildpkg and
>> emerge -K.

I also use buildpkg, but I don't keep them around forever.

>> I'm not sure if that's a reference to the btrfs snapshots allowing
>>  rollbacks feature, or a hint that you're running it and worried
>> about its stability underneath you...

That would be the former. I'm QUITE aware of its stability.

I've played around with it in a VM; I also posted an experience with it
on my blog around a year ago. It has come quite a way, but it is
definitely not production quality. xfstests is useful if you want to
try breaking it (I think I posted an article on my blog about capturing
Linux kernel core dumps for debugging purposes); it panics quite
readily.

If you do want to mess with it, I'd recommend using the git kernel
maintained by the btrfs team. It is obviously bleeding-edge, but due
to the high pace of fixes it tends to be more stable than the version
in the mainline kernel.
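If it helps anyone find it, I believe the tree in question is Chris Mason's on kernel.org; the repository path here is from memory, so verify it against the btrfs wiki before relying on it:

```shell
# Sketch: fetch the btrfs development kernel tree (path from memory,
# not verified; check the btrfs wiki for the canonical URL).
git clone git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git
cd linux-btrfs
```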

Rich


1i5t5.duncan at cox

Feb 24, 2012, 10:11 PM

Post #4 of 4
Re: btrfs status and/was: preserve_old_lib

Richard Yao posted on Fri, 24 Feb 2012 20:06:21 -0500 as excerpted:

> Have you tried ZFS? The kernel modules are in the portage tree and I am
> maintaining a FAQ regarding the status of Gentoo ZFS support at github:
>
> https://github.com/gentoofan/zfs-overlay/wiki/FAQ
>
> Data stored on ZFS is generally safe unless you go out of your way to
> lose it (e.g. put the ZIL/SLOG on a tmpfs).

I haven't.

One reason is licensing issues. I know they resolve to some degree for
end users who don't distribute, and for those only distributing
sources, since the GPL isn't particularly concerned in those cases, but
it's still an issue that I'd prefer not to touch personally (nothing
against others doing so, just not me), so no ZFS here. There's a
discussion that could be had beyond that, and I'm tempted, but here
isn't the place for it.

My reason for posting wasn't really that, anyway; it was the apparently
common misconception out there that btrfs is basically ready and that
the developers are just being conservative in switching off the
experimental label. There are several posts a week on the btrfs list
from people caught out trying to depend on it, asking about
recovery-tool status and the like, things they'd already /know/ if they
were using btrfs for testing, etc., its only appropriate use atm. It's
simply not ready for more than that.

Additionally, in the context of gentoo-dev, the post was to say: don't
plan on btrfs stability this year for anything you might be
maintaining, except pre-release versions (the kernel, btrfs-progs and
grub2 packages excepted, but they don't depend on btrfs stability, they
help create it).

--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
