
Mailing List Archive: DRBD: Users

Hardware recommendation needed

 

 



proxmox at ssn

Mar 29, 2012, 7:22 AM

Post #1 of 25 (3925 views)
Hardware recommendation needed

Hi!

I'm thinking about a DRBD installation for one of our sites.

The hardware is two identical Sandy Bridge Xeon boxes with a dedicated
direct connection between them for DRBD. We're booting off an SSD.
Hard disks are connected to the onboard SATA 6G ports.

The OS is Debian Squeeze using DRBD 8.3.10.

Unfortunately there's just one 3.5" drive bay available for storage, so
RAID is not really an option (perhaps using two 2.5" HDDs mounted with a
2x2.5" to 1x3.5" mounting bracket, but I'm afraid of this setup getting
too hot despite active cooling in the rack).

So as far as I can see, there are four options available:

1.) A single SATA disk, internal metadata: I played around with this
while getting familiar with DRBD; the performance was lousy, near
unusable.

2.) A single SATA disk, external metadata on a partition of the boot
SSD. Any experience with the performance of a setup like this?

3.) A single SATA SSD, internal metadata: I've read about DRBD not
supporting TRIM, so I'm afraid the SSD will not run at full performance
and may die too fast.

4.) A single SATA disk connected to a 3ware 9650SE RAID controller with
a BBU. I've never tried a controller like this with just one disk. How
much RAM do I need on that controller? Is 128MB sufficient? (I know the
more the better, but how much do we really need?)

4a.) Two 2.5" SATA disks mounted in one 3.5" bay, also using a 9650SE.
As said before, I'm afraid of the HDDs getting too hot.


Personally I would prefer variant 2 or 3, but I'm not sure about the
performance of these setups.

So, what do you think about those options?

regards,
Lukas


--

--------------------------
software security networks
Lukas Gradl <proxmox#ssn.at>
Eduard-Bodem-Gasse 6
A - 6020 Innsbruck
Tel: +43-512-214040-0
Fax: +43-512-214040-21
--------------------------


lists at alteeve

Mar 30, 2012, 12:15 PM

Post #2 of 25 (3851 views)
Re: Hardware recommendation needed [In reply to]

Re: RAID 1 - I've used these little adapters quite a bit in production
and have not had any heat problems:

http://usa.chenbro.com/corporatesite/products_detail.php?sku=72

SSDs are nice, but they are not enterprise class (unless you have
serious money). I'd recommend looking at 2x Seagate 10k or 15k SAS
drives. Plug those into an LSI 9265-8i with BBU and you will likely find
the performance is quite nice, without the risk of running into early
failure on SSDs.

Digimer

On 03/29/2012 07:22 AM, Lukas Gradl wrote:
> [...]


--
Digimer
Papers and Projects: https://alteeve.com


pascal.berton3 at free

Mar 30, 2012, 12:16 PM

Post #3 of 25 (3849 views)
Re: Hardware recommendation needed [In reply to]

Wow! Sandy Bridge for this? It's like taking a shotgun to kill a flea,
isn't it? Wouldn't you do better placing your money on your disk layout
rather than on the CPU family? That's what I'd do, at least!
What do you intend to do with this platform? Take care: few disks, few
IOs, especially with SATA... An SSD will surely help, but maybe not as
much as you need. BTW, you'll find a couple of discussions about that
TRIM issue on the mailing list.

Regards,

Pascal.

-----Original Message-----
From: drbd-user-bounces [at] lists
[mailto:drbd-user-bounces [at] lists] On behalf of Lukas Gradl
Sent: Thursday, 29 March 2012 16:22
To: drbd-user [at] lists
Subject: [DRBD-user] Hardware recommendation needed

[...]


proxmox at ssn

Mar 31, 2012, 2:15 PM

Post #4 of 25 (3849 views)
Re: Hardware recommendation needed [In reply to]

On Friday, 30.03.2012, at 21:16 +0200, Pascal BERTON wrote:
> Wow! Sandy Bridge for this? It's like taking a shotgun to kill a flea,
> isn't it? Wouldn't you do better placing your money on your disk layout
> rather than on the CPU family? That's what I'd do, at least!

You consider Sandy Bridge expensive? I'm not using desktop CPUs in
servers, and for our default boxes (E3-1260L, 16GB RAM, pizza-box case,
no disks) we're paying far less than 1000 EUR. And they're ideal for
use in a rack - they consume only 60 watts on average.

> What do you intend to do with this platform? Take care: few disks, few
> IOs, especially with SATA... An SSD will surely help, but maybe not as
> much as you need. BTW, you'll find a couple of discussions about that
> TRIM issue on the mailing list.

I intend to run KVM on them. For automatic failover we need shared
storage. As this setup is intended to host only some small webservers, I
hope to get by with just these two servers - and not a redundant storage
box as well.


For now the KVM guests are on one of these nodes with a local software
RAID 1 consisting of two SATA disks. They perform quite well and have
lots of spare system resources.

So I'm looking for an inexpensive way to replace the software RAID 1
with some kind of network RAID 1.


regards
Lukas


--
--------------------------
software security networks
Lukas Gradl <proxmox#ssn.at>
Eduard-Bodem-Gasse 6
A - 6020 Innsbruck
Tel: +43-512-214040-0
Fax: +43-512-214040-21
--------------------------



proxmox at ssn

Mar 31, 2012, 2:25 PM

Post #5 of 25 (3857 views)
Re: Hardware recommendation needed [In reply to]

On Friday, 30.03.2012, at 12:15 -0700, Digimer wrote:
> Re: RAID 1 - I've used these little adapters quite a bit in production
> and have not had any heat problems:
>
> http://usa.chenbro.com/corporatesite/products_detail.php?sku=72

Thanks for that link - they look interesting.

>
> SSDs are nice, but they are not enterprise class (unless you have
> serious money). I'd recommend looking at 2x Seagate 10k or 15k SAS
> drives. Plug those into an LSI 9265-8i with BBU and you will likely find
> the performance is quite nice, without the risk of running into early
> failure on SSDs.

Two LSI 9265s with BBU, four 10k SAS drives, the mounting bays - that's
around 2500 EUR.

For now the system uses two local SATA drives with software RAID 1, and
that setup has more than enough performance. Is there really no cheaper
way to get DRBD up and running with the same performance as a local
software RAID 1?

regards
Lukas

--
--------------------------
software security networks
Lukas Gradl <proxmox#ssn.at>
Eduard-Bodem-Gasse 6
A - 6020 Innsbruck
Tel: +43-512-214040-0
Fax: +43-512-214040-21
--------------------------



lists at alteeve

Mar 31, 2012, 3:08 PM

Post #6 of 25 (3845 views)
Re: Hardware recommendation needed [In reply to]

On 03/31/2012 02:25 PM, Lukas Gradl wrote:
> [...]
> Two LSI 9265s with BBU, four 10k SAS drives, the mounting bays - that's
> around 2500 EUR.
>
> For now the system uses two local SATA drives with software RAID 1, and
> that setup has more than enough performance. Is there really no cheaper
> way to get DRBD up and running with the same performance as a local
> software RAID 1?
>
> regards
> Lukas

I'm not sure I understand the question, sorry.

DRBD isn't much slower than the native disk performance, provided your
network is fast enough. So the question is less about DRBD's performance
than about the performance you need from the storage. If a standard
SATA drive's performance is fine, then that's all you need.

--
Digimer
Papers and Projects: https://alteeve.com


arnold at arnoldarts

Apr 1, 2012, 5:16 AM

Post #7 of 25 (3844 views)
Re: Hardware recommendation needed [In reply to]

On Saturday 31 March 2012 15:08:59 Digimer wrote:
> [...]
> I'm not sure I understand the question, sorry.
>
> DRBD isn't much slower than the native disk performance, provided your
> network is fast enough.

I wouldn't sign off on that. While a 1Gb network compares to current SATA
disks in throughput, throughput isn't everything. There is also latency,
where the network layer in DRBD introduces a factor of ten compared to
pure local disks when using protocol C. And when there are many
users/apps accessing the resource, it's the latency that makes them
complain.

Have fun,

Arnold
Attachments: signature.asc (0.19 KB)


lars.ellenberg at linbit

Apr 1, 2012, 11:20 AM

Post #8 of 25 (3838 views)
Re: Hardware recommendation needed [In reply to]

On Sun, Apr 01, 2012 at 02:16:42PM +0200, Arnold Krille wrote:
> > [...]
> >
> > DRBD isn't much slower than the native disk performance, provided your
> > network is fast enough.
>
> I wouldn't sign off on that. While a 1Gb network compares to current SATA
> disks in throughput, throughput isn't everything. There is also latency,
> where the network layer in DRBD introduces a factor of ten compared to
> pure local disks when using protocol C.

How is that?

SATA disk latency for random writes: >= 10ms
Round trip time on GigE direct link: < 0.15 ms

So wherever you see your factor 10,
it is unlikely to be the "network layer in DRBD".
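
(For reference, one way to measure both numbers on your own hardware -
device and address are examples, and the dd run overwrites the target:)

# round-trip time on the dedicated replication link
ping -c 100 10.254.1.102

# per-write latency of the backing disk; oflag=direct,dsync pushes each
# 512-byte write through to the platter
dd if=/dev/zero of=/dev/sdb1 bs=512 count=1000 oflag=direct,dsync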

> And when there are many users/apps accessing the resource, it's the
> latency that makes them complain.

That is correct.


--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com


arnold at arnoldarts

Apr 1, 2012, 2:57 PM

Post #9 of 25 (3839 views)
Re: Hardware recommendation needed [In reply to]

On 01.04.2012 20:20, Lars Ellenberg wrote:
> On Sun, Apr 01, 2012 at 02:16:42PM +0200, Arnold Krille wrote:
>> On Saturday 31 March 2012 15:08:59 Digimer wrote:
>>> DRBD isn't much slower than the native disk performance, provided your
>>> network is fast enough.
>> I wouldn't sign off on that. While a 1Gb network compares to current SATA
>> disks in throughput, throughput isn't everything. There is also latency,
>> where the network layer in DRBD introduces a factor of ten compared to
>> pure local disks when using protocol C.
> How is that?
> SATA disk latency for random writes: >= 10ms
> Round trip time on GigE direct link: < 0.15 ms

My experience says otherwise...

> So wherever you see your factor 10,
> it is unlikely to be the "network layer in DRBD".

It's not the "network layer in DRBD", it's "the sending buffer, the
switch, the receiving buffer, the remote disk latency, the sending
buffer, the switch, the receiving buffer" of DRBD with protocol C.

The resulting factor of ten is what my co-admin used to point to as
the culprit for the low performance, just before we switched to protocol
A, where it's really only the local disk's latency.
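
(For readers who want to try the same trade-off: the protocol is set in
drbd.conf, per resource or in the common section. A sketch - note that
protocol A acknowledges a write once it is on the local disk and in the
TCP send buffer, so the most recent writes can be lost on failover, and
dual-primary setups still require protocol C:)

common {
    protocol A;   # asynchronous; B = memory-synchronous, C = fully synchronous
}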

>> And when there are many users/apps accessing the resource, it's the
>> latency that makes them complain.
> That is correct.

Have fun,

Arnold
--
This email was created electronically and is valid without a
handwritten signature.


ff at mpexnet

Apr 2, 2012, 12:39 AM

Post #10 of 25 (3831 views)
Re: Hardware recommendation needed [In reply to]

Hi,

On 04/01/2012 11:57 PM, Arnold Krille wrote:
> It's not the "network layer in DRBD", it's "the sending buffer, the
> switch, the receiving buffer, the remote disk latency, the sending
> buffer, the switch, the receiving buffer" of DRBD with protocol C.

if your DRBD setup includes a switch (and hence probably lots of
DRBD-unrelated traffic on the same NIC), the performance issues are
well-deserved punishment.

Whenever possible, DRBD should use a dedicated back-to-back link.
Buffers should not pose much of an issue then, either.

Regards,
Felix


lists at alteeve

Apr 2, 2012, 12:49 AM

Post #11 of 25 (3825 views)
Re: Hardware recommendation needed [In reply to]

On 04/02/2012 12:39 AM, Felix Frank wrote:
> [...]

I always use switches, and I don't think it's fair to say using switches
is, itself, dumb. Now to play the other side: I never have trouble with
DRBD hurting performance, so saying that DRBD is a performance killer is
not fair either.

Arnold,

Your personal experience may tell you otherwise, but think about it: if
DRBD caused such a tremendous performance hit, do you think others would
use it? Of course not. You dismissed one of the DRBD devs out of hand...
If you want to resolve your issues, you might want to be a bit more open
to admitting you have something to learn.

--
Digimer
Papers and Projects: https://alteeve.com


ff at mpexnet

Apr 2, 2012, 12:55 AM

Post #12 of 25 (3829 views)
Re: Hardware recommendation needed [In reply to]

On 04/02/2012 09:49 AM, Digimer wrote:
> I don't think it's fair to say using switches
> is, itself, dumb.

I didn't mean to imply that. If I came across that way, I apologize.

All I meant to say was that you should *not* do it, if at all avoidable.
(Yes, I have run DRBD without dedicated links in production, too.)

Thanks for pointing that out.

Cheers,
Felix


lists at alteeve

Apr 2, 2012, 12:57 AM

Post #13 of 25 (3838 views)
Re: Hardware recommendation needed [In reply to]

On 04/02/2012 12:39 AM, Felix Frank wrote:
> [...]

Whoops, I meant to say more on switches.

Using a switch will contain traffic between the ports used by DRBD. Of
course, you will very much want a dedicated interface (or, ideally, two
in active/passive bonding). You need to look at the switch's
capabilities, as there is a lot more to a switch than its rated port
speed.

You need to ensure that the switch's internal performance is high enough
to handle all your network load while leaving enough headroom for the
additional DRBD traffic. You also want to set your MTU as high as your
equipment allows, assuming you're using decent quality equipment.
Realtek is terrible, generally speaking.
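
(As a concrete example - interface name and peer address assumed, and the
switch ports plus the peer must be configured to match:)

ip link set eth1 mtu 9000

# verify that jumbo frames actually pass unfragmented
# (8972 = 9000 minus 28 bytes of IP/ICMP headers)
ping -M do -s 8972 10.254.1.102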

Personally, I use D-Link DGS-3120 series switches with Intel NICs. I've
just started testing the DGS-1210 series, which is much less expensive,
and initial testing shows them to be perfectly capable.

TL;DR - Network equipment can't be crap, and not all gigabit is created
equal.

--
Digimer
Papers and Projects: https://alteeve.com


lists at alteeve

Apr 2, 2012, 1:09 AM

Post #14 of 25 (3834 views)
Re: Hardware recommendation needed [In reply to]

On 04/02/2012 12:55 AM, Felix Frank wrote:
> [...]

I've found that having switches can be helpful. One example: if an
interface drops, it can be easier to see which side dropped, because the
switch will still provide link. With good switches there is no
perceptible performance hit, either. So, not to beat the issue, but I
don't even agree with the argument that a direct link is clearly
preferable to using switches.

I'm open to being shown why I am wrong though (seriously :) ). Do you
have any testing numbers or arguments against using a switch?

--
Digimer
Papers and Projects: https://alteeve.com


ff at mpexnet

Apr 2, 2012, 3:37 AM

Post #15 of 25 (3818 views)
Re: Hardware recommendation needed [In reply to]

On 04/02/2012 10:09 AM, Digimer wrote:
> I'm open to being shown why I am wrong though (seriously :) ). Do you
> have any testing numbers or arguments against using a switch?

You got me. I've never done such an analysis. It's more a case of "common
sense agrees with general recommendations agrees with general anxiety" ;-)

It's also that I'm fairly certain that if you do use a switched network
for your DRBD links, it is unlikely to be dedicated (because then it
would be almost trivial to remove the switch from this piece of your
setup). Again, I've been in situations where you cannot have a dedicated
link, and I sympathize, but it's a painful choice and I encourage
everyone to refrain from it.

I cannot comment on the sense (or lack thereof) of switching your
dedicated links. Your point appears valid.

Cheers,
Felix



proxmox at ssn

Apr 3, 2012, 2:53 AM

Post #16 of 25 (3780 views)
Re: Hardware recommendation needed [In reply to]

> I'm not sure I understand the question, sorry.
>
> DRBD isn't much slower than the native disk performance, provided your
> network is fast enough. So the question is less about DRBD's performance
> than about the performance you need from the storage. If a standard
> SATA drive's performance is fine, then that's all you need.

I followed the discussion about switch or no switch.

But I'm still stuck with my questions...

For use with KVM with automatic failover I need a primary/primary setup,
so AFAIK protocol C is required.

According to my benchmarks, DRBD in that setup is much slower than native
HDD performance, and changing the network setup from a 1Gbit direct link
to two bonded interfaces doesn't increase speed.

As we've only space for one 3.5" HDD (the other bay is used by the
boot SSD), I'm unable to install a RAID 5 setup.


So I'm thinking about installing two SSDs per server using a 2x2.5" to
1x3.5" adapter and leaving 20% of each SSD's space unpartitioned because
of the lack of TRIM support (see the sketch below).
Then I would create two DRBD devices to store the KVM images on.
Moneywise this is not cheap, but OK with our budget.
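
(A sketch of that layout - device name and the 80% mark are placeholders;
the unpartitioned tail is left for the drive's wear leveling:)

parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart primary 1MiB 80%
# the remaining 20% is deliberately left unpartitioned as over-provisioning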


What do the experts think: should this be sufficient to get the
performance of a single SATA disk without DRBD?

regards
Lukas


--
--------------------------
software security networks
Lukas Gradl <proxmox#ssn.at>
Eduard-Bodem-Gasse 6
A - 6020 Innsbruck
Tel: +43-512-214040-0
Fax: +43-512-214040-21
--------------------------



ff at mpexnet

Apr 3, 2012, 3:03 AM

Post #17 of 25 (3779 views)
Re: Hardware recommendation needed [In reply to]

Hi,

On 04/03/2012 11:53 AM, Lukas Gradl wrote:
> For use with KVM with automatic failover I need a primary/primary setup,
> so AFAIK protocol C is required.

For dual-primary it is required, yes. You do need dual-primary for live
migration. You do *not* need it for automatic failover (in failure
scenarios, live migration won't do you any good anyway).

If live migration isn't an issue for you, single-primary is perfectly
fine! You still want protocol C, though :-)
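
(In drbd.conf terms, dual-primary is what allow-two-primaries enables -
a sketch, shown with the split-brain policies that usually accompany it;
omit the option and you have the single-primary setup described above:)

net {
    allow-two-primaries;                  # dual-primary, needed for live migration
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
}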

> According to my benchmarks, DRBD in that setup is much slower than native
> HDD performance, and changing the network setup from a 1Gbit direct link
> to two bonded interfaces doesn't increase speed.

Have you identified the exact bottleneck in your DRBD setup?
Have you done an analysis according to
http://www.drbd.org/users-guide/ch-benchmark.html?
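
(The guide's method boils down to two dd runs, first against the backing
device and then against the DRBD device - large blocks for throughput,
many small synchronous writes for latency; the device name is an example,
and both runs destroy data on the target:)

dd if=/dev/zero of=/dev/drbd1 bs=512M count=1 oflag=direct      # throughput
dd if=/dev/zero of=/dev/drbd1 bs=512 count=1000 oflag=direct    # latency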

> What do the experts think: should this be sufficient to get the
> performance of a single SATA disk without DRBD?

I don't really feel addressed ;-) but here are my 2 cents:

If DRBD performance with rotational disks is unsatisfactory, I wouldn't
count on faster disks somehow solving the problem. You *may* save enough
latency to make the setup worthwhile, but I'd rather keep trying
to root out the main problem.
Throwing SSDs at a slow setup in order to make it a mediocre setup seems
awfully wasteful to me.

Cheers,
Felix


M.vandeLande at VDL-Fittings

Apr 3, 2012, 3:28 AM

Post #18 of 25 (3785 views)
Re: Hardware recommendation needed [In reply to]

> What do the experts think: should this be sufficient to get the performance of a single SATA disk without DRBD?

Probably not, nothing will.

I'm using DRBD in primary/primary mode to host KVM images on a two-node cluster (with drbd 8.3.12; drbd 8.4.1 has some performance issues).
I have switched to SSDs myself (in RAID 5 mode). This improved VM performance (I guess because reading data is much faster), but the DRBD syncer speed did not improve. I even installed a 10G network backbone and used 10G network adapters on the servers, but the syncer speed still does not go beyond 110MB/s.

I let Linbit look at this setup, but they could not get a higher syncer speed with protocol C. I think the problem is that the syncer uses a single thread and is therefore limited by the processing power of one CPU. Turning off power management and IRQ balancing helped a little, but not much.

I have spent ages trying to increase the syncer rate; for now it seems limited to 110MB/s.

This is my latest drbd.conf:

#
# You can find an example in /usr/share/doc/drbd.../drbd.conf.example

#include "drbd.d/global_common.conf";
#include "drbd.d/*.res";

#
# please have a look at the example configuration file in
# /usr/share/doc/drbd83/drbd.conf
#

global {
    minor-count 64;
    usage-count yes;
}

common {
    syncer {
        rate 110M;
        verify-alg crc32c;
        #csums-alg sha1;    # do not use, slow performance
        al-extents 3733;
        cpu-mask 3;
    }
}

resource VMstore1 {

    protocol C;

    startup {
        wfc-timeout 1800;        # 30 min
        degr-wfc-timeout 120;    # 2 minutes
        wait-after-sb;
        become-primary-on both;
    }

    disk {
        no-disk-barrier;
        no-disk-flushes;
    }

    net {
        max-buffers 8000;
        max-epoch-size 8000;
        sndbuf-size 0;
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
    }

    syncer {
        cpu-mask 3;
    }

    on vmhost6a.vdl-fittings.local {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.100.37:7788;
        meta-disk internal;
    }

    on vmhost6b.vdl-fittings.local {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.100.38:7788;
        meta-disk internal;
    }
}

Best regards,

Maurits van de Lande

-----Original Message-----
From: drbd-user-bounces [at] lists [mailto:drbd-user-bounces [at] lists] On behalf of Lukas Gradl
Sent: Tuesday, 3 April 2012 11:54
To: drbd-user [at] lists
Subject: Re: [DRBD-user] Hardware recommendation needed

[...]


florian at hastexo

Apr 3, 2012, 4:17 AM

Post #19 of 25 (3790 views)
Re: Hardware recommendation needed [In reply to]

On Tue, Apr 3, 2012 at 12:28 PM, Maurits van de Lande
<M.vandeLande [at] vdl-fittings> wrote:
>> What do the experts think: should this be sufficient to get the performance of a single SATA disk without DRBD?
>
> Probably not, nothing will.

Beg to differ.

The question was: will a DRBD pair of SSDs, when replicated with
protocol C, match the performance of a single standalone SATA drive?
And at least in terms of throughput, you bet it will. A single SATA
drive would sustain 50-60MB/s of streaming writes at most, and that
should _definitely_ be doable with a protocol C replicated pair of SSDs.

And latency-wise, Lars has already given his view on that, which I
agree with, so I don't need to rehash it.

> I'm using DRBD in primary/primary mode to host KVM images on a two-node cluster (with drbd 8.3.12; drbd 8.4.1 has some performance issues).
> I have switched to SSDs myself (in RAID 5 mode). This improved VM performance (I guess because reading data is much faster), but the DRBD syncer speed did not improve. I even installed a 10G network backbone and used 10G network adapters on the servers, but the syncer speed still does not go beyond 110MB/s.

Ahem. I'm almost certain you're failing to provide some crucial piece
of information here. We have customers on 8.3 happily replicating in
excess of 300MB/s. That's on Infiniband hardware, but 10G is also
certainly capable of going faster than 110MB/s.

Just my €0.02.

Florian

--
Need help with High Availability?
http://www.hastexo.com/now


smt at vgersoft

Apr 3, 2012, 4:44 AM

Post #20 of 25 (3776 views)
Re: Hardware recommendation needed [In reply to]

On Tue, 3 Apr 2012, Florian Haas wrote:

> Ahem. I'm almost certain you're failing to provide some crucial piece
> of information here. We have customers on 8.3 happily replicating in
> excess of 300MB/s. That's on Infiniband hardware, but 10G is also
> certainly capable of going faster than 110MB/s.

Indeed. I use dual bonded gigabit links (a dedicated point-to-point
link) and get in excess of 160 MB/sec from DRBD.
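
(For reference, a minimal Debian /etc/network/interfaces stanza for such
a dedicated bonded pair - interface names and address are assumptions;
balance-rr is the mode that lets a single DRBD connection use both links:)

auto bond0
iface bond0 inet static
    address 10.254.1.101
    netmask 255.255.255.0
    bond-slaves eth1 eth2
    bond-mode balance-rr
    bond-miimon 100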

Steve


lists at alteeve

Apr 3, 2012, 10:07 AM

Post #21 of 25 (3768 views)
Re: Hardware recommendation needed [In reply to]

On 04/03/2012 02:53 AM, Lukas Gradl wrote:
> As we've only space for one 3.5" HDD (the other bay is used by the
> boot SSD), I'm unable to install a RAID 5 setup.

Three things:

1. RAID 5 will shorten the life of SSDs.
2. RAID 1+0 is faster than RAID 5, if you can afford the reduced capacity.
3. cat /sys/block/sdb/queue/rotational

If set to '1':

echo 0 > /sys/block/sdb/queue/rotational

That made a significant improvement on my test bed (albeit with hardware
RAID 5 [LSI-9265 w/ SSDs and HP P410i w/ 10krpm SAS HDDs]).
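
(Note that the setting does not survive a reboot; if it helps, a udev rule
along these lines - the device match is an example - makes it persistent:)

# /etc/udev/rules.d/60-ssd-rotational.rules
ACTION=="add|change", KERNEL=="sdb", ATTR{queue/rotational}="0"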

As for dual-primary/protocol C: I've got nine clusters in production,
some under heavy load, hosting both Linux and Windows VMs. These operate
just fine with proper configuration of the storage. For example, in some
cases I use a couple of separate RAID 1 arrays, in others I used RAID 5.
All use traditional platter drives (some SATA, most SAS). None of these
clusters is over the top, performance-wise.

--
Digimer
Papers and Projects: https://alteeve.com


lists at alteeve

Apr 3, 2012, 10:11 AM

Post #22 of 25 (3776 views)
Re: Hardware recommendation needed [In reply to]

On 04/03/2012 10:07 AM, Digimer wrote:
> [...]

Oh, you might want to try testing with different schedulers:

cat /sys/block/sdb/queue/scheduler              # active scheduler shown in brackets
echo deadline > /sys/block/sdb/queue/scheduler  # also try noop and cfq

cat /sys/block/sdb/device/queue_depth
echo 128 > /sys/block/sdb/device/queue_depth    # try values from 64 up to the
                                                # device maximum (975 here);
                                                # 128 was my best

There are other tweaks, but that should get you going.


--
Digimer
Papers and Projects: https://alteeve.com


proxmox at ssn

Apr 3, 2012, 11:25 AM

Post #23 of 25 (3817 views)
Re: Hardware recommendation needed [In reply to]

On Tuesday, 03.04.2012, at 10:07 -0700, Digimer wrote:
> On 04/03/2012 02:53 AM, Lukas Gradl wrote:
> > As we've only space for one 3.5" HDD (the other bay is used by the
> > boot SSD), I'm unable to install a RAID 5 setup.
>
> Three things:
>
> 1. RAID 5 will shorten the life of SSDs.

I don't think so - but you have to be careful which SSD you choose (we
use the Intel 320 and have a successful setup with the Crucial M4) and
not use them to their full capacity (we normally leave 15-20% of the
space unused).

> 2. RAID 1+0 is faster than RAID 5, if you can afford the reduced capacity.

As I wrote: we've only space for one 3.5" HDD - so we're able to install
two 2.5" disks with a special adapter - so neither RAID 5 (3 disks
minimum) nor RAID 1+0 (4 disks minimum) is possible.


> 3. cat /sys/block/sdb/queue/rotational
>
> If set to '1':
>
> echo 0 > /sys/block/sdb/queue/rotational
>
> That made a significant improvement on my test bed (albeit with hardware
> RAID 5 [LSI-9265 w/ SSDs and HP P410i w/ 10krpm SAS HDDs]).

For which setup? At the moment I have a SATA disk, so rotational=1 seems
quite right. As I wrote before: at the moment there's no RAID
controller.

>
> As for dual-primary/protocol C: I've got nine clusters in production,
> some under heavy load, hosting both Linux and Windows VMs. These operate
> just fine with proper configuration of the storage. For example, in some
> cases I use a couple of separate RAID 1 arrays, in others I used RAID 5.
> All use traditional platter drives (some SATA, most SAS). None of these
> clusters is over the top, performance-wise.



--
--------------------------
software security networks
Lukas Gradl <proxmox#ssn.at>
Eduard-Bodem-Gasse 6
A - 6020 Innsbruck
Tel: +43-512-214040-0
Fax: +43-512-214040-21
--------------------------



proxmox at ssn

Apr 3, 2012, 6:16 PM

Post #24 of 25 (3765 views)
Re: Hardware recommendation needed [In reply to]

On Tuesday, 03.04.2012, at 12:03 +0200, Felix Frank wrote:
> Hi,
>
> On 04/03/2012 11:53 AM, Lukas Gradl wrote:
> > For use with KVM with automatic failover I need a primary/primary setup,
> > so AFAIK protocol C is required.
>
> For dual-primary it is required, yes. You do need dual-primary for live
> migration. You do *not* need it for automatic failover (in failure
> scenarios, live migration won't do you any good anyway).
>
> If live migration isn't an issue for you, single-primary is perfectly
> fine! You still want protocol C, though :-)
>
> > According to my benchmarks, DRBD in that setup is much slower than native
> > HDD performance, and changing the network setup from a 1Gbit direct link
> > to two bonded interfaces doesn't increase speed.
>
> Have you identified the exact bottleneck in your DRBD setup?
> Have you done an analysis according to
> http://www.drbd.org/users-guide/ch-benchmark.html?

Yes.

I benchmarked exactly as described in that doc.

Throughput values don't really change - 85MB/s on the raw device, 83MB/s
on the DRBD device
(average of 5 runs of "dd if=/dev/zero of=/dev/drbd1 bs=512M count=1
oflag=direct").

But latency suffers badly:
writing 1000 512-byte blocks took 0.05397s on the raw device and 12.757s
on the DRBD device
(average of 5 runs of "dd if=/dev/zero of=/dev/drbd1 bs=512 count=1000
oflag=direct").
That works out to roughly 0.054ms per write locally versus about 12.8ms
per write through DRBD.

Additionally, I tried DRBD with internal metadata on the SATA disk and
with external metadata on the boot SSD - there were no significant
changes.

My drbd.conf looks like this:

global {
    usage-count no;
}

common {
    protocol C;
    syncer {
        rate 120M;
        al-extents 3389;
    }
    startup {
        wfc-timeout 15;
        degr-wfc-timeout 60;
        become-primary-on both;
    }
    net {
        cram-hmac-alg sha1;
        shared-secret "secret";
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
        sndbuf-size 512k;
    }
}

resource r0 {
    on vm01 {
        device    /dev/drbd0;
        disk      /dev/sdb3;
        address   10.254.1.101:7780;
        meta-disk /dev/sda3[0];
    }
    on vm02 {
        device    /dev/drbd0;
        disk      /dev/sdb3;
        address   10.254.1.102:7780;
        meta-disk /dev/sda3[0];
    }
}

resource r1 {
    on vm01 {
        device    /dev/drbd1;
        disk      /dev/sdb1;
        address   10.254.1.101:7781;
        meta-disk internal;
    }
    on vm02 {
        device    /dev/drbd1;
        disk      /dev/sdb1;
        address   10.254.1.102:7781;
        meta-disk internal;
    }
}

Both nodes are linked by a direct Gigabit connection used exclusively by
DRBD.

>
> > What do the experts think: should this be sufficient to get the
> > performance of a single SATA disk without DRBD?
>
> I don't really feel addressed ;-) but here are my 2 cents:
>
> If DRBD performance with rotational disks is unsatisfactory, I wouldn't
> count on faster disks somehow solving the problem. You *may* save enough
> latency to make the setup worthwhile, but I'd rather keep trying
> to root out the main problem.

I would like to do so - but I have no real idea what the problem might be.

regards
Lukas


--
--------------------------
software security networks
Lukas Gradl <proxmox#ssn.at>
Eduard-Bodem-Gasse 6
A - 6020 Innsbruck
Tel: +43-512-214040-0
Fax: +43-512-214040-21
--------------------------



proxmox at ssn

Apr 3, 2012, 7:21 PM

Post #25 of 25 (3762 views)
Re: Hardware recommendation needed [In reply to]

> Yes.
>
> I benchmarked exactly as described in that doc.

And a little addition:

I created three 32GB disks for a KVM guest on that SATA disk:

the first on local LVM, the second on DRBD using external metadata, and
the third on DRBD using internal metadata.

Each of these 32GB disks was partitioned with one 32GB partition and was
formatted inside the KVM guest (Debian Squeeze, disks attached as
virtio devices) with ext3.

Formatting took 19s for local LVM, 95s for DRBD with external metadata,
and 133s for DRBD with internal metadata...
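
(Presumably measured with something along these lines inside the guest -
device name assumed:)

time mkfs.ext3 /dev/vdb1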

regards
Lukas

--
--------------------------
software security networks
Lukas Gradl <proxmox#ssn.at>
Eduard-Bodem-Gasse 6
A - 6020 Innsbruck
Tel: +43-512-214040-0
Fax: +43-512-214040-21
--------------------------

