Mailing List Archive: OpenStack: Operators

Shared storage HA question


dloshakov at gmail

Jul 24, 2013, 7:11 AM

Post #1 of 35 (185 views)
Permalink
Shared storage HA question

Hi all,

I have an issue creating shared storage for OpenStack. The main idea is to
create 100% redundant shared storage from two servers (a kind of network
RAID across two servers).
I have two identical servers with many disks inside. What solution can
anyone recommend for such a scheme? I need shared storage for running VMs
(so live migration can work) and also for cinder-volumes.

One solution is to install Linux on both servers and use DRBD + OCFS2,
any comments on this?
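
As a rough, untested sketch (hostnames, disks and addresses below are
placeholders), I imagine the DRBD side would look something like:

    # /etc/drbd.d/r0.res, identical on both nodes
    resource r0 {
        protocol C;                  # synchronous replication
        device    /dev/drbd0;
        disk      /dev/sdb1;         # local backing disk
        meta-disk internal;
        net {
            allow-two-primaries;     # dual-primary, required for OCFS2
        }
        on node1 { address 10.0.0.1:7788; }
        on node2 { address 10.0.0.2:7788; }
    }

with an OCFS2 filesystem created on /dev/drbd0 and mounted on both nodes.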
I have also heard about the Quadstor software, which can create a network
RAID and present it via iSCSI.

Thanks.

P.S. Glance uses Swift and is set up on separate servers.

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators [at] lists
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


jacobgodin at gmail

Jul 24, 2013, 7:25 AM

Post #2 of 35 (178 views)
Permalink
Re: Shared storage HA question [In reply to]

Hi Denis,

I would take a look at GlusterFS with a distributed, replicated volume. We
have been using it for several months now, and it has been stable. Nova
will need to have the volume mounted at its instances directory (default
/var/lib/nova/instances), and Cinder has direct support for Gluster as of
Grizzly, I believe.
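
For a two-node setup, the volume creation is just a few commands, roughly
like this (hostnames and brick paths are examples only):

    # on one storage node
    gluster peer probe storage2
    gluster volume create nova-inst replica 2 \
        storage1:/bricks/nova storage2:/bricks/nova
    gluster volume start nova-inst

    # on each compute node
    mount -t glusterfs storage1:/nova-inst /var/lib/nova/instances

and the Grizzly GlusterFS driver is enabled in cinder.conf along these
lines (check the docs for your exact release):

    volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver
    glusterfs_shares_config=/etc/cinder/glusterfs_shares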




razique.mahroua at gmail

Jul 24, 2013, 7:32 AM

Post #3 of 35 (178 views)
Permalink
Re: Shared storage HA question [In reply to]

I had many performance issues myself with Windows instances and other I/O-demanding instances. Make sure it fits your environment before deploying it in production.

Regards,
Razique

Razique Mahroua - Nuage & Co
razique.mahroua [at] gmail
Tel : +33 9 72 37 94 15





jford at blackmesh

Jul 24, 2013, 7:32 AM

Post #4 of 35 (179 views)
Permalink
Re: Shared storage HA question [In reply to]

Denis,

Look at Ceph (http://ceph.com/) for this, since you want to use it with
Cinder. We have had pretty good success with it, as long as you can give
your storage network > 1 Gbit/s speeds.

The basic approach would be to install Ceph storage nodes on your boxes
with the disks in them and also run a mon process on each. Put your
journals on faster drives, and install the third mon and the gateway on
your OpenStack management box.
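
With ceph-deploy, that is roughly the following (hostnames are
placeholders; check the ceph docs for the exact syntax of your release):

    ceph-deploy new store1 store2 mgmt1      # define the initial mons
    ceph-deploy install store1 store2 mgmt1
    ceph-deploy mon create store1 store2 mgmt1
    # one OSD per data disk, journal on a faster device
    ceph-deploy osd create store1:/dev/sdb:/dev/ssd1
    ceph-deploy osd create store2:/dev/sdb:/dev/ssd1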

There are many howtos on the ceph site for this and they are very helpful
in IRC if you have questions about it.

Hope this helps!

Regards,

jason

--------------------------------
Jason Ford
BlackMesh Managed Hosting
Drupal/Magento/Wordpress and Private Clouds
http://www.blackmesh.com
888.473.0854







razique.mahroua at gmail

Jul 24, 2013, 7:34 AM

Post #5 of 35 (179 views)
Permalink
Re: Shared storage HA question [In reply to]

Hi Jason,
do you use CephFS as shared storage?
Thanks





narayan.desai at gmail

Jul 24, 2013, 7:36 AM

Post #6 of 35 (178 views)
Permalink
Re: Shared storage HA question [In reply to]

+1 on GlusterFS. It works and is stable for us as well.

Also, there is direct qemu integration via libgfapi, which allows you to
bypass the FUSE filesystem layer altogether. We've only tested it directly
with qemu/kvm instances, not through OpenStack, though it looks like that
won't be hard to add if the support isn't already written. (We saw a
substantial performance improvement from this switch.)
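
For example, with a qemu built against libgfapi you can point a drive
straight at the Gluster volume instead of going through the FUSE mount
(volume and image names here are just illustrative):

    qemu-img create -f qcow2 gluster://storage1/nova-inst/disk.qcow2 10G
    qemu-system-x86_64 ... \
        -drive file=gluster://storage1/nova-inst/disk.qcow2,if=virtio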
-nld




jford at blackmesh

Jul 24, 2013, 7:39 AM

Post #7 of 35 (179 views)
Permalink
Re: Shared storage HA question [In reply to]

Razique,

We have used it to hold the metadata (/var/lib/nova/instances) and boot
from volume for everything else. Since Denis is looking for Cinder volume
support, he isn't far from just doing boot from volume anyway to
consolidate the storage layers. At least that was the transition we made
when trying to figure this out as well.
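
As a sketch, booting from a volume with the Grizzly-era CLI looks roughly
like this (IDs and names below are placeholders):

    cinder create --image-id <image-uuid> --display-name vm1-root 20
    nova boot --flavor m1.small \
        --block-device-mapping vda=<volume-uuid>:::0 vm1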

Regards,

jason








jacobgodin at gmail

Jul 24, 2013, 7:40 AM

Post #8 of 35 (179 views)
Permalink
Re: Shared storage HA question [In reply to]

A few things I found were key for I/O performance:

1. Make sure your network can sustain the traffic. We are using a 10G
backbone with 2 bonded interfaces per node.
2. Use high speed drives. SATA will not cut it.
3. Look into tuning settings. Razique, thanks for sending these along to
me a little while back. A couple that I found were useful:
- KVM cache=writeback (a little risky, but WAY faster)
- Gluster write-behind-window-size (set to 4MB in our setup)
- Gluster cache-size (ideal values in our setup were 96MB-128MB)
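
For reference, the Gluster tunables above are set per volume, along these
lines (the volume name is an example):

    gluster volume set nova-inst performance.write-behind-window-size 4MB
    gluster volume set nova-inst performance.cache-size 128MB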

Hope that helps!





smanos at unimelb

Jul 24, 2013, 7:43 AM

Post #9 of 35 (178 views)
Permalink
Re: Shared storage HA question [In reply to]

Hi Denis,

Before designing and implementing any shared storage for OpenStack, you should think carefully about the workload your shared storage needs to be built for. How many VMs is it servicing? What sort of workloads are they? DBs? Web servers? File servers? etc. Then design according to that spec.

Steven.




stephane.boisvert at gameloft

Jul 24, 2013, 7:47 AM

Post #10 of 35 (178 views)
Permalink
Re: Shared storage HA question [In reply to]

Sorry to jump into this thread, but I set cache=true in my Ceph
config... where can I set cache=writeback?

Thanks for your help



--
Stéphane Boisvert
GNS-Shop Technical Coordinator
5800 St-Denis suite 1001
Montreal (QC), H2S 3L5
MSN: stephane.boisvert [at] gameloft
E-mail: stephane.boisvert [at] gameloft


jacobgodin at gmail

Jul 24, 2013, 8:20 AM

Post #11 of 35 (178 views)
Permalink
Re: Shared storage HA question [In reply to]

Hi Stephane,

This is actually done in Nova with the config
directive disk_cachemodes="file=writeback"
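
That is, something like this on each compute node, followed by a restart
of nova-compute:

    # /etc/nova/nova.conf
    disk_cachemodes="file=writeback"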




jacobgodin at gmail

Jul 24, 2013, 8:24 AM

Post #12 of 35 (179 views)
Permalink
Re: Shared storage HA question [In reply to]

Agreed. We have a multi-tenant setup, so we decided to allow for HA root
storage via a GlusterFS mount. To Jason's point, if boot from volume works
for your setup, Ceph is probably your best option. We actually use Ceph
for volume and image storage.
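
For what it's worth, our Ceph wiring is roughly the following (pool and
user names are examples; the secret UUID comes from libvirt):

    # cinder.conf
    volume_driver=cinder.volume.drivers.rbd.RBDDriver
    rbd_pool=volumes
    rbd_user=cinder
    rbd_secret_uuid=<libvirt-secret-uuid>

    # glance-api.conf
    default_store=rbd
    rbd_store_pool=images
    rbd_store_user=glance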




stephane.boisvert at gameloft

Jul 24, 2013, 8:30 AM

Post #13 of 35 (179 views)
Permalink
Re: Shared storage HA question [In reply to]

Thanks for the quick answer. I already did that, but it doesn't seem to be
taken into account... I'll test it again and open a new thread if I fail.

Thanks Jacob




jacobgodin at gmail

Jul 24, 2013, 8:39 AM

Post #14 of 35 (178 views)
Permalink
Re: Shared storage HA question [In reply to]

Hi Stephane,

If you have any existing instances, you will need to completely power them
off and back on again for the change to take effect.




stephane.boisvert at gameloft

Jul 24, 2013, 8:41 AM

Post #15 of 35 (178 views)
Permalink
Re: Shared storage HA question [In reply to]

No need to 'terminate' them? Just powering them off will do it?





razique.mahroua at gmail

Jul 24, 2013, 11:31 AM

Post #16 of 35 (178 views)
Permalink
Re: Shared storage HA question [In reply to]

:-)
Actually I had to remove all my instances running on it (especially the
Windows ones); unfortunately my network backbone wasn't fast enough to
support the load induced by GlusterFS, especially the numerous operations
performed by the self-healing agents :(

I'm currently considering MooseFS; it has the advantage of a pretty long
list of companies using it in production.

Take care




jacobgodin at gmail

Jul 24, 2013, 11:37 AM

Post #17 of 35 (178 views)
Permalink
Re: Shared storage HA question [In reply to]

Oh really, you've done away with Gluster altogether? The fast backbone is
definitely needed, but I would think that's the case with any distributed
filesystem.

MooseFS looks promising, but apparently it has a few reliability problems.




razique.mahroua at gmail

Jul 24, 2013, 11:47 AM

Post #18 of 35 (179 views)
Permalink
Re: Shared storage HA question [In reply to]

Not done yet; I'll still do some testing on it, but I don't expect much
given the current topology.

MooseFS lacks a decentralized metadata server, but you can build an HA
setup with an active/passive master/metalogger pair. I've been running
such a setup for almost 2 years now, and no issues so far.
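
From memory, the outline is roughly this (names are placeholders): the
standby box runs a metalogger pointed at the master, and on failover you
replay the changelogs there and start a master in its place:

    # /etc/mfs/mfsmetalogger.cfg on the standby node
    MASTER_HOST = mfsmaster

    # on failover, on the standby node:
    mfsmetarestore -a      # rebuild metadata.mfs from the changelogs
    mfsmaster start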



joe.topjian at cybera

Jul 24, 2013, 12:08 PM

Post #19 of 35 (178 views)
Permalink
Re: Shared storage HA question [In reply to]

Hi Jacob,

Are you using SAS or SSD drives for Gluster? Also, do you have one large
Gluster volume across your entire cloud, or is it broken up into a few
different ones? I've wondered if there's a benefit to doing the latter, so
that distribution activity is isolated to only a few nodes. The downside
to that, of course, is that you're limited in which compute nodes
instances can migrate to.

I use Gluster for instance storage in all of my "controlled" environments,
like internal and sandbox clouds, but I'm hesitant to introduce it into
production environments, as I've seen the same issues that Razique
describes -- especially with Windows instances. My guess is that it's due
to how NTFS writes to disk.

I'm curious if you could report the results of the following test: in a
Windows instance running on Gluster, copy a 3-4 GB file to it from the
local network, so it comes in at a very high speed. When I do this, the
first few gigs come in very fast, but then the copy slows to a crawl and
the Gluster processes on all nodes spike.

Thanks,
Joe



On Wed, Jul 24, 2013 at 12:37 PM, Jacob Godin <jacobgodin [at] gmail> wrote:

> Oh really, you've done away with Gluster all together? The fast backbone
> is definitely needed, but I would think that was the case with any
> distributed filesystem.
>
> MooseFS looks promising, but apparently it has a few reliability problems.
>
>
> On Wed, Jul 24, 2013 at 3:31 PM, Razique Mahroua <
> razique.mahroua [at] gmail> wrote:
>
>> :-)
>> Actually I had to remove all my instances running on it (especially the
>> windows ones), yah unfortunately my network backbone wasn't fast enough to
>> support the load induced by GFS - especially the numerous operations
>> performed by the self-healing agents :(
>>
>> I'm currently considering MooseFS, it has the advantage to have a pretty
>> long list of companies using it in production
>>
>> take care
>>
>>
>> Le 24 juil. 2013 à 16:40, Jacob Godin <jacobgodin [at] gmail> a écrit :
>>
>> A few things I found were key for I/O performance:
>>
>> 1. Make sure your network can sustain the traffic. We are using a 10G
>> backbone with 2 bonded interfaces per node.
>> 2. Use high speed drives. SATA will not cut it.
>> 3. Look into tuning settings. Razique, thanks for sending these along
>> to me a little while back. A couple that I found were useful:
>> - KVM cache=writeback (a little risky, but WAY faster)
>> - Gluster write-behind-window-size (set to 4MB in our setup)
>> - Gluster cache-size (ideal values in our setup were 96MB-128MB)
>>
>> Hope that helps!
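
For concreteness, the quoted tuning settings map onto commands roughly
like the following (a sketch; the volume name "vmstore" is a placeholder,
and option names should be checked against your Gluster release):

    # Gluster translator tuning, run once against the volume
    gluster volume set vmstore performance.write-behind-window-size 4MB
    gluster volume set vmstore performance.cache-size 128MB

    # KVM writeback caching is set per guest disk in the libvirt
    # domain XML, e.g.:
    #   <driver name='qemu' type='qcow2' cache='writeback'/>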


--
Joe Topjian
Systems Architect
Cybera Inc.

www.cybera.ca

Cybera is a not-for-profit organization that works to spur and support
innovation, for the economic benefit of Alberta, through the use
of cyberinfrastructure.


razique.mahroua at gmail

Jul 24, 2013, 1:18 PM

Post #20 of 35 (179 views)
Permalink
Re: Shared storage HA question [In reply to]

+1 :)


On Jul 24, 2013, at 21:08, Joe Topjian <joe.topjian [at] cybera> wrote:



dloshakov at gmail

Jul 24, 2013, 10:49 PM

Post #21 of 35 (178 views)
Permalink
Re: Shared storage HA question [In reply to]

Thanks for the advice. I thought about GlusterFS too, but with Cinder it
has no snapshot support.
And thanks for the tip about Windows and I/O problems; we are building a
private/public cloud, and I/O is one of the main priorities.
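
For reference, the Grizzly-era Cinder wiring for Gluster looks roughly
like this (a sketch; the share host and paths are placeholders):

    # /etc/cinder/cinder.conf additions (sketch)
    cat >> /etc/cinder/cinder.conf <<'EOF'
    volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
    glusterfs_shares_config = /etc/cinder/glusterfs_shares
    EOF
    # one Gluster share per line in the shares file
    echo 'storage1:/cinder-vols' > /etc/cinder/glusterfs_shares

Volumes are created as files on the mounted share; as noted above, the
driver at this point had no snapshot support.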


On 24.07.2013 17:32, Razique Mahroua wrote:
> I had many performance issues myself with Windows instances and other
> I/O-demanding instances. Make sure it fits your env. first before
> deploying it in production.
>
> Regards,
> Razique
>
> Razique Mahroua - Nuage & Co
> razique.mahroua [at] gmail
> Tel : +33 9 72 37 94 15

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators [at] lists
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


dloshakov at gmail

Jul 24, 2013, 10:59 PM

Post #22 of 35 (178 views)
Permalink
Re: Shared storage HA question [In reply to]

Hi Jason, yep, Ceph is the first solution I am going to try. The only
thing that scares me is that it was described somewhere on ceph.com as
not yet ready for production, but I have heard of some successful
integrations with OpenStack.

Thanks.

On 24.07.2013 17:32, Jason Ford wrote:
> Denis,
>
> Look at ceph (http://ceph.com/) for this since you are trying to use it in
> cinder. We have had pretty good success with it as long as you can give
> your storage network > 1gbit/s speeds.
>
> The basic approach would be to install ceph storage nodes on your boxes
> with disk in it and also install a mon process on each. Put your journals
> on faster drives and install the third mon and the gateway on your
> openstack management box.
>
> There are many howtos on the ceph site for this and they are very helpful
> in IRC if you have questions about it.
>
> Hope this helps!
>
> Regards,
>
> jason
>
> --------------------------------
> Jason Ford
> BlackMesh Managed Hosting
> Drupal/Magento/Wordpress and Private Clouds
> http://www.blackmesh.com
> 888.473.0854
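
The recipe above, sketched with ceph-deploy (hostnames and device paths
are placeholders, and exact subcommands vary by ceph-deploy release):

    # two storage boxes, each running an OSD host and a mon; the third
    # mon on the management node (hypothetical hostnames)
    ceph-deploy new store1 store2 mgmt1
    ceph-deploy install store1 store2 mgmt1
    ceph-deploy mon create store1 store2 mgmt1
    # one OSD per data disk, with the journal on a faster device
    ceph-deploy osd create store1:/dev/sdb:/dev/sdg
    ceph-deploy osd create store2:/dev/sdb:/dev/sdg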

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators [at] lists
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


dloshakov at gmail

Jul 24, 2013, 11:02 PM

Post #23 of 35 (179 views)
Permalink
Re: Shared storage HA question [In reply to]

Hi Steven, because we are building a semi-private/public cloud, it's very
hard to predict future load. But I don't think there will be any big and
heavy VMs.
For now it is a pilot project; in the future we are planning to buy some
hardware-based storage (maybe even FC).

Thanks.

On 24.07.2013 17:43, Steven Manos wrote:
> Hi Denis,
>
> Before designing & implementing any shared storage for OpenStack, you
> should think carefully about the workload your shared storage needs to
> be built for. How many VMs is it servicing? What sort of workloads are
> they? DBs? Web servers? File servers? etc. Then design according to that
> spec.
>
> Steven.
>

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators [at] lists
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


dloshakov at gmail

Jul 24, 2013, 11:16 PM

Post #24 of 35 (179 views)
Permalink
Re: Shared storage HA question [In reply to]

So, first I'm going to try Ceph.
Thanks for the advice, and let the RTFM begin :)

On 24.07.2013 23:18, Razique Mahroua wrote:
> +1 :)

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators [at] lists
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


sylvain.bauza at bull

Jul 25, 2013, 12:22 AM

Post #25 of 35 (179 views)
Permalink
Re: Shared storage HA question [In reply to]

Hi Denis,

Based on my short tests, I would expect anything that is not FUSE-mounted
to match your needs.

On the performance front, here is what I can suggest:
- if using GlusterFS, wait for this BP [1] to be implemented. I do agree
with Razique on the issues you could face with GlusterFS; they are mainly
due to the Windows caching system combined with QCOW2 copy-on-write
images sitting on a FUSE mountpoint.
- if using Ceph, use RBD (RADOS block devices) to boot from Cinder
volumes. Again, don't use a FUSE mountpoint (see the sketch below).

-Sylvain

[1] https://blueprints.launchpad.net/nova/+spec/glusterfs-native-support
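
For the Ceph path, the Cinder side looks roughly like this (a sketch
only; the pool name, user, and libvirt secret UUID are placeholders to
fill in for your deployment):

    # /etc/cinder/cinder.conf additions (sketch)
    cat >> /etc/cinder/cinder.conf <<'EOF'
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_user = cinder
    rbd_secret_uuid = 00000000-0000-0000-0000-000000000000
    EOF

With that in place, QEMU talks to RBD through librbd directly and no
FUSE mountpoint is involved.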



On 25/07/2013 08:16, Denis Loshakov wrote:
> So, first I'm going to try Ceph.
> Thanks for the advice, and let the RTFM begin :)
