
Mailing List Archive: OpenStack: Operators

Question regarding to referencearchitecture.org

 

 



igor.laskovy at gmail

May 1, 2012, 4:41 AM

Post #1 of 10
Question regarding to referencearchitecture.org

Hi all!

I am new to OpenStack and have a question regarding
http://www.referencearchitecture.org/physical-deployment/ .
The design shows two controller nodes, one branded iSCSI DAS and a
lot of compute nodes. The controller nodes have small local storage
and connectivity to the iSCSI DAS. The compute nodes, per the "Rule
of Thumb: 4 to 8 GB RAM and 1 Spindle Per Core", each have a lot of
local disk space.
So how does nova-volume work here? Does it run on a controller node
and use a VG "nova-volumes" built on a disk attached via the iSCSI
DAS? Does euca-create-volume then create a volume and expose it to
the compute node via iSCSI again? If that is correct, what is the
point of the huge local storage on the compute nodes? Is it only for
the boot images of the instances?
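
To make sure I understand, the flow I have in mind is roughly the
following (a sketch only; the device path /dev/sdb, the zone name,
the size and the instance/volume IDs are just examples):

    # On the controller, build the "nova-volumes" VG on the
    # iSCSI-attached DAS disk for nova-volume to carve LVs from
    pvcreate /dev/sdb
    vgcreate nova-volumes /dev/sdb

    # From a client with EC2 credentials, create a 10 GB volume and
    # attach it to a running instance; nova-volume then exports the
    # LV to that instance's compute node over iSCSI
    euca-create-volume -s 10 -z nova
    euca-attach-volume -i i-00000001 -d /dev/vdc vol-00000001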

Igor Laskovy
_______________________________________________
Openstack-operators mailing list
Openstack-operators [at] lists
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


vivekraghuwanshi at gmail

May 1, 2012, 4:59 AM

Post #2 of 10
Re: Question regarding to referencearchitecture.org [In reply to]

Yes, and to store images and snapshots.

On Tue, May 1, 2012 at 5:11 PM, Igor Laskovy <igor.laskovy [at] gmail> wrote:

> [...]



--
ViVek Raghuwanshi
Mobile -+91-09595950504

Skype - vivek_raghuwanshi


jason.cannavale at rackspace

May 1, 2012, 5:20 AM

Post #3 of 10
Re: Question regarding to referencearchitecture.org [In reply to]

Hi Igor, Vivek,

Apologies for the confusion. This site was put together quite some time ago (between the Cactus and Diablo releases), only took the nova and swift projects into consideration, and has not been updated since. Unfortunately, nova-volumes was not considered; however, your assumption is correct that the huge local storage on the compute nodes is for downloading the image from glance to the compute node, booting the image, and providing some local storage for the instances themselves.


For images and snapshots, the document assumes that swift is the backend for glance. In that case you would have glance-api and glance-registry running on the controller node with the swift store configured as the backend, and you would follow the swift portion of the reference architecture.
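
For example, with swift as the glance backend the relevant part of
glance-api.conf on the controller would look roughly like this (a
sketch only, with Diablo-era option names as I recall them; the auth
URL, credentials and container name are placeholders):

    # store images in swift rather than on the controller's local disk
    default_store = swift
    swift_store_auth_address = http://127.0.0.1:5000/v2.0/
    swift_store_user = jdoe
    swift_store_key = SECRET_KEY
    swift_store_container = glance
    swift_store_create_container_on_put = True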


Jason



duncan at dreamhost

May 1, 2012, 6:26 AM

Post #4 of 10
Re: Question regarding to referencearchitecture.org [In reply to]

Jason,

Folks from CERN and NeCTAR [1] have expressed interest in sharing
their cloud architectures and reference materials.

Should they go through you, and get their data/images put up on
http://www.referencearchitecture.org/ ?

d


[1] Associated with the University of Melbourne, http://nectar.org.au/

On Tue, May 1, 2012 at 8:20 AM, Jason Cannavale
<jason.cannavale [at] rackspace> wrote:
> [...]


igor.laskovy at gmail

May 1, 2012, 7:01 AM

Post #5 of 10
Re: Question regarding to referencearchitecture.org [In reply to]

Thanks Jason and Vivek for the clarifications.

Is it possible to run nova-volumes on each compute node, for that
node's own use? That would keep all of an instance's data on a single
compute node, giving a more decentralized storage design.


On Tue, May 1, 2012 at 3:20 PM, Jason Cannavale
<jason.cannavale [at] rackspace> wrote:
> [...]



--
Igor Laskovy


vivekraghuwanshi at gmail

May 1, 2012, 7:03 AM

Post #6 of 10
Re: Question regarding to referencearchitecture.org [In reply to]

Thanks Duncan

On Tue, May 1, 2012 at 6:56 PM, Duncan McGreggor <duncan [at] dreamhost> wrote:

> [...]



--
ViVek Raghuwanshi
Mobile -+91-09595950504

Skype - vivek_raghuwanshi


jason.cannavale at rackspace

May 1, 2012, 7:17 AM

Post #7 of 10
Re: Question regarding to referencearchitecture.org [In reply to]

Duncan,

I think it would be a great idea to get a larger set of architectures
up for easy review; let me see what I can do.


Jason

On 5/1/12 2:26 PM, "Duncan McGreggor" <duncan [at] dreamhost> wrote:

> [...]



vivekraghuwanshi at gmail

May 1, 2012, 7:19 AM

Post #8 of 10
Re: Question regarding to referencearchitecture.org [In reply to]

I agree with Jason.


On Tue, May 1, 2012 at 7:47 PM, Jason Cannavale <
jason.cannavale [at] rackspace> wrote:

> [...]


--
ViVek Raghuwanshi
Mobile -+91-09595950504

Skype - vivek_raghuwanshi


jason.cannavale at rackspace

May 1, 2012, 7:20 AM

Post #9 of 10
Re: Question regarding to referencearchitecture.org [In reply to]

Yes, technically nova-volume can run anywhere. Referring back to the
ref arch, you would run the volume API on the controller node and the
actual volume service on the compute nodes. There are some obvious
considerations (such as the performance of the disks and how heavily
the system is already used for VMs and instance/image storage), but
if you have space remaining it could be a good way to use it.
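
Roughly, on each compute node that would mean something like the
following (a sketch only; the spare-disk path /dev/sdb and the
service command are examples and will depend on your distro and
packages):

    # carve a local volume group for nova-volume out of spare disk
    pvcreate /dev/sdb
    vgcreate nova-volumes /dev/sdb

    # in that node's nova.conf (flag-file style), alongside the
    # existing compute flags
    --volume_group=nova-volumes
    --iscsi_helper=tgtadm

    # then run the volume service next to nova-compute
    # (the API services stay on the controller as described above)
    service nova-volume start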

Jason


On 5/1/12 3:01 PM, "Igor Laskovy" <igor.laskovy [at] gmail> wrote:

> [...]



igor.laskovy at gmail

May 1, 2012, 7:49 AM

Post #10 of 10
Re: Question regarding to referencearchitecture.org [In reply to]

Thanks Jason, I will try to do this.
On May 1, 2012 5:20 PM, "Jason Cannavale" <jason.cannavale [at] rackspace>
wrote:

> [...]
