
Mailing List Archive: OpenStack: Netstack

Re: quantum community projects & folsom summit plans

 

 



r-mibu at cq

Apr 3, 2012, 11:42 PM

Post #1 of 10
Re: quantum community projects & folsom summit plans

Hi Dan,



Thanks for your response.

> ** vif port parameter handling (or portprofile)
> VIF Driver should be available to handle plugin-specific port parameters and/or helper functions.
> This is required in just some plugins now, but necessary to be designed in Quantum Community.
> In my experience of implementing NEC OpenFlow Plugin, I think it is tough to create Nova drivers and Quantum
>extension APIs.
> To help those who want to write a new Quantum plugin without an "agent", we should design a common (or just
>sample) Nova driver and
> Quantum extension APIs for passing or retrieving plugin-specific information.
>
>
>
>I believe the Cisco driver actually does something like this already, you might want to look at it as an example.
>
>However, I think the goal should be that code that is in Nova is not specific to the plugin. As we talked about
>earlier in the thread, you may need different types of vif-plugging to attach to different types of switching
>technologies (e.g., OVS vs. bridge vs. VEPA/VNTAG), but I think that different plugins that use the same switching
>technology should be able to use the same vif-plugging (for example, there are several plugins that all use the OVS
>vif-plugging). Our goal here should be to minimize churn in the Nova codebase.

Yes, I checked the Cisco driver and the portprofile extension.
The Cisco driver seems to pass and retrieve the plugin-specific data with the PUT method.
I just thought that this kind of vif-plugging could be a general model for plugins that work without an "agent".
I agree with you that we should minimize churn in the Nova codebase.
But I still feel that the "agent" model, especially the polling, is not so good.
Although there are many topics at the summit,
I hope that we can have a discussion about vif-plugging changes.
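To make the pattern concrete, here is a rough sketch of a Nova-side driver pushing plugin-specific port data to Quantum with a PUT, along the lines Ryota describes. The URL layout and the "portprofile" attribute name are illustrative assumptions, not the actual Cisco extension:

```python
import json

# Assumed endpoint, not taken from the thread.
QUANTUM_BASE = "http://127.0.0.1:9696/v1.0"

def build_port_update(tenant_id, net_id, port_id, profile):
    """Build the (hypothetical) URL and JSON body for pushing
    plugin-specific port data to a Quantum extension via PUT."""
    url = "%s/tenants/%s/networks/%s/ports/%s" % (
        QUANTUM_BASE, tenant_id, net_id, port_id)
    # Plugin-specific attributes ride alongside the standard port fields.
    body = json.dumps({"port": {"portprofile": profile}})
    return url, body

url, body = build_port_update("t1", "net1", "port1", {"vlan": 100})
```

An actual driver would hand `url` and `body` to an HTTP client; the point is only that the plugin-specific payload flows over the API rather than through an on-host agent.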



> I think there is another sub topic, but I am not sure yet.
> I agree that a configurable VIF driver is much better.
> For designing the configuration of vif-plugging, it is required that we discuss the granularity of selecting
>VIF Driver.
> Should The granularity of selecting VIF Driver be per node, VM, or VIF?
> Currently, VIF Driver would be configured in nova-compute.conf.
> This means that the granularity is per Hypervisor Node.
> To be more flexible, we might consider the case where VIF1 of VM1 connects to bridge and VIF2 of VM1 maps
>to a physical NIC
> directly.
> If so, it may raise another issue; how to determine connection type of VIF.
>
>
>
>That's an interesting use case, and something that we haven't tried to deal with yet. In your use case, who would
>determine how a VIF was mapped? Would it be a policy described by the service provider? Would it be part of the
>VM flavor? Adding this kind of flexibility is certainly possible, though you are the first person who has expressed
>a need for this type of flexibility.

It could be mixed.
I think that a cloud user specifies a vNIC option like "physical NIC mapping" as part of a VM flavor,
and then the service provider determines a hypervisor node and its available physical NIC.
It is not suitable for the cloud user to specify the physical NIC itself.
This thought is not yet clear enough to warrant its own session at the summit,
but I hope that we can discuss this issue in the vif-plugging session or somewhere else at the summit.
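As a rough sketch of the mixed model (the user picks a connection type via flavor, the provider resolves the concrete NIC), a dispatching VIF driver that selects per interface rather than per node might look like this. All class names and the flavor hint are invented for illustration:

```python
class BridgeVIFDriver(object):
    """Stand-in for a Linux-bridge style VIF driver."""
    def plug(self, vif):
        return "bridge:%s" % vif["id"]

class DirectNICVIFDriver(object):
    """Stand-in for direct physical-NIC mapping; in Ryota's model the
    provider (not the user) chooses the concrete NIC on the node."""
    def plug(self, vif):
        return "passthrough:%s" % vif["id"]

class DispatchingVIFDriver(object):
    """Pick a concrete driver per VIF from a flavor hint, instead of
    one driver per hypervisor node via nova-compute.conf."""
    DRIVERS = {"bridge": BridgeVIFDriver(), "direct": DirectNICVIFDriver()}

    def plug(self, vif, flavor_hint="bridge"):
        return self.DRIVERS[flavor_hint].plug(vif)
```

This keeps the per-node config as the default while letting a flavor attribute override the connection type for individual interfaces.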



> Is there any blueprint of Security Group in NetStack?
> I have a primitive proposal for a firewall model and API, and a firewall implementation on Quantum OpenFlow
>Plugin.
> That is just a prototype based on Quantum L2 API.
> But, this proposal shows how firewalling API should be.
> I think the first point in designing firewalling models is to which entity each rule is associated.
> Is the entity network or port?
> I hope that discussions on firewalling will lead to much richer functionality and APIs than security
>groups have currently.
>
>
>
>Dave Lapsley (on netstack list) is doing a session on this at the summit. Feel free to join as a driver. My thinking
>on the topic is that each Quantum port could be assigned one or more security groups. There is also scope, I believe,
>for more advanced ACLs that could be associated with each port, essentially consisting of inbound/outbound lists
>of rules that each have a "match" and an "action" (allow/deny). There is also the topic of NAT, which in my mind
>makes the most sense to think about in terms of our "L3 forwarding" discussion (see etherpad).
>

I found it.
I'll join the sessions.
Thanks!
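The per-port ACL model Dan describes above (inbound/outbound lists of rules, each with a "match" and an allow/deny "action") could be sketched roughly like this; the first-match-wins ordering and default-deny are assumptions, not settled design:

```python
from collections import namedtuple

# A rule is a set of fields to match plus an allow/deny action.
Rule = namedtuple("Rule", ["match", "action"])

class PortACL:
    """Ordered inbound/outbound rule lists attached to one Quantum
    port; first matching rule wins, default deny (an assumption)."""
    def __init__(self):
        self.inbound = []
        self.outbound = []

    def evaluate(self, direction, packet):
        for rule in getattr(self, direction):
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.action
        return "deny"

acl = PortACL()
acl.inbound.append(Rule(match={"proto": "tcp", "dport": 22}, action="allow"))
```

Attaching such an object per port, alongside one or more security groups, is one way to express the "more advanced ACLs" Dan mentions.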

> Finally, I would like to suggest that the Security Group session would be suitable to be held after Quantum
>L3 session.
> I think the Security Group is a cross layer function and its design should better be coupled with L3 design.
>
>
>
>I think our first task needs to be properly scoping each of these discussions, particularly on the many topics loosely
>called "L3". I've sent another email to the list with thoughts on breaking up those discussions.
>



Thanks,

Ryota MIBU



--
Mailing list: https://launchpad.net/~netstack
Post to : netstack [at] lists
Unsubscribe : https://launchpad.net/~netstack
More help : https://help.launchpad.net/ListHelp


dan at nicira

Apr 4, 2012, 4:38 PM

Post #2 of 10
Re: quantum community projects & folsom summit plans [In reply to]

Thanks for your thoughts on this Ryota. Some additional comments below.
Happy to chat more about it at the summit as well.

On Tue, Apr 3, 2012 at 11:42 PM, Ryota MIBU <r-mibu [at] cq> wrote:
>
> >
> >However, I think the goal should be that code that is in Nova is not
> specific to the plugin. As we talked about
> >earlier in the thread, you may need different types of vif-plugging to
> attach to different types of switching
> >technologies (e.g., OVS vs. bridge vs. VEPA/VNTAG), but I think that
> different plugins that use the same switching
> >technology should be able to use the same vif-plugging (for example,
> there are several plugins that all use the OVS
> >vif-plugging). Our goal here should be to minimize churn in the Nova
> codebase.
>
> Yes, I checked the Cisco driver and the portprofile extension.
> The Cisco driver seems to pass and retrieve the plugin-specific data with
> PUT method.
> I just thought that this kind of vif-plugging could be a general model for
> plugins that work without "agent".
> I agree with you that we should minimize churn in the Nova codebase.
> But, I still feel that the "agent" model, especially polling, is not so
> good.
> Although there are many topics on the summit,
> I hope that we could have discussion about vif-plugging changes.
>

One important thing to remember is that there's no requirement that a
plugin run an agent. I believe at least two plugins already don't use
agents at all, as they have other ways of remotely configuring the switches
when needed.

I think the polling performed by some plugins (e.g., OVS) you mention is
actually really easy to remove by sending notifications to agents using
something like RabbitMQ. This is something that's already planned for
Folsom.
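The polling-to-notifications change could be sketched like this, with a plain in-process queue standing in for RabbitMQ and an invented event format:

```python
import queue

class PortAgent:
    """A hypervisor-side agent that blocks on a message bus (RabbitMQ
    in the Folsom plan; a plain Queue stands in here) instead of
    polling on a timer. Event names are invented for illustration."""
    def __init__(self, bus):
        self.bus = bus
        self.port_state = {}

    def run_once(self):
        msg = self.bus.get()  # blocks until a notification arrives
        if msg["event"] == "port.admin_down":
            self.port_state[msg["port_id"]] = "down"
        elif msg["event"] == "port.admin_up":
            self.port_state[msg["port_id"]] = "up"

bus = queue.Queue()
bus.put({"event": "port.admin_down", "port_id": "p1"})
agent = PortAgent(bus)
agent.run_once()
```

The agent reacts immediately to events such as a tenant putting a port in "admin down", rather than discovering the change on its next poll cycle.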

I think the issue of "agents" is more fundamental. It may be that the term
"agent" is confusing, as it is really just code that runs as a service on
the hypervisor, just like nova-compute. The question is really whether
there should be a single python process (all network logic embedded in
nova), or one for nova and one for quantum. In a way, having a quantum
agent on the hypervisor is similar to what happens when running
nova-network in multi-host mode.

There are two key points here in my mind:
1) We want to minimize network related code churn in Nova. The vswitch
configuration supported by Quantum plugins will continue to grow over time,
and our goal should be that adding a new capability to a Quantum plugin
should rarely require Nova changes.
2) It's likely that more advanced plugins will need to make changes to the
vswitch at times other than vif-plug and vif-unplug. For example, consider
that quantum already exposes the ability to put a port in "admin down"
(i.e., no packets forwarded) at any point if a tenant makes an API request.


It may be that having a more flexible vif-plugging mechanism is still
valuable despite these points, so let's chat more about it at the summit.
Thanks again for your thoughts.


Dan




--
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~~~~~~~~~~~~~~~~~~~~~~~~~


chrisw at sous-sol

Apr 4, 2012, 5:05 PM

Post #3 of 10
Re: quantum community projects & folsom summit plans [In reply to]

* Ryota MIBU (r-mibu [at] cq) wrote:
> Hi Dan,
> > I think there is another sub topic, but I am not sure yet.
> > I agree that a configurable VIF driver is much better.
> > For designing the configuration of vif-plugging, it is required that we discuss the granularity of selecting
> >VIF Driver.
> > Should The granularity of selecting VIF Driver be per node, VM, or VIF?
> > Currently, VIF Driver would be configured in nova-compute.conf.
> > This means that the granularity is per Hypervisor Node.
> > To be more flexible, we might consider the case where VIF1 of VM1 connects to bridge and VIF2 of VM1 maps
> >to a physical NIC
> > directly.
> > If so, it may raise another issue; how to determine connection type of VIF.
> >That's an interesting use case, and something that we haven't tried to deal with yet. In your use case, who would
> >determine how a VIF was mapped? Would it be a policy described by the service provider? Would it be part of the
> >VM flavor? Adding this kind of flexibility is certainly possible, though you are the first person who has expressed
> >a need for this type of flexibility.
>
> It could be mixed.
> I think that a cloud user specifies vNIC option like "physical NIC mapping" as a VM flavor,
> then a service provider determines a hypervisor node and it's available physical NIC.

Yes, I've come across a similar issue, especially with SR-IOV virtual
functions instead of physical functions.

> It is not suitable that the cloud user specifies physical NIC itself.
> But this though is not clear enough to having a session on the summit,
> I hope that we discuss this issue on vif-plugging session or somewhere in the summit.



matt at nycresistor

Apr 4, 2012, 5:11 PM

Post #4 of 10
Re: quantum community projects & folsom summit plans [In reply to]

SR-IOV is a mess. I recommend avoiding it entirely.



dan at nicira

Apr 4, 2012, 5:23 PM

Post #5 of 10
Re: quantum community projects & folsom summit plans [In reply to]

On Wed, Apr 4, 2012 at 5:05 PM, Chris Wright <chrisw [at] sous-sol> wrote:

> * Ryota MIBU (r-mibu [at] cq) wrote:
> > Hi Dan,
> > > I think there is another sub topic, but I am not sure yet.
> > > I agree that a configurable VIF driver is much better.
> > > For designing the configuration of vif-plugging, it is required
> that we discuss the granularity of selecting
> > >VIF Driver.
> > > Should The granularity of selecting VIF Driver be per node, VM, or
> VIF?
> > > Currently, VIF Driver would be configured in nova-compute.conf.
> > > This means that the granularity is per Hypervisor Node.
> > > To be more flexible, we might consider the case where VIF1 of VM1
> connects to bridge and VIF2 of VM1 maps
> > >to a physical NIC
> > > directly.
> > > If so, it may raise another issue; how to determine connection
> type of VIF.
> > >That's an interesting use case, and something that we haven't tried to
> deal with yet. In your use case, who would
> > >determine how a VIF was mapped? Would it be a policy described by the
> service provider? Would it be part of the
> > >VM flavor? Adding this kind of flexibility is certainly possible,
> though you are the first person who has expressed
> > >a need for this type of flexibility.
> >
> > It could be mixed.
> > I think that a cloud user specifies vNIC option like "physical NIC
> mapping" as a VM flavor,
> > then a service provider determines a hypervisor node and it's available
> physical NIC.
>
> Yes, come across similar issue, esp w/ SR-IOV virtual functions instead of
> physical functions.
>

Whoops, sorry Ryota, I missed this part of your email.

I understand the use case. Being able to invoke different types of
vif-plugging based on flavor would be one way to think about it. Another
way to think about it would be a single vif-plugging mechanism
that can configure the vif in two ways based on the flavor.

dan



>
> > It is not suitable that the cloud user specifies physical NIC itself.
> > But this though is not clear enough to having a session on the summit,
> > I hope that we discuss this issue on vif-plugging session or somewhere
> in the summit.
>



--
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~~~~~~~~~~~~~~~~~~~~~~~~~


chrisw at sous-sol

Apr 4, 2012, 5:32 PM

Post #6 of 10
Re: quantum community projects & folsom summit plans [In reply to]

* Matt Joyce (matt [at] nycresistor) wrote:
> SR-IOV is a mess. I recommend avoiding it entirely.

Some providers are interested in giving a VM direct access to hw as a
differentiator. Same as MIBU-san's use case. SR-IOV is just one way to
achieve that (and I agree, plenty of complications there).



r-mibu at cq

Apr 4, 2012, 8:51 PM

Post #7 of 10
Re: quantum community projects & folsom summit plans [In reply to]

Thank you, guys.
I hope that we have more discussions about this topic at the summit.

Ryota



irenab at mellanox

Apr 5, 2012, 1:26 AM

Post #8 of 10
Re: quantum community projects & folsom summit plans [In reply to]

Hi Dan,
I would like to second the idea of minimizing the code that has to be integrated into nova.
One of Quantum's stated goals is that it can be positioned as a standalone service.
In that case, the quantum service on the hypervisor should be able to run in environments other than OpenStack.

Irena



snaiksat at cisco

Apr 5, 2012, 1:42 AM

Post #9 of 10
Re: quantum community projects & folsom summit plans [In reply to]

Here's a thought - to the point about avoiding churn in the nova
code (on account of Quantum-plugin-specific VIF drivers), how about we try to
solve this issue via packaging? What if we had a separate Quantum nova
driver package which, when deployed, installs the VIF drivers in the
appropriate location(s)? The existing location for the VIF drivers can be
used as an installation target (or some new convention can be set up).
In general, this approach would avoid having to add/update
VIF-driver code in nova every time a Quantum plugin requires it.



Note also that I am suggesting a separate driver package, not making it a
part of the Quantum server or client/common package, since these drivers
are nova-specific and need to be installed only for nova, and only after nova
is installed.
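A separate driver package works because Nova resolves its VIF driver from a dotted class name in configuration; a minimal stand-in for that import mechanism (not Nova's actual helper) shows why no nova code change is needed as long as the configured class is importable from whatever package ships it:

```python
import importlib

def import_class(dotted_path):
    """Resolve 'package.module.Class' to the class object, roughly
    what nova does with its configured VIF driver name. A driver
    shipped in a separate package only has to be importable under
    the name set in nova-compute.conf."""
    module_name, cls_name = dotted_path.rsplit(".", 1)
    return getattr(importlib.import_module(module_name), cls_name)

# Stand-in class purely to demonstrate the mechanism across packages:
driver_cls = import_class("collections.OrderedDict")
```

Under this scheme, installing the Quantum nova driver package and pointing the config flag at its class is the whole integration step.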



Thanks,

~Sumit.



From: netstack-bounces+snaiksat=cisco.com [at] lists
[mailto:netstack-bounces+snaiksat=cisco.com [at] lists] On
Behalf Of Dan Wendlandt
Sent: Wednesday, April 04, 2012 4:39 PM
To: Ryota MIBU
Cc: netstack [at] lists
Subject: Re: [Netstack] quantum community projects & folsom summit plans



Thanks for your thoughts on this Ryota. Some additional comments below.
Happy to chat more about it at the summit as well.

On Tue, Apr 3, 2012 at 11:42 PM, Ryota MIBU <r-mibu [at] cq>
wrote:

>
>However, I think the goal should be that code that is in Nova is not
specific to the plugin. As we talked about
>earlier in the thread, you may need different types of vif-plugging to
attach to different types of switching
>technologies (e.g., OVS vs. bridge vs. VEPA/VNTAG), but I think that
different plugins that use the same switching
>technology should be able to use the same vif-plugging (for example,
there are several plugins that all use the OVS
>vif-plugging). Our goal here should be to minimize churn in the Nova
codebase.

Yes, I checked the Cisco driver and the portprofile extension.
The Cisco driver seems to pass and retrieve the plugin-specific data
with PUT method.
I just thought that this kind of vif-plugging could be a general model
for plugins that work without "agent".
I agree with you that we should minimize churn in the Nova codebase.
But, I still feel that the "agent" model, especially polling, is not so
good.
Although there are many topics on the summit,
I hope that we could have discussion about vif-plugging changes.



One important thing to remember is that there's no requirement that a
plugin run an agent. I believe at least two plugins already don't use
agents at all, as they have other ways of remotely configuring the
switches when needed.



I think the polling you mention, performed by some plugins (e.g., OVS),
is actually quite easy to remove by sending notifications to agents
using something like RabbitMQ. This is something that's already planned
for Folsom.
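To make the polling-vs-notifications point concrete, here is a minimal
sketch, purely illustrative: an in-process `queue.Queue` stands in for
a RabbitMQ channel, and the names (`plugin_publish`, `agent_loop`) are
my own invention, not actual Quantum code:

```python
import queue
import threading
import time

# Stand-in for a RabbitMQ channel: the plugin publishes port events and
# the agent blocks on the queue instead of polling the database.
events = queue.Queue()
applied = []                 # records what the agent acted on
stop = threading.Event()

def plugin_publish(port_id, admin_state_up):
    """Plugin side: push a port-update notification onto the 'channel'."""
    events.put({"port_id": port_id, "admin_state_up": admin_state_up})

def agent_loop():
    """Agent side: wake up only when a notification arrives."""
    while not stop.is_set():
        try:
            event = events.get(timeout=0.05)
        except queue.Empty:
            continue
        applied.append(event)  # a real agent would reconfigure OVS here

worker = threading.Thread(target=agent_loop)
worker.start()
plugin_publish("port-1", False)   # e.g., tenant sets the port admin-down
time.sleep(0.2)                   # give the agent a moment to consume
stop.set()
worker.join()
```

The key difference from the current polling model is that the agent
does no periodic database scans; it only reacts when an event arrives.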



I think the issue of "agents" is more fundamental. It may be that the
term "agent" is confusing, as it is really just code that runs as a
service on the hypervisor, just like nova-compute. The question is
really whether there should be a single python process (all network
logic embedded in nova), or one for nova and one for quantum. In a way,
having a quantum agent on the hypervisor is similar to what happens
when running nova-network in multi-host mode.



There are two key points here in my mind:

1) We want to minimize network related code churn in Nova. The vswitch
configuration supported by Quantum plugins will continue to grow over
time, and our goal should be that adding a new capability to a Quantum
plugin should rarely require Nova changes.

2) It's likely that more advanced plugins will need to make changes to
the vswitch at times other than vif-plug and vif-unplug. For example,
consider that Quantum already exposes the ability to put a port in
"admin down" (i.e., no packets forwarded) at any point if a tenant
makes an API request.
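As a rough sketch of point 2, a plugin handling an "admin down" request
that arrives at an arbitrary time might look something like this (all
names hypothetical; a real plugin would program the vswitch, not a
dict):

```python
# Hypothetical port table keyed by port id; a real plugin would hold
# vswitch state (e.g., OVS flows), not a Python dict.
ports = {"port-1": {"admin_state_up": True, "forwarding": True}}

def set_admin_state(port_id, admin_state_up):
    """Handle a tenant API request that may arrive at any time,
    not just at vif-plug or vif-unplug."""
    port = ports[port_id]
    port["admin_state_up"] = admin_state_up
    # With admin state down, the vswitch must stop forwarding packets
    # on this port.
    port["forwarding"] = admin_state_up
    return port

result = set_admin_state("port-1", False)
```

The point is simply that this code path has nothing to do with VM boot
or shutdown, so vif-plug hooks alone cannot cover it.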



It may be that having a more flexible vif-plugging mechanism is still
valuable despite these points, so let's chat more about it at the
summit. Thanks again for your thoughts.





Dan








> I think there is another sub topic, but I am not sure yet.
> I agree that a configurable VIF driver is much better.
> For designing the configuration of vif-plugging, we need to discuss
> the granularity of selecting the VIF driver.
> Should the granularity of selecting the VIF driver be per node, VM,
> or VIF?
> Currently, the VIF driver is configured in nova-compute.conf.
> This means that the granularity is per hypervisor node.
> To be more flexible, we might consider the case where VIF1 of VM1
> connects to a bridge and VIF2 of VM1 maps to a physical NIC directly.
> If so, it may raise another issue: how to determine the connection
> type of a VIF.
>
>
>
>That's an interesting use case, and something that we haven't tried
>to deal with yet. In your use case, who would determine how a VIF was
>mapped? Would it be a policy described by the service provider? Would
>it be part of the VM flavor? Adding this kind of flexibility is
>certainly possible, though you are the first person who has expressed
>a need for this type of flexibility.

It could be mixed.
I think that a cloud user would specify a vNIC option like "physical
NIC mapping" as part of a VM flavor,
and then the service provider would determine a hypervisor node and its
available physical NIC.
It is not appropriate for the cloud user to specify the physical NIC
itself.
But this thought is not yet clear enough to warrant its own session at
the summit,
so I hope we can discuss this issue in the vif-plugging session or
somewhere else at the summit.
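The granularity question above could be sketched as a lookup that
falls back from per-VIF configuration to the node-wide default (all
names are hypothetical; today nova-compute.conf effectively gives only
the node-wide case):

```python
# Node-wide default, as nova-compute.conf effectively provides today.
NODE_DEFAULT_DRIVER = "LibvirtOpenVswitchDriver"

# Hypothetical per-VIF overrides, e.g. derived from a VM flavor that
# requests direct physical-NIC mapping for one interface only.
PER_VIF_DRIVER = {("vm1", "vif2"): "LibvirtPhysicalNicDriver"}

def select_vif_driver(vm_id, vif_id):
    """Per-VIF granularity with fallback to the per-node default."""
    return PER_VIF_DRIVER.get((vm_id, vif_id), NODE_DEFAULT_DRIVER)
```

This keeps the common case (one driver per node) unchanged while
allowing VIF1 and VIF2 of the same VM to use different plugging.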




> Is there any blueprint for Security Groups in NetStack?
> I have a preliminary proposal for a firewall model and API, and a
> firewall implementation on the Quantum OpenFlow Plugin.
> It is just a prototype based on the Quantum L2 API,
> but the proposal shows how a firewalling API could look.
> I think the first question in designing firewalling models is which
> entity each rule is associated with.
> Is the entity a network or a port?
> I hope our discussions on firewalling will lead to much more
> functionality and richer APIs than security groups currently have.
>
>
>
>Dave Lapsley (on the netstack list) is doing a session on this at the
>summit. Feel free to join as a driver. My thinking on the topic is
>that each Quantum port could be assigned one or more security groups.
>There is also scope, I believe, for more advanced ACLs that could be
>associated with each port, essentially consisting of inbound/outbound
>lists of rules that each have a "match" and an "action" (allow/deny).
>There is also the topic of NAT, which in my mind makes the most sense
>to think about in terms of our "L3 forwarding" discussion (see
>etherpad).
>

I found it.
I'll join the sessions.
Thanks!
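The per-port ACL idea Dan describes above, inbound/outbound rule
lists where each rule has a "match" and an allow/deny "action", could
be sketched roughly like this (illustrative only; the field names are
assumptions, not any proposed API):

```python
# Each rule pairs a match dict (fields that must equal the packet's)
# with an action. First matching rule wins; the default is deny.
inbound_rules = [
    {"match": {"proto": "tcp", "dst_port": 22}, "action": "allow"},
    {"match": {"proto": "tcp"}, "action": "deny"},
]

def evaluate(rules, packet):
    """Return the action of the first rule whose match fields all
    equal the packet's fields; deny if no rule matches."""
    for rule in rules:
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            return rule["action"]
    return "deny"
```

Whether such rule lists attach to a port or to a network is exactly
the association question raised earlier in the thread.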


> Finally, I would like to suggest that the Security Group session be
> held after the Quantum L3 session.
> I think Security Groups are a cross-layer function and their design
> should be coupled with the L3 design.
>
>
>
>I think our first task needs to be properly scoping each of these
>discussions, particularly the many topics loosely called "L3". I've
>sent another email to the list with thoughts on breaking up those
>discussions.
>




Thanks,

Ryota MIBU









--
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt

Nicira, Inc: www.nicira.com

twitter: danwendlandt
~~~~~~~~~~~~~~~~~~~~~~~~~~~


dan at nicira

Apr 5, 2012, 8:42 AM

Post #10 of 10 (592 views)
Permalink
Re: quantum community projects & folsom summit plans [In reply to]

On Thu, Apr 5, 2012 at 1:42 AM, Sumit Naiksatam (snaiksat) <
snaiksat [at] cisco> wrote:

> Here's a thought - to the point about avoiding churn in the nova code
> (on account of Quantum-plugin-specific VIF drivers), how about we try
> to solve this issue via packaging? What if we have a separate Quantum
> nova driver package which, when deployed, installs the VIF drivers in
> the appropriate location(s)? The existing location for the VIF
> drivers can be used as an installation target (or some new convention
> can be set up). But, in general, this approach avoids having to
> add/update VIF-driver code in nova every time a Quantum plugin
> requires it.
>
> Note also that I am suggesting a separate driver package, and not as a
> part of the Quantum server or client/common package since these drivers are
> nova-specific and need to be installed only for nova and after nova is
> installed.
>

Sumit, I think that's an option, but I intentionally didn't suggest it
in my previous mail because we've been burned before by having
vif-drivers out of tree. The problem is that when people make changes
on the Nova side, they grep for all code that uses X and change it, run
the unit tests, see no issues, and push the code. If a vif-driver isn't
in the Nova tree, it isn't seen by those greps and it isn't covered by
Nova unit tests. The result is broken vif-drivers.

This problem is not insurmountable: such issues could potentially be
caught if we developed a unit test framework that lived with the
external vif-driver code and regularly ran tests that pulled in the
latest Nova code and checked for breakage; it just adds a fair amount
of complexity. My feeling is that the complexity is not warranted if
the sole goal is avoiding an "agent", but perhaps there are other
reasons to support significant complexity in vif-drivers that I am not
thinking about.
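At its simplest, the kind of compatibility check such a framework
would run could assert that an out-of-tree driver still implements
whatever interface the current Nova tree expects. A sketch (the
interface and driver classes here are stand-ins I made up; the real
vif-driver API is defined in Nova):

```python
import inspect

class NovaVIFDriverInterface:
    """Stand-in for the interface the current Nova tree expects."""
    def plug(self, instance, vif):
        raise NotImplementedError

    def unplug(self, instance, vif):
        raise NotImplementedError

class ExternalVIFDriver:
    """An out-of-tree driver that tracks the interface correctly."""
    def plug(self, instance, vif):
        return ("plugged", instance, vif)

    def unplug(self, instance, vif):
        return ("unplugged", instance, vif)

class BrokenDriver:
    """Simulates a driver broken by a Nova-side rename: no unplug."""
    def plug(self, instance, vif):
        return ("plugged", instance, vif)

def is_compatible(driver_cls, interface_cls):
    """Check every interface method exists with the same signature."""
    for name, member in inspect.getmembers(interface_cls,
                                           inspect.isfunction):
        impl = getattr(driver_cls, name, None)
        if impl is None:
            return False
        if inspect.signature(impl) != inspect.signature(member):
            return False
    return True
```

Run regularly against the latest Nova code, a check like this would
flag the grep-invisible breakage Dan describes, at the cost of the
extra infrastructure he mentions.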

Dan






--
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~~~~~~~~~~~~~~~~~~~~~~~~~
