
[openstack-dev] [Neutron][ML2] Modular L2 agent architecture

0 votes

Following the discussions in the ML2 subgroup weekly meetings, I have added
more information on the etherpad [1] describing the proposed architecture
for modular L2 agents. I have also posted some code fragments at [2]
sketching the implementation of the proposed architecture. Please have a
look when you get a chance and let us know if you have any comments.

[1] https://etherpad.openstack.org/p/modular-l2-agent-outline
[2] https://review.openstack.org/#/c/99187/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-dev/attachments/20140610/6d18fe61/attachment.html

asked Jun 10, 2014 in openstack-dev by Mohammad_Banikazemi (3,160 points)   2 2
retagged Jan 28, 2015 by admin

15 Responses

0 votes

Hi:

Awesome! Currently we are suffering from lots of bugs in ovs-agent, and we
also intend to rebuild a more stable, flexible agent.

Judging from our experience with ovs-agent bugs, I think concurrency is
also a very important problem: the agent receives lots of events from
different greenlets (the RPC handlers, the OVS monitor, the main loop).
I'd suggest serializing all events into a queue, then processing them in
a dedicated thread. The thread checks the events one by one, in order,
resolves what has changed, and then applies the corresponding changes.
If any error occurs in the thread, discard the event currently being
processed and issue a fresh-start event, which resets everything and then
applies the correct settings.

The threading model is very important and may prevent tons of bugs in
future development; we should describe it clearly in the
architecture.
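
The serialized-event model described above can be sketched roughly as
follows. This is a hedged illustration, not existing Neutron code: the
names (EventWorker, FULL_RESYNC, apply_event, resync) are made up for
the example.

```python
import queue
import threading

FULL_RESYNC = object()  # sentinel event: reset everything, reapply state


class EventWorker:
    """Serialize events from many sources into one processing thread."""

    def __init__(self, apply_event, resync):
        self._q = queue.Queue()
        self._apply = apply_event  # applies the changes for one event
        self._resync = resync      # fresh start: reset and reapply all
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def submit(self, event):
        # Safe to call from any greenlet/thread: it only enqueues.
        self._q.put(event)

    def _run(self):
        while True:
            event = self._q.get()
            try:
                if event is FULL_RESYNC:
                    self._resync()
                else:
                    self._apply(event)
            except Exception:
                # Discard the failing event and schedule a fresh start,
                # which resets everything and applies correct settings.
                self._q.put(FULL_RESYNC)
            finally:
                self._q.task_done()
```

With this shape, the RPC handlers, the OVS monitor, and the main loop all
just call `submit()`, and only the worker thread ever touches the switch.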

On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi wrote:


OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
responded Jun 17, 2014 by Zang_MingJie (740 points)   1 3
0 votes

We've started doing this in a slightly more reasonable way for Icehouse.
What we've done is:
- remove unnecessary notifications from the server
- process all port-related events, whether triggered via RPC or via the
monitor, in one place

Obviously there is always a lot of room for improvement, and I agree
something along the lines of what Zang suggests would be more maintainable
and ensure faster event processing, as well as making it easier to have
some form of reliability on event processing.

I was considering doing something for the ovs-agent again in Juno, but
since we're moving towards a unified agent, I think any new "big" ticket
should address this effort.

Salvatore

On 17 June 2014 13:31, Zang MingJie wrote:


responded Jun 17, 2014 by Salvatore_Orlando (12,280 points)   2 5 8
0 votes

Another area of improvement for the agent would be to move away from
executing CLIs for port commands and instead use OVSDB. Terry Wilson
and I talked about this, and rewriting ovs_lib to use an OVSDB
connection instead of the CLI methods would be a huge improvement
here. I'm not sure if Terry was going to move forward with this, but
I'd be in favor of it for Juno if he or someone else wants to move
in this direction.
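
As an illustration of the difference (a hedged sketch, not the real
ovs_lib API; the class and parameter names are made up): today each port
operation shells out to ovs-vsctl, paying fork/exec overhead per call,
while an OVSDB connection would let the agent queue several operations
and commit them in a single transaction over a persistent socket.

```python
import subprocess


class CliOvsBridge:
    """Current style: one ovs-vsctl process per port operation."""

    def __init__(self, bridge, run=subprocess.check_output):
        self.bridge = bridge
        self._run = run  # injectable for testing

    def add_port(self, port):
        self._run(["ovs-vsctl", "add-port", self.bridge, port])


class OvsdbTransaction:
    """Hypothetical OVSDB-style transaction: queue ops, commit once."""

    def __init__(self, bridge, commit):
        self.bridge = bridge
        self._commit = commit  # would send one transact message
        self._ops = []

    def add_port(self, port):
        self._ops.append(("add-port", self.bridge, port))
        return self

    def commit(self):
        # A single round trip applies all queued operations atomically.
        self._commit(self._ops)
        self._ops = []
```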

Thanks,
Kyle

On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando wrote:
responded Jun 17, 2014 by mestery_at_noironetw (5,980 points)   1 3 3
0 votes

Just a provocative thought: if we used the OVSDB connection instead, do we
really need an L2 agent? :P

On 17 June 2014 18:38, Kyle Mestery wrote:


responded Jun 17, 2014 by Armando_M. (23,560 points)   2 4 8
0 votes

Not if you use ODL, and we don't want to reinvent that wheel. But
skipping CLI commands and instead using OVSDB programmatically from the
agent to ovs-vswitchd would be a decent improvement.

On Tue, Jun 17, 2014 at 11:56 AM, Armando M. wrote:
responded Jun 17, 2014 by mestery_at_noironetw (5,980 points)   1 3 3
0 votes

Managing the ports and plumbing logic is today driven by the L2 agent,
with little assistance from the controller.

If we plan to move that functionality to the controller, the controller
has to be more heavyweight (both hardware and software), since it has to
do the job of the L2 agent for all the compute servers in the cloud. We
would need to re-verify all the scale numbers for the controller when
PoC'ing such a change.

That said, replacing the CLI with direct OVSDB calls in the L2 agent is
certainly a good direction.

Today the OVS agent invokes flow calls of OVS-Lib, but has no way to
follow up on the success or failure of those invocations, nor is there
any guarantee that all such flow invocations will actually be executed by
the third process spawned by OVS-Lib to run the CLI.

When we transition to OVSDB calls, which are more programmatic in nature,
we can enhance the Flow API (OVS-Lib) to provide more fine-grained
errors/return codes (or content), and the ovs-agent (and even other
components) can act on such return state more intelligently and
appropriately.
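
A minimal sketch of what such fine-grained return state could look like;
the FlowResult type, the error codes, and the add_flow signature below
are illustrative assumptions, not the actual OVS-Lib Flow API.

```python
from dataclasses import dataclass

OK = "ok"
TABLE_MISSING = "table-missing"


@dataclass
class FlowResult:
    """Outcome of one flow operation, instead of fire-and-forget."""
    code: str
    detail: str = ""

    @property
    def ok(self):
        return self.code == OK


def add_flow(bridge, table, match, actions, known_tables=frozenset({0})):
    """Install a flow, reporting *why* it failed if it did."""
    if table not in known_tables:
        # A programmatic connection can surface the rejection reason;
        # the agent can then resync, retry, or alarm instead of guessing.
        return FlowResult(TABLE_MISSING,
                          "table %d not present on %s" % (table, bridge))
    return FlowResult(OK)
```

The caller can then branch on the result instead of hoping the CLI child
process succeeded.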

--

Thanks,

Vivek

From: Armando M. [mailto:armamig at gmail.com]
Sent: Tuesday, June 17, 2014 10:26 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture


responded Jun 17, 2014 by Narasimhan,_Vivekana (660 points)  
0 votes

Hi,
Does it make sense also to offer a choice between the ovs-ofctl CLI and a
direct OF 1.3 connection in the ovs-agent?
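
A rough sketch of how such a choice could be made configurable: the agent
picks a flow-programming driver by name at startup. The option value and
the driver classes here are hypothetical, not existing Neutron config.

```python
class OfctlCliFlows:
    """Program flows by shelling out to the ovs-ofctl CLI."""
    name = "ovs-ofctl"

    def add_flow(self, bridge, spec):
        # Returns the argv that would be executed.
        return ["ovs-ofctl", "add-flow", bridge, spec]


class NativeOF13Flows:
    """Program flows over a direct OpenFlow 1.3 connection."""
    name = "native"

    def add_flow(self, bridge, spec):
        # Would encode spec as an OF 1.3 flow_mod on the wire;
        # represented here as a tagged tuple for illustration.
        return ("OFPT_FLOW_MOD", bridge, spec)


_DRIVERS = {d.name: d for d in (OfctlCliFlows, NativeOF13Flows)}


def load_flow_driver(of_interface="ovs-ofctl"):
    """Select the flow backend from a config value."""
    return _DRIVERS[of_interface]()
```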

Best Regards,
Racha

On Tue, Jun 17, 2014 at 10:25 AM, Narasimhan, Vivekanandan <
vivekanandan.narasimhan at hp.com> wrote:


responded Jun 17, 2014 by racha (380 points)   1
0 votes

Mine wasn't really a serious suggestion. Neutron's controlling logic is
already bloated as it is, and my personal opinion would be in favor of a
leaner Neutron server rather than a more complex one; adding more
controller-like logic to it certainly goes against that direction :)

Having said that, and as Vivek pointed out, using OVSDB gives us finer
control and the ability to react more effectively. However, with the
current server-agent RPC framework there's no way of leveraging that, so
in the grand scheme of things I'd rather see it prioritized lower rather
than higher, to give precedence to rearchitecting the framework first.

Armando

On 17 June 2014 19:25, Narasimhan, Vivekanandan <
vivekanandan.narasimhan at hp.com> wrote:


responded Jun 17, 2014 by Armando_M. (23,560 points)   2 4 8
0 votes

No. ovs_lib invokes both ovs-vsctl and ovs-ofctl.
ovs-vsctl speaks OVSDB protocol, ovs-ofctl speaks OF-wire.

thanks,

On Tue, Jun 17, 2014 at 01:25:59PM -0500,
Kyle Mestery wrote:

I don't think so. Once we implement the OVSDB support, we will
deprecate using the CLI commands in ovs_lib.

On Tue, Jun 17, 2014 at 12:50 PM, racha wrote:

Hi,
Does it make sense also to have the choice between ovs-ofctl CLI and a
direct OF1.3 connection too in the ovs-agent?

Best Regards,
Racha

On Tue, Jun 17, 2014 at 10:25 AM, Narasimhan, Vivekanandan
<vivekanandan.narasimhan at hp.com> wrote:

Managing the ports and plumbing logic is today driven by the L2 agent, with
little assistance from the controller.

If we plan to move that functionality to the controller, the controller has
to be more heavyweight (both hardware and software), since it has to do the
job of the L2 agent for all the compute servers in the cloud. We would need
to re-verify all the scale numbers for the controller when PoC'ing such a
change.

That said, replacing the CLI with direct OVSDB calls in the L2 agent is
certainly a good direction.

Today, the OVS agent invokes the flow calls of OVS-Lib but has no way of
following up on the success or failure of such invocations. Nor is there any
guarantee that all such flow invocations will actually be executed by the
third process spawned by OVS-Lib to run the CLI.

When we transition to OVSDB calls, which are more programmatic in nature, we
can enhance the Flow API (OVS-Lib) to provide more fine-grained errors/return
codes (or content), and the ovs-agent (and even other components) can act on
such return state more intelligently/appropriately.
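
To make that concrete, here is a hypothetical sketch of such a fine-grained
Flow API result. The names (FlowResult, BackendError) and the send_flow_mod
call are illustrative only, not existing OVS-Lib code:

```python
from dataclasses import dataclass


class BackendError(Exception):
    # Raised by a programmatic (OVSDB/OpenFlow) backend on failure.
    def __init__(self, code, msg=""):
        super().__init__(msg)
        self.code = code


@dataclass
class FlowResult:
    success: bool
    error_code: str = ""   # e.g. an OpenFlow error type reported by the switch
    detail: str = ""


def add_flow(bridge, flow, backend):
    # Unlike a fire-and-forget CLI invocation, failures come back as data
    # that the agent (and even other components) can act on.
    try:
        backend.send_flow_mod(bridge, flow)
        return FlowResult(success=True)
    except BackendError as e:
        return FlowResult(success=False, error_code=e.code, detail=str(e))
```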

--

Thanks,

Vivek

From: Armando M. [mailto:armamig at gmail.com]
Sent: Tuesday, June 17, 2014 10:26 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

just a provocative thought: If we used the ovsdb connection instead, do we
really need an L2 agent :P?

On 17 June 2014 18:38, Kyle Mestery wrote:

Another area of improvement for the agent would be to move away from
executing CLIs for port commands and instead use OVSDB. Terry Wilson
and I talked about this, and re-writing ovs_lib to use an OVSDB
connection instead of the CLI methods would be a huge improvement
here. I'm not sure if Terry was going to move forward with this, but
I'd be in favor of this for Juno if he or someone else wants to move
in this direction.

Thanks,
Kyle

On Tue, Jun 17, 2014 at 11:24 AM, Salvatore Orlando
wrote:

We've started doing this in a slightly more reasonable way for icehouse.
What we've done is:
- remove unnecessary notifications from the server
- process all port-related events, whether triggered via RPC or via the
monitor, in one place

Obviously there is always a lot of room for improvement, and I agree
something along the lines of what Zang suggests would be more maintainable
and ensure faster event processing, as well as making it easier to have some
form of reliability on event processing.

I was considering doing something for the ovs-agent again in Juno, but since
we're moving towards a unified agent, I think any new "big" ticket should
address this effort.

Salvatore

On 17 June 2014 13:31, Zang MingJie wrote:

Hi:

Awesome! We are currently suffering from lots of bugs in the ovs-agent, and
also intend to rebuild a more stable, flexible agent.

Drawing on the experience of those ovs-agent bugs, I think concurrency is
also a very important problem: the agent gets lots of events from different
greenlets (the RPC handlers, the ovs monitor, or the main loop). I'd suggest
serializing all events into a queue, then processing them in a dedicated
thread. The thread checks the events one by one, in order, resolves what has
changed, and then applies the corresponding changes. If any error occurs in
the thread, discard the event currently being processed and issue a
fresh-start event, which resets everything and then applies the correct
settings.

The threading model is so important, and may prevent tons of bugs in future
development, that we should describe it clearly in the architecture.
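
A minimal sketch of that model, with illustrative names only: every event
source (RPC, ovs monitor, main loop) enqueues; one worker thread drains the
queue in order; any failure discards the event and schedules a full resync.

```python
import queue
import threading


class SerializedAgent:
    RESYNC = object()  # sentinel event: reset everything, reapply settings

    def __init__(self, apply_fn, resync_fn):
        self._events = queue.Queue()
        self._apply = apply_fn      # apply one change
        self._resync = resync_fn    # fresh start: reset and reapply all
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def enqueue(self, event):
        # Called from any greenlet/thread: RPC handler, ovs monitor, ...
        self._events.put(event)

    def _run(self):
        while True:
            event = self._events.get()
            try:
                if event is self.RESYNC:
                    self._resync()
                else:
                    self._apply(event)
            except Exception:
                # Discard the failing event and schedule a fresh start.
                self.enqueue(self.RESYNC)
            finally:
                self._events.task_done()
```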

On Wed, Jun 11, 2014 at 4:19 AM, Mohammad Banikazemi
wrote:

Following the discussions in the ML2 subgroup weekly meetings, I have added
more information on the etherpad [1] describing the proposed architecture
for modular L2 agents. I have also posted some code fragments at [2]
sketching the implementation of the proposed architecture. Please have a
look when you get a chance and let us know if you have any comments.

[1] https://etherpad.openstack.org/p/modular-l2-agent-outline
[2] https://review.openstack.org/#/c/99187/


OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Isaku Yamahata <isaku.yamahata at gmail.com>

responded Jun 18, 2014 by Isaku_Yamahata (2,480 points)   2 3
0 votes

Hi. Ryu provides an ovs_vsctl.py library which is the Python equivalent of
the ovs-vsctl command. It speaks the OVSDB protocol.
https://github.com/osrg/ryu/blob/master/ryu/lib/ovs/vsctl.py

So with that library, converting ovs_lib.py is mostly a mechanical change, I
think. I'm not aware of any other similar library written in Python.
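
One way to keep that conversion mechanical would be to put a transport
interface behind the existing ovs_lib-style API, so an OVSDB-speaking
backend (e.g. one built on Ryu's vsctl library) can replace the CLI one
without touching callers. A hypothetical sketch, with illustrative names
only, not Neutron's actual classes:

```python
import subprocess


class CliBackend:
    # Today's transport: shell out to the ovs-vsctl CLI.
    def run(self, *args):
        return subprocess.check_output(("ovs-vsctl",) + args).decode()


class OVSBridge:
    # Callers keep this ovs_lib-style API; only the backend changes when
    # the CLI is swapped for an OVSDB-speaking library.
    def __init__(self, name, backend):
        self.name = name
        self.backend = backend

    def add_port(self, port):
        return self.backend.run("add-port", self.name, port)

    def delete_port(self, port):
        return self.backend.run("del-port", self.name, port)
```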

thanks,
Isaku Yamahata

On Tue, Jun 17, 2014 at 11:38:36AM -0500,
Kyle Mestery wrote:

Another area of improvement for the agent would be to move away from
executing CLIs for port commands and instead use OVSDB. Terry Wilson
and I talked about this, and re-writing ovs_lib to use an OVSDB
connection instead of the CLI methods would be a huge improvement
here. I'm not sure if Terry was going to move forward with this, but
I'd be in favor of this for Juno if he or someone else wants to move
in this direction.

Thanks,
Kyle


--
Isaku Yamahata <isaku.yamahata at gmail.com>

responded Jun 18, 2014 by Isaku_Yamahata (2,480 points)   2 3
...