
[openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

0 votes

Dear all & TC & PTL,

In the 40-minute cross-project summit session "Approaches for scaling out"[1], almost 100 people attended, and the conclusion was that cells cannot cover the use cases and requirements which the OpenStack cascading solution[2] aims to address; the background is included in this mail.

After the summit, we ported the PoC[3] source code from an Icehouse base to a Juno base.

Now, let's move forward:

The major task is to introduce new drivers/agents to the existing core projects, since the core idea of cascading is to use Nova as the hypervisor backend of Nova, Cinder as the block-storage backend of Cinder, Neutron as the backend of Neutron, Glance as one image location of Glance, and Ceilometer as the store of Ceilometer.
a). A cross-program decision is needed on whether to run cascading as an incubated project or to register blueprints separately in each involved project. CI for cascading is quite different from a traditional test environment; at least 3 OpenStack instances are required for cross-OpenStack networking test cases.
b). A volunteer is needed as the cross-project coordinator.
c). Volunteers are needed for implementation and CI.
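
The driver/agent idea above can be sketched as follows. This is a hypothetical illustration of the cascading pattern, not the actual tricircle PoC code; the class name and the injected novaclient-like interface are assumptions for the sketch:

```python
# Hypothetical sketch of the cascading idea for Nova: a "virt driver" in the
# top-level (cascading) Nova whose "hypervisor" is a whole lower-level
# (cascaded) OpenStack, driven through its standard public API. Names here
# (CascadedNovaDriver, the injected client) are illustrative only.

class CascadedNovaDriver:
    """Proxies instance lifecycle calls to a cascaded OpenStack's Nova API."""

    def __init__(self, nova_client):
        # nova_client is any object exposing a novaclient-like
        # servers.create/servers.delete interface for the cascaded site.
        self.nova = nova_client
        # Map top-level instance UUIDs to server IDs in the cascaded site.
        self.mapping = {}

    def spawn(self, instance_uuid, name, image, flavor):
        # "Booting a VM" at the cascading layer means booting a real VM
        # in the cascaded OpenStack through its own Nova API.
        server = self.nova.servers.create(name=name, image=image, flavor=flavor)
        self.mapping[instance_uuid] = server.id
        return server.id

    def destroy(self, instance_uuid):
        # Tear down the instance in the cascaded site and drop the mapping.
        server_id = self.mapping.pop(instance_uuid)
        self.nova.servers.delete(server_id)
```

The same pattern applies to the other services: a Cinder volume driver, a Neutron agent, a Glance location, each proxying to the cascaded site's standard API.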

Background of OpenStack cascading vs cells:

  1. Use cases
    a). Vodafone use case[4] (OpenStack summit speech video from 9'02" to 12'30"), establishing globally addressable tenants, which results in efficient service deployment.
    b). Telefonica use case[5], creating a virtual DC (data center) across multiple physical DCs with a seamless experience.
    c). ETSI NFV use cases[6], especially use cases #1, #2, #3, #5, #6, and #8. An NFV cloud is by nature distributed across, but inter-connected between, many data centers.

  2. Requirements
    a). The operator has a multi-site cloud; each site can use one or multiple vendors' OpenStack distributions.
    b). Each site has its own requirements and upgrade schedule while maintaining the standard OpenStack API.
    c). The multi-site cloud must provide unified resource management with a global open API exposed, for example to create a virtual DC across multiple physical DCs with a seamless experience.
    Although a proprietary orchestration layer could be developed for the multi-site cloud, it would expose a proprietary API on the north-bound interface. Cloud operators want an ecosystem-friendly, global open API for the multi-site cloud for global access.

  3. What problems cascading solves that cells doesn't cover:
    The OpenStack cascading solution is "OpenStack orchestrating OpenStacks". The core architectural idea of cascading is to use Nova as the hypervisor backend of Nova, Cinder as the block-storage backend of Cinder, Neutron as the backend of Neutron, Glance as one image location of Glance, and Ceilometer as the store of Ceilometer. Thus one OpenStack is able to orchestrate other OpenStacks (from different vendors' distributions, or different versions), which may be located in different sites (or data centers), through the OpenStack API, while the cloud still exposes the OpenStack API as the north-bound API at the cloud level.

  4. Why cells can't do that:
    Cells provide scale-out capability to Nova, but from the point of view of OpenStack as a whole, it still works like one OpenStack instance.
    a). If cells are deployed with shared Cinder, Neutron, Glance, and Ceilometer, this approach provides the multi-site cloud with one unified API endpoint and unified resource management, but consolidation of multi-vendor/multi-version OpenStack instances across one or more data centers cannot be fulfilled.
    b). If each site installs one child cell with accompanying standalone Cinder, Neutron (or Nova-network), Glance, and Ceilometer, multi-vendor/multi-version OpenStack co-existence across sites seems feasible, but the requirement for a unified API endpoint and unified resource management cannot be fulfilled. Cross-Neutron networking automation is also missing, and would otherwise have to be done manually or through a proprietary orchestration layer.
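
To make the endpoint difference concrete: with split regions or per-site cells, a client must pick a site-specific endpoint from the service catalog, whereas cascading aims at one endpoint for the whole multi-site cloud. A toy lookup, deliberately simplified and not real Keystone code (region names and URLs are made up):

```python
# Toy illustration of the split-region model: each region has its own Nova
# endpoint in the service catalog, so every cross-site operation starts by
# choosing a site. This is a simplification of the Keystone catalog, not
# real Keystone code; names and URLs here are invented for the example.

CATALOG = {
    "compute": {
        "RegionOne": "https://dc1.example.com:8774/v2.1",
        "RegionTwo": "https://dc2.example.com:8774/v2.1",
    },
}

def endpoint_for(service, region):
    # With regions there is no single "global" compute API; the caller
    # must name the site up front.
    return CATALOG[service][region]
```

Cascading would instead expose one compute endpoint and map requests to sites internally.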

For more information about cascading and cells, please refer to the discussion thread before Paris Summit [7].

[1] Approaches for scaling out: https://etherpad.openstack.org/p/kilo-crossproject-scale-out-openstack
[2] OpenStack cascading solution: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
[3] Cascading PoC: https://github.com/stackforge/tricircle
[4] Vodafone use case (9'02" to 12'30"): https://www.youtube.com/watch?v=-KOJYvhmxQI
[5] Telefonica use case: http://www.telefonica.com/en/descargas/mwc/present_20140224.pdf
[6] ETSI NFV use cases: http://www.etsi.org/deliver/etsi_gs/nfv/001_099/001/01.01.01_60/gs_nfv001v010101p.pdf
[7] Cascading thread before design summit: http://openstack.10931.n7.nabble.com/all-tc-Multi-clouds-integration-by-OpenStack-cascading-td54115.html

Best Regards
Chaoyi Huang (joehuang)

asked Dec 3, 2014 in openstack-dev by joehuang (17,140 points)   2 6 9
retagged Feb 25, 2015 by admin

30 Responses

0 votes


responded Dec 5, 2014 by joehuang (17,140 points)   2 6 9
0 votes

Joe,

Related to this topic: at the summit there was a session on Cells v2,
and following up on that, BPs have been filed in Nova,
championed by Andrew -
https://review.openstack.org/#/q/owner:%22Andrew+Laski%22+status:open,n,z

thanks,
dims



OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Davanum Srinivas :: https://twitter.com/dims

responded Dec 5, 2014 by Davanum_Srinivas (35,920 points)   2 4 9
0 votes

Hello, Davanum,

Thanks for your reply.

Cells can't meet the demand for the use cases and requirements described in the mail.


Best Regards

Chaoyi Huang ( joehuang )


responded Dec 6, 2014 by joehuang (17,140 points)   2 6 9
0 votes

On Fri, Dec 5, 2014 at 8:23 AM, joehuang wrote:

Dear all & TC & PTL,

In the 40-minute cross-project summit session "Approaches for
scaling out"[1], almost 100 people attended, and the
conclusion is that cells cannot cover the use cases and
requirements which the OpenStack cascading solution[2] aims to
address; the background including use cases and requirements is
also described in the mail.

I must admit that this was not the reaction I came away from the
discussion with. There was a lot of confusion, and as we started
looking closer, many (or perhaps most) people speaking up in the room
did not agree that the requirements being stated are things we want to
try to satisfy.

On 12/05/2014 06:47 PM, joehuang wrote:
Hello, Davanum,

Thanks for your reply.

Cells can't meet the demand for the use cases and requirements described in the mail.

You're right that cells doesn't solve all of the requirements you're
discussing. Cells addresses scale in a region. My impression from the
summit session and other discussions is that the scale issues addressed
by cells are considered a priority, while the "global API" bits are not.

  1. Use cases
    a). Vodafone use case[4](OpenStack summit speech video from 9'02"
    to 12'30" ), establishing globally addressable tenants which result
    in efficient services deployment.

Keystone has been working on federated identity. That part makes sense,
and is already well under way.

b). Telefonica use case[5], create virtual DC( data center) cross
multiple physical DCs with seamless experience.

If we're talking about multiple DCs that are effectively local to each
other with high bandwidth and low latency, that's one conversation. My
impression is that you want to provide a single OpenStack API on top of
globally distributed DCs. I honestly don't see that as a problem we
should be trying to tackle. I'd rather continue to focus on making
OpenStack work really well split into regions.

I think some people are trying to use cells in a geographically
distributed way, as well. I'm not sure that's a well understood or
supported thing, though. Perhaps the folks working on the new version
of cells can comment further.

c). ETSI NFV use cases[6], especially use cases #1, #2, #3, #5, #6,
and #8. An NFV cloud is by nature distributed across, but
inter-connected between, many data centers.

I'm afraid I don't understand this one. In many conversations about
NFV, I haven't heard this before.

2. Requirements
a). The operator has a multi-site cloud; each site can use one or
multiple vendors' OpenStack distributions.

Is this a technical problem, or is a business problem of vendors not
wanting to support a mixed environment that you're trying to work around
with a technical solution?

b). Each site with its own requirements and upgrade schedule while
maintaining standard OpenStack API
c). The multi-site cloud must provide unified resource management
with global Open API exposed, for example create virtual DC cross
multiple physical DCs with seamless experience.

Although a proprietary orchestration layer could be developed for
the multi-site cloud, it would expose a proprietary API on the
north-bound interface. Cloud operators want an ecosystem-friendly
global open API for the multi-site cloud for global access.

I guess the question is, do we see a "global API" as something we want
to accomplish. What you're talking about is huge, and I'm not even sure
how you would expect it to work in some cases (like networking).

In any case, to be as clear as possible, I'm not convinced this is
something we should be working on. I'm going to need to see much more
overwhelming support for the idea before helping to figure out any
further steps.

--
Russell Bryant

responded Dec 10, 2014 by Russell_Bryant (19,240 points)   2 3 8
0 votes

Hello, Russell,

Many thanks for your reply. See inline comments.

-----Original Message-----
From: Russell Bryant [mailto:rbryant at redhat.com]
Sent: Thursday, December 11, 2014 5:22 AM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

On Fri, Dec 5, 2014 at 8:23 AM, joehuang wrote:

Dear all & TC & PTL,

In the 40-minute cross-project summit session "Approaches for
scaling out"[1], almost 100 people attended, and the
conclusion is that cells cannot cover the use cases and
requirements which the OpenStack cascading solution[2] aims to
address; the background including use cases and requirements is also
described in the mail.

I must admit that this was not the reaction I came away from the discussion with.
There was a lot of confusion, and as we started looking closer, many (or perhaps most)
people speaking up in the room did not agree that the requirements being stated are
things we want to try to satisfy.

[joehuang] Could you please confirm your opinion: 1) cells cannot cover the use cases and requirements which the OpenStack cascading solution aims to address; 2) further discussion is needed on whether to satisfy those use cases and requirements.

On 12/05/2014 06:47 PM, joehuang wrote:

Hello, Davanum,

Thanks for your reply.

Cells can't meet the demand for the use cases and requirements described in the mail.

You're right that cells doesn't solve all of the requirements you're discussing.
Cells addresses scale in a region. My impression from the summit session
and other discussions is that the scale issues addressed by cells are considered
a priority, while the "global API" bits are not.

[joehuang] Agree cells is in the first class priority.

  1. Use cases
    a). Vodafone use case[4](OpenStack summit speech video from 9'02"
    to 12'30" ), establishing globally addressable tenants which result
    in efficient services deployment.

Keystone has been working on federated identity.
That part makes sense, and is already well under way.

[joehuang] The major challenge for the VDF use case is cross-OpenStack networking for tenants. A tenant's VMs/volumes may be allocated in geographically different data centers, but the virtual network (L2/L3/FW/VPN/LB) should be built for each tenant automatically and isolated between tenants. Keystone federation can help automate authorization, but the cross-OpenStack network automation challenge remains.
A proprietary orchestration layer could solve the automation issue, but VDF doesn't want a proprietary API on the north-bound interface, because no ecosystem is available for it. Other issues, for example how to distribute images, also cannot be solved by Keystone federation.

b). Telefonica use case[5], create virtual DC( data center) cross
multiple physical DCs with seamless experience.

If we're talking about multiple DCs that are effectively local to each other
with high bandwidth and low latency, that's one conversation.
My impression is that you want to provide a single OpenStack API on top of
globally distributed DCs. I honestly don't see that as a problem we should
be trying to tackle. I'd rather continue to focus on making OpenStack work
really well split into regions.
I think some people are trying to use cells in a geographically distributed way,
as well. I'm not sure that's a well understood or supported thing, though.
Perhaps the folks working on the new version of cells can comment further.

[joehuang] 1) The split-region approach cannot provide cross-OpenStack networking automation for tenants. 2) Exactly; the motivation for cascading is a "single OpenStack API on top of globally distributed DCs". Of course, cascading can also be used for DCs close to each other with high bandwidth and low latency. 3) Comments from the folks working on cells are welcome.

c). ETSI NFV use cases[6], especially use cases #1, #2, #3, #5, #6,
and #8. An NFV cloud is by nature distributed across, but
inter-connected between, many data centers.

I'm afraid I don't understand this one. In many conversations about NFV, I haven't heard this before.

[joehuang] This is the ETSI requirements and use cases specification for NFV. ETSI is the home of the Industry Specification Group for NFV. In Figure 14 (virtualization of EPC) of this document, you can see that the operator's cloud includes many data centers providing connection service to end users through inter-connected VNFs. The requirements listed in (https://wiki.openstack.org/wiki/TelcoWorkingGroup) are mainly about the requirements for specific VNFs (like IMS, SBC, MME, HSS, S/P-GW, etc.) to run over the cloud, e.g. migrating traditional telco apps from proprietary hardware to the cloud. Not all NFV requirements have been covered yet. Forgive me, there are so many telco terms here.

2. Requirements
a). The operator has a multi-site cloud; each site can use one or
multiple vendors' OpenStack distributions.

Is this a technical problem, or is a business problem of vendors not
wanting to support a mixed environment that you're trying to work
around with a technical solution?

[joehuang] Please refer to the VDF use case; the multi-vendor policy has been stated very clearly: 1) Local relationships: operating companies also have long-standing relationships with their own choice of vendors; 2) Multi-vendor: each site can use one or multiple vendors, which leads to better use of local resources and capabilities. A technical solution must be provided for multi-vendor integration and verification. For mobile networks this was usually done against ETSI standards in the past, but how do we do it in a multi-vendor cloud infrastructure? Cascading provides a way to use the OpenStack API as the integration interface.

b). Each site with its own requirements and upgrade schedule while
maintaining the standard OpenStack API.
c). The multi-site cloud must provide unified resource management
with a global open API exposed, for example creating a virtual DC
across multiple physical DCs with a seamless experience.

Although a prosperity orchestration layer could be developed for the
multi-site cloud, it would expose a prosperity API at the north-bound
interface. The cloud operators want an ecosystem-friendly global
open API for the multi-site cloud for global access.

I guess the question is, do we see a "global API" as something we want
to accomplish. What you're talking about is huge, and I'm not even sure
how you would expect it to work in some cases (like networking).

[joehuang] Yes, the most challenging part is networking. In the PoC, L2 networking across OpenStack instances leverages the L2 population mechanism. The L2proxy for DC1 in the cascading layer detects that the new VM1's (in DC1) port is up, and then ML2 L2 population is activated: VM1's tunneling endpoint (host IP or L2GW IP in DC1) is populated to the L2proxy for DC2, and the L2proxy for DC2 creates an external port in the DC2 Neutron with VM1's tunneling endpoint. The external port is attached to the L2GW, or, if only the external port is created (no L2GW used), L2 population inside DC2 is activated to notify all VMs located in DC2 on the same L2 network. For L3 networking, what was finished in the PoC is to use extra routes over GRE to serve local VLAN/VxLAN networks located in different DCs. Of course, other L3 networking methods can be developed, for example through a VPN service. There are 4 or 5 BPs talking about edge network gateways to connect an OpenStack tenant network to an outside network; all these technologies can be leveraged to do cross-OpenStack networking for different scenarios. To experience the cross-OpenStack networking, please try the PoC source code: https://github.com/stackforge/tricircle
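The cross-DC flow described above can be sketched as a toy model. All class and method names below are invented for illustration; this is not the actual tricircle/Neutron code (see https://github.com/stackforge/tricircle for the real PoC):

```python
# Toy sketch of the cascading L2 population flow. All names are
# hypothetical; the real implementation lives in the tricircle PoC.

class PeerSite:
    """The Neutron side of one cascaded OpenStack instance (one DC)."""
    def __init__(self, name):
        self.name = name
        self.external_ports = []  # ports mirroring VMs in remote DCs

    def create_external_port(self, network_id, remote_endpoint):
        # In the PoC this corresponds to creating a port in the local
        # Neutron that targets the remote host IP or L2GW IP.
        port = {"network": network_id, "remote": remote_endpoint}
        self.external_ports.append(port)
        return port


class L2Proxy:
    """Cascading-layer proxy that watches one DC and fans out
    L2 population events to the other DCs."""
    def __init__(self, site, peers):
        self.site = site
        self.peers = peers  # PeerSite objects for the other DCs

    def on_port_up(self, network_id, tunnel_endpoint):
        # A VM port came up in this DC: push its tunneling endpoint
        # (host IP or L2GW IP) to every peer DC.
        for peer in self.peers:
            peer.create_external_port(network_id, tunnel_endpoint)


dc1, dc2 = PeerSite("DC1"), PeerSite("DC2")
proxy1 = L2Proxy(dc1, peers=[dc2])
proxy1.on_port_up("net-a", "10.0.0.5")  # VM1 boots in DC1
# DC2's Neutron now holds an external port pointing back at DC1
print(dc2.external_ports)
```

Whether the external port attaches to an L2GW or stands alone only changes the last hop; the fan-out of tunneling endpoints is the same in both cases.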

In any case, to be as clear as possible, I'm not convinced this is something
we should be working on. I'm going to need to see much more
overwhelming support for the idea before helping to figure out any further steps.

[joehuang] If you or anyone else has any doubts, please feel free to ignite a discussion thread. For time-zone reasons, we (working in China) are not able to join most IRC meetings, so the mailing list is a good way to discuss.

Russell Bryant


OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Best Regards

Chaoyi Huang ( joehuang )

responded Dec 11, 2014 by joehuang

On 12/11/2014 04:02 AM, joehuang wrote:
[joehuang] The major challenge for the VDF use case is cross-OpenStack
networking for tenants. A tenant's VMs/volumes may be allocated in
different data centers geographically, but the virtual network
(L2/L3/FW/VPN/LB) should be built for each tenant automatically and
isolated between tenants. Keystone federation can help with
authorization automation, but the cross-OpenStack network automation
challenge is still there. Using a prosperity orchestration layer can
solve the automation issue, but VDF doesn't like a prosperity API in
the north-bound, because no ecosystem is available. And other issues,
for example how to distribute images, also cannot be solved by
Keystone federation.

What is "prosperity orchestration layer" and "prosperity API"?

[joehuang] This is the ETSI requirement and use cases specification
for NFV. ETSI is the home of the Industry Specification Group for NFV.
In Figure 14 (virtualization of EPC) of this document, you can see
that the operator's cloud includes many data centers to provide
connection service to end users via inter-connected VNFs. The
requirements listed in
(https://wiki.openstack.org/wiki/TelcoWorkingGroup) are mainly about
the requirements for specific VNFs (like IMS, SBC, MME, HSS, S/P GW
etc.) to run over cloud, e.g. migrating a traditional telco APP from
prosperity hardware to cloud. Not all NFV requirements have been
covered yet. Forgive me, there are so many telco terms here.

What is "prosperity hardware"?

Thanks,
-jay

responded Dec 11, 2014 by Jay_Pipes

On 12/11/2014 04:02 AM, joehuang wrote:
Hello, Russell,

Many thanks for your reply. See inline comments.

-----Original Message-----
From: Russell Bryant [mailto:rbryant at redhat.com]
Sent: Thursday, December 11, 2014 5:22 AM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

On Fri, Dec 5, 2014 at 8:23 AM, joehuang wrote:

Dear all & TC & PTL,

In the 40-minute cross-project summit session "Approaches for
scaling out"[1], almost 100 people attended the meeting, and the
conclusion is that cells cannot cover the use cases and
requirements which the OpenStack cascading solution[2] aims to
address; the background, including use cases and requirements, is
also described in this mail.
I must admit that this was not the reaction I came away from the
discussion with. There was a lot of confusion, and as we started
looking closer, many (or perhaps most) people speaking up in the room
did not agree that the requirements being stated are things we want
to try to satisfy.
[joehuang] Could you please confirm your opinion: 1) cells cannot cover the use cases and requirements which the OpenStack cascading solution aims to address. 2) Further discussion is needed on whether to satisfy those use cases and requirements.

Correct, cells does not cover all of the use cases that cascading aims
to address. But it was expressed that the use cases that are not
covered may not be cases that we want addressed.

On 12/05/2014 06:47 PM, joehuang wrote:

Hello, Davanum,

Thanks for your reply.

Cells can't meet the demand for the use cases and requirements described in the mail.
You're right that cells doesn't solve all of the requirements you're discussing.
Cells addresses scale in a region. My impression from the summit session
and other discussions is that the scale issues addressed by cells are considered
a priority, while the "global API" bits are not.
[joehuang] Agreed, cells is the first-class priority.

  1. Use cases
    a). Vodafone use case[4](OpenStack summit speech video from 9'02"
    to 12'30" ), establishing globally addressable tenants which result
    in efficient services deployment.
    Keystone has been working on federated identity.
    That part makes sense, and is already well under way.
    [joehuang] The major challenge for the VDF use case is cross-OpenStack networking for tenants. A tenant's VMs/volumes may be allocated in different data centers geographically, but the virtual network (L2/L3/FW/VPN/LB) should be built for each tenant automatically and isolated between tenants. Keystone federation can help with authorization automation, but the cross-OpenStack network automation challenge is still there.
    Using a prosperity orchestration layer can solve the automation issue, but VDF doesn't like a prosperity API in the north-bound, because no ecosystem is available. And other issues, for example how to distribute images, also cannot be solved by Keystone federation.

b). Telefonica use case[5], create virtual DC( data center) cross
multiple physical DCs with seamless experience.
If we're talking about multiple DCs that are effectively local to each other
with high bandwidth and low latency, that's one conversation.
My impression is that you want to provide a single OpenStack API on top of
globally distributed DCs. I honestly don't see that as a problem we should
be trying to tackle. I'd rather continue to focus on making OpenStack work
really well split into regions.
I think some people are trying to use cells in a geographically distributed way,
as well. I'm not sure that's a well understood or supported thing, though.
Perhaps the folks working on the new version of cells can comment further.
[joehuang] 1) The split-region approach cannot provide cross-OpenStack networking automation for tenants. 2) Exactly, the motivation for cascading is "single OpenStack API on top of globally distributed DCs". Of course, cascading can also be used for DCs close to each other with high bandwidth and low latency. 3) Comments from the cells folks are welcome.

Cells can handle a single API on top of globally distributed DCs. I
have spoken with a group that is doing exactly that. But it requires
that the API is a trusted part of the OpenStack deployments in those
distributed DCs.

c). ETSI NFV use cases[6], especially use cases #1, #2, #3, #5, #6,
and #8. For NFV cloud, it is in the nature of NFV that the cloud will
be distributed but inter-connected across many data centers.
I'm afraid I don't understand this one. In many conversations about NFV, I haven't heard this before.
[joehuang] This is the ETSI requirement and use cases specification for NFV. ETSI is the home of the Industry Specification Group for NFV. In Figure 14 (virtualization of EPC) of this document, you can see that the operator's cloud includes many data centers to provide connection service to end users via inter-connected VNFs. The requirements listed in (https://wiki.openstack.org/wiki/TelcoWorkingGroup) are mainly about the requirements for specific VNFs (like IMS, SBC, MME, HSS, S/P GW etc.) to run over cloud, e.g. migrating a traditional telco APP from prosperity hardware to cloud. Not all NFV requirements have been covered yet. Forgive me, there are so many telco terms here.

2. Requirements
a). The operator has a multi-site cloud; each site can use one or
multiple vendors' OpenStack distributions.
Is this a technical problem, or is a business problem of vendors not
wanting to support a mixed environment that you're trying to work
around with a technical solution?
[joehuang] Please refer to the VDF use case; the multi-vendor policy has been stated very clearly: 1) Local relationships: Operating Companies also have long-standing relationships with their own choice of vendors; 2) Multi-Vendor: Each site can use one or multiple vendors, which leads to better use of local resources and capabilities. A technical solution must be provided for multi-vendor integration and verification; in the past, for mobile networks, this was usually an ETSI standard. But how to do that in a multi-vendor cloud infrastructure? Cascading provides a way to use the OpenStack API as the integration interface.

b). Each site with its own requirements and upgrade schedule while
maintaining the standard OpenStack API.
c). The multi-site cloud must provide unified resource management
with a global open API exposed, for example creating a virtual DC
across multiple physical DCs with a seamless experience.
Although a prosperity orchestration layer could be developed for the
multi-site cloud, it would expose a prosperity API at the north-bound
interface. The cloud operators want an ecosystem-friendly global
open API for the multi-site cloud for global access.
I guess the question is, do we see a "global API" as something we want
to accomplish. What you're talking about is huge, and I'm not even sure
how you would expect it to work in some cases (like networking).
[joehuang] Yes, the most challenging part is networking. In the PoC, L2 networking across OpenStack instances leverages the L2 population mechanism. The L2proxy for DC1 in the cascading layer detects that the new VM1's (in DC1) port is up, and then ML2 L2 population is activated: VM1's tunneling endpoint (host IP or L2GW IP in DC1) is populated to the L2proxy for DC2, and the L2proxy for DC2 creates an external port in the DC2 Neutron with VM1's tunneling endpoint. The external port is attached to the L2GW, or, if only the external port is created (no L2GW used), L2 population inside DC2 is activated to notify all VMs located in DC2 on the same L2 network. For L3 networking, what was finished in the PoC is to use extra routes over GRE to serve local VLAN/VxLAN networks located in different DCs. Of course, other L3 networking methods can be developed, for example through a VPN service. There are 4 or 5 BPs talking about edge network gateways to connect an OpenStack tenant network to an outside network; all these technologies can be leveraged to do cross-OpenStack networking for different scenarios. To experience the cross-OpenStack networking, please try the PoC source code: https://github.com/stackforge/tricircle

In any case, to be as clear as possible, I'm not convinced this is something
we should be working on. I'm going to need to see much more
overwhelming support for the idea before helping to figure out any further steps.
[joehuang] If you or anyone else has any doubts, please feel free to ignite a discussion thread. For time-zone reasons, we (working in China) are not able to join most IRC meetings, so the mailing list is a good way to discuss.

Russell Bryant



Best Regards

Chaoyi Huang ( joehuang )


responded Dec 11, 2014 by Andrew_Laski

On Thu, Dec 11, 2014 at 1:02 AM, joehuang wrote:

Hello, Russell,

Many thanks for your reply. See inline comments.

-----Original Message-----
From: Russell Bryant [mailto:rbryant at redhat.com]
Sent: Thursday, December 11, 2014 5:22 AM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit
recap and move forward

On Fri, Dec 5, 2014 at 8:23 AM, joehuang wrote:

Dear all & TC & PTL,

In the 40-minute cross-project summit session "Approaches for
scaling out"[1], almost 100 people attended the meeting, and the
conclusion is that cells cannot cover the use cases and
requirements which the OpenStack cascading solution[2] aims to
address; the background, including use cases and requirements, is
also described in this mail.

I must admit that this was not the reaction I came away from the
discussion with. There was a lot of confusion, and as we started
looking closer, many (or perhaps most) people speaking up in the room
did not agree that the requirements being stated are things we want
to try to satisfy.

[joehuang] Could you please confirm your opinion: 1) cells cannot
cover the use cases and requirements which the OpenStack cascading
solution aims to address. 2) Further discussion is needed on whether
to satisfy those use cases and requirements.

On 12/05/2014 06:47 PM, joehuang wrote:

Hello, Davanum,

Thanks for your reply.

Cells can't meet the demand for the use cases and requirements
described in the mail.

You're right that cells doesn't solve all of the requirements you're
discussing. Cells addresses scale in a region. My impression from the
summit session and other discussions is that the scale issues
addressed by cells are considered a priority, while the "global API"
bits are not.

[joehuang] Agreed, cells is the first-class priority.

  1. Use cases
    a). Vodafone use case[4](OpenStack summit speech video from 9'02"
    to 12'30" ), establishing globally addressable tenants which result
    in efficient services deployment.

Keystone has been working on federated identity.
That part makes sense, and is already well under way.

[joehuang] The major challenge for the VDF use case is cross-OpenStack
networking for tenants. A tenant's VMs/volumes may be allocated in
different data centers geographically, but the virtual network
(L2/L3/FW/VPN/LB) should be built for each tenant automatically and
isolated between tenants. Keystone federation can help with
authorization automation, but the cross-OpenStack network automation
challenge is still there. Using a prosperity orchestration layer can
solve the automation issue, but VDF doesn't like a prosperity API in
the north-bound, because no ecosystem is available. And other issues,
for example how to distribute images, also cannot be solved by
Keystone federation.

b). Telefonica use case[5], create virtual DC( data center) cross
multiple physical DCs with seamless experience.

If we're talking about multiple DCs that are effectively local to each
other with high bandwidth and low latency, that's one conversation.
My impression is that you want to provide a single OpenStack API on
top of globally distributed DCs. I honestly don't see that as a
problem we should be trying to tackle. I'd rather continue to focus on
making OpenStack work really well split into regions.
I think some people are trying to use cells in a geographically
distributed way, as well. I'm not sure that's a well understood or
supported thing, though. Perhaps the folks working on the new version
of cells can comment further.

[joehuang] 1) The split-region approach cannot provide cross-OpenStack
networking automation for tenants. 2) Exactly, the motivation for
cascading is "single OpenStack API on top of globally distributed
DCs". Of course, cascading can also be used for DCs close to each
other with high bandwidth and low latency. 3) Comments from the cells
folks are welcome.

c). ETSI NFV use cases[6], especially use cases #1, #2, #3, #5, #6,
and #8. For NFV cloud, it is in the nature of NFV that the cloud will
be distributed but inter-connected across many data centers.

I'm afraid I don't understand this one. In many conversations about NFV,
I haven't heard this before.

[joehuang] This is the ETSI requirement and use cases specification
for NFV. ETSI is the home of the Industry Specification Group for NFV.
In Figure 14 (virtualization of EPC) of this document, you can see
that the operator's cloud includes many data centers to provide
connection service to end users via inter-connected VNFs. The
requirements listed in
(https://wiki.openstack.org/wiki/TelcoWorkingGroup) are mainly about
the requirements for specific VNFs (like IMS, SBC, MME, HSS, S/P GW
etc.) to run over cloud, e.g. migrating a traditional telco APP from
prosperity hardware to cloud. Not all NFV requirements have been
covered yet. Forgive me, there are so many telco terms here.

2. Requirements
a). The operator has a multi-site cloud; each site can use one or
multiple vendors' OpenStack distributions.

Is this a technical problem, or is a business problem of vendors not
wanting to support a mixed environment that you're trying to work
around with a technical solution?

[joehuang] Please refer to the VDF use case; the multi-vendor policy
has been stated very clearly: 1) Local relationships: Operating
Companies also have long-standing relationships with their own choice
of vendors; 2) Multi-Vendor: Each site can use one or multiple
vendors, which leads to better use of local resources and
capabilities. A technical solution must be provided for multi-vendor
integration and verification; in the past, for mobile networks, this
was usually an ETSI standard. But how to do that in a multi-vendor
cloud infrastructure? Cascading provides a way to use the OpenStack
API as the integration interface.

How would something like flavors work across multiple vendors? The
OpenStack API doesn't have any hard-coded names and sizes for flavors,
so a flavor such as m1.tiny may actually be very different from vendor
to vendor.
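The flavor concern can be illustrated with a toy example (the flavor specs below are invented; nothing in the OpenStack API forces a name like m1.tiny to mean the same resources everywhere):

```python
# Hypothetical flavor catalogs from two vendors' sites.
vendor_a_flavors = {"m1.tiny": {"vcpus": 1, "ram_mb": 512, "disk_gb": 1}}
vendor_b_flavors = {"m1.tiny": {"vcpus": 1, "ram_mb": 1024, "disk_gb": 10}}

# Same name, different meaning: a cascading layer would have to map
# or reconcile flavors per site before exposing one global API.
assert vendor_a_flavors["m1.tiny"] != vendor_b_flavors["m1.tiny"]
```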

b). Each site with its own requirements and upgrade schedule while
maintaining the standard OpenStack API.
c). The multi-site cloud must provide unified resource management
with a global open API exposed, for example creating a virtual DC
across multiple physical DCs with a seamless experience.

Although a prosperity orchestration layer could be developed for the
multi-site cloud, it would expose a prosperity API at the north-bound
interface. The cloud operators want an ecosystem-friendly global
open API for the multi-site cloud for global access.

I guess the question is, do we see a "global API" as something we want
to accomplish. What you're talking about is huge, and I'm not even sure
how you would expect it to work in some cases (like networking).

[joehuang] Yes, the most challenging part is networking. In the PoC,
L2 networking across OpenStack instances leverages the L2 population
mechanism. The L2proxy for DC1 in the cascading layer detects that the
new VM1's (in DC1) port is up, and then ML2 L2 population is
activated: VM1's tunneling endpoint (host IP or L2GW IP in DC1) is
populated to the L2proxy for DC2, and the L2proxy for DC2 creates an
external port in the DC2 Neutron with VM1's tunneling endpoint. The
external port is attached to the L2GW, or, if only the external port
is created (no L2GW used), L2 population inside DC2 is activated to
notify all VMs located in DC2 on the same L2 network. For L3
networking, what was finished in the PoC is to use extra routes over
GRE to serve local VLAN/VxLAN networks located in different DCs. Of
course, other L3 networking methods can be developed, for example
through a VPN service. There are 4 or 5 BPs talking about edge network
gateways to connect an OpenStack tenant network to an outside network;
all these technologies can be leveraged to do cross-OpenStack
networking for different scenarios. To experience the cross-OpenStack
networking, please try the PoC source code:
https://github.com/stackforge/tricircle

In any case, to be as clear as possible, I'm not convinced this is
something we should be working on. I'm going to need to see much more
overwhelming support for the idea before helping to figure out any
further steps.

[joehuang] If you or anyone else has any doubts, please feel free to
ignite a discussion thread. For time-zone reasons, we (working in
China) are not able to join most IRC meetings, so the mailing list is
a good way to discuss.

Russell Bryant



Best Regards

Chaoyi Huang ( joehuang )




responded Dec 12, 2014 by Joe_Gordon

Hi, Jay,

Good question, see inline comments, pls.

Best Regards
Chaoyi Huang ( Joe Huang )

-----Original Message-----
From: Jay Pipes [mailto:jaypipes at gmail.com]
Sent: Friday, December 12, 2014 1:58 AM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

On 12/11/2014 04:02 AM, joehuang wrote:

[joehuang] The major challenge for VDF use case is cross OpenStack
networking for tenants. The tenant's VM/Volume may be allocated in
different data centers geographically, but virtual network
(L2/L3/FW/VPN/LB) should be built for each tenant automatically and
isolated between tenants. Keystone federation can help authorization
automation, but the cross OpenStack network automation challenge is
still there. Using prosperity orchestration layer can solve the
automation issue, but VDF don't like prosperity API in the
north-bound, because no ecosystem is available. And other issues, for
example, how to distribute image, also cannot be solved by Keystone
federation.

What is "prosperity orchestration layer" and "prosperity API"?

[joehuang] Suppose there are two OpenStack instances in the cloud, and vendor A developed an orchestration layer called CMPa (cloud management platform a), while vendor B's orchestration layer is CMPb. CMPa may define its boot-VM interface as CreateVM(Num, NameList, VMTemplate), while CMPb may define its boot-VM interface as bootVM(Name, projectID, flavorID, volumeSize, location, networkID). As customers ask for more and more functions in the cloud, the API set of CMPa will become quite different from that of CMPb, and different from the OpenStack API. Then all apps which consume the OpenStack API, like Heat, will not be able to run above the prosperity software CMPa/CMPb. The whole ecosystem of OpenStack API apps will be lost in the customer's cloud.
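As a hypothetical illustration of the point above (both function signatures are invented, mirroring the CreateVM/bootVM shapes in the text):

```python
# Two vendors' north-bound APIs diverge, so a tool written against one
# cannot drive the other -- unlike tools written against the common
# OpenStack API. Signatures are illustrative only.

def cmpa_create_vm(num, name_list, vm_template):
    """Vendor A's shape: batch-create VMs from a template."""
    return [{"name": n, "template": vm_template} for n in name_list[:num]]

def cmpb_boot_vm(name, project_id, flavor_id, volume_size, location,
                 network_id):
    """Vendor B's shape: one VM, fully specified per call."""
    return {"name": name, "project": project_id, "flavor": flavor_id}

# A Heat-like app coded against CMPa's interface has no portable way
# to call CMPb; every extra vendor multiplies the integration work.
vms = cmpa_create_vm(2, ["vm1", "vm2"], "small")
print(len(vms))  # 2
```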

[joehuang] This is the ETSI requirement and use cases specification
for NFV. ETSI is the home of the Industry Specification Group for NFV.
In Figure 14 (virtualization of EPC) of this document, you can see
that the operator's cloud includes many data centers to provide
connection service to end users via inter-connected VNFs. The
requirements listed in
(https://wiki.openstack.org/wiki/TelcoWorkingGroup) are mainly about
the requirements for specific VNFs (like IMS, SBC, MME, HSS, S/P GW
etc.) to run over cloud, e.g. migrating a traditional telco APP from
prosperity hardware to cloud. Not all NFV requirements have been
covered yet. Forgive me, there are so many telco terms here.

What is "prosperity hardware"?

[joehuang] For example, Huawei's IMS can only run over Huawei's ATCA hardware; even if you bought Nokia ATCA, the IMS from Huawei would not be able to work over Nokia's ATCA. The telco APP is sold together with the hardware. (More comments on ETSI: ETSI is also the standards organization for GSM, 3G, 4G.)

Thanks,
-jay


responded Dec 12, 2014 by joehuang

So I think u mean 'proprietary'?

http://www.merriam-webster.com/dictionary/proprietary

-Josh

joehuang wrote:
Hi, Jay,

Good question, see inline comments, pls.

Best Regards
Chaoyi Huang ( Joe Huang )

-----Original Message-----
From: Jay Pipes [mailto:jaypipes at gmail.com]
Sent: Friday, December 12, 2014 1:58 AM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

On 12/11/2014 04:02 AM, joehuang wrote:

[joehuang] The major challenge for the VDF use case is cross-OpenStack
networking for tenants. A tenant's VMs/volumes may be allocated in
different data centers geographically, but the virtual network
(L2/L3/FW/VPN/LB) should be built for each tenant automatically and
isolated between tenants. Keystone federation can help with
authorization automation, but the cross-OpenStack network automation
challenge is still there. Using a prosperity orchestration layer can
solve the automation issue, but VDF doesn't like a prosperity API in
the north-bound, because no ecosystem is available. And other issues,
for example how to distribute images, also cannot be solved by
Keystone federation.

What is "prosperity orchestration layer" and "prosperity API"?

[joehuang] Suppose there are two OpenStack instances in the cloud, and vendor A developed an orchestration layer called CMPa (cloud management platform a), while vendor B's orchestration layer is CMPb. CMPa may define its boot-VM interface as CreateVM(Num, NameList, VMTemplate), while CMPb may define its boot-VM interface as bootVM(Name, projectID, flavorID, volumeSize, location, networkID). As customers ask for more and more functions in the cloud, the API set of CMPa will become quite different from that of CMPb, and different from the OpenStack API. Then all apps which consume the OpenStack API, like Heat, will not be able to run above the prosperity software CMPa/CMPb. The whole ecosystem of OpenStack API apps will be lost in the customer's cloud.

[joehuang] This is the ETSI requirement and use cases specification
for NFV. ETSI is the home of the Industry Specification Group for NFV.
In Figure 14 (virtualization of EPC) of this document, you can see
that the operator's cloud includes many data centers to provide
connection service to end users via inter-connected VNFs. The
requirements listed in
(https://wiki.openstack.org/wiki/TelcoWorkingGroup) are mainly about
the requirements for specific VNFs (like IMS, SBC, MME, HSS, S/P GW
etc.) to run over cloud, e.g. migrating a traditional telco APP from
prosperity hardware to cloud. Not all NFV requirements have been
covered yet. Forgive me, there are so many telco terms here.

What is "prosperity hardware"?

[joehuang] For example, Huawei's IMS can only run over Huawei's ATCA hardware; even if you bought Nokia ATCA, the IMS from Huawei would not be able to work over Nokia's ATCA. The telco APP is sold together with the hardware. (More comments on ETSI: ETSI is also the standards organization for GSM, 3G, 4G.)

Thanks,
-jay




responded Dec 12, 2014 by Joshua_Harlow
...