
[openstack-dev] [magnum][heat] spawn a group of nodes on different availability zones

0 votes

Hi all,

I am looking at a way to spawn nodes in different specified
availability zones when deploying a cluster with Magnum.

Currently Magnum directly uses predefined Heat templates with Heat
parameters to handle configuration.
I tried to reach my goal by sticking to this model; however, I couldn't
find a suitable Heat construct that would allow it.

Here are the details of my investigation:
- OS::Heat::ResourceGroup doesn't allow specifying a list as a variable
to iterate over, so we would need one ResourceGroup per AZ
- OS::Nova::ServerGroup only allows restriction at the hypervisor level
- OS::Heat::InstanceGroup has an AZs parameter, but it is marked
unimplemented and is CFN-specific.
- OS::Nova::HostAggregate only seems to allow adding some metadata to
a group of hosts in a defined availability zone
- the repeat function only works inside the properties section of a
resource and can't be used at the resource level itself, so
something like this is not allowed:

resources:
  repeat:
    for_each:
      <%az%>: { get_param: availability_zones }
    template:
      rg-<%az%>:
        type: OS::Heat::ResourceGroup
        properties:
          count: 2
          resource_def:
            type: hot_single_server.yaml
            properties:
              availability_zone: <%az%>

The only possibility that I see is generating one ResourceGroup per AZ,
but it would require some big changes in Magnum to handle
modification/generation of templates.
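For concreteness, the template generation I mean would look roughly like this (my own Python sketch, not actual Magnum code; the names are illustrative):

```python
# Rough sketch (not Magnum code): build the "resources" section of a Heat
# template with one OS::Heat::ResourceGroup per availability zone.
def resource_groups_per_az(azs, count_per_az=2):
    resources = {}
    for az in azs:
        resources["rg-%s" % az] = {
            "type": "OS::Heat::ResourceGroup",
            "properties": {
                "count": count_per_az,
                "resource_def": {
                    "type": "hot_single_server.yaml",
                    "properties": {"availability_zone": az},
                },
            },
        }
    return {"heat_template_version": "2015-04-30", "resources": resources}

template = resource_groups_per_az(["az-1", "az-2"])
print(sorted(template["resources"]))  # -> ['rg-az-1', 'rg-az-2']
```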

Any ideas ?

Regards,

Mathieu Velten


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
asked Mar 2, 2016 in openstack-dev by Mathieu_Velten (220 points)   1 1 1
retagged Jan 25, 2017 by admin

8 Responses

0 votes

On 02/03/16 05:50, Mathieu Velten wrote:
[snip]

This is a long-standing missing feature in Heat. There are two
blueprints for this (I'm not sure why):

https://blueprints.launchpad.net/heat/+spec/autoscaling-availabilityzones-impl
https://blueprints.launchpad.net/heat/+spec/implement-autoscalinggroup-availabilityzones

The latter had a spec with quite a lot of discussion:

https://review.openstack.org/#/c/105907

And even an attempted implementation:

https://review.openstack.org/#/c/116139/

which was making some progress but is long out of date and would need
serious work to rebase. The good news is that some of the changes I made
in Liberty like https://review.openstack.org/#/c/213555/ should
hopefully make it simpler.

All of which is to say, if you want to help then I think it would be
totally do-able to land support for this relatively early in Newton :)

Failing that, the only thing I can think to try is something I am pretty
sure won't work: a ResourceGroup with something like:

availability_zone: {get_param: [AZ_map, "%i"]}

where AZ_map looks something like {"0": "az-1", "1": "az-2", "2":
"az-1", ...} and you're using the member index to pick out the AZ to use
from the parameter. I don't think that works (if "%i" is resolved after
get_param then it won't, and I suspect that's the case) but it's worth a
try if you need a solution in Mitaka.
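The semantics I have in mind, expressed outside Heat (a plain Python sketch only, not valid Heat syntax):

```python
# Sketch of the intended semantics only: each group member uses its index
# -- the value Heat would substitute for "%i" -- to pick its AZ from the map.
az_map = {"0": "az-1", "1": "az-2", "2": "az-1"}

def az_for_member(index):
    # JSON map keys are strings, so stringify the numeric member index
    return az_map[str(index)]

print([az_for_member(i) for i in range(3)])  # -> ['az-1', 'az-2', 'az-1']
```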

cheers,
Zane.


responded Mar 2, 2016 by Zane_Bitter (21,640 points)   4 6 12
0 votes

On Wed, Mar 02, 2016 at 05:40:20PM -0500, Zane Bitter wrote:
[snip]
Failing that, the only thing I can think to try is something I am pretty
sure won't work: a ResourceGroup with something like:

availability_zone: {get_param: [AZ_map, "%i"]}

where AZ_map looks something like {"0": "az-1", "1": "az-2", "2": "az-1",
...} and you're using the member index to pick out the AZ to use from the
parameter. I don't think that works (if "%i" is resolved after get_param
then it won't, and I suspect that's the case) but it's worth a try if you
need a solution in Mitaka.

Yeah, this won't work if you attempt to do the map/index lookup in the
top-level template where the ResourceGroup is defined, but it does work
if you pass both the map and the index into the nested stack, e.g.
something like this (untested):

$ cat rg_az_map.yaml
heat_template_version: 2015-04-30

parameters:
  az_map:
    type: json
    default:
      '0': az1
      '1': az2

resources:
  AGroup:
    type: OS::Heat::ResourceGroup
    properties:
      count: 2
      resource_def:
        type: server_mapped_az.yaml
        properties:
          availability_zone_map: {get_param: az_map}
          index: '%index%'

$ cat server_mapped_az.yaml
heat_template_version: 2015-04-30

parameters:
  availability_zone_map:
    type: json
  index:
    type: string

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: the_image
      flavor: m1.foo
      availability_zone: {get_param: [availability_zone_map, {get_param: index}]}

FWIW we already use this technique in some TripleO templates, and it works
pretty well.

https://github.com/openstack/tripleo-heat-templates/blob/master/network/ports/external_from_pool.yaml#L35

Steve


responded Mar 3, 2016 by Steven_Hardy (16,900 points)   2 7 13
0 votes

Thank you both for your answers!

Indeed I need it sooner rather than later (as usual :) ) so the Newton
release is a bit too far away.
In the meantime I tested your solution with the index and the map,
and it works great!
I'll use that for now, and we will discuss taking over the Heat
blueprint internally.

Regards,

Mathieu

On Thursday, 3 March 2016 at 08:57 +0000, Steven Hardy wrote:

[snip]
responded Mar 3, 2016 by Mathieu_Velten (220 points)   1 1 1
0 votes

Another option is to try out the senlin service. What you need to do is
something like below:

  1. Create a heat template you want to deploy as a group, say,
    node_template.yaml

  2. Create a senlin profile spec (heat_stack.yaml) which may look
    like, for example:

    type: os.heat.stack
    version: 1.0
    properties:
      name: node_template
      template: node_template.yaml
      environment: shared_env.yaml

  3. Register the profile to senlin:

    $ senlin profile-create -s heat_stack.yaml stack_profile

    After this step, you can create individual instances (nodes) out of
    this profile.

  4. Create a cluster using the profile:

    $ senlin cluster-create -p stack_profile my_cluster

  5. Create a zone placement policy spec (zone_placement.yaml), which
    may look like:

    type: senlin.policy.zone_placement
    version: 1.0
    properties:
      zones:
        - name: zone1
          weight: 100
        - name: zone2
          weight: 50

  6. Initialize a policy object, which can be attached to any cluster:

    $ senlin policy-create -s zone_placement.yaml zone_policy

  7. Attach the above policy to your cluster:

    $ senlin cluster-policy-attach -p zone_policy my_cluster

Now, you can change your cluster's size at will, and the zone placement
policy will be enforced when new nodes are added or existing nodes are
removed. For example:

$ senlin cluster-scale-out -c 10 my_cluster

This will add 10 nodes to your cluster, and the nodes will be spread
across the availability zones based on the weights you specified. When
you scale in your cluster, the zone distribution is also evaluated.
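For illustration only (this is my own back-of-the-envelope sketch of weight-proportional placement; senlin's actual algorithm may differ in its details):

```python
# Back-of-the-envelope illustration (assumed weight-proportional placement;
# senlin's real algorithm may differ): split new nodes by relative weight.
weights = {"zone1": 100, "zone2": 50}
total = sum(weights.values())
new_nodes = 9  # 9 rather than 10 so the split is exact, with no remainder
split = {zone: new_nodes * w // total for zone, w in weights.items()}
print(split)  # -> {'zone1': 6, 'zone2': 3}
```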

If any help is needed, please stop by the #senlin IRC channel. We are more
than happy to provide support.

Regards,
Qiming


responded Mar 4, 2016 by Qiming_Teng (7,380 points)   3 9 16
0 votes

On Fri, Mar 04, 2016 at 01:09:26PM +0800, Qiming Teng wrote:
[snip]

Oh, I forgot to mention: this won't work at the moment, because we are not
sure that a stack as a whole can be placed into a single Nova availability
zone (other services have their own notion of availability zones as well).
The above example works if the profile is of the os.nova.server type.

Anyway, this example hopefully shows you how things are done with the
senlin service.

Regards,
Qiming

responded Mar 4, 2016 by Qiming_Teng (7,380 points)   3 9 16
0 votes

Hi Heat team,

A question inline.

Best regards,
Hongbin

-----Original Message-----
From: Steven Hardy [mailto:shardy@redhat.com]
Sent: March-03-16 3:57 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][heat] spawn a group of nodes on
different availability zones

[snip]

This is nice. It seems to address our heterogeneity requirement at deploy time. However, I wonder what the runtime behavior is. For example, I deploy a stack by:
$ heat stack-create -f rg_az_map.yaml -P az_map='{"0":"az1","1":"az2"}'

Then, I want to remove a server by:
$ heat stack-update -f rg_az_map.yaml -P az_map='{"0":"az1"}'

Will Heat remove resources in index "1" only (with resources in index "0" untouched)? Also, I wonder if we can dynamically add resources (with existing resources untouched). For example, add a server by:
$ heat stack-update -f rg_az_map.yaml -P az_map='{"0":"az1","1":"az2","2":"az3"}'

In addition, I want to point out that spreading across availability zones is not the only use case. Magnum has generic use cases to manage a heterogeneous set of resources. For example:
$ heat stack-create -f rg_az_map.yaml -P az_map='{"resource_group1":{"availability_zone":"az1","count":"2","flavor":"m1.foo",...},"resource_group2":{"availability_zone":"az2","count":"3","flavor":"m2.foo",...},...}'

Is it reasonable to expect Heat to support that?

responded Jun 7, 2016 by hongbin.lu_at_huawei (11,620 points)   3 3 6
0 votes

One more example of how you may do it using yaql:

oleksii@oleksii:~$ cat example.yaml
heat_template_version: 2013-05-23

parameters:
  az_list:
    type: string
  count:
    type: number

resources:
  rg:
    type: OS::Heat::ResourceGroup
    properties:
      count: {get_param: count}
      resource_def:
        type: server.yaml
        properties:
          index: "%index%"
          availability_zones: {get_param: az_list}

oleksii@oleksii:~$ cat server.yaml
heat_template_version: 2013-05-23
parameters:
  availability_zones:
    type: comma_delimited_list
  index:
    type: string
resources:
  instance:
    type: OS::Nova::Server
    properties:
      availability_zone:
        yaql:
          expression: $.data.availability_zones[int($.data.index) mod $.data.availability_zones.len()]
          data:
            availability_zones: {get_param: availability_zones}
            index: {get_param: index}
      flavor: m1.tiny
      image: cirros

For example, if count == 4 and az_list is [az1, az2], you will have
instance 1 in az1, instance 2 in az2, instance 3 in az1, and instance 4 in az2.
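The yaql expression boils down to a simple modulo lookup, which is easy to check outside Heat (plain Python, just to illustrate the arithmetic):

```python
# The yaql lookup above is just "member index mod number of AZs":
az_list = ["az1", "az2"]
count = 4
placement = [az_list[i % len(az_list)] for i in range(count)]
print(placement)  # -> ['az1', 'az2', 'az1', 'az2']
```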

On Wed, Jun 8, 2016 at 12:53 AM, Hongbin Lu hongbin.lu@huawei.com wrote:

[snip]
responded Jun 8, 2016 by Oleksii_Chuprykov (320 points)  
0 votes

On 07/06/16 23:53, Hongbin Lu wrote:
Hi Heat team,

A question inline.

Best regards,
Hongbin

-----Original Message-----
From: Steven Hardy [mailto:shardy@redhat.com]
Sent: March-03-16 3:57 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][heat] spawn a group of nodes on
different availability zones

On Wed, Mar 02, 2016 at 05:40:20PM -0500, Zane Bitter wrote:

On 02/03/16 05:50, Mathieu Velten wrote:

Hi all,

I am looking at a way to spawn nodes in different specified
availability zones when deploying a cluster with Magnum.

Currently Magnum directly uses predefined Heat templates with Heat
parameters to handle configuration.
I tried to reach my goal by sticking to this model, however I
couldn't find a suitable Heat construct that would allow that.

Here are the details of my investigation :
- OS::Heat::ResourceGroup doesn't allow specifying a list as a
variable to iterate over, so we would need one ResourceGroup per AZ
- OS::Nova::ServerGroup only allows restrictions at the hypervisor
level
- OS::Heat::InstanceGroup has an AZs parameter, but it is marked
unimplemented and is CFN-specific.
- OS::Nova::HostAggregate only seems to allow adding some metadata
to a group of hosts in a defined availability zone
- the repeat function only works inside the properties section of a
resource and can't be used at the resource level itself, so
something like the following is not allowed:

resources:
  repeat:
    for_each:
      <%az%>: { get_param: availability_zones }
    template:
      rg-<%az%>:
        type: OS::Heat::ResourceGroup
        properties:
          count: 2
          resource_def:
            type: hot_single_server.yaml
            properties:
              availability_zone: <%az%>

The only possibility I see is generating one ResourceGroup per AZ,
but that would require some big changes in Magnum to handle
modification/generation of templates.

Any ideas ?

This is a long-standing missing feature in Heat. There are two
blueprints for this (I'm not sure why):

https://blueprints.launchpad.net/heat/+spec/autoscaling-availabilityzones-impl
https://blueprints.launchpad.net/heat/+spec/implement-autoscalinggroup-availabilityzones

The latter had a spec with quite a lot of discussion:

https://review.openstack.org/#/c/105907

And even an attempted implementation:

https://review.openstack.org/#/c/116139/

which was making some progress but is long out of date and would need
serious work to rebase. The good news is that some of the changes I
made in Liberty like https://review.openstack.org/#/c/213555/ should
hopefully make it simpler.

All of which is to say, if you want to help then I think it would be
totally do-able to land support for this relatively early in Newton :)

Failing that, the only thing I can think to try is something I am
pretty sure won't work: a ResourceGroup with something like:

   availability_zone: {get_param: [AZ_map, "%i"]}

where AZ_map looks something like {"0": "az-1", "1": "az-2", "2":
"az-1", ...} and you're using the member index to pick out the AZ to
use from the parameter. I don't think that works (if "%i" is resolved
after get_param then it won't, and I suspect that's the case) but it's
worth a try if you need a solution in Mitaka.

Yeah, this won't work if you attempt to do the map/index lookup in the
top-level template where the ResourceGroup is defined, but it does
work if you pass both the map and the index into the nested stack, e.g.
something like this (untested):

$ cat rg_az_map.yaml
heat_template_version: 2015-04-30

parameters:
  az_map:
    type: json
    default:
      '0': az1
      '1': az2

resources:
  AGroup:
    type: OS::Heat::ResourceGroup
    properties:
      count: 2
      resource_def:
        type: server_mapped_az.yaml
        properties:
          availability_zone_map: {get_param: az_map}
          index: '%index%'

$ cat server_mapped_az.yaml
heat_template_version: 2015-04-30

parameters:
  availability_zone_map:
    type: json
  index:
    type: string

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: the_image
      flavor: m1.foo
      availability_zone: {get_param: [availability_zone_map, {get_param: index}]}
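The reason the two-template version works can be sketched in plain Python (an illustrative model only, not Heat internals): ResourceGroup substitutes `%index%` while generating each member's definition, and only afterwards does the nested stack resolve its `get_param` map lookup.

```python
# Illustrative model (plain Python, not Heat internals) of the two-step
# resolution in the templates above: ResourceGroup replaces %index%
# with the member's index first, then the nested stack looks that
# index up in the availability-zone map it was passed as a parameter.
az_map = {"0": "az1", "1": "az2"}
count = 2

members = []
for i in range(count):
    index = str(i)  # step 1: %index% -> "0", "1", ...
    # step 2: get_param: [availability_zone_map, index] in the nested stack
    members.append({"name": "AGroup/%s" % index,
                    "availability_zone": az_map[index]})
```

With the lookup attempted directly in the top-level template instead, step 2 would effectively run before step 1, which is why the single-template variant is expected to fail.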

This is nice. It seems to address our heterogeneity requirement at deploy time. However, I wonder about the runtime behavior. For example, I deploy a stack by:
$ heat stack-create -f rg_az_map.yaml -P az_map='{"0":"az1","1":"az2"}'

Then, I want to remove a server by:
$ heat stack-update -f rg_az_map.yaml -P az_map='{"0":"az1"}'

Will Heat remove only the resources at index "1" (leaving the resources at index "0" untouched)? Also, I wonder if we can dynamically add resources (with existing resources untouched). For example, add a server by:
$ heat stack-update -f rg_az_map.yaml -P az_map='{"0":"az1","1":"az2","2":"az3"}'

Removing members from the end of a ResourceGroup works fairly well. It's
when you have to remove members from anywhere else in the list that
things go very very bad very very quickly, which is the reason that I
don't recommend ResourceGroup to anybody.
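The removal semantics can be sketched in plain Python (an illustrative model, not Heat code): members are identified by their position index, so dropping the first entry re-keys every surviving member, and Heat would replace them all.

```python
# Illustrative model (plain Python, not Heat code) of why shrinking a
# ResourceGroup anywhere but at the end is painful: members are named
# by position, so removing the first entry changes the definition of
# every surviving index, and those members get rebuilt.
def desired_members(azs):
    # ResourceGroup names members by position: "0" .. str(len(azs) - 1)
    return {str(i): az for i, az in enumerate(azs)}

before = desired_members(["az1", "az2", "az3"])

# Removing the *last* AZ: surviving members "0" and "1" are unchanged.
drop_tail = desired_members(["az1", "az2"])
changed_tail = [k for k in drop_tail if drop_tail[k] != before[k]]

# Removing the *first* AZ: every surviving index now maps to a
# different AZ, so members "0" and "1" would both be replaced.
drop_head = desired_members(["az2", "az3"])
changed_head = [k for k in drop_head if drop_head[k] != before[k]]
```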

In addition, I want to point out that spreading across availability zones is not the only use case. Magnum has generic use cases for managing a heterogeneous set of resources. For example:
$ heat stack-create -f rg_az_map.yaml -P az_map='{"resource_group1":{"availability_zone":"az1","count":"2","flavor":"m1.foo",...},"resource_group2":{"availability_zone":"az2","count":"3","flavor":"m2.foo",...},...}'

Is it reasonable to expect Heat to support that?

IMHO no, it's not. If you want a stack of heterogeneous resources, just
create a template with a bunch of heterogeneous resources. (We're hoping
to build a library[1] that will make this even easier, but it's pretty
straightforward even without that.)

Think of ResourceGroup and even autoscaling groups as the world's
dumbest template generators. The reason to use them is if you need the
orchestration capabilities that they have tightly integrated with Heat -
e.g. batched rolling upgrades. Turning them into general-purpose
template generators is very much beside the point, and the results will
never be entirely satisfactory.
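The "just create a template with a bunch of heterogeneous resources" approach can be sketched as a tiny generator. All names and fields below are hypothetical, not Magnum's or the proposed library's actual code:

```python
# Minimal sketch (hypothetical names, not Magnum code) of generating a
# Heat template with heterogeneous groups of servers outside of Heat,
# instead of forcing ResourceGroup to act as a template generator.
def build_template(groups):
    resources = {}
    for name, spec in groups.items():
        for i in range(spec["count"]):
            # One plain OS::Nova::Server per member; each group can use
            # its own flavor, availability zone, etc.
            resources["%s-%d" % (name, i)] = {
                "type": "OS::Nova::Server",
                "properties": {
                    "flavor": spec["flavor"],
                    "availability_zone": spec["availability_zone"],
                },
            }
    return {"heat_template_version": "2015-04-30", "resources": resources}

tmpl = build_template({
    "group1": {"availability_zone": "az1", "count": 2, "flavor": "m1.foo"},
    "group2": {"availability_zone": "az2", "count": 3, "flavor": "m2.foo"},
})
```

Because each member is a named top-level resource rather than an indexed group member, adding or removing any single member touches only that resource on stack-update.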

cheers,
Zane.

[1] https://review.openstack.org/#/c/328822/

responded Jun 16, 2016 by Zane_Bitter (21,640 points)   4 6 12
...