
[openstack-dev] [magnum] Discovery Mechanism For Swarm Bay

0 votes

Hi Devs,

I'd like to get some discussion going for possible improvements to the

discovery mechanism used in Magnum's Swarm bay.

The method of the existing review [1] is to use the public Swarm discovery endpoint and let the user pass a token in on the bay-create call. This is definitely not ideal for a couple of reasons. First, it requires the user to go out and request that token themselves. Second, it relies on having access to the internet and the public Swarm discovery endpoint.

Solving the first issue is fairly simple: the TemplateDefinition could request the token just like the CoreOS TemplateDefinition does. That still requires that not only Magnum but also all of the instances in the Bay have access to the public discovery endpoint. I still think this option has some merit in some cases; how many of our users will really be running their bays in isolation without access to Docker's public services (registry/hub/discovery)?
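As a rough sketch of that flow (the endpoint URL and the bare-token response format here are assumptions based on how the public hosted discovery service historically behaved, not actual Magnum code), the TemplateDefinition could acquire the token on the user's behalf along these lines:

```python
import urllib.request

# Assumed public discovery endpoint; Swarm's hosted service historically
# returned a bare cluster token in the body of the POST response.
PUBLIC_DISCOVERY = "https://discovery.hub.docker.com/v1/clusters"

def request_swarm_token(endpoint=PUBLIC_DISCOVERY):
    """POST to the hosted discovery service and return the new cluster token."""
    req = urllib.request.Request(endpoint, data=b"", method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode().strip()

def token_discovery_url(token):
    """Swarm consumes the token as a token:// discovery URL."""
    return "token://%s" % token
```

The bay-create flow would then hand `token_discovery_url(request_swarm_token())` into the Heat template, much as the CoreOS template wires in its own token today.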

Solving the second issue is going to be a bit more complex. Swarm does provide multiple alternatives to public token-based discovery [2]. These revolve around either static lists of hosts or other configuration services like etcd. A static list of hosts is going to make growing or shrinking bays a real pain, so I think the best option here is to go with a configuration service.
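For example, standalone Swarm's etcd discovery takes a URL of the form `etcd://<hosts>/<path>` in place of a `token://` URL; a small helper (illustrative only, not from the review) might build it:

```python
def etcd_discovery_url(endpoints, path):
    """Build the etcd:// discovery URL that standalone Swarm's manage/join
    commands accept in place of a token:// URL.

    endpoints: list of "host:port" strings for the etcd cluster.
    path: key prefix under which this bay registers its nodes.
    """
    return "etcd://%s/%s" % (",".join(endpoints), path.strip("/"))
```

With something like `etcd_discovery_url(["10.0.0.5:2379", "10.0.0.6:2379"], "/swarm")`, every node in the bay shares a single discovery URL, and membership grows or shrinks as nodes join or leave, with no static host list to edit.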

Configuration services present their own issues. Should each bay host its own service, similar to what we're doing for the Swarm manager and agents? Or should it be up to the operator to run a global configuration service, with each bay using some unique ID (the Bay UUID may work here) for discovery?

Running a service inside each bay will likely require a different template for each type of service, and will add more services to each Bay that may need to be maintained. Because of that, the simpler solution may be to rely on the operator to run a global service.
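Under the global-service option, the only per-bay state needed is a unique key prefix, and the Bay UUID fits naturally. A minimal sketch of the namespacing (the prefix and function names are hypothetical, purely to illustrate the idea):

```python
import uuid

def bay_discovery_path(bay_uuid, prefix="magnum/swarm"):
    """Derive a per-bay discovery namespace inside a shared, operator-run
    configuration service, so bays never see each other's membership."""
    return "%s/%s" % (prefix, bay_uuid)

# Each bay-create would mint its own namespace:
bay_id = uuid.uuid4()
path = bay_discovery_path(bay_id)
```

Because the namespace is derived from the Bay UUID, collisions between bays sharing one global etcd are avoided without any coordination beyond what Magnum already stores.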

Anyways, any thoughts here will be greatly appreciated!

Thanks!

Andrew

[1]: https://review.openstack.org/#/c/174112


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
asked Apr 16, 2015 in openstack-dev by Andrew_Melton (680 points)

4 Responses

0 votes

Andrew,

For clustered services that require generation of a discovery token, and access to a discovery service, we need to keep in mind a few things:

1) Cloud operators have their own preferences for what the defaults should be. Some may want to run their own discovery services, and others will not.

2) Users may not want to use the discovery service offered by their cloud provider. They may want to run their own.

3) We do not want the burden for running Magnum to be any higher than it has to be. We should not require cloud operators to also run discovery services in order to use Magnum if they are willing to rely on the public discovery services.

So I propose the following:

  • The address of the discovery service should be set by a configuration directive in magnum.conf.
  • The value of this directive should default to the public discovery service.
  • The bay-create call should accept a parameter that lets the user supply their own value for this setting.

This approach addresses all three concerns laid out above. The same approach should apply both to CoreOS and Swarm Bay so the user experience is consistent.
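A minimal sketch of that resolution order (the names below are illustrative, not actual magnum.conf option names):

```python
# Assumed default: the public hosted discovery service.
PUBLIC_DISCOVERY = "https://discovery.hub.docker.com/v1/clusters"

def resolve_discovery_endpoint(bay_create_param=None, conf_value=None):
    """A user-supplied bay-create parameter wins; otherwise fall back to the
    operator's magnum.conf directive; otherwise the public default."""
    return bay_create_param or conf_value or PUBLIC_DISCOVERY
```

An operator running a private discovery service sets the directive once; a user who wants their own passes the parameter at bay-create time; everyone else gets the public service with no extra operational burden.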

Regards,

Adrian

On Apr 16, 2015, at 12:31 PM, Andrew Melton andrew.melton@RACKSPACE.COM wrote:


[1]: https://review.openstack.org/#/c/174112


responded Apr 16, 2015 by Adrian_Otto (11,060 points)
0 votes

On 04/16/2015 05:29 PM, Adrian Otto wrote:
This approach addresses all three concerns laid out above. The same
approach should apply both to CoreOS and Swarm Bay so the user
experience is consistent.

So, the above comment got me thinking... why does the user experience
need to be consistent? It's not like developers are going to deploy
stuff on Docker Swarm and Fleet/CoreOS and Kubernetes.

Developers who want to deploy on container clusters generally have
already picked one of them -- Mesos, Kubernetes, Docker Swarm, Fleet,
etc. What is the benefit of having an abstraction layer that tries to
make the usage of these different developer tools RESTful and
consistent? Why wouldn't the developer/deployer simply install
Kubernetes or Mesos or Docker Swarm in one or more VMs using Heat/Murano
and use the native API of their container cluster orchestration tool of
choice?

It's a bit like saying that we need to make a SQL-as-a-Service API endpoint that installs various database servers into VMs and offers some ANSI SQL via a REST API to communicate with them. It just doesn't make any logical sense to me. Database developers want to use the native APIs of their preferred database server, not some lowest-common-denominator SQL-over-REST interface.

Can someone enlighten me please?

Best,
-jay


responded Apr 16, 2015 by Jay_Pipes (59,760 points)
0 votes

Jay,

In Paris, we had one spec and zero core.

By the time we hit Vancouver, we will have a few scenarios working
with 3 milestones.

Come Vancouver, we'll definitely know whether there is appetite and interest in the developer/deployer community for the set of abstractions and packaging we currently have in progress/proposed. Interesting times, as they say :)

Thanks,
dims

On Thu, Apr 16, 2015 at 6:21 PM, Jay Pipes jaypipes@gmail.com wrote:

--
Davanum Srinivas :: https://twitter.com/dims


responded Apr 16, 2015 by Davanum_Srinivas (35,920 points)
0 votes

Jay,

Fair question. Native tools do not install bays; the Magnum tools do. Once a bay exists, you can use native tools for native operations on it if you want. It would be awkward to have two similar but different configuration defaults within Magnum for setting up clustered bay types. We already have a solution for CoreOS, and I want the Swarm bay solution to match it. From a configuration-discovery perspective, Swarm and CoreOS are already equivalent; it's just a matter of making a control plane for that purpose that gives cloud operators the ability to not depend on internet-based services.

Adrian

On Apr 16, 2015, at 3:25 PM, Jay Pipes jaypipes@gmail.com wrote:



responded Apr 16, 2015 by Adrian_Otto (11,060 points)