
[openstack-dev] [upgrades][skip-level][leapfrog] - RFC - Skipping releases when upgrading


Hello Stackers,

As I'm sure many of you know there was a talk about doing "skip-level"[0]
upgrades at the OpenStack Summit which quite a few folks were interested
in. Today many of the interested parties got together and talked about
doing more of this in a formalized capacity. Essentially we're looking for
cloud upgrades with the possibility of skipping releases, ideally enabling
an N+3 upgrade. In our opinion it would go a very long way to solving cloud
consumer and deployer problems if folks didn't have to deal with an upgrade
every six months. While we talked about various issues and some of the
current approaches being kicked around we wanted to field our general chat
to the rest of the community and request input from folks that may have
already fought such a beast. If you've taken on an adventure like this how
did you approach it? Did it work? Any known issues, gotchas, or things
folks should be generally aware of?

During our chat today we generally landed on an in-place upgrade with known
API service downtime and little (at least as little as possible) data plane
downtime. The process discussed was basically:
a1. Create utility "thing-a-me" (container, venv, etc) which contains the
required code to run a service through all of the required upgrades.
a2. Stop service(s).
a3. Run migration(s)/upgrade(s) for all releases using the utility
"thing-a-me".
a4. Repeat for all services.

b1. Once all required migrations are complete run a deployment using the
target release.
b2. Ensure all services are restarted.
b3. Ensure cloud is functional.
b4. profit!
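For illustration, the a/b steps above can be sketched as a plan generator. The release names, service names, and command strings here are hypothetical placeholders, not the actual OSA tooling:

```python
# Sketch of the a1-a4 / b1-b4 loop above. Release names, service names,
# and the command shapes are illustrative assumptions only.

RELEASES = ["kilo", "liberty", "mitaka", "newton"]  # N .. N+3

def upgrade_plan(services, releases=RELEASES):
    """Return the ordered command list for an in-place skip-level upgrade."""
    plan = []
    for svc in services:
        plan.append(f"build-utility-venv {svc}")      # a1: utility venv/container per service
        plan.append(f"systemctl stop {svc}")          # a2: stop the service
        for rel in releases[1:]:                      # a3: walk every intermediate release
            plan.append(f"run-in-venv {svc} {rel} db sync")
        # a4: the outer loop repeats this for every service
    plan.append(f"deploy --release {releases[-1]}")   # b1: deploy the target release
    plan.append("restart-all-services")               # b2
    plan.append("run-smoke-tests")                    # b3/b4
    return plan
```

The key property is ordering: every service's migrations complete across all intermediate releases before the target-release deployment runs.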

Obviously, there's a lot of hand waving here but such a process is being
developed by the OpenStack-Ansible project[1]. Currently, the OSA tooling
will allow deployers to upgrade from Juno/Kilo to Newton using Ubuntu
14.04. While this has worked in the lab, it's early in development (YMMV).
Also, the tooling is not very general purpose or portable outside of OSA
but it could serve as a guide or just a general talking point. Are there
other tools out there that solve for the multi-release upgrade? Are there
any folks that might want to share their expertise? Maybe a process outline
that worked? Best practices? Do folks believe tools are the right way to
solve this or would comprehensive upgrade documentation be better for the
general community?

As most of the upgrade issues center around database migrations, we
discussed some of the potential pitfalls at length. One approach was to
roll-up all DB migrations into a single repository and run all upgrades for
a given project in one step. Another was to simply have multiple python
virtual environments and just run in-line migrations from a version
specific venv (this is what the OSA tooling does). Does one way work better
than the other? Any thoughts on how this could be better? Would having
N+2/3 migrations addressable within the projects, even if they're not
tested any longer, be helpful?
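As a concrete sketch of the second option (one venv per release, migrations run from each, as in the OSA tooling), the command lists could be generated like this. The package pin format and the `db sync` entry point are assumptions for illustration:

```python
import sys
from pathlib import Path

def venv_migration_commands(project, releases, workdir="/opt/leap"):
    """Build (but do not run) the command lists for the venv-per-release
    approach: each release gets its own venv carrying that release's code,
    and each venv runs its own in-tree DB migrations, in order."""
    cmds = []
    for rel in releases:
        venv = Path(workdir) / f"{project}-{rel}"
        cmds.append([sys.executable, "-m", "venv", str(venv)])
        # Hypothetical pin; real tooling would install from a tag or branch.
        cmds.append([str(venv / "bin" / "pip"), "install", f"{project}=={rel}"])
        cmds.append([str(venv / "bin" / f"{project}-manage"), "db", "sync"])
    return cmds
```

Because each venv is isolated, a release's migration code never has to coexist with another release's dependencies.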

It was our general thought that folks would be interested in having the
ability to skip releases so we'd like to hear from the community to
validate our thinking. Additionally, we'd like to get more minds together
and see if folks are wanting to work on such an initiative, even if this
turns into nothing more than a co-op/channel where we can "phone a friend".
Would it be good to try and secure some PTG space to work on this? Should
we try to get a working group going?

If you've made it this far, please forgive my stream of consciousness. I'm
trying to ask a lot of questions and distill long form conversation(s) into
as little text as possible all without writing a novel. With that said, I
hope this finds you well, I look forward to hearing from (and working with)
you soon.

[0] https://etherpad.openstack.org/p/BOS-forum-skip-level-upgrading
[1] https://github.com/openstack/openstack-ansible-ops/tree/master/leap-upgrades

--

Kevin Carter
IRC: Cloudnull


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
asked Jun 6, 2017 in openstack-dev by Carter,_Kevin

6 Responses


Warning: wall of text incoming :-)

On 26/05/2017 03:55, Carter, Kevin wrote:
> If you've taken on an adventure like this how did you approach
> it? Did it work? Any known issues, gotchas, or things folks should be
> generally aware of?

We're fresh out of a Juno-to-Mitaka upgrade. It worked, but it required
significant downtime of the user VMs for an OS upgrade on all compute
nodes (we fell behind the CentOS update schedule due to some code requiring
specific kernel versions, so we could not perform a no-downtime upgrade
even though we're using LinuxBridge for the data plane).

We took a significant amount of time to automate almost everything (OS
updates, OpenStack updates and configuration management), but the
control plane migration was performed manually with a lot of
verification steps to ensure the databases would not end up in shambles
(the procedure was carefully written in a runbook and tested on a
separate testbed and on a snapshot of all production databases).

As I said, the update worked but we hit a few snags:
1. glance and neutron DBs were created with latin1 as default charset,
so we had to convert both to UTF8 (dump, iconv, fix the definition,
restore) - this is an operational issue on our side, though
2. on the testbed we found that nova created duplicated entries for all
hypervisors after starting all services; we traced that down to
compute_nodes.host being NULL for all HVs
3. [cache]/enable in nova.conf *must* be set to true if there are
multiple instances of nova-consoleauth/nova-novncproxy; in previous
releases we'd just point nova to our memcache servers and it would work
(probably we overlooked something in the docs)
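For snag 1, the per-table side of such a conversion looks roughly like the following. This is illustrative SQL generation only - an in-place ALTER does not repair rows whose bytes were stored under the wrong charset, which is why the dump/iconv/restore route described above was used:

```python
def utf8_conversion_sql(database, tables):
    """Emit ALTER statements switching a MySQL schema's default charset
    and each of its tables from latin1 to utf8 (illustrative only)."""
    stmts = [f"ALTER DATABASE `{database}` DEFAULT CHARACTER SET utf8;"]
    for table in tables:
        stmts.append(
            f"ALTER TABLE `{database}`.`{table}` CONVERT TO CHARACTER SET utf8;"
        )
    return stmts
```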

> During our chat today we generally landed on an in-place upgrade with
> known API service downtime and little (at least as little as possible)
> data plane downtime. The process discussed was basically:
> a1. Create utility "thing-a-me" (container, venv, etc) which contains
> the required code to run a service through all of the required upgrades.
> a2. Stop service(s).
> a3. Run migration(s)/upgrade(s) for all releases using the utility
> "thing-a-me".
> a4. Repeat for all services.
>
> b1. Once all required migrations are complete run a deployment using the
> target release.
> b2. Ensure all services are restarted.
> b3. Ensure cloud is functional.
> b4. profit!

That was our basic workflow, except the "thing-a-me" was myself :-)

Joking aside, we kept one controller host out of the "mass upgrade" loop
and carefully performed single-version upgrades of the packages, running
all required DB migrations for each version.

> Also, the tooling is not very general purpose or portable outside of OSA
> but it could serve as a guide or just a general talking point.
> Are there other tools out there that solve for the multi-release upgrade?

Not that I know of. AFAIR, the BlueBox guys (now IBM) had some
Ansible-based tooling for automating a single-version upgrade; I don't
know if they ever considered skip-level upgrades.

> Best practices?

  1. automate as much as possible
  2. use a configuration management tool to deploy the final configuration
    to all nodes (Puppet, Ansible, Chef...)
  3. have a testing environment which resembles as closely as possible
    the production environment
  4. simulate all migrations on a snapshot of all production databases to
    catch any issue early
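Practice 4 can be exercised even with a tiny harness - here with SQLite standing in for the real databases, purely as a sketch. The key point is that migrations run against a disposable copy, never the snapshot itself:

```python
import os
import shutil
import sqlite3
import tempfile

def rehearse_migration(snapshot_path, migration_statements):
    """Apply migrations to a scratch copy of a DB snapshot and report
    whether they all succeeded; the snapshot itself is never modified."""
    fd, scratch = tempfile.mkstemp(suffix=".db")
    os.close(fd)
    shutil.copyfile(snapshot_path, scratch)  # work only on the copy
    try:
        conn = sqlite3.connect(scratch)
        try:
            with conn:  # one transaction: commits on success, rolls back on error
                for stmt in migration_statements:
                    conn.execute(stmt)
            return True
        except sqlite3.Error:
            return False
        finally:
            conn.close()
    finally:
        os.remove(scratch)  # always discard the scratch copy
```

A real rehearsal would use the production engine (MySQL/MariaDB) and the project's own migration tooling, but the copy-then-migrate discipline is the same.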

> Do folks believe tools are the right way to solve this or would
> comprehensive upgrade documentation be better for the general community?

Both, actually. A generic upgrade tool would need to cover a lot of
deployment scenarios, so it would probably end up being a "reference
implementation" only.

Comprehensive skip-level upgrade documentation would be optimal (in our
case we had to rebuild Kilo and Liberty docs from sources).

> As most of the upgrade issues center around database migrations, we
> discussed some of the potential pitfalls at length. One approach was to
> roll-up all DB migrations into a single repository and run all upgrades
> for a given project in one step. Another was to simply have multiple
> python virtual environments and just run in-line migrations from a
> version specific venv (this is what the OSA tooling does). Does one way
> work better than the other? Any thoughts on how this could be better?
> Would having N+2/3 migrations addressable within the projects, even if
> they're not tested any longer, be helpful?

Some projects apparently keep shipping all migrations, even though
they're not supported.

> It was our general thought that folks would be interested in having the
> ability to skip releases so we'd like to hear from the community to
> validate our thinking.

That's good to know :-)

--
Matteo Panella
INFN CNAF
Via Ranzani 13/2 c - 40127 Bologna, Italy
Phone: +39 051 609 2903


OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

responded May 26, 2017 by Matteo_Panella

> As most of the upgrade issues center around database migrations, we
> discussed some of the potential pitfalls at length. One approach was to
> roll-up all DB migrations into a single repository and run all upgrades
> for a given project in one step. Another was to simply have multiple
> python virtual environments and just run in-line migrations from a
> version specific venv (this is what the OSA tooling does). Does one way
> work better than the other? Any thoughts on how this could be better?

IMHO, and speaking from a Nova perspective, I think that maintaining a
separate repo of migrations is a bad idea. We occasionally have to fix a
migration to handle a case where someone is stuck and can't move past a
certain revision due to some situation that was not originally
understood. If you have a separate copy of our migrations, you wouldn't
get those fixes. Nova hasn't compacted migrations in a while anyway, so
there's not a whole lot of value there I think.

The other thing to consider is that our schema migrations often
require data migrations to complete before moving on. That means you
really have to move to some milestone version of the schema, then
move/transform data, and then move to the next milestone. Since we
manage those according to releases, those are the milestones that are
most likely to be successful if you're stepping through things.
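That milestone ordering can be made explicit in the orchestration. A sketch with stand-in callables - the real schema and data steps would be each project's own commands:

```python
def step_through_milestones(releases, schema_upgrade, data_migrate):
    """Advance one release milestone at a time: a release's schema
    migration, then its data migrations, must both finish before the
    next release's schema step may start."""
    history = []
    for release in releases:
        schema_upgrade(release)
        history.append(("schema", release))
        data_migrate(release)
        history.append(("data", release))
    return history
```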

I do think that the idea of being able to generate a small utility
container (using the broad sense of the word) from each release, and
using those to step through N, N+1, N+2 to arrive at N+3 makes the most
sense.

Nova has offline tooling to push our data migrations (even though the
command is intended to be runnable online). The concern I would have
would be over how to push Keystone's migrations mechanically, since I
believe they moved forward with their proposal to do data migrations in
stored procedures with triggers. Presumably there is a need for
something similar to nova's online-data-migrations command which will
trip all the triggers and provide a green light for moving on?
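The "green light" loop around a batched data-migration command might look like this - modelled loosely on how nova-manage's online_data_migrations can be driven to completion; the batch runner is a stand-in:

```python
def drain_data_migrations(run_batch, max_count=50):
    """Repeatedly invoke a batched online data migration until a batch
    reports zero rows migrated, i.e. nothing is left to do. Returns the
    total number of rows migrated (the 'green light' for moving on)."""
    total = 0
    while True:
        migrated = run_batch(max_count)  # stand-in for the real command
        if migrated == 0:
            return total
        total += migrated
```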

In the end, projects support N->N+1 today, so if you're just stepping
through actual 1-version gaps, you should be able to do as many of those
as you want and still be running "supported" transitions. There's a lot
of value in that, IMHO.

--Dan


responded May 26, 2017 by Dan_Smith

On 05/26/2017 10:56 AM, Dan Smith wrote:

>> As most of the upgrade issues center around database migrations, we
>> discussed some of the potential pitfalls at length. One approach was to
>> roll-up all DB migrations into a single repository and run all upgrades
>> for a given project in one step. Another was to simply have multiple
>> python virtual environments and just run in-line migrations from a
>> version specific venv (this is what the OSA tooling does). Does one way
>> work better than the other? Any thoughts on how this could be better?

> IMHO, and speaking from a Nova perspective, I think that maintaining a
> separate repo of migrations is a bad idea. We occasionally have to fix a
> migration to handle a case where someone is stuck and can't move past a
> certain revision due to some situation that was not originally
> understood. If you have a separate copy of our migrations, you wouldn't
> get those fixes. Nova hasn't compacted migrations in a while anyway, so
> there's not a whole lot of value there I think.

+1 I think it's very important that migration logic not be duplicated.
Nova's (and everyone else's) migration files have the information on how
to move between specific schema versions. Any concatenation of these
into an effective "N+X" migration should be done on the fly as much as
possible.

> The other thing to consider is that our schema migrations often
> require data migrations to complete before moving on. That means you
> really have to move to some milestone version of the schema, then
> move/transform data, and then move to the next milestone. Since we
> manage those according to releases, those are the milestones that are
> most likely to be successful if you're stepping through things.
>
> I do think that the idea of being able to generate a small utility
> container (using the broad sense of the word) from each release, and
> using those to step through N, N+1, N+2 to arrive at N+3 makes the most
> sense.

+1

> Nova has offline tooling to push our data migrations (even though the
> command is intended to be runnable online). The concern I would have
> would be over how to push Keystone's migrations mechanically, since I
> believe they moved forward with their proposal to do data migrations in
> stored procedures with triggers. Presumably there is a need for
> something similar to nova's online-data-migrations command which will
> trip all the triggers and provide a green light for moving on?

I haven't looked at what Keystone is doing, but to the degree they are
using triggers, those triggers would only impact new data operations as
they continue to run into the schema that is straddling between two
versions (e.g. old column/table still exists, data should be synced to
new column/table). If they are actually running a stored procedure to
migrate existing data (which would be surprising to me...) then I'd
assume that invokes just like any other "ALTER TABLE" instruction in
their migrations. If those operations themselves rely on the triggers,
that's fine.
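The expand-phase trigger pattern being described - old column still written, trigger keeps the new column in sync - can be demonstrated with SQLite. This is purely illustrative; keystone's real triggered migrations target MySQL/PostgreSQL and are more involved:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user (id INTEGER PRIMARY KEY, password TEXT, password_hash TEXT);
    -- Expand phase: the new column exists alongside the old one, and a
    -- trigger mirrors old-style writes into it, so old and new code see
    -- consistent data while the schema "straddles" two versions.
    CREATE TRIGGER sync_password AFTER INSERT ON user
    BEGIN
        UPDATE user SET password_hash = NEW.password WHERE id = NEW.id;
    END;
""")
conn.execute("INSERT INTO user (password) VALUES ('s3cret')")  # old-style write
row = conn.execute("SELECT password, password_hash FROM user").fetchone()
# The trigger populated the new column from the old-style write.
```

Once all writers use the new column, a later contract migration drops the trigger and the old column.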

But a keystone person to chime in would be much better than me just
making stuff up.

> In the end, projects support N->N+1 today, so if you're just stepping
> through actual 1-version gaps, you should be able to do as many of those
> as you want and still be running "supported" transitions. There's a lot
> of value in that, IMHO.
>
> --Dan


responded May 26, 2017 by Mike_Bayer

> I haven't looked at what Keystone is doing, but to the degree they are
> using triggers, those triggers would only impact new data operations as
> they continue to run into the schema that is straddling between two
> versions (e.g. old column/table still exists, data should be synced to
> new column/table). If they are actually running a stored procedure to
> migrate existing data (which would be surprising to me...) then I'd
> assume that invokes just like any other "ALTER TABLE" instruction in
> their migrations. If those operations themselves rely on the triggers,
> that's fine.

I haven't looked closely either, but I thought the point was to
transform data. If they are, and you run through a bunch of migrations
where you end at a spot that expects that data was migrated while
running at step 3, triggers dropped at step 7, and then schema compacted
at step 11, then just blowing through them could be a problem. It'd work
for a greenfield install no problem because there was nothing to
migrate, but real people would trip over it.

> But a keystone person to chime in would be much better than me just
> making stuff up.

Yeah, same :)

--Dan


responded May 26, 2017 by Dan_Smith

I've mentioned this elsewhere but writing here for posterity...

Making N to N+1 upgrades seamless and work well is already challenging
today, which is one of the reasons why people aren't upgrading in the
first place.
Making N to N+1 upgrades work as well as possible already puts a great
strain on developers and resources: think about the testing and CI
involved in making sure things really work.

My opinion is that if upgrades were made to be a simple, easy and
seamless operation, it wouldn't be that much of a problem to upgrade
from N to N+3 by upgrading from release to release (three times) until
you've caught up.
But then, if upgrades are awesome, maybe operators won't be lagging 3
releases behind anymore.

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

responded May 26, 2017 by dms_at_redhat.com

On Fri, May 26, 2017 at 4:55 AM, Carter, Kevin kevin@cloudnull.com wrote:

> Hello Stackers,

Hi Kevin, all,

apologies for the very late response here - fwiw I was working at a remote
location all of last week and am catching up still. I was not at the PTG or
part of the original conversation but this thread && etherpad have been
very helpful, so thank you very much for sharing. Mostly replying to say
'this is something TripleO/upgrades are interested in too' - obviously not
for the P cycle - and to add some thoughts on how TripleO is doing upgrades
today.

Big +1 to David Simard's point that 'Making N to N+1 upgrades seamless and
work well is already challenging today' - ++ to that from our experience.
Besides anything else, going between versions we've also had to change the
workflow itself (docs @ [0] include a link to the composable services spec
that explains why the workflow had to change for Newton to Ocata upgrades).
The point is we are very much still working towards a seamless upgrade
experience - we are improving with each release, most notably N..O -
considering more pre-upgrade validations for example and trying to
minimize service downtime. Having said that, some more comments inline on
the goal of skipping upgrades:

> As I'm sure many of you know there was a talk about doing "skip-level"[0]
> upgrades at the OpenStack Summit which quite a few folks were interested
> in. Today many of the interested parties got together and talked about
> doing more of this in a formalized capacity. Essentially we're looking for
> cloud upgrades with the possibility of skipping releases, ideally enabling
> an N+3 upgrade. In our opinion it would go a very long way to solving cloud
> consumer and deployer problems if folks didn't have to deal with an upgrade
> every six months. While we talked about various issues and some of the
> current approaches being kicked around we wanted to field our general chat
> to the rest of the community and request input from folks that may have
> already fought such a beast. If you've taken on an adventure like this how
> did you approach it? Did it work? Any known issues, gotchas, or things
> folks should be generally aware of?
>
> During our chat today we generally landed on an in-place upgrade with
> known API service downtime and little (at least as little as possible) data
> plane downtime. The process discussed was basically:
> a1. Create utility "thing-a-me" (container, venv, etc) which contains the
> required code to run a service through all of the required upgrades.
> a2. Stop service(s).
> a3. Run migration(s)/upgrade(s) for all releases using the utility
> "thing-a-me".
> a4. Repeat for all services.
>
> b1. Once all required migrations are complete run a deployment using the
> target release.
> b2. Ensure all services are restarted.
> b3. Ensure cloud is functional.
> b4. profit!
>
> Obviously, there's a lot of hand waving here but such a process is being
> developed by the OpenStack-Ansible project[1]. Currently, the OSA tooling
> will allow deployers to upgrade from Juno/Kilo to Newton using Ubuntu
> 14.04. While this has worked in the lab, it's early in development (YMMV).
> Also, the tooling is not very general purpose or portable outside of OSA
> but it could serve as a guide or just a general talking point. Are there
> other tools out there that solve for the multi-release upgrade? Are there
> any folks that might want to share their expertise? Maybe a process outline
> that worked? Best practices? Do folks believe tools are the right way to
> solve this or would comprehensive upgrade documentation be better for the
> general community?

What about packages - what repos will we set up on these nodes ... will
they jump directly from current version to latest of target e.g. N+2? Is
that possible - I mean we may have to consider any version specific
packaging tasks. In TripleO we are actually using ansible tasks defined per
service manifest e.g. neutron l3 agent @ [1] to stop all the things and
then we rely on puppet (puppet-tripleo and service specific puppet modules)
to update packages, run dbase migrations e.g. [2] and start all the things
again (the exception to this general rule of ansible down/puppet up is some
core services, which we want to recover immediately rather than wait for
puppet run, like at [3] for example rabbit).

I am not by any stretch an expert on the database migrations so I leave
that discussion to more qualified folks, but just from a general scaling
point of view trying to maintain a single repo of all the migrations for
all services doesn't work, so +1 to the others here advocating that the
migrations live with the service and be compiled/applied by tooling at run
time - whether it is a container thing-a-me or puppet/whatever. For TripleO
you could even override the puppet PostDeploy steps and run Ansible tasks
instead if that accomplished what you needed for the upgrades in your
service list. In fact the TripleO Ocata to Pike upgrade overrides those to
run docker instead of puppet (puppet is still invoked however) to bring up
your services in containers.

Besides the obviously crucial migrations there are other issues to
consider. We've had to deal with changes to the services themselves, e.g.
deprecations such as removing foo-api.service and using apache for that
service instead of eventlet. And then special-case bugs like openvswitch -
we had to special-case ovs 2.4->2.5 for M..N and 2.5->2.6 for N..O to
prevent it from restarting during - and killing - the upgrade. In today's
workflow we would essentially need to combine these into one 'invocation'
of the upgrade, but I really have not thought about that in any detail.

thanks for reading, marios

[0]
https://docs.openstack.org/developer/tripleo-docs/post_deployment/upgrade.html#upgrading-the-overcloud-to-ocata-and-beyond
[1]
https://github.com/openstack/tripleo-heat-templates/blob/6f75d76d42203657a2b39af5269d2a8f586e93bc/puppet/services/neutron-l3.yaml#L87
[2]
https://github.com/openstack/puppet-neutron/blob/adaee02815771f5d89975212b8cea24b68750618/manifests/db/sync.pp#L27
[3]
https://github.com/openstack/tripleo-heat-templates/blob/6f75d76d42203657a2b39af5269d2a8f586e93bc/puppet/services/rabbitmq.yaml#L110

responded Jun 6, 2017 by Marios_Andreou
...