
[openstack-dev] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

0 votes

Nova stores the output of the Cinder os-initialize_connection info API
in the Nova block_device_mappings table, and uses that later for making
volume connections.

This data can get out of whack or need to be refreshed, like if your
ceph server IP changes, or you need to recycle some secret uuid for your
ceph cluster.

I think the only ways to do this on the nova side today are via volume
detach/re-attach, reboot, migrations, etc - all of which, except live
migration, are disruptive to the running guest.

I've kicked around the idea of adding some sort of admin API interface
for refreshing the BDM.connection_info on-demand if needed by an
operator. Does anyone see value in this? Are operators doing stuff like
this already, but maybe via direct DB updates?

We could have something in the compute API which calls down to the
compute for an instance and has it refresh the connection_info from
Cinder and updates the BDM table in the nova DB. It could be an admin
action API, or part of the os-server-external-events API, like what we
have for the 'network-changed' event sent from Neutron which nova uses
to refresh the network info cache.
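
For illustration, a rough sketch of what submitting such an event could look like. Only the os-server-external-events endpoint and the 'network-changed' precedent exist today; the 'volume-refreshed' event name and its handling in nova-compute are hypothetical:

import requests

NOVA_ENDPOINT = 'http://controller:8774/v2.1'  # assumed deployment URL
TOKEN = '<admin-scoped keystone token>'        # assumed credentials

def send_volume_refreshed(server_uuid, volume_id):
    # Hypothetical event, modeled on the 'network-changed' event that
    # Neutron sends to make Nova refresh its network info cache.
    body = {'events': [{'server_uuid': server_uuid,
                        'name': 'volume-refreshed',  # does not exist today
                        'tag': volume_id}]}
    resp = requests.post(NOVA_ENDPOINT + '/os-server-external-events',
                         json=body,
                         headers={'X-Auth-Token': TOKEN})
    resp.raise_for_status()
    return resp.json()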

Other ideas or feedback here?

--

Thanks,

Matt


asked Sep 14, 2017 in openstack-operators by mriedemos_at_gmail.c

12 Responses

0 votes

On 08 Jun 2017, at 15:58, Matt Riedemann mriedemos@gmail.com wrote:

Nova stores the output of the Cinder os-initialize_connection info API in the Nova block_device_mappings table, and uses that later for making volume connections.

This data can get out of whack or need to be refreshed, like if your ceph server IP changes, or you need to recycle some secret uuid for your ceph cluster.

I think the only ways to do this on the nova side today are via volume detach/re-attach, reboot, migrations, etc - all of which, except live migration, are disruptive to the running guest.

I've kicked around the idea of adding some sort of admin API interface for refreshing the BDM.connection_info on-demand if needed by an operator. Does anyone see value in this? Are operators doing stuff like this already, but maybe via direct DB updates?

We could have something in the compute API which calls down to the compute for an instance and has it refresh the connection_info from Cinder and updates the BDM table in the nova DB. It could be an admin action API, or part of the os-server-external-events API, like what we have for the 'network-changed' event sent from Neutron which nova uses to refresh the network info cache.

Other ideas or feedback here?

I opened https://bugs.launchpad.net/cinder/+bug/1452641 for this issue some time ago.
Back then I was thinking more of using an alias rather than dealing with IP addresses directly. From
what I understand, this should work with Ceph. In any case, there is still interest in a fix :-)
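
For illustration, this is roughly the shape of the rbd connection_info Nova keeps in the BDM record (field names from Cinder's rbd driver; all values invented). The alias idea amounts to storing a stable name in 'hosts' instead of raw monitor IPs, so the stored data never goes stale:

connection_info = {
    'driver_volume_type': 'rbd',
    'data': {
        'name': 'volumes/volume-3c96c281',  # invented example
        'hosts': ['ceph-mon.example.org'],  # alias instead of e.g. 192.0.2.11
        'ports': ['6789'],
        'auth_enabled': True,
        'auth_username': 'cinder',
        'secret_type': 'ceph',
        'secret_uuid': '457eb676-33da-42ec-9a8c-9293d545c337',
    },
}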

Cheers,
Arne

--
Arne Wiebalck
CERN IT


responded Jun 8, 2017 by Arne_Wiebalck
0 votes

On 08 Jun 2017, at 17:52, Matt Riedemann mriedemos@gmail.com wrote:

On 6/8/2017 10:17 AM, Arne Wiebalck wrote:

On 08 Jun 2017, at 15:58, Matt Riedemann <mriedemos@gmail.com> wrote:

Nova stores the output of the Cinder os-initialize_connection info API in the Nova block_device_mappings table, and uses that later for making volume connections.

This data can get out of whack or need to be refreshed, like if your ceph server IP changes, or you need to recycle some secret uuid for your ceph cluster.

I think the only ways to do this on the nova side today are via volume detach/re-attach, reboot, migrations, etc - all of which, except live migration, are disruptive to the running guest.

I've kicked around the idea of adding some sort of admin API interface for refreshing the BDM.connection_info on-demand if needed by an operator. Does anyone see value in this? Are operators doing stuff like this already, but maybe via direct DB updates?

We could have something in the compute API which calls down to the compute for an instance and has it refresh the connection_info from Cinder and updates the BDM table in the nova DB. It could be an admin action API, or part of the os-server-external-events API, like what we have for the 'network-changed' event sent from Neutron which nova uses to refresh the network info cache.

Other ideas or feedback here?
I opened https://bugs.launchpad.net/cinder/+bug/1452641 for this issue some time ago.
Back then I was thinking more of using an alias rather than dealing with IP addresses directly. From
what I understand, this should work with Ceph. In any case, there is still interest in a fix :-)
Cheers,
Arne
--
Arne Wiebalck
CERN IT



Yeah this was also discussed in the dev mailing list over a year ago:

http://lists.openstack.org/pipermail/openstack-dev/2016-May/095170.html

At that time I was opposed to a REST API for a user doing this, but I'm more open to an admin (by default) doing this. Also, if it were initiated via the volume API then Cinder could call the Nova os-server-external-events API which is admin-only by default and then Nova can do a refresh.

Later in that thread Melanie Witt also has an idea about doing a refresh in a periodic task on the compute service, like we do for refreshing the instance network info cache with Neutron in a periodic task.

Wouldn’t using a mon alias (and not resolving it to the respective IP addresses) be enough? Or is that too backend specific?

The idea of a periodic task leveraging existing techniques sounds really nice, but if the overhead is regarded as too much (in the end, the IP addresses shouldn’t change that often), an admin-only API to be called when the addresses need to be updated sounds good to me as well.
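
To make the periodic option concrete, a minimal sketch (not actual Nova code), modeled on the existing periodic healing of the instance network info cache; the spacing value and the helper name are assumptions:

from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF

class ComputeManagerSketch(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(CONF)

    @periodic_task.periodic_task(spacing=3600)
    def _refresh_volume_connection_info(self, context):
        # For each volume BDM of each local instance: re-run Cinder's
        # os-initialize_connection and save the fresh result back to
        # the block_device_mappings table (hypothetical flow).
        pass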

Cheers,
Arne


Arne Wiebalck
CERN IT

responded Jun 8, 2017 by Arne_Wiebalck
0 votes

On Thu, 8 Jun 2017 08:58:20 -0500, Matt Riedemann wrote:
Nova stores the output of the Cinder os-initialize_connection info API
in the Nova block_device_mappings table, and uses that later for making
volume connections.

This data can get out of whack or need to be refreshed, like if your
ceph server IP changes, or you need to recycle some secret uuid for your
ceph cluster.

I think the only ways to do this on the nova side today are via volume
detach/re-attach, reboot, migrations, etc - all of which, except live
migration, are disruptive to the running guest.

I believe the only way to work around this currently is by doing a 'nova
shelve' followed by a 'nova unshelve'. That will end up querying the
connection_info from Cinder and updating the block device mapping record
for the instance. Maybe detach/re-attach would work too but I can't
remember trying it.
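
For reference, that workaround via python-novaclient looks something like this (the endpoint and credentials are placeholders, and note the shelve offload is disruptive to the guest):

from keystoneauth1 import loading, session
from novaclient import client

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://controller:5000/v3',  # placeholder endpoint
    username='admin', password='secret',   # placeholder credentials
    project_name='admin',
    user_domain_id='default', project_domain_id='default')
nova = client.Client('2.1', session=session.Session(auth=auth))

server_id = '11111111-2222-3333-4444-555555555555'  # example UUID
nova.servers.shelve(server_id)
# ... wait for the instance to reach SHELVED_OFFLOADED ...
nova.servers.unshelve(server_id)  # respawns; connection_info is re-queried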

I've kicked around the idea of adding some sort of admin API interface
for refreshing the BDM.connection_info on-demand if needed by an
operator. Does anyone see value in this? Are operators doing stuff like
this already, but maybe via direct DB updates?

We could have something in the compute API which calls down to the
compute for an instance and has it refresh the connection_info from
Cinder and updates the BDM table in the nova DB. It could be an admin
action API, or part of the os-server-external-events API, like what we
have for the 'network-changed' event sent from Neutron which nova uses
to refresh the network info cache.

Other ideas or feedback here?

We've discussed this a few times before and we were thinking it might be
best to handle this transparently and just do a connection_info refresh
+ record update inline with the request flows that will end up reading
connection_info from the block device mapping records. That way,
operators won't have to intervene when connection_info changes.

At least in the case of Ceph, as long as a guest is running, it will
continue to work fine if the monitor IPs or secrets change because it
will continue to use its existing connection to the Ceph cluster. Things
go wrong when an instance action such as resize, stop/start, or reboot
is done because when the instance is taken offline and being brought
back up, the stale connection_info is read from the block_device_mapping
table and injected into the instance, and so it loses contact with the
cluster. If we query Cinder and update the block_device_mapping record
at the beginning of those actions, the instance will get the new
connection_info.
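
A minimal sketch of that refresh step using python-cinderclient; the connector dict and the persist helper are placeholders for what nova-compute builds and stores internally:

from cinderclient import client as cinder_client

def refresh_connection_info(sess, volume_id, connector, update_bdm_record):
    cinder = cinder_client.Client('3', session=sess)
    # Same API nova calls on attach; returns fresh monitor addresses,
    # secrets, etc. from the volume backend.
    connection_info = cinder.volumes.initialize_connection(volume_id,
                                                           connector)
    update_bdm_record(volume_id, connection_info)  # hypothetical persist
    return connection_info

# The connector describes the compute host, e.g. (illustrative values):
# {'ip': '192.0.2.10', 'host': 'compute-01',
#  'initiator': 'iqn.1993-08.org.debian:01:abc', 'multipath': False}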

-melanie


responded Jun 8, 2017 by melanie_witt
0 votes

Oh, yes please! We've had to jump through a lot of hoops to migrate ceph-mons around while keeping their IPs consistent to avoid VM breakage. All the rest of the ceph ecosystem (at least the parts we've dealt with) works fine without the level of effort the current nova/cinder implementation imposes.

Thanks,
Kevin


From: melanie witt [melwittt@gmail.com]
Sent: Thursday, June 08, 2017 11:39 AM
To: Matt Riedemann; openstack-operators@lists.openstack.org; openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-operators] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

On Thu, 8 Jun 2017 08:58:20 -0500, Matt Riedemann wrote:
Nova stores the output of the Cinder os-initialize_connection info API
in the Nova block_device_mappings table, and uses that later for making
volume connections.

This data can get out of whack or need to be refreshed, like if your
ceph server IP changes, or you need to recycle some secret uuid for your
ceph cluster.

I think the only ways to do this on the nova side today are via volume
detach/re-attach, reboot, migrations, etc - all of which, except live
migration, are disruptive to the running guest.

I believe the only way to work around this currently is by doing a 'nova
shelve' followed by a 'nova unshelve'. That will end up querying the
connection_info from Cinder and updating the block device mapping record
for the instance. Maybe detach/re-attach would work too but I can't
remember trying it.

I've kicked around the idea of adding some sort of admin API interface
for refreshing the BDM.connection_info on-demand if needed by an
operator. Does anyone see value in this? Are operators doing stuff like
this already, but maybe via direct DB updates?

We could have something in the compute API which calls down to the
compute for an instance and has it refresh the connection_info from
Cinder and updates the BDM table in the nova DB. It could be an admin
action API, or part of the os-server-external-events API, like what we
have for the 'network-changed' event sent from Neutron which nova uses
to refresh the network info cache.

Other ideas or feedback here?

We've discussed this a few times before and we were thinking it might be
best to handle this transparently and just do a connection_info refresh
+ record update inline with the request flows that will end up reading
connection_info from the block device mapping records. That way,
operators won't have to intervene when connection_info changes.

At least in the case of Ceph, as long as a guest is running, it will
continue to work fine if the monitor IPs or secrets change because it
will continue to use its existing connection to the Ceph cluster. Things
go wrong when an instance action such as resize, stop/start, or reboot
is done because when the instance is taken offline and being brought
back up, the stale connection_info is read from the block_device_mapping
table and injected into the instance, and so it loses contact with the
cluster. If we query Cinder and update the block_device_mapping record
at the beginning of those actions, the instance will get the new
connection_info.

-melanie


responded Jun 8, 2017 by Fox,_Kevin_M
0 votes

On 6/8/2017 1:39 PM, melanie witt wrote:
On Thu, 8 Jun 2017 08:58:20 -0500, Matt Riedemann wrote:

Nova stores the output of the Cinder os-initialize_connection info API
in the Nova block_device_mappings table, and uses that later for
making volume connections.

This data can get out of whack or need to be refreshed, like if your
ceph server IP changes, or you need to recycle some secret uuid for
your ceph cluster.

I think the only ways to do this on the nova side today are via volume
detach/re-attach, reboot, migrations, etc - all of which, except live
migration, are disruptive to the running guest.

I believe the only way to work around this currently is by doing a 'nova
shelve' followed by a 'nova unshelve'. That will end up querying the
connection_info from Cinder and updating the block device mapping record
for the instance. Maybe detach/re-attach would work too but I can't
remember trying it.

Shelve has its own fun set of problems like the fact it doesn't
terminate the connection to the volume backend on shelve. Maybe that's
not a problem for Ceph, I don't know. You do end up on another host
though potentially, and it's a full delete and spawn of the guest on
that other host. Definitely disruptive.

I've kicked around the idea of adding some sort of admin API interface
for refreshing the BDM.connection_info on-demand if needed by an
operator. Does anyone see value in this? Are operators doing stuff
like this already, but maybe via direct DB updates?

We could have something in the compute API which calls down to the
compute for an instance and has it refresh the connection_info from
Cinder and updates the BDM table in the nova DB. It could be an admin
action API, or part of the os-server-external-events API, like what we
have for the 'network-changed' event sent from Neutron which nova uses
to refresh the network info cache.

Other ideas or feedback here?

We've discussed this a few times before and we were thinking it might be
best to handle this transparently and just do a connection_info refresh
+ record update inline with the request flows that will end up reading
connection_info from the block device mapping records. That way,
operators won't have to intervene when connection_info changes.

The thing that sucks about this is if we're going to be refreshing
something that maybe rarely changes for every volume-related operation
on the instance. That seems like a lot of overhead to me (nova/cinder
API interactions, Cinder interactions to the volume backend,
nova-compute round trips to conductor and the DB to update the BDM
table, etc).

At least in the case of Ceph, as long as a guest is running, it will
continue to work fine if the monitor IPs or secrets change because it
will continue to use its existing connection to the Ceph cluster. Things
go wrong when an instance action such as resize, stop/start, or reboot
is done because when the instance is taken offline and being brought
back up, the stale connection_info is read from the block_device_mapping
table and injected into the instance, and so it loses contact with the
cluster. If we query Cinder and update the block_device_mapping record
at the beginning of those actions, the instance will get the new
connection_info.

-melanie

--

Thanks,

Matt


responded Jun 9, 2017 by mriedemos_at_gmail.c
0 votes

Hello Matt,

It is true that we are refreshing something that rarely changes. But
if you deliver a cloud service for several years, at some point you
will have to make these parameter changes.

Something that should in fact change regularly is the secrets the ceph
users use to talk to the ceph cluster. Good security practice would
suggest periodic secret rotation, but today this is not really feasible.

I know the problem is also that you cannot change stuff in libvirt
while the VMs are running. Maybe it is time for a discussion with libvirt
developers to make our voice louder about required features?

The goal would be to change on the fly the ceph/rbd secret that a VM
uses to access a volume, while the VM is running. I think this is very
important.
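
For what it's worth, the host-side part is already possible with the libvirt python bindings; it is the running guest picking up the change that is missing. A hedged sketch (the UUID and key below are invented):

import base64
import libvirt

SECRET_UUID = '457eb676-33da-42ec-9a8c-9293d545c337'  # invented example
NEW_KEY = 'AQBAMo1ZAAAAABAAexampleexampleexampleQ=='  # new ceph key (base64)

conn = libvirt.open('qemu:///system')
secret = conn.secretLookupByUUIDString(SECRET_UUID)
secret.setValue(base64.b64decode(NEW_KEY), 0)  # updates the stored value
conn.close()
# A VM that is already running keeps using the old secret, which is
# exactly the limitation discussed above.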

thank you

Saverio

2017-06-09 6:15 GMT+02:00 Matt Riedemann mriedemos@gmail.com:

On 6/8/2017 1:39 PM, melanie witt wrote:

On Thu, 8 Jun 2017 08:58:20 -0500, Matt Riedemann wrote:

Nova stores the output of the Cinder os-initialize_connection info API in
the Nova block_device_mappings table, and uses that later for making volume
connections.

This data can get out of whack or need to be refreshed, like if your ceph
server IP changes, or you need to recycle some secret uuid for your ceph
cluster.

I think the only ways to do this on the nova side today are via volume
detach/re-attach, reboot, migrations, etc - all of which, except live
migration, are disruptive to the running guest.

I believe the only way to work around this currently is by doing a 'nova
shelve' followed by a 'nova unshelve'. That will end up querying the
connection_info from Cinder and updating the block device mapping record for
the instance. Maybe detach/re-attach would work too but I can't remember
trying it.

Shelve has its own fun set of problems like the fact it doesn't terminate
the connection to the volume backend on shelve. Maybe that's not a problem
for Ceph, I don't know. You do end up on another host though potentially,
and it's a full delete and spawn of the guest on that other host. Definitely
disruptive.

I've kicked around the idea of adding some sort of admin API interface
for refreshing the BDM.connection_info on-demand if needed by an operator.
Does anyone see value in this? Are operators doing stuff like this already,
but maybe via direct DB updates?

We could have something in the compute API which calls down to the
compute for an instance and has it refresh the connection_info from Cinder
and updates the BDM table in the nova DB. It could be an admin action API,
or part of the os-server-external-events API, like what we have for the
'network-changed' event sent from Neutron which nova uses to refresh the
network info cache.

Other ideas or feedback here?

We've discussed this a few times before and we were thinking it might be
best to handle this transparently and just do a connection_info refresh +
record update inline with the request flows that will end up reading
connection_info from the block device mapping records. That way, operators
won't have to intervene when connection_info changes.

The thing that sucks about this is if we're going to be refreshing something
that maybe rarely changes for every volume-related operation on the
instance. That seems like a lot of overhead to me (nova/cinder API
interactions, Cinder interactions to the volume backend, nova-compute round
trips to conductor and the DB to update the BDM table, etc).

At least in the case of Ceph, as long as a guest is running, it will
continue to work fine if the monitor IPs or secrets change because it will
continue to use its existing connection to the Ceph cluster. Things go wrong
when an instance action such as resize, stop/start, or reboot is done
because when the instance is taken offline and being brought back up, the
stale connection_info is read from the block_device_mapping table and
injected into the instance, and so it loses contact with the cluster. If we
query Cinder and update the block_device_mapping record at the beginning of
those actions, the instance will get the new connection_info.

-melanie

--

Thanks,

Matt


responded Jun 16, 2017 by Saverio_Proto
0 votes

Matt, all,

I’m reviving this thread to check if the suggestion to address potentially stale connection
data by an admin command (or a scheduled task) made it to the planning for one of the
upcoming releases?

Thanks!
Arne

On 16 Jun 2017, at 09:37, Saverio Proto zioproto@gmail.com wrote:

Hello Matt,

It is true that we are refreshing something that rarely changes. But
if you deliver a cloud service for several years, at some point you
will have to make these parameter changes.

Something that should in fact change regularly is the secrets the ceph
users use to talk to the ceph cluster. Good security practice would
suggest periodic secret rotation, but today this is not really feasible.

I know the problem is also that you cannot change stuff in libvirt
while the VMs are running. Maybe it is time for a discussion with libvirt
developers to make our voice louder about required features?

The goal would be to change on the fly the ceph/rbd secret that a VM
uses to access a volume, while the VM is running. I think this is very
important.

thank you

Saverio

2017-06-09 6:15 GMT+02:00 Matt Riedemann mriedemos@gmail.com:
On 6/8/2017 1:39 PM, melanie witt wrote:

On Thu, 8 Jun 2017 08:58:20 -0500, Matt Riedemann wrote:

Nova stores the output of the Cinder os-initialize_connection info API in
the Nova block_device_mappings table, and uses that later for making volume
connections.

This data can get out of whack or need to be refreshed, like if your ceph
server IP changes, or you need to recycle some secret uuid for your ceph
cluster.

I think the only ways to do this on the nova side today are via volume
detach/re-attach, reboot, migrations, etc - all of which, except live
migration, are disruptive to the running guest.

I believe the only way to work around this currently is by doing a 'nova
shelve' followed by a 'nova unshelve'. That will end up querying the
connection_info from Cinder and updating the block device mapping record for
the instance. Maybe detach/re-attach would work too but I can't remember
trying it.

Shelve has its own fun set of problems like the fact it doesn't terminate
the connection to the volume backend on shelve. Maybe that's not a problem
for Ceph, I don't know. You do end up on another host though potentially,
and it's a full delete and spawn of the guest on that other host. Definitely
disruptive.

I've kicked around the idea of adding some sort of admin API interface
for refreshing the BDM.connection_info on-demand if needed by an operator.
Does anyone see value in this? Are operators doing stuff like this already,
but maybe via direct DB updates?

We could have something in the compute API which calls down to the
compute for an instance and has it refresh the connection_info from Cinder
and updates the BDM table in the nova DB. It could be an admin action API,
or part of the os-server-external-events API, like what we have for the
'network-changed' event sent from Neutron which nova uses to refresh the
network info cache.

Other ideas or feedback here?

We've discussed this a few times before and we were thinking it might be
best to handle this transparently and just do a connection_info refresh +
record update inline with the request flows that will end up reading
connection_info from the block device mapping records. That way, operators
won't have to intervene when connection_info changes.

The thing that sucks about this is if we're going to be refreshing something
that maybe rarely changes for every volume-related operation on the
instance. That seems like a lot of overhead to me (nova/cinder API
interactions, Cinder interactions to the volume backend, nova-compute round
trips to conductor and the DB to update the BDM table, etc).

At least in the case of Ceph, as long as a guest is running, it will
continue to work fine if the monitor IPs or secrets change because it will
continue to use its existing connection to the Ceph cluster. Things go wrong
when an instance action such as resize, stop/start, or reboot is done
because when the instance is taken offline and being brought back up, the
stale connection_info is read from the block_device_mapping table and
injected into the instance, and so it loses contact with the cluster. If we
query Cinder and update the block_device_mapping record at the beginning of
those actions, the instance will get the new connection_info.

-melanie

--

Thanks,

Matt



--
Arne Wiebalck
CERN IT


responded Sep 13, 2017 by Arne_Wiebalck
0 votes

On 9/13/2017 3:24 AM, Arne Wiebalck wrote:
I’m reviving this thread to check if the suggestion to address
potentially stale connection
data by an admin command (or a scheduled task) made it to the planning
for one of the
upcoming releases?

It hasn't, but we're at the PTG this week so I can throw it on the list
of topics.

--

Thanks,

Matt


responded Sep 13, 2017 by mriedemos_at_gmail.c
0 votes

On 13 Sep 2017, at 16:52, Matt Riedemann mriedemos@gmail.com wrote:

On 9/13/2017 3:24 AM, Arne Wiebalck wrote:

I’m reviving this thread to check if the suggestion to address potentially stale connection
data by an admin command (or a scheduled task) made it to the planning for one of the
upcoming releases?

It hasn't, but we're at the PTG this week so I can throw it on the list of topics.

That’d be great, thanks!

--
Arne Wiebalck
CERN IT


responded Sep 13, 2017 by Arne_Wiebalck
0 votes

I have been studying how to perform failover operations with Cinder failover ('cinder failover-host'). Nova is not aware of the failover event. Being able to refresh the connection state for Nova would come in very handy, especially in admin-level DR scenarios.

I'm attaching the blog I wrote on the subject:
http://netapp.io/2017/08/09/cinder-cheesecake-things-to-consider/
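
For context, triggering the failover itself is straightforward; a sketch assuming an admin session, a configured replication target, and the services.failover_host call that I believe recent python-cinderclient exposes:

from cinderclient import client as cinder_client

def failover_backend(sess, host, backend_id):
    cinder = cinder_client.Client('3', session=sess)
    # Equivalent to: cinder failover-host <host> --backend_id <id>
    cinder.services.failover_host(host, backend_id)

# e.g. failover_backend(sess, 'cinder@netapp-backend', 'dr-target')
# Nova is never told about this, so BDM.connection_info still points at
# the old backend, hence the interest in a refresh API.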

-----Original Message-----
From: Arne Wiebalck [mailto:Arne.Wiebalck@cern.ch]
Sent: Wednesday, September 13, 2017 11:53 AM
To: Matt Riedemann mriedemos@gmail.com
Cc: openstack-operators@lists.openstack.org; OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: Re: [Openstack-operators] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

On 13 Sep 2017, at 16:52, Matt Riedemann mriedemos@gmail.com wrote:

On 9/13/2017 3:24 AM, Arne Wiebalck wrote:

I’m reviving this thread to check if the suggestion to address
potentially stale connection data by an admin command (or a scheduled
task) made it to the planning for one of the upcoming releases?

It hasn't, but we're at the PTG this week so I can throw it on the list of topics.

That’d be great, thanks!

--
Arne Wiebalck
CERN IT


responded Sep 13, 2017 by Morgenstern,_Chad
...