
[Openstack] Pike NOVA Disable and Live Migrate all instances.

0 votes

Hello everyone and thanks in advance. I have OpenStack Pike (KVM, FC-SAN/Cinder) installed in our lab for testing before an upgrade, and I am seeing a possible issue with disabling a host and live migrating its instances off via the Horizon interface. I can migrate the instances individually via the OpenStack client without issue, so it looks like I might be missing something relating to concurrent jobs in my nova config. Interestingly, when a host migration is attempted via Horizon, all of the migrations fail, while migrating a single instance through the Horizon interface does work. Below is what I am seeing in the scheduler log on the controller when trying to live migrate all instances from a disabled host. I believe the last line is the obvious issue, but I cannot find a nova variable that seems to relate to it. Can anyone point me in the right direction?


2017-09-19 19:02:30.529 19741 INFO nova.scheduler.host_manager [req-4268ea83-0657-40cc-961b-f0ae9fb3019e 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] Host filter ignoring hosts: kvm02
2017-09-19 19:02:30.530 19741 DEBUG nova.filters [req-4268ea83-0657-40cc-961b-f0ae9fb3019e 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] Starting with 2 host(s) get_filtered_objects /usr/lib/python2.7/dist-packages/nova/filters.py:70
2017-09-19 19:02:30.530 19741 DEBUG nova.scheduler.filters.retry_filter [req-4268ea83-0657-40cc-961b-f0ae9fb3019e 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] Re-scheduling is disabled host_passes /usr/lib/python2.7/dist-packages/nova/scheduler/filters/retry_filter.py:34
2017-09-19 19:02:30.530 19741 DEBUG nova.scheduler.filters.retry_filter [req-4268ea83-0657-40cc-961b-f0ae9fb3019e 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] Re-scheduling is disabled host_passes /usr/lib/python2.7/dist-packages/nova/scheduler/filters/retry_filter.py:34
2017-09-19 19:02:30.531 19741 DEBUG nova.filters [req-4268ea83-0657-40cc-961b-f0ae9fb3019e 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] Filter RetryFilter returned 2 host(s) get_filtered_objects /usr/lib/python2.7/dist-packages/nova/filters.py:104
2017-09-19 19:02:30.531 19741 DEBUG nova.filters [req-4268ea83-0657-40cc-961b-f0ae9fb3019e 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] Filter AvailabilityZoneFilter returned 2 host(s) get_filtered_objects /usr/lib/python2.7/dist-packages/nova/filters.py:104
2017-09-19 19:02:30.532 19741 DEBUG nova.filters [req-4268ea83-0657-40cc-961b-f0ae9fb3019e 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] Filter ComputeFilter returned 2 host(s) get_filtered_objects /usr/lib/python2.7/dist-packages/nova/filters.py:104
2017-09-19 19:02:30.532 19741 DEBUG nova.filters [req-4268ea83-0657-40cc-961b-f0ae9fb3019e 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] Filter ComputeCapabilitiesFilter returned 2 host(s) get_filtered_objects /usr/lib/python2.7/dist-packages/nova/filters.py:104
2017-09-19 19:02:30.532 19741 DEBUG nova.filters [req-4268ea83-0657-40cc-961b-f0ae9fb3019e 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] Filter ImagePropertiesFilter returned 2 host(s) get_filtered_objects /usr/lib/python2.7/dist-packages/nova/filters.py:104
2017-09-19 19:02:30.533 19741 DEBUG nova.filters [req-4268ea83-0657-40cc-961b-f0ae9fb3019e 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] Filter ServerGroupAntiAffinityFilter returned 2 host(s) get_filtered_objects /usr/lib/python2.7/dist-packages/nova/filters.py:104
2017-09-19 19:02:30.533 19741 DEBUG nova.filters [req-4268ea83-0657-40cc-961b-f0ae9fb3019e 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] Filter ServerGroupAffinityFilter returned 2 host(s) get_filtered_objects /usr/lib/python2.7/dist-packages/nova/filters.py:104
2017-09-19 19:02:30.533 19741 DEBUG nova.scheduler.filter_scheduler [req-4268ea83-0657-40cc-961b-f0ae9fb3019e 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] Filtered [(kvm01, kvm01.c1.us-east-dtw.os.libertycenterone.com) ram: 257403MB disk: 199680MB io_ops: 0 instances: 0, (kvm03, kvm03.c1.us-east-dtw.os.libertycenterone.com) ram: 257403MB disk: 199680MB io_ops: 0 instances: 0] _get_sorted_hosts /usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py:298
2017-09-19 19:02:30.534 19741 DEBUG nova.scheduler.filter_scheduler [req-4268ea83-0657-40cc-961b-f0ae9fb3019e 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] Weighed [(kvm01, kvm01.c1.us-east-dtw.os.libertycenterone.com) ram: 257403MB disk: 199680MB io_ops: 0 instances: 0, (kvm03, kvm03.c1.us-east-dtw.os.libertycenterone.com) ram: 257403MB disk: 199680MB io_ops: 0 instances: 0] _get_sorted_hosts /usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py:308
2017-09-19 19:02:30.534 19741 DEBUG nova.scheduler.filter_scheduler [req-4268ea83-0657-40cc-961b-f0ae9fb3019e 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] Attempting to claim resources in the placement API for instance a16f866b-9d82-44b7-bc66-414f919ba067 _claim_resources /usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py:270
2017-09-19 19:02:30.547 19741 DEBUG nova.scheduler.client.report [req-4268ea83-0657-40cc-961b-f0ae9fb3019e 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] Doubling-up allocation request for move operation. _move_operation_alloc_request /usr/lib/python2.7/dist-packages/nova/scheduler/client/report.py:162
2017-09-19 19:02:30.547 19741 DEBUG nova.scheduler.client.report [req-4268ea83-0657-40cc-961b-f0ae9fb3019e 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] New allocation request containing both source and destination hosts in move operation: {'allocations': [{'resource_provider': {'uuid': u'fb1b6ab4-df2a-4b0c-9578-6f7dee019e36'}, 'resources': {u'VCPU': 4, u'MEMORY_MB': 8192, u'DISK_GB': 80}}, {'resource_provider': {'uuid': u'04fe593a-5af9-4878-a635-e9981a7d4dcd'}, 'resources': {u'VCPU': 2, u'MEMORY_MB': 4096, u'DISK_GB': 40}}]} _move_operation_alloc_request /usr/lib/python2.7/dist-packages/nova/scheduler/client/report.py:202
2017-09-19 19:02:30.583 19741 DEBUG nova.scheduler.filter_scheduler [req-4268ea83-0657-40cc-961b-f0ae9fb3019e 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] Selected host: (kvm01, kvm01.c1.us-east-dtw.os.libertycenterone.com) ram: 257403MB disk: 199680MB io_ops: 0 instances: 0 _schedule /usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py:227
2017-09-19 19:02:30.584 19741 DEBUG oslo_concurrency.lockutils [req-4268ea83-0657-40cc-961b-f0ae9fb3019e 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] Lock "(u'kvm01', u'kvm01.c1.us-east-dtw.os.libertycenterone.com')" acquired by "nova.scheduler.host_manager._locked" :: waited 0.000s inner /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:273
2017-09-19 19:02:30.585 19741 DEBUG nova.virt.hardware [req-4268ea83-0657-40cc-961b-f0ae9fb3019e 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python2.7/dist-packages/nova/virt/hardware.py:1467
2017-09-19 19:02:30.588 19741 DEBUG oslo_concurrency.lockutils [req-4268ea83-0657-40cc-961b-f0ae9fb3019e 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] Lock "(u'kvm01', u'kvm01.c1.us-east-dtw.os.libertycenterone.com')" released by "nova.scheduler.host_manager._locked" :: held 0.004s inner /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:285
2017-09-19 19:02:30.588 19741 DEBUG nova.scheduler.filter_scheduler [req-4268ea83-0657-40cc-961b-f0ae9fb3019e 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] There are 1 hosts available but 10 instances requested to build. select_destinations /usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py:101
root@controller01:/var/log/nova#

Migrating instance by instance from the OpenStack CLI:

root@controller01:~# openstack server list --long |grep Linux-Test
| 1d2866c3-ffc1-43a1-82d7-8bb92f396836 | Linux-Test-10 | ACTIVE | None | Running | Admin-RFC1918=10.0.0.24 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm02 | |
| f5e17cb0-9679-4322-a480-7a9bd21e4d5b | Linux-Test-9 | ACTIVE | None | Running | Admin-RFC1918=10.0.0.19 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm02 | |
| 38a84ba2-a410-4903-983b-6d1631ad1f0b | Linux-Test-8 | ACTIVE | None | Running | Admin-RFC1918=10.0.0.17 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm02 | |
| 2598ed75-9c35-4871-864d-924215eeb5d7 | Linux-Test-7 | ACTIVE | None | Running | Admin-RFC1918=10.0.0.23 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm02 | |
| 04218d15-6bbd-4687-900d-00ad8260c867 | Linux-Test-6 | ACTIVE | None | Running | Admin-RFC1918=10.0.0.11 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm02 | |
| 5e28d8ef-a577-4982-8799-adb87ec3359a | Linux-Test-5 | ACTIVE | None | Running | Admin-RFC1918=10.0.0.9 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm02 | |
| daee8c1b-5c69-4600-86d6-b10adbbe6e21 | Linux-Test-4 | ACTIVE | None | Running | Admin-RFC1918=10.0.0.15 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm02 | |
| 33c348a7-60a1-4887-8761-e5ae17ea9357 | Linux-Test-3 | ACTIVE | None | Running | Admin-RFC1918=10.0.0.18 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm02 | |
| f7d50f42-8f55-47a5-8701-fe8c47b62bcc | Linux-Test-2 | ACTIVE | None | Running | Admin-RFC1918=10.0.0.7 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm02 | |
| f9242792-80c3-4dcf-bdb9-f820156925f7 | Linux-Test-1 | ACTIVE | None | Running | Admin-RFC1918=10.0.0.14 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm02 | |
root@controller01:~# openstack server migrate --live kvm03 Linux-Test-1
root@controller01:~# openstack server migrate --live kvm03 Linux-Test-2
root@controller01:~# openstack server migrate --live kvm03 Linux-Test-3
root@controller01:~# openstack server migrate --live kvm03 Linux-Test-4
root@controller01:~# openstack server migrate --live kvm03 Linux-Test-5
root@controller01:~# openstack server migrate --live kvm03 Linux-Test-6
root@controller01:~# openstack server migrate --live kvm03 Linux-Test-7
root@controller01:~# openstack server migrate --live kvm03 Linux-Test-8
root@controller01:~# openstack server migrate --live kvm03 Linux-Test-9
root@controller01:~# openstack server migrate --live kvm03 Linux-Test-10
root@controller01:~# openstack server list --long |grep Linux-Test
| 1d2866c3-ffc1-43a1-82d7-8bb92f396836 | Linux-Test-10 | MIGRATING | migrating | Running | Admin-RFC1918=10.0.0.24 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm02 | |
| f5e17cb0-9679-4322-a480-7a9bd21e4d5b | Linux-Test-9 | MIGRATING | migrating | Running | Admin-RFC1918=10.0.0.19 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm02 | |
| 38a84ba2-a410-4903-983b-6d1631ad1f0b | Linux-Test-8 | MIGRATING | migrating | Running | Admin-RFC1918=10.0.0.17 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm02 | |
| 2598ed75-9c35-4871-864d-924215eeb5d7 | Linux-Test-7 | MIGRATING | migrating | Running | Admin-RFC1918=10.0.0.23 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm02 | |
| 04218d15-6bbd-4687-900d-00ad8260c867 | Linux-Test-6 | MIGRATING | migrating | Running | Admin-RFC1918=10.0.0.11 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm02 | |
| 5e28d8ef-a577-4982-8799-adb87ec3359a | Linux-Test-5 | MIGRATING | migrating | Running | Admin-RFC1918=10.0.0.9 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm02 | |
| daee8c1b-5c69-4600-86d6-b10adbbe6e21 | Linux-Test-4 | MIGRATING | migrating | Running | Admin-RFC1918=10.0.0.15 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm02 | |
| 33c348a7-60a1-4887-8761-e5ae17ea9357 | Linux-Test-3 | ACTIVE | None | Running | Admin-RFC1918=10.0.0.18 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm03 | |
| f7d50f42-8f55-47a5-8701-fe8c47b62bcc | Linux-Test-2 | ACTIVE | None | Running | Admin-RFC1918=10.0.0.7 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm03 | |
| f9242792-80c3-4dcf-bdb9-f820156925f7 | Linux-Test-1 | ACTIVE | None | Running | Admin-RFC1918=10.0.0.14 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm03 | |
root@controller01:~#
root@controller01:~# openstack server list --long |grep Linux-Test
| 1d2866c3-ffc1-43a1-82d7-8bb92f396836 | Linux-Test-10 | ACTIVE | None | Running | Admin-RFC1918=10.0.0.24 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm03 | |
| f5e17cb0-9679-4322-a480-7a9bd21e4d5b | Linux-Test-9 | ACTIVE | None | Running | Admin-RFC1918=10.0.0.19 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm03 | |
| 38a84ba2-a410-4903-983b-6d1631ad1f0b | Linux-Test-8 | ACTIVE | None | Running | Admin-RFC1918=10.0.0.17 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm03 | |
| 2598ed75-9c35-4871-864d-924215eeb5d7 | Linux-Test-7 | ACTIVE | None | Running | Admin-RFC1918=10.0.0.23 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm03 | |
| 04218d15-6bbd-4687-900d-00ad8260c867 | Linux-Test-6 | ACTIVE | None | Running | Admin-RFC1918=10.0.0.11 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm03 | |
| 5e28d8ef-a577-4982-8799-adb87ec3359a | Linux-Test-5 | ACTIVE | None | Running | Admin-RFC1918=10.0.0.9 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm03 | |
| daee8c1b-5c69-4600-86d6-b10adbbe6e21 | Linux-Test-4 | ACTIVE | None | Running | Admin-RFC1918=10.0.0.15 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm03 | |
| 33c348a7-60a1-4887-8761-e5ae17ea9357 | Linux-Test-3 | ACTIVE | None | Running | Admin-RFC1918=10.0.0.18 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm03 | |
| f7d50f42-8f55-47a5-8701-fe8c47b62bcc | Linux-Test-2 | ACTIVE | None | Running | Admin-RFC1918=10.0.0.7 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm03 | |
| f9242792-80c3-4dcf-bdb9-f820156925f7 | Linux-Test-1 | ACTIVE | None | Running | Admin-RFC1918=10.0.0.14 | | | m1.small | 14d14e99-4bc9-4ead-9a56-d05f574ccaa5 | nova | kvm03 | |
root@controller01:~#
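
For what it's worth, the per-instance loop above should be roughly equivalent to what the Horizon Migrate Host button is doing; with python-novaclient the whole host can also be drained in one step (the target host is optional, otherwise the scheduler picks one):

openstack compute service set --disable kvm02 nova-compute
nova host-evacuate-live --target-host kvm03 kvm02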

Thanks!

Steven Searles


asked Sep 20, 2017 in openstack by Steve_Searles (480 points)

9 Responses

0 votes

Best practice is to never migrate more than one instance at a time. Maybe that has been encoded, i.e. no more than one migration off of a compute node at a time and no more than one onto a compute node at a time.

On Sep 19, 2017 5:37 PM, "Steven D. Searles" SSearles@zimcom.net wrote:

Hello everyone and thanks in advance. [...] Can anyone point me in the right direction?
responded Sep 20, 2017 by David_Medberry (8,000 points)
0 votes

That would be fine if the behavior was to move one at a time and just queue the jobs up, but it is failing outright. Thanks for the reply, David! I still think I am missing something in my config. Do you have a Pike setup to see if you can replicate the behavior, by chance?

-Steve

On Sep 19, 2017, at 8:32 PM, David Medberry openstack@medberry.net wrote:

Best practice is to never migrate more than one at a time. Maybe that has been encoded. I.e. no more than one off of a compute at a time and no more than one onto a compute at a time.

On Sep 19, 2017 5:37 PM, "Steven D. Searles" SSearles@zimcom.net wrote:
Hello everyone and thanks in advance. [...] Can anyone point me in the right direction?
responded Sep 20, 2017 by Steve_Searles (480 points)
0 votes

On 09/19/2017 05:21 PM, Steven D. Searles wrote:
Hello everyone and thanks in advance. [...] Can anyone point me in the right direction?

There's a default limit of 1 outgoing live migration per compute node. I don't
think that's the whole issue here though.
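
For reference, that limit is the max_concurrent_live_migrations option, set per compute node (default 1; 0 means unlimited):

[DEFAULT]
# nova.conf on the compute node: serialize outgoing live migrations
max_concurrent_live_migrations = 1

Raising it allows more parallel outgoing migrations, but it won't help if the scheduler is rejecting the request up front.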

2017-09-19 19:02:30.588 19741 DEBUG nova.scheduler.filter_scheduler [req-4268ea83-0657-40cc-961b-f0ae9fb3019e 385c60230b3f49da930dda4d089eda6b 723aa12337a44f818b6d1e1a59f16e49 - default default] There are 1 hosts available but 10 instances requested to build. select_destinations /usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py:101

It's unclear to me why it's trying to schedule 10 instances all at once. Did
you originally create all the instances as part of a single boot request?
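
One way to check is to look at the request spec nova persisted when the instances were booted (a hypothetical query against the nova_api database, using the instance UUID from your log):

mysql nova_api -e "SELECT spec FROM request_specs WHERE instance_uuid='a16f866b-9d82-44b7-bc66-414f919ba067'\G"

If the JSON in the spec column contains "num_instances": 10, the instance came from a single 10-instance boot request.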

Chris

responded Sep 20, 2017 by Chris_Friesen (20,420 points)
0 votes

I did. I will spawn a few singles and see if it does the same thing.

Steven Searles

-----Original Message-----
From: Chris Friesen [mailto:chris.friesen@windriver.com]
Sent: Tuesday, September 19, 2017 11:17 PM
To: openstack@lists.openstack.org
Subject: Re: [Openstack] Pike NOVA Disable and Live Migrate all instances.

On 09/19/2017 05:21 PM, Steven D. Searles wrote:

[...]

There's a default limit of 1 outgoing live migration per compute node. I don't think that's the whole issue here though.

[...]

It's unclear to me why it's trying to schedule 10 instances all at once. Did you originally create all the instances as part of a single boot request?

Chris
responded Sep 20, 2017 by Steve_Searles (480 points)
0 votes

Chris, you are definitely on to something here. When I create the instances individually, this condition does NOT occur. I confirmed this by creating 20 instances individually with openstack server create, all on the same host. I then set the host to disabled in Horizon and used the Migrate Host button. This time the scheduler worked as expected: it migrated one instance at a time, as specified by max_concurrent_live_migrations=1 on the controller, and queued the rest until they all completed and the host was empty.
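
For anyone reproducing this, the difference between the two creation paths is roughly the following (flavor, image and network names are just placeholders from our lab):

# one boot request for ten instances -> one request spec with num_instances=10
openstack server create --flavor m1.small --image <image> --network Admin-RFC1918 --min 10 --max 10 Linux-Test

# ten separate boot requests -> ten request specs with num_instances=1
for i in $(seq 1 10); do openstack server create --flavor m1.small --image <image> --network Admin-RFC1918 Linux-Test-$i; done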

Steven Searles

-----Original Message-----
From: Steven D. Searles [mailto:SSearles@zimcom.net]
Sent: Wednesday, September 20, 2017 12:16 AM
To: Chris Friesen chris.friesen@windriver.com; openstack@lists.openstack.org
Subject: Re: [Openstack] Pike NOVA Disable and Live Migrate all instances.

I did. I will spawn a few singles and see if it does the same thing.

[...]
responded Sep 20, 2017 by Steve_Searles (480 points)
0 votes

I think that points to a problem in nova. Could you open a bug at
"bugs.launchpad.net/nova/+filebug" and report the bug number in this thread?

Thanks,
Chris

On 09/19/2017 10:42 PM, Steven D. Searles wrote:
Chris, you are definitely on to something here. When I create the instances individually, this condition does NOT occur. [...]
responded Sep 20, 2017 by Chris_Friesen (20,420 points)
0 votes

Done, thanks for the assistance Chris and everyone.

https://bugs.launchpad.net/nova/+bug/1718455

Steven Searles

On 9/20/17, 10:44 AM, "Chris Friesen" chris.friesen@windriver.com wrote:

I think that points to a problem in nova. Could you open a bug at
"bugs.launchpad.net/nova/+filebug" and report the bug number in this thread?

[...]
responded Sep 20, 2017 by Steve_Searles (480 points)
0 votes

On 09/20/2017 08:59 AM, Steven D. Searles wrote:
Done, thanks for the assistance Chris and everyone.

https://bugs.launchpad.net/nova/+bug/1718455

I pinged the nova devs and mriedem suggested a fix you might want to try. In
nova/scheduler/filter_scheduler.py, function select_destinations(), around line
81 there is a line that reads:

num_instances = spec_obj.num_instances

The suggestion is to change that to the following:

num_instances = len(instance_uuids)
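
Expressed as a diff for clarity (the changed line is the suggestion above; the comment is my reading of the bug: the live-migration path reuses the RequestSpec persisted at boot, so num_instances still reflects the original ten-instance multi-create):

--- a/nova/scheduler/filter_scheduler.py
+++ b/nova/scheduler/filter_scheduler.py
@@ select_destinations() @@
-        num_instances = spec_obj.num_instances
+        # count the instances in this scheduling request, not the size of
+        # the original boot request recorded in the spec
+        num_instances = len(instance_uuids)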

Could you try that and see if it fixes the original problem?

Thanks,
Chris


responded Sep 20, 2017 by Chris_Friesen (20,420 points)
0 votes

Will do, I will report back tomorrow.

Steven Searles

On 9/20/17, 11:28 AM, "Chris Friesen" chris.friesen@windriver.com wrote:

I pinged the nova devs and mriedem suggested a fix you might want to try. [...]
responded Sep 20, 2017 by Steve_Searles (480 points)