
[openstack-dev] [ironic] Scheduling error with RamFilter ... on integrating ironic into our OpenStack Distribution

0 votes

Hey,

We are in the process of integrating OpenStack Ironic into our own OpenStack Distribution.
Still pulling all the pieces together ... we have not yet had a successful ‘nova boot’, so the issues below could be configuration or setup problems.

We have an Ironic node enrolled ... and a corresponding nova hypervisor has been created for it ... ALTHOUGH it does not seem to be populated correctly (see below).
AND then the ‘nova boot’ fails with the error:

 "No valid host was found. There are not enough hosts available. 66aaf6fa-3cbe-4744-8d55-c90eeae4800a: (RamFilter) Insufficient total RAM: req:20480, avail:0 MB"

NOTE: the nova.conf that we are using for the nova-compute that serves the ironic nodes is attached.

Any ideas what could be wrong?
Greg.

[wrsroot@controller-1 ~(keystone_admin)]$ ironic node-show metallica
+------------------------+--------------------------------------------------------------------------+
| Property               | Value                                                                    |
+------------------------+--------------------------------------------------------------------------+
| chassis_uuid           |                                                                          |
| clean_step             | {}                                                                       |
| console_enabled        | False                                                                    |
| created_at             | 2017-10-27T20:37:12.241352+00:00                                         |
| driver                 | pxe_ipmitool                                                             |
| driver_info            | {u'ipmi_password': u'******', u'ipmi_address': u'128.224.64.212',        |
|                        | u'ipmi_username': u'root', u'deploy_kernel': u'2939e2d4-da3f-4917-b99a-  |
|                        | 01030fd30345', u'deploy_ramdisk':                                        |
|                        | u'73ad43c4-4300-45a5-87ec-f28646518430'}                                 |
| driver_internal_info   | {}                                                                       |
| extra                  | {}                                                                       |
| inspection_finished_at | None                                                                     |
| inspection_started_at  | None                                                                     |
| instance_info          | {}                                                                       |
| instance_uuid          | None                                                                     |
| last_error             | None                                                                     |
| maintenance            | False                                                                    |
| maintenance_reason     | None                                                                     |
| name                   | metallica                                                                |
| network_interface      |                                                                          |
| power_state            | power off                                                                |
| properties             | {u'memory_mb': 20480, u'cpu_arch': u'x86_64', u'local_gb': 100, u'cpus': |
|                        | 20, u'capabilities': u'boot_option:local'}                               |
| provision_state        | manageable                                                               |
| provision_updated_at   | 2017-10-30T15:47:33.397317+00:00                                         |
| raid_config            |                                                                          |
| reservation            | None                                                                     |
| resource_class         |                                                                          |
| target_power_state     | None                                                                     |
| target_provision_state | None                                                                     |
| target_raid_config     |                                                                          |
| updated_at             | 2017-10-30T15:47:51.396471+00:00                                         |
| uuid                   | 66aaf6fa-3cbe-4744-8d55-c90eeae4800a                                     |
+------------------------+--------------------------------------------------------------------------+

[wrsroot@controller-1 ~(keystone_admin)]$ nova hypervisor-show 66aaf6fa-3cbe-4744-8d55-c90eeae4800a
+-------------------------+--------------------------------------+
| Property                | Value                                |
+-------------------------+--------------------------------------+
| cpu_info                | {}                                   |
| current_workload        | 0                                    |
| disk_available_least    | 0                                    |
| free_disk_gb            | 0                                    |
| free_ram_mb             | 0                                    |
| host_ip                 | 127.0.0.1                            |
| hypervisor_hostname     | 66aaf6fa-3cbe-4744-8d55-c90eeae4800a |
| hypervisor_type         | ironic                               |
| hypervisor_version      | 1                                    |
| id                      | 5                                    |
| local_gb                | 0                                    |
| local_gb_used           | 0                                    |
| memory_mb               | 0                                    |
| memory_mb_node          | None                                 |
| memory_mb_used          | 0                                    |
| memory_mb_used_node     | None                                 |
| running_vms             | 0                                    |
| service_disabled_reason | None                                 |
| service_host            | controller-1                         |
| service_id              | 28                                   |
| state                   | up                                   |
| status                  | enabled                              |
| vcpus                   | 0                                    |
| vcpus_node              | None                                 |
| vcpus_used              | 0.0                                  |
| vcpus_used_node         | None                                 |
+-------------------------+--------------------------------------+

[wrsroot@controller-1 ~(keystone_admin)]$


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

asked Oct 30, 2017 in openstack-dev by Waines,_Greg (2,700 points)   1 5 10

3 Responses

0 votes

You need to set the node's resource_class attribute to the custom
resource class you will use for that chassis/hardware type.
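
As a concrete sketch (the exact CLI spelling here is from memory, and the resource_class field needs a client and Bare Metal API new enough to know about it, i.e. microversion 1.21 / Ocata or later):

# with the openstack baremetal (OSC) plugin
openstack baremetal node set metallica --resource-class CUSTOM_METALLICA

# or with the older ironic CLI
ironic --ironic-api-version 1.21 node-update metallica replace resource_class=CUSTOM_METALLICA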

Then you need to add a specific extra_specs key/value to a flavor to
indicate that that flavor is requesting that specific hardware type:

openstack flavor set $flavorname --property resources:$RESOURCE_CLASS=1

For instance, let's say you set your node's resource class to
CUSTOM_METALLICA. You would do this to the flavor you are using to grab
one of those Ironic resources:

openstack flavor set $flavorname --property resources:CUSTOM_METALLICA=1

Then nova boot with that flavor and you should be good to go.
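
Putting the flavor side together end to end, something like this (the flavor name, sizes and boot arguments are placeholders; the resources:VCPU=0 / MEMORY_MB=0 / DISK_GB=0 overrides are what I recall the Pike-era flavor docs recommending so the scheduler matches only on the custom class):

openstack flavor create bm.metallica --ram 20480 --disk 100 --vcpus 20
openstack flavor set bm.metallica --property resources:CUSTOM_METALLICA=1
openstack flavor set bm.metallica --property resources:VCPU=0 --property resources:MEMORY_MB=0 --property resources:DISK_GB=0
nova boot --flavor bm.metallica --image <image-uuid> --nic net-id=<net-uuid> metallica-0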

-jay

responded Oct 30, 2017 by Jay_Pipes (59,760 points)   3 11 14
0 votes

Thanks Jay ... I’ll try this out and let you know.

BTW ... I should have mentioned that I am currently @Newton ... and will eventually move to @PIKE.
Does that change anything you suggested below?

Greg.

responded Oct 30, 2017 by Waines,_Greg (2,700 points)   1 5 10
0 votes

On 10/30/2017 01:37 PM, Waines, Greg wrote:
Thanks Jay ... I’ll try this out and let you know.

BTW ... I should have mentioned that I am currently @Newton ... and will
eventually move to @PIKE.
Does that change anything you suggested below?

Hmm, yes, it does.

In Pike, we began requiring the custom resource class thing with Ironic.
In Newton, I don't believe we had yet changed the scheduler to look at
the resource class "overrides".
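
In Newton the scheduler instead matches the flavor's RAM/CPU/disk against what the ironic nova-compute reports for the node, so the usual setup there (going from memory of the Newton install guide) is a flavor sized exactly like the node's properties plus the exact-match baremetal filters on the scheduler:

# flavor matching the node's properties exactly (ram, disk, vcpus)
nova flavor-create bm.metallica auto 20480 100 20

# in the nova.conf read by nova-scheduler
[DEFAULT]
scheduler_use_baremetal_filters = True
scheduler_host_subset_size = 1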

Looking at your output, I see that the Ironic node's power_state is set
to "power off". I'm not sure if that's as it should be. Perhaps some
Ironic devs can help with the answer to that.

Best,
-jay


responded Oct 30, 2017 by Jay_Pipes (59,760 points)   3 11 14
...