Forgot to copy the ops list on my reply.
-------- Forwarded Message --------
Subject: Re: [openstack-dev] [nova][ironic] Concerns over rigid resource class-only ironic scheduling
Date: Thu, 7 Sep 2017 14:57:24 -0500
From: Matt Riedemann firstname.lastname@example.org
On 9/7/2017 2:48 PM, Nisha Agarwal wrote:
Hi Ironic Operators,
From Pike, ironic nodes get scheduled from nova based on just the resource
class. Do you see any concerns over this "rigid resource-class-only
ironic scheduling"?
To be more specific: in your datacenter/production environment, which
filters are configured in nova.conf (the configuration file for nova) for
scheduling an ironic node? Do you use RamFilter/DiskFilter/CoreFilter via
the "use_baremetal_filters" option when scheduling ironic nodes from nova?
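For reference, the configuration being asked about looks roughly like the
following in nova.conf. This is an illustrative sketch only: the option
names are from the Pike-era filter scheduler, and the exact filter list is
an example, not a recommendation.

```ini
[filter_scheduler]
# Swap the standard Ram/Disk/Core filters for their Exact* baremetal
# counterparts when scheduling ironic nodes (illustrative example).
use_baremetal_filters = True
baremetal_enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ExactRamFilter,ExactDiskFilter,ExactCoreFilter
```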
Thanks and Regards
Some more background information is in the ironic spec here:
Also, be aware of these release notes for Pike related to baremetal:
In Pike, nova uses a combination of VCPU/MEMORY_MB/DISK_GB resource
class amounts from the flavor during scheduling, as it always has, but it
will also check for the custom resource_class which comes from the
ironic node. The custom resource class is optional in Pike but will be a
hard requirement in Queens, or at least that was the plan. The idea
being that long-term we'd stop consulting VCPU/MEMORY_MB/DISK_GB from
the flavor during scheduling and just use the atomic node.resource_class,
since we want to allocate a nova instance to an entire ironic node, and
this is also why the Exact* filters were used too.
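The mapping from an ironic node's resource_class to the resource class
name a flavor requests (e.g. via a resources:CUSTOM_BAREMETAL_GOLD=1
flavor property) follows the documented normalization rule: uppercase,
replace punctuation with underscores, and prefix CUSTOM_. A minimal
sketch of that rule, assuming a hypothetical helper name:

```python
import re

def custom_resource_class(node_resource_class: str) -> str:
    """Sketch of the documented normalization from an ironic
    node.resource_class to the CUSTOM_* name a flavor matches against:
    non-alphanumeric characters become underscores, the result is
    uppercased, and CUSTOM_ is prefixed."""
    normalized = re.sub(r"[^A-Za-z0-9]", "_", node_resource_class).upper()
    return "CUSTOM_" + normalized

# e.g. custom_resource_class("baremetal-gold") -> "CUSTOM_BAREMETAL_GOLD"
```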
There are more details on using custom resource classes for scheduling here:
Nisha is raising the question of whether we're making incorrect
assumptions about how people are using nova/ironic, since some operators
want to use the non-Exact filters for VCPU/MEMORY_MB/DISK_GB. As far as
I have ever heard, that is not recommended/supported upstream, because it
can lead to resource tracking issues in nova where the scheduler thinks
a node is available for more than one instance when it's really not,
which eventually leads to scheduling failures.
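A toy illustration of that tracking problem (hypothetical numbers, not
nova code): with a non-exact RamFilter, a 64 GB ironic node still
"passes" for a second 32 GB instance even though the first instance
consumed the whole node, whereas an Exact* filter only matches when the
request equals the node's full capacity:

```python
def ram_filter_passes(free_ram_mb: int, requested_mb: int) -> bool:
    # Non-exact filter: passes whenever enough RAM appears to be free.
    return free_ram_mb >= requested_mb

def exact_ram_filter_passes(free_ram_mb: int, requested_mb: int) -> bool:
    # Exact filter: the request must consume the node's RAM exactly,
    # matching the one-instance-per-whole-node baremetal model.
    return free_ram_mb == requested_mb

node_free = 65536  # 64 GB ironic node, empty (hypothetical)
flavor = 32768     # 32 GB baremetal flavor (hypothetical)

assert ram_filter_passes(node_free, flavor)       # first instance lands
node_free -= flavor                               # naive RAM accounting
assert ram_filter_passes(node_free, flavor)       # node wrongly reusable
assert not exact_ram_filter_passes(65536, flavor) # Exact* never matched
```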
OpenStack-operators mailing list