It's that time - hopefully everyone has recovered from the summit by
now and is getting back in the swing of things.
I'd like to take a little of everyone's time to summarize the main
points we covered while in Vancouver (and record them here for folks
who didn't make it, were in other sessions, etc).
Release Cycle

There's a lot of interest in switching from the 6-month release cycle
to an independent release cycle. I've started a separate discussion on
that.
Network Isolation

This was probably the biggest and most consistent theme -- neutron
integration to allow for separation of provisioning & tenant networks,
and for tenant network isolation, at the physical switch layer, needs
to be completed. Several companies have done POCs, but we need to move
this into the main code.
Sukhdev has offered to lead this work, and has set up a cross-project
meeting to kick it off. I'd say this is our highest priority this
cycle.
tl;dr: we'll need to do some work in both the Nova driver and in Ironic
-- but mostly in Ironic -- to change the sequence of Neutron calls,
and to pass additional information to Neutron. We will also need to
define & document the network metadata which Ironic owns (read: is the
source of truth) and supplies to Neutron.
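To make the "Ironic owns the network metadata" idea concrete, here is a small sketch of the kind of per-port, physical-switch connection info Ironic could record and hand to Neutron. The field names (switch_id, port_id, switch_info) and the helper function are illustrative assumptions, not a settled schema:

```python
# Hypothetical sketch: the per-port metadata Ironic would be the source
# of truth for, and supply to Neutron during provisioning. Field names
# are assumptions for illustration, not a finalized schema.

def build_port_metadata(switch_id, port_id, switch_info):
    """Assemble physical-switch connection info for one node NIC."""
    return {
        "local_link_connection": {
            "switch_id": switch_id,      # switch chassis identifier (MAC)
            "port_id": port_id,          # physical port the NIC cables into
            "switch_info": switch_info,  # free-form switch name/hint
        }
    }

meta = build_port_metadata("0a:1b:2c:3d:4e:5f", "Ethernet3/1", "tor-switch-12")
```

The point is that this data describes physical cabling, which only Ironic can know; Neutron consumes it to program the switch port into the right network.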
Simplifying the Driver Matrix

We already knew that our driver list was getting out of hand and only
getting worse. So, we came up with a plan to address it:
- name each Driver after the type of hardware it manages (eg, drac,
ilo, amt, ...) rather than the combination of interfaces it implements
- allow dynamic loading/swapping of some interfaces (eg, deploy,
console) where appropriate for that driver, based on the node's
configuration
- have sane and well-tested defaults for each interface, which can be
exchanged
Why this approach? Because the unique thing about a given server is
the hardware interfaces required to manage it -- not the software
interfaces we might use to copy an image, start a console server, etc.
This will require some careful thought to ensure the config options as
well as the API options (eg, when enrolling a node) remain backwards
compatible.
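The composition plan above can be sketched in a few lines: a driver named for its hardware, with software interfaces chosen from a registry and sane defaults that a node can override. All of the names here (the registry, the defaults, the helper) are hypothetical, not Ironic's actual API:

```python
# Illustrative sketch of driver composition, assuming a registry of
# interface implementations. The lambdas stand in for real interface
# classes; names here are invented for the example.

DEFAULT_INTERFACES = {"deploy": "iscsi", "console": "shellinabox"}

REGISTRY = {
    ("deploy", "iscsi"): lambda: "ISCSIDeploy",
    ("deploy", "agent"): lambda: "AgentDeploy",
    ("console", "shellinabox"): lambda: "ShellinaboxConsole",
}

def compose_driver(hardware_type, overrides=None):
    """Build a node's interface set: defaults first, then per-node swaps."""
    choices = dict(DEFAULT_INTERFACES)
    choices.update(overrides or {})
    return {
        "hardware_type": hardware_type,
        "interfaces": {
            kind: REGISTRY[(kind, name)]() for kind, name in choices.items()
        },
    }

# A "drac" node that swaps in the agent deploy interface:
drac = compose_driver("drac", overrides={"deploy": "agent"})
```

The naming burden then falls on the hardware type ("drac"), while the deploy/console choices become per-node configuration rather than a combinatorial explosion of driver names.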
Boot Interface

We'll be splitting part of the DeployInterface out into a new
BootInterface this cycle. This will facilitate the creation of drivers
that no-op deployment -- in other words, pure "boot from network" --
and also facilitate the simplification of the driver matrix mentioned
above.
Additionally, we will be promoting the "heartbeat" / "lookup" /
"pass_deploy_info" methods, which are currently implemented in
VendorPassthru, to become a standard part of the Deploy() interface.
It turns out every deploy driver we've got implements this
functionality, and promoting common interfaces out of VendorPassthru
is one of the reasons that interface exists :)
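In shape, the promotion looks something like the following: heartbeat moves from a vendor-specific passthru method to a method every deploy implementation inherits. This is a hypothetical sketch, not Ironic's actual class hierarchy:

```python
# Sketch of promoting heartbeat out of VendorPassthru into the deploy
# interface proper. The base class and node structure are assumptions
# for illustration only.
import abc

class DeployInterface(abc.ABC):
    @abc.abstractmethod
    def deploy(self, node):
        """Write the image to the node and boot it."""

    def heartbeat(self, node, agent_url):
        """Standard entry point: record that the deploy ramdisk checked in."""
        node.setdefault("driver_internal_info", {})["agent_url"] = agent_url

class NoopDeploy(DeployInterface):
    """A pure 'boot from network' driver: deployment is a no-op."""
    def deploy(self, node):
        return "done"

node = {}
NoopDeploy().heartbeat(node, "http://10.0.0.5:9999")
```

Since every deploy driver already implements this behaviour, pulling it up into the base interface removes the duplicated vendor passthru plumbing.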
Cinder Integration

There are two things people mean when they say "cinder integration
with ironic": attaching volumes, and booting from volumes.
Attaching iSCSI volumes to hardware is cool and most of the work is
already done -- but the cinder CLI commands to get the endpoint
weren't documented, so we didn't know. Also, we need some actor inside
the guest to attach and mount the volume. That could go into
cloud-init-v2, or an optional process that gets added to the machine
images. Without that actor, the user has to initiate the attach and
mount themselves -- which is fine, too, but again, the cinder CLI
command to get that endpoint wasn't documented, and a little work is
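For the curious, the in-guest "actor" largely reduces to running iscsiadm against the portal and IQN that Cinder hands back. This sketch only builds the command lines (with placeholder target values) rather than executing anything:

```python
# Sketch of what an in-guest attach actor would run, given the iSCSI
# endpoint info returned by Cinder. Portal/IQN values are placeholders.

def iscsi_attach_commands(portal, iqn):
    """Build the iscsiadm calls: discover the target, then log in."""
    return [
        f"iscsiadm -m discovery -t sendtargets -p {portal}",
        f"iscsiadm -m node -T {iqn} -p {portal} --login",
    ]

cmds = iscsi_attach_commands(
    "192.168.1.10:3260", "iqn.2010-10.org.openstack:volume-1234")
```

Whether this lives in cloud-init-v2 or an optional image-side process, the commands themselves are the easy part; plumbing the endpoint info into the guest is the open question.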
For out-of-band attach (eg, FCoE) the Cinder API should be similar,
but we'll need to create an interface in Ironic to receive and act
upon that information (ie, pass it down to the hardware driver to
initiate the attachment).
For boot-from-volume, passing the iSCSI information to Ironic and
chain-loading iPXE from it is the most straightforward way, but
should be delayed until after we refactor the Boot interface. Ditto
for boot-from-FCoE -- if we can delay this until the Boot interface is
refactored, then it merely becomes an option to that interface, rather
than a whole new DeployDriver.
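To show why chain-loading is the straightforward path: at its simplest it amounts to emitting an iPXE script with a sanboot line pointing at the volume. This sketch generates that script; the URI follows iPXE's iscsi:server:protocol:port:lun:targetname convention, and the server/IQN values are placeholders:

```python
# Illustrative sketch: generate a minimal iPXE script that boots a
# machine from an iSCSI volume. Empty protocol/port/lun fields fall
# back to iPXE defaults.

def ipxe_boot_script(server, iqn, protocol="", port="", lun=""):
    """Emit an iPXE script that sanboots the given iSCSI target."""
    uri = f"iscsi:{server}:{protocol}:{port}:{lun}:{iqn}"
    return f"#!ipxe\nsanboot {uri}\n"

script = ipxe_boot_script(
    "192.0.2.10", "iqn.2010-10.org.openstack:volume-42")
```

If the Boot interface refactor lands first, producing a script like this becomes one boot option among several, rather than a whole new DeployDriver.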
State Machine

We designed more states than we were able to implement last cycle.
This cycle, we'll be adding the "enrolled" state and changing the
process for adding Nodes to Ironic to ensure they are manageable
before making them available to Nova.
We will also implement the "zapping" state, which is where long-lived
operations should take place (eg, flashing firmware or a slow-build of
a RAID array). If there's time, we'll also implement the "rescue"
state, though drafts of this appear to be dependent on the network
isolation work, so it may get bumped.
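The flow described above can be summarized as a small transition table: nodes are enrolled, must be proven manageable, and only then become available to Nova, with zapping as a detour for long-lived operations. The table is an illustration of the plan, not the final state machine:

```python
# Sketch of the provisioning-state additions discussed above. State
# names follow the email; the exact transition set is illustrative.

TRANSITIONS = {
    "enrolled": {"manageable"},
    "manageable": {"available", "zapping"},  # zapping: firmware, RAID, ...
    "zapping": {"manageable"},
    "available": {"deploying"},
}

def can_transition(src, dst):
    """Is dst a legal next state from src?"""
    return dst in TRANSITIONS.get(src, set())

assert can_transition("enrolled", "manageable")
assert not can_transition("enrolled", "available")  # must be verified first
```

The key property is that there is no edge straight from "enrolled" to "available": a node has to pass through "manageable" before Nova ever sees it.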
Better Onboarding & Operator docs
There were multiple discussions about the need to improve our
operator-focused "getting started" documentation. Yes. We know :)
I talked with the OpenStack Docs team, and we'll be working on a way
to syndicate / publish the documentation that we have today
(maintained in openstack/ironic/docs/) into the official doc website
while keeping the source of the docs in the current place.
As developers, we're not the best doc writers, but as developers and
reviewers, we've built a habit of proposing/requiring doc updates
along with code changes. Having the code and the operator docs in the
same repo has been helpful, and we'd like to keep it that way.
So - operators and tech writers - please don't be shy. We'd love your
help writing docs about Ironic, and we don't expect you to do it
alone.
Using Ironic outside of OpenStack
We had a lot of folks interested in using Ironic by itself, and to
that end, there are a few new projects showcased at the summit that
should facilitate this.
Bifrost is usable at this point; I did a demo of it in my
presentation on Monday [1]. There's more work to be done here -- it is
not very configurable or dynamic yet -- but we will be setting up
functional testing for Ironic using Bifrost / Ansible, in addition to
our current devstack-based job(s).
Michael demo'd his browser-based JS UI for Ironic -- it talks
directly from the browser to Ironic's REST API, and thus delivers a UI
for Ironic without running any additional server-side services. This
hasn't been proposed to the openstack/ namespace yet, though.
Both of these are fairly young projects, but I'd like to see more
projects become usable in their own right.
That's all for now!
[1] about 21 minutes into https://www.youtube.com/watch?v=C9-o1gLHHWo