
[openstack-dev] [kolla] [bifrost] bifrost container.

0 votes

Hi everyone.

Following up on my AR from the kolla host repository session
https://etherpad.openstack.org/p/kolla-newton-summit-kolla-kolla-host-repo
I started working on creating a kolla bifrost container.

After some initial success I have hit a roadblock with the current install playbook provided by bifrost.
In particular, the install playbook both installs the ironic dependencies and configures and runs the services.

The installation of ironic and its dependencies would not be a problem, but the ansible service module is not capable of starting the
infrastructure services (mysql, rabbit ...) without a running init system, which is not present during the docker build.
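For illustration, the failing pattern is any task of this shape, which needs a running init system to query and start units (a hypothetical task, not the exact bifrost one):

    - name: ensure infrastructure services are running
      service: name={{ item }} state=started
      with_items:
        - mysql
        - rabbitmq-server

Under docker build there is no PID 1 for the service module to talk to, so the task fails even though the packages themselves installed fine.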

When I created a bifrost container in the past I spawned an Ubuntu upstart container, then docker exec'd into the container and ran the
bifrost install script. This works because the init system is running and the service module could test and start the relevant services.

This leaves me with 3 paths forward.

  1. I can continue to try to make the bifrost install script work with the kolla build system, by using sed to modify the install playbook or by trying to start systemd during the docker build.

  2. I can use the kolla build system to build only part of the image

a. the bifrost-base image would be built with the kolla build system without running the bifrost playbook. This
would allow the existing features of the build system, such as adding headers/footers, to be used.

b. After the base image is built by kolla I can spawn an instance of bifrost-base with systemd running

c. I can then connect to this running container and run the bifrost install script unmodified.

d. Once it is finished I can stop the container and export it to an image "bifrost-postinstall".

e. This can either be used directly (fat container) or as the base image for other containers that run each of the ironic services (thin containers)

  3. I can skip the kolla build system entirely and create a script/playbook that will build the bifrost container similar to 2.
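To make option 2 concrete, the flow would be roughly the following (image names and the install entry point are placeholders, not final names):

    # build the base image with kolla, then finish the install in a live container
    docker run -d --privileged --name bifrost-build bifrost-base /sbin/init
    docker exec -it bifrost-build bash /bifrost/scripts/env-setup.sh
    docker stop bifrost-build
    docker commit bifrost-build bifrost-postinstall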

While option 1 would make full use of the kolla build system, it is my least favorite as it is both hacky and complicated to make work.
Docker really was not designed to run systemd as part of docker build.

For options 2 and 3 I can provide a single playbook/script that will fully automate the build, but the real question I have
is whether I should use the kolla build system to make the base image or not.

If anyone else has suggestions on how I can progress please let me know, but currently I am leaning towards option 2.

The only other option I see would be to not use a container and either install bifrost on the host or in a VM.
These would essentially be a no-op for kolla, as we would simply have to document how to install bifrost, which is covered
quite well as part of the bifrost project.

Regards
Sean.


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
asked May 6, 2016 in openstack-dev by Mooney,_Sean_K (3,580 points)   3 9
retagged Jan 25, 2017 by admin

16 Responses

0 votes

Are we (as the Kolla community) open to other bare metal provisioners? The Austin discussion was titled generic bare metal, but very quickly turned into bifrost-only discourse. The initial survey showed cobbler/maas/OoO as alternatives people use today. So if the bifrost strategy is, "deploy a VM to deploy bifrost to deploy bare metal" and will be cleaned up later, then maybe it's time to take a deeper look at the other deployment tools and see if they are a better fit?

Thx,
britt

From: "Steven Dake (stdake)" stdake@cisco.com
Reply-To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Date: Monday, May 9, 2016 at 5:41 PM
To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [kolla] [bifrost] bifrost container.

From: Devananda van der Veen devananda.vdv@gmail.com
Reply-To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Date: Monday, May 9, 2016 at 1:12 PM
To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [kolla] [bifrost] bifrost container.

On Fri, May 6, 2016 at 10:56 AM, Steven Dake (stdake) stdake@cisco.com wrote:
Sean,

Thanks for taking this on :) I didn't know you had such an AR :)

From: "Mooney, Sean K" sean.k.mooney@intel.com
Reply-To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Date: Friday, May 6, 2016 at 10:14 AM
To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Subject: [openstack-dev] [kolla] [bifrost] bifrost container.

Hi everyone.

Following up on my AR from the kolla host repository session
https://etherpad.openstack.org/p/kolla-newton-summit-kolla-kolla-host-repo
I started working on creating a kolla bifrost container.

After some initial success I have hit a roadblock with the current install playbook provided by bifrost.
In particular, the install playbook both installs the ironic dependencies and configures and runs the services.

What I'd do here is ignore the install playbook and duplicate what it installs. We don't want to install at run time, we want to install at build time. You weren't clear if that is what you're doing.

That's going to be quite a bit of work. The bifrost-install playbook does a lot more than just install the ironic services and a few system packages; it also installs rabbit, mysql, nginx, dnsmasq and configures all of these in a very specific way. Re-inventing all of this is basically re-inventing Bifrost.

Sean's latest proposal was splitting this one operation into three smaller decomposed steps.

The reason we would ignore the install playbook is because it runs the services. We need to run the services in a different way.

Do you really need to run them in a different way? If it's just a matter of "use a different init system", I wonder how easily that could be accommodated within the Bifrost project itself.... If there's another reason, please elaborate.

To run in a container, we cannot use systemd. This leaves us with supervisord, which certainly can and should be done in the context of upstream bifrost.

This will (as we discussed at ODS) be a fat container on the underlord cloud, which I guess is ok. I'd recommend not using systemd, as that will break systemd systems badly. Instead use a different init system, such as supervisord.
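For illustration, a minimal supervisord config along these lines would declare one program per service; the service names and paths below are assumptions, not bifrost's actual layout:

    [supervisord]
    nodaemon=true

    [program:mysqld]
    command=/usr/bin/mysqld_safe
    autorestart=true

    [program:ironic-api]
    command=/usr/bin/ironic-api --config-file /etc/ironic/ironic.conf
    autorestart=true

    [program:ironic-conductor]
    command=/usr/bin/ironic-conductor --config-file /etc/ironic/ironic.conf
    autorestart=true

supervisord then runs as the container's PID 1 and restarts any service that dies.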

The installation of ironic and its dependencies would not be a problem, but the ansible service module is not capable of starting the
infrastructure services (mysql, rabbit ...) without a running init system, which is not present during the docker build.

When I created a bifrost container in the past I spawned an Ubuntu upstart container, then docker exec'd into the container and ran the
bifrost install script. This works because the init system is running and the service module could test and start the relevant services.

This leaves me with 3 paths forward.

  1. I can continue to try to make the bifrost install script work with the kolla build system, by using sed to modify the install playbook or by trying to start systemd during the docker build.

  2. I can use the kolla build system to build only part of the image

a. the bifrost-base image would be built with the kolla build system without running the bifrost playbook. This
would allow the existing features of the build system, such as adding headers/footers, to be used.

b. After the base image is built by kolla I can spawn an instance of bifrost-base with systemd running

c. I can then connect to this running container and run the bifrost install script unmodified.

d. Once it is finished I can stop the container and export it to an image "bifrost-postinstall".

e. This can either be used directly (fat container) or as the base image for other containers that run each of the ironic services (thin containers)

  3. I can skip the kolla build system entirely and create a script/playbook that will build the bifrost container similar to 2.

4. Make a supervisord set of init scripts and make the docker file do what it was intended to do: install the files. This is kind of a mashup of your 1-3 ideas. Good thinking :)

While option 1 would make full use of the kolla build system, it is my least favorite as it is both hacky and complicated to make work.
Docker really was not designed to run systemd as part of docker build.

For options 2 and 3 I can provide a single playbook/script that will fully automate the build, but the real question I have
is whether I should use the kolla build system to make the base image or not.

If anyone else has suggestions on how I can progress please let me know, but currently I am leaning towards option 2.

If you have questions about my suggestion to use supervisord, hit me up on IRC. Ideally we would also contribute these init scripts back into the bifrost code base, assuming they want them, which I think they would. Nobody will run systemd in a container, and we all have an interest in seeing BiFrost as the standard bare metal deployment model inside or outside of containers.

Regards
-steve

The only other option I see would be to not use a container and either install bifrost on the host or in a VM.
GROAN: one advantage containers provide us is not mucking up the host OS with a bajillion dependencies. I'd like to keep that part of Kolla intact :)

Right - don't install it on the host, but what's the problem with running it in a VM?

FWIW, I already run Bifrost quite successfully in a VM in each of my environments.

There isn't a super specific problem with running it in a VM other than Kolla is about containers, not VMs. OpenStack can obviously be run in a VM; our major reason for wanting containers is upgradability, which VMs don't offer atomically.

That said, we could run in a VM initially and over time port to run in a container. What we are after long term is a container-based approach to bifrost in upstream bifrost, not replicating or duplicating a bunch of work.

I believe Sean's approach of splitting out the 3 separate steps makes logical sense (to me) in the sense that the one major installation step is broken into the separate build & deploy steps that Kolla uses.

Hope that helps

Regards
-steve

--Deva


responded May 9, 2016 by Britt_Houser_(bhouse (1,360 points)   3
0 votes

On Fri, May 6, 2016 at 1:16 PM, Fox, Kevin M Kevin.Fox@pnnl.gov wrote:

I was under the impression bifrost was 2 things, one, an
installer/configurator of ironic in a stand alone mode, and two, a
management tool for getting machines deployed without needing nova using
ironic.

"Bifrost is a set of ansible playbooks..."
- https://github.com/openstack/bifrost

It's not "an installer" + "a management tool" -- Bifrost contains a
playbook for installing Ironic (and a whole lot of service dependencies,
configuration files, etc), and it contains a playbook for deploying a
machine image to some hardware, by leveraging Ironic (and all the other
service dependencies) that were prepared earlier. It also contains a lot of
other playbooks as well, many of which are actually sub-components of these
two high-level steps. In describing Bifrost, we have found it useful to
think of these as separate steps, but not separate things.

The first use case seems like it should just be handled by enhancing
kolla's ironic container stuff directly to handle the use case, doing
things the kolla way. This seems much cleaner to me. Doing it at runtime
loses most of the benefits of doing it in a container at all.

You definitely shouldn't install Ironic and all of its system service
dependencies (nginx, dnsmasq, tftpd, rabbit, mysql) at runtime, but I also
don't think you should completely split things up into
one-service-per-container.

The second adds a lot of value I think, and that's what the bifrost
container should be?

The "management tool" you refer to would be more accurately described as
"using the bifrost-dynamic-deploy playbook, which leverages the system
produced by the bifrost-ironic-install playbook, to deploy a machine image
to some hardware, which was previously enrolled in Ironic using the
ironic-enroll-dynamic playbook".
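For reference, those high-level steps map to playbook invocations roughly like the following (paths follow the bifrost repo layout; exact names and flags may vary by version):

    # install Ironic and its service dependencies
    ansible-playbook -i playbooks/inventory/localhost playbooks/install.yaml
    # enroll hardware in Ironic, then deploy an image to it
    ansible-playbook -i playbooks/inventory/bifrost_inventory.py playbooks/enroll-dynamic.yaml
    ansible-playbook -i playbooks/inventory/bifrost_inventory.py playbooks/deploy-dynamic.yaml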

Hope that is helpful,
--Deva


responded May 9, 2016 by Devananda_van_der_Ve (10,380 points)   2 3 5
0 votes

On Mon, May 9, 2016 at 11:03 AM, Mooney, Sean K sean.k.mooney@intel.com
wrote:

Hi

If we choose to use bifrost to deploy ironic standalone, I think combining
Kevin's previous suggestion of modifying the bifrost install playbook with
Steve Dake's suggestion of creating a series of supervisord configs for
running each of the services is a reasonable approach.

I am currently looking to scope how much effort would be required to split
the main task in the bifrost-ironic-install role

https://github.com/openstack/bifrost/blob/master/playbooks/roles/bifrost-ironic-install/tasks/main.yml

into 3 files which would be included in the main.yml:

install_components.yml (executed when skip_install is not defined)

bootstrap_components.yml (executed when skip_bootstrap is not defined)

start_components.yml (executed when skip_start is not defined)

By default all three would be executed, maintaining the current behavior of
bifrost today.
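As a rough sketch, the decomposed main.yml could be as simple as the following (file and variable names follow the proposal above; the exact conditionals are an assumption):

    # roles/bifrost-ironic-install/tasks/main.yml (sketch)
    - include: install_components.yml
      when: skip_install is not defined

    - include: bootstrap_components.yml
      when: skip_bootstrap is not defined

    - include: start_components.yml
      when: skip_start is not defined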

At initial glance, this seems reasonable to me, but the details may get
complicated.

For instance,
- in which playbook do you do things like create system users, db schema,
and necessary directories? These will be identical for every build and
every deployment (of the same version and distro)
- in which playbook do you build or download the IPA ramdisk?
- in which playbook do you set up the networking?

During the kolla build of the bifrost image,
https://github.com/openstack/bifrost/blob/master/playbooks/install.yaml
would be run with skip_bootstrap and skip_start defined as true, so only
install_components.yml will be executed by the main task.

This would install all software components of bifrost/ironic without
performing configuration or starting the services.

At deployment time, during the bootstrap phase, we would spawn an instance
of the bifrost-base container and invoke

https://github.com/openstack/bifrost/blob/master/playbooks/install.yaml
with skip_install and skip_start defined, executing bootstrap_components.yml.

bootstrap_components.yml would encapsulate all logic related to creating
the ironic db (running migration scripts) and generating the configuration
files for the bifrost components.

Finally, in the start phase we have 3 options:

a) Spawn an instance of the bifrost-supervisor container and use
supervisord to run the bifrost/ironic services (fat container)

b) Spawn an instance of the bifrost-base container and invoke
https://github.com/openstack/bifrost/blob/master/playbooks/install.yaml
with skip_install and skip_bootstrap defined, and allow bifrost to start
the services (fat container)

c) Spawn a series of containers, each running a single service and
sharing the required volumes to allow them to communicate (app containers)

I don't know enough about supervisord to comment on (a), unfortunately.

(b) looks like the least amount of work, but I'm unclear as to when the
bootstrap phase would have been run.

(c) seems like a lot more work in the long run to maintain the code to
create those volumes, separate per-service containers, and so on.

I would welcome any input from the bifrost community on this, especially
related to the decomposition of the main.yml into 3 phases.

I'm hoping to do a quick PoC this week to see how easy this decomposition is.

I would also like to call out upfront that, depending on the scope of this
item, I may have to withdraw from contributing to it.

I work in Intel's network platforms group, so enabling baremetal
installation is somewhat outside the standard work our division undertakes.
If we can reuse bifrost to do most of the heavy lifting of creating the
bifrost container and deploying ironic, then the scope of creating the
bifrost container is small enough that I can justify spending some of my
time working on it. If it requires significant changes to bifrost or a
rework of kolla's ironic support, then I will have to step back and focus
more on features that are closer aligned to our team's core networking and
orchestration focus, such as enhancing kolla to be able to deploy ovs with
dpdk and/or opendaylight, which are also items I would like to contribute
to this cycle. I don't want to commit to delivering this feature unless I
know I will have the time to work on it, but I am happy to help where I can.

@kevin some replies to your questions inline.

Regards

Sean.

From: Fox, Kevin M [mailto:Kevin.Fox@pnnl.gov]
Sent: Friday, May 6, 2016 9:17 PM
To: OpenStack Development Mailing List (not for usage questions) <
openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [kolla] [bifrost] bifrost container.

I was under the impression bifrost was 2 things, one, an
installer/configurator of ironic in a stand alone mode, and two, a
management tool for getting machines deployed without needing nova using
ironic.

[Mooney, Sean K] yes this is correct, bifrost does provide both install
playbooks for deploying ironic in standalone mode and a series of playbooks
for dynamically enrolling nodes in ironic and dynamically deploying images
to hosts without requiring nova. Bifrost also provides integration with
diskimage-builder to generate machine images if desired.

The first use case seems like it should just be handled by enhancing
kolla's ironic container stuff directly to handle the use case, doing
things the kolla way. This seems much cleaner to me. Doing it at runtime
loses most of the benefits of doing it in a container at all.

[Mooney, Sean K] I was not suggesting doing the installation at runtime.
Options 2 and 3 suggested spawning a container as part of the build in
which the install playbook would be run.

That container would then be stopped and exported to form the base image
for the bifrost container(s). The base image (bifrost-postinstall) would
either be used to create a fat container, using an init system such as
supervisord to run each of the services,

or be used as the base image for a set of bifrost containers, each of which
runs a single component.

The second adds a lot of value I think, and that's what the bifrost
container should be?

[Mooney, Sean K] yes it does, and I think it can be reused regardless of
how we decide to deploy ironic.

Thanks,
Kevin


From: Mooney, Sean K [sean.k.mooney@intel.com]
Sent: Friday, May 06, 2016 12:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] [bifrost] bifrost container.

From: Steven Dake (stdake) [mailto:stdake@cisco.com]
Sent: Friday, May 6, 2016 6:56 PM
To: OpenStack Development Mailing List (not for usage questions) <
openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [kolla] [bifrost] bifrost container.

Sean,

Thanks for taking this on :) I didn't know you had such an AR :)

[Mooney, Sean K] well if others want to do the work that's ok with me too,
but I was planning on deploying bifrost at home again anyway, so I thought
I might as well try to automate the process while I'm at it.

*From: *"Mooney, Sean K" sean.k.mooney@intel.com
*Reply-To: *"OpenStack Development Mailing List (not for usage
questions)" openstack-dev@lists.openstack.org
*Date: *Friday, May 6, 2016 at 10:14 AM
*To: *"OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
*Subject: *[openstack-dev] [kolla] [bifrost] bifrost container.

Hi everyone.

Following up on my AR from the kolla host repository session

https://etherpad.openstack.org/p/kolla-newton-summit-kolla-kolla-host-repo

I started working on creating a kolla bifrost container.

After some initial success I have hit a roadblock with the current install
playbook provided by bifrost.

In particular, the install playbook both installs the ironic dependencies
and configures and runs the services.

What I'd do here is ignore the install playbook and duplicate what it
installs. We don't want to install at run time, we want to install at
build time. You weren't clear if that is what you're doing.

[Mooney, Sean K] that is certainly an option, but bifrost is an installer
for ironic and its supporting services. Not using its installation scripts
significantly reduces the value of integrating with bifrost vs fixing the
existing ironic support in kolla and using that to provision the undercloud.

The reason we would ignore the install playbook is because it runs the
services. We need to run the services in a different way. This will (as
we discussed at ODS) be a fat container on the underlord cloud, which I
guess is ok. I'd recommend not using systemd, as that will break systemd
systems badly. Instead use a different init system, such as supervisord.

[Mooney, Sean K] if we don't use the bifrost install playbook then yes,
supervisord would be a good choice for the init system.

Looking at the official centos docker image https://hub.docker.com/_/centos/
they do provide instructions for running systemd containers, though I have
had issues with this in the past.

The installation of ironic and its dependencies would not be a problem, but
the ansible service module is not capable of starting the

infrastructure services (mysql, rabbit ...) without a running
init system, which is not present during the docker build.

When I created a bifrost container in the past I spawned an Ubuntu upstart
container, then docker exec'd into the container and ran the

bifrost install script. This works because the init system is running and
the service module could test and start the relevant services.

This leaves me with 3 paths forward.

  1. I can continue to try to make the bifrost install script work
    with the kolla build system, by using sed to modify the install playbook
    or by trying to start systemd during the docker build.

  2. I can use the kolla build system to build only part of the image

a. the bifrost-base image would be built with the kolla build
system without running the bifrost playbook. This
would allow the existing features of the build system,
such as adding headers/footers, to be used.

b. After the base image is built by kolla I can spawn an instance of
bifrost-base with systemd running

c. I can then connect to this running container and run the bifrost
install script unmodified.

d. Once it is finished I can stop the container and export it to an
image "bifrost-postinstall".

e. This can either be used directly (fat container) or as the base
image for other containers that run each of the ironic services (thin
containers)

  3. I can skip the kolla build system entirely and create a
    script/playbook that will build the bifrost container similar to 2.

4. Make a supervisord set of init scripts and make the docker file do what
it was intended to do: install the files. This is kind of a mashup of your
1-3 ideas. Good thinking :)

While option 1 would make full use of the kolla build system, it is my
least favorite as it is both hacky and complicated to make work.

Docker really was not designed to run systemd as part of docker build.

For options 2 and 3 I can provide a single playbook/script that will fully
automate the build, but the real question I have

is whether I should use the kolla build system to make the base image or not.

If anyone else has suggestions on how I can progress please let me know,
but currently I am leaning towards option 2.

If you have questions about my suggestion to use supervisord, hit me up on
IRC. Ideally we would also contribute these init scripts back into the
bifrost code base, assuming they want them, which I think they would.
Nobody will run systemd in a container, and we all have an interest in
seeing BiFrost as the standard bare metal deployment model inside or
outside of containers.

[Mooney, Sean K] I have briefly used supervisord before for a pet
project https://github.com/SeanMooney/docker-devstack to create a container
for running devstack so it did not pollute my host.

supervisord is a nice tool. I'm just about to head home for the weekend,
but I might grab you on IRC on Monday to follow up.

Regards

-steve

The only other option I see would be to not use a container and either
install bifrost on the host or in a VM.

GROAN: one advantage containers provide us is not mucking up the host OS
with a bajillion dependencies. I'd like to keep that part of Kolla intact
:)

[Mooney, Sean K] yes, I would prefer not to break that too. This was
basically the option of: we don't actually do the integration and instead
just tell the user how to use bifrost to do the deployment, but leave it up
to them to decide how to install it. So for me that was plan Z; we have a
couple of letters to go through first.

These would essentially be a no-op for kolla, as we would simply have to
document how to install bifrost, which is covered quite well as part of
the bifrost project.

Regards

Sean.


responded May 9, 2016 by Devananda_van_der_Ve (10,380 points)   2 3 5
0 votes

I'm not sure if it is necessary to write up or provide support on how to
use more than one deployment tool, but I think any work that
inadvertently makes it harder for an operator to use their own existing
deployment infrastructure could run some people off.

Regarding "deploy a VM to deploy bifrost to deploy bare metal", I
suspect that situation will not be unique to bifrost. At the moment I'm
using MAAS and it has a hard dependency on Upstart for init up until
around Ubuntu Trusty and then was ported to systemd in Wily. I do not
think you can just switch to another init daemon or run it under
supervisord without significant work. I was not even able to get the
maas package to install during a docker build because it couldn't
communicate with the init system it wanted. In addition, for any
deployment tool that enrolls/deploys via PXE the tool may also require
accommodations when being containerized simply because this whole topic
is fairly low in the stack of abstractions. For example I'm not sure
whether any of these tools running in a container would respond to a new
bare metal host's initial DHCP broadcast without --net=host or similar
consideration.

As long as the most common deployment option in Kolla is Ansible, making
deployment tools pluggable is fairly easy to solve. MAAS and bifrost
both have inventory scripts that can provide dynamic inventory to
kolla-ansible while still pulling Kolla's child groups from the
multinode inventory file. Another common pattern could be for a given
deployment tool to template out a new (static) multinode inventory and
then we just append Kolla's groups to the file before calling
kolla-ansible. The problem, to me, becomes getting every other option
(k8s, puppet, etc.) to work similarly. Perhaps you just state that each
implementation must be pluggable to various deployment tools and let
people that know their respective tool handle the how.(?)
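As a sketch of that second pattern (the helper script and file names here
are hypothetical):

    # render a static multinode inventory from the provisioner's data,
    # then append Kolla's child groups before deploying
    ./render_inventory_from_provisioner.py > multinode
    cat kolla_child_groups.ini >> multinode
    kolla-ansible deploy -i ./multinode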

Currently I am running MAAS inside a Vagrant box to retain some of the
immutability and easy "create/destroy" workflow that having it
containerized would offer. It works very well and, assuming nothing else
was running on the underlying deployment host, I'd have no issue running
it in prod that way even with the Vagrant layer.

Thank you,
Mark

responded May 9, 2016 by Mark_Casey (380 points)  
0 votes

Mark,

This is exactly the kind of discussion I was hoping for during Austin. I agree with pretty much all your statements. I think if Kolla can define what it would expect in the inventory provided by a bare metal provisioner, and we can make an ABI around that, then this becomes a lot more operator friendly. I kinda hoped the discussion would start with that definition, and then delve into individual bare metal tools after that.

To add to the discussion of looking a little deeper at the deployment tools: we use cobbler and have containerized it in the "Kolla way". We run TFTP and HTTP in their own containers. Cobblerd and DHCP had to be in the same container, only because cobbler expects to issue a "systemctl restart isc-dhcpd-server" command when it changes the DHCP config. If either cobbler or isc-dhcp could handle this in a more graceful manner, then there wouldn't be any problem putting them each in their own container. We share volumes between the containers, and the cobblerd container runs supervisord. Cobbler has an API using xmlrpc which we utilize for system definition. It also can provide an ansible inventory, although I haven't played with that feature. I know cobbler doesn't do the new shiny image based deployment, but for us it's feature-mature, steady, and reliable.
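For illustration, the supervisord side of such a cobblerd container could look roughly like this (a sketch; whether cobblerd and dhcpd stay in the foreground with these flags depends on the packaged versions):

    [supervisord]
    nodaemon=true

    [program:cobblerd]
    command=/usr/bin/cobblerd -F
    autorestart=true

    [program:dhcpd]
    command=/usr/sbin/dhcpd -f -cf /etc/dhcp/dhcpd.conf
    autorestart=true

With that in place, the restart cobbler issues could become "supervisorctl restart dhcpd" instead of a systemctl call.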

I'd love to hear from other folks about their journey with bare metal deployment with Kolla.

Thx,
britt

responded May 9, 2016 by Britt_Houser_(bhouse (1,360 points)   3
0 votes

@Mark
What we discussed at the summit were two actions:
1. create an optional bifrost container to provision the OS on a node
2. create a generic playbook that will configure the provisioned node with the deploy dependencies.
The playbook should be reusable regardless of how the system is provisioned.

I could not sleep last night, so I decided to hack on a PoC of the bifrost container
and the bifrost install decomposition.

It can be found here
https://github.com/SeanMooney/kolla/tree/bifrost
https://github.com/SeanMooney/bifrost/tree/kolla

I was testing this with a centos host and a centos source build of the bifrost-systemd container.

As the name implies, I'm cheating by currently using systemd as my init system.
This works fine in a container, even with systemd on the host, provided you follow the recommended steps for running
systemd containers,

e.g. add the following to your Dockerfile:

ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ "$i" == "systemd-tmpfiles-setup.service" ] || rm -f "$i"; done); \
    rm -f /lib/systemd/system/multi-user.target.wants/*; \
    rm -f /etc/systemd/system/*.wants/*; \
    rm -f /lib/systemd/system/local-fs.target.wants/*; \
    rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
    rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
    rm -f /lib/systemd/system/basic.target.wants/*; \
    rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]

How the PoC currently works:

  1. Clone https://github.com/SeanMooney/kolla and check out the bifrost branch

  2. Run tox -e genconfig, modify for a source install, and update the bifrost-base entry to point to https://github.com/SeanMooney/bifrost/ with reference=kolla

  3. Run tools/build.py bifrost-systemd

a. As part of the build this runs ansible-playbook -vvvv -i /bifrost/playbooks/inventory/localhost /bifrost/playbooks/install.yaml -e @/tmp/build_arg.yml

b. build_arg.yml contains: skip_bootstrap: true, skip_start: true, install_dib: true, create_image_via_dib: false

c. This results in all bifrost/ironic dependencies being installed as part of the image build, without configuring or starting the services

  4. Once the images are built, start the bifrost-systemd container for bootstrapping:

a. docker run -dit --privileged --net=host --name bifrost kollaglue/centos-source-bifrost-systemd

  5. docker exec -it bifrost bash

  6. Fix /etc/hosts by adding the hostname to the 127.0.0.1 line, to work around a sed issue when running in a container.

  7. Source /bifrost/env-vars and source /opt/stack/ansible/hacking/env-setup

  8. To bootstrap bifrost and start the services, run ansible-playbook -vvvv -i /bifrost/playbooks/inventory/localhost /bifrost/playbooks/install.yaml -e skip_install -e network_interface=
    (this could be split into two steps using skip_bootstrap and skip_start)
At this point ironic should be running.

Known issues currently:

· Ansible does not use the kolla python venv, so I have to install shade and jsonpatch manually (see the sketch after this list) to make the enroll-dynamic playbook work correctly.

· Deploy-dynamic currently does not work.

After a node is enrolled, calling ironic node-set-power-state on it works, so ironic can connect to the node over IPMI and manage it.

Currently I am not sure how to fix the venv issue or the deploy-dynamic issue, but I believe the deployment would succeed if I can get the playbook to work.
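A minimal sketch of the manual workaround for the venv issue, run inside the container (assuming pip is available on the container's path):

    pip install shade jsonpatch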

Regards
Sean

From: Britt Houser (bhouser) [mailto:bhouser@cisco.com]
Sent: Tuesday, May 10, 2016 12:09 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [kolla] [bifrost] bifrost container.

Mark,

This is exactly the kind of discussion I was hoping for during Austin. I agree with pretty much all your statements. I think if Kolla can define what it would expect in the inventory provided by a bare metal provisioner, and we can make an ABI around that, then this becomes a lot more operator friendly. I kinda hoped the discussion would start with that definition, and the delve into individual bare metal tools after that.

To add the discussion of looking a little deeper at the deployment tools: we use cobbler and have containerized it in the "Kolla way". We run TFTP and HTTP in their own containers. Cobblerd and DHCP had to be in the same container, only b/c cobbler expects to issue "systemctl restart isc-dhcpd-server" command when it changes the DHCP config. If either cobbler or isc-dhcp could handle this is a more graceful manner, then there wouldn't be any problem putting them each in their own container. We share volumes between the containers, and the cobblerd container runs supervisord. Cobbler has an API using xmlrpc which we utilize for system definition. It also can provide an ansible inventory, although I haven't played with that feature. I know cobbler doesn't do the new shiny image based deployment, but for us its feature-mature, steady, and reliable.

I'd love to hear from other folks about their journey with bare metal deployment with Kolla.

Thx,
britt

From: Mark Casey markcasey@pointofrental.com
Reply-To: "openstack-dev@lists.openstack.org" openstack-dev@lists.openstack.org
Date: Monday, May 9, 2016 at 6:48 PM
To: "openstack-dev@lists.openstack.org" openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [kolla] [bifrost] bifrost container.

I'm not sure if it is necessary to write up or provide support on how to use more than one deployment tool, but I think any work that inadvertently makes it harder for an operator to use their own existing deployment infrastructure could run some people off.

Regarding "deploy a VM to deploy bifrost to deploy bare metal", I suspect that situation will not be unique to bifrost. At the moment I'm using MAAS and it has a hard dependency on Upstart for init up until around Ubuntu Trusty and then was ported to systemd in Wily. I do not think you can just switch to another init daemon or run it under supervisord without significant work. I was not even able to get the maas package to install during a docker build because it couldn't communicate with the init system it wanted. In addition, for any deployment tool that enrolls/deploys via PXE the tool may also require accommodations when being containerized simply because this whole topic is fairly low in the stack of abstractions. For example I'm not sure whether any of these tools running in a container would respond to a new bare metal host's initial DHCP broadcast without --net=host or similar consideration.

As long as the most common deployment option in Kolla is Ansible, making deployment tools pluggable is fairly easy to solve. MAAS and bifrost both have inventory scripts that can provide dynamic inventory to kolla-ansible while still pulling Kolla's child groups from the multinode inventory file. Another common pattern could be for a given deployment tool to template out a new (static) multinode inventory and then we just append Kolla's groups to the file before calling kolla-ansible. The problem, to me, becomes in getting every other option (k8s, puppet, etc.) to work similarly. Perhaps you just state that each implementation must be pluggable to various deployment tools and let people that know their respective tool handle the how.(?)

Currently I am running MAAS inside a Vagrant box to retain some of the immutability and easy "create/destroy" workflow that having it containerized would offer. It works very well and, assuming nothing else was running on the underlying deployment host, I'd have no issue running it in prod that way even with the Vagrant layer.

Thank you,
Mark
On 5/9/2016 4:52 PM, Britt Houser (bhouser) wrote:
Are we (as the Kolla community) open to other bare metal provisioners? The Austin discussion was titled generic bare metal, but very quickly turned into bifrost-only discourse. The initial survey showed cobbler/maas/OoO as alternatives people use today. So if the bifrost strategy is, "deploy a VM to deploy bifrost to deploy bare metal" and will be cleaned up later, then maybe it's time to take a deeper look at the other deployment tools and see if they are a better fit?

Thx,
britt

From: "Steven Dake (stdake)" stdake@cisco.com
Reply-To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Date: Monday, May 9, 2016 at 5:41 PM
To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [kolla] [bifrost] bifrost container.

From: Devananda van der Veen devananda.vdv@gmail.com
Reply-To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Date: Monday, May 9, 2016 at 1:12 PM
To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [kolla] [bifrost] bifrost container.

On Fri, May 6, 2016 at 10:56 AM, Steven Dake (stdake) stdake@cisco.com wrote:
Sean,

Thanks for taking this on :) I didn't know you had such an AR :)

From: "Mooney, Sean K" sean.k.mooney@intel.com
Reply-To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Date: Friday, May 6, 2016 at 10:14 AM
To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Subject: [openstack-dev] [kolla] [bifrost] bifrost container.

Hi everyone.

Following up on my AR from the kolla host repository session
https://etherpad.openstack.org/p/kolla-newton-summit-kolla-kolla-host-repo
I started working on creating a kolla bifrost container.

After some initial success I have hit a roadblock with the current install playbook provided by bifrost.
In particular the install playbook both installs the ironic dependencies and configures and runs the services.

What I'd do here is ignore the install playbook and duplicate what it installs. We don't want to install at run time, we want to install at build time. You weren't clear if that is what you're doing.

That's going to be quite a bit of work. The bifrost-install playbook does a lot more than just install the ironic services and a few system packages; it also installs rabbit, mysql, nginx, dnsmasq and configures all of these in a very specific way. Re-inventing all of this is basically re-inventing Bifrost.

Sean's latest proposal was splitting this one operation into three smaller decomposed steps.

The reason we would ignore the install playbook is because it runs the services. We need to run the services in a different way.

Do you really need to run them in a different way? If it's just a matter of "use a different init system", I wonder how easily that could be accommodated within the Bifrost project itself.... If there's another reason, please elaborate.

To run in a container, we cannot use systemd. This leaves us with supervisord, which certainly can and should be done in the context of upstream bifrost.

This will (as we discussed at ODS) be a fat container on the underlord cloud – which I guess is ok. I'd recommend not using systemd, as that will break systemd systems badly. Instead use a different init system, such as supervisord.
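
To sketch what that could look like: a fat bifrost container would run one supervisord process as PID 1 with a program entry per service. The service list and paths below are assumptions based on what the install playbook sets up (mysql, rabbit, the ironic services, etc.), not bifrost's actual layout:

    # Sketch only: everything runs in the foreground under supervisord
    # instead of systemd.
    cat > /etc/supervisord.conf <<'EOF'
    [supervisord]
    nodaemon=true

    [program:mysqld]
    command=/usr/bin/mysqld_safe

    [program:rabbitmq-server]
    command=/usr/sbin/rabbitmq-server

    [program:ironic-api]
    command=/usr/bin/ironic-api --config-file /etc/ironic/ironic.conf

    [program:ironic-conductor]
    command=/usr/bin/ironic-conductor --config-file /etc/ironic/ironic.conf
    EOF
    supervisord -c /etc/supervisord.conf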

The installation of ironic and its dependencies would not be a problem, but the ansible service module is not capable of starting the
infrastructure services (mysql, rabbit, …) without a running init system, which is not present during the docker build.

When I created a bifrost container in the past I spawned an Ubuntu upstart container, then ran docker exec into the container and ran the
bifrost install script. This worked because the init system was running and the service module could test and start the relevant services.

This leaves me with 3 paths forward.

  1. I can continue to try and make the bifrost install script work with the kolla build system by using sed to modify the install playbook or by trying to start systemd during the docker build.

  2. I can use the kolla build system to build only part of the image

a. the bifrost-base image would be built with the kolla build system without running the bifrost playbook. This
would allow the existing features of the build system, such as adding headers/footers, to be used.

b. After the base image is built by kolla I can spawn an instance of bifrost-base with systemd running

c. I can then connect to this running container and run the bifrost install script unmodified.

d. Once it is finished I can stop the container and export it to an image “bifrost-postinstall”.

e. This can either be used directly (fat container) or as the base image for other containers that run each of the ironic services (thin containers)

  3. I can skip the kolla build system entirely and create a script/playbook that will build the bifrost container similar to option 2.

  4. Make a supervisord set of init scripts and make the docker file do what it was intended – install the files. This is kind of a mashup of your 1-3 ideas. Good thinking :)

While option 1 would fully use the kolla build system, it is my least favorite as it is both hacky and complicated to make work.
Docker really was not designed to run systemd as part of docker build.

For options 2 and 3 I can provide a single playbook/script that will fully automate the build, but the real question I have
is: should I use the kolla build system to make the base image or not?

If anyone else has suggestions on how I can progress please let me know, but currently I am leaning towards option 2.

If you have questions about my suggestion to use supervisord, hit me up on IRC. Ideally we would also contribute these init scripts back into bifrost code base assuming they want them, which I think they would. Nobody will run systemd in a container, and we all have an interest in seeing BiFrost as the standard bare metal deployment model inside or outside of containers.

Regards
-steve

The only other option I see would be to not use a container and either install bifrost on the host or in a vm.
GROAN – one advantage containers provide us is not mucking up the host OS with a bajillion dependencies. I'd like to keep that part of Kolla intact :)

Right - don't install it on the host, but what's the problem with running it in a VM?

FWIW, I already run Bifrost quite successfully in a VM in each of my environments.

There isn't a super specific problem with running it in a VM other than Kolla is about containers, not VMs. OpenStack can obviously be run in a VM – our major reason for wanting containers is upgradability, which VMs don't offer atomically.

That said, we could run in a VM initially and over time port to run in a container. What we are after long term is a container-based approach to bifrost in upstream bifrost, not replicating or duplicating a bunch of work.

I believe Sean's approach of splitting out the 3 separate steps makes logical sense (to me) in the sense that the one major installation step is broken into the separate build & deploy steps that Kolla uses.

Hope that helps

Regards
-steve

--Deva



OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
responded May 10, 2016 by Mooney,_Sean_K (3,580 points)   3 9
...