
[openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

0 votes

Last week at the Forum we had a couple of discussions about
collaboration between the various teams building or consuming
container images. One topic that came up was deciding how to publish
images from the various teams to docker hub or other container
registries. While the technical bits seem easy enough to work out,
there is still the question of precedence and whether it's a good
idea to do so at all.

In the past, we have refrained from publishing binary packages in
other formats such as debs and RPMs. (We did publish debs way back
in the beginning, for testing IIRC, but switched away from them to
sdists to be more inclusive.) Since then, we have said it is the
responsibility of downstream consumers to build production packages,
either as distributors or as a deployer that is rolling their own.
We do package sdists for python libraries, push some JavaScript to
the NPM registries, and have tarballs of those and a bunch of other
artifacts that we build out of our release tools. But none of those
is declared as "production ready," and so the community is not
sending the signal that we are responsible for maintaining them in
the context of production deployments, beyond continuing to produce
new releases when there are bugs.

Container images introduce some extra complexity, over the basic
operating system style packages mentioned above. Due to the way
they are constructed, they are likely to include content we don't
produce ourselves (either in the form of base layers or via including
build tools or other things needed when assembling the full image).
That extra content means there would need to be more tracking of
upstream issues (bugs, CVEs, etc.) to ensure the images are updated
as needed.

Given our security and stable team resources, I'm not entirely
comfortable with us publishing these images, and giving the appearance
that the community as a whole is committing to supporting them.
I don't have any objection to someone from the community publishing
them, as long as it is made clear who the actual owner is. I'm not
sure how easy it is to make that distinction if we publish them
through infra jobs, so that may mean some outside process. I also
don't think there would be any problem in building images on our
infrastructure for our own gate jobs, as long as they are just for
testing and we don't push those to any other registries.

I'm raising the issue here to get some more input into how to
proceed. Do other people think this concern is overblown? Can we
mitigate the risk by communicating through metadata for the images?
Should we stick to publishing build instructions (Dockerfiles, or
whatever) instead of binary images? Are there other options I haven't
mentioned?
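As a concrete illustration of the metadata option: ownership and support status could be baked into the images as labels at build time. A minimal sketch; the label keys and values here are purely hypothetical, not an agreed convention:

```dockerfile
# Illustrative only: base image, label keys, and values are hypothetical,
# not an agreed OpenStack convention.
FROM centos:7
LABEL maintainer="Kolla team <openstack-dev@lists.openstack.org>" \
      org.openstack.owner="kolla" \
      org.openstack.support-status="community-built, as-is; not an official production artifact" \
      org.openstack.build-date="2017-05-18"
```

Anything inspecting the image could then surface who actually owns it and what support is (and is not) implied.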

Doug


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
asked May 18, 2017 in openstack-dev by Doug_Hellmann (87,520 points)   3 4 11

83 Responses

0 votes

For starters, I want to emphasize that a fresh set of Docker Hub images
was one of the most requested Kolla features at this summit, and a few
other features, such as full release upgrade gates, more or less require
a readily available Docker registry.

This will have numerous benefits for users who don't have the resources
to stand up a sophisticated CI/staging environment, which, I'm willing
to bet, is still quite a significant user base. If we do it correctly
(and we will do it correctly), the images we push will go through the
series of gates we have in Kolla (and we will add more). So when you
pull an image, you know it was successfully deployed within the
scenarios available in our gates; maybe we even add upgrade gates and
increase scenario coverage later. That is a huge benefit for actual users.

On 15 May 2017 at 10:34, Doug Hellmann doug@doughellmann.com wrote:
Last week at the Forum we had a couple of discussions about
collaboration between the various teams building or consuming
container images. One topic that came up was deciding how to publish
images from the various teams to docker hub or other container
registries. While the technical bits seem easy enough to work out,
there is still the question of precedence and whether it's a good
idea to do so at all.

In the past, we have refrained from publishing binary packages in
other formats such as debs and RPMs. (We did publish debs way back
in the beginning, for testing IIRC, but switched away from them to
sdists to be more inclusive.) Since then, we have said it is the
responsibility of downstream consumers to build production packages,
either as distributors or as a deployer that is rolling their own.
We do package sdists for python libraries, push some JavaScript to
the NPM registries, and have tarballs of those and a bunch of other
artifacts that we build out of our release tools. But none of those
is declared as "production ready," and so the community is not
sending the signal that we are responsible for maintaining them in
the context of production deployments, beyond continuing to produce
new releases when there are bugs.

So for us that would mean something really hacky and bad. We are a
community-driven, not a company-driven, project. We don't have Red Hat
or Canonical teams behind us (we have contributors, but that's
different).

Container images introduce some extra complexity, over the basic
operating system style packages mentioned above. Due to the way
they are constructed, they are likely to include content we don't
produce ourselves (either in the form of base layers or via including
build tools or other things needed when assembling the full image).
That extra content means there would need to be more tracking of
upstream issues (bugs, CVEs, etc.) to ensure the images are updated
as needed.

We can address this by building daily, which was in fact the plan. If we
build every day, you have packages that are at most 24 hours old; CVEs
and the like in non-OpenStack packages are still maintained by the
distro maintainers.
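To illustrate the daily-rebuild idea (a sketch, not Kolla's actual build code): a daily build only picks up the distro's latest security fixes if the package-update layer is not served from Docker's build cache, e.g. by passing a date-stamped build argument:

```dockerfile
# Hypothetical sketch: pass --build-arg BUILD_DATE=$(date +%F) on each
# daily build so this layer's cache is invalidated and "yum update"
# actually pulls the distro's latest CVE fixes.
FROM centos:7
ARG BUILD_DATE=unset
RUN echo "build ${BUILD_DATE}" && yum -y update && yum clean all
```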

Given our security and stable team resources, I'm not entirely
comfortable with us publishing these images, and giving the appearance
that the community as a whole is committing to supporting them.
I don't have any objection to someone from the community publishing
them, as long as it is made clear who the actual owner is. I'm not
sure how easy it is to make that distinction if we publish them
through infra jobs, so that may mean some outside process. I also
don't think there would be any problem in building images on our
infrastructure for our own gate jobs, as long as they are just for
testing and we don't push those to any other registries.

Today we use the Kolla account for that, and I'm more than happy to keep
it this way. We license our code under the ASL, which gives no
guarantees. The containers will be licensed this way too, so they're
available as-is, and "production readiness" should be decided by
everyone who runs them. That being said, what we can promise is that our
containers passed through more or less rigorous gates, and that's more
than most packages or self-built containers ever do. I think that value
would be appreciated by small to mid-sized companies that just want to
work with OpenStack and don't have the means to spare teams/resources for CI.

I'm raising the issue here to get some more input into how to
proceed. Do other people think this concern is overblown? Can we
mitigate the risk by communicating through metadata for the images?
Should we stick to publishing build instructions (Dockerfiles, or
whatever) instead of binary images? Are there other options I haven't
mentioned?

Today we do publish build instructions; that's what Kolla is. We also
publish built containers already; we just do it manually at release
time. If we decide to block this, I assume we should stop doing that
too? That would hurt users who rely on this piece of Kolla, and I'd hate
to hurt our users. :(

Doug


responded May 15, 2017 by Michał_Jastrzębski (9,220 points)   1 5 6
0 votes

Sorry for the top post, Michal. Can you please clarify a couple of things:

1) Can folks install just one or two services for their specific scenario?
2) Can the container images from kolla be run on a bare docker daemon?
3) Can someone take the kolla container images from, say, dockerhub and
use them without the Kolla framework?

Thanks,
Dims

--
Davanum Srinivas :: https://twitter.com/dims


responded May 15, 2017 by Davanum_Srinivas (35,920 points)   2 4 8
0 votes

On 05/15/2017 01:52 PM, Michał Jastrzębski wrote:
For starters, I want to emphasize that a fresh set of Docker Hub images
was one of the most requested Kolla features at this summit, and a few
other features, such as full release upgrade gates, more or less require
a readily available Docker registry.

This will have numerous benefits for users who don't have the resources
to stand up a sophisticated CI/staging environment, which, I'm willing
to bet, is still quite a significant user base. If we do it correctly
(and we will do it correctly), the images we push will go through the
series of gates we have in Kolla (and we will add more). So when you
pull an image, you know it was successfully deployed within the
scenarios available in our gates; maybe we even add upgrade gates and
increase scenario coverage later. That is a huge benefit for actual users.

That concerns me quite a bit. Given the nature of the patch story on
containers (which is a rebuild), I really feel like users should have
their own build/CI pipeline locally to be deploying this way. Making
that easy for them to do is great, but skipping that required local
infrastructure puts them in a bad position should something go wrong.

I do get that many folks want that, but I think it builds in a set of
expectations that it's not possible to actually meet from an upstream
perspective.

On 15 May 2017 at 10:34, Doug Hellmann doug@doughellmann.com wrote:

Last week at the Forum we had a couple of discussions about
collaboration between the various teams building or consuming
container images. One topic that came up was deciding how to publish
images from the various teams to docker hub or other container
registries. While the technical bits seem easy enough to work out,
there is still the question of precedence and whether it's a good
idea to do so at all.

In the past, we have refrained from publishing binary packages in
other formats such as debs and RPMs. (We did publish debs way back
in the beginning, for testing IIRC, but switched away from them to
sdists to be more inclusive.) Since then, we have said it is the
responsibility of downstream consumers to build production packages,
either as distributors or as a deployer that is rolling their own.
We do package sdists for python libraries, push some JavaScript to
the NPM registries, and have tarballs of those and a bunch of other
artifacts that we build out of our release tools. But none of those
is declared as "production ready," and so the community is not
sending the signal that we are responsible for maintaining them in
the context of production deployments, beyond continuing to produce
new releases when there are bugs.

So for us that would mean something really hacky and bad. We are a
community-driven, not a company-driven, project. We don't have Red Hat
or Canonical teams behind us (we have contributors, but that's
different).

Container images introduce some extra complexity, over the basic
operating system style packages mentioned above. Due to the way
they are constructed, they are likely to include content we don't
produce ourselves (either in the form of base layers or via including
build tools or other things needed when assembling the full image).
That extra content means there would need to be more tracking of
upstream issues (bugs, CVEs, etc.) to ensure the images are updated
as needed.

We can address this by building daily, which was in fact the plan. If we
build every day, you have packages that are at most 24 hours old; CVEs
and the like in non-OpenStack packages are still maintained by the
distro maintainers.

There have been many instances where 24 hours wasn't good enough, as
embargoes end up pretty weird in terms of when things hit mirrors. It
also assumes that when a CVE hits, no other part of the gate or
infrastructure is wedged in a way that makes it impossible to build new
packages. Or the capacity demands happen during a feature freeze, with
tons of delay in there. There are many single points of failure in this
process.

Given our security and stable team resources, I'm not entirely
comfortable with us publishing these images, and giving the appearance
that the community as a whole is committing to supporting them.
I don't have any objection to someone from the community publishing
them, as long as it is made clear who the actual owner is. I'm not
sure how easy it is to make that distinction if we publish them
through infra jobs, so that may mean some outside process. I also
don't think there would be any problem in building images on our
infrastructure for our own gate jobs, as long as they are just for
testing and we don't push those to any other registries.

Today we use the Kolla account for that, and I'm more than happy to keep
it this way. We license our code under the ASL, which gives no
guarantees. The containers will be licensed this way too, so they're
available as-is, and "production readiness" should be decided by
everyone who runs them. That being said, what we can promise is that our
containers passed through more or less rigorous gates, and that's more
than most packages or self-built containers ever do. I think that value
would be appreciated by small to mid-sized companies that just want to
work with OpenStack and don't have the means to spare teams/resources for CI.

Our upstream gating is pretty limited in the scale of the environment; I
really think we're doing people a disservice by encouraging this kind of
OpenStack deployment without a local CI pipeline. Local environments are
so varied that, without a local CI mechanism, I think this is going to
end in lots of tears.

I'm raising the issue here to get some more input into how to
proceed. Do other people think this concern is overblown? Can we
mitigate the risk by communicating through metadata for the images?
Should we stick to publishing build instructions (Dockerfiles, or
whatever) instead of binary images? Are there other options I haven't
mentioned?

Today we do publish build instructions; that's what Kolla is. We also
publish built containers already; we just do it manually at release
time. If we decide to block this, I assume we should stop doing that
too? That would hurt users who rely on this piece of Kolla, and I'd hate
to hurt our users. :(

Having been part of the PostgreSQL deprecation discussions, where it was
clear that far more was read into various support statements than was
true, and where some very large-scale decisions got made on bad
information, I think there are worse things than not doing things for
our users: giving them the wrong set of expectations, and having them
build out assuming more support than they really have.

I'm definitely with Doug: publishing actual Docker images feels like the
wrong direction.

-Sean

--
Sean Dague
http://dague.net


responded May 15, 2017 by Sean_Dague (66,200 points)   4 8 14
0 votes

On 15 May 2017 at 11:19, Davanum Srinivas davanum@gmail.com wrote:
Sorry for the top post, Michal, Can you please clarify a couple of things:

1) Can folks install just one or two services for their specific scenario?

Yes. That's more of a kolla-ansible feature and requires a little bit of
Ansible know-how, but it's entirely possible. Kolla-kubernetes is built
to allow maximum flexibility in that space.

2) Can the container images from kolla be run on bare docker daemon?

Yes, but users need to either override our default CMD (kolla_start) or
provide the environment variables it requires; not a huge deal.
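For example (a command-line sketch; the image name, tag, and alternate command are illustrative, not a tested recipe):

```shell
# Run a Kolla image on a bare Docker daemon, overriding the default
# kolla_start CMD with the service binary directly (names illustrative).
docker run -d --name mariadb \
  -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS \
  kolla/centos-binary-mariadb:4.0.0 \
  mysqld_safe
```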

3) Can someone take the kolla container images from say dockerhub and
use it without the Kolla framework?

Yes; there is no such thing as a Kolla framework, really. Our images
follow a stable ABI, and they can be deployed by any deployment
mechanism that follows it. We have several users who wrote their own
deployment mechanism from scratch.

Containers are just blobs with binaries in them. The little things we
add are the kolla_start script, to support our config file management,
and some custom startup scripts for things like MariaDB to help with
bootstrapping; both are entirely optional.

responded May 15, 2017 by Michał_Jastrzębski (9,220 points)   1 5 6
0 votes

On 15 May 2017 at 11:47, Sean Dague sean@dague.net wrote:
On 05/15/2017 01:52 PM, Michał Jastrzębski wrote:

For starters, I want to emphasize that a fresh set of Docker Hub images
was one of the most requested Kolla features at this summit, and a few
other features, such as full release upgrade gates, more or less require
a readily available Docker registry.

This will have numerous benefits for users who don't have the resources
to stand up a sophisticated CI/staging environment, which, I'm willing
to bet, is still quite a significant user base. If we do it correctly
(and we will do it correctly), the images we push will go through the
series of gates we have in Kolla (and we will add more). So when you
pull an image, you know it was successfully deployed within the
scenarios available in our gates; maybe we even add upgrade gates and
increase scenario coverage later. That is a huge benefit for actual users.

That concerns me quite a bit. Given the nature of the patch story on
containers (which is a rebuild), I really feel like users should have
their own build/CI pipeline locally to be deploying this way. Making
that easy for them to do is great, but skipping that required local
infrastructure puts them in a bad position should something go wrong.

I totally agree they should. Even if they do, it would still be additive
to the gating we run, so it's even better.

I do get that many folks want that, but I think it builds in a set of
expectations that it's not possible to actually meet from an upstream
perspective.

On 15 May 2017 at 10:34, Doug Hellmann doug@doughellmann.com wrote:

Last week at the Forum we had a couple of discussions about
collaboration between the various teams building or consuming
container images. One topic that came up was deciding how to publish
images from the various teams to docker hub or other container
registries. While the technical bits seem easy enough to work out,
there is still the question of precedence and whether it's a good
idea to do so at all.

In the past, we have refrained from publishing binary packages in
other formats such as debs and RPMs. (We did publish debs way back
in the beginning, for testing IIRC, but switched away from them to
sdists to be more inclusive.) Since then, we have said it is the
responsibility of downstream consumers to build production packages,
either as distributors or as a deployer that is rolling their own.
We do package sdists for python libraries, push some JavaScript to
the NPM registries, and have tarballs of those and a bunch of other
artifacts that we build out of our release tools. But none of those
is declared as "production ready," and so the community is not
sending the signal that we are responsible for maintaining them in
the context of production deployments, beyond continuing to produce
new releases when there are bugs.

So for us that would mean something really hacky and bad. We are a
community-driven, not a company-driven, project. We don't have Red Hat
or Canonical teams behind us (we have contributors, but that's
different).

Container images introduce some extra complexity, over the basic
operating system style packages mentioned above. Due to the way
they are constructed, they are likely to include content we don't
produce ourselves (either in the form of base layers or via including
build tools or other things needed when assembling the full image).
That extra content means there would need to be more tracking of
upstream issues (bugs, CVEs, etc.) to ensure the images are updated
as needed.

We can do this by building daily, which was the plan in fact. If we
build every day, you have packages that are at most 24 hours old, and
CVEs and the like in non-OpenStack packages are still maintained by the
distro maintainers.
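The daily-build idea above could be sketched as a simple scheduled job. This is only a hypothetical illustration, not the actual Kolla pipeline: the `run-deploy-gates` command, the image name, and the tagging scheme are all placeholders.

```shell
#!/bin/sh
# Hypothetical daily rebuild-and-push job; names, flags, and the gating
# step are illustrative placeholders, not real Kolla/infra tooling.
set -eu

NAMESPACE="kolla"               # dockerhub account (assumption)
TAG="$(date -u +%Y%m%d)"        # date-stamped tag, so older images stay pullable

# Rebuild the images from fresh base layers so updated distro packages
# (including CVE fixes shipped by the distro maintainers) get picked up.
kolla-build --namespace "$NAMESPACE" --tag "$TAG"

# Push only if the freshly built images survive the deployment gates;
# "run-deploy-gates" stands in for whatever CI gating is wired up.
if run-deploy-gates "$NAMESPACE" "$TAG"; then
    docker push "$NAMESPACE/centos-binary-nova-api:$TAG"
fi
```

The date-stamped tag matters for the embargo concern raised below: if one day's build is broken or wedged, consumers can still pin to the last known-good day.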

There have been many instances where 24 hours wasn't good enough, as
embargoes end up pretty weird in terms of when things hit mirrors. It
also assumes that when a CVE hits, no other part of the gate or
infrastructure is wedged in a way that makes it impossible to build new
packages. Or the capacity demands happen during a feature freeze, with
tons of delay in there. There are many single points of failure in this
process.

Given our security and stable team resources, I'm not entirely
comfortable with us publishing these images, and giving the appearance
that the community as a whole is committing to supporting them.
I don't have any objection to someone from the community publishing
them, as long as it is made clear who the actual owner is. I'm not
sure how easy it is to make that distinction if we publish them
through infra jobs, so that may mean some outside process. I also
don't think there would be any problem in building images on our
infrastructure for our own gate jobs, as long as they are just for
testing and we don't push those to any other registries.

Today we use the Kolla account for that, and I'm more than happy to keep
it this way. We license our code under the ASL, which gives no
guarantees. Containers will be licensed this way too, so they're
available as-is, and "production readiness" should be decided by
everyone who runs them. That being said, what we can promise is that our
containers passed through more or less rigorous gates, and that's more
than most packages or self-built containers ever do. I think that value
would be appreciated by small to mid-size companies that just want to
work with OpenStack and don't have the means to spare teams/resources for CI.

Our upstream gating is pretty limited in the scale of the environment; I
really think we're doing people a disservice by encouraging this kind of
OpenStack deployment without a local CI pipeline. Local environments are
so varied, and without a local CI mechanism I think this is going to end
in lots of tears.

But it's better to have some CI than no CI at all...

I'm raising the issue here to get some more input into how to
proceed. Do other people think this concern is overblown? Can we
mitigate the risk by communicating through metadata for the images?
Should we stick to publishing build instructions (Dockerfiles, or
whatever) instead of binary images? Are there other options I haven't
mentioned?

Today we do publish build instructions; that's what Kolla is. We also
publish built containers already, we just do it manually on release
today. If we decide to block it, I assume we should stop doing that
too? That would hurt users who use this piece of Kolla, and I'd hate
to hurt our users :(

Having been part of the postgresql deprecation discussions, where it was
clear far more was read into various support statements than was true,
and some very large-scale decisions got made with bad information, there
are worse things than not doing things for our users: giving them the
wrong set of expectations, and having them build out assuming more
support than they really have.

I'm definitely with Doug, publishing actual docker images feels like the
wrong direction.

All of these issues are documentation/messaging issues from my
perspective. I think if we state clearly what the images are (gate-tested
and relatively fresh, under the ASL license, so no guarantees) and what
they aren't (certified to work for you), that would solve these
issues. We can use the Kolla dockerhub account, the Kolla license, and
all that to show exactly where our "support" ends.
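One way to make that messaging travel with the images themselves, rather than only living in the dockerhub description, would be Docker image labels. A hypothetical sketch follows; the label keys and wording are invented for illustration, not an agreed convention:

```shell
# Hypothetical sketch: bake the support disclaimer into the image via
# LABELs so `docker inspect` surfaces it to anyone who pulls the image.
# The org.openstack.* label keys are made up for this example.
cat > Dockerfile <<'EOF'
FROM centos:7
LABEL maintainer="Kolla team <openstack-dev@lists.openstack.org>" \
      org.openstack.support="none: gate-tested snapshot, provided as-is" \
      org.openstack.license="ASL 2.0 (applies to OpenStack components only)"
EOF
docker build -t kolla-labels-demo .

# A consumer can read the disclaimer before deploying anything:
docker inspect --format '{{ index .Config.Labels "org.openstack.support" }}' \
    kolla-labels-demo
```

Labels are inherited by derived images, so the disclaimer would also survive into downstream rebuilds unless explicitly overwritten.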

    -Sean

--
Sean Dague
http://dague.net


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


responded May 15, 2017 by Michał Jastrzębski

Excerpts from Michał Jastrzębski's message of 2017-05-15 10:52:12 -0700:

For starters, I want to emphasize that a fresh set of dockerhub images
was one of the most requested features from Kolla at this summit, and a
few other features more or less require a readily available docker
registry. Features like full release upgrade gates.

This will have numerous benefits for users that don't have the resources
to put up a sophisticated CI/staging environment, which, I'm willing to
bet, is still quite a significant user base. If we do it correctly (and
we will do it correctly), the images we're going to push will go through
a series of gates which we have in Kolla (and will have more). So when
you pull an image, you know that it was successfully deployed within the
scenarios available in our gates; maybe we'll even add upgrade gating
and increase scenario coverage later? That is a huge benefit for actual users.

I have no doubt that consumers of the images would like us to keep
creating them. We had lots of discussions last week about resource
constraints and sustainable practices, though, and this strikes me
as an area where we're deviating from our history in a way that
will require more maintenance work upstream.

On 15 May 2017 at 10:34, Doug Hellmann doug@doughellmann.com wrote:

Last week at the Forum we had a couple of discussions about
collaboration between the various teams building or consuming
container images. One topic that came up was deciding how to publish
images from the various teams to docker hub or other container
registries. While the technical bits seem easy enough to work out,
there is still the question of precedence and whether it's a good
idea to do so at all.

In the past, we have refrained from publishing binary packages in
other formats such as debs and RPMs. (We did publish debs way back
in the beginning, for testing IIRC, but switched away from them to
sdists to be more inclusive.) Since then, we have said it is the
responsibility of downstream consumers to build production packages,
either as distributors or as a deployer that is rolling their own.
We do package sdists for python libraries, push some JavaScript to
the NPM registries, and have tarballs of those and a bunch of other
artifacts that we build out of our release tools. But none of those
is declared as "production ready," and so the community is not
sending the signal that we are responsible for maintaining them in
the context of production deployments, beyond continuing to produce
new releases when there are bugs.

So for us that would mean something really hacky and bad. We are a
community-driven, not a company-driven, project. We don't have Red Hat
or Canonical teams behind us (we have contributors, but that's
different).

Although I work at Red Hat, I want to make sure it's clear that my
objection is purely related to community concerns. For this
conversation, I'm wearing my upstream TC and Release team hats.

Container images introduce some extra complexity, over the basic
operating system style packages mentioned above. Due to the way
they are constructed, they are likely to include content we don't
produce ourselves (either in the form of base layers or via including
build tools or other things needed when assembling the full image).
That extra content means there would need to be more tracking of
upstream issues (bugs, CVEs, etc.) to ensure the images are updated
as needed.

We can do this by building daily, which was the plan in fact. If we
build every day, you have packages that are at most 24 hours old, and
CVEs and the like in non-OpenStack packages are still maintained by the
distro maintainers.

A daily build job introduces new questions about how big the images
are and how many of them we keep, but let's focus on whether the
change in policy is something we want to adopt before we consider
those questions.

Given our security and stable team resources, I'm not entirely
comfortable with us publishing these images, and giving the appearance
that the community as a whole is committing to supporting them.
I don't have any objection to someone from the community publishing
them, as long as it is made clear who the actual owner is. I'm not
sure how easy it is to make that distinction if we publish them
through infra jobs, so that may mean some outside process. I also
don't think there would be any problem in building images on our
infrastructure for our own gate jobs, as long as they are just for
testing and we don't push those to any other registries.

Today we use the Kolla account for that, and I'm more than happy to keep
it this way. We license our code under the ASL, which gives no
guarantees. Containers will be licensed this way too, so they're
available as-is, and "production readiness" should be decided by
everyone who runs them. That being said, what we can promise is that our
containers passed through more or less rigorous gates, and that's more
than most packages or self-built containers ever do. I think that value
would be appreciated by small to mid-size companies that just want to
work with OpenStack and don't have the means to spare teams/resources for CI.

The ASL would clearly apply to the source of our projects that we
put into the images. The images do contain other software, though,
and it's less clear to me how to interpret the support guarantees
for an image that contains a mix of projects with different licenses
and written by different communities. Or even if we can say that
the images are available under the ASL if some of the contents are
GPL.

Regardless, I'm not sure the copyright license is the correct
document to learn about the support guarantees. If I had an issue
with one of those images, I would be much more likely to try to
find the person or community responsible for publishing them than
the author of a given package contained in the image. If we're
asserting that consumers should not ask for support, why are we
even talking about publishing them?

I'm raising the issue here to get some more input into how to
proceed. Do other people think this concern is overblown? Can we
mitigate the risk by communicating through metadata for the images?
Should we stick to publishing build instructions (Dockerfiles, or
whatever) instead of binary images? Are there other options I haven't
mentioned?

Today we do publish build instructions; that's what Kolla is. We also
publish built containers already, we just do it manually on release
today. If we decide to block it, I assume we should stop doing that
too? That would hurt users who use this piece of Kolla, and I'd hate
to hurt our users :(

Well, that's the question. Today we have teams publishing those
images themselves, right? And the proposal is to have infra do it?
That change could be construed to imply that there is more of a
relationship with the images and the rest of the community (remember,
folks outside of the main community activities do not always make
the same distinctions we do about teams). So, before we go ahead
with that, I want to make sure that we all have a chance to discuss
the policy change and its implications.

Doug


responded May 15, 2017 by Doug Hellmann

On 15 May 2017 at 12:12, Doug Hellmann doug@doughellmann.com wrote:
Excerpts from Michał Jastrzębski's message of 2017-05-15 10:52:12 -0700:

For starters, I want to emphasize that a fresh set of dockerhub images
was one of the most requested features from Kolla at this summit, and a
few other features more or less require a readily available docker
registry. Features like full release upgrade gates.

This will have numerous benefits for users that don't have the resources
to put up a sophisticated CI/staging environment, which, I'm willing to
bet, is still quite a significant user base. If we do it correctly (and
we will do it correctly), the images we're going to push will go through
a series of gates which we have in Kolla (and will have more). So when
you pull an image, you know that it was successfully deployed within the
scenarios available in our gates; maybe we'll even add upgrade gating
and increase scenario coverage later? That is a huge benefit for actual users.

I have no doubt that consumers of the images would like us to keep
creating them. We had lots of discussions last week about resource
constraints and sustainable practices, though, and this strikes me
as an area where we're deviating from our history in a way that
will require more maintenance work upstream.

On 15 May 2017 at 10:34, Doug Hellmann doug@doughellmann.com wrote:

Last week at the Forum we had a couple of discussions about
collaboration between the various teams building or consuming
container images. One topic that came up was deciding how to publish
images from the various teams to docker hub or other container
registries. While the technical bits seem easy enough to work out,
there is still the question of precedence and whether it's a good
idea to do so at all.

In the past, we have refrained from publishing binary packages in
other formats such as debs and RPMs. (We did publish debs way back
in the beginning, for testing IIRC, but switched away from them to
sdists to be more inclusive.) Since then, we have said it is the
responsibility of downstream consumers to build production packages,
either as distributors or as a deployer that is rolling their own.
We do package sdists for python libraries, push some JavaScript to
the NPM registries, and have tarballs of those and a bunch of other
artifacts that we build out of our release tools. But none of those
is declared as "production ready," and so the community is not
sending the signal that we are responsible for maintaining them in
the context of production deployments, beyond continuing to produce
new releases when there are bugs.

So for us that would mean something really hacky and bad. We are a
community-driven, not a company-driven, project. We don't have Red Hat
or Canonical teams behind us (we have contributors, but that's
different).

Although I work at Red Hat, I want to make sure it's clear that my
objection is purely related to community concerns. For this
conversation, I'm wearing my upstream TC and Release team hats.

Container images introduce some extra complexity, over the basic
operating system style packages mentioned above. Due to the way
they are constructed, they are likely to include content we don't
produce ourselves (either in the form of base layers or via including
build tools or other things needed when assembling the full image).
That extra content means there would need to be more tracking of
upstream issues (bugs, CVEs, etc.) to ensure the images are updated
as needed.

We can do this by building daily, which was the plan in fact. If we
build every day, you have packages that are at most 24 hours old, and
CVEs and the like in non-OpenStack packages are still maintained by the
distro maintainers.

A daily build job introduces new questions about how big the images
are and how many of them we keep, but let's focus on whether the
change in policy is something we want to adopt before we consider
those questions.

We have already been doing this for the last few months at
http://tarballs.openstack.org/kolla/images/. The only difference is that
it's hacky, and we want something that's not hacky.

Let's set resource constraints aside for now, please, because from the
current standpoint all the resources we need are a single VM that's
going to run for an hour every day and some uplink megabytes (probably
less than a gig every day, as Docker will cache a lot). If that's an
issue, we can work on it and limit the number of pushes to just version
changes, something we were discussing anyway.

Given our security and stable team resources, I'm not entirely
comfortable with us publishing these images, and giving the appearance
that the community as a whole is committing to supporting them.
I don't have any objection to someone from the community publishing
them, as long as it is made clear who the actual owner is. I'm not
sure how easy it is to make that distinction if we publish them
through infra jobs, so that may mean some outside process. I also
don't think there would be any problem in building images on our
infrastructure for our own gate jobs, as long as they are just for
testing and we don't push those to any other registries.

Today we use the Kolla account for that, and I'm more than happy to keep
it this way. We license our code under the ASL, which gives no
guarantees. Containers will be licensed this way too, so they're
available as-is, and "production readiness" should be decided by
everyone who runs them. That being said, what we can promise is that our
containers passed through more or less rigorous gates, and that's more
than most packages or self-built containers ever do. I think that value
would be appreciated by small to mid-size companies that just want to
work with OpenStack and don't have the means to spare teams/resources for CI.

The ASL would clearly apply to the source of our projects that we
put into the images. The images do contain other software, though,
and it's less clear to me how to interpret the support guarantees
for an image that contains a mix of projects with different licenses
and written by different communities. Or even if we can say that
the images are available under the ASL if some of the contents are
GPL.

Regardless, I'm not sure the copyright license is the correct
document to learn about the support guarantees. If I had an issue
with one of those images, I would be much more likely to try to
find the person or community responsible for publishing them than
the author of a given package contained in the image. If we're
asserting that consumers should not ask for support, why are we
even talking about publishing them?

We have the Kolla community to approach. We will be responsible for
helping with these images and keeping them healthy. I personally
volunteer to be the contact point if we need contact points. Otherwise
we can just point to IRC, the bug list, the ML... all the channels we
use today to help.

I'm not a lawyer, but I assume there is a way to say that this carries
no formal guarantees and we ship it as-is?

I'm raising the issue here to get some more input into how to
proceed. Do other people think this concern is overblown? Can we
mitigate the risk by communicating through metadata for the images?
Should we stick to publishing build instructions (Dockerfiles, or
whatever) instead of binary images? Are there other options I haven't
mentioned?

Today we do publish build instructions; that's what Kolla is. We also
publish built containers already, we just do it manually on release
today. If we decide to block it, I assume we should stop doing that
too? That would hurt users who use this piece of Kolla, and I'd hate
to hurt our users :(

Well, that's the question. Today we have teams publishing those
images themselves, right? And the proposal is to have infra do it?
That change could be construed to imply that there is more of a
relationship with the images and the rest of the community (remember,
folks outside of the main community activities do not always make
the same distinctions we do about teams). So, before we go ahead
with that, I want to make sure that we all have a chance to discuss
the policy change and its implications.

Infra as in a VM running in infra, but the team publishing it can be the
Kolla team. I assume we'll be responsible for keeping these images healthy...

Doug


responded May 15, 2017 by Michał Jastrzębski

On 15/05/17 11:49 -0700, Michał Jastrzębski wrote:
On 15 May 2017 at 11:19, Davanum Srinivas davanum@gmail.com wrote:

Sorry for the top post, Michal, Can you please clarify a couple of things:

1) Can folks install just one or two services for their specific scenario?

Yes, that's more of a kolla-ansible feature and requires a little bit of
Ansible know-how, but it's entirely possible. Kolla-k8s is built to
allow maximum flexibility in that space.

2) Can the container images from kolla be run on bare docker daemon?

Yes, but they need to either override our default CMD (kolla_start) or
provide the ENVs required by it; not a huge deal.
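A rough sketch of both options on a bare docker daemon; the image tag, mount paths, and values here are illustrative guesses, so check the kolla docs for the actual contract:

```shell
# Hypothetical examples of running a Kolla image without kolla-ansible;
# image tag, paths, and env values are illustrative.

# Option 1: override the default CMD (kolla_start), pointing the service
# at config you mount in yourself.
docker run -d \
    -v /etc/keystone:/etc/keystone:ro \
    kolla/centos-binary-keystone:4.0.0 \
    keystone-wsgi-public

# Option 2: keep kolla_start but supply the environment it expects;
# KOLLA_CONFIG_STRATEGY tells it how to copy the mounted config into place.
docker run -d \
    -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS \
    -v /etc/kolla/keystone:/var/lib/kolla/config_files:ro \
    kolla/centos-binary-keystone:4.0.0
```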

3) Can someone take the kolla container images from say dockerhub and
use it without the Kolla framework?

Yes, there is no such thing as a Kolla framework, really. Our images
follow a stable ABI and can be deployed by any deploy mechanism that
follows it. We have several users who wrote their own deploy mechanisms
from scratch.

Containers are just blobs with binaries in them. The little things that
we add are the kolla_start script, to allow our config file management,
and some custom startup scripts for things like mariadb to help with
bootstrapping; both are entirely optional.

Just as a bonus example, TripleO is currently using kolla images. They
used to be vanilla and they aren't anymore, but only because TripleO
depends on puppet being in the image, which has nothing to do with kolla.

Flavio

--
@flaper87
Flavio Percoco



responded May 16, 2017 by Flavio Percoco

On 15/05/17 12:32 -0700, Michał Jastrzębski wrote:
On 15 May 2017 at 12:12, Doug Hellmann doug@doughellmann.com wrote:

[huge snip]

I'm raising the issue here to get some more input into how to
proceed. Do other people think this concern is overblown? Can we
mitigate the risk by communicating through metadata for the images?
Should we stick to publishing build instructions (Dockerfiles, or
whatever) instead of binary images? Are there other options I haven't
mentioned?

Today we do publish build instructions; that's what Kolla is. We also
publish built containers already, we just do it manually on release
today. If we decide to block it, I assume we should stop doing that
too? That would hurt users who use this piece of Kolla, and I'd hate
to hurt our users :(

Well, that's the question. Today we have teams publishing those
images themselves, right? And the proposal is to have infra do it?
That change could be construed to imply that there is more of a
relationship with the images and the rest of the community (remember,
folks outside of the main community activities do not always make
the same distinctions we do about teams). So, before we go ahead
with that, I want to make sure that we all have a chance to discuss
the policy change and its implications.

Infra as in a VM running in infra, but the team publishing it can be the
Kolla team. I assume we'll be responsible for keeping these images healthy...

I think this is the gist of the concern and I'd like us to focus on it.

As someone that used to consume these images from kolla's dockerhub account
directly, I can confirm they are useful. However, I do share Doug's concern and
the impact this may have on the community.

From a release perspective, as Doug mentioned, we've avoided releasing projects
in any kind of built form. This was also one of the concerns I raised when
working on the proposal to support other programming languages. The problem of
releasing built images goes beyond the infrastructure requirements. It's the
message and the guarantees implied with the built product itself that are the
concern here. And I tend to agree with Doug that this might be a problem for us
as a community. Unfortunately, putting your name, Michal, as contact point is
not enough. Kolla is not the only project producing container images and we need
to be consistent in the way we release these images.

Nothing prevents people from building their own images and uploading
them to dockerhub. Having this as part of OpenStack's pipeline is a problem.

Flavio

P.S: note this goes against my container(ish) interests but it's a
community-wide problem.

--
@flaper87
Flavio Percoco



responded May 16, 2017 by Flavio Percoco

Flavio,

Forgive the top post – outlook ftw.

I understand the concerns raised in this thread. It is unclear whether this thread reflects the feelings of two TC members, or whether enough TC members care deeply about this issue to permanently limit OpenStack big tent projects' ability to publish container images to various external artifact storage systems. The point of discussion I see effectively raised in this thread is "OpenStack infra will not push images to dockerhub".

I’d like clarification if this is a ruling from the TC, or simply an exploratory discussion.

If it is exploratory, it is prudent that OpenStack projects not be blocked by debate on this issue until the TC has made a ruling banning the creation of container images via OpenStack infrastructure.

Regards
-steve

-----Original Message-----
From: Flavio Percoco flavio@redhat.com
Reply-To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Date: Monday, May 15, 2017 at 7:00 PM
To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

On 15/05/17 12:32 -0700, Michał Jastrzębski wrote:

On 15 May 2017 at 12:12, Doug Hellmann doug@doughellmann.com wrote:

[huge snip]
>>> > I'm raising the issue here to get some more input into how to
>>> > proceed. Do other people think this concern is overblown? Can we
>>> > mitigate the risk by communicating through metadata for the images?
>>> > Should we stick to publishing build instructions (Dockerfiles, or
>>> > whatever) instead of binary images? Are there other options I haven't
>>> > mentioned?
>>>
>>> Today we do publish build instructions; that's what Kolla is. We also
>>> publish built containers already, we just do it manually on release
>>> today. If we decide to block it, I assume we should stop doing that
>>> too? That would hurt users who use this piece of Kolla, and I'd hate
>>> to hurt our users :(
>>
>> Well, that's the question. Today we have teams publishing those
>> images themselves, right? And the proposal is to have infra do it?
>> That change could be construed to imply that there is more of a
>> relationship with the images and the rest of the community (remember,
>> folks outside of the main community activities do not always make
>> the same distinctions we do about teams). So, before we go ahead
>> with that, I want to make sure that we all have a chance to discuss
>> the policy change and its implications.
>
>Infra as in a VM running in infra, but the team publishing it can be the Kolla
>team. I assume we'll be responsible for keeping these images healthy...
I think this is the gist of the concern and I'd like us to focus on it.

As someone that used to consume these images from kolla's dockerhub account
directly, I can confirm they are useful. However, I do share Doug's concern and
the impact this may have on the community.

From a release perspective, as Doug mentioned, we've avoided releasing projects
in any kind of built form. This was also one of the concerns I raised when
working on the proposal to support other programming languages. The problem of
releasing built images goes beyond the infrastructure requirements. It's the
message and the guarantees implied with the built product itself that are the
concern here. And I tend to agree with Doug that this might be a problem for us
as a community. Unfortunately, putting your name, Michal, as contact point is
not enough. Kolla is not the only project producing container images and we need
to be consistent in the way we release these images.

Nothing prevents people from building their own images and uploading them to
dockerhub. Having this as part of OpenStack's pipeline is a problem.

Flavio

P.S: note this goes against my container(ish) interests but it's a
community-wide problem.

-- 
@flaper87
Flavio Percoco


responded May 16, 2017 by Steven Dake (stdake)
...