
[openstack-dev] [release][infra][puppet][stable] Re: [Release-job-failures] Release of openstack/puppet-nova failed

0 votes

Excerpts from jenkins's message of 2017-05-22 10:49:09 +0000:

Build failed.

The most recent puppet-nova release (newton 9.5.1) failed because
puppet isn't installed on the tarball building node. I know that
node configurations just changed recently to drop puppet, but I
don't know what needs to be done to fix the issue for this particular
job. It does seem to be running bindep, so maybe we just need to
include puppet there? I could use some advice & help.

Doug


asked May 22, 2017 in openstack-dev by Doug_Hellmann

8 Responses

0 votes

On Mon, May 22, 2017 at 10:53:32AM -0400, Doug Hellmann wrote:
Excerpts from jenkins's message of 2017-05-22 10:49:09 +0000:

Build failed.

The most recent puppet-nova release (newton 9.5.1) failed because
puppet isn't installed on the tarball building node. I know that
node configurations just changed recently to drop puppet, but I
don't know what needs to be done to fix the issue for this particular
job. It does seem to be running bindep, so maybe we just need to
include puppet there? I could use some advice & help.

We need to sync 461970[1] across all modules. I've been meaning to do this, but it will
result in some gerrit spam. If a puppet core already has it set up, maybe they
could do it.

I was going to bring the puppet proposal patch[2] back online to avoid manually
doing this.

[1] https://review.openstack.org/#/c/461970/
[2] https://review.openstack.org/#/c/211744/


responded May 22, 2017 by pabelanger_at_redhat
0 votes

On Mon, May 22, 2017 at 8:53 AM, Doug Hellmann doug@doughellmann.com wrote:
Excerpts from jenkins's message of 2017-05-22 10:49:09 +0000:

Build failed.

The most recent puppet-nova release (newton 9.5.1) failed because
puppet isn't installed on the tarball building node. I know that
node configurations just changed recently to drop puppet, but I
don't know what needs to be done to fix the issue for this particular
job. It does seem to be running bindep, so maybe we just need to
include puppet there? I could use some advice & help.

We ran into this for the puppet-module-build check job, so I created a
puppet-agent-install builder. Perhaps the job needs that added to it:

https://review.openstack.org/#/c/465156/
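
For illustration, a minimal sketch of what such a JJB builder macro could look
like, assuming Ubuntu/Debian nodes and the upstream puppetlabs PC1 package
repository; the shell body here is an assumption for the sketch, not
necessarily what the change above implements:

# Hypothetical JJB builder macro; package source and commands are assumptions.
- builder:
    name: puppet-agent-install
    builders:
      - shell: |
          #!/bin/bash -xe
          # Install the upstream puppet-agent package instead of relying on
          # a (possibly older) distro-provided puppet.
          wget https://apt.puppetlabs.com/puppetlabs-release-pc1-$(lsb_release -cs).deb
          sudo dpkg -i puppetlabs-release-pc1-$(lsb_release -cs).deb
          sudo apt-get update
          sudo apt-get install -y puppet-agent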

Thanks,
-Alex

Doug


responded May 22, 2017 by aschultz_at_redhat.c
0 votes

On Mon, May 22, 2017 at 9:05 AM, Paul Belanger pabelanger@redhat.com wrote:
On Mon, May 22, 2017 at 10:53:32AM -0400, Doug Hellmann wrote:

Excerpts from jenkins's message of 2017-05-22 10:49:09 +0000:

Build failed.

The most recent puppet-nova release (newton 9.5.1) failed because
puppet isn't installed on the tarball building node. I know that
node configurations just changed recently to drop puppet, but I
don't know what needs to be done to fix the issue for this particular
job. It does seem to be running bindep, so maybe we just need to
include puppet there? I could use some advice & help.

We need to sync 461970[1] across all modules, I've been meaning to do this but will
result in some gerrit spam. If a puppet core already has it setup, maybe they
could do it.

We already did that, and it doesn't solve this problem because we
didn't add puppet to the bindep file. We specifically don't want to do
that, because we don't necessarily want the distro-provided puppet used
(it may be older than what is supported).

I was going to bring the puppet proposal patch[2] back online to avoid manually
doing this.

We probably should get that going, but we need to make sure we are
properly doing the modulesync config for all modules (hint: we
aren't). I ran into issues with the latest version of modulesync and
that also needs to be investigated.

Thanks,
-Alex

[1] https://review.openstack.org/#/c/461970/
[2] https://review.openstack.org/#/c/211744/


responded May 22, 2017 by aschultz_at_redhat.c
0 votes

On 2017-05-22 09:06:26 -0600 (-0600), Alex Schultz wrote:
[...]
We ran into this for the puppet-module-build check job so I created a
puppet-agent-install builder. Perhaps the job needs that added to it
[...]

Problem here being these repos share the common tarball jobs used
for generating python sdists, with a little custom logic baked into
run-tarball.sh[*] for detecting and adjusting when the repo is for a
Puppet module. I think this highlights the need to create custom
tarball jobs for Puppet modules, preferably by abstracting this
custom logic into a new JJB builder.

[*] run-tarball.sh#n17
--
Jeremy Stanley


responded May 22, 2017 by Jeremy_Stanley
0 votes

On Mon, May 22, 2017 at 10:34 AM, Jeremy Stanley fungi@yuggoth.org wrote:
On 2017-05-22 09:06:26 -0600 (-0600), Alex Schultz wrote:
[...]

We ran into this for the puppet-module-build check job so I created a
puppet-agent-install builder. Perhaps the job needs that added to it
[...]

Problem here being these repos share the common tarball jobs used
for generating python sdists, with a little custom logic baked into
run-tarball.sh[*] for detecting and adjusting when the repo is for a
Puppet module. I think this highlights the need to create custom
tarball jobs for Puppet modules, preferably by abstracting this
custom logic into a new JJB builder.

I assume you mean it would be a problem if we added this builder to the job
and it failed for some reason, thus impacting the python jobs? As far as
adding the builder to the job goes, that's not really a problem and
wouldn't change those jobs, since they don't reference the installed
puppet executable. The problem I have with putting this in the .sh is
that it becomes yet another place where we're doing this package
installation (we already do it in puppet openstack in
puppet-openstack-integration). I originally proposed the builder
because it could be reused whenever a job requires puppet to be available,
i.e. this case. I'd rather not redo what the builder does in a shell
script in the job; that seems to make this more complicated than it
needs to be, given that we have to manage it in the long term.

Thanks,
-Alex

[*] run-tarball.sh#n17
--
Jeremy Stanley


responded May 22, 2017 by aschultz_at_redhat.c
0 votes

On 2017-05-22 12:31:49 -0600 (-0600), Alex Schultz wrote:
On Mon, May 22, 2017 at 10:34 AM, Jeremy Stanley fungi@yuggoth.org wrote:

On 2017-05-22 09:06:26 -0600 (-0600), Alex Schultz wrote:
[...]

We ran into this for the puppet-module-build check job so I created a
puppet-agent-install builder. Perhaps the job needs that added to it
[...]

Problem here being these repos share the common tarball jobs used
for generating python sdists, with a little custom logic baked into
run-tarball.sh[*] for detecting and adjusting when the repo is for a
Puppet module. I think this highlights the need to create custom
tarball jobs for Puppet modules, preferably by abstracting this
custom logic into a new JJB builder.

I assume you mean a problem if we added this builder to the job
and it fails for some reason thus impacting the python jobs?

My concern is more that it increases complexity by further embedding
package selection and installation choices into that already complex
script. We'd (the Infra team) like to get more of the logic out of that
random pile of shell scripts and directly into job definitions
instead. For one thing, those scripts are only updated when we
regenerate our nodepool images (at best once a day), which leads to
significant job inconsistencies if we have image upload failures in
some providers but not others. In contrast, job configurations are
updated nearly instantly (and can even be self-tested in many cases
once we're on Zuul v3).

As far as adding to the builder to the job that's not really a
problem and wouldn't change those jobs as they don't reference the
installed puppet executable.

It does risk further destabilizing the generic tarball jobs by
introducing more outside dependencies which will only be used by a
scant handful of the projects running them.

The problem I have with putting this in the .sh is that it becomes
yet another place where we're doing this package installation (we
already do it in puppet openstack in
puppet-openstack-integration). I originally proposed the builder
because it could be reused if a job requires puppet be available.
ie. this case. I'd rather not do what we do in the builder in a
shell script in the job and it seems like this is making it more
complicated than it needs to be when we have to manage this in the
long term.

Agreed. I'm saying a builder which installs an unnecessary Puppet
toolchain for the generic tarball jobs is not something we'd want,
but it would be pretty trivial to make puppet-specific tarball jobs
which do use that builder (which has the added benefit that
Puppet-specific logic can be moved out of run-tarballs.sh and into
your job configuration instead at that point).
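
As a purely illustrative sketch of such a puppet-specific tarball job (the
template and macro names below, like gerrit-git-prep and console-log, are
assumptions for the sketch, not the definitions that ended up in
project-config):

# Hypothetical job-template reusing the puppet-agent-install builder.
- job-template:
    name: '{name}-puppet-tarball'
    builders:
      - gerrit-git-prep        # assumed checkout/prep macro
      - puppet-agent-install   # builder discussed earlier in the thread
      - shell: |
          #!/bin/bash -xe
          # Build the module tarball directly, rather than relying on the
          # Puppet-detection logic baked into the shared run-tarballs.sh.
          puppet module build .
    publishers:
      - console-log            # assumed log-collection macro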
--
Jeremy Stanley



responded May 22, 2017 by Jeremy_Stanley
0 votes

Excerpts from Jeremy Stanley's message of 2017-05-22 19:16:34 +0000:

On 2017-05-22 12:31:49 -0600 (-0600), Alex Schultz wrote:

On Mon, May 22, 2017 at 10:34 AM, Jeremy Stanley fungi@yuggoth.org wrote:

On 2017-05-22 09:06:26 -0600 (-0600), Alex Schultz wrote:
[...]

We ran into this for the puppet-module-build check job so I created a
puppet-agent-install builder. Perhaps the job needs that added to it
[...]

Problem here being these repos share the common tarball jobs used
for generating python sdists, with a little custom logic baked into
run-tarball.sh[*] for detecting and adjusting when the repo is for a
Puppet module. I think this highlights the need to create custom
tarball jobs for Puppet modules, preferably by abstracting this
custom logic into a new JJB builder.

I assume you mean a problem if we added this builder to the job
and it fails for some reason thus impacting the python jobs?

My concern is more that it increases complexity by further embedding
package selection and installation choices into that already complex
script. We'd (Infra team) like to get more of the logic out of that
random pile of shell scripts and directly into job definitions
instead. For one thing, those scripts are only updated when we
regenerate our nodepool images (at best once a day) and leads to
significant job inconsistencies if we have image upload failures in
some providers but not others. In contrast, job configurations are
updated nearly instantly (and can even be self-tested in many cases
once we're on Zuul v3).

As far as adding to the builder to the job that's not really a
problem and wouldn't change those jobs as they don't reference the
installed puppet executable.

It does risk further destabilizing the generic tarball jobs by
introducing more outside dependencies which will only be used by a
scant handful of the projects running them.

The problem I have with putting this in the .sh is that it becomes
yet another place where we're doing this package installation (we
already do it in puppet openstack in
puppet-openstack-integration). I originally proposed the builder
because it could be reused if a job requires puppet be available.
ie. this case. I'd rather not do what we do in the builder in a
shell script in the job and it seems like this is making it more
complicated than it needs to be when we have to manage this in the
long term.

Agreed, I'm saying a builder which installs an unnecessary Puppet
toolchain for the generic tarball jobs is not something we'd want,
but it would be pretty trivial to make puppet-specific tarball jobs
which do use that builder (and has the added benefit that
Puppet-specific logic can be moved out of run-tarballs.sh and into
your job configuration instead at that point).

That approach makes sense.

When the new job template is set up, let me know so I can add it to the
release repo validation as a known way to release things.

Doug


responded May 22, 2017 by Doug_Hellmann
0 votes

On Mon, May 22, 2017 at 3:43 PM, Doug Hellmann doug@doughellmann.com wrote:
Excerpts from Jeremy Stanley's message of 2017-05-22 19:16:34 +0000:

On 2017-05-22 12:31:49 -0600 (-0600), Alex Schultz wrote:

On Mon, May 22, 2017 at 10:34 AM, Jeremy Stanley fungi@yuggoth.org wrote:

On 2017-05-22 09:06:26 -0600 (-0600), Alex Schultz wrote:
[...]

We ran into this for the puppet-module-build check job so I created a
puppet-agent-install builder. Perhaps the job needs that added to it
[...]

Problem here being these repos share the common tarball jobs used
for generating python sdists, with a little custom logic baked into
run-tarball.sh[*] for detecting and adjusting when the repo is for a
Puppet module. I think this highlights the need to create custom
tarball jobs for Puppet modules, preferably by abstracting this
custom logic into a new JJB builder.

I assume you mean a problem if we added this builder to the job
and it fails for some reason thus impacting the python jobs?

My concern is more that it increases complexity by further embedding
package selection and installation choices into that already complex
script. We'd (Infra team) like to get more of the logic out of that
random pile of shell scripts and directly into job definitions
instead. For one thing, those scripts are only updated when we
regenerate our nodepool images (at best once a day) and leads to
significant job inconsistencies if we have image upload failures in
some providers but not others. In contrast, job configurations are
updated nearly instantly (and can even be self-tested in many cases
once we're on Zuul v3).

As far as adding to the builder to the job that's not really a
problem and wouldn't change those jobs as they don't reference the
installed puppet executable.

It does risk further destabilizing the generic tarball jobs by
introducing more outside dependencies which will only be used by a
scant handful of the projects running them.

The problem I have with putting this in the .sh is that it becomes
yet another place where we're doing this package installation (we
already do it in puppet openstack in
puppet-openstack-integration). I originally proposed the builder
because it could be reused if a job requires puppet be available.
ie. this case. I'd rather not do what we do in the builder in a
shell script in the job and it seems like this is making it more
complicated than it needs to be when we have to manage this in the
long term.

Agreed, I'm saying a builder which installs an unnecessary Puppet
toolchain for the generic tarball jobs is not something we'd want,
but it would be pretty trivial to make puppet-specific tarball jobs
which do use that builder (and has the added benefit that
Puppet-specific logic can be moved out of run-tarballs.sh and into
your job configuration instead at that point).

That approach makes sense.

When the new job template is set up, let me know so I can add it to the
release repo validation as a known way to release things.

https://review.openstack.org/467294

Any feedback is welcome,

Thanks!

Doug



--
Emilien Macchi


responded May 23, 2017 by emilien_at_redhat.co
...