
[openstack-dev] [all] Update on Zuul v3 Migration - and what to do about issues

0 votes

Hey everybody!

tl;dr - If you're having issues with your jobs, check the FAQ, this
email and follow-ups on this thread for mentions of them. If it's an
issue with your job and you can spot it (bad config), just submit a patch
with topic 'zuulv3'. If it's bigger/weirder/you don't know - we'd like
to ask that you send a follow-up email to this thread so that we can
ensure we've got them all and so that others can see it too.

** Zuul v3 Migration Status **

If you haven't noticed the Zuul v3 migration - awesome, that means it's
working perfectly for you.

If you have - sorry for the disruption. It turns out we have a REALLY
complicated array of job content you've all created. Hopefully the pain
of the moment will be offset by the ability for you to all take direct
ownership of your awesome content... so bear with us, your patience is
appreciated.

If you find yourself with some extra time on your hands while you wait
on something, you may find it helpful to read:

https://docs.openstack.org/infra/manual/zuulv3.html

We're adding content to it as issues arise. Unfortunately, one of the
issues is that the infra manual publication job stopped working.

While the infra manual publication is being fixed, we're collecting FAQ
content for it in an etherpad:

https://etherpad.openstack.org/p/zuulv3-migration-faq

If you have a job issue, check it first to see if we've got an entry for
it. Once manual publication is fixed, we'll update the etherpad to point
to the FAQ section of the manual.

** Global Issues **

There are a number of outstanding issues that are being worked on. As of
right now, there are a few major/systemic ones that we're looking into
that are worth noting:

  • Zuul Stalls

If you say to yourself "zuul doesn't seem to be doing anything, did I do
something wrong?", you're hitting an intermittent connection issue in the
backend plumbing that jeblair and Shrews are currently tracking down.

When it happens it's an across-the-board issue, so fixing it is our
number one priority.

  • Incorrect node type

We've got reports of things running on trusty that should be running on
xenial. The job definitions look correct, so this is also under
investigation.

  • Multinode jobs having POST_FAILURE

There is a bug in log collection: it tries to collect from all nodes,
while the old jobs were designed to collect only from the 'primary'.
Patches are up to fix this and it should be fixed soon (an illustrative
sketch of primary-only collection follows this list).

  • Branch Exclusions being ignored

This has been reported and its cause is currently unknown.
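
For illustration only - these are not the actual patches, and the path and
group names below are assumptions - a primary-only log collection play of
the kind the old jobs expected looks roughly like this:

    # Hypothetical sketch: pull logs from the 'primary' node only,
    # instead of from every node of a multinode job.
    - hosts: primary
      tasks:
        - name: Collect logs from the primary node
          synchronize:
            mode: pull
            src: "{{ ansible_user_dir }}/workspace/logs/"
            dest: "{{ zuul.executor.log_root }}/"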

Thank you all again for your patience! This is a giant rollout with a
bunch of changes in it, so we really do appreciate everyone's
understanding as we work through it all.

Monty


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
asked Oct 27, 2017 in openstack-dev by Monty_Taylor (22,780 points)   2 4 7

23 Responses

0 votes

Hi,

In Tricircle we use the "multinode" topology to set up a test environment
with three regions: "CentralRegion" and "RegionOne" in one node, and
"RegionTwo" in the other node. I noticed that the job definition has been
migrated to
openstack-zuul-jobs/blob/master/playbooks/legacy/tricircle-dsvm-multiregion/run.yaml,
but the job fails with the error "public endpoint for image service in
RegionTwo region not found", so I guess the "RegionTwo" node is not
running correctly. Since the log folder for the second node, "subnode-2/",
is missing in the job report, I also cannot figure out what is wrong with
the second node.

Any hints to debug this problem?


--
BR
Zhiyuan


responded Sep 30, 2017 by Vega_Cai (1,900 points)   2
0 votes

Hi Vega,

Please check the document. Some jobs were migrated with incorrect nodesets and have to be switched to a multinode nodeset in the job definition in openstack-zuul-jobs.
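
As a rough sketch - the job, parent and nodeset names here are illustrative,
not the exact definitions in openstack-zuul-jobs - the change amounts to
something like this in the job definition:

    # Hypothetical example: point a migrated legacy job at a two-node nodeset
    - job:
        name: legacy-example-dsvm-multinode
        parent: legacy-dsvm-base-multinode
        run: playbooks/legacy/example-dsvm-multinode/run.yaml
        nodeset: legacy-ubuntu-xenial-2-node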

Good luck
Mohammed

Sent from my iPhone

responded Sep 30, 2017 by Mohammed_Naser (3,860 points)   1 3
0 votes

Hi Mohammed,

Thanks for your suggestion. I have submitted a patch [1] to try to fix the
job configuration, and used [2], which depends on it, to test whether the
fix works.

[1] https://review.openstack.org/#/c/508824/
[2] https://review.openstack.org/#/c/508496/
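
The dependency between the two reviews is expressed with a Depends-On footer
in the commit message of [2], along the lines of:

    Depends-On: https://review.openstack.org/#/c/508824/

so that the jobs for [2] run with the proposed fix from [1] applied.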


--
BR
Zhiyuan


responded Oct 2, 2017 by Vega_Cai (1,900 points)   2
0 votes

On 2 Oct 2017, 21:02 +0700, Monty Taylor wrote:

  • Zuul Stalls

If you say to yourself "zuul doesn't seem to be doing anything, did I do
something wrong?", you're hitting an intermittent connection issue in the
backend plumbing that jeblair and Shrews are currently tracking down.

Hi Monty, does it make sense to recheck patches in this case?

Thanks

Renat Akhmerov
@Nokia


responded Oct 3, 2017 by renat.akhmerov_at_gm (5,640 points)   2 2 3
0 votes

Any update on where we stand on issues now? Because every single patch I
tried to land yesterday was killed by POST_FAILURE in various ways.
Including some really small stuff - https://review.openstack.org/#/c/324720/

That also includes the patch I'm told fixes some issues with zuul v3 in
the base devstack jobs - https://review.openstack.org/#/c/508344/3

It also appears that many of the skips stopped being a thing -
https://review.openstack.org/#/c/507527/ got a Tempest test run
attempted on it (though everything ended in Node failure).

Do we have a defined point on the calendar for getting the false
negatives back below the noise threshold, failing which a rollback is
implemented so that some of these issues can be addressed in parallel
without holding up community development?

-Sean


--
Sean Dague
http://dague.net


responded Oct 3, 2017 by Sean_Dague (66,200 points)   4 8 14
0 votes

On 10/3/17 5:17 AM, Sean Dague wrote:

Do we have a defined point on the calendar for getting the false
negatives back below the noise threshold, failing which a rollback is
implemented so that some of these issues can be addressed in parallel
without holding up community development?

Along the same lines: where is the best place to get help with zuul v3
issues? The neutron-lib gate is on the floor with multiple problems: two
broken gating jobs preventing patches from landing, and all periodic jobs
broken, preventing (safe) releases of neutron-lib. I've been adding the
issues to the etherpad [1] and trying to work through them solo, but
progress is very slow.

[1] https://etherpad.openstack.org/p/zuulv3-migration-faq


responded Oct 3, 2017 by Boden_Russell (1,780 points)   2 5
0 votes

We have patches stuck for hours – the only info is:
http://zuulv3.openstack.org/static/stream.html?uuid=128746a70c1843d7a94e887120ba381c&logfile=console.log
At the moment we are unable to do anything.



responded Oct 3, 2017 by Gary_Kotton (17,280 points)   2 4 8
0 votes

Any update on where we stand on issues now? Because every single patch I
tried to land yesterday was killed by POST_FAILURE in various ways.
Including some really small stuff - https://review.openstack.org/#/c/324720/

Yeah, Nova has only landed eight patches since Thursday. Most of those are test-only patches that run a subset of jobs, and a couple that landed in the wee hours when overall system load was low.

Do we have a defined point on the calendar for getting the false
negatives back below the noise threshold, failing which a rollback is
implemented so that some of these issues can be addressed in parallel
without holding up community development?

On Friday I was supportive of the decision to keep steaming forward instead of rolling back. Today, I’m a bit more concerned about light at the end of the tunnel. The infra folks have been hitting this hard for a long time, and for that I’m very appreciative. I too hope that we’re going to revisit mitigation strategies as we approach the weekiversary of being stuck.

--Dan


responded Oct 3, 2017 by Dan_Smith (9,860 points)   1 2 4
0 votes

Hello,

I'm trying to run jobs with Zuul v3 in my local environment. [1]
I prepared a sample job that runs a sleep command on zuul's host.
This job doesn't use Nodepool. [2]

As a result, Zuul v3 submitted "SUCCESS" to gerrit when a gerrit event
occurred, but error logs were generated and my job was not run.

I'd appreciate it if you could help me.
(Should I write this topic on the Zuul Storyboard?)

[1]I use Ubuntu 16.04 and zuul==2.5.3.dev1374.

[2]In my understanding, I can use Zuul v3 without Nodepool.
https://docs.openstack.org/infra/zuul/feature/zuulv3/user/config.html#attr-job.nodeset

If a job has an empty or no nodeset definition, it will still run and may be able to perform actions on the Zuul executor.

[Conditions]
* The target project is defined as a config-project in the tenant configuration file.
* I didn't write a nodeset in .zuul.yaml, because my job doesn't use Nodepool.
* I configured the playbook's hosts as "- hosts: all" or "- hosts: localhost".
(I referred to the project-config repository.)

[Error logs]
"no hosts matched" or "list index out of range" were generated.
Please see the attached file.


--
////////////////////////_/
Rikimaru Honjo
E-mail:honjo.rikimaru@po.ntt-tx.co.jp



responded Oct 4, 2017 by honjo.rikimaru_at_po (820 points)   1
0 votes

On Wed, Oct 04, 2017 at 02:39:17PM +0900, Rikimaru Honjo wrote:
Hello,

I'm trying to run jobs with Zuul v3 in my local environment. [1]
I prepared a sample job that runs a sleep command on zuul's host.
This job doesn't use Nodepool. [2]

As a result, Zuul v3 submitted "SUCCESS" to gerrit when a gerrit event
occurred, but error logs were generated and my job was not run.

I'd appreciate it if you could help me.
(Should I write this topic on the Zuul Storyboard?)

[1]I use Ubuntu 16.04 and zuul==2.5.3.dev1374.

[2]In my understanding, I can use Zuul v3 without Nodepool.
https://docs.openstack.org/infra/zuul/feature/zuulv3/user/config.html#attr-job.nodeset

If a job has an empty or no nodeset definition, it will still run and may be able to perform actions on the Zuul executor.

While this is true, at this time it has limited testing and I'm not sure I
would write job content that leverages this too much. Right now, we are only
using it to trigger RTFD hooks in openstack-infra.

Zuul v3 is really meant to be used with nodepool, much more tightly now than
before. We do have plans to support static nodes in Zuul v3, but work on that
hasn't finished.
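
For reference, a minimal sketch of an executor-only job of the kind described
above (all names are placeholders, and the job still has to be attached to a
pipeline in the project stanza):

    # .zuul.yaml - a job with an empty or missing nodeset runs on the executor
    - job:
        name: sample-sleep
        run: playbooks/sample-sleep.yaml

    # playbooks/sample-sleep.yaml - runs on the executor itself
    - hosts: localhost
      tasks:
        - name: Sleep for a few seconds on the executor
          command: sleep 5

As noted above, though, I wouldn't build much job content on this pattern yet.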

[Conditions]
* The target project is defined as a config-project in the tenant configuration file.
* I didn't write a nodeset in .zuul.yaml, because my job doesn't use Nodepool.
* I configured the playbook's hosts as "- hosts: all" or "- hosts: localhost".
(I referred to the project-config repository.)

[Error logs]
"no hosts matched" or "list index out of range" were generated.
Please see the attached file.


--
////////////////////////_/
Rikimaru Honjo
E-mail:honjo.rikimaru@po.ntt-tx.co.jp

Case 1) I configured the playbook's hosts as "- hosts: all".

2017-09-29 16:18:40,247 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Writing logging config for job /tmp/e56656cd5d1444619c01755e6f858be0/work/logs/job-output.txt /tmp/e56656cd5d1444619c01755e6f858be0/ansible/logging.json
2017-09-29 16:18:40,249 DEBUG zuul.BubblewrapExecutionContext: Bubblewrap command: bwrap --dir /tmp --tmpfs /tmp --dir /var --dir /var/tmp --dir /run/user/1000 --ro-bind /usr /usr --ro-bind /lib /lib --ro-bind /bin /bin --ro-bind /sbin /sbin --ro-bind /etc/resolv.conf /etc/resolv.conf --ro-bind /etc/hosts /etc/hosts --ro-bind /tmp/ssh-WDBthw8Kiv3s/agent.8420 /tmp/ssh-WDBthw8Kiv3s/agent.8420 --bind /tmp/e56656cd5d1444619c01755e6f858be0/work /tmp/e56656cd5d1444619c01755e6f858be0/work --proc /proc --dev /dev --chdir /tmp/e56656cd5d1444619c01755e6f858be0/work --unshare-all --share-net --uid 1000 --gid 1000 --file 14 /etc/passwd --file 16 /etc/group --ro-bind /lib64 /lib64 --ro-bind /etc/nsswitch.conf /etc/nsswitch.conf --ro-bind /etc/alternatives /etc/alternatives --ro-bind /var/lib/zuul/ansible /var/lib/zuul/ansible --ro-bind /tmp/e56656cd5d1444619c01755e6f858be0/ansible /tmp/e56656cd5d1444619c01755e6f858be0/ansible --ro-bind /tmp/e56656cd5d1444619c01755e6f858be0/trusted /tmp/e56656cd5d1444619c01755e6f858be0/trusted --ro-bind /tmp/e56656cd5d1444619c01755e6f858be0/ansible/setupplaybook /tmp/e56656cd5d1444619c01755e6f858be0/ansible/setupplaybook --bind /tmp/e56656cd5d1444619c01755e6f858be0/.ansible /tmp/e56656cd5d1444619c01755e6f858be0/.ansible
2017-09-29 16:18:40,249 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Ansible command: ANSIBLE_CONFIG=/tmp/e56656cd5d1444619c01755e6f858be0/ansible/setup_playbook/ansible.cfg ansible '*' -v -m setup -a 'gather_subset=!all'
2017-09-29 16:18:40,823 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Ansible output: b'Using /tmp/e56656cd5d1444619c01755e6f858be0/ansible/setup_playbook/ansible.cfg as config file'
2017-09-29 16:18:40,841 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Ansible output: b' [WARNING]: provided hosts list is empty, only localhost is available'
2017-09-29 16:18:40,842 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Ansible output: b' [WARNING]: No hosts matched, nothing to do'
2017-09-29 16:18:41,101 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Ansible output terminated
2017-09-29 16:18:41,102 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Ansible exit code: 0
2017-09-29 16:18:41,102 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Stopped watchdog
2017-09-29 16:18:41,102 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Stopped disk job killer
2017-09-29 16:18:41,102 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Ansible complete, result RESULT_NORMAL code 0
2017-09-29 16:18:41,103 DEBUG zuul.BubblewrapExecutionContext: Bubblewrap command: bwrap --dir /tmp --tmpfs /tmp --dir /var --dir /var/tmp --dir /run/user/1000 --ro-bind /usr /usr --ro-bind /lib /lib --ro-bind /bin /bin --ro-bind /sbin /sbin --ro-bind /etc/resolv.conf /etc/resolv.conf --ro-bind /etc/hosts /etc/hosts --ro-bind /tmp/ssh-WDBthw8Kiv3s/agent.8420 /tmp/ssh-WDBthw8Kiv3s/agent.8420 --bind /tmp/e56656cd5d1444619c01755e6f858be0/work /tmp/e56656cd5d1444619c01755e6f858be0/work --proc /proc --dev /dev --chdir /tmp/e56656cd5d1444619c01755e6f858be0/work --unshare-all --share-net --uid 1000 --gid 1000 --file 15 /etc/passwd --file 16 /etc/group --ro-bind /lib64 /lib64 --ro-bind /etc/nsswitch.conf /etc/nsswitch.conf --ro-bind /etc/alternatives /etc/alternatives --ro-bind /var/lib/zuul/ansible /var/lib/zuul/ansible --ro-bind /tmp/e56656cd5d1444619c01755e6f858be0/ansible /tmp/e56656cd5d1444619c01755e6f858be0/ansible --ro-bind /tmp/e56656cd5d1444619c01755e6f858be0/trusted /tmp/e56656cd5d1444619c01755e6f858be0/trusted --ro-bind /tmp/e56656cd5d1444619c01755e6f858be0/ansible/playbook_0 /tmp/e56656cd5d1444619c01755e6f858be0/ansible/playbook_0 --bind /tmp/e56656cd5d1444619c01755e6f858be0/.ansible /tmp/e56656cd5d1444619c01755e6f858be0/.ansible
2017-09-29 16:18:41,103 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Ansible command: ANSIBLE_CONFIG=/tmp/e56656cd5d1444619c01755e6f858be0/ansible/playbook_0/ansible.cfg ansible-playbook -v /tmp/e56656cd5d1444619c01755e6f858be0/trusted/project_0/192.168.10.126/masakari-project/playbooks/masakari-project.yaml -e zuul_execution_phase=run -e zuul_execution_trusted=True -e zuul_execution_canonical_name_and_path=192.168.10.126/masakari-project/playbooks/masakari-project -e zuul_execution_branch=master
2017-09-29 16:18:41,744 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Ansible output: b'Using /tmp/e56656cd5d1444619c01755e6f858be0/ansible/playbook_0/ansible.cfg as config file'
2017-09-29 16:18:41,762 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Ansible output: b' [WARNING]: provided hosts list is empty, only localhost is available'
2017-09-29 16:18:41,946 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Ansible output: b' [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin'
2017-09-29 16:18:41,947 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Ansible output: b"(<ansible.plugins.callback.zuul_stream.CallbackModule object at 0x7f34a1a2e470>): 'NoneType' object has no attribute"
2017-09-29 16:18:41,947 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Ansible output: b"'values'"
2017-09-29 16:18:41,947 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Ansible output: b' [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin'
2017-09-29 16:18:41,947 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Ansible output: b'(<ansible.plugins.callback./var/lib/zuul/ansible/zuul/ansible/callback/zuul_json.CallbackModule object at'
2017-09-29 16:18:41,947 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Ansible output: b"0x7f34a1047ba8>): 'NoneType' object has no attribute 'values'"
2017-09-29 16:18:41,948 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Ansible output: b'skipping: no hosts matched'
2017-09-29 16:18:41,998 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Ansible output terminated
2017-09-29 16:18:41,999 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Ansible exit code: 0
2017-09-29 16:18:41,999 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Stopped disk job killer
2017-09-29 16:18:41,999 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Ansible complete, result RESULT_NORMAL code 0
2017-09-29 16:18:42,000 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Sending result: {"result": "SUCCESS", "data": {}}

Case 2) I configured the playbook's hosts as "- hosts: localhost".

2017-10-02 15:39:00,968 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Writing logging config for job /tmp/cd75e24c10f849c6929f47d866b5d8d4/work/logs/job-output.txt /tmp/cd75e24c10f849c6929f47d866b5d8d4/ansible/logging.json
2017-10-02 15:39:00,971 DEBUG zuul.BubblewrapExecutionContext: Bubblewrap command: bwrap --dir /tmp --tmpfs /tmp --dir /var --dir /var/tmp --dir /run/user/1000 --ro-bind /usr /usr --ro-bind /lib /lib --ro-bind /bin /bin --ro-bind /sbin /sbin --ro-bind /etc/resolv.conf /etc/resolv.conf --ro-bind /etc/hosts /etc/hosts --ro-bind /tmp/ssh-C8amU70LglpG/agent.14749 /tmp/ssh-C8amU70LglpG/agent.14749 --bind /tmp/cd75e24c10f849c6929f47d866b5d8d4/work /tmp/cd75e24c10f849c6929f47d866b5d8d4/work --proc /proc --dev /dev --chdir /tmp/cd75e24c10f849c6929f47d866b5d8d4/work --unshare-all --share-net --uid 1000 --gid 1000 --file 14 /etc/passwd --file 15 /etc/group --ro-bind /lib64 /lib64 --ro-bind /etc/nsswitch.conf /etc/nsswitch.conf --ro-bind /etc/alternatives /etc/alternatives --ro-bind /var/lib/zuul/ansible /var/lib/zuul/ansible --ro-bind /tmp/cd75e24c10f849c6929f47d866b5d8d4/ansible /tmp/cd75e24c10f849c6929f47d866b5d8d4/ansible --ro-bind /tmp/cd75e24c10f849c6929f47d866b5d8d4/trusted /tmp/cd75e24c10f849c6929f47d866b5d8d4/trusted --ro-bind /tmp/cd75e24c10f849c6929f47d866b5d8d4/ansible/setupplaybook /tmp/cd75e24c10f849c6929f47d866b5d8d4/ansible/setupplaybook --bind /tmp/cd75e24c10f849c6929f47d866b5d8d4/.ansible /tmp/cd75e24c10f849c6929f47d866b5d8d4/.ansible
2017-10-02 15:39:00,973 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Ansible command: ANSIBLE_CONFIG=/tmp/cd75e24c10f849c6929f47d866b5d8d4/ansible/setup_playbook/ansible.cfg ansible '*' -v -m setup -a 'gather_subset=!all'
2017-10-02 15:39:01,693 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Ansible output: b'Using /tmp/cd75e24c10f849c6929f47d866b5d8d4/ansible/setup_playbook/ansible.cfg as config file'
2017-10-02 15:39:01,713 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Ansible output: b' [WARNING]: provided hosts list is empty, only localhost is available'
2017-10-02 15:39:01,714 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Ansible output: b' [WARNING]: No hosts matched, nothing to do'
2017-10-02 15:39:01,991 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Ansible output terminated
2017-10-02 15:39:01,992 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Ansible exit code: 0
2017-10-02 15:39:01,992 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Stopped watchdog
2017-10-02 15:39:01,992 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Stopped disk job killer
2017-10-02 15:39:01,992 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Ansible complete, result RESULT_NORMAL code 0
2017-10-02 15:39:01,993 DEBUG zuul.BubblewrapExecutionContext: Bubblewrap command: bwrap --dir /tmp --tmpfs /tmp --dir /var --dir /var/tmp --dir /run/user/1000 --ro-bind /usr /usr --ro-bind /lib /lib --ro-bind /bin /bin --ro-bind /sbin /sbin --ro-bind /etc/resolv.conf /etc/resolv.conf --ro-bind /etc/hosts /etc/hosts --ro-bind /tmp/ssh-C8amU70LglpG/agent.14749 /tmp/ssh-C8amU70LglpG/agent.14749 --bind /tmp/cd75e24c10f849c6929f47d866b5d8d4/work /tmp/cd75e24c10f849c6929f47d866b5d8d4/work --proc /proc --dev /dev --chdir /tmp/cd75e24c10f849c6929f47d866b5d8d4/work --unshare-all --share-net --uid 1000 --gid 1000 --file 15 /etc/passwd --file 16 /etc/group --ro-bind /lib64 /lib64 --ro-bind /etc/nsswitch.conf /etc/nsswitch.conf --ro-bind /etc/alternatives /etc/alternatives --ro-bind /var/lib/zuul/ansible /var/lib/zuul/ansible --ro-bind /tmp/cd75e24c10f849c6929f47d866b5d8d4/ansible /tmp/cd75e24c10f849c6929f47d866b5d8d4/ansible --ro-bind /tmp/cd75e24c10f849c6929f47d866b5d8d4/trusted /tmp/cd75e24c10f849c6929f47d866b5d8d4/trusted --ro-bind /tmp/cd75e24c10f849c6929f47d866b5d8d4/ansible/playbook_0 /tmp/cd75e24c10f849c6929f47d866b5d8d4/ansible/playbook_0 --bind /tmp/cd75e24c10f849c6929f47d866b5d8d4/.ansible /tmp/cd75e24c10f849c6929f47d866b5d8d4/.ansible
2017-10-02 15:39:01,994 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Ansible command: ANSIBLE_CONFIG=/tmp/cd75e24c10f849c6929f47d866b5d8d4/ansible/playbook_0/ansible.cfg ansible-playbook -v /tmp/cd75e24c10f849c6929f47d866b5d8d4/trusted/project_0/192.168.10.126/masakari-project/playbooks/masakari-project.yaml -e zuul_execution_phase=run -e zuul_execution_trusted=True -e zuul_execution_canonical_name_and_path=192.168.10.126/masakari-project/playbooks/masakari-project -e zuul_execution_branch=master
2017-10-02 15:39:02,629 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Ansible output: b'Using /tmp/cd75e24c10f849c6929f47d866b5d8d4/ansible/playbook_0/ansible.cfg as config file'
2017-10-02 15:39:02,646 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Ansible output: b' [WARNING]: provided hosts list is empty, only localhost is available'
2017-10-02 15:39:02,828 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Ansible output: b' [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin'
2017-10-02 15:39:02,829 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Ansible output: b'(<ansible.plugins.callback.zuul_stream.CallbackModule object at 0x7f14cb7c67b8>):'
2017-10-02 15:39:02,829 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Ansible output: b' [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin'
2017-10-02 15:39:02,829 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Ansible output: b'(<ansible.plugins.callback./var/lib/zuul/ansible/zuul/ansible/callback/zuul_json.CallbackModule object at'
2017-10-02 15:39:02,829 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Ansible output: b'0x7f14caddcba8>):'
2017-10-02 15:39:02,853 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Ansible output: b' [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin'
2017-10-02 15:39:02,854 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Ansible output: b'(<ansible.plugins.callback./var/lib/zuul/ansible/zuul/ansible/callback/zuul_json.CallbackModule object at'
2017-10-02 15:39:02,854 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Ansible output: b'0x7f14caddcba8>): list index out of range'
2017-10-02 15:39:03,402 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Ansible output: b' [WARNING]: Failure using method (v2_runner_on_ok) in callback plugin'
2017-10-02 15:39:03,402 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Ansible output: b'(<ansible.plugins.callback./var/lib/zuul/ansible/zuul/ansible/callback/zuul_json.CallbackModule object at'
2017-10-02 15:39:03,402 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Ansible output: b'0x7f14caddcba8>): list index out of range'
2017-10-02 15:39:03,987 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Ansible output terminated
2017-10-02 15:39:03,987 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Ansible exit code: 0
2017-10-02 15:39:03,987 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Stopped disk job killer
2017-10-02 15:39:03,988 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Ansible complete, result RESULT_NORMAL code 0
2017-10-02 15:39:03,988 DEBUG zuul.AnsibleJob: [build: cd75e24c10f849c6929f47d866b5d8d4] Sending result: {"result": "SUCCESS", "data": {"zuul": {"log_url": "http://192.168.10.22:8000/"}}}




responded Oct 4, 2017 by pabelanger_at_redhat (6,560 points)   1 1 2
...