
[openstack-dev] [tripleo] pingtest vs tempest

0 votes

Greetings dear owls,

I would like to bring back an old topic: running tempest in the gate.

== Context

Right now, the TripleO gate is running something called pingtest to
validate that the OpenStack cloud is working. It's a Heat stack that
deploys a Nova server, some volumes, a Glance image, a Neutron network
and sometimes a little bit more.
To deploy the pingtest, you obviously need Heat deployed in your overcloud.
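For readers who haven't seen it, a pingtest-style stack boils down to a tiny HOT template along these lines (an illustrative sketch only, not the actual tripleo-ci template; the resource and image names here are made up):

```yaml
heat_template_version: 2016-10-14

description: Minimal pingtest-style stack (illustrative sketch)

resources:
  test_net:
    type: OS::Neutron::Net

  test_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: test_net }
      cidr: 192.168.100.0/24

  test_volume:
    type: OS::Cinder::Volume
    properties:
      size: 1

  test_server:
    type: OS::Nova::Server
    properties:
      image: pingtest_image   # a Glance image assumed to be uploaded beforehand
      flavor: m1.tiny
      networks:
        - network: { get_resource: test_net }
```

Roughly speaking, if stack creation succeeds and the server ends up reachable, the deployment is considered validated.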

== Problems:

Although pingtest has been very helpful over the last years:
- easy to understand: it's a Heat template, like an OpenStack user
would write to deploy their apps.
- fast: the stack takes a few minutes to be created and validated

It has some limitations:
- Limited to what Heat resources support (for example, some OpenStack
resources can't be managed from Heat)
- Impossible to run a dynamic workflow (testing a live migration, for example)

== Solutions

1) Switch pingtest to a Tempest run of some specific tests, with feature
parity with what we had with pingtest.
For example, we could imagine running the scenarios that deploy a VM and
boot from volume. It would test the same thing as pingtest (details
can be discussed here).
Each scenario would run more tests depending on the services that it
runs (scenario001 is telemetry, so it would run some tempest tests for
Ceilometer, Aodh, Gnocchi, etc.).
We should work at making the tempest run as short as possible, and as
close as possible to what we have with pingtest.

2) Run custom scripts in the TripleO CI tooling, called from the pingtest
(Heat template), that would run some validation commands (API calls,
etc.).
It has been investigated in the past but never implemented AFAIK.

3) ?

I tried to make this text short and go straight to the point, please
bring feedback now. I hope we can make progress on $topic during Pike,
so we can increase our testing coverage and detect deployment issues
sooner.

Thanks,
--
Emilien Macchi


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
asked Sep 6, 2017 in openstack-dev by emilien_at_redhat.co (36,940 points)

38 Responses

0 votes

On Wed, Apr 5, 2017 at 4:49 PM, Emilien Macchi emilien@redhat.com wrote:
3) ?

Browbeat isn't a "validation" tool like Tempest; however, we have
Browbeat integrated in some CI systems already. We could create a very
targeted test to sniff out any issues with the cloud.

responded Apr 5, 2017 by Joe_Talerico (800 points)  
0 votes

Hi,

I think Rally or Browbeat and other performance-oriented solutions won't
serve our needs, because we run TripleO CI in a virtualized environment with
very limited resources. Actually, we are pretty close to fully utilizing
these resources when deploying OpenStack, so very little is available for
testing.
It's not a problem to run tempest API tests because they are cheap - they
take little time and few resources, but they also give little coverage.
Scenario tests are more interesting and give us more coverage, but they also
take a lot of resources (which we sometimes don't have).

It may be useful to run a "limited edition" of API tests that maximizes
coverage and avoids duplication - for example, just checking that each
service basically works, without covering all of its functionality. It would
take very little time (e.g. 5 tests for each service) and would give a
general picture of deployment success. It would also cover areas that are
not covered by pingtest.

I think it could be an option to develop special scenario tempest tests for
TripleO which would fit our needs.

Thanks


--
Best regards
Sagi Shnaidman


responded Apr 6, 2017 by Sagi_Shnaidman (1,160 points)
0 votes

On 04/05/2017 10:49 PM, Emilien Macchi wrote:
== Solutions

1) Switch pingtest to Tempest run on some specific tests, with feature
parity of what we had with pingtest.
For example, we could imagine to run the scenarios that deploys VM and
boot from volume. It would test the same thing as pingtest (details
can be discussed here).
Each scenario would run more tests depending on the service that they
run (scenario001 is telemetry, so it would run some tempest tests for
Ceilometer, Aodh, Gnocchi, etc).
We should work at making the tempest run as short as possible, and the
close as possible from what we have with a pingtest.

A lot of work is going into Tempest itself and its various plugins, so that it
becomes a convenient and universal tool for testing OpenStack clouds. While we're
not quite there in terms of convenience, it's hard to match the coverage of
tempest + plugins. I'd prefer that TripleO use (some subset of) the Tempest test suite(s).

2) Run custom scripts in TripleO CI tooling, called from the pingtest
(heat template), that would run some validations commands (API calls,
etc).
It has been investigated in the past but never implemented AFIK.

3) ?

Unless you want to duplicate all the work that goes into the Tempest ecosystem now,
this is probably not a good idea.



responded Apr 6, 2017 by Dmitry_Tantsur (18,080 points)
0 votes

Sagi,

I think Rally or Browbeat and other performance oriented solutions won't
serve our needs, because we run TripleO CI on virtualized environment with
very limited resources. Actually we are pretty close to full utilizing
these resources when deploying openstack, so very little is available for
test.

You can run Rally with any load, including just booting a single smallest VM.

It may be useful to run a "limited edition" of API tests that maximize
coverage and don't duplicate, for example just to check service working
basically, without covering all its functionality. It will take very little
time (i.e. 5 tests for each service) and will give a general picture of
deployment success. It will cover fields that are not covered by pingtest
as well.

You can actually pick a few of the scenarios that we have in Rally and cover
most of the functionality.
If you specify what exactly you want to test, I can help with writing a Rally
task for that (it will use as few resources as possible).
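To make that concrete, a minimal Rally task along those lines might look like this (a sketch only: the scenario name is a real Rally built-in, but the flavor and image names are assumptions for your environment):

```yaml
---
NovaServers.boot_and_delete_server:
  - args:
      flavor:
        name: m1.tiny        # smallest flavor, to keep the load minimal
      image:
        name: cirros         # assumed tiny test image
    runner:
      type: constant
      times: 1               # a single iteration is enough for validation
      concurrency: 1
```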

Best regards,
Boris Pavlovic

responded Apr 6, 2017 by boris_at_pavlovic.me (6,900 points)
0 votes

On Thu, Apr 6, 2017 at 5:32 AM, Sagi Shnaidman sshnaidm@redhat.com wrote:
HI,

I think Rally or Browbeat and other performance oriented solutions won't
serve our needs, because we run TripleO CI on virtualized environment with
very limited resources. Actually we are pretty close to full utilizing these
resources when deploying openstack, so very little is available for test.
It's not a problem to run tempest API tests because they are cheap - take
little time, little resources, but also gives little coverage. Scenario test
are more interesting and gives us more coverage, but also takes a lot of
resources (which we don't have sometimes).

Sagi,
In my original message I mentioned a "targeted" test; I should have
explained that more. We could configure the specific scenario so that
the load on the virtualized overcloud would be minimal. Justin Kilpatrick
already has Browbeat integrated with TripleO Quickstart[1], so there
shouldn't be much work to try this proposed solution.

It may be useful to run a "limited edition" of API tests that maximize
coverage and don't duplicate, for example just to check service working
basically, without covering all its functionality. It will take very little
time (i.e. 5 tests for each service) and will give a general picture of
deployment success. It will cover fields that are not covered by pingtest as
well.

I think could be an option to develop a special scenario tempest tests for
TripleO which would fit our needs.

I haven't looked at Tempest in a long time, so maybe its functionality
has improved. I just saw the opportunity to integrate Browbeat/Rally
into CI to test the functionality of OpenStack services, while also
capturing performance metrics.

Joe

[1] https://github.com/openstack/browbeat/tree/master/ci-scripts

responded Apr 6, 2017 by Joe_Talerico (800 points)  
0 votes

On Thu, 6 Apr 2017, Sagi Shnaidman wrote:

It may be useful to run a "limited edition" of API tests that maximize
coverage and don't duplicate, for example just to check service working
basically, without covering all its functionality. It will take very little
time (i.e. 5 tests for each service) and will give a general picture of
deployment success. It will cover fields that are not covered by pingtest
as well.

It sounds like using some parts of tempest is perhaps the desired
thing here, but in case a "limited edition" test against the APIs to
do what amounts to a smoke test is desired, it might be worthwhile
to investigate using gabbi[1] and its command-line gabbi-run[2] tool for
some fairly simple and readable tests that can describe a sequence
of API interactions. There are lots of tools that can do the same
thing, so gabbi may not be the right choice, but it's there as an
option.

The telemetry group had (and may still have) some integration tests
that use gabbi files to integrate ceilometer, heat (starting some
VMs), aodh and gnocchi and confirm that the expected flow happened.
Since the earlier raw scripts, I think there's been some integration
with tempest, but gabbi files are still used[3].

If this might be useful and I can help out, please ask.

[1] http://gabbi.readthedocs.io/
[2] http://gabbi.readthedocs.io/en/latest/runner.html
[3] https://github.com/openstack/ceilometer/tree/master/ceilometer/tests/integration
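For a flavour of what those look like, a gabbi file is just YAML describing an ordered sequence of HTTP requests and expectations; a minimal smoke check might be (an illustrative sketch; the endpoint paths, and how the target host and auth get wired up, are assumptions):

```yaml
tests:
  - name: compute api responds
    GET: /v2.1/
    status: 200

  - name: list flavors
    GET: /v2.1/flavors
    status: 200
```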

--
Chris Dent ¯\_(ツ)_/¯ https://anticdent.org/
freenode: cdent tw: @anticdent

responded Apr 6, 2017 by cdent_plus_os_at_ant (12,800 points)
0 votes

Maybe I'm getting a little off topic with this question, but why was
Tempest removed last time?

I'm not well versed in the history of this discussion, but from what I
understand Tempest in the gate has been an off-and-on-again thing for a
while, and I've never heard the story of why it got removed.

responded Apr 6, 2017 by Justin_Kilpatrick (340 points)  
0 votes

Having tempest running will also allow these jobs to appear in the
openstack-health system.

responded Apr 6, 2017 by Arx_Cruz (540 points)  
0 votes

On Thursday, 6 April 2017 13:29:32 CEST Justin Kilpatrick wrote:
Maybe I'm getting a little off topic with this question, but why was
Tempest removed last time?

I'm not well versed in the history of this discussion, but from what I
understand Tempest in the gate has
been an off and on again thing for a while but I've never heard the
story of why it got removed.

Also, saying just "tempest" can be a bit confusing.
I guess that what can easily be done here is:
- use tempest (library, CLI) to initialize the test environment (tempest init),
run the tests (tempest run/ostestr), and gather the results
- select a subset of the current tests to be executed, or even write some.
Pingtest itself could be changed into a Tempest plugin...

A possible "resource problem" depends only on the set of tests which are
executed.

--
Luigi


responded Apr 6, 2017 by Luigi_Toscano (1,700 points)
0 votes

I don't really have much context on what the decision is going to be based
on here, so I'll just add some random comments here and there.

On Thu, Apr 6, 2017 at 12:48 PM Arx Cruz arxcruz@redhat.com wrote:

Having tempest running will allow these jobs to appear in openstack-health
system as well.

I agree that's a plus. It's also rather easy to produce subunit from
whatever you are using to run tests, and that's in fact all you need to
get data into openstack-health without touching the existing
infrastructure. So in case you decide not to use Tempest,
openstack-health can still be on the list.


On Thu, Apr 6, 2017 at 7:00 AM, Chris Dent cdent+os@anticdent.org wrote:

On Thu, 6 Apr 2017, Sagi Shnaidman wrote:

It may be useful to run a "limited edition" of API tests that maximize
coverage and don't duplicate, for example just to check service working
basically, without covering all its functionality. It will take very
little
time (i.e. 5 tests for each service) and will give a general picture of
deployment success. It will cover fields that are not covered by
pingtest
as well.

We have a "smoke" attribute here and there, but it's not well curated at
all, so you're probably better off maintaining your own list.
Since presumably you're more interested in verifying that a deployed cloud
is functional - as opposed to verifying that specific APIs are working
properly - you may want to look at scenario tests, where with a couple of
tests you can already cover a lot of basic stuff; e.g. if you can boot a
server from a volume with an image from glance, and ssh into it, you have
already proven a lot about the general health of your cloud.

responded Apr 6, 2017 by Andrea_Frittoli (5,920 points)
...