On Tue, May 2, 2017 at 4:56 PM Sean McGinnis firstname.lastname@example.org wrote:
On Tue, May 02, 2017 at 03:36:20PM +0200, Jordan Pittier wrote:
On Tue, May 2, 2017 at 7:42 AM, Ghanshyam Mann email@example.com wrote:
In Cinder, there are many features/APIs which are backend specific and
will return 405 or 501 if the backend in use does not implement them.
If such tests are implemented in Tempest, then they will break gates
where a job for that backend is voting, like the ceph job in the
glance_store gate. There have been many such cases recently where ceph
jobs were broken by such tests; the latest is the force-delete backup feature.
Reverting the force-delete tests in . To resolve such cases to some
extent, Jon is going to add a white/black list of tests which can run
on the ceph job, depending on which features ceph implements. But this
does not resolve the issue completely, for reasons like:
1. External use of Tempest becomes difficult, since users need to know
which tests to skip for which backend.
2. Tempest tests become too backend specific.
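To make the white/black list idea concrete, here is a minimal sketch of how such a list works. The tempest CLI consumes a file of regexes (the flag name has varied across releases, e.g. --blacklist-file and later --exclude-list); the regex patterns and test IDs below are hypothetical examples, not the actual ceph job configuration:

```python
import re

# Hypothetical exclude list for a ceph job: each line is a regex of
# test IDs to skip because the backend does not implement the feature.
# Tempest would consume such a file via something like:
#   tempest run --regex volume --exclude-list ceph-exclude.txt
exclude_patterns = [
    r".*test_force_delete_backup.*",
    r".*test_volume_manage.*",
]

# A sample of discovered test IDs (made up for illustration).
all_tests = [
    "tempest.api.volume.test_force_delete_backup",
    "tempest.api.volume.test_volumes_list",
]

# Keep only tests not matched by any exclude pattern.
selected = [t for t in all_tests
            if not any(re.match(p, t) for p in exclude_patterns)]
print(selected)
```

This also illustrates the maintenance problem above: every external user running Tempest against a ceph-backed cloud would need to reproduce this list themselves.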
Now there are a few options to resolve this:
1. Tempest should not test APIs/features which are backend
specific, such as those marked that way in the api-ref.
So basically, if one of the 50 Cinder drivers doesn't support a feature, we
should never test that feature? What about the 49 other drivers? If a
feature exists and can be tested in the gate (with whatever default
config/driver is shipped), then I think we should test it.
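One existing middle ground is Tempest's feature-flag style of skipping: a test skips (rather than fails) when the deployment's config says the backend lacks the optional feature. The sketch below imitates that pattern with a stand-in config class; the flag names and test class are assumptions for illustration, not Tempest's real option names:

```python
import unittest

# Stand-in for tempest's volume feature-enabled config options; in a
# real deployment these would be toggled per backend in tempest.conf.
# The attribute names here are hypothetical.
class VolumeFeatureEnabled:
    backup = True               # e.g. supported by the default LVM driver
    force_delete_backup = False # e.g. disabled for a ceph-backed job

class BackupForceDeleteTest(unittest.TestCase):
    def setUp(self):
        # Skip rather than fail when the backend lacks the optional
        # feature, mirroring tempest's feature-flag approach.
        if not VolumeFeatureEnabled.force_delete_backup:
            self.skipTest("force-delete backup not enabled for this backend")

    def test_force_delete_backup(self):
        # A real test would call the Cinder backup API here.
        self.assertTrue(True)

result = unittest.TestResult()
suite = unittest.defaultTestLoader.loadTestsFromTestCase(BackupForceDeleteTest)
suite.run(result)
print("skipped:", len(result.skipped))
```

With this approach the test stays in Tempest for the 49 drivers that do support the feature, while a ceph job simply flips the flag off instead of maintaining a blacklist.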
50? Over 100 as of Ocata.
Well, is tempest's purpose in life to provide complete gate test coverage,
or is tempest's purpose in life to give operators a tool to validate that
their deployment is working as expected?
Tempest is used for several different purposes, but I would say it was never
meant to ensure 100% coverage of the API. It is used by many operators to
validate their deployments, even if that part is better achieved via the
"scenario" tests, as opposed to the "API" tests.
Main use cases for Tempest are:
- integration (cross-service) testing in the gate
- help to ensure API backward compatibility / stability
- home for interoperability tests
In attempting to do things in the past, I've received push back based on
the argument that it was the latter. For this reason, in-tree tempest tests
were added to Cinder to give us a way to get better test coverage.
There are several use cases for in-tree tests.
Tests that provide very little cross-service validation (e.g. most API
tests) do not need to be in Tempest. Running a full cloud is expensive,
resource- and time-wise, and it's not the best test environment to run a
combination of negative test cases.
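As an illustration of why negative API tests don't need a full cloud: an in-tree test can exercise the API layer directly and assert on the 405/501 responses mentioned earlier. The handler below is a made-up stand-in, not Cinder's actual API code; a real in-tree test would use the project's own test fixtures:

```python
import http.client

class FakeVolumeAPI:
    """Hypothetical stand-in for a Cinder API handler, used here only
    to show a negative test asserting an error status code."""
    SUPPORTED = {"create", "delete"}

    def call(self, action):
        if action not in self.SUPPORTED:
            # Unsupported optional feature -> 501 Not Implemented
            return http.client.NOT_IMPLEMENTED
        return http.client.ACCEPTED

api = FakeVolumeAPI()
# Negative case: backend does not implement force-delete of backups.
assert api.call("force_delete_backup") == 501
# Positive case: a core API the backend must support.
assert api.call("delete") == 202
```

Nothing here needs nova, glance, or a running deployment, which is exactly why such tests are cheaper in-tree than in Tempest.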
Many tests cannot be driven via the API. It's very hard, for instance, to test
any transient resource state via the API, since there's not enough control via
the API alone.
Scheduler tests are not best implemented via the API either, since they often
require several nodes and resources when executed in a full cloud environment.
Now that this is all in place, I think it's working well and I would like
to see it continue that way. IMO, tempest proper should not have anything
that isn't universally applicable to real world deployments - not just for
things like Ceph, but also things like the backend-specific manage/unmanage
tests that were added and broke a large majority of third party CI.
Is there a policy in Cinder that a backend must implement a certain set of
APIs? If so we could think of testing only that set of APIs in Tempest, so
any app developer knows that he/she can rely on that minimum set of APIs.
If the list of APIs is not constrained on the cinder side, then the next
driver along may not support some API, and we would have to stop testing it
in Tempest - which is not an option.
Another point is that APIs that rely on services other than nova are best
tested in Tempest, so that the tests run in the gates of the other services as
well - or at least the cinder functional test job should run against the other
services.
OpenStack Development Mailing List (not for usage questions)