
[openstack-dev] [python-glanceclient] Return request-id to caller

0 votes

Hi Devs,

We are adding support for returning 'x-openstack-request-id' to the caller as per the design proposed in cross-project specs:
http://specs.openstack.org/openstack/openstack-specs/specs/return-request-id.html

Problem Description:
Cannot add a new property of list type to the warlock.model object.

How a model object is created:
Let's take the glanceclient.v2.images.get() call [1] as an example:

After getting the response we call the model() method. model() does the job of creating a warlock.model object (essentially a dict) based on the schema given as an argument (in this case the image schema retrieved from glance). Inside model(), the raw() method simply returns the image schema as a JSON object. The advantage of this warlock.model object over a simple dict is that it validates any change to the object against the rules specified in the reference schema. The keys of this model object are available to the caller as object properties.
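
For illustration, here is a rough, self-contained sketch of that behaviour (this is not glanceclient code; the schema below is made up and warlock's exact exception type may differ):

import warlock

# Simplified stand-in for the image schema retrieved from glance.
schema = {
    "name": "image",
    "properties": {"name": {"type": "string"}},
    "additionalProperties": {"type": "string"},
}

Image = warlock.model_factory(schema)   # dict-like model class built from the schema
img = Image(name="cirros")

img["name"] = "cirros-0.3.4"   # accepted: matches the schema
img["owner"] = "admin"         # accepted: additional property, but it must be a string
try:
    img["request_ids"] = ["req-1", "req-2"]   # a list is rejected by the schema
except Exception as exc:                      # warlock rejects the change (InvalidOperation)
    print("rejected: %s" % exc)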

Underlying reason:
The schema for the different sub-APIs is returned slightly differently. For the images and metadef APIs, glance.schema.Schema.raw() is used, which returns a schema containing "additionalProperties": {"type": "string"}. For the members and tasks APIs, glance.schema.Schema.minimal() is used, which returns a schema object that does not contain "additionalProperties".

So we can add extra properties of any type to the model objects returned from the members and tasks APIs, but for the images and metadef APIs we can only add properties of type string. For the latter case we also depend on the glance configuration allowing additional properties.

Based on our analysis, we have come up with two approaches for resolving this issue:

Approach #1: Inject a request_ids property into the warlock model object in glanceclient
Here we do the following:
1. Inject 'request_ids' as an additional property into the model object (returned from model())
2. Return the model object, which now contains the request_ids property

Limitations:
1. Because the glance schemas for images and metadef only allow additional properties of type string, the request_ids value, whose natural type is a list, has to be flattened into a comma-separated 'string' of request ids as a compromise.
2. A lot of extra code is needed to wrap objects returned from the client API so that the caller can get the request ids; for example, we would need to write wrapper classes for dict, list, str, tuple and generator (see the rough sketch after this list).
3. It is not a good design: we are adding what should really be a base property as an additional property, purely as a compromise.
4. There is a dependency on whether glance is configured to allow custom/additional properties. [2]
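
For illustration, the kind of wrapper classes limitation 2 refers to would look roughly like this (a hypothetical sketch, not actual client code):

class ListWithMeta(list):
    # A list that can carry the extra request_ids attribute for the caller.
    def __init__(self, values, request_ids):
        super(ListWithMeta, self).__init__(values)
        self.request_ids = request_ids

class StrWithMeta(str):
    # str is immutable, so the attribute has to be attached in __new__.
    def __new__(cls, value, request_ids):
        obj = super(StrWithMeta, cls).__new__(cls, value)
        obj.request_ids = request_ids
        return obj

tags = ListWithMeta(["ubuntu", "x86_64"], ["req-3baa07b7"])
print(tags, tags.request_ids)

Similar wrappers would be needed for dict, tuple and generator return values.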

Approach #2: Add 'request_ids' property to all schema definitions in glance

Here we add a 'request_ids' property to the schema of the various APIs, as follows:

"request_ids": {
"type": "array",
"items": {
"type": "string"
}
}

Doing this will make the changes in glanceclient very simple compared to approach #1.
It also looks like the better design, as it will be consistent across the APIs.
We then simply need to populate the request_ids property in the various API calls, for example glanceclient.v2.images.get() (a rough sketch follows below).
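
The client-side change could look roughly like this (simplified from the current images controller; the request_ids handling is hypothetical, not the final patch):

# Sketch of the get() method on the images controller in glanceclient.
def get(self, image_id):
    url = '/v2/images/%s' % image_id
    resp, body = self.http_client.get(url)
    image = self.model(**body)   # warlock model built from the image schema
    # With approach #2 the schema already defines 'request_ids' as an array of
    # strings, so attaching the header value does not trip the validation.
    image['request_ids'] = [resp.headers.get('x-openstack-request-id')]
    return image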

Please let us know which approach is better or any suggestions for the same.

[1] https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v2/images.py#L179
[2] https://github.com/openstack/glance/blob/master/glance/api/v2/images.py#L944


asked Dec 9, 2015 in openstack-dev by Kekane,_Abhishek (3,940 points)   5 7

7 Responses

0 votes

On 09/12/15 11:33 +0000, Kekane, Abhishek wrote:

[...]

Hey Abhishek,

thanks for working on this.

To be honest, I'm a bit confused on why the request_id needs to be an
attribute of the image. Isn't it passed as a header? Does it have to
be an attribute so we can "print" it?

As it is presented in your email, I'd probably go with option #2 but
I'm curious to know the answer to my question.

Cheers,
Flavio


--
@flaper87
Flavio Percoco



responded Dec 9, 2015 by Flavio_Percoco (36,960 points)   3 7 11
0 votes

Excerpts from Flavio Percoco's message of 2015-12-09 09:09:10 -0430:

On 09/12/15 11:33 +0000, Kekane, Abhishek wrote:

[...]

Hey Abhishek,

thanks for working on this.

To be honest, I'm a bit confused on why the request_id needs to be an
attribute of the image. Isn't it passed as a header? Does it have to
be an attribute so we can "print" it?

The requirement they're trying to meet is to make the request id
available to the user of the client library [1]. The user typically
doesn't have access to the headers, so the request id needs to be
part of the payload returned from each method. In other clients
that work with simple data types, they've subclassed dict, list,
etc. to add the extra property. This adds the request id to the
return value without making a breaking change to the API of the
client library.

Abhishek, would it be possible to add the request id information
to the schema data in glance client, before giving it to warlock?
I don't know whether warlock asks for the schema or what form that
data takes (dictionary, JSON blob, etc.). If it's a dictionary
visible to the client code it would be straightforward to add data
to it.
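
For what it's worth, if the schema is visible to the client as a plain dict, that might look roughly like this (illustrative only; the actual glanceclient call sites may differ):

import copy

import warlock

def model_with_request_ids(schema_dict):
    # schema_dict: the image schema fetched from glance, as a plain dict.
    # Inject 'request_ids' before warlock ever sees the schema, so it becomes a
    # first-class array property rather than a string-only additional property.
    patched = copy.deepcopy(schema_dict)
    patched.setdefault('properties', {})['request_ids'] = {
        'type': 'array',
        'items': {'type': 'string'},
    }
    return warlock.model_factory(patched)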

Failing that, is it possible to change warlock to allow extra
properties with arbitrary types to be added to objects? Because
validating inputs to the constructor is all well and good, but
breaking the ability to add data to an object is a bit un-pythonic.

If we end up having to change the schema definitions in the Glance API,
that also means changing those API calls to add the request id to the
return value, right?

Doug

[1] http://specs.openstack.org/openstack/openstack-specs/specs/return-request-id.html

responded Dec 9, 2015 by Doug_Hellmann (87,520 points)   3 4 9
0 votes

Excerpts from Flavio Percoco's message of 2015-12-09 09:09:10 -0430:

On 09/12/15 11:33 +0000, Kekane, Abhishek wrote:

[...]

Hey Abhishek,

thanks for working on this.

To be honest, I'm a bit confused on why the request_id needs to be an
attribute of the image. Isn't it passed as a header? Does it have to
be an attribute so we can "print" it?

The requirement they're trying to meet is to make the request id
available to the user of the client library [1]. The user typically
doesn't have access to the headers, so the request id needs to be
part of the payload returned from each method. In other clients

Will this work if the payload is image data?

that work with simple data types, they've subclassed dict, list,
etc. to add the extra property. This adds the request id to the
return value without making a breaking change to the API of the
client library.

[...]

Message: 15
Date: Wed, 9 Dec 2015 21:59:50 +0800
From: "=?utf-8?B?WmhpIENoYW5n?=" changzhi@unitedstack.com
To: "=?utf-8?B?b3BlbnN0YWNrLWRldg==?="
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [ironic]Boot physical machine fails, says
"PXE-E11 ARP Timeout"
Message-ID: tencent_50BBE4336F52F9E54B5710BC@qq.com
Content-Type: text/plain; charset="utf-8"

hi, all
I'm treating a normal physical machine as a bare metal node. The physical machine boots when I run "nova boot xxx" on the command line, but then an error happens ("PXE-E11 ARP Timeout"). I uploaded a video to YouTube, link: https://www.youtube.com/watch?v=XZQCNsrkyMI&feature=youtu.be. Could someone give me some advice?

Thx
Zhi Chang


Message: 16
Date: Wed, 09 Dec 2015 09:02:38 -0500
From: Doug Hellmann doug@doughellmann.com
To: openstack-dev openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [stable] Stable team PTL nominations are
open
Message-ID: 1449669713-sup-8899@lrrr.local
Content-Type: text/plain; charset=UTF-8

Excerpts from Thierry Carrez's message of 2015-12-09 09:57:24 +0100:

Thierry Carrez wrote:

Thierry Carrez wrote:

The nomination deadline is passed, we have two candidates!

I'll be setting up the election shortly (with Jeremy's help to generate
election rolls).

OK, the election just started. Recent contributors to a stable branch
(over the past year) should have received an email with a link to vote.
If you haven't and think you should have, please contact me privately.

The poll closes on Tuesday, December 8th at 23:59 UTC.
Happy voting!

The election is over [1]; let me congratulate Matt Riedemann on his election! Thanks to everyone who participated in the vote.

Now I'll submit the request for spinning off as a separate project team to the governance repository ASAP, and we should be up and running very soon.

Cheers,

[1] http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_2f5fd6c3837eae2a

Congratulations, Matt!

Doug


Message: 17
Date: Wed, 9 Dec 2015 09:32:53 -0430
From: Flavio Percoco flavio@redhat.com
To: Jordan Pittier jordan.pittier@scality.com
Cc: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [glance][tempest][defcore] Process to
improve test coverage in tempest
Message-ID: 20151209140253.GB10644@redhat.com
Content-Type: text/plain; charset="utf-8"; Format="flowed"

On 08/12/15 22:31 +0100, Jordan Pittier wrote:

Hi Flavio,

On Tue, Dec 8, 2015 at 9:52 PM, Flavio Percoco flavio@redhat.com wrote:

Oh, I meant occasionally. Whenever a missing test for an API is found, it'd be easy enough for the implementer to show up at the meeting and bring it up.

From my experience as a Tempest reviewer, I'd say that most newly added tests are not submitted by regular Tempest contributors. I assume (wrongly?) that it's mostly people from the actual projects (e.g. glance) who are interested in adding new Tempest tests to exercise a recently implemented feature. Put differently, I don't think it's the Tempest core team/community's job to add new tests. We mostly provide a framework and guidance these days.

I agree that the tempest team should focus on providing the framework rather than the tests themselves. However, these tests are often contributed by people who are not part of the project's team.

But, reading this thread, I don't know what to suggest. As a Tempest reviewer I won't start a new ML thread or send a message to a PTL each time I see a new test being added... I assume the patch author knows what they are doing; I can't keep up with what's going on in each and every project.

This is what I'd like to avoid. This assumption is exactly what almost got the tasks API test merged, and that will likely happen for other things.

I don't think it's wrong to ping someone from the community when new
tests are added, especially because these tests are used by defcore
as well. Adding the PTL to the review (or some liaison) is simple
enough. We do this for many things in OpenStack. That is, we wait for
PTLs/liaisons approval before going forward with some decisions.

Also, a test can be quickly removed if it is later deemed not so useful.

Sure, but this is wasting people's time: the contributor's, the reviewer's and the community's, as the test will have to be added, reviewed and then deleted.

I agree this doesn't happen too often but the fact that it happened is
enough of a reason for me to work on improving the process. Again,
especially because these tests are not meant to be used just by our
CI.

Cheers,
Flavio

Jordan

--
@flaper87
Flavio Percoco


Message: 18
Date: Wed, 9 Dec 2015 09:09:17 -0500
From: Anita Kuno anteaya@anteaya.info
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Evolving the stadium concept
Message-ID: 5668360D.5030206@anteaya.info
Content-Type: text/plain; charset=windows-1252

On 12/09/2015 07:06 AM, Sean Dague wrote:

On 12/09/2015 01:46 AM, Armando M. wrote:

On 3 December 2015 at 02:21, Thierry Carrez <thierry@openstack.org> wrote:

Armando M. wrote:

On 2 December 2015 at 01:16, Thierry Carrez <thierry@openstack.org> wrote:

Armando M. wrote:

One solution is, like you mentioned, to make some (or all) of them
full-fledged project teams. Be aware that this means the TC would judge
those new project teams individually and might reject them if we feel
the requirements are not met. We might want to clarify what happens
then.

That's a good point. Do we have existing examples of this or would we be
sailing in uncharted waters?

It's been pretty common that we rejected/delayed applications for
projects where we felt they needed more alignment. In such cases, the
immediate result for those projects if they are out of the Neutron
"stadium" is that they would fall from the list of official projects.
Again, I'm fine with that outcome, but I want to set expectations
clearly :)

Understood. It sounds to me that the outcome would be that those
projects (that may end up being rejected) would show nowhere on [1], but
would still be hosted and can rely on the support and services of the
OpenStack community, right?

[1] http://governance.openstack.org/reference/projects/

Yes they would still be hosted on OpenStack development infrastructure.
Contributions would no longer count toward ATC status, so people who
only contribute to those projects would no longer be able to vote in the
Technical Committee election. They would not have "official" design
summit space either -- they can still camp in the hallway though :)

Hi folks,

For those of you interested in the conversation, the topic was brought up for discussion at the latest TC meeting [1]. Unfortunately I was unable to join; however, I would like to try and respond to some of the comments made, to clarify my position on the matter:

ttx: the neutron PTL says he can't vouch for anything in the neutron
"stadium"

To be honest that's not entirely my position.

The problem stems from the fact that, if I am asked what the stadium
means, as a PTL I can't give a straight answer; ttx put it relatively
well (and I quote him): by adding all those projects under your own
project team, you bypass the Technical Committee approval that they
behave like OpenStack projects and are produced by the OpenStack
community. The Neutron team basically vouches for all of them to be on
par. As far as the Technical Committee goes, they are all being produced
by the same team we originally blessed (the Neutron project team).

The reality is: some of these projects are not produced by the same
team, they do not behave the same way, and they do not follow the same
practices and guidelines. For the stadium to make sense, in my humble
opinion, a definition of these practices should happen and enforcement
should follow, but who's got the time for policing and enforcing
eviction, especially on a large scale? So we either reduce the scale
(which might not be feasible because in OpenStack we're all about
scaling and adding more and more and more), or we address the problem
more radically by evolving the relationship from tight aggregation to
loose association; this way who needs to vouch for the Neutron
relationship is not the Neutron PTL, but the person sponsoring the
project that wants to be associated to Neutron. On the other end, the
vouching may still be pursued, but for a much more focused set of
initiatives that are led by the same team.

russellb: I attempted to start breaking down the different types of
repos that are part of the stadium (consumer, api, implementation of
technology, plugins/drivers).

The distinction between implementation of technology, plugins/drivers and api is not justified IMO, because from a neutron standpoint they all look the same: they leverage the pluggable extensions to the Neutron core framework. As I attempted to say: we have existing plugins and drivers that implement APIs, and we have plugins that implement technology, so the extra classification seems like overspecification.

flaper87: I agree a driver should not be independent

Why, what's your rationale? If we dig deeper, some drivers are small code drops with no or untraceable maintainers. Some are actively developed and can be fairly complex. The spectrum is pretty wide. Either way, I think that preventing them from being independent in principle may hurt the ones that can be pretty elaborate, and the ones that are stale may hurt Neutron's reputation because we're the ones who are supposed to look after them (after all, didn't we vouch for them??)

Armando, definitely agree with you. I think that the first step is
probably declaring what the core team believes they can vouch for in the
governance repo. Any drivers that are outside of that need to be
responsible for their own release and install mechanism. I think the
current middle ground means that no one is responsible for their release
/ install mechanism. Which is bad for everyone.

I think responsibility is a key concept here. What I have been seeing is decisions being made by some folks to suit their own needs, while expecting someone else to take the responsibility for that decision's effect on the rest of the development community and the users.

If we can get back down to the point where decision makers are able to take responsibility for their own decisions, not taking on the responsibility of others in an ever expanding way, then perhaps the responsibility carried by some can be whittled down to a size which is both manageable and portable. By portable I mean something that can be comprehended, explained and, when appropriate, interlaced with some other project's responsibility for mutual support and benefit.

Thanks,
Anita.

dhellmann: we have previously said that projects run by different teams
talk to each other over rest interfaces as a way of clearly delineating
boundaries

As much as I agree wholeheartedly with this statement (which I made
myself during the GBP/Neutron saga), it's unrealistic to convert the
interface between Neutron and its extension mechanisms to be purely
restful, especially for the price that will have to be paid in the process.

Over a 3 year period, what's the cost of not doing it? Both in code debt and friction, as well as the opportunity cost of more interesting services built on these APIs. You and the neutron team would know better than I, but it's worth considering the flip side as well.

sdague: I don't think anything should be extending the neutron API that
isn't controlled by the neutron core team.

The core should be about the core, why would what's built on top be
controlled by the core? By comparison, it's like saying a SIG on the
physical layer of the OSI stack dictates what a SIG on the session layer
should do. It stifles innovation and prevents problems from being solved
by the right domain experts.

Changing the REST API isn't innovation, it's incompatibility for end
users. If we're ever going to have compatible clouds and a real interop
effort, the APIs for all our services need to be very firmly controlled.
Extending the API arbitrarily should be a deprecated concept across
OpenStack.

Otherwise, I have no idea what the neutron (or any other project) API is.

-Sean


Message: 19
Date: Wed, 9 Dec 2015 17:11:14 +0300
From: Davanum Srinivas davanum@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [stable] Stable team PTL nominations are
open
Message-ID:
CANw6fcH61uLHrxXAM5_7uMd65jj1TbA4e90u9-rN7oQVTQe6nw@mail.gmail.com
Content-Type: text/plain; charset=UTF-8

Congrats, Matt!

-- Dims

On Wed, Dec 9, 2015 at 5:02 PM, Doug Hellmann doug@doughellmann.com wrote:

Excerpts from Thierry Carrez's message of 2015-12-09 09:57:24 +0100:

Thierry Carrez wrote:

Thierry Carrez wrote:

The nomination deadline is passed, we have two candidates!

I'll be setting up the election shortly (with Jeremy's help to generate
election rolls).

OK, the election just started. Recent contributors to a stable branch
(over the past year) should have received an email with a link to vote.
If you haven't and think you should have, please contact me privately.

The poll closes on Tuesday, December 8th at 23:59 UTC.
Happy voting!

Election is over[1], let me congratulate Matt Riedemann for his election
! Thanks to everyone who participated to the vote.

Now I'll submit the request for spinning off as a separate project team
to the governance ASAP, and we should be up and running very soon.

Cheers,

[1] http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_2f5fd6c3837eae2a

Congratulations, Matt!

Doug



--
Davanum Srinivas :: https://twitter.com/dims


Message: 20
Date: Wed, 9 Dec 2015 09:11:20 -0500
From: Anita Kuno anteaya@anteaya.info
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [stable] Stable team PTL nominations are
open
Message-ID: 56683688.2050507@anteaya.info
Content-Type: text/plain; charset=windows-1252

On 12/09/2015 09:02 AM, Doug Hellmann wrote:

Excerpts from Thierry Carrez's message of 2015-12-09 09:57:24 +0100:

Thierry Carrez wrote:

Thierry Carrez wrote:

The nomination deadline is passed, we have two candidates!

I'll be setting up the election shortly (with Jeremy's help to generate
election rolls).

OK, the election just started. Recent contributors to a stable branch
(over the past year) should have received an email with a link to vote.
If you haven't and think you should have, please contact me privately.

The poll closes on Tuesday, December 8th at 23:59 UTC.
Happy voting!

Election is over[1], let me congratulate Matt Riedemann for his election
! Thanks to everyone who participated to the vote.

Now I'll submit the request for spinning off as a separate project team
to the governance ASAP, and we should be up and running very soon.

Cheers,

[1] http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_2f5fd6c3837eae2a

Congratulations, Matt!

Doug



Thanks to both candidates for putting their name forward, it is nice to
have an election.

Congratulations Matt,
Anita.


Message: 21
Date: Wed, 09 Dec 2015 09:25:24 -0500
From: Doug Hellmann doug@doughellmann.com
To: openstack-dev openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Evolving the stadium concept
Message-ID: 1449670640-sup-1539@lrrr.local
Content-Type: text/plain; charset=UTF-8

Excerpts from Armando M.'s message of 2015-12-08 22:46:16 -0800:

On 3 December 2015 at 02:21, Thierry Carrez thierry@openstack.org wrote:

Armando M. wrote:

On 2 December 2015 at 01:16, Thierry Carrez <thierry@openstack.org> wrote:

Armando M. wrote:

One solution is, like you mentioned, to make some (or all) of them full-fledged project teams. Be aware that this means the TC would judge those new project teams individually and might reject them if we feel the requirements are not met. We might want to clarify what happens then.

That's a good point. Do we have existing examples of this or would we be sailing in uncharted waters?

It's been pretty common that we rejected/delayed applications for projects where we felt they needed more alignment. In such cases, the immediate result for those projects if they are out of the Neutron "stadium" is that they would fall from the list of official projects. Again, I'm fine with that outcome, but I want to set expectations clearly :)

Understood. It sounds to me that the outcome would be that those
projects (that may end up being rejected) would show nowhere on [1], but
would still be hosted and can rely on the support and services of the
OpenStack community, right?

[1] http://governance.openstack.org/reference/projects/

Yes they would still be hosted on OpenStack development infrastructure.
Contributions would no longer count toward ATC status, so people who
only contribute to those projects would no longer be able to vote in the
Technical Committee election. They would not have "official" design
summit space either -- they can still camp in the hallway though :)

Hi folks,

For those of you interested in the conversation, the topic was brought up for discussion at the latest TC meeting [1]. Unfortunately I was unable to join; however, I would like to try and respond to some of the comments made, to clarify my position on the matter:

ttx: the neutron PTL says he can't vouch for anything in the neutron
"stadium"

To be honest that's not entirely my position.

The problem stems from the fact that, if I am asked what the stadium means,
as a PTL I can't give a straight answer; ttx put it relatively well (and I
quote him): by adding all those projects under your own project team, you
bypass the Technical Committee approval that they behave like OpenStack
projects and are produced by the OpenStack community. The Neutron team
basically vouches for all of them to be on par. As far as the Technical
Committee goes, they are all being produced by the same team we originally
blessed (the Neutron project team).

The reality is: some of these projects are not produced by the same team,
they do not behave the same way, and they do not follow the same practices
and guidelines. For the stadium to make sense, in my humble opinion, a

This is the thing that's key, for me. As Anita points out elsewhere in
this thread, we want to structure our project teams so that decision
making and responsibility are placed in the same set of hands. It sounds
like the Stadium concept has made it easy to let those diverge.

definition of these practices should happen and enforcement should follow,
but who's got the time for policing and enforcing eviction, especially on a
large scale? So we either reduce the scale (which might not be feasible
because in OpenStack we're all about scaling and adding more and more and
more), or we address the problem more radically by evolving the
relationship from tight aggregation to loose association; this way who
needs to vouch for the Neutron relationship is not the Neutron PTL, but the
person sponsoring the project that wants to be associated to Neutron. On
the other end, the vouching may still be pursued, but for a much more
focused set of initiatives that are led by the same team.

russellb: I attempted to start breaking down the different types of repos
that are part of the stadium (consumer, api, implementation of technology,
plugins/drivers).

The distinction between implementation of technology, plugins/drivers and
api is not justified IMO because from a neutron standpoint they all look
like the same: they leverage the pluggable extensions to the Neutron core
framework. As I attempted to say: we have existing plugins and drivers that
implement APIs, and we have plugins that implement technology, so the extra
classification seems overspecification.

flaper87: I agree a driver should not be independent

Why, what's your rationale? If we dig deeper, some drivers are small code drops with no or untraceable maintainers. Some are actively developed and can be fairly complex. The spectrum is pretty wide. Either way, I think that preventing them from being independent in principle may hurt the ones that can be pretty elaborate, and the ones that are stale may hurt Neutron's reputation because we're the ones who are supposed to look after them (after all, didn't we vouch for them??)

From a technical perspective, if there is a stable API for driver
plugins, having the driver managed outside of the core team shouldn't
be a problem. If there's no stable API, the driver shouldn't even
be outside of the core repository yet. I know the split has happened,
I don't know how stable the plugin APIs are, though.

From a governance perspective, I agree it is desirable to enable
(but not require) drivers to live outside of core. But see the previous
paragraph for caveats.

dhellmann: we have previously said that projects run by different teams
talk to each other over rest interfaces as a way of clearly delineating
boundaries

As much as I agree wholeheartedly with this statement (which I made myself
during the GBP/Neutron saga), it's unrealistic to convert the interface
between Neutron and its extension mechanisms to be purely restful,
especially for the price that will have to be paid in the process.

Right, I think what we're saying is that you should stop treating
these things as extensions. There are true technical issues introduced
by the need to have strong API guarantees to support out-of-tree
extensions. As Sean mentioned in his response, the TC and community
want projects to have stable, fixed, APIs that do not change based
on deployment choices, so it is easy for users to understand the
API and so we can enable interoperability between deployments.
DefCore depends on these fixed APIs because of the way tests from
the Tempest suite are used in the validation process. Continuing
to support extensions in Neutron is going to make broad adoption
of Neutron APIs for DefCore harder.

sdague: I don't think anything should be extending the neutron API that
isn't controlled by the neutron core team.

The core should be about the core, why would what's built on top be
controlled by the core? By comparison, it's like saying a SIG on the
physical layer of the OSI stack dictates what a SIG on the session layer
should do. It stifles innovation and prevents problems from being solved by
the right domain experts.

It needs to be possible to build on top of neutron without injecting
yourself into the guts of neutron at runtime. See above.

Doug

That's all I managed to process whilst reading the log. I am sure I missed
some important comments and I apologize for not replying to them; one thing
I didn't miss for sure was all the hugging :)

Thanks for acknowledging the discussion and the time and consideration
given during the TC meeting.

Cheers,
Armando

[1] http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-12-08-20.01.html

--
Thierry Carrez (ttx)



Message: 22
Date: Wed, 9 Dec 2015 07:41:17 -0700
From: Curtis serverascode@gmail.com
To: Jesse Pretorius jesse.pretorius@gmail.com
Cc: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-operators] [openstack-ansible]
Mid Cycle Sprint
Message-ID:
CAJ_JamAoAf58dajm3awyfaSFib=r7rHGa44My+1KDX3g5-iCZg@mail.gmail.com
Content-Type: text/plain; charset=UTF-8

On Wed, Dec 9, 2015 at 5:45 AM, Jesse Pretorius
jesse.pretorius@gmail.com wrote:

Hi everyone,

At the Mitaka design summit in Tokyo we had some corridor discussions about
doing a mid-cycle meetup for the purpose of continuing some design
discussions and doing some specific sprint work.


I'd like indications of who would like to attend and what
locations/dates/topics/sprints would be of interest to you.

I'd like to get more involved in openstack-ansible. I'll be going to
the operators mid-cycle in Feb, so could stay later and attend in West
London. However, I could likely make it to San Antonio as well. Not
sure if that helps but I will definitely try to attend wherever it
occurs.

Thanks.

For guidance/background I've put some notes together below:

Location


We have contributors, deployers and downstream consumers across the globe so
picking a venue is difficult. Rackspace have facilities in the UK (Hayes,
West London) and in the US (San Antonio) and are happy for us to make use of
them.

Dates


Most of the mid-cycles for upstream OpenStack projects are being held in
January. The Operators mid-cycle is on February 15-16.

As I feel that it's important that we're all as involved as possible in
these events, I would suggest that we schedule ours after the Operators
mid-cycle.

It strikes me that it may be useful to do our mid-cycle immediately after
the Ops mid-cycle, and do it in the UK. This may help to optimise travel for
many of us.

Format


The format of the summit is really for us to choose, but typically they're
formatted along the lines of something like this:

Day 1: Big group discussions similar in format to sessions at the design
summit.

Day 2: Collaborative code reviews, usually performed on a projector, where
the goal is to merge things that day (if a review needs more than a single
iteration, we skip it. If a review needs small revisions, we do them on the
spot).

Day 3: Small group / pair programming.

Topics


Some topics/sprints that come to mind that we could explore/do are:
- Install Guide Documentation Improvement [1]
- Development Documentation Improvement (best practises, testing, how to
develop a new role, etc)
- Upgrade Framework [2]
- Multi-OS Support [3]

[1] https://etherpad.openstack.org/p/oa-install-docs
[2] https://etherpad.openstack.org/p/openstack-ansible-upgrade-framework
[3] https://etherpad.openstack.org/p/openstack-ansible-multi-os-support

--
Jesse Pretorius
IRC: odyssey4me



--
Blog: serverascode.com


Message: 23
Date: Wed, 9 Dec 2015 08:44:19 -0600
From: Matt Riedemann mriedem@linux.vnet.ibm.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] stable/liberty 13.1.0 release
planning
Message-ID: 56683E43.7080505@linux.vnet.ibm.com
Content-Type: text/plain; charset=windows-1252; format=flowed

On 12/9/2015 3:46 AM, Thierry Carrez wrote:

Matt Riedemann wrote:

We've had a few high priority regression fixes in stable/liberty [1][2]
so I think it's time to do a release.
[...]

You probably mean 12.0.1 ?

Err 12.1.0, yeah. Since we've had dependency updates in stable/liberty I
thought that made it a minor version bump to 12.1.0.

--

Thanks,

Matt Riedemann


Message: 24
Date: Wed, 9 Dec 2015 15:48:49 +0100
From: Sebastien Badia sbadia@redhat.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [puppet] proposing Cody Herriges part of
Puppet OpenStack core
Message-ID: 20151209144848.GC11592@baloo.sebian.fr
Content-Type: text/plain; charset="utf-8"

On Tue, Dec 08, 2015 at 11:49:08AM (-0500), Emilien Macchi wrote:

Hi,

Back in "old days", Cody was already core on the modules, when they were
hosted by Puppetlabs namespace.
His contributions [1] are very valuable to the group:
* strong knowledge on Puppet and all dependencies in general.
* very helpful to debug issues related to Puppet core or dependencies
(beaker, etc).
* regular attendance to our weekly meeting
* pertinent reviews
* very understanding of our coding style

I would like to propose having him back part of our core team.
As usual, we need to vote.

Of course, a big +1!

Thanks Cody!

Seb


Message: 25
Date: Wed, 9 Dec 2015 15:57:52 +0100
From: Roman Prykhodchenko me@romcheg.me
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Fuel] Private links
Message-ID: BLU436-SMTP1319845D60F55EF3F4CAF4ADE80@phx.gbl
Content-Type: text/plain; charset="utf-8"

Folks,

Over the last two days I have marked several bugs as incomplete because they refer to one or more private resources that are not accessible to anyone who does not have a @mirantis.com account.

Please keep in mind that Fuel is an open source project and the bug tracker we use is absolutely public. There should not be any private links in public bugs on Launchpad. Please don't attach links to files on corporate Google Drive or to tickets in Jira. The same rule should be applied to code reviews.

That said, I'd like to confirm that we can submit world-accessible links to BVT results. If not, that should be fixed ASAP.

  • romcheg



Message: 26
Date: Wed, 9 Dec 2015 10:02:34 -0500
From: michael mccune msm@redhat.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] making project_id optional in API
URLs
Message-ID: 5668428A.3090400@redhat.com
Content-Type: text/plain; charset=windows-1252; format=flowed

On 12/08/2015 05:59 PM, Adam Young wrote:

I think it is kind of irrelevant. It can be there or not be there in the URL itself, so long as it does not show up in the service catalog. From a policy standpoint, having the project in the URL means that you can do an access control check without fetching the object from the database; you should, however, confirm that the object returned belongs to the project at a later point.

from the policy standpoint does it matter if the project id appears in
the url or in the headers?

mike


Message: 27
Date: Wed, 9 Dec 2015 16:13:17 +0100
From: Jaume Devesa devvesa@gmail.com
To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [midonet] Split up python-midonetclient
Message-ID:
CABvUA7kGm=xc+0Bh_p7ZAg6=A_T=FXDZiQ4PGNoGj2W7S6aQfQ@mail.gmail.com
Content-Type: text/plain; charset="utf-8"

Hi Galo,

I think the goal of this split is well explained by Sandro in the first
mails of the chain:

  1. Downstream packaging
  2. Tagging the delivery properly as a library
  3. Adding as a project on pypi

OpenStack provides a tarballs web page [1] for each branch of each project in the infrastructure. Projects like Delorean can then download these master-branch tarballs, create the packages and host them in a target repository for each of the rpm-like distributions [2]. I am pretty sure that there is something similar for Ubuntu.

Everything is done in a very straightforward and standardized way, because every repo has its own deliverable. You can look at how they are packaged and you won't see many differences between them. Packaging python-midonetclient will be trivial if it is separated into its own repo. It will be complicated, and we'll have to do tricky things, if it is a directory inside the midonet repo. And I am not sure the Ubuntu and RDO communities will allow us to have weird packaging metadata repos.

So to me the main reason is

  1. Leverage all the infrastructure and procedures that OpenStack offers to
    integrate MidoNet
    as best as possible with the release process and delivery.

Regards,

[1]: http://tarballs.openstack.org/
[2]: http://trunk.rdoproject.org

On 9 December 2015 at 15:52, Antoni Segura Puimedon toni@midokura.com
wrote:

---------- Forwarded message ----------
From: Galo Navarro galo@midokura.com
Date: Wed, Dec 9, 2015 at 2:48 PM
Subject: Re: [openstack-dev] [midonet] Split up python-midonetclient
To: "OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
Cc: Jaume Devesa jaume@midokura.com

Ditto. We already have a mirror repo of pyc for this purpose
https://github.com/midonet/python-midonetclient, synced daily.

One of the problems with that is that it does not have any git log history, nor does it feel like a coding project at all.

Of course, because the goal of this repo is not to provide a
changelog. It's to provide an independent repo. If you want git log,
you should do a git log python-midonetclient in the source repo
(/midonet/midonet).

Allow me to put forward a solution that will allow you to keep the development in the midonet tree while, at the same time, having a proper repository with identifiable patches in github.com/midonet/python-midonetclient.

Thanks, but I insist: can we please clarify what we are trying to achieve, before we jump into solutions?

g



--
Jaume Devesa
Software Engineer at Midokura


Message: 28
Date: Thu, 10 Dec 2015 00:28:02 +0900
From: "Ken'ichi Ohmichi" ken1ohmichi@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] jsonschema for scheduler hints
Message-ID:
CAA393vhL4+qbYikp0L7wyGSbcZ2r7CSBANM_Fd1UXT0rU_HtZA@mail.gmail.com
Content-Type: text/plain; charset=UTF-8

2015-12-09 21:20 GMT+09:00 Sean Dague sean@dague.net:

On 12/08/2015 11:47 PM, Ken'ichi Ohmichi wrote:

Hi Sylvain,

2015-12-04 17:48 GMT+09:00 Sylvain Bauza sbauza@redhat.com:

That leaves the out-of-tree discussion about custom filters and how we could have consistent behaviour given that. Should we accept something in a specific deployment while another deployment could 401 against it? Mmm, that seems bad to me IMHO.

We can have code to check that the out-of-tree filters don't expose the same hints as any in-tree filter.

Sure, and thank you for that; that was missing in the past. That said, there are still some interoperability concerns, let me explain: as a cloud operator, I'm now providing a custom filter (say MyAwesomeFilter) which looks up a hint called 'myawesomehint'.

If we enforce strict validation (and do not accept arbitrary hints), it would mean that this cloud would accept a request with 'myawesomehint' while another cloud that isn't running MyAwesomeFilter would deny the same request.

I am thinking the operator's/vendor's own filter should include some implementation code for registering its custom hint with the jsonschema, to expose/validate the available hints in the future. The mechanism should be as easy as possible so that they can implement the code easily. After that, we will be able to make the validation strict again.

Yeh, that was my thinking. As someone that did a lot of the jsonschema
work, is that something you could prototype?

Yes.
In the prototype https://review.openstack.org/#/c/220440/ , each filter
needs to contain get_scheduler_hint_api_schema(), which returns the
available scheduler_hints parameters. Then stevedore detects these
parameters from each filter and extends the jsonschema with them.
In the current prototype, the detection and extension are implemented in nova-api,
but we need to change the prototype like:

  1. nova-sched detects available scheduler-hints from filters.
  2. nova-sched passes these scheduler-hints to nova-api via RPC.
  3. nova-api extends jsonschema with the gotten scheduler-hints.

After implementing the mechanism, the operator/vendor-owned filters just
need to implement get_scheduler_hint_api_schema(). That is not so
hard, I feel.
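
For illustration only, here is a minimal sketch of the idea (the filter name,
hint name and helper function are assumptions based on the description above,
not the actual prototype code):

    # Hypothetical out-of-tree filter exposing the hints it understands
    # as a jsonschema fragment.
    class MyAwesomeFilter(object):
        @staticmethod
        def get_scheduler_hint_api_schema():
            return {'myawesomehint': {'type': 'string'}}

    def build_scheduler_hints_schema(filters):
        # nova-api would merge every filter's fragment into the base
        # scheduler_hints schema before turning strict validation back on.
        schema = {
            'type': 'object',
            'properties': {},
            'additionalProperties': False,
        }
        for f in filters:
            schema['properties'].update(f.get_scheduler_hint_api_schema())
        return schema

    # e.g. build_scheduler_hints_schema([MyAwesomeFilter]) would then be
    # used to validate the 'os:scheduler_hints' request body.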

Thanks
Ken Ohmichi


Message: 29
Date: Wed, 9 Dec 2015 09:45:24 -0600
From: Matt Riedemann mriedem@linux.vnet.ibm.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] stable/liberty 12.0.1 release
planning
Message-ID: 56684C94.8020603@linux.vnet.ibm.com
Content-Type: text/plain; charset=windows-1252; format=flowed

On 12/9/2015 8:44 AM, Matt Riedemann wrote:

On 12/9/2015 3:46 AM, Thierry Carrez wrote:

Matt Riedemann wrote:

We've had a few high priority regression fixes in stable/liberty [1][2]
so I think it's time to do a release.
[...]

You probably mean 12.1.0?

Err 12.1.0, yeah. Since we've had dependency updates in stable/liberty I
thought that made it a minor version bump to 12.1.0.

Talked about this in the release channel this morning [1]. Summary is: as
long as we aren't raising the minimum required version of a dependency
in stable/liberty, the nova server release should be 12.0.1. We'd
only bump to 12.1.0 if we needed a newer minimum dependency, and I don't
think we have one of those (but will double check).

[1]
http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2015-12-09.log.html#t2015-12-09T15:07:12

--

Thanks,

Matt Riedemann


Message: 30
Date: Wed, 9 Dec 2015 16:48:06 +0100
From: Galo Navarro galo@midokura.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [midonet] Split up python-midonetclient
Message-ID:
CACSK4Abq4kKQNtesbEqvJk3XwxKq2qMuXOLqGMxqNU5qgErz7w@mail.gmail.com
Content-Type: text/plain; charset="utf-8"

Hi,

I think the goal of this split is well explained by Sandro in the first
mails of the chain:

  1. Downstream packaging
  2. Tagging the delivery properly as a library
  3. Adding as a project on pypi

Not really, because (1) and (2) are a consequence of the repo split, not
a cause. Please correct me if I'm reading it wrong, but he's saying:

  • I want tarballs
  • To produce tarballs, I want a separate repo, and separate repos have (1),
    (2) as requirements.

So this is where I'm going: producing a tarball of pyc does not require a
separate repo. If we don't need a new repo, we don't need to do all the
things that a separate repo requires.

Now:

OpenStack provides us a tarballs web page[1] for each branch of each project
in the infrastructure. Then, projects like Delorean allow us to download these
master-branch tarballs, create the packages and host them in a target
repository for each one of the rpm-like distributions[2]. I am pretty sure
that there is something similar for Ubuntu.

This looks more accurate: you're actually not asking for a tarball. You're
asking to be compatible with a system that produces tarballs off a
repo. This is very different :)

So questions: we have a standalone mirror of the repo, that could be used
for this purpose. Say we move the mirror to OSt infra, would things work?

Everything is done in a very straightforward and standardized way, because
every repo has its own deliverable. You can look at how they are packaged and
you won't see too many differences between them. Packaging python-midonetclient
will be trivial if it is separated into a single repo. It will be

But it will create a lot of other problems in development. With a very important
difference: the pain created by the mirror solution is solved cheaply with
software (e.g.: as you know, with a script). OTOH, the pain created by
splitting the repo is paid in very costly human resources.

complicated, and we'll have to do tricky things if it is a directory inside
the midonet repo. And I am not
sure if the Ubuntu and RDO communities will allow us to have weird packaging
metadata repos.

I do get this point and it's a major concern. IMO we should split it into a
different conversation, as it's not related to where PYC lives, but to a
more general question: do we really need a repo per package?

Like Guillermo and myself said before, the midonet repo generates 4
packages, and this will grow. If having a package per repo is really a
strong requirement, there is a lot of work ahead, so we need to start
talking about this now. But like I said, it's orthogonal to the PYC points
above.

g


Message: 31
Date: Wed, 9 Dec 2015 16:03:02 +0000
From: "Fabio Giannetti (fgiannet)" fgiannet@cisco.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Monasca]: Mid Cycle Doodle
Message-ID: D28D901E.D094%fgiannet@cisco.com
Content-Type: text/plain; charset="us-ascii"

Guys,
Please find here the doodle for the mid-cycle:

http://doodle.com/poll/yy4unhffy7hi3x67

If we run the meeting Thu/Fri 28/29 we can have a joint session with Congress
on the 28th.
The first week of Feb is all open, and I guess we need to decide whether to do
2 or 3 days.
Thanks,
Fabio


Message: 32
Date: Wed, 9 Dec 2015 09:27:37 -0700
From: John Griffith john.griffith8@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [cinder] Dependencies of snapshots on
volumes
Message-ID:
CAPWkaSUbzzr2FpjsqmkCD+NCvmPTVMKR-Q2fZRUaYcc=OfOiJw@mail.gmail.com
Content-Type: text/plain; charset="utf-8"

On Tue, Dec 8, 2015 at 9:10 PM, Li, Xiaoyan xiaoyan.li@intel.com wrote:

Hi all,

Currently when deleting a volume, it checks whether there are snapshots
created from it. If yes, deletion is prohibited. But it allows extending
the volume, with no check whether there are snapshots from it.

Correct?

The two behaviors in Cinder are not consistent from my viewpoint.

Well, your snapshot was taken at a point in time; and if you do a create
from snapshot the whole point is you want what you HAD when the snapshot
command was issued and NOT what happened afterwards. So in my opinion this
is not inconsistent at all.

In backend storage, their behaviors are the same.

Which backend storage are you referring to in this case?

For a full snapshot, if copying is still in progress, neither extend nor
deletion is allowed. If snapshot copying finishes, both extend and deletion are
allowed.
For an incremental snapshot, neither extend nor deletion is allowed.

So your particular backend has "different/specific" rules/requirements
around snapshots. That's pretty common; I don't suppose there's any way to
hack around this internally? In other words, do things on your backend like
clones as snaps etc. to make up for the differences in behavior?

As a result, this raises two concerns here:
1. Make such operations behave the same in Cinder.
2. I prefer to let the storage driver decide the dependencies, not the
general core code.

I have and always will strongly disagree with this approach and your
proposal. Sadly we've already started to allow more and more vendor
drivers to just "do their own thing" and implement their own special API
methods. This is in my opinion a horrible path and defeats the entire
purpose of having a Cinder abstraction layer.

This will make it impossible to have compatibility between clouds for those
that care about it, and it will make it impossible for operators/deployers to
understand exactly what they can and should expect in terms of the usage of
their cloud. Finally, it will also mean that now OpenStack API
functionality is COMPLETELY dependent on the backend device. I know people are
sick of hearing me say this, so I'll keep it short and say it one more time:
"Compatibility in the API matters and should always be our priority"

Meanwhile, if we let the driver decide the dependencies, the following
changes need to be done in Cinder:
1. When creating a snapshot from a volume, it needs to copy all metadata of
the volume to the snapshot. Currently it doesn't.
Any other potential issues, please let me know.

Any input will be appreciated.

Best wishes
Lisa


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Message: 33
Date: Wed, 9 Dec 2015 11:03:30 -0600
From: Chris Friesen chris.friesen@windriver.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [cinder] Dependencies of snapshots on
volumes
Message-ID: 56685EE2.8020303@windriver.com
Content-Type: text/plain; charset="utf-8"; format=flowed

On 12/09/2015 10:27 AM, John Griffith wrote:

On Tue, Dec 8, 2015 at 9:10 PM, Li, Xiaoyan xiaoyan.li@intel.com wrote:

Hi all,

Currently when deleting a volume, it checks whether there are snapshots
created from it. If yes, deletion is prohibited. But it allows extending
the volume, with no check whether there are snapshots from it.

Correct?

The two behaviors in Cinder are not consistent from my viewpoint.

Well, your snapshot was taken at a point in time; and if you do a create from
snapshot the whole point is you want what you HAD when the snapshot command was
issued and NOT what happened afterwards. So in my opinion this is not
inconsistent at all.

If we look at it a different way...suppose that the snapshot is linked in a
copy-on-write manner with the original volume. If someone deletes the original
volume then the snapshot is in trouble. However, if someone modifies the
original volume then a new chunk of backing store is allocated for the original
volume and the snapshot still references the original contents.

If we did allow deletion of the volume we'd have to either keep the volume
backing store around as long as any snapshots are around, or else flatten any
snapshots so they're no longer copy-on-write.
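
To make that dependency concrete, here is a toy sketch (not Cinder code; all
names are made up for illustration) of why deleting a volume is blocked while
copy-on-write snapshots still reference its backing store:

    class BackingStore(object):
        def __init__(self, data):
            self.data = data
            self.refcount = 1          # referenced by the volume itself

    class Volume(object):
        def __init__(self, data):
            self.store = BackingStore(data)

        def snapshot(self):
            # A copy-on-write snapshot shares the volume's backing store.
            self.store.refcount += 1
            return self.store

        def delete(self, flatten=False):
            if self.store.refcount > 1 and not flatten:
                raise RuntimeError("snapshots still depend on this volume")
            # flatten=True stands in for copying the data into each
            # snapshot first, so the shared backing store can be released.
            self.store.refcount -= 1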

Chris


Message: 34
Date: Wed, 9 Dec 2015 17:06:12 +0000
From: "Kris G. Lindgren" klindgren@godaddy.com
To: Oguz Yarimtepe oguzyarimtepe@gmail.com,
"openstack-operators@lists.openstack.org"
openstack-operators@lists.openstack.org, "OpenStack Development
Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-operators] [keystone] RBAC
usage at production
Message-ID: 7CAD9EFB-B4B7-48CA-8771-EEE821FB27EB@godaddy.com
Content-Type: text/plain; charset="utf-8"

In other projects the policy.json file is read on each API request, so changes to the file take effect immediately. I was 90% sure keystone was the same way?
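
(As an illustration only, and not keystone's or oslo.policy's actual code, the
usual pattern looks roughly like this: cache the parsed rules and re-read
policy.json only when its modification time changes, so edits apply without a
service restart.)

    import json
    import os

    class PolicyCache(object):
        # Reload a JSON policy file whenever it changes on disk.
        def __init__(self, path):
            self.path = path
            self.mtime = None
            self.rules = {}

        def get_rules(self):
            mtime = os.path.getmtime(self.path)
            if mtime != self.mtime:
                with open(self.path) as f:
                    self.rules = json.load(f)
                self.mtime = mtime
            return self.rules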


Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

On 12/9/15, 1:39 AM, "Oguz Yarimtepe" oguzyarimtepe@gmail.com wrote:

Hi,

I am wondering whether there are people using RBAC in production. The
policy.json file has a structure that requires a restart of the service
each time you edit the file. Is there an on-the-fly solution, or any tips
about it?


OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

Message: 35
Date: Wed, 9 Dec 2015 18:10:45 +0100
From: Jordan Pittier jordan.pittier@scality.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [cinder] Dependencies of snapshots on
volumes
Message-ID:
CAAKgrc=qEQjYn9bu4YRd4VraGwUFGQEcEU9siX8UubpbV_wBSg@mail.gmail.com
Content-Type: text/plain; charset="utf-8"

Hi,
FWIW, I completely agree with what John said. All of it.

Please don't do that.

Jordan
-------------- next part --------------
An HTML attachment was scrubbed...
URL:


Message: 36
Date: Wed, 9 Dec 2015 20:17:30 +0300
From: Dmitry Klenov dklenov@mirantis.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Fuel] [Ubuntu bootstrap] Ubuntu bootstrap
becomes default in the Fuel
Message-ID:
CAExpkLxMQnpCYVv4tzBW9yyuYjeH8+P7v_V0pU1_h_Nvfwamog@mail.gmail.com
Content-Type: text/plain; charset="utf-8"

Hello folks,

I would like to announce that we have completed all items for 'Ubuntu
bootstrap' feature. Thanks to the team for hard work and dedication!

Starting from today, Ubuntu bootstrap is enabled in Fuel by default.

Also, it is worth mentioning that Ubuntu bootstrap is integrated with the
'Biosdevnames' feature implemented by the MOS-Linux team, so the new bootstrap
will also benefit from persistent interface naming.

Thanks,
Dmitry.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:


Message: 37
Date: Wed, 9 Dec 2015 17:18:29 +0000
From: Edgar Magana edgar.magana@workday.com
To: "Kris G. Lindgren" klindgren@godaddy.com, Oguz Yarimtepe
oguzyarimtepe@gmail.com, "openstack-operators@lists.openstack.org"
openstack-operators@lists.openstack.org, "OpenStack Development
Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-operators] [keystone] RBAC
usage at production
Message-ID: BFCD8D28-53F9-48BC-8F55-A37E9EFD5269@workdayinternal.com
Content-Type: text/plain; charset="utf-8"

We use RBAC in production, but basically modify networking operations and some compute ones. In our case we don't need to restart the services if we modify the policy.json file. I am surprised that keystone is not following the same process.

Edgar

On 12/9/15, 9:06 AM, "Kris G. Lindgren" klindgren@godaddy.com wrote:

In other projects the policy.json file is read on each API request, so changes to the file take effect immediately. I was 90% sure keystone was the same way?


Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

On 12/9/15, 1:39 AM, "Oguz Yarimtepe" oguzyarimtepe@gmail.com wrote:

Hi,

I am wondering whether there are people using RBAC in production. The
policy.json file has a structure that requires a restart of the service
each time you edit the file. Is there an on-the-fly solution, or any tips
about it?


OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Message: 38
Date: Wed, 9 Dec 2015 10:31:47 -0700
From: Doug Wiegley dougwig@parksidesoftware.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Evolving the stadium concept
Message-ID:
A46D4F8A-1334-4037-91C0-D557FB4A8178@parksidesoftware.com
Content-Type: text/plain; charset=utf-8

On Dec 9, 2015, at 7:25 AM, Doug Hellmann doug@doughellmann.com wrote:

Excerpts from Armando M.'s message of 2015-12-08 22:46:16 -0800:

On 3 December 2015 at 02:21, Thierry Carrez thierry@openstack.org wrote:

Armando M. wrote:

On 2 December 2015 at 01:16, Thierry Carrez thierry@openstack.org wrote:

Armando M. wrote:

One solution is, like you mentioned, to make some (or all) of them
full-fledged project teams. Be aware that this means the TC would judge
those new project teams individually and might reject them if we feel
the requirements are not met. We might want to clarify what happens
then.

That's a good point. Do we have existing examples of this or
would we be
sailing in uncharted waters?

It's been pretty common that we rejected/delayed applications for
projects where we felt they needed more alignment. In such cases, the
immediate result for those projects if they are out of the Neutron
"stadium" is that they would fall from the list of official projects.
Again, I'm fine with that outcome, but I want to set expectations
clearly :)

Understood. It sounds to me that the outcome would be that those
projects (that may end up being rejected) would show nowhere on [1], but
would still be hosted and can rely on the support and services of the
OpenStack community, right?

[1] http://governance.openstack.org/reference/projects/

Yes they would still be hosted on OpenStack development infrastructure.
Contributions would no longer count toward ATC status, so people who
only contribute to those projects would no longer be able to vote in the
Technical Committee election. They would not have "official" design
summit space either -- they can still camp in the hallway though :)

Hi folks,

For those of you who are interested in the conversation, the topic was brought
up for discussion at the latest TC meeting [1]. Unfortunately I was unable to
join; however, I would like to try and respond to some of the comments made,
to clarify my position on the matter:

ttx: the neutron PTL says he can't vouch for anything in the neutron
"stadium"

To be honest that's not entirely my position.

The problem stems from the fact that, if I am asked what the stadium means,
as a PTL I can't give a straight answer; ttx put it relatively well (and I
quote him): by adding all those projects under your own project team, you
bypass the Technical Committee approval that they behave like OpenStack
projects and are produced by the OpenStack community. The Neutron team
basically vouches for all of them to be on par. As far as the Technical
Committee goes, they are all being produced by the same team we originally
blessed (the Neutron project team).

The reality is: some of these projects are not produced by the same team,
they do not behave the same way, and they do not follow the same practices
and guidelines. For the stadium to make sense, in my humble opinion, a

This is the thing that's key, for me. As Anita points out elsewhere in
this thread, we want to structure our project teams so that decision
making and responsibility are placed in the same set of hands. It sounds
like the Stadium concept has made it easy to let those diverge.

definition of these practices should happen and enforcement should follow,
but who's got the time for policing and enforcing eviction, especially on a
large scale? So we either reduce the scale (which might not be feasible
because in OpenStack we're all about scaling and adding more and more and
more), or we address the problem more radically by evolving the
relationship from tight aggregation to loose association; this way, the one who
needs to vouch for the Neutron relationship is not the Neutron PTL, but the
person sponsoring the project that wants to be associated with Neutron. On
the other hand, the vouching may still be pursued, but for a much more
focused set of initiatives that are led by the same team.

russellb: I attempted to start breaking down the different types of repos
that are part of the stadium (consumer, api, implementation of technology,
plugins/drivers).

The distinction between implementation of technology, plugins/drivers and
api is not justified IMO, because from a neutron standpoint they all look
the same: they leverage the pluggable extensions to the Neutron core
framework. As I attempted to say: we have existing plugins and drivers that
implement APIs, and we have plugins that implement technology, so the extra
classification seems like overspecification.

flaper87: I agree a driver should not be independent

Why, what's your rationale? If we dig deeper, some drivers are small code
drops with no or untraceable maintainers. Some are actively developed and
can be fairly complex. The spectrum is pretty wide. Either way, I think
that preventing them from being independent in principle may hurt the ones
that can be pretty elaborate, and the ones that are stale may hurt
Neutron's reputation because we're the ones who are supposed to look after
them (after all, didn't we vouch for them?)

From a technical perspective, if there is a stable API for driver
plugins, having the driver managed outside of the core team shouldn't
be a problem. If there's no stable API, the driver shouldn't even
be outside of the core repository yet. I know the split has happened,
I don't know how stable the plugin APIs are, though.

Agreed, and making that stable interface is a key initiative in mitaka.

From a governance perspective, I agree it is desirable to enable
(but not require) drivers to live outside of core. But see the previous
paragraph for caveats.

dhellmann: we have previously said that projects run by different teams
talk to each other over rest interfaces as a way of clearly delineating
boundaries

As much as I agree wholeheartedly with this statement (which I made myself
during the GBP/Neutron saga), it's unrealistic to convert the interface
between Neutron and its extension mechanisms to be purely restful,
especially for the price that will have to be paid in the process.

Right, I think what we're saying is that you should stop treating
these things as extensions. There are true technical issues introduced
by the need to have strong API guarantees to support out-of-tree
extensions. As Sean mentioned in his response, the TC and community
want projects to have stable, fixed, APIs that do not change based
on deployment choices, so it is easy for users to understand the
API and so we can enable interoperability between deployments.
DefCore depends on these fixed APIs because of the way tests from
the Tempest suite are used in the validation process. Continuing
to support extensions in Neutron is going to make broad adoption
of Neutron APIs for DefCore harder.

sdague: I don't think anything should be extending the neutron API that
isn't controlled by the neutron core team.

The core should be about the core, why would what's built on top be
controlled by the core? By comparison, it's like saying a SIG on the
physical layer of the OSI stack dictates what a SIG on the session layer
should do. It stifles innovation and prevents problems from being solved by
the right domain experts.

It needs to be possible to build on top of neutron without injecting
yourself into the guts of neutron at runtime. See above.

In point of fact, it is possible, and there is an API to do so, but... most choose not to. I won't say that's an argument to keep extensions, but it might be worth examining why people are choosing that route, because I think it points to a big innovation/velocity killer in "the openstack way".

One possible interpretation: we have all these rules that basically amount to: 1) don't be so small you can't be a wsgi/db app, which is expensive in the current wild west mode of building them, 2) don't be so large that we feel you've diverged too much from what we want things to look like, and 3) be exactly like a rest service with some driver backends implementing some sort of *aaS.

That leaves a pretty narrow, and relatively expensive, runway.

We don't want extensions for reasons of interop, fine. I think it's a fairly silly argument to say that REST APIs can be optional, but extensions to an API can't, because that extra '/foobar/' is the killer, but whatever. However, maybe we should devote some thinking as to why neutron extensions are being used, and how we could leverage the dev work that doesn't feel that jumping through the above hoops is appropriate/worth it/etc.

Thanks,
doug

Doug

That's all I managed to process whilst reading the log. I am sure I missed
some important comments and I apologize for not replying to them; one thing
I didn't miss for sure was all the hugging :)

Thanks for acknowledging the discussion and the time and consideration
given during the TC meeting.

Cheers,
Armando

[1] http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-12-08-20.01.html

--
Thierry Carrez (ttx)


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Message: 39
Date: Wed, 9 Dec 2015 17:32:06 +0000
From: Arkady_Kanevsky@DELL.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [cinder] Dependencies of snapshots on
volumes
Message-ID:
0c2713f7979240288bbb5f912c239ddd@AUSX13MPS308.AMER.DELL.COM
Content-Type: text/plain; charset="us-ascii"

You can do a lazy copy that happens only when the volume or snapshot is deleted.
You will need to have a refcount on the metadata.

-----Original Message-----
From: Li, Xiaoyan [mailto:xiaoyan.li@intel.com]
Sent: Tuesday, December 08, 2015 10:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [cinder] Dependencies of snapshots on volumes

Hi all,

Currently when deleting a volume, it checks whether there are snapshots created from it. If yes, deletion is prohibited. But it allows extending the volume, with no check whether there are snapshots from it.

The two behaviors in Cinder are not consistent from my viewpoint.

In backend storage, their behaviors are the same.
For a full snapshot, if copying is still in progress, neither extend nor deletion is allowed. If snapshot copying finishes, both extend and deletion are allowed.
For an incremental snapshot, neither extend nor deletion is allowed.

As a result, this raises two concerns here:
1. Make such operations behave the same in Cinder.
2. I prefer to let the storage driver decide the dependencies, not the general core code.

Meanwhile, if we let the driver decide the dependencies, the following changes need to be done in Cinder:
1. When creating a snapshot from a volume, it needs to copy all metadata of the volume to the snapshot. Currently it doesn't.
Any other potential issues, please let me know.

Any input will be appreciated.

Best wishes
Lisa


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

End of OpenStack-dev Digest, Vol 44, Issue 33



OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
responded Dec 9, 2015 by stuart.mclaren_at_hp (1,760 points)   4
0 votes

-----Original Message-----
From: Doug Hellmann [mailto:doug@doughellmann.com]
Sent: 09 December 2015 19:28
To: openstack-dev
Subject: Re: [openstack-dev] [python-glanceclient] Return request-id to caller

Excerpts from Flavio Percoco's message of 2015-12-09 09:09:10 -0430:
On 09/12/15 11:33 +0000, Kekane, Abhishek wrote:

Hi Devs,

We are adding support for returning ‘x-openstack-request-id’ to the
caller as per the design proposed in cross-project specs:

http://specs.openstack.org/openstack/openstack-specs/specs/
return-request-id.html

Problem Description:

Cannot add a new property of list type to the warlock.model object.

How is a model object created:

Let’s take an example of glanceclient.api.v2.images.get() call [1]:

Here after getting the response we call model() method. This model()
does the job of creating a warlock.model object(essentially a dict)
based on the schema given as argument (image schema retrieved from
glance in this case). Inside
model() the raw() method simply return the image schema as JSON
object. The advantage of this warlock.model object over a simple dict
is that it validates any changes to object based on the rules specified in the reference schema.
The keys of this model object are available as object properties to
the caller.

Underlying reason:

The schema for different sub APIs is returned a bit differently. For
images, metadef APIs glance.schema.Schema.raw() is used which returns
a schema containing “additionalProperties”: {“type”: “string”}.
Whereas for members and tasks APIs glance.schema.Schema.minimal() is
used to return schema object which does not contain “additionalProperties”.

So we can add extra properties of any type to the model object
returned from members or tasks API but for images and metadef APIs we
can only add properties which can be of type string. Also for the
latter case we depend on the glance configuration to allow additional properties.

As per our analysis we have come up with two approaches for resolving
this
issue:

Approach #1: Inject request_ids property in the warlock model object
in glance client

Here we do the following:

  1. Inject the ‘request_ids’ as additional property into the model
    object (returned from model())

  2. Return the model object which now contains request_ids property

Limitations:

  1. Because the glance schemas for images and metadef only allows
    additional properties of type string, so even though natural type of
    request_ids should be list we have to make it as a comma separated
    ‘string’ of request ids as a compromise.

  2. Lot of extra code is needed to wrap objects returned from the
    client API so that the caller can get request ids. For example we
    need to write wrapper classes for dict, list, str, tuple, generator.

  3. Not a good design as we are adding a property which should
    actually be a base property but added as additional property as a compromise.

  4. There is a dependency on glance whether to allow custom/additional
    properties or not. [2]

Approach #2: Add ‘request_ids’ property to all schema definitions in
glance

Here we add ‘request_ids’ property as follows to the various APIs (schema):

“request_ids”: {

"type": "array",

"items": {

"type": "string"

}

}

Doing this will make changes in glance client very simple as compared
to approach#1.

This also looks a better design as it will be consistent.

We simply need to modify the request_ids property in various API
calls for example glanceclient.v2.images.get().

Hey Abhishek,

thanks for working on this.

To be honest, I'm a bit confused on why the request_id needs to be an
attribute of the image. Isn't it passed as a header? Does it have to
be an attribute so we can "print" it?

The requirement they're trying to meet is to make the request id available to the user of the client library [1]. The user typically doesn't have access to the headers, so the request id needs to be part of the payload returned from each method. In other clients that work with simple data types, they've subclassed dict, list, etc. to add the extra property. This adds the request id to the return value without making a breaking change to the API of the client library.
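
As a rough sketch of that wrapper approach (illustrative names only, not
actual client code), the idea is to subclass the native return type and hang
the request ids off it, so existing callers keep working while new callers can
read obj.request_ids:

    class DictWithRequestIds(dict):
        def __init__(self, data, request_ids=None):
            super(DictWithRequestIds, self).__init__(data)
            self.request_ids = request_ids or []

    image = DictWithRequestIds({'id': 'a00b6125', 'status': 'active'},
                               request_ids=['req-68926f34'])
    # image still behaves like a plain dict, and image.request_ids is
    # available to the caller.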

Abhishek, would it be possible to add the request id information to the schema data in glance client, before giving it to warlock?
I don't know whether warlock asks for the schema or what form that data takes (dictionary, JSON blob, etc.). If it's a dictionary visible to the client code it would be straightforward to add data to it.
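
If the raw schema is indeed a plain dict, a hedged sketch of that idea
(illustrative only, not actual glanceclient code) would be to patch it before
handing it to warlock, so the generated model accepts a list-typed
request_ids property:

    import copy
    import warlock

    def model_with_request_ids(raw_schema):
        schema = copy.deepcopy(raw_schema)
        schema.setdefault('properties', {})['request_ids'] = {
            'type': 'array',
            'items': {'type': 'string'},
        }
        # Returns a warlock model class built from the patched schema.
        return warlock.model_factory(schema)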

Yes, it is possible to add request-id to the schema before giving it to warlock, but since the schema is a contract, IMO it doesn't look good to modify it on the client side.

Failing that, is it possible to change warlock to allow extra properties with arbitrary types to be added to objects? Because validating inputs to the constructor is all well and good, but breaking the ability to add data to an object is a bit un-pythonic.
IMO there is no point in changing warlock, as it is a 3rd party module.

If we end up having to change the schema definitions in the Glance API, that also means changing those API calls to add the request id to the return value, right?
IMO there will be no change to the API calls, as the request-id will be injected in glanceclient and it doesn't have any impact on glance.
Also, we can make this request-id non-mandatory if required.

Doug

[1] http://specs.openstack.org/openstack/openstack-specs/specs/return-request-id.html

As it is presented in your email, I'd probably go with option #2 but
I'm curious to know the answer to my question.

IMO approach #2 is better, as having request-id as an attribute in the schema will be consistent across all APIs.

So we will make a change in glance to add request-id as a base property in the schema, and inject the request-id in glanceclient from the response headers.

The code change will look like:

diff --git a/glance/api/v2/images.py b/glance/api/v2/images.py
index bb7949c..2a760a7 100644
--- a/glance/api/v2/images.py
+++ b/glance/api/v2/images.py
@@ -807,6 +807,10 @@ class ResponseSerializer(wsgi.JSONResponseSerializer):

 def get_base_properties():
     return {
+        'request_ids': {
+            'type': 'array',
+            'items': {'type': 'string'}
+        },
         'id': {
             'type': 'string',
             'description': _('An identifier for the image'),

Changes in glanceclient to assign request-id from response headers

openstack@openstack-136:~/python-glanceclient$ git diff
diff --git a/glanceclient/v2/images.py b/glanceclient/v2/images.py
index 4fdcea2..65b0d6c 100644
--- a/glanceclient/v2/images.py
+++ b/glanceclient/v2/images.py
@@ -182,6 +182,7 @@ class Controller(object):
         # NOTE(bcwaldon): remove 'self' for now until we have an elegant
         # way to pass it into the model constructor without conflict
         body.pop('self', None)
+        body['request_ids'] = [resp.headers['x-openstack-request-id']]
         return self.model(**body)

     def data(self, image_id, do_checksum=True):

Output:

import glanceclient
glance = glanceclient.Client('2', endpoint='http://10.69.4.136:9292/', token='16038d125b804eef805c7020bbebc769')
get = glance.images.get('a00b6125-94d9-43a8-a497-839cf25a8fdd')
get
{u'status': u'active', u'tags': [], u'container_format': u'aki', u'min_ram': 0, u'updated_at': u'2015-11-18T13:04:18Z', u'visibility': u'public', 'request_ids': ['req-68926f34-4434-45dc-822c-c4eb94506c63'], u'owner': u'd1ee7fd5dcc341c3973f19f790238e63', u'file': u'/v2/images/a00b6125-94d9-43a8-a497-839cf25a8fdd/file', u'min_disk': 0, u'virtual_size': None, u'id': u'a00b6125-94d9-43a8-a497-839cf25a8fdd', u'size': 4979632, u'name': u'cirros-0.3.4-x86_64-uec-kernel', u'checksum': u'8a40c862b5735975d82605c1dd395796', u'created_at': u'2015-11-18T13:04:18Z', u'disk_format': u'aki', u'protected': False, u'schema': u'/v2/schemas/image'}
get.request_ids
['req-68926f34-4434-45dc-822c-c4eb94506c63']

Please suggest.

Thank You,

Abhishek

Cheers,
Flavio

Please let us know which approach is better or any suggestions for the same.

[1]
https://github.com/openstack/python-glanceclient/blob/master/glancecl
ient/
v2/images.py#L179

[2]
https://github.com/openstack/glance/blob/master/glance/api/v2/images.
py#
L944


_
Disclaimer: This email and any attachments are sent in strictest
confidence for the sole use of the addressee and may contain legally
privileged, confidential, and proprietary data. If you are not the
intended recipient, please advise the sender by replying promptly to
this email and then delete and destroy this email and any attachments
without any further use, copying or forwarding.


_____ OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.

OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
responded Dec 10, 2015 by Kekane,_Abhishek (3,940 points)   5 7
0 votes

-----Original Message-----
From: stuart.mclaren@hp.com [mailto:stuart.mclaren@hp.com]
Sent: 09 December 2015 23:54
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [python-glanceclient] Return request-id to caller

Excerpts from Flavio Percoco's message of 2015-12-09 09:09:10 -0430:

On 09/12/15 11:33 +0000, Kekane, Abhishek wrote:

Hi Devs,

We are adding support for returning 'x-openstack-request-id' to the caller as
per the design proposed in cross-project specs:

http://specs.openstack.org/openstack/openstack-specs/specs/
return-request-id.html

Problem Description:

Cannot add a new property of list type to the warlock.model object.

How is a model object created:

Let's take an example of glanceclient.api.v2.images.get() call [1]:

Here after getting the response we call model() method. This model() does the
job of creating a warlock.model object(essentially a dict) based on the schema
given as argument (image schema retrieved from glance in this case). Inside
model() the raw() method simply return the image schema as JSON object. The
advantage of this warlock.model object over a simple dict is that it validates
any changes to object based on the rules specified in the reference schema.
The keys of this model object are available as object properties to the
caller.

Underlying reason:

The schema for different sub APIs is returned a bit differently. For images,
metadef APIs glance.schema.Schema.raw() is used which returns a schema
containing "additionalProperties": {"type": "string"}. Whereas for members and
tasks APIs glance.schema.Schema.minimal() is used to return schema object which
does not contain "additionalProperties".

So we can add extra properties of any type to the model object returned from
members or tasks API but for images and metadef APIs we can only add properties
which can be of type string. Also for the latter case we depend on the glance
configuration to allow additional properties.

As per our analysis we have come up with two approaches for resolving this
issue:

Approach #1: Inject request_ids property in the warlock model object in glance
client

Here we do the following:

  1. Inject the 'request_ids' as additional property into the model object
    (returned from model())

  2. Return the model object which now contains request_ids property

Limitations:

  1. Because the glance schemas for images and metadef only allows additional
    properties of type string, so even though natural type of request_ids should be
    list we have to make it as a comma separated 'string' of request ids as a
    compromise.

  2. Lot of extra code is needed to wrap objects returned from the client API so
    that the caller can get request ids. For example we need to write wrapper
    classes for dict, list, str, tuple, generator.

  3. Not a good design as we are adding a property which should actually be a
    base property but added as additional property as a compromise.

  4. There is a dependency on glance whether to allow custom/additional
    properties or not. [2]

Approach #2: Add 'request_ids' property to all schema definitions in glance

Here we add 'request_ids' property as follows to the various APIs (schema):

"request_ids": {

"type": "array",

"items": {

"type": "string"

}

}

Doing this will make changes in glance client very simple as compared to
approach#1.

This also looks a better design as it will be consistent.

We simply need to modify the request_ids property in various API calls for
example glanceclient.v2.images.get().

Hey Abhishek,

thanks for working on this.

To be honest, I'm a bit confused on why the request_id needs to be an
attribute of the image. Isn't it passed as a header? Does it have to
be an attribute so we can "print" it?

The requirement they're trying to meet is to make the request id
available to the user of the client library [1]. The user typically
doesn't have access to the headers, so the request id needs to be
part of the payload returned from each method. In other clients

Will this work if the payload is image data?

I think yes, let me test this as well

that work with simple data types, they've subclassed dict, list,
etc. to add the extra property. This adds the request id to the
return value without making a breaking change to the API of the
client library.

Abhishek, would it be possible to add the request id information
to the schema data in glance client, before giving it to warlock?
I don't know whether warlock asks for the schema or what form that
data takes (dictionary, JSON blob, etc.). If it's a dictionary
visible to the client code it would be straightforward to add data
to it.

Yes, it is possible to add request-id to the schema before giving it to warlock, but since the schema is a contract, IMO it doesn't look good to modify it on the client side.

Failing that, is it possible to change warlock to allow extra
properties with arbitrary types to be added to objects? Because
validating inputs to the constructor is all well and good, but
breaking the ability to add data to an object is a bit un-pythonic.

IMO there is no point in changing warlock, as it is a 3rd party module.

If we end up having to change the schema definitions in the Glance API,
that also means changing those API calls to add the request id to the
return value, right?

IMO there will be no change to the API calls, as the request-id will be injected in glanceclient and it doesn't have any impact on glance.
Also, we can make this request-id non-mandatory if required.

Doug

[1] http://specs.openstack.org/openstack/openstack-specs/specs/return-request-id.html

As it is presented in your email, I'd probably go with option #2 but
I'm curious to know the answer to my question.

IMO approach #2 is better, as having request-id as an attribute in the schema will be consistent across all APIs.

So we will make a change in glance to add request-id as a base property in the schema, and inject the request-id in glanceclient from the response headers.

The code change will look like:

diff --git a/glance/api/v2/images.py b/glance/api/v2/images.py
index bb7949c..2a760a7 100644
--- a/glance/api/v2/images.py
+++ b/glance/api/v2/images.py
@@ -807,6 +807,10 @@ class ResponseSerializer(wsgi.JSONResponseSerializer):

 def get_base_properties():
     return {
+        'request_ids': {
+            'type': 'array',
+            'items': {'type': 'string'}
+        },
         'id': {
             'type': 'string',
             'description': _('An identifier for the image'),

Changes in glanceclient to assign request-id from response headers

openstack@openstack-136:~/python-glanceclient$ git diff
diff --git a/glanceclient/v2/images.py b/glanceclient/v2/images.py
index 4fdcea2..65b0d6c 100644
--- a/glanceclient/v2/images.py
+++ b/glanceclient/v2/images.py
@@ -182,6 +182,7 @@ class Controller(object):
         # NOTE(bcwaldon): remove 'self' for now until we have an elegant
         # way to pass it into the model constructor without conflict
         body.pop('self', None)
+        body['request_ids'] = [resp.headers['x-openstack-request-id']]
         return self.model(**body)

     def data(self, image_id, do_checksum=True):

Output:

import glanceclient
glance = glanceclient.Client('2',
                             endpoint='http://10.69.4.136:9292/',
                             token='16038d125b804eef805c7020bbebc769')
get = glance.images.get('a00b6125-94d9-43a8-a497-839cf25a8fdd')
get
{u'status': u'active', u'tags': [], u'container_format': u'aki', u'min_ram': 0, u'updated_at': u'2015-11-18T13:04:18Z', u'visibility': u'public', 'request_ids': ['req-68926f34-4434-45dc-822c-c4eb94506c63'], u'owner': u'd1ee7fd5dcc341c3973f19f790238e63', u'file': u'/v2/images/a00b6125-94d9-43a8-a497-839cf25a8fdd/file', u'min_disk': 0, u'virtual_size': None, u'id': u'a00b6125-94d9-43a8-a497-839cf25a8fdd', u'size': 4979632, u'name': u'cirros-0.3.4-x86_64-uec-kernel', u'checksum': u'8a40c862b5735975d82605c1dd395796', u'created_at': u'2015-11-18T13:04:18Z', u'disk_format': u'aki', u'protected': False, u'schema': u'/v2/schemas/image'}
get.request_ids
['req-68926f34-4434-45dc-822c-c4eb94506c63']

Please suggest.

Thank You,

Abhishek

Cheers,
Flavio

Please let us know which approach is better or any suggestions for the same.

[1] https://github.com/openstack/python-glanceclient/blob/master/glanceclient/
v2/images.py#L179

[2] https://github.com/openstack/glance/blob/master/glance/api/v2/images.py#
L944


Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Message: 15
Date: Wed, 9 Dec 2015 21:59:50 +0800
From: "Zhi Chang" changzhi@unitedstack.com
To: "openstack-dev"
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [ironic]Boot physical machine fails, says
"PXE-E11 ARP Timeout"
Message-ID: tencent_50BBE4336F52F9E54B5710BC@qq.com
Content-Type: text/plain; charset="utf-8"

hi, all
I treat a normal physical machine as a bare metal machine. The physical machine booted when I ran "nova boot xxx" on the command line, but an error happens. I uploaded a video to YouTube, link: https://www.youtube.com/watch?v=XZQCNsrkyMI&feature=youtu.be. Could someone give me some advice?

Thx
Zhi Chang


Message: 16
Date: Wed, 09 Dec 2015 09:02:38 -0500
From: Doug Hellmann doug@doughellmann.com
To: openstack-dev openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [stable] Stable team PTL nominations are
open
Message-ID: 1449669713-sup-8899@lrrr.local
Content-Type: text/plain; charset=UTF-8

Excerpts from Thierry Carrez's message of 2015-12-09 09:57:24 +0100:

Thierry Carrez wrote:

Thierry Carrez wrote:

The nomination deadline is passed, we have two candidates!

I'll be setting up the election shortly (with Jeremy's help to generate
election rolls).

OK, the election just started. Recent contributors to a stable branch
(over the past year) should have received an email with a link to vote.
If you haven't and think you should have, please contact me privately.

The poll closes on Tuesday, December 8th at 23:59 UTC.
Happy voting!

The election is over[1]; let me congratulate Matt Riedemann on his election!
Thanks to everyone who participated in the vote.

Now I'll submit the request for spinning off as a separate project team
to the governance ASAP, and we should be up and running very soon.

Cheers,

[1] http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_2f5fd6c3837eae2a

Congratulations, Matt!

Doug


Message: 17
Date: Wed, 9 Dec 2015 09:32:53 -0430
From: Flavio Percoco flavio@redhat.com
To: Jordan Pittier jordan.pittier@scality.com
Cc: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [glance][tempest][defcore] Process to
improve test coverage in tempest
Message-ID: 20151209140253.GB10644@redhat.com
Content-Type: text/plain; charset="utf-8"; Format="flowed"

On 08/12/15 22:31 +0100, Jordan Pittier wrote:

Hi Flavio,

On Tue, Dec 8, 2015 at 9:52 PM, Flavio Percoco flavio@redhat.com wrote:

Oh, I meant occasionally. Whenever a missing test for an API is found,
it'd be easy enough for the implementer to show up at the meeting and
bring it up.

From my experience as a Tempest reviewer, I'd say that most newly added tests
are not submitted by regular Tempest contributors. I assume (wrongly?) that
it's mostly people from the actual projects (e.g. glance) who are interested in
adding new Tempest tests to test a recently implemented feature. Put
differently, I don't think it's the job of the Tempest core team/community to
add new tests. We mostly provide a framework and guidance these days.

I agree that the tempest team should focus on providing the framework
rather than the tests themselves. However, these tests are often
contributed by people that are not part of the project's team.

But, reading this thread, I don't know what to suggest. As a Tempest reviewer I
won't start a new ML thread or send a message to a PTL each time I see a new
test being added... I assume the patch author knows what they are doing; I can't
keep up with what's going on in each and every project.

This is what I'd like to avoid. This assumption is exactly what almost
got the tasks API test merged, and it will likely happen for
other things.

I don't think it's wrong to ping someone from the community when new
tests are added, especially because these tests are used by defcore
as well. Adding the PTL to the review (or some liaison) is simple
enough. We do this for many things in OpenStack. That is, we wait for
PTLs/liaisons approval before going forward with some decisions.

Also, a test can be quickly removed if it is later deemed not so useful.

Sure, but this is wasting people's time: the contributor's, reviewer's
and community's time, as the test will have to be added, reviewed and then
deleted.

I agree this doesn't happen too often but the fact that it happened is
enough of a reason for me to work on improving the process. Again,
especially because these tests are not meant to be used just by our
CI.

Cheers,
Flavio

Jordan

--
@flaper87
Flavio Percoco


Message: 18
Date: Wed, 9 Dec 2015 09:09:17 -0500
From: Anita Kuno anteaya@anteaya.info
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Evolving the stadium concept
Message-ID: 5668360D.5030206@anteaya.info
Content-Type: text/plain; charset=windows-1252

On 12/09/2015 07:06 AM, Sean Dague wrote:

On 12/09/2015 01:46 AM, Armando M. wrote:

On 3 December 2015 at 02:21, Thierry Carrez thierry@openstack.org wrote:

Armando M. wrote:

On 2 December 2015 at 01:16, Thierry Carrez thierry@openstack.org wrote:

Armando M. wrote:

One solution is, like you mentioned, to make some (or all) of them
full-fledged project teams. Be aware that this means the TC would judge
those new project teams individually and might reject them if we feel
the requirements are not met. We might want to clarify what happens
then.

That's a good point. Do we have existing examples of this or would we be
sailing in uncharted waters?

It's been pretty common that we rejected/delayed applications for
projects where we felt they needed more alignment. In such cases, the
immediate result for those projects if they are out of the Neutron
"stadium" is that they would fall from the list of official projects.
Again, I'm fine with that outcome, but I want to set expectations
clearly :)

Understood. It sounds to me that the outcome would be that those
projects (that may end up being rejected) would show nowhere on [1], but
would still be hosted and can rely on the support and services of the
OpenStack community, right?

[1] http://governance.openstack.org/reference/projects/

Yes they would still be hosted on OpenStack development infrastructure.
Contributions would no longer count toward ATC status, so people who
only contribute to those projects would no longer be able to vote in the
Technical Committee election. They would not have "official" design
summit space either -- they can still camp in the hallway though :)

Hi folks,

For those of you who are interested in the conversation, the topic was brought
up for discussion at the latest TC meeting [1]. Unfortunately I was unable
to join; however, I would like to try and respond to some of the comments
made, to clarify my position on the matter:

ttx: the neutron PTL says he can't vouch for anything in the neutron
"stadium"

To be honest that's not entirely my position.

The problem stems from the fact that, if I am asked what the stadium
means, as a PTL I can't give a straight answer; ttx put it relatively
well (and I quote him): by adding all those projects under your own
project team, you bypass the Technical Committee approval that they
behave like OpenStack projects and are produced by the OpenStack
community. The Neutron team basically vouches for all of them to be on
par. As far as the Technical Committee goes, they are all being produced
by the same team we originally blessed (the Neutron project team).

The reality is: some of these projects are not produced by the same
team, they do not behave the same way, and they do not follow the same
practices and guidelines. For the stadium to make sense, in my humble
opinion, a definition of these practices should happen and enforcement
should follow, but who's got the time for policing and enforcing
eviction, especially on a large scale? So we either reduce the scale
(which might not be feasible because in OpenStack we're all about
scaling and adding more and more and more), or we address the problem
more radically by evolving the relationship from tight aggregation to
loose association; this way who needs to vouch for the Neutron
relationship is not the Neutron PTL, but the person sponsoring the
project that wants to be associated to Neutron. On the other end, the
vouching may still be pursued, but for a much more focused set of
initiatives that are led by the same team.

russellb: I attempted to start breaking down the different types of
repos that are part of the stadium (consumer, api, implementation of
technology, plugins/drivers).

The distinction between implementation of technology, plugins/drivers
and api is not justified IMO, because from a neutron standpoint they all
look the same: they leverage the pluggable extensions to the
Neutron core framework. As I attempted to say: we have existing plugins
and drivers that implement APIs, and we have plugins that implement
technology, so the extra classification seems like overspecification.

flaper87: I agree a driver should not be independent

Why, what's your rationale? If we dig deeper, some drivers are small
code drops with no or untraceable maintainers. Some are actively
developed and can be fairly complex. The spectrum is pretty wide. Either
way, I think that preventing them from being independent in principle
may hurt the ones that can be pretty elaborate, and the ones that are
stale may hurt Neutron's reputation because we're the ones who are
supposed to look after them (after all, didn't we vouch for them?)

Armando, definitely agree with you. I think that the first step is
probably declaring what the core team believes they can vouch for in the
governance repo. Any drivers that are outside of that need to be
responsible for their own release and install mechanism. I think the
current middle ground means that no one is responsible for their release
/ install mechanism. Which is bad for everyone.

I think the concept of responsibility is a key concept here. I think
that what I have been seeing is decisions being made by some folks that
suit their needs expecting someone else to take the responsibility for
that decision regarding the effect on the rest of the development
community and the users.

If we can get back down to the point where decision makers are able to
take responsibility for their own decisions, not taking on the
responsibility of other's in an ever expanding way, then perhaps the
responsibility carried by some can be whittled down to a size which is
both manageable and portable. By portable I mean something that is
possible to be comprehended, explained and when appropriate interlaced
with some other project's responsibility for mutual support and benefit.

Thanks,
Anita.

dhellmann: we have previously said that projects run by different teams
talk to each other over rest interfaces as a way of clearly delineating
boundaries

As much as I agree wholeheartedly with this statement (which I made
myself during the GBP/Neutron saga), it's unrealistic to convert the
interface between Neutron and its extension mechanisms to be purely
restful, especially for the price that will have to be paid in the process.

Over a 3 year period, what's the cost of not doing it? Both in code debt
and friction, as well as opportunity cost to have more interesting
services built on these APIs. You and the neutron team would know better
than I, but it's worth considering the flip side as well.

sdague: I don't think anything should be extending the neutron API that
isn't controlled by the neutron core team.

The core should be about the core, why would what's built on top be
controlled by the core? By comparison, it's like saying a SIG on the
physical layer of the OSI stack dictates what a SIG on the session layer
should do. It stifles innovation and prevents problems from being solved
by the right domain experts.

Changing the REST API isn't innovation, it's incompatibility for end
users. If we're ever going to have compatible clouds and a real interop
effort, the APIs for all our services need to be very firmly controlled.
Extending the API arbitrarily should be a deprecated concept across
OpenStack.

Otherwise, I have no idea what the neutron (or any other project) API is.

-Sean


Message: 19
Date: Wed, 9 Dec 2015 17:11:14 +0300
From: Davanum Srinivas davanum@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [stable] Stable team PTL nominations are
open
Message-ID:
CANw6fcH61uLHrxXAM5_7uMd65jj1TbA4e90u9-rN7oQVTQe6nw@mail.gmail.com
Content-Type: text/plain; charset=UTF-8

Congrats, Matt!

-- Dims

On Wed, Dec 9, 2015 at 5:02 PM, Doug Hellmann doug@doughellmann.com wrote:

Excerpts from Thierry Carrez's message of 2015-12-09 09:57:24 +0100:

Thierry Carrez wrote:

Thierry Carrez wrote:

The nomination deadline is passed, we have two candidates!

I'll be setting up the election shortly (with Jeremy's help to generate
election rolls).

OK, the election just started. Recent contributors to a stable branch
(over the past year) should have received an email with a link to vote.
If you haven't and think you should have, please contact me privately.

The poll closes on Tuesday, December 8th at 23:59 UTC.
Happy voting!

Election is over[1], let me congratulate Matt Riedemann for his election
! Thanks to everyone who participated to the vote.

Now I'll submit the request for spinning off as a separate project team
to the governance ASAP, and we should be up and running very soon.

Cheers,

[1] http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_2f5fd6c3837eae2a

Congratulations, Matt!

Doug


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Davanum Srinivas :: https://twitter.com/dims


Message: 20
Date: Wed, 9 Dec 2015 09:11:20 -0500
From: Anita Kuno anteaya@anteaya.info
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [stable] Stable team PTL nominations are
open
Message-ID: 56683688.2050507@anteaya.info
Content-Type: text/plain; charset=windows-1252

On 12/09/2015 09:02 AM, Doug Hellmann wrote:

Excerpts from Thierry Carrez's message of 2015-12-09 09:57:24 +0100:

Thierry Carrez wrote:

Thierry Carrez wrote:

The nomination deadline is passed, we have two candidates!

I'll be setting up the election shortly (with Jeremy's help to generate
election rolls).

OK, the election just started. Recent contributors to a stable branch
(over the past year) should have received an email with a link to vote.
If you haven't and think you should have, please contact me privately.

The poll closes on Tuesday, December 8th at 23:59 UTC.
Happy voting!

Election is over[1], let me congratulate Matt Riedemann for his election
! Thanks to everyone who participated to the vote.

Now I'll submit the request for spinning off as a separate project team
to the governance ASAP, and we should be up and running very soon.

Cheers,

[1] http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_2f5fd6c3837eae2a

Congratulations, Matt!

Doug


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Thanks to both candidates for putting their name forward, it is nice to
have an election.

Congratulations Matt,
Anita.


Message: 21
Date: Wed, 09 Dec 2015 09:25:24 -0500
From: Doug Hellmann doug@doughellmann.com
To: openstack-dev openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Evolving the stadium concept
Message-ID: 1449670640-sup-1539@lrrr.local
Content-Type: text/plain; charset=UTF-8

Excerpts from Armando M.'s message of 2015-12-08 22:46:16 -0800:

On 3 December 2015 at 02:21, Thierry Carrez thierry@openstack.org wrote:

Armando M. wrote:

On 2 December 2015 at 01:16, Thierry Carrez <thierry@openstack.org> wrote:

Armando M. wrote:

One solution is, like you mentioned, to make some (or all) of them
full-fledged project teams. Be aware that this means the TC would judge
those new project teams individually and might reject them if we feel
the requirements are not met. We might want to clarify what happens
then.

That's a good point. Do we have existing examples of this or would we be
sailing in uncharted waters?

It's been pretty common that we rejected/delayed applications for
projects where we felt they needed more alignment. In such cases, the
immediate result for those projects if they are out of the Neutron
"stadium" is that they would fall from the list of official projects.
Again, I'm fine with that outcome, but I want to set expectations
clearly :)

Understood. It sounds to me that the outcome would be that those
projects (that may end up being rejected) would show nowhere on [1], but
would still be hosted and can rely on the support and services of the
OpenStack community, right?

[1] http://governance.openstack.org/reference/projects/

Yes they would still be hosted on OpenStack development infrastructure.
Contributions would no longer count toward ATC status, so people who
only contribute to those projects would no longer be able to vote in the
Technical Committee election. They would not have "official" design
summit space either -- they can still camp in the hallway though :)

Hi folks,

For those of you interested in the conversation, the topic was brought up
for discussion at the latest TC meeting [1]. Unfortunately I was unable to
join; however, I would like to try and respond to some of the comments made
to clarify my position on the matter:

ttx: the neutron PTL says he can't vouch for anything in the neutron
"stadium"

To be honest that's not entirely my position.

The problem stems from the fact that, if I am asked what the stadium means,
as a PTL I can't give a straight answer; ttx put it relatively well (and I
quote him): by adding all those projects under your own project team, you
bypass the Technical Committee approval that they behave like OpenStack
projects and are produced by the OpenStack community. The Neutron team
basically vouches for all of them to be on par. As far as the Technical
Committee goes, they are all being produced by the same team we originally
blessed (the Neutron project team).

The reality is: some of these projects are not produced by the same team,
they do not behave the same way, and they do not follow the same practices
and guidelines. For the stadium to make sense, in my humble opinion, a

This is the thing that's key, for me. As Anita points out elsewhere in
this thread, we want to structure our project teams so that decision
making and responsibility are placed in the same set of hands. It sounds
like the Stadium concept has made it easy to let those diverge.

definition of these practices should happen and enforcement should follow,
but who's got the time for policing and enforcing eviction, especially on a
large scale? So we either reduce the scale (which might not be feasible
because in OpenStack we're all about scaling and adding more and more and
more), or we address the problem more radically by evolving the
relationship from tight aggregation to loose association; this way who
needs to vouch for the Neutron relationship is not the Neutron PTL, but the
person sponsoring the project that wants to be associated to Neutron. On
the other end, the vouching may still be pursued, but for a much more
focused set of initiatives that are led by the same team.

russellb: I attempted to start breaking down the different types of repos
that are part of the stadium (consumer, api, implementation of technology,
plugins/drivers).

The distinction between implementation of technology, plugins/drivers and
api is not justified IMO because from a neutron standpoint they all look
the same: they leverage the pluggable extensions to the Neutron core
framework. As I attempted to say: we have existing plugins and drivers that
implement APIs, and we have plugins that implement technology, so the extra
classification seems overspecification.

flaper87: I agree a driver should not be independent

Why, what's your rationale? If we dig deeper, some drivers are small code
drops with no or untraceable maintainers. Some are actively developed and
can be fairly complex. The spectrum is pretty wide. Either way, I think
that preventing them from being independent in principle may hurt the ones
that can be pretty elaborated, and the ones that are stale may hurt
Neutron's reputation because we're the ones who are supposed to look after
them (after all didn't we vouch for them??)

From a technical perspective, if there is a stable API for driver
plugins, having the driver managed outside of the core team shouldn't
be a problem. If there's no stable API, the driver shouldn't even
be outside of the core repository yet. I know the split has happened,
I don't know how stable the plugin APIs are, though.

From a governance perspective, I agree it is desirable to enable
(but not require) drivers to live outside of core. But see the previous
paragraph for caveats.

dhellmann: we have previously said that projects run by different teams
talk to each other over rest interfaces as a way of clearly delineating
boundaries

As much as I agree wholeheartedly with this statement (which I made myself
during the GBP/Neutron saga), it's unrealistic to convert the interface
between Neutron and its extension mechanisms to be purely restful,
especially for the price that will have to be paid in the process.

Right, I think what we're saying is that you should stop treating
these things as extensions. There are true technical issues introduced
by the need to have strong API guarantees to support out-of-tree
extensions. As Sean mentioned in his response, the TC and community
want projects to have stable, fixed, APIs that do not change based
on deployment choices, so it is easy for users to understand the
API and so we can enable interoperability between deployments.
DefCore depends on these fixed APIs because of the way tests from
the Tempest suite are used in the validation process. Continuing
to support extensions in Neutron is going to make broad adoption
of Neutron APIs for DefCore harder.

sdague: I don't think anything should be extending the neutron API that
isn't controlled by the neutron core team.

The core should be about the core, why would what's built on top be
controlled by the core? By comparison, it's like saying a SIG on the
physical layer of the OSI stack dictates what a SIG on the session layer
should do. It stifles innovation and prevents problems from being solved by
the right domain experts.

It needs to be possible to build on top of neutron without injecting
yourself into the guts of neutron at runtime. See above.

Doug

That's all I managed to process whilst reading the log. I am sure I missed
some important comments and I apologize for not replying to them; one thing
I didn't miss for sure was all the hugging :)

Thanks for acknowledging the discussion and the time and consideration
given during the TC meeting.

Cheers,
Armando

[1] http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-12-08-20.01.html

--
Thierry Carrez (ttx)


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Message: 22
Date: Wed, 9 Dec 2015 07:41:17 -0700
From: Curtis serverascode@gmail.com
To: Jesse Pretorius jesse.pretorius@gmail.com
Cc: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-operators] [openstack-ansible]
Mid Cycle Sprint
Message-ID:
CAJ_JamAoAf58dajm3awyfaSFib=r7rHGa44My+1KDX3g5-iCZg@mail.gmail.com
Content-Type: text/plain; charset=UTF-8

On Wed, Dec 9, 2015 at 5:45 AM, Jesse Pretorius
jesse.pretorius@gmail.com wrote:

Hi everyone,

At the Mitaka design summit in Tokyo we had some corridor discussions about
doing a mid-cycle meetup for the purpose of continuing some design
discussions and doing some specific sprint work.


I'd like indications of who would like to attend and what
locations/dates/topics/sprints would be of interest to you.

I'd like to get more involved in openstack-ansible. I'll be going to
the operators mid-cycle in Feb, so could stay later and attend in West
London. However, I could likely make it to San Antonio as well. Not
sure if that helps but I will definitely try to attend wherever it
occurs.

Thanks.

For guidance/background I've put some notes together below:

Location


We have contributors, deployers and downstream consumers across the globe so
picking a venue is difficult. Rackspace have facilities in the UK (Hayes,
West London) and in the US (San Antonio) and are happy for us to make use of
them.

Dates


Most of the mid-cycles for upstream OpenStack projects are being held in
January. The Operators mid-cycle is on February 15-16.

As I feel that it's important that we're all as involved as possible in
these events, I would suggest that we schedule ours after the Operators
mid-cycle.

It strikes me that it may be useful to do our mid-cycle immediately after
the Ops mid-cycle, and do it in the UK. This may help to optimise travel for
many of us.

Format


The format of the summit is really for us to choose, but typically they're
formatted along the lines of something like this:

Day 1: Big group discussions similar in format to sessions at the design
summit.

Day 2: Collaborative code reviews, usually performed on a projector, where
the goal is to merge things that day (if a review needs more than a single
iteration, we skip it. If a review needs small revisions, we do them on the
spot).

Day 3: Small group / pair programming.

Topics


Some topics/sprints that come to mind that we could explore/do are:
- Install Guide Documentation Improvement [1]
- Development Documentation Improvement (best practises, testing, how to
develop a new role, etc)
- Upgrade Framework [2]
- Multi-OS Support [3]

[1] https://etherpad.openstack.org/p/oa-install-docs
[2] https://etherpad.openstack.org/p/openstack-ansible-upgrade-framework
[3] https://etherpad.openstack.org/p/openstack-ansible-multi-os-support

--
Jesse Pretorius
IRC: odyssey4me


OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

--
Blog: serverascode.com


Message: 23
Date: Wed, 9 Dec 2015 08:44:19 -0600
From: Matt Riedemann mriedem@linux.vnet.ibm.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] stable/liberty 13.1.0 release
planning
Message-ID: 56683E43.7080505@linux.vnet.ibm.com
Content-Type: text/plain; charset=windows-1252; format=flowed

On 12/9/2015 3:46 AM, Thierry Carrez wrote:

Matt Riedemann wrote:

We've had a few high priority regression fixes in stable/liberty [1][2]
so I think it's time to do a release.
[...]

You probably mean 12.0.1 ?

Err 12.1.0, yeah. Since we've had dependency updates in stable/liberty I
thought that made it a minor version bump to 12.1.0.

--

Thanks,

Matt Riedemann


Message: 24
Date: Wed, 9 Dec 2015 15:48:49 +0100
From: Sebastien Badia sbadia@redhat.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [puppet] proposing Cody Herriges part of
Puppet OpenStack core
Message-ID: 20151209144848.GC11592@baloo.sebian.fr
Content-Type: text/plain; charset="utf-8"

On Tue, Dec 08, 2015 at 11:49:08AM (-0500), Emilien Macchi wrote:

Hi,

Back in "old days", Cody was already core on the modules, when they were
hosted by Puppetlabs namespace.
His contributions [1] are very valuable to the group:
* strong knowledge on Puppet and all dependencies in general.
* very helpful to debug issues related to Puppet core or dependencies
(beaker, etc).
* regular attendance to our weekly meeting
* pertinent reviews
* very understanding of our coding style

I would like to propose having him back part of our core team.
As usual, we need to vote.

Of course, a big +1!

Thanks Cody!

Seb
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL:


Message: 25
Date: Wed, 9 Dec 2015 15:57:52 +0100
From: Roman Prykhodchenko me@romcheg.me
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Fuel] Private links
Message-ID: BLU436-SMTP1319845D60F55EF3F4CAF4ADE80@phx.gbl
Content-Type: text/plain; charset="utf-8"

Folks,

over the last two days I have marked several bugs as incomplete because they were referring to one or more private resources that are not accessible by anyone who does not have a @mirantis.com account.

Please keep in mind that Fuel is an open source project and the bug tracker we use is absolutely public. There should not be any private links in public bugs on Launchpad. Please don't attach links to files on corporate Google Drive or tickets in Jira. The same rule should be applied to code reviews.

That said, I'd like to confirm that we can submit world-accessible links to BVT results. If not, that should be fixed ASAP.

  • romcheg

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 842 bytes
Desc: Message signed with OpenPGP using GPGMail
URL:


Message: 26
Date: Wed, 9 Dec 2015 10:02:34 -0500
From: michael mccune msm@redhat.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] making project_id optional in API
URLs
Message-ID: 5668428A.3090400@redhat.com
Content-Type: text/plain; charset=windows-1252; format=flowed

On 12/08/2015 05:59 PM, Adam Young wrote:

I think it is kind of irrelevant. It can be there or not be there in the
URL itself, so long as it does not show up in the service catalog. From
a policy standpoint, having the project in the URL means that you can
do an access control check without fetching the object from the
database; you should, however, confirm that the object returned belongs to
the project at a later point.

from the policy standpoint does it matter if the project id appears in
the url or in the headers?

mike


Message: 27
Date: Wed, 9 Dec 2015 16:13:17 +0100
From: Jaume Devesa devvesa@gmail.com
To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [midonet] Split up python-midonetclient
Message-ID:
CABvUA7kGm=xc+0Bh_p7ZAg6=A_T=FXDZiQ4PGNoGj2W7S6aQfQ@mail.gmail.com
Content-Type: text/plain; charset="utf-8"

Hi Galo,

I think the goal of this split is well explained by Sandro in the first
mails of the chain:

  1. Downstream packaging
  2. Tagging the delivery properly as a library
  3. Adding as a project on pypi

OpenStack provides us a tarballs web page[1] for each branch of each
project of the infrastructure. Then, projects like Delorean can allow us
to download these master-branch tarballs, create the packages and host
them in a target repository for each one of the rpm-like
distributions[2]. I am pretty sure that there is something similar for
Ubuntu.

Everything is done in a very straightforward and standardized way, because
every repo has its own deliverable. You can look at how they are packaged
and you won't see too many differences between them. Packaging
python-midonetclient will be trivial if it is separated in a single repo.
It will be complicated and we'll have to do tricky things if it is a
directory inside the midonet repo. And I am not sure if Ubuntu and RDO
community will allow us to have weird packaging metadata repos.

So to me the main reason is

  1. Leverage all the infrastructure and procedures that OpenStack offers to
    integrate MidoNet
    as best as possible with the release process and delivery.

Regards,

[1]: http://tarballs.openstack.org/
[2]: http://trunk.rdoproject.org

On 9 December 2015 at 15:52, Antoni Segura Puimedon toni@midokura.com
wrote:

---------- Forwarded message ----------
From: Galo Navarro galo@midokura.com
Date: Wed, Dec 9, 2015 at 2:48 PM
Subject: Re: [openstack-dev] [midonet] Split up python-midonetclient
To: "OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
Cc: Jaume Devesa jaume@midokura.com

Ditto. We already have a mirror repo of pyc for this purpose
https://github.com/midonet/python-midonetclient, synced daily.

Some of the problems with that is that it does not have any git log
history
nor does it feel like a coding project at all.

Of course, because the goal of this repo is not to provide a
changelog. It's to provide an independent repo. If you want git log,
you should do a git log python-midonetclient in the source repo
(/midonet/midonet).

Allow me to put forward a solution that will allow you to keep the
development in the midonet tree while, at the same time, having a proper
repository with identifiable patches in
github.com/midonet/python-midonetclient

Thanks, but I insist: can we please clarify what are we trying to
achieve, before we jump into solutions?

g


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Jaume Devesa
Software Engineer at Midokura
-------------- next part --------------
An HTML attachment was scrubbed...
URL:


Message: 28
Date: Thu, 10 Dec 2015 00:28:02 +0900
From: "Ken'ichi Ohmichi" ken1ohmichi@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] jsonschema for scheduler hints
Message-ID:
CAA393vhL4+qbYikp0L7wyGSbcZ2r7CSBANM_Fd1UXT0rU_HtZA@mail.gmail.com
Content-Type: text/plain; charset=UTF-8

2015-12-09 21:20 GMT+09:00 Sean Dague sean@dague.net:

On 12/08/2015 11:47 PM, Ken'ichi Ohmichi wrote:

Hi Sylvain,

2015-12-04 17:48 GMT+09:00 Sylvain Bauza sbauza@redhat.com:

That leaves the out-of-tree discussion about custom filters and how we
could have a consistent behaviour given that. Should we accept something in
a specific deployment while another deployment could 401 against it ? Mmm,
bad to me IMHO.

We can have code to check that out-of-tree filters don't expose the same
hints as in-tree filters.

Sure, and thank you for that, that was missing in the past. That said, there
are still some interoperability concerns, let me explain: as a cloud
operator, I'm now providing a custom filter (say MyAwesomeFilter) which does
the lookup for a hint called 'myawesomehint'.

If we enforce a strict validation (and not allow to accept any hint) it
would mean that this cloud would accept a request with 'myawesomehint'
while another cloud which wouldn't be running MyAwesomeFilter would then
deny the same request.

I am thinking the operator/vendor's own filter should have some
implementation code for registering its original hint with the jsonschema,
to expose/validate available hints in the future.
The way should be as easy as possible so that they can implement the code easily.
After that, we will be able to make the validation strict again.

Yeh, that was my thinking. As someone that did a lot of the jsonschema
work, is that something you could prototype?

Yes.
In the prototype https://review.openstack.org/#/c/220440/, each filter
needs to contain get_scheduler_hint_api_schema(), which returns the
available scheduler_hints parameters. Then stevedore detects these
parameters from each filter and extends the jsonschema with them.
In the current prototype, the detection and extension are implemented in
nova-api, but we need to change the prototype like:

  1. nova-sched detects available scheduler-hints from filters.
  2. nova-sched passes these scheduler-hints to nova-api via RPC.
  3. nova-api extends jsonschema with the gotten scheduler-hints.

After implementing the mechanism, the operator's/vendor's own filters just
need to implement get_scheduler_hint_api_schema(). That is not so hard,
I feel.
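
For what it's worth, here is a rough sketch of that merging idea. The names
(other than the get_scheduler_hint_api_schema() hook discussed above) are
illustrative only and not taken from the prototype:

# Hypothetical sketch: each filter exposes the schema fragment for the hints
# it understands, and the API layer merges the fragments into the
# scheduler_hints JSON schema before validation.

class MyAwesomeFilter(object):
    """Out-of-tree filter registering its own hint."""

    @classmethod
    def get_scheduler_hint_api_schema(cls):
        # Schema fragment for the hint this filter consumes.
        return {'myawesomehint': {'type': 'string'}}


def build_hints_schema(filters):
    # Merge every filter's fragment into one strict schema.
    schema = {'type': 'object', 'properties': {},
              'additionalProperties': False}
    for f in filters:
        schema['properties'].update(f.get_scheduler_hint_api_schema())
    return schema


print(build_hints_schema([MyAwesomeFilter]))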

Thanks
Ken Ohmichi


Message: 29
Date: Wed, 9 Dec 2015 09:45:24 -0600
From: Matt Riedemann mriedem@linux.vnet.ibm.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] stable/liberty 12.0.1 release
planning
Message-ID: 56684C94.8020603@linux.vnet.ibm.com
Content-Type: text/plain; charset=windows-1252; format=flowed

On 12/9/2015 8:44 AM, Matt Riedemann wrote:

On 12/9/2015 3:46 AM, Thierry Carrez wrote:

Matt Riedemann wrote:

We've had a few high priority regression fixes in stable/liberty [1][2]
so I think it's time to do a release.
[...]

You probably mean 12.0.1 ?

Err 12.1.0, yeah. Since we've had dependency updates in stable/liberty I
thought that made it a minor version bump to 12.1.0.

Talked about this in the release channel this morning [1]. Summary is as
long as we aren't raising the minimum required version of a dependency
in stable/liberty, then the nova server release should be 12.0.1. We'd
only bump to 12.1.0 if we needed a newer minimum dependency, and I don't
think we have one of those (but will double check).

[1]
http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2015-12-09.log.html#t2015-12-09T15:07:12

--

Thanks,

Matt Riedemann


Message: 30
Date: Wed, 9 Dec 2015 16:48:06 +0100
From: Galo Navarro galo@midokura.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [midonet] Split up python-midonetclient
Message-ID:
CACSK4Abq4kKQNtesbEqvJk3XwxKq2qMuXOLqGMxqNU5qgErz7w@mail.gmail.com
Content-Type: text/plain; charset="utf-8"

Hi,

I think the goal of this split is well explained by Sandro in the first
mails of the chain:

  1. Downstream packaging
  2. Tagging the delivery properly as a library
  3. Adding as a project on pypi

Not really, because (1) and (2) are a consequence of the repo split. Not
a cause. Please correct me if I'm reading wrong but he's saying:

  • I want tarballs
  • To produce tarballs, I want a separate repo, and separate repos have (1),
    (2) as requirements.

So this is where I'm going: producing a tarball of pyc does not require a
separate repo. If we don't need a new repo, we don't need to do all the
things that a separate repo requires.

Now:

OpenStack provide us a tarballs web page[1] for each branch of each
project
of the infrastructure.
Then, projects like Delorean can allow us to download theses tarball
master
branches, create the
packages and host them in a target repository for each one of the rpm-like
distributions[2]. I am pretty sure
that there is something similar for Ubuntu.

This looks more accurate: you're actually not asking for a tarball. You're
asking to be compatible with a system that produces tarballs off a
repo. This is very different :)

So questions: we have a standalone mirror of the repo, that could be used
for this purpose. Say we move the mirror to OSt infra, would things work?

Everything is done in a very straightforward and standardized way, because
every repo has its own deliverable. You can look at how they are packaged
and you won't see too many differences between them. Packaging
python-midonetclient will be trivial if it is separated in a single repo.
It will be

But it creates a lot of other problems in development. With a very important
difference: the pain created by the mirror solution is solved cheaply with
software (e.g., as you know, with a script). OTOH, the pain created by
splitting the repo is paid in very costly human resources.

complicated and we'll have to do tricky things if it is a directory inside
the midonet repo. And I am not
sure if Ubuntu and RDO community will allow us to have weird packaging
metadata repos.

I do get this point and it's a major concern; IMO we should split it into a
different conversation as it's not related to where PYC lives, but to a
more general question: do we really need a repo per package?

Like Guillermo and I said before, the midonet repo generates 4
packages, and this will grow. If having a package per repo is really a
strong requirement, there is a lot of work ahead, so we need to start
talking about this now. But like I said, it's orthogonal to the PYC points
above.

g
-------------- next part --------------
An HTML attachment was scrubbed...
URL:


Message: 31
Date: Wed, 9 Dec 2015 16:03:02 +0000
From: "Fabio Giannetti (fgiannet)" fgiannet@cisco.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Monasca]: Mid Cycle Doodle
Message-ID: D28D901E.D094%fgiannet@cisco.com
Content-Type: text/plain; charset="us-ascii"

Guys,
Please find here the doodle for the mid-cycle:

http://doodle.com/poll/yy4unhffy7hi3x67

If we run the meeting Thu/Fri 28/29 we can have a joint session with
Congress on the 28th.
The first week of Feb is all open and I guess we need to decide whether to
do 2 or 3 days.
Thanks,
Fabio


Message: 32
Date: Wed, 9 Dec 2015 09:27:37 -0700
From: John Griffith john.griffith8@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [cinder] Dependencies of snapshots on
volumes
Message-ID:
CAPWkaSUbzzr2FpjsqmkCD+NCvmPTVMKR-Q2fZRUaYcc=OfOiJw@mail.gmail.com
Content-Type: text/plain; charset="utf-8"

On Tue, Dec 8, 2015 at 9:10 PM, Li, Xiaoyan xiaoyan.li@intel.com wrote:

Hi all,

Currently when deleting a volume, it checks whether there are snapshots
created from it. If yes deletion is prohibited. But it allows to extend
the volume, no check whether there are snapshots from it.

Correct?

The two behaviors in Cinder are not consistent from my viewpoint.

Well, your snapshot was taken at a point in time; and if you do a create
from snapshot the whole point is you want what you HAD when the snapshot
command was issued and NOT what happened afterwards. So in my opinion this
is not inconsistent at all.

In backend storage, their behaviors are same.

Which backend storage are you referring to in this case?

For full snapshot, if still in copying progress, both extend and deletion
are not allowed. If snapshot copying finishes, both extend and deletion are
allowed.
For incremental snapshot, both extend and deletion are not allowed.

So your particular backend has "different/specific" rules/requirements
around snapshots. That's pretty common, I don't suppose there's any way to
hack around this internally? In other words do things on your backend like
clones as snaps etc to make up for the differences in behavior?

As a result, this raises two concerns here:
1. Let such operations behavior same in Cinder.
2. I prefer to let storage driver decide the dependencies, not in the
general core codes.

I have and always will strongly disagree with this approach and your
proposal. Sadly we've already started to allow more and more vendor
drivers to just "do their own thing" and implement their own special API
methods. This is in my opinion a horrible path and defeats the entire
purpose of having a Cinder abstraction layer.

This will make it impossible to have compatibility between clouds for those
that care about it, and it will make it impossible for operators/deployers to
understand exactly what they can and should expect in terms of the usage of
their cloud. Finally, it will also mean that OpenStack API
functionality is COMPLETELY dependent on the backend device. I know people are
sick of hearing me say this, so I'll keep it short and say it one more time:
"Compatibility in the API matters and should always be our priority"

Meanwhile, if we let the driver decide the dependencies, the following
changes need to be done in Cinder:
1. When creating a snapshot from a volume, it needs to copy all metadata of
the volume to the snapshot. Currently it doesn't.
Any other potential issues please let me know.

Any input will be appreciated.

Best wishes
Lisa


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:


Message: 33
Date: Wed, 9 Dec 2015 11:03:30 -0600
From: Chris Friesen chris.friesen@windriver.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [cinder] Dependencies of snapshots on
volumes
Message-ID: 56685EE2.8020303@windriver.com
Content-Type: text/plain; charset="utf-8"; format=flowed

On 12/09/2015 10:27 AM, John Griffith wrote:

On Tue, Dec 8, 2015 at 9:10 PM, Li, Xiaoyan <xiaoyan.li@intel.com> wrote:

Hi all,

Currently when deleting a volume, it checks whether there are snapshots
created from it. If yes deletion is prohibited.  But it allows to extend
the volume, no check whether there are snapshots from it.

Correct?

The two behaviors in Cinder are not consistent from my viewpoint.

Well, your snapshot was taken at a point in time; and if you do a create from
snapshot the whole point is you want what you HAD when the snapshot command was
issued and NOT what happened afterwards. So in my opinion this is not
inconsistent at all.

If we look at it a different way...suppose that the snapshot is linked in a
copy-on-write manner with the original volume. If someone deletes the original
volume then the snapshot is in trouble. However, if someone modifies the
original volume then a new chunk of backing store is allocated for the original
volume and the snapshot still references the original contents.

If we did allow deletion of the volume we'd have to either keep the volume
backing store around as long as any snapshots are around, or else flatten any
snapshots so they're no longer copy-on-write.
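
To make the copy-on-write point concrete, here is a toy sketch (not Cinder
code; names are invented) of why a write to the volume is safe for the
snapshot while deleting the volume is not:

class CowVolume(object):
    def __init__(self, chunks):
        self.chunks = list(chunks)        # ids of backing-store chunks

    def snapshot(self):
        return list(self.chunks)          # snapshot shares the same chunks

    def write(self, index, new_chunk):
        self.chunks[index] = new_chunk    # volume gets a freshly allocated chunk


vol = CowVolume(['c0', 'c1'])
snap = vol.snapshot()
vol.write(0, 'c0-new')                    # modify (or extend) the volume
assert snap == ['c0', 'c1']               # snapshot still sees the old contents
# Deleting the volume outright would strand the shared chunks unless they are
# kept around for the snapshot, or the snapshot is flattened first.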

Chris


Message: 34
Date: Wed, 9 Dec 2015 17:06:12 +0000
From: "Kris G. Lindgren" klindgren@godaddy.com
To: Oguz Yarimtepe oguzyarimtepe@gmail.com,
"openstack-operators@lists.openstack.org"
openstack-operators@lists.openstack.org, "OpenStack Development
Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-operators] [keystone] RBAC
usage at production
Message-ID: 7CAD9EFB-B4B7-48CA-8771-EEE821FB27EB@godaddy.com
Content-Type: text/plain; charset="utf-8"

In other projects the policy.json file is read on each API request, so changes to the file take effect immediately. I was 90% sure keystone was the same way?
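
A rough sketch of what that looks like with oslo.policy (assuming the
service uses the standard Enforcer; my understanding is that enforce()
re-reads policy.json when the file's mtime changes, so no restart is
needed):

from oslo_config import cfg
from oslo_policy import policy

# The Enforcer locates policy.json via the [oslo_policy] config options.
enforcer = policy.Enforcer(cfg.CONF)


def is_allowed(rule, target, creds):
    # Re-loads the policy file if it changed on disk, then evaluates the rule.
    return enforcer.enforce(rule, target, creds)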


Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

On 12/9/15, 1:39 AM, "Oguz Yarimtepe" oguzyarimtepe@gmail.com wrote:

Hi,

I am wondering whether there are people using RBAC in production. The
policy.json file has a structure that requires a restart of the service
each time you edit the file. Is there an on-the-fly solution or tips
about it?


OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

Message: 35
Date: Wed, 9 Dec 2015 18:10:45 +0100
From: Jordan Pittier jordan.pittier@scality.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [cinder] Dependencies of snapshots on
volumes
Message-ID:
CAAKgrc=qEQjYn9bu4YRd4VraGwUFGQEcEU9siX8UubpbV_wBSg@mail.gmail.com
Content-Type: text/plain; charset="utf-8"

Hi,
FWIW, I completely agree with what John said. All of it.

Please don't do that.

Jordan
-------------- next part --------------
An HTML attachment was scrubbed...
URL:


Message: 36
Date: Wed, 9 Dec 2015 20:17:30 +0300
From: Dmitry Klenov dklenov@mirantis.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Fuel] [Ubuntu bootstrap] Ubuntu bootstrap
becomes default in the Fuel
Message-ID:
CAExpkLxMQnpCYVv4tzBW9yyuYjeH8+P7v_V0pU1_h_Nvfwamog@mail.gmail.com
Content-Type: text/plain; charset="utf-8"

Hello folks,

I would like to announce that we have completed all items for the 'Ubuntu
bootstrap' feature. Thanks to the team for hard work and dedication!

Starting from today, Ubuntu bootstrap is enabled in Fuel by default.

Also, it is worth mentioning that Ubuntu bootstrap is integrated with the
'Biosdevnames' feature implemented by the MOS-Linux team, so the new
bootstrap will also benefit from persistent interface naming.

Thanks,
Dmitry.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:


Message: 37
Date: Wed, 9 Dec 2015 17:18:29 +0000
From: Edgar Magana edgar.magana@workday.com
To: "Kris G. Lindgren" klindgren@godaddy.com, Oguz Yarimtepe
oguzyarimtepe@gmail.com, "openstack-operators@lists.openstack.org"
openstack-operators@lists.openstack.org, "OpenStack Development
Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-operators] [keystone] RBAC
usage at production
Message-ID: BFCD8D28-53F9-48BC-8F55-A37E9EFD5269@workdayinternal.com
Content-Type: text/plain; charset="utf-8"

We use RBAC in production but basically modify networking operations and some compute ones. In our case we don't need to restart the services if we modify the policy.json file. I am surprised that keystone is not following the same process.

Edgar

On 12/9/15, 9:06 AM, "Kris G. Lindgren" klindgren@godaddy.com wrote:

In other projects the policy.json file is read on each API request, so changes to the file take effect immediately. I was 90% sure keystone was the same way?


Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

On 12/9/15, 1:39 AM, "Oguz Yarimtepe" oguzyarimtepe@gmail.com wrote:

Hi,

I am wondering whether there are people using RBAC in production. The
policy.json file has a structure that requires a restart of the service
each time you edit the file. Is there an on-the-fly solution or tips
about it?


OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

Message: 38
Date: Wed, 9 Dec 2015 10:31:47 -0700
From: Doug Wiegley dougwig@parksidesoftware.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Evolving the stadium concept
Message-ID:
A46D4F8A-1334-4037-91C0-D557FB4A8178@parksidesoftware.com
Content-Type: text/plain; charset=utf-8

On Dec 9, 2015, at 7:25 AM, Doug Hellmann doug@doughellmann.com wrote:

Excerpts from Armando M.'s message of 2015-12-08 22:46:16 -0800:

On 3 December 2015 at 02:21, Thierry Carrez thierry@openstack.org wrote:

Armando M. wrote:

On 2 December 2015 at 01:16, Thierry Carrez <thierry@openstack.org> wrote:

Armando M. wrote:

One solution is, like you mentioned, to make some (or all) of them
full-fledged project teams. Be aware that this means the TC would judge
those new project teams individually and might reject them if we feel
the requirements are not met. We might want to clarify what happens
then.

That's a good point. Do we have existing examples of this or would we be
sailing in uncharted waters?

It's been pretty common that we rejected/delayed applications for
projects where we felt they needed more alignment. In such cases, the
immediate result for those projects if they are out of the Neutron
"stadium" is that they would fall from the list of official projects.
Again, I'm fine with that outcome, but I want to set expectations
clearly :)

Understood. It sounds to me that the outcome would be that those
projects (that may end up being rejected) would show nowhere on [1], but
would still be hosted and can rely on the support and services of the
OpenStack community, right?

[1] http://governance.openstack.org/reference/projects/

Yes they would still be hosted on OpenStack development infrastructure.
Contributions would no longer count toward ATC status, so people who
only contribute to those projects would no longer be able to vote in the
Technical Committee election. They would not have "official" design
summit space either -- they can still camp in the hallway though :)

Hi folks,

For those of you interested in the conversation, the topic was brought up
for discussion at the latest TC meeting [1]. Unfortunately I was unable to
join; however, I would like to try and respond to some of the comments made
to clarify my position on the matter:

ttx: the neutron PTL says he can't vouch for anything in the neutron
"stadium"

To be honest that's not entirely my position.

The problem stems from the fact that, if I am asked what the stadium means,
as a PTL I can't give a straight answer; ttx put it relatively well (and I
quote him): by adding all those projects under your own project team, you
bypass the Technical Committee approval that they behave like OpenStack
projects and are produced by the OpenStack community. The Neutron team
basically vouches for all of them to be on par. As far as the Technical
Committee goes, they are all being produced by the same team we originally
blessed (the Neutron project team).

The reality is: some of these projects are not produced by the same team,
they do not behave the same way, and they do not follow the same practices
and guidelines. For the stadium to make sense, in my humble opinion, a

This is the thing that's key, for me. As Anita points out elsewhere in
this thread, we want to structure our project teams so that decision
making and responsibility are placed in the same set of hands. It sounds
like the Stadium concept has made it easy to let those diverge.

definition of these practices should happen and enforcement should follow,
but who's got the time for policing and enforcing eviction, especially on a
large scale? So we either reduce the scale (which might not be feasible
because in OpenStack we're all about scaling and adding more and more and
more), or we address the problem more radically by evolving the
relationship from tight aggregation to loose association; this way who
needs to vouch for the Neutron relationship is not the Neutron PTL, but the
person sponsoring the project that wants to be associated to Neutron. On
the other end, the vouching may still be pursued, but for a much more
focused set of initiatives that are led by the same team.

russellb: I attempted to start breaking down the different types of repos
that are part of the stadium (consumer, api, implementation of technology,
plugins/drivers).

The distinction between implementation of technology, plugins/drivers and
api is not justified IMO because from a neutron standpoint they all look
the same: they leverage the pluggable extensions to the Neutron core
framework. As I attempted to say: we have existing plugins and drivers that
implement APIs, and we have plugins that implement technology, so the extra
classification seems overspecification.

flaper87: I agree a driver should not be independent

Why, what's your rationale? If we dig deeper, some drivers are small code
drops with no or untraceable maintainers. Some are actively developed and
can be fairly complex. The spectrum is pretty wide. Either way, I think
that preventing them from being independent in principle may hurt the ones
that can be pretty elaborated, and the ones that are stale may hurt
Neutron's reputation because we're the ones who are supposed to look after
them (after all didn't we vouch for them??)

From a technical perspective, if there is a stable API for driver
plugins, having the driver managed outside of the core team shouldn't
be a problem. If there's no stable API, the driver shouldn't even
be outside of the core repository yet. I know the split has happened,
I don't know how stable the plugin APIs are, though.

Agreed, and making that stable interface is a key initiative in mitaka.

From a governance perspective, I agree it is desirable to enable
(but not require) drivers to live outside of core. But see the previous
paragraph for caveats.

dhellmann: we have previously said that projects run by different teams
talk to each other over rest interfaces as a way of clearly delineating
boundaries

As much as I agree wholeheartedly with this statement (which I made myself
during the GBP/Neutron saga), it's unrealistic to convert the interface
between Neutron and its extension mechanisms to be purely restful,
especially for the price that will have to be paid in the process.

Right, I think what we're saying is that you should stop treating
these things as extensions. There are true technical issues introduced
by the need to have strong API guarantees to support out-of-tree
extensions. As Sean mentioned in his response, the TC and community
want projects to have stable, fixed, APIs that do not change based
on deployment choices, so it is easy for users to understand the
API and so we can enable interoperability between deployments.
DefCore depends on these fixed APIs because of the way tests from
the Tempest suite are used in the validation process. Continuing
to support extensions in Neutron is going to make broad adoption
of Neutron APIs for DefCore harder.

sdague: I don't think anything should be extending the neutron API that
isn't controlled by the neutron core team.

The core should be about the core, why would what's built on top be
controlled by the core? By comparison, it's like saying a SIG on the
physical layer of the OSI stack dictates what a SIG on the session layer
should do. It stifles innovation and prevents problems from being solved by
the right domain experts.

It needs to be possible to build on top of neutron without injecting
yourself into the guts of neutron at runtime. See above.

In point of fact, it is possible, and there is an API to do so, but most choose not to. I won't say that's an argument to keep extensions, but it might be worth examining why people are choosing that route, because I think it points to a big innovation/velocity killer in "the openstack way".

One possible interpretation: we have all these rules that basically amount to: 1) don't be so small you can't be a wsgi/db app, which is expensive in the current wild west mode of building them, 2) don't be so large that we feel you've diverged too much from what we want things to look like, and 3) be exactly like a rest service with some driver backends implementing some sort of *aaS.

That leaves a pretty narrow, and relatively expensive, runway.

We don't want extensions for reasons of interop, fine. I think it's a fairly silly argument to say that rest APIs can be optional, but extensions to an api can't, because that extra '/foobar/' is the killer, but whatever. However, maybe we should devote some thinking as to why neutron extensions are being used, and how we could leverage the dev work that doesn't feel that jumping through the above hoops is appropriate/worth it/etc.

Thanks,
doug

Doug

That's all I managed to process whilst reading the log. I am sure I missed
some important comments and I apologize for not replying to them; one thing
I didn't miss for sure was all the hugging :)

Thanks for acknowledging the discussion and the time and consideration
given during the TC meeting.

Cheers,
Armando

[1] http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-12-08-20.01.html

--
Thierry Carrez (ttx)


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Message: 39
Date: Wed, 9 Dec 2015 17:32:06 +0000
From: Arkady_Kanevsky@DELL.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [cinder] Dependencies of snapshots on
volumes
Message-ID:
0c2713f7979240288bbb5f912c239ddd@AUSX13MPS308.AMER.DELL.COM
Content-Type: text/plain; charset="us-ascii"

You can do a lazy copy that happens only when the volume or snapshot is deleted.
You will need to have a refcount on the metadata.
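
An illustrative-only sketch of that refcounting (not Cinder code; the names
are made up): the backing data is reclaimed only when neither the volume nor
any snapshot references it any more.

class BackingStore(object):
    def __init__(self):
        self.refcount = 0
        self.reclaimed = False

    def retain(self):
        self.refcount += 1

    def release(self):
        self.refcount -= 1
        if self.refcount == 0:
            self.reclaimed = True   # a real driver would free the storage here


store = BackingStore()
store.retain()              # the volume references the store
store.retain()              # a snapshot references the store

store.release()             # volume deleted: data survives for the snapshot
assert not store.reclaimed
store.release()             # last snapshot deleted: data reclaimed lazily
assert store.reclaimed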

-----Original Message-----
From: Li, Xiaoyan [mailto:xiaoyan.li@intel.com]
Sent: Tuesday, December 08, 2015 10:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [cinder] Dependencies of snapshots on volumes

Hi all,

Currently when deleting a volume, it checks whether there are snapshots created from it. If yes deletion is prohibited. But it allows to extend the volume, no check whether there are snapshots from it.

The two behaviors in Cinder are not consistent from my viewpoint.

In backend storage, their behaviors are same.
For full snapshot, if still in copying progress, both extend and deletion are not allowed. If snapshot copying finishes, both extend and deletion are allowed.
For incremental snapshot, both extend and deletion are not allowed.

As a result, this raises two concerns here:
1. Let such operations behavior same in Cinder.
2. I prefer to let storage driver decide the dependencies, not in the general core codes.

Meanwhile, if we let the driver decide the dependencies, the following changes need to be done in Cinder:
1. When creating a snapshot from a volume, it needs to copy all metadata of the volume to the snapshot. Currently it doesn't.
Any other potential issues please let me know.

Any input will be appreciated.

Best wishes
Lisa


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:


OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

End of OpenStack-dev Digest, Vol 44, Issue 33



OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
responded Dec 10, 2015 by Kekane, Abhishek
0 votes

-----Original Message-----
From: Kekane, Abhishek [mailto:Abhishek.Kekane@nttdata.com]
Sent: 10 December 2015 12:56
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [python-glanceclient] Return request-id to caller

-----Original Message-----
From: stuart.mclaren@hp.com [mailto:stuart.mclaren@hp.com]
Sent: 09 December 2015 23:54
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [python-glanceclient] Return request-id to caller

Excerpts from Flavio Percoco's message of 2015-12-09 09:09:10 -0430:

On 09/12/15 11:33 +0000, Kekane, Abhishek wrote:

Hi Devs,

We are adding support for returning 'x-openstack-request-id' to the
caller as per the design proposed in cross-project specs:

http://specs.openstack.org/openstack/openstack-specs/specs/
return-request-id.html

Problem Description:

Cannot add a new property of list type to the warlock.model object.

How is a model object created:

Let's take an example of glanceclient.api.v2.images.get() call [1]:

Here after getting the response we call model() method. This model()
does the job of creating a warlock.model object(essentially a dict)
based on the schema given as argument (image schema retrieved from
glance in this case). Inside
model() the raw() method simply return the image schema as JSON
object. The advantage of this warlock.model object over a simple
dict is that it validates any changes to object based on the rules specified in the reference schema.
The keys of this model object are available as object properties to
the caller.

Underlying reason:

The schema for different sub APIs is returned a bit differently. For
images, metadef APIs glance.schema.Schema.raw() is used which
returns a schema containing "additionalProperties": {"type":
"string"}. Whereas for members and tasks APIs
glance.schema.Schema.minimal() is used to return schema object which does not contain "additionalProperties".

So we can add extra properties of any type to the model object
returned from members or tasks API but for images and metadef APIs
we can only add properties which can be of type string. Also for the
latter case we depend on the glance configuration to allow additional properties.

As per our analysis we have come up with two approaches for
resolving this
issue:

Approach #1: Inject request_ids property in the warlock model
object in glance client

Here we do the following:

  1. Inject the 'request_ids' as additional property into the model
    object (returned from model())

  2. Return the model object which now contains request_ids property

Limitations:

  1. Because the glance schemas for images and metadef only allows
    additional properties of type string, so even though natural type of
    request_ids should be list we have to make it as a comma separated
    'string' of request ids as a compromise.

  2. Lot of extra code is needed to wrap objects returned from the
    client API so that the caller can get request ids. For example we
    need to write wrapper classes for dict, list, str, tuple, generator.

  3. Not a good design as we are adding a property which should
    actually be a base property but added as additional property as a compromise.

  4. There is a dependency on glance whether to allow
    custom/additional properties or not. [2]

Approach #2: Add 'request_ids' property to all schema definitions
in glance

Here we add 'request_ids' property as follows to the various APIs (schema):

"request_ids": {
    "type": "array",
    "items": {
        "type": "string"
    }
}

Doing this will make changes in glance client very simple as
compared to approach#1.

This also looks a better design as it will be consistent.

We simply need to modify the request_ids property in various API
calls for example glanceclient.v2.images.get().

Hey Abhishek,

thanks for working on this.

To be honest, I'm a bit confused on why the request_id needs to be an
attribute of the image. Isn't it passed as a header? Does it have to
be an attribute so we can "print" it?

The requirement they're trying to meet is to make the request id
available to the user of the client library [1]. The user typically
doesn't have access to the headers, so the request id needs to be part
of the payload returned from each method. In other clients

Will this work if the payload is image data?

I think yes, let me test this as well

that work with simple data types, they've subclassed dict, list, etc.
to add the extra property. This adds the request id to the return
value without making a breaking change to the API of the client
library.
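
A minimal sketch of that pattern (nothing here is taken from an actual
client; the class names are made up): the return value still behaves like a
plain list or dict but also carries the request ids.

class RequestIdList(list):
    def __init__(self, values, request_ids=None):
        super(RequestIdList, self).__init__(values)
        self.request_ids = request_ids or []


class RequestIdDict(dict):
    def __init__(self, values, request_ids=None):
        super(RequestIdDict, self).__init__(values)
        self.request_ids = request_ids or []


# The caller keeps normal list/dict semantics plus the extra attribute.
images = RequestIdList([{'id': 'abc'}], request_ids=['req-123'])
print('%s %s' % (images[0]['id'], images.request_ids))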

Abhishek, would it be possible to add the request id information to
the schema data in glance client, before giving it to warlock?
I don't know whether warlock asks for the schema or what form that
data takes (dictionary, JSON blob, etc.). If it's a dictionary visible
to the client code it would be straightforward to add data to it.

Yes, it is possible to add request-id to the schema before giving it to warlock, but since it's a contract, IMO it doesn't look good to modify the schema on the client side.

Failing that, is it possible to change warlock to allow extra
properties with arbitrary types to be added to objects? Because
validating inputs to the constructor is all well and good, but
breaking the ability to add data to an object is a bit un-pythonic.

IMO there is no point in changing warlock, as it is a 3rd party module.

If we end up having to change the schema definitions in the Glance
API, that also means changing those API calls to add the request id to
the return value, right?

IMO there will be no change to the API calls, as the request-id will be injected in glanceclient, and it doesn't have any impact on glance.
Also, we can make this request-id non-mandatory if required.

Doug

[1] http://specs.openstack.org/openstack/openstack-specs/specs/return-request-id.html

As it is presented in your email, I'd probably go with option #2 but
I'm curious to know the answer to my question.

IMO approach #2 is better; having request-id as an attribute in the schema will be consistent across all APIs.

So we will make a change in glance to add request-id as a base property in the schema, and inject the request-id in glanceclient from the response headers.

The code change will look like this:

diff --git a/glance/api/v2/images.py b/glance/api/v2/images.py
index bb7949c..2a760a7 100644
--- a/glance/api/v2/images.py
+++ b/glance/api/v2/images.py
@@ -807,6 +807,10 @@ class ResponseSerializer(wsgi.JSONResponseSerializer):

 def get_base_properties():
     return {
+        'request_ids': {
+            'type': 'array',
+            'items': {'type': 'string'}
+        },
         'id': {
             'type': 'string',
             'description': _('An identifier for the image'),

Changes in glanceclient to assign request-id from response headers

openstack@openstack-136:~/python-glanceclient$ git diff
diff --git a/glanceclient/v2/images.py b/glanceclient/v2/images.py
index 4fdcea2..65b0d6c 100644
--- a/glanceclient/v2/images.py
+++ b/glanceclient/v2/images.py
@@ -182,6 +182,7 @@ class Controller(object):
         # NOTE(bcwaldon): remove 'self' for now until we have an elegant
         # way to pass it into the model constructor without conflict
         body.pop('self', None)
+        body['request_ids'] = [resp.headers['x-openstack-request-id']]
         return self.model(**body)

     def data(self, image_id, do_checksum=True):

Output:

>>> import glanceclient
>>> glance = glanceclient.Client('2', endpoint='http://10.69.4.136:9292/',
...                              token='16038d125b804eef805c7020bbebc769')
>>> get = glance.images.get('a00b6125-94d9-43a8-a497-839cf25a8fdd')
>>> get
{u'status': u'active', u'tags': [], u'container_format': u'aki', u'min_ram': 0, u'updated_at': u'2015-11-18T13:04:18Z', u'visibility': u'public', 'request_ids': ['req-68926f34-4434-45dc-822c-c4eb94506c63'], u'owner': u'd1ee7fd5dcc341c3973f19f790238e63', u'file': u'/v2/images/a00b6125-94d9-43a8-a497-839cf25a8fdd/file', u'min_disk': 0, u'virtual_size': None, u'id': u'a00b6125-94d9-43a8-a497-839cf25a8fdd', u'size': 4979632, u'name': u'cirros-0.3.4-x86_64-uec-kernel', u'checksum': u'8a40c862b5735975d82605c1dd395796', u'created_at': u'2015-11-18T13:04:18Z', u'disk_format': u'aki', u'protected': False, u'schema': u'/v2/schemas/image'}
>>> get.request_ids
['req-68926f34-4434-45dc-822c-c4eb94506c63']

Hi Flavio, Glance Cores,

To implement this requirement, I need to make changes in the glance schema to add the request_id attribute as a property.
Do I need to submit a glance-spec for this change, or is just a blueprint fine since the cross-project spec is already approved?

Please suggest.

Thank You,

Abhishek

Cheers,
Flavio

Please let us know which approach is better or any suggestions for the same.

[1] https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v2/images.py#L179

[2] https://github.com/openstack/glance/blob/master/glance/api/v2/images.py#L944


responded Dec 11, 2015 by Kekane,_Abhishek (3,940 points)   5 7
0 votes

-----Original Message-----
From: Kekane, Abhishek [mailto:Abhishek.Kekane@nttdata.com]
Sent: 11 December 2015 09:19
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [python-glanceclient] Return request-id to
caller

-----Original Message-----
From: Kekane, Abhishek [mailto:Abhishek.Kekane@nttdata.com]
Sent: 10 December 2015 12:56
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [python-glanceclient] Return request-id to
caller

-----Original Message-----
From: stuart.mclaren@hp.com [mailto:stuart.mclaren@hp.com]
Sent: 09 December 2015 23:54
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [python-glanceclient] Return request-id to
caller

Excerpts from Flavio Percoco's message of 2015-12-09 09:09:10 -0430:

On 09/12/15 11:33 +0000, Kekane, Abhishek wrote:

Hi Devs,

We are adding support for returning 'x-openstack-request-id' to the
caller as per the design proposed in cross-project specs:

http://specs.openstack.org/openstack/openstack-specs/specs/return-request-id.html

Problem Description:

Cannot add a new property of list type to the warlock.model object.

How is a model object created:

Let's take an example of glanceclient.api.v2.images.get() call [1]:

Here after getting the response we call model() method. This model()
does the job of creating a warlock.model object(essentially a dict)
based on the schema given as argument (image schema retrieved from
glance in this case). Inside
model() the raw() method simply return the image schema as JSON
object. The advantage of this warlock.model object over a simple
dict is that it validates any changes to object based on the rules specified
in the reference schema.
The keys of this model object are available as object properties to
the caller.

Underlying reason:

The schema for the different sub-APIs is returned a bit differently. For
the images and metadef APIs, glance.schema.Schema.raw() is used, which
returns a schema containing "additionalProperties": {"type": "string"}.
For the members and tasks APIs, glance.schema.Schema.minimal() is used,
which returns a schema object that does not contain "additionalProperties".

So we can add extra properties of any type to the model object
returned from the members or tasks API, but for the images and metadef APIs
we can only add properties of type string. In the latter case we also
depend on the glance configuration allowing additional
properties.

As per our analysis we have come up with two approaches for
resolving this
issue:

Approach #1: Inject request_ids property in the warlock model
object in glance client

Here we do the following:

  1. Inject 'request_ids' as an additional property into the model
    object (returned from model())

  2. Return the model object which now contains request_ids property

Limitations:

  1. Because the glance schemas for images and metadef only allow
    additional properties of type string, even though the natural type of
    request_ids is a list, we have to make it a comma-separated
    'string' of request ids as a compromise.

  2. A lot of extra code is needed to wrap objects returned from the
    client API so that the caller can get request ids. For example, we
    would need to write wrapper classes for dict, list, str, tuple, and generator.

  3. Not a good design, as we are adding a property which should
    actually be a base property, but is added as an additional property as a
    compromise.

  4. There is a dependency on whether glance is configured to allow
    custom/additional properties or not. [2]

Approach #2: Add 'request_ids' property to all schema definitions
in glance

Here we add the 'request_ids' property as follows to the various APIs
(schema):

"request_ids": {
    "type": "array",
    "items": {
        "type": "string"
    }
}

Doing this will make the changes in glanceclient very simple compared
to approach #1.

This also looks like a better design, as it will be consistent across APIs.

We simply need to modify the request_ids property in the various API
calls, for example glanceclient.v2.images.get().

Hey Abhishek,

thanks for working on this.

To be honest, I'm a bit confused on why the request_id needs to be an
attribute of the image. Isn't it passed as a header? Does it have to
be an attribute so we can "print" it?

The requirement they're trying to meet is to make the request id
available to the user of the client library [1]. The user typically
doesn't have access to the headers, so the request id needs to be part
of the payload returned from each method. In other clients

Will this work if the payload is image data?

I think yes, let me test this as well

that work with simple data types, they've subclassed dict, list, etc.
to add the extra property. This adds the request id to the return
value without making a breaking change to the API of the client
library.

Abhishek, would it be possible to add the request id information to
the schema data in glance client, before giving it to warlock?
I don't know whether warlock asks for the schema or what form that
data takes (dictionary, JSON blob, etc.). If it's a dictionary visible
to the client code it would be straightforward to add data to it.

Yes, it is possible to add request-id to the schema before giving it to warlock, but
since the schema is a contract, IMO it doesn't look good to modify it on the client side.

Failing that, is it possible to change warlock to allow extra
properties with arbitrary types to be added to objects? Because
validating inputs to the constructor is all well and good, but
breaking the ability to add data to an object is a bit un-pythonic.

IMO there is no point in changing warlock, as it is a 3rd party module.

If we end up having to change the schema definitions in the Glance
API, that also means changing those API calls to add the request id to
the return value, right?

IMO there will be no change to the API calls, as the request-id will be injected in
glanceclient, and it doesn't have any impact on glance.
Also, we can make this request-id non-mandatory if required.

Doug

[1] http://specs.openstack.org/openstack/openstack-specs/specs/return-request-id.html

As it is presented in your email, I'd probably go with option #2 but
I'm curious to know the answer to my question.

IMO approach #2 is better; having request-id as an attribute in the schema will be
consistent across all APIs.

I have a bit of a problem understanding (and agreeing with) how the request ID would be part of the image base properties. If our client can't handle providing info from the headers to its caller, IMO we need to fix the client, not the image.

So we will make a change in glance to add request-id as a base property in the
schema, and inject the request-id in glanceclient from the response headers.

The code change will look like this:

diff --git a/glance/api/v2/images.py b/glance/api/v2/images.py
index bb7949c..2a760a7 100644
--- a/glance/api/v2/images.py
+++ b/glance/api/v2/images.py
@@ -807,6 +807,10 @@ class ResponseSerializer(wsgi.JSONResponseSerializer):

 def get_base_properties():
     return {
+        'request_ids': {
+            'type': 'array',
+            'items': {'type': 'string'}
+        },
         'id': {
             'type': 'string',
             'description': _('An identifier for the image'),

Changes in glanceclient to assign request-id from response headers

openstack@openstack-136:~/python-glanceclient$ git diff
diff --git a/glanceclient/v2/images.py b/glanceclient/v2/images.py
index 4fdcea2..65b0d6c 100644
--- a/glanceclient/v2/images.py
+++ b/glanceclient/v2/images.py
@@ -182,6 +182,7 @@ class Controller(object):
         # NOTE(bcwaldon): remove 'self' for now until we have an elegant
         # way to pass it into the model constructor without conflict
         body.pop('self', None)
+        body['request_ids'] = [resp.headers['x-openstack-request-id']]
         return self.model(**body)

     def data(self, image_id, do_checksum=True):

Output:

>>> import glanceclient
>>> glance = glanceclient.Client('2', endpoint='http://10.69.4.136:9292/',
...                              token='16038d125b804eef805c7020bbebc769')
>>> get = glance.images.get('a00b6125-94d9-43a8-a497-839cf25a8fdd')
>>> get
{u'status': u'active', u'tags': [], u'container_format': u'aki', u'min_ram': 0, u'updated_at': u'2015-11-18T13:04:18Z', u'visibility': u'public', 'request_ids': ['req-68926f34-4434-45dc-822c-c4eb94506c63'], u'owner': u'd1ee7fd5dcc341c3973f19f790238e63', u'file': u'/v2/images/a00b6125-94d9-43a8-a497-839cf25a8fdd/file', u'min_disk': 0, u'virtual_size': None, u'id': u'a00b6125-94d9-43a8-a497-839cf25a8fdd', u'size': 4979632, u'name': u'cirros-0.3.4-x86_64-uec-kernel', u'checksum': u'8a40c862b5735975d82605c1dd395796', u'created_at': u'2015-11-18T13:04:18Z', u'disk_format': u'aki', u'protected': False, u'schema': u'/v2/schemas/image'}
>>> get.request_ids
['req-68926f34-4434-45dc-822c-c4eb94506c63']

Hi Flavio, Glance Cores,

To implement this requirement, I need to make changes in the glance schema to
add the request_id attribute as a property.
Do I need to submit a glance-spec for this change, or is just a blueprint fine since
the cross-project spec is already approved?

Please suggest.

Thank You,

Abhishek

Please file a lite-spec (put the details into a bug report and mark it as wishlist). We can continue the discussion there.

  • Erno

Cheers,
Flavio

Please let us know which approach is better or any suggestions for the
same.

[1] https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v2/images.py#L179

[2] https://github.com/openstack/glance/blob/master/glance/api/v2/images.py#L944



responded Dec 11, 2015 by kuvaja_at_hpe.com (1,320 points)   1
...