
[openstack-dev] [kolla] Proposing duonghq for core


Hello,

I'd like to start voting to include Duong (duonghq) in Kolla and
Kolla-ansible core teams. Voting will be open for 2 weeks (ends at
21st of March).

Consider this my +1 vote.

Cheers,
Michal


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
asked Mar 22, 2017 in openstack-dev by Michał_Jastrzębski (9,220 points)

15 Responses


+1 from me

On Wed, Mar 8, 2017 at 2:41 PM, Michał Jastrzębski inc007@gmail.com wrote:

Hello,

I'd like to start voting to include Duong (duonghq) in Kolla and
Kolla-ansible core teams. Voting will be open for 2 weeks (ends at
21st of March).

Consider this my +1 vote.

Cheers,
Michal



--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me


responded Mar 8, 2017 by Lei_Zhang (6,360 points)

+1

On 08/03/17 08:29, Jeffrey Zhang wrote:
+1 from me

On Wed, Mar 8, 2017 at 2:41 PM, Michał Jastrzębski <inc007@gmail.com> wrote:

Hello,

I'd like to start voting to include Duong (duonghq) in Kolla and
Kolla-ansible core teams. Voting will be open for 2 weeks (ends at
21st of March).

Consider this my +1 vote.

Cheers,
Michal




responded Mar 8, 2017 by Paul_Bourke (3,840 points)

+1

2017-03-08 8:29 GMT+00:00 Jeffrey Zhang zhang.lei.fly@gmail.com:

+1 from me

On Wed, Mar 8, 2017 at 2:41 PM, Michał Jastrzębski inc007@gmail.com
wrote:

Hello,

I'd like to start voting to include Duong (duonghq) in Kolla and
Kolla-ansible core teams. Voting will be open for 2 weeks (ends at
21st of March).

Consider this my +1 vote.

Cheers,
Michal






responded Mar 8, 2017 by Eduardo_Gonzalez (1,120 points)

+1

On 8 Mar 2017, at 07:41, Michał Jastrzębski inc007@gmail.com wrote:

Hello,

I'd like to start voting to include Duong (duonghq) in Kolla and
Kolla-ansible core teams. Voting will be open for 2 weeks (ends at
21st of March).

Consider this my +1 vote.

Cheers,
Michal



--
Christian Berendt
Chief Executive Officer (CEO)

Mail: berendt@betacloud-solutions.de
Web: https://www.betacloud-solutions.de

Betacloud Solutions GmbH
Teckstrasse 62 / 70190 Stuttgart / Germany

Managing Director (Geschäftsführer): Christian Berendt
Registered office: Stuttgart
Commercial register: Amtsgericht Stuttgart, HRB 756139


responded Mar 8, 2017 by berendt_at_betacloud (1,360 points)

+1

2017-03-08 7:34 GMT-03:00 Christian Berendt berendt@betacloud-solutions.de:

+1

On 8 Mar 2017, at 07:41, Michał Jastrzębski inc007@gmail.com wrote:

Hello,

I'd like to start voting to include Duong (duonghq) in Kolla and
Kolla-ansible core teams. Voting will be open for 2 weeks (ends at
21st of March).

Consider this my +1 vote.

Cheers,
Michal




responded Mar 8, 2017 by Mauricio_Lima (1,000 points)

+1

From: Mauricio Lima mauriciolimab@gmail.com
Reply-To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Date: Wednesday, March 8, 2017 at 5:34 AM
To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [kolla] Proposing duonghq for core

+1

2017-03-08 7:34 GMT-03:00 Christian Berendt berendt@betacloud-solutions.de:
+1

On 8 Mar 2017, at 07:41, Michał Jastrzębski inc007@gmail.com wrote:

Hello,

I'd like to start voting to include Duong (duonghq) in Kolla and
Kolla-ansible core teams. Voting will be open for 2 weeks (ends at
21st of March).

Consider this my +1 vote.

Cheers,
Michal



responded Mar 8, 2017 by Kwasniewska,_Alicja (740 points)

+1!

-----Original Message-----
From: Michał Jastrzębski inc007@gmail.com
Reply-To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Date: Wednesday, March 8, 2017 at 1:41 AM
To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Subject: [openstack-dev] [kolla] Proposing duonghq for core

Hello,

I'd like to start voting to include Duong (duonghq) in Kolla and
Kolla-ansible core teams. Voting will be open for 2 weeks (ends at
21st of March).

Consider this my +1 vote.

Cheers,
Michal

responded Mar 8, 2017 by Steven_Dake_(stdake) (24,540 points)

+1

------------------ Original ------------------
From: "OpenStack-dev-request";openstack-dev-request@lists.openstack.org;
Date: Thu, Mar 9, 2017 03:25 AM
To: "openstack-dev"openstack-dev@lists.openstack.org;

Subject: OpenStack-dev Digest, Vol 59, Issue 24

Send OpenStack-dev mailing list submissions to
openstack-dev@lists.openstack.org

To subscribe or unsubscribe via the World Wide Web, visit
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
or, via email, send a message with subject or body 'help' to
openstack-dev-request@lists.openstack.org

You can reach the person managing the list at
openstack-dev-owner@lists.openstack.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of OpenStack-dev digest..."

Today's Topics:

  1. [acceleration] No team meeting today, resume next Wed
    (Zhipeng Huang)
  2. Re: [ironic] OpenStack client default ironic API version
    (Dmitry Tantsur)
  3. Re: [release][tripleo][fuel][kolla][ansible] Ocata Release
    countdown for R+2 Week, 6-10 March (Doug Hellmann)
  4. Re: [tc][appcat][murano][app-catalog] The future of the App
    Catalog (Ian Cordasco)
  5. Re: [telemetry][requirements] ceilometer grenade gate failure
    (gordon chung)
  6. Re: [acceleration] No team meeting today, resume next Wed
    (Harm Sluiman)
  7. [trove] today weekly meeting (Amrith Kumar)
  8. Re: [kolla][ubuntu][libvirt] Is libvirt 2.5.0 in ubuntu cloud
    archive ocata repo bust (Corey Bryant)
  9. Re: [ironic] OpenStack client default ironic API version
    (Mario Villaplana)

    1. [neutron] [infra] Depends-on tag effect (Hirofumi Ichihara)
    2. Re: [nova] Question to clarify versioned notifications
      (Matt Riedemann)
    3. Re: [neutron] [infra] Depends-on tag effect (ZZelle)
    4. Re: [tc][appcat] The future of the App Catalog (Jay Pipes)
    5. [nova][placement-api] Is there any document about
      openstack-placement-api for installation and configure? (Yu Wei)
    6. Re: [neutron] [infra] Depends-on tag effect (Andreas Jaeger)
    7. Re: [TripleO][Heat] Selectively disabling deployment
      resources (James Slagle)
    8. [ironic] Pike PTG report (Dmitry Tantsur)
    9. Re: [nova][placement-api] Is there any document about
      openstack-placement-api for installation and configure? (Chris Dent)
    10. [cinder][glance][horizon][keystone][nova][qa][swift] Feedback
      needed: Removal of legacy per-project vanity domain redirects
      (Monty Taylor)
    11. Re: [nova][placement-api] Is there any document about
      openstack-placement-api for installation and configure? (Yu Wei)
    12. Re: [nova][placement-api] Is there any document about
      openstack-placement-api for installation and configure? (Chris Dent)
    13. Re: [neutron] [infra] Depends-on tag effect (Hirofumi Ichihara)
    14. Re: [cinder][glance][horizon][keystone][nova][qa][swift]
      Feedback needed: Removal of legacy per-project vanity domain
      redirects (Lance Bragstad)
    15. Re: [nova][placement-api] Is there any document about
      openstack-placement-api for installation and configure? (Yu Wei)
    16. Re: [puppet] puppet-cep beaker test (Scheglmann, Stefan)
    17. Re: [puppet] puppet-cep beaker test (Alex Schultz)
    18. Re: [tc][appcat] The future of the App Catalog
      (David Moreau Simard)
    19. Re: [cinder][glance][horizon][keystone][nova][qa][swift]
      Feedback needed: Removal of legacy per-project vanity domain
      redirects (Brian Rosmaita)
    20. Re: [nova][placement-api] Is there any document about
      openstack-placement-api for installation and configure? (Chris Dent)
    21. Re: [cinder][glance][horizon][keystone][nova][qa][swift]
      Feedback needed: Removal of legacy per-project vanity domain
      redirects (Daniel P. Berrange)
    22. Re: [kolla][ubuntu][libvirt] Is libvirt 2.5.0 in ubuntu cloud
      archive ocata repo bust (Jeffrey Zhang)
    23. Re: [TripleO][Heat] Selectively disabling deployment
      resources (James Slagle)
    24. Re: [neutron] [infra] Depends-on tag effect (Armando M.)
    25. Re: [ironic] OpenStack client default ironic API version
      (Jim Rollenhagen)
    26. Re: [tc][appcat] The future of the App Catalog (Fox, Kevin M)
    27. Re: [kolla] Proposing duonghq for core (Kwasniewska, Alicja)
    28. Re: [kolla][ubuntu][libvirt] Is libvirt 2.5.0 in ubuntu cloud
      archive ocata repo bust (Corey Bryant)
    29. Re: [infra][tripleo] initial discussion for a new periodic
      pipeline (Jeremy Stanley)
    30. [requirements] pycrypto is dead, long live pycryptodome... or
      cryptography... (Matthew Thode)
    31. Re: [cinder][glance][horizon][keystone][nova][qa][swift]
      Feedback needed: Removal of legacy per-project vanity domain
      redirects (Andreas Jaeger)
    32. [kolla] kolla 4.0.0.0rc2 (ocata) (no-reply@openstack.org)
    33. Re: [requirements] pycrypto is dead, long live
      pycryptodome... or cryptography... (Davanum Srinivas)
    34. [kolla] kolla-ansible 4.0.0.0rc2 (ocata) (no-reply@openstack.org)
    35. Re: [cinder][glance][horizon][keystone][nova][qa][swift]
      Feedback needed: Removal of legacy per-project vanity domain
      redirects (Steve Martinelli)
    36. Re: [requirements] pycrypto is dead, long live
      pycryptodome... or cryptography... (Matthew Thode)

Message: 1
Date: Wed, 8 Mar 2017 20:22:29 +0800
From: Zhipeng Huang zhipengh512@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: [openstack-dev] [acceleration] No team meeting today, resume
next Wed
Message-ID:

Content-Type: text/plain; charset="utf-8"

Hi team,

As agreed per our PTG/VTG session, we will have the team meeting two weeks
after to give people enough time to prepare the BPs we discussed.

Therefore there will be no team meeting today, and the next meeting is on
next Wed.


Message: 2
Date: Wed, 8 Mar 2017 13:40:37 +0100
From: Dmitry Tantsur dtantsur@redhat.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [ironic] OpenStack client default ironic
API version
Message-ID: f33ad8bf-73ef-bfe6-61d4-7f6ec03f8758@redhat.com
Content-Type: text/plain; charset=windows-1252; format=flowed

On 03/07/2017 04:59 PM, Loo, Ruby wrote:
On 2017-03-06, 3:46 PM, "Mario Villaplana" mario.villaplana@gmail.com wrote:

Hi ironic,

At the PTG, an issue regarding the default version of the ironic API
used in our python-openstackclient plugin was discussed. [0] In short,
the issue is that we default to a very old API version when the user
doesn't otherwise specify it. This limits discoverability of new
features and makes the client more difficult to use for deployments
running the latest version of the code.

We came to the following consensus:

1. For a deprecation period, we should log a warning whenever the user
doesn't specify an API version, informing them of this change.

2. After the deprecation period:

a) OSC baremetal plugin will default to the latest available version

I think OSC and ironic CLI have the same behaviour -- are we only interested in OSC or are we interested in both, except that we also want to at some point soon perhaps, deprecate ironic CLI?

I think we should only touch OSC, because of planned deprecation you mention.

Also, by 'latest available version', the OSC plugin knows (or thinks it knows) what the latest version is [1]. Will you be using that, or 'latest'?

It will pass "latest" to the API, so it may end up with a version the client
side does not know about. This is intended, I think. It does have some
consequences if we make breaking changes like removing parameters. As we're not
overly keen on breaking changes anyway, this may not be a huge concern.

b) Specifying just macroversion will default to latest microversion
within that macroversion (example: --os-baremetal-api-version=1 would
default to 1.31 if 1.31 is the last microversion with 1 macroversion,
even if we have API 2.2 supported)
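Those defaulting rules can be sketched in a few lines of Python (a toy illustration only, not the actual OSC plugin code; 1.31 stands in for whatever microversion the client knows as latest):

    # Sketch only; names are made up for illustration.
    LATEST_KNOWN = (1, 31)  # latest microversion this client knows about

    def resolve_api_version(requested=None):
        if requested is None:
            return "latest"        # after the deprecation period, let the
                                   # server pick its latest version
        if "." not in requested:   # bare macroversion, e.g. "1"
            if int(requested) == LATEST_KNOWN[0]:
                return "%d.%d" % LATEST_KNOWN   # "1" -> "1.31"
            raise ValueError("unsupported macroversion: %s" % requested)
        return requested           # explicit microversion, passed through

    print(resolve_api_version())     # -> "latest"
    print(resolve_api_version("1"))  # -> "1.31"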

I have a patch up for review with the deprecation warning:
https://review.openstack.org/442153

Do you have an RFE? I'd like a spec for this too please.

Dunno if this change really requires a spec, but if you want one - let's have one :)

We should have an RFE anyway, obviously.

Please comment on that patch with any concerns.

We also still have yet to decide what a suitable deprecation period is
for this change, as far as I'm aware. Please respond to this email
with any suggestions on the deprecation period.

Thanks,
Mario


[0] https://etherpad.openstack.org/p/ironic-pike-ptg-operations L30

Thank YOU!

--ruby

[1] https://github.com/openstack/python-ironicclient/blob/f242c6af3b295051019aeabb4ec7cf82eb085874/ironicclient/osc/plugin.py#L29



Message: 3
Date: Wed, 08 Mar 2017 07:52:23 -0500
From: Doug Hellmann doug@doughellmann.com
To: openstack-dev openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [release][tripleo][fuel][kolla][ansible]
Ocata Release countdown for R+2 Week, 6-10 March
Message-ID: 1488977242-sup-5435@lrrr.local
Content-Type: text/plain; charset=UTF-8

Excerpts from Vladimir Kuklin's message of 2017-03-08 01:20:21 +0300:

Doug

I have proposed the change for Fuel RC2 [0], but it has W-1 set as I am
waiting for the final test results. If everything goes alright, I will
check the Workflow off and this RC2 can be cut as the release.

I've approved the RC2 tag and prepared
https://review.openstack.org/443116 with the final release tag. Please
+1 if that looks OK. I will approve it tomorrow.

Doug

[0] https://review.openstack.org/#/c/442775/

On Tue, Mar 7, 2017 at 3:39 AM, Jeffrey Zhang zhang.lei.fly@gmail.com
wrote:

Sorry for the late notice, but the Kolla project needs a new release candidate. I will
push it today.

On Tue, Mar 7, 2017 at 6:27 AM, Doug Hellmann doug@doughellmann.com
wrote:

Excerpts from Doug Hellmann's message of 2017-03-06 11:00:15 -0500:

Excerpts from Doug Hellmann's message of 2017-03-02 18:24:12 -0500:

Release Tasks


Liaisons for cycle-trailing projects should prepare their final
release candidate tags by Monday 6 March. The release team will
prepare a patch showing the final release versions on Wednesday 7
March, and PTLs and liaisons for affected projects should +1. We
will then approve the final releases on Thursday 8 March.

We have 13 cycle-trailing deliverables without final releases for Ocata.
All have at least one release candidate, so if no new release candidates
are proposed today I will prepare a patch using these versions as the
final and we will approve that early Wednesday.

If you know that you need a new release candidate, please speak up now.

If you know that you do not need a new release candidate, please also
let me know that.

Thanks!
Doug

$ list-deliverables --series ocata --missing-final -v
fuel                     fuel              11.0.0.0rc1  other  cycle-trailing
instack-undercloud       tripleo           6.0.0.0rc1   other  cycle-trailing
kolla-ansible            kolla             4.0.0.0rc1   other  cycle-trailing
kolla                    kolla             4.0.0.0rc1   other  cycle-trailing
openstack-ansible        OpenStackAnsible  15.0.0.0rc1  other  cycle-trailing
os-apply-config          tripleo           6.0.0.0rc1   other  cycle-with-milestones
os-cloud-config          tripleo           6.0.0.0rc1   other  cycle-with-milestones
os-collect-config        tripleo           6.0.0.0rc1   other  cycle-with-milestones
os-net-config            tripleo           6.0.0.0rc1   other  cycle-with-milestones
os-refresh-config        tripleo           6.0.0.0rc1   other  cycle-with-milestones
tripleo-heat-templates   tripleo           6.0.0.0rc1   other  cycle-trailing
tripleo-image-elements   tripleo           6.0.0.0rc1   other  cycle-trailing
tripleo-puppet-elements  tripleo           6.0.0.0rc1   other  cycle-trailing

I have lined up patches with the final release tags for all 3 projects.
Please review and +1 or propose a new patch with an updated release
candidate.

Ansible: https://review.openstack.org/442138
Kolla: https://review.openstack.org/442137
TripleO: https://review.openstack.org/442129
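Each of those reviews edits the corresponding deliverable file in the openstack/releases repo; the shape of such an entry is roughly the following (abridged, with the commit hash elided):

    # deliverables/ocata/kolla.yaml (abridged)
    releases:
      - version: 4.0.0.0
        projects:
          - repo: openstack/kolla
            hash: <sha of the release candidate being promoted>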

Doug




--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me



Message: 4
Date: Wed, 8 Mar 2017 08:25:21 -0500
From: Ian Cordasco sigmavirus24@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [tc][appcat][murano][app-catalog] The
future of the App Catalog
Message-ID:

Content-Type: text/plain; charset=UTF-8

-----Original Message-----
From: Christopher Aedo <doc@aedo.net>
Reply: OpenStack Development Mailing List (not for usage questions)
Date: March 7, 2017 at 22:11:22
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc][appcat][murano][app-catalog] The future of the App Catalog

On Mon, Mar 6, 2017 at 3:26 AM, Thierry Carrez wrote:

Hello everyone,

The App Catalog was created early 2015 as a marketplace of pre-packaged
applications that you can deploy using Murano. Initially a demo by
Mirantis, it was converted into an open upstream project team, and
deployed as a "beta" as apps.openstack.org.

Since then it grew additional categories (Glance images, Heat & Tosca
templates), but otherwise did not pick up a lot of steam. The website
(still labeled "beta") features 45 glance images, 6 Tosca templates, 13
heat templates and 94 murano packages (~30% of which are just thin
wrappers around Docker containers). Traffic stats show around 100 visits
per week, 75% of which only read the index page.

In parallel, Docker developed a pretty successful containerized
application marketplace (the Docker Hub), with hundreds of thousands of
regularly-updated apps. Keeping the App Catalog around (including its
thinly-wrapped Docker container Murano packages) make us look like we
are unsuccessfully trying to compete with that ecosystem, while
OpenStack is in fact completely complementary.

Without something like Murano "thinly wrapping" docker apps, how would
you propose current users of OpenStack clouds deploy docker apps? Or
any other app for that matter? It seems a little unfair to talk about
murano apps this way when no reasonable alternative exists for easily
deploying docker apps. When I look back at the recent history of how
we've handled containers (nova-docker, magnum, kubernetes, etc) it
does not seem like we're making it easy for the folks who want to
deploy a container on their cloud...

Please understand I am not pleading to keep the Community App Catalog
alive in perpetuity. This just sounds like an unfair point of
comparison. One of the biggest challenges we've faced with the app
catalog since day one is that there is no such thing as a simple
definition of an "OpenStack Application". OpenStack is an IaaS before
anything else, and to my knowledge there is no universally accepted
application deployment mechanism for OpenStack clouds. Heat doesn't
solve that problem as its very operator focused, and while being very
popular and used heavily, it's not used as a way to share generic
templates suitable for deploying apps across different clouds. Murano
is not widely adopted (last time I checked it's not available on any
public clouds, though I hear it is actually used on several
university clouds, and it's also used on a few private clouds I'm
aware of.)

As a place to find things that run on OpenStack clouds, the app
catalog did a reasonable job. If anything, the experiment showed that
there is no community looking for a place to share OpenStack-specific
applications. There are definitely communities for PaaS layers (cloud
foundry, mesosphere, docker, kubernetes), but I don't see any
community for openstack-native applications that can be deployed on
any cloud, nor a commonly accepted way to deploy them.

In the past we have retired projects that were dead upstream. The App
Catalog is not in this case: it has an active maintenance team, which
has been successfully maintaining the framework and accepting
applications. If we end up retiring the App Catalog, it would clearly
not be a reflection on that team's performance, which has been stellar
despite limited resources. It would be because the beta was arguably not
successful in building an active marketplace of applications, and
because its continuous existence is not a great fit from a strategy
perspective. Such removal would be a first for our community, but I
think it's now time to consider it.

Before we discuss or decide anything at the TC level, I'd like to
collect everyone's thoughts (and questions) on this. Please feel free to
reply to this thread (or reach out to me privately if you prefer). Thanks !

As the former PTL I am obviously a little bit biased. Even though my
focus has shifted and I've stepped away from the app catalog, I had
been spending a lot of time trying to figure out how to make
applications an easy to run thing on OpenStack. I've also been trying
to find a community of people who are looking for that, and it doesn't
seem like they've materialized; possibly because that community
doesn't exist? Or else we just haven't been able to figure out where
they're hiding ;)

The one consideration that is pretty important here is what this would
mean to the Murano community. Those folks have contributed time
and resources to the app catalog project. They've also standardized
on the app catalog as the distribution mechanism, intending to make
the app catalog UI a native component for Murano. We do need to make
sure that if the app catalog is retired, it doesn't hamper or impact
people who have already deployed Murano and are counting on finding
the apps in the app catalog.

All of this is true. But Murano still doesn't have a stable way to
store artifacts. In fact, it seems like Murano relies on a lot of
unstable OpenStack infrastructure. While lots of people have
contributed time, energy, sweat, and tears to the project there are
still plenty of things that make Murano less than desirable. Perhaps
that's why the project has found so few adopters. I'm sure there are
plenty of people who want to use an OpenStack cloud to deploy
applications. In fact, I know there are companies that try to provide
that kind of support via Heat templates. All that said, I don't think
allowing for competition with Murano is a bad thing.

--
Ian Cordasco


Message: 5
Date: Wed, 8 Mar 2017 13:27:12 +0000
From: gordon chung gord@live.ca
To: "openstack-dev@lists.openstack.org"

Subject: Re: [openstack-dev] [telemetry][requirements] ceilometer
grenade gate failure
Message-ID:

Content-Type: text/plain; charset="Windows-1252"

On 07/03/17 11:16 PM, Tony Breeds wrote:
Sure.

I've approved it but it's blocked behind https://review.openstack.org/#/c/442886/1

awesome! thanks Tony!

cheers,

--
gord


Message: 6
Date: Wed, 8 Mar 2017 09:01:00 -0500
From: Harm Sluiman harm.sluiman@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [acceleration] No team meeting today,
resume next Wed
Message-ID: DC117932-5897-4A7F-B521-522E06D2115F@gmail.com
Content-Type: text/plain; charset=us-ascii

Thanks for the update. Unfortunately I could not attend and can't seem to find a summary or anything about what took place. A pointer would be appreciated please ;-)

Thanks for your time
Harm Sluiman
harm.sluiman@gmail.com

On Mar 8, 2017, at 7:22 AM, Zhipeng Huang zhipengh512@gmail.com wrote:

Hi team,

As agreed per our PTG/VTG session, we will have the team meeting two weeks after to give people enough time to prepare the BPs we discussed.

Therefore there will be no team meeting today, and the next meeting is on next Wed.



Message: 7
Date: Wed, 8 Mar 2017 09:03:21 -0500
From: "Amrith Kumar" amrith.kumar@gmail.com
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [trove] today weekly meeting
Message-ID: 00bf01d29814$bf3fe810$3dbfb830$@gmail.com
Content-Type: text/plain; charset="us-ascii"

While I try to schedule my life to not conflict with the weekly Trove
meeting, it appears that Wednesday afternoon at 1pm is a particularly
popular time for people to want to meet me.

This week, and next week are no exception. While I'd tried to avoid these
conflicts, I've managed to be unable to do it (again).

Nikhil (slicknik) has kindly agreed to run the meeting today, same place,
same time as always.

Thanks Nikhil.

-amrith



Message: 8
Date: Wed, 8 Mar 2017 09:03:27 -0500
From: Corey Bryant corey.bryant@canonical.com
To: "OpenStack Development Mailing List (not for usage questions)"

Cc: openstack openstack@lists.openstack.org
Subject: Re: [openstack-dev] [kolla][ubuntu][libvirt] Is libvirt 2.5.0
in ubuntu cloud archive ocata repo bust
Message-ID:

Content-Type: text/plain; charset="utf-8"

On Tue, Mar 7, 2017 at 10:28 PM, Jeffrey Zhang zhang.lei.fly@gmail.com
wrote:

The Kolla ubuntu deploy gate is red now; here is the related bug [0].

libvirt failed to access the console.log file when booting an instance.
After some debugging, I found the following.

Jeffrey, This is likely fixed in ocata-proposed and should be promoted to
ocata-updates soon after testing completes.
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1667033.

Corey

how console.log works

nova creates an empty console.log owned by nova:nova (this is actually a
workaround for another bug [1]), then libvirt (running as root) changes the
file owner to the qemu process user/group (configured by dynamic_ownership).
Now the qemu process can write logs into this file.

what's wrong now

libvirt 2.5.0 stopped changing the file owner, so qemu/libvirt fails to write
logs into the console.log file.

other test

  • ubuntu + fallback libvirt 1.3.x works [2]
  • ubuntu + libvirt 2.5.0 + changing the qemu process user/group to
    nova:nova works, too [3]
  • centos + libvirt 2.0.0 works; never saw such an issue in centos.

conclusion

I guess there is something wrong in libvirt 2.5.0 with dynamic_ownership.

[0] https://bugs.launchpad.net/kolla-ansible/+bug/1668654
[1] https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2922,L2952
[2] https://review.openstack.org/442673
[3] https://review.openstack.org/442850

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
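The handoff described under "how console.log works" can be reproduced as a toy Python sketch (it uses a temp file and the current user's uid/gid, since the real flow involves root, nova and the qemu user):

    import os
    import tempfile

    f = tempfile.NamedTemporaryFile(delete=False, suffix="-console.log")
    f.close()                                   # 1. nova pre-creates an empty log
    os.chown(f.name, os.getuid(), os.getgid())  # 2. libvirtd (dynamic_ownership=1)
                                                #    would chown it to the qemu
                                                #    user; libvirt 2.5.0 skips this
    with open(f.name, "a") as log:              # 3. qemu appends console output;
        log.write("guest console output\n")     #    without step 2 it gets EACCES
    os.remove(f.name)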





Message: 9
Date: Wed, 8 Mar 2017 09:05:07 -0500
From: Mario Villaplana mario.villaplana@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [ironic] OpenStack client default ironic
API version
Message-ID:

Content-Type: text/plain; charset=UTF-8

We want to deprecate ironic CLI soon, but I would prefer if that were
discussed on a separate thread if possible, aside from concerns about
versioning in ironic CLI. Feature parity should exist in Pike, then we
can issue a warning in Queens and deprecate the cycle after. More
information is on L56:
https://etherpad.openstack.org/p/ironic-pike-ptg-operations

I'm a bit torn on whether to use the API version coded in the OSC
plugin or not. On one hand, it'd be good to be able to test out new
features as soon as they're available. On the other hand, it's
possible that the client won't know how to parse certain items after a
microversion bump. I think I prefer using the hard-coded version to
avoid breakage, but we'd have to be disciplined about updating the
client when the API version is bumped (if needed). Opinions on this
are welcome. In either case, I think the deprecation warning could
land without specifying that.

I'll certainly make an RFE when I update the patch later this week,
great suggestion.

I can make a spec, but it might be mostly empty except for the client
impact section. Also, this is a < 40 line change. :)

Mario

On Tue, Mar 7, 2017 at 10:59 AM, Loo, Ruby ruby.loo@intel.com wrote:
On 2017-03-06, 3:46 PM, "Mario Villaplana" mario.villaplana@gmail.com wrote:

Hi ironic,

At the PTG, an issue regarding the default version of the ironic API
used in our python-openstackclient plugin was discussed. [0] In short,
the issue is that we default to a very old API version when the user
doesn't otherwise specify it. This limits discoverability of new
features and makes the client more difficult to use for deployments
running the latest version of the code.

We came to the following consensus:

1. For a deprecation period, we should log a warning whenever the user
doesn't specify an API version, informing them of this change.

2. After the deprecation period:

a) OSC baremetal plugin will default to the latest available version

I think OSC and ironic CLI have the same behaviour -- are we only interested in OSC or are we interested in both, except that we also want to at some point soon perhaps, deprecate ironic CLI?

Also, by 'latest available version', the OSC plugin knows (or thinks it knows) what the latest version is [1]. Will you be using that, or 'latest'?

b) Specifying just macroversion will default to latest microversion
within that macroversion (example: --os-baremetal-api-version=1 would
default to 1.31 if 1.31 is the last microversion with 1 macroversion,
even if we have API 2.2 supported)

I have a patch up for review with the deprecation warning:
https://review.openstack.org/442153

Do you have an RFE? I'd like a spec for this too please.

Please comment on that patch with any concerns.

We also still have yet to decide what a suitable deprecation period is
for this change, as far as I'm aware. Please respond to this email
with any suggestions on the deprecation period.

Thanks,
Mario


[0] https://etherpad.openstack.org/p/ironic-pike-ptg-operations L30

Thank YOU!

--ruby

[1] https://github.com/openstack/python-ironicclient/blob/f242c6af3b295051019aeabb4ec7cf82eb085874/ironicclient/osc/plugin.py#L29



Message: 10
Date: Wed, 8 Mar 2017 23:16:54 +0900
From: Hirofumi Ichihara ichihara.hirofumi@lab.ntt.co.jp
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron] [infra] Depends-on tag effect
Message-ID: 00bc73ee-06bc-76e2-ea11-dd0b0a321314@lab.ntt.co.jp
Content-Type: text/plain; charset=iso-2022-jp; format=flowed;
delsp=yes

Hi,

I thought that we could post a neutron patch depending on a neutron-lib patch
under review.
However, I saw it doesn't work [1, 2]. In the patches, neutron patch [1]
has a Depends-On tag on neutron-lib patch [2], but the pep8 and unit test
jobs fail because the test doesn't use the neutron-lib patch.

Please correct me if it's my misunderstanding.

1: https://review.openstack.org/#/c/424340/
2: https://review.openstack.org/#/c/424868/

Thanks,
Hirofumi


Message: 11
Date: Wed, 8 Mar 2017 08:33:16 -0600
From: Matt Riedemann mriedemos@gmail.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Question to clarify versioned
notifications
Message-ID: 3e95e057-b332-6c65-231b-f26001f6d5a8@gmail.com
Content-Type: text/plain; charset=windows-1252; format=flowed

On 3/8/2017 4:19 AM, Balazs Gibizer wrote:

Honestly, if searchlight needs to be adapted to the versioned
notifications then the smallest thing to change is to handle the removed
prefix from the event_type. The biggest difference is the format and the
content of the payload. In the legacy notifications the payload was a
simple json dict; in the versioned notifications the payload is a json
serialized ovo. Which means quite a different data structure, e.g. extra
keys, deeper nesting, etc.

Cheers,
gibi

Heh, yeah, I agree. Thanks for the confirmation and details. I was just
making sure I had this all straight since I was jumping around from
specs and docs and code quite a bit yesterday piecing this together and
wanted to make sure I had it straight. Plus you don't apparently work 20
hours a day gibi so I couldn't ask you in IRC. :)

--

Thanks,

Matt Riedemann
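The payload difference gibi describes looks roughly like this (field names trimmed and illustrative; the point is the dropped "compute." prefix and the ovo envelope keys):

    legacy = {
        "event_type": "compute.instance.update",
        "payload": {"instance_id": "...", "state": "active"},  # flat dict
    }
    versioned = {
        "event_type": "instance.update",        # "compute." prefix dropped
        "payload": {                            # JSON-serialized versioned object
            "nova_object.name": "InstanceUpdatePayload",
            "nova_object.version": "1.0",
            "nova_object.data": {"uuid": "...", "state": "active"},
        },
    }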


Message: 12
Date: Wed, 8 Mar 2017 15:40:53 +0100
From: ZZelle zzelle@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [neutron] [infra] Depends-on tag effect
Message-ID:

Content-Type: text/plain; charset="utf-8"

Hi,

iiuc, neutron uses a released version of neutron-lib, not neutron-lib master
... So the change should depend on a change in the requirements repo
incrementing the neutron-lib version

On Wed, Mar 8, 2017 at 3:16 PM, Hirofumi Ichihara <
ichihara.hirofumi@lab.ntt.co.jp> wrote:

Hi,

I thought that we could post a neutron patch depending on a neutron-lib patch
under review.
However, I saw it doesn't work [1, 2]. In the patches, neutron patch [1] has
a Depends-On tag on neutron-lib patch [2], but the pep8 and unit test jobs fail
because the test doesn't use the neutron-lib patch.

Please correct me if it's my misunderstanding.

1: https://review.openstack.org/#/c/424340/
2: https://review.openstack.org/#/c/424868/

Thanks,
Hirofumi





Message: 13
Date: Wed, 8 Mar 2017 09:41:05 -0500
From: Jay Pipes jaypipes@gmail.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc][appcat] The future of the App
Catalog
Message-ID: c8147907-6d4b-6bac-f9b3-6b8b07d75494@gmail.com
Content-Type: text/plain; charset=windows-1252; format=flowed

On 03/06/2017 06:26 AM, Thierry Carrez wrote:
Hello everyone,

The App Catalog was created early 2015 as a marketplace of pre-packaged
applications that you can deploy using Murano. Initially a demo by
Mirantis, it was converted into an open upstream project team, and
deployed as a "beta" as apps.openstack.org.

Since then it grew additional categories (Glance images, Heat & Tosca
templates), but otherwise did not pick up a lot of steam. The website
(still labeled "beta") features 45 glance images, 6 Tosca templates, 13
heat templates and 94 murano packages (~30% of which are just thin
wrappers around Docker containers). Traffic stats show around 100 visits
per week, 75% of which only read the index page.

In parallel, Docker developed a pretty successful containerized
application marketplace (the Docker Hub), with hundreds of thousands of
regularly-updated apps. Keeping the App Catalog around (including its
thinly-wrapped Docker container Murano packages) makes us look like
are unsuccessfully trying to compete with that ecosystem, while
OpenStack is in fact completely complementary.

In the past we have retired projects that were dead upstream. The App
Catalog is not in this case: it has an active maintenance team, which
has been successfully maintaining the framework and accepting
applications. If we end up retiring the App Catalog, it would clearly
not be a reflection on that team's performance, which has been stellar
despite limited resources. It would be because the beta was arguably not
successful in building an active marketplace of applications, and
because its continuous existence is not a great fit from a strategy
perspective. Such removal would be a first for our community, but I
think it's now time to consider it.

Before we discuss or decide anything at the TC level, I'd like to
collect everyone's thoughts (and questions) on this. Please feel free to
reply to this thread (or reach out to me privately if you prefer). Thanks !

Mirantis' position is that the App Catalog was a good idea, but we agree
with you that other application repositories like DockerHub and Quay.io
are both more useful and more actively used.

The OpenStack App Catalog does indeed seem to unnecessarily compete with
those application repositories, and we would support its retirement if
that is what the community would like to do. We'll provide resources and
help in winding anything down if needed.

Best,
-jay


Message: 14
Date: Wed, 8 Mar 2017 14:59:41 +0000
From: Yu Wei yu2003w@hotmail.com
To: "openstack-dev@lists.openstack.org"

Subject: [openstack-dev] [nova][placement-api] Is there any document
about openstack-placement-api for installation and configure?
Message-ID:

Content-Type: text/plain; charset="utf-8"

Hi Guys,
I'm new to openstack.
I tried to install openstack-ocata.
As placement-api is required since Ocata, is there any detailed document
about how to install and configure placement-api?

Thanks,
Jared


Message: 15
Date: Wed, 8 Mar 2017 15:59:59 +0100
From: Andreas Jaeger aj@suse.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [neutron] [infra] Depends-on tag effect
Message-ID: c141c37b-53df-0983-5857-9980b6e2b16e@suse.com
Content-Type: text/plain; charset="windows-1252"

On 2017-03-08 15:40, ZZelle wrote:
Hi,

iiuc, neutron uses a released version of neutron-lib not neutron-lib
master ... So the change should depends on a change in requirements repo
incrementing neutron-lib version

This is also documented, together with some other caveats, at:

https://docs.openstack.org/infra/manual/developers.html#limitations-and-caveats

Note that a Depends-On on a requirements change won't work either - you really
need to release it. Or you need to change the test to pull neutron-lib from
source,

Andreas
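For reference, a Depends-On is just a trailer line in the dependent change's commit message; the Change-Ids below are illustrative:

    Use the new neutron-lib API

    Change-Id: I0123456789abcdef0123456789abcdef01234567
    Depends-On: Ifedcba9876543210fedcba9876543210fedcba98

As explained above, the footer only helps jobs that actually install the depended-on project from source (or pick the change up via a requirements bump).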

On Wed, Mar 8, 2017 at 3:16 PM, Hirofumi Ichihara
<ichihara.hirofumi@lab.ntt.co.jp> wrote:

Hi,

I thought that we could post a neutron patch depending on a neutron-lib
patch under review.
However, I saw it doesn't work [1, 2]. In the patches, neutron
patch [1] has a Depends-On tag on neutron-lib patch [2], but the pep8
and unit tests fail because the test doesn't use the neutron-lib patch.

Please correct me if it's my misunderstanding.

[1]: https://review.openstack.org/#/c/424340/
[2]: https://review.openstack.org/#/c/424868/

Thanks,
Hirofumi






--
Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


Message: 16
Date: Wed, 8 Mar 2017 10:05:45 -0500
From: James Slagle james.slagle@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [TripleO][Heat] Selectively disabling
deployment resources
Message-ID:

Content-Type: text/plain; charset=UTF-8

On Tue, Mar 7, 2017 at 7:24 PM, Zane Bitter zbitter@redhat.com wrote:
On 07/03/17 14:34, James Slagle wrote:

I've been working on this spec for TripleO:
https://review.openstack.org/#/c/431745/

which allows users to selectively disable Heat deployment resources
for a given server (or servers in the case of a *DeploymentGroup
resource).

I'm not completely clear on what this means. You can selectively disable
resources with conditionals. But I think you mean that you want to
selectively disable changes to resources?

Yes, that's right. The reason I can't use conditionals is that I still
want the SoftwareDeploymentGroup resources to be updated, but I may
want to selectively exclude servers from the group that is passed in
via the servers property. E.g., instead of updating the deployment
metadata for all computes, I may want to exclude a single compute
that is temporarily unreachable, without that failing the whole
stack-update.

I started by taking an approach that would be specific to TripleO.
Basically mapping all the deployment resources to a nested stack
containing the logic to selectively disable servers from the
deployment (using yaql) based on a provided parameter value. Here's
the main patch: https://review.openstack.org/#/c/442681/

After considering that complexity, particularly the yaql expression,
I'm wondering if it would be better to add this support natively to
Heat.

I was looking at the restricted_actions key in the resource_registry
and was thinking this might be a reasonable place to add such support.
It would require some changes to how restricted_actions work.

One change would be a method for specifying that restricted_actions
should not fail the stack operation if an action would have otherwise
been triggered. Currently the behavior is to raise an exception and
mark the stack failed if an action needs to be taken but has been
marked restricted. That would need to be tweaked to allow specifying
that we don't want the stack to fail. One thought would be to
change the allowed values of restricted_actions to:

replace_fail
replace_ignore
update_fail
update_ignore
replace
update

where replace and update would be synonyms for replace_fail/update_fail
to maintain backwards compatibility.
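With such a value, an operator could then mark a single server's deployments as ignorable from the environment, along these lines (the resource name is illustrative, and update_ignore is the proposed value, not one Heat supports today):

    resource_registry:
      resources:
        Compute1:
          restricted_actions: [update_ignore]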

Anything that involves the resource definition in the template changing but
Heat not modifying the resource is problematic, because that messes with
Heat's internal bookkeeping.

I don't think this case would violate that principle. The template +
environment files would match what Heat has done. After an update, the
two would be in sync as to which servers the updated Deployment resource
was triggered on.

Another change would be to add logic to the Deployment resources
themselves to consider if any restricted_actions have been set on a
Server resource before triggering an updated deployment for a given
server.

Why not just a property, "no_new_deployments_please: true"?

That would actually work and be pretty straightforward I think. We
could have a map parameter with server names and the property that the
user could use to set the value.

The reason why I was initially not considering this route was because
it doesn't allow the user to disable only some deployments for a given
server. It's all or nothing. However, it's much simpler than a totally
flexible option, and it addresses 2 of the largest use cases of this
feature. I'll look into this route a bit more.

--
-- James Slagle
--


Message: 17
Date: Wed, 8 Mar 2017 16:06:59 +0100
From: Dmitry Tantsur dtantsur@redhat.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: [openstack-dev] [ironic] Pike PTG report
Message-ID: 23821f81-9509-3600-6b9c-693984aa0132@redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed

Hi all!

I've finished my Pike PTG report. It is spread over four blog posts:

http://dtantsur.github.io/posts/ironic-ptg-atlanta-2017-1.html
http://dtantsur.github.io/posts/ironic-ptg-atlanta-2017-2.html
http://dtantsur.github.io/posts/ironic-ptg-atlanta-2017-3.html
http://dtantsur.github.io/posts/ironic-ptg-atlanta-2017-4.html

It was a lot of typing, so please pardon mistakes. The whole text (in RST
format) is copy-pasted at the end of this message for archiving purposes.

Please feel free to respond here or in the blog comments.

Cheers,
Dmitry

Ongoing work and status updates
===============================

Etherpad: https://etherpad.openstack.org/p/ironic-pike-ptg-ongoing-work.

We spent the first half of Wednesday discussing this. There was a lot of
incomplete work left from Ocata, and some major ongoing work that we did not
even plan to finish in Ocata.

Boot-from-volume
~~~~~~~~~~~~~~~~

Got some progress; most of the Ironic patches are up. Desperately needs review
and testing, though. The Nova part is also lagging behind, and should be
brought to the Nova team's attention.

Actions
mgoddard and dtantsur volunteered to help with testing, while
mjturek, hsiina and crushil volunteered to do some coding.
Goals for Pike
finish the first (iSCSI using iPXE) case and the Nova part.

Networking
~~~~~~~~~~

A lot of progress here during Ocata, completed bonding and attach/detach API.

VLAN-aware instances should work. However, it requires an expensive ToR switch,
supporting VLAN/VLAN and VLAN/VXLAN rewriting, and, of course ML2 plugin
support. Also, reusing an existing segmentation ID requires more work: we have
no current way to put the right ID in the configdrive.

Actions
vsaienko, armando and kevinbenton are looking into the Neutron
part of the configdrive problem.

Routed networks support requires Ironic to be aware of which physical network(s)
each node is connected to.

Goals for Pike
* model physical networks on Ironic ports,
* update VIF attach logic to no longer attach things to wrong physnets.

We discussed introducing notifications from Neutron to Ironic about events
of interest for us. We are going to use the same model as between Neutron and
Nova: create a Neutron plugin that filters out interesting events and posts
to a new Ironic API endpoint.

Goals for Pike
have this notification system in place.

Finally, we agreed that we need to work on a reference architecture document,
describing the best practices of deploying Ironic, especially around
multi-tenant networking setup.

Actions
jroll to kickstart this document, JayF and mariojv to help.

Rolling upgrades
~~~~~~~~~~~~~~~~

Missed Ocata by a small margin. The code is up and needs reviewing. The CI
is waiting for the multinode job to start working (should be close as well).

Goals for Pike
rolling upgrade Ocata -> Pike.

Driver composition reform
~~~~~~~~~~~~~~~~~~~~~~~~~

Most of the code landed in Ocata already. Some client changes landed in Pike,
some are still on review. As we released Ocata with the driver composition
changes being experimental, we are not ready to deprecate old-style drivers in
Pike. Documentation is also still lacking.

Goals for Pike
* make new-style dynamic drivers the recommend way of writing and using
drivers,
* fill in missing documentation,
* recommend vendors to have hardware types for their hardware, as well
as 3rdparty CI support for it.
Important decisions
* no new classic drivers are accepted in-tree (please check when accepting
specifications),
* no new interfaces additions for classic drivers(volume_interface is
the last accepted from them),
* remove the SSH drivers by Pike final (probably around M3).

Ironic Inspector HA
~~~~~~~~~~~~~~~~~~~

Preliminary work (switch to a real state machine) done in Ocata. Splitting the
service into API and conductor/engine parts correlates with the WSGI
cross-project goal.

We also had a deeper discussion about ironic-inspector architecture earlier
that week, where we were looking <https://etherpad.openstack.org/p/ironic-pike-ptg-inspector-arch>_ into
potential future work to make ironic-inspector both HA and multi-tenancy
friendly. It was suggested to split discovery process (simple process to
detect MACs and/or power credentials) and inspection process (full process
when a MAC is known).

Goals for Pike
* switch locking to tooz (with Redis probably being the default
backend for now; see the sketch after this list),
* split away API process with WSGI support,
* leader election using tooz for periodic tasks,
* stop messing with iptables and start directly managing dnsmasq
instead (similarly to how Neutron does it),
* try using dnsmasq in active/active configuration with
non-intersecting IP addresses pools from the same subnet.
Actions
also sambetts will write a spec on a potential workflow split.
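A minimal sketch of the tooz-based locking mentioned in the goals above, assuming the Redis backend; the member and lock names are made up:

    from tooz import coordination

    coord = coordination.get_coordinator("redis://localhost:6379",
                                         b"inspector-host-1")
    coord.start()
    with coord.get_lock(b"introspection-node-0"):
        pass  # only one inspector instance introspects this node at a time
    coord.stop()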

Ironic UI
~~~~~~~~~

The project got some important features implemented, and an RDO package
emerged during Ocata. Still, it desperately needs volunteers for coding and
testing. A spreadsheet <https://docs.google.com/spreadsheets/d/1petifqVxOT70H2Krz7igV2m9YqgXaAiCHR8CXgoi9a0/edit?usp=sharing>_
captures the current (as of beginning of Pike) status of features.

Actions
dtantsur, davidlenwell, bradjones and crushil agreed to
dedicate some time to the UI.

Rescue
~~~~~~

Most of the patches are up, the feature is tested with the CoreOS-based
ramdisk for now. Still, the ramdisk side poses a problem: while using DHCP is
easy, static network configuration seems not. It's especially problematic in
CoreOS. Might be much easier in the DIB-based ramdisk, but we don't support it
officially in the Ironic community.

RedFish driver
~~~~~~~~~~~~~~

We want to get a driver supporting RedFish soon. There was some criticism
raised around the currently proposed python-redfish library. As an alternative,
a new library <https://github.com/openstack/sushy>_ was written. It is
lightweight, covered by unit tests, and only contains what Ironic needs.
We agreed to start our driver implementation with it, and switch to the
python-redfish library when/if it is ready to be consumed by us.

We postponed discussing advanced features like nodes composition till after
we get the basic driver in.

Small status updates
~~~~~~~~~~~~~~~~~~~~

  • Of the API evolution initiative, only E-Tag work got some progress. The spec
    needs reviewing now.

  • Node tags work needs review and is close to landing. We decided to discuss
    port tags as part of a separate RFE, if anybody is interested.

  • IPA API versioning also needs reviews, there are several moderately
    contentions points about it. It was suggested that we only support one
    direction of IPA/ironic upgrades to simplify testing. We'll probably only
    support old IPA with new ironic, which is already tested by our grenade job.

CI and testing
==============

Etherpad: https://etherpad.openstack.org/p/ironic-pike-ptg-ci-testing

Missing CI coverage
~~~~~~~~~~~~~~~~~~~

UEFI
Cirros finally released a stable version with UEFI support built in.
A non-voting job is running with partition images, should be made voting
soon. A test with whole disk images will be introduced as part of
standalone tests <https://review.openstack.org/#/c/423556/>_.
Local bootloader
Requires small enough instance images with Grub2 present (Cirros does not
have it). We agreed to create a new repository with scripts to build
suitable images. Potentially can be shared with other teams (e.g. Neutron).

 Actions: **lucasagomes** and/or **vsaienko** to look into it.

Adopt state
Tests have been up for some time, but have ordering issues with nova-based
tests. Suggesting TheJulia to move them to standalone tests_.
Root device hints
Not covered by any CI. Will need modifying how we create virtual machines.
First step is to get size-based hints work. Check two cases: with size
strictly equal and greater than requested.

 Actions: **dtantsur** to look into it.

Capabilities-based scheduling
This may actually go to Nova gate, not ours. Still, it relies on some code
in our driver, so we'd better cover it to ensure that the placement API
changes don't break it.

 Actions: **vsaienko** to look into it.

Port groups
The same image problem as with local boot - the same action item to create
a repository with build scripts to build our images.
VLAN-aware instances
The same image problem + requires reworking our network simulation code <https://review.openstack.org/#/c/392959/>_.
Conductor take over and hash ring
Requires a separate multi-node job.

 Action: **vsaienko** to investigate.

DIB-based IPA image
^^^^^^^^^^^^^^^^^^^

Currently the ironic-agent element to build such an image is in the DIB
repository, outside of our control. If we want to support it properly, we
need to gate on its changes, and to gate IPA changes on its job. Some time
ago we had a tentative agreement to move the element to our tree.

It was blocked by the fact that DIB rarely or never removes elements, and does
not have a way to properly de-duplicate elements with the same name.

An obvious solution we are going to propose is to take this element into the
IPA tree under a different name (ironic-python-agent?). The old element will
be deprecated, and only critical fixes will be accepted for it.

Action
dtantsur to (re)start this discussion with the TripleO and DIB teams.

API microversions testing
^^^^^^^^^^^^^^^^^^^^^^^^^

We are not sure we have tests covering all microversions. We seem to have API
tests using the fake driver that cover at least some of them. We should start
paying more attention to this part of our testing.

Actions
dtantsur to check if these tests are up-to-date and split them to a
separate CI job.
pas-ha to write API tests for internal API (i.e. lookup/heartbeat).

Global OpenStack goals
~~~~~~~~~~~~~~~~~~~~~~

Splitting away tempest plugins
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It did not end up being a goal for Pike, and there are still some concerns in
the community. Still, as we already apply ugly hacks in our jobs to use the
tempest plugin from master, we agreed to proceed with the split.

To simplify both maintenance and consuming our tests, we agreed to merge
ironic and ironic-inspector plugins. The introspection tests will run only
when ironic-inspector is present.

We propose having a merged core team (i.e. ironic-inspector-core which
already includes ironic-core) for this repository. We trust people who
only have core rights on ironic-inspector to not approve things they're
not authorized to approve.

Python 3 support
^^^^^^^^^^^^^^^^

We've been running Python 3 unit tests for quite some time. Additionally,
ironic-inspector runs a non-voting Python 3 functional test. Ironic has an
experimental job which fails, apparently, because of swift. We can start by
switching this job to the pxe_ipmitool driver (which does not require swift).
Inspector does not have a Python 3 integration tests job proposed yet.

Actions
JayF and hurricanerix will drive this work in both ironic and
ironic-inspector.

 **lucasagomes** to check pyghmi and virtualbmc compatibility.

 **krtaylor** and/or **mjturek** to check MoltenIron.

We agreed that Bifrost is out of scope for this task. Its Python 3
compatibility mostly depends on that of Ansible anyway. Similarly, for the UI
we need horizon to be fully Python 3 compatible first.

Important decisions
We recommend that vendors make their libraries compatible with Python 3.
It may become a strict requirement in one of the coming releases.

API behind WSGI container
^^^^^^^^^^^^^^^^^^^^^^^^^

This seems quite straightforward. The work to switch the ironic CI to WSGI
has already started. For ironic-inspector it's going to be done as part of
the HA work.

Operations
----------

Etherpad: https://etherpad.openstack.org/p/ironic-pike-ptg-operations

OSC plugin and API versioning
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Currently we default the OSC plugin (and the old client too) to a really old
API version. We agreed that this situation is not desired, and that we should
take the same approach as Nova and default to the latest version. We are
planning to announce the change this cycle, both via the ML and via a warning
issued when no version is specified.

Next, in the Queens cycle, we will have to make the change, bearing in mind
that OSC does not support values like latest for API versions. So the plan
is as follows:

  • make the default --os-baremetal-api-version=1 in

https://github.com/openstack/python-ironicclient/blob/f242c6af3b295051019aeabb4ec7cf82eb085874/ironicclient/osc/plugin.py#L67

  • when instantiating the ironic client in the OSC plugin, replace '1' with
    'latest':

https://github.com/openstack/python-ironicclient/blob/f242c6af3b295051019aeabb4ec7cf82eb085874/ironicclient/osc/plugin.py#L41

  • when handling --os-baremetal-api-version=latest, replace it with 1,
    so that it's later replaced with latest again:

https://github.com/openstack/python-ironicclient/blob/f242c6af3b295051019aeabb4ec7cf82eb085874/ironicclient/osc/plugin.py#L85

As a side effect, that will make 1 equivalent to latest as well.
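
A rough sketch of the resulting logic (illustrative Python only, with a
made-up constant for the highest known microversion; not the actual plugin
code)::

  LATEST_KNOWN_VERSION = '1.31'  # assumption: whatever the client knows about

  def resolve_api_version(requested=None):
      # Step 1: the CLI default stays '1' for backwards compatibility.
      if requested is None:
          requested = '1'
      # Step 3: an explicit 'latest' is normalized to '1' first...
      if requested == 'latest':
          requested = '1'
      # Step 2: ...and '1' is replaced with the newest supported version
      # when the ironic client is instantiated.
      if requested == '1':
          requested = LATEST_KNOWN_VERSION
      return requested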

It was also suggested to add a new command displaying both the
server-supported and the client-supported API versions.

Deprecating the standalone ironic CLI in favor of the OSC plugin
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We do not want to maintain two CLIs in the long run. We agreed to start
thinking about deprecating the old ironic command. Main concerns:

  • lack of feature parity,

  • ugly way to work without authentication, for example::

    openstack baremetal --os-url http://ironic --os-token fake

Plan for Pike
* Ensure complete feature parity between the two clients.
* Only use openstack baremetal commands in the documentation.

The actual deprecation is planned for Queens.

RAID configuration enhancements
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A few suggestions were made:

  • Support an ordered list of logical disk definitions. The first feasible
    configuration is applied to the node. For example (see the sketch after
    this list):

    • Top of list - RAID 10 but we don't have enough drives
    • Fallback to next preference in list - RAID 1 on a pair of available drives
    • Finally, JBOD or RAID 0 on the only available drive
  • Specify the number of instances to create for a logical disk definition.

  • Specify backing physical disks by stating preference for the smallest, e.g.
    smallest like-sized pair or two smallest disks.

  • Specify location of physical disks, e.g. first two or last two as perceived
    by the hardware, front/rear/internal location.
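
A minimal sketch of what such an ordered preference list could look like
(an illustration only, not an agreed-upon schema)::

  # Ordered from most to least preferred; the first feasible entry wins.
  logical_disk_preferences = [
      {'raid_level': '10'},  # preferred: RAID 10
      {'raid_level': '1'},   # fallback: RAID 1 on a pair of drives
      {'raid_level': '0'},   # last resort: RAID 0 / JBOD on one drive
  ]

  # Simplified minimal drive counts per RAID level.
  MIN_DRIVES = {'10': 4, '1': 2, '0': 1}

  def pick_configuration(preferences, available_drives):
      for disk in preferences:
          if available_drives >= MIN_DRIVES[disk['raid_level']]:
              return disk
      raise ValueError('no feasible RAID configuration')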

Actions
rpioso will write RFE(s)

Smaller topics
~~~~~~~~~~~~~~

Non-abortable clean steps stuck in the clean wait state
We discussed a potential force-abort functionality, but the only thing
we agreed on is to check that all current clean steps are marked as
abortable if they really are.

Status of long-running cleaning operations
There is a request to be able to get the status of e.g. disk shredding (which
may take hours). We found out that the current IPA API design essentially
prevents running several commands in parallel. We agreed that we need IPA
API versioning first, and that this work is not a huge priority right now.

OSC command for listing driver and RAID properties
We could not agree on the exact form of these two commands. The primary
candidates discussed at the PTG were::

     openstack baremetal driver property list <DRIVER>
     openstack baremetal driver property show <DRIVER>

 We agreed to move this to the spec: https://review.openstack.org/439907.

Abandoning an active node
I.e. the opposite of adopt. It's unclear how such an operation would play
with nova; maybe it's only useful for the standalone case.

Future Work
-----------

Etherpad: https://etherpad.openstack.org/p/ironic-pike-ptg-future-work.

Neutron event processing
~~~~~~~~~~~~~~~~~~~~~~~~

RFE: https://bugs.launchpad.net/ironic/+bug/1304673, spec:
https://review.openstack.org/343684.

We need to wait for certain events from neutron (like port bindings).
Currently we just wait for some time and hope it went well. We agreed to
follow the same pattern that nova uses for neutron-to-nova notifications.
The neutron part is
https://github.com/openstack/neutron/blob/master/neutron/notifiers/nova.py.
We agreed with the Neutron team that the notifier and the other
ironic-specific stuff for neutron would live in a separate repo under
Baremetal governance.
Draft code is https://review.openstack.org/#/c/357780.
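
As an illustration of the pattern (hypothetical names, not the draft code),
the conductor would register interest in an event and block on it with a
timeout instead of sleeping::

  import threading

  class EventWaiter(object):
      """Wait for neutron to confirm an operation instead of sleeping."""

      def __init__(self):
          self._events = {}

      def expect(self, port_id):
          self._events[port_id] = threading.Event()

      def notify(self, port_id):
          # Called by the API layer when neutron posts the event.
          event = self._events.get(port_id)
          if event:
              event.set()

      def wait(self, port_id, timeout=300):
          if not self._events[port_id].wait(timeout):
              raise RuntimeError('timed out waiting for port %s' % port_id)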

Splitting node.properties[capabilities] into a separate table
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is something we've been planning for a long time. Currently, it's not
possible to update capabilities atomically, and the format is quite hard to
work with: k1:v1,k2:v2. We discussed moving away from the word capability.
It's already overused in the OpenStack world, and nova is switching to the
notion of "traits". It also looks like traits will be qualitative-only, while
we have proposals for quantitative capabilities (like gpu_count).
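
To illustrate the problem with the flattened format: updating it requires a
read-modify-write cycle like the following sketch, which is exactly what
makes atomic updates impossible::

  def parse_capabilities(caps):
      # 'k1:v1,k2:v2' -> {'k1': 'v1', 'k2': 'v2'}
      return dict(item.split(':', 1) for item in caps.split(',') if item)

  def set_capability(caps, key, value):
      # Two clients doing this concurrently will silently overwrite
      # each other's changes -- hence the desire for a separate table.
      parsed = parse_capabilities(caps)
      parsed[key] = value
      return ','.join('%s:%s' % kv for kv in sorted(parsed.items()))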

It was proposed to model a typical CRUD API for traits in Ironic::

 GET    /v1/nodes/<NODE>/traits
 POST   /v1/nodes/<NODE>/traits
 GET    /v1/nodes/<NODE>/traits/<trait>
 DELETE /v1/nodes/<NODE>/traits/<trait>

In API versions before this addition, we would make
properties/capabilities a transparent proxy to new tables.

It was noted that the database change can be done first, with API change
following it.

Actions
rloo to propose two separate RFEs for database and API parts.

Avoid changing behavior based on properties[capabilities]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Currently our capabilities have a dual role. They serve both for scheduling
(to inform nova of what nodes can do) and for making decisions based on the
flavor (e.g. requesting UEFI boot). It is complicated by the fact that
sometimes the same capability (e.g. UEFI) can be of both types, depending on
the driver.
This is quite confusing for users, and may be incompatible with future changes
both in ironic and nova.

For things like boot option and (potentially) BIOS setting, we need to be able
to get requests from flavors and/or nova boot arguments without abusing
capabilities for it. Maybe similar to how NUMA support does it:
https://docs.openstack.org/admin-guide/compute-cpu-topologies.html.

For example::

 flavor.extra_specs[traits:has_ssd]=True

(tells the scheduler to find a node with SSD disk; does not change
behavior/config of node)

::

 flavor.extra_specs[configuration:use_uefi]=True

(configures the node to boot UEFI; has no impact on scheduling)

::

 flavor.extra_specs[traits:has_uefi]=True
 flavor.extra_specs[configuration:use_uefi]=True

(tells the scheduler to find a node supporting UEFI; if this support is
dynamic, configures the node to enable UEFI boot).

Actions
jroll to start conversation with nova folks about how/if to have a
replacement for this elsewhere.

 Stop accepting driver features relying on ``properties[capabilities]`` (as
 opposed to ``instance_info[capabilities]``).

Potential actions
* Rename instance_info[capabilities] to instance_info[configuration]
  for clarity.

Deploy-time RAID
~~~~~~~~~~~~~~~~

This was discussed at the last design summit. Since then we've got a nova spec <https://review.openstack.org/408151>_, which, however, hasn't received many
reviews so far. The spec continues using block_device_mapping_v2; other
options apparently were not considered.

We discussed how to inform Nova whether or not RAID can be built for
a particular node. Ideally, we need to tell the scheduler about many things:
RAID support, disk number, disk sizes. We decided that it's overkill, at
least for the beginning. We'll only rely on a "supports RAID" trait for now.

It's still unclear what to do about the local_gb property, but with the
planned Nova changes it may not be required any more.

Advanced partitioning
~~~~~~~~~~~~~~~~~~~~~

There is a desire for flexible partitioning in ironic, both in the case of
partition and whole disk images (in the latter case, partitioning other
disks). Generally, there was no consensus at the PTG. Some people were very
much in favor of this feature, some quite against it. It's unclear how to
pass partitioning information from Nova. There is a concern that such a
feature will get us too deep into OS-specific details. We agreed that someone
interested will collect the requirements, create a more detailed proposal,
and we'll discuss it on the next PTG.

Splitting nodes into separate pools
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This feature is about dedicating some nodes to a tenant, essentially adding a
tenant_id field to nodes. This can be helpful e.g. for a hardware provider to
reserve hardware for a tenant, so that it's always available.

This seems relatively easy to implement in Ironic. We need a new field on
nodes, and then to only show non-admin users their own hardware. It's a bit
trickier to make it work with Nova. We agreed to investigate passing a token
from Nova to Ironic, as opposed to always using a service user's admin token.

Actions
vdrok to work out the details and propose a spec.

Requirements for routed networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We discussed the requirements for achieving a routed architecture like
spine-and-leaf. It seems that most of the requirements are already in our
plans. The outstanding items are:

  • Multiple subnets support for ironic-inspector. This can be solved at the
    dnsmasq.conf level; an appropriate change was merged into
    puppet-ironic.

  • Per-node provision and cleaning networks. There is an RFE, somebody just
    has to do the work.

This does not seem to be a Pike goal for us, but many of the dependencies
are planned for Pike.

Configuring BIOS setting for nodes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Preparing a node to serve a certain role by tweaking its settings.
Currently, it is implemented by the Drac driver in a vendor pass-thru.

We agreed that such a feature would fit cleaning better than
pre-deployment. Thus, it does not depend on deploy steps. It was suggested to
extend the management interface to support passing it an arbitrary JSON with
configuration. A clean step would then pick it up (similar to RAID).

Actions
rpioso to write a spec for this feature.

Deploy steps
~~~~~~~~~~~~

We discussed the deploy steps proposal <https://review.openstack.org/412523>_
in depth. We agreed on partially splitting the deployment procedure into
pluggable bits. We will leave the very core of the deployment - flashing the
image onto a target disk - hardcoded, at least for now. The drivers will be
able to define steps to run before and after this core deployment. Pre- and
post-deployment steps will have different priority ranges, something like::

 0 < pre-max/deploy-min < deploy-max/post-min < infinity

We plan on making partitioning a pre-deploy step, and installing a bootloader
a post-deploy step. We will not allow IPA hardware managers to define deploy
steps, at least for now.
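
For illustration, classifying a step by its priority could look like this
sketch (the boundary values are placeholders; only the ordering was agreed)::

  PRE_MAX = 100    # == deploy-min; placeholder value
  POST_MIN = 200   # == deploy-max; placeholder value

  def classify_step(priority):
      assert priority > 0
      if priority < PRE_MAX:
          return 'pre-deploy'    # e.g. partitioning
      if priority < POST_MIN:
          return 'core deploy'   # flashing the image (hardcoded for now)
      return 'post-deploy'       # e.g. installing a bootloader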

Actions
yolanda is planning to work on this feature, rloo and TheJulia
to help.

Authenticating IPA
~~~~~~~~~~~~~~~~~~

The IPA HTTP endpoints, and the endpoints Ironic provides for ramdisk
callbacks, are completely insecure right now. We have hesitated to add any
authentication to them, as any secrets published for the ramdisk to use (be
they part of the kernel command line or the image itself) are readily
available to anyone on the network.

We agreed on several things to look into:

  • A random CSRF-like token to use for each node (a minimal sketch follows
    this list). This will somewhat limit the attack surface by requiring an
    attacker to intercept a token for the specific node, rather than just
    accessing the endpoints.

  • Document splitting out public and private Ironic API as part of our future
    reference architecture guide.

  • Make sure we support TLS between Ironic and IPA, which is particularly
    helpful when virtual media is used (and secrets are not leaked).
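
A minimal sketch of the per-node token idea (hypothetical code, not a
design)::

  import hmac
  import secrets

  _tokens = {}

  def issue_token(node_uuid):
      # Generated when preparing the ramdisk; passed e.g. on the kernel
      # command line. An attacker must intercept this specific value.
      token = secrets.token_urlsafe(32)
      _tokens[node_uuid] = token
      return token

  def validate_token(node_uuid, presented):
      # Constant-time comparison for heartbeat/lookup requests.
      expected = _tokens.get(node_uuid, '')
      return hmac.compare_digest(expected, presented)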

Actions
jroll and joanna to look into the random token idea.
jroll to write an RFE for TLS between IPA and Ironic.

Smaller things
~~~~~~~~~~~~~~

Using ansible-networking as a ML2 driver for ironic-neutron integration work
It was suggested to make it one of the backends for
networking-generic-switch, in addition to netmiko. Potential
concurrency issues when using SSH were raised, and still require a solution.

Extending and standardizing the list of capabilities the drivers may discover
It was proposed to use os-traits <https://github.com/jaypipes/os-traits>_
for standardizing qualitative capabilities. jroll will look into
quantitative capabilities.

Pluggable interface for long-running processes
This was proposed as an optional way to mitigate certain problems with
local long-running services, like console. E.g. if a conductor crashes,
its console services keep running. It was noted that this is a bug to be
fixed (TheJulia volunteered to triage it).
The proposed solution involved optionally running processes on a remote
cluster, e.g. k8s. Concerns were voiced at the PTG around complicating the
support matrix and adding more decisions for operators to make.
Due to that, there was no apparent consensus on implementing this feature.

Setting a specific boot device for PXE booting
It was found to be already solved by setting pxe_enabled on ports.
We just need to update ironic-inspector to set this flag.

Priorities and planning
-----------------------

The suggested priorities list is now finalized in
https://review.openstack.org/439710.

We also agreed on the following priorities for ironic-inspector subteam:

  • Inspector HA (milan)
  • Community goal - python 3.5 (JayF, hurricanerix)
  • Community goal - devstack+apache+wsgi (aarefiev, ovoshchana)
  • Inspector needs to update pxe_enabled flag on ports (dtantsur)

Message: 18
Date: Wed, 8 Mar 2017 15:07:06 +0000 (GMT)
From: Chris Dent cdent+os@anticdent.org
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [nova][placement-api] Is there any
document about openstack-placement-api for installation and configure?
Message-ID: alpine.OSX.2.20.1703081500570.59117@shine.local
Content-Type: text/plain; charset="utf-8"; Format="flowed"

On Wed, 8 Mar 2017, Yu Wei wrote:

I'm new to openstack.
I tried to install openstack-ocata.
As placement-api is required since Ocata, is there any detailed document
about how to install and configure placement-api?

There are two different things which might be useful to you. Some
nova "in-tree" docs about placement:

 https://docs.openstack.org/developer/nova/placement.html
 https://docs.openstack.org/developer/nova/placement_dev.html

and some in progress documents about installing placement:

 https://review.openstack.org/#/c/438328/

The latter has some errors that are in the process of being fixed,
so make sure you read the associated comments.

--
Chris Dent ¯\_(ツ)_/¯ https://anticdent.org/
freenode: cdent tw: @anticdent


Message: 19
Date: Wed, 8 Mar 2017 09:12:59 -0600
From: Monty Taylor mordred@inaugust.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: [openstack-dev]
[cinder][glance][horizon][keystone][nova][qa][swift] Feedback needed:
Removal of legacy per-project vanity domain redirects
Message-ID: 9b5cf68a-39bf-5d33-2f4c-77c5f5ff7f78@inaugust.com
Content-Type: text/plain; charset=utf-8

Hey all,

We have a set of old vanity redirect URLs from back when we made a URL
for each project:

cinder.openstack.org
glance.openstack.org
horizon.openstack.org
keystone.openstack.org
nova.openstack.org
qa.openstack.org
swift.openstack.org

They are being served from an old server we'd like to retire. Obviously,
moving a set of http redirects is trivial, but these domains have been
deprecated for about 4 years now, so we figured we'd clean house if we can.

We know that the swift team has previously expressed that there are
links out in the wild pointing to swift.o.o/content that still work and
that they don't want to break anyone, which is fine. (although if the
swift team has changed their minds, that's also welcome)

for the rest of you, can we kill these rather than transfer them?

Thanks!
Monty


Message: 20
Date: Wed, 8 Mar 2017 15:29:05 +0000
From: Yu Wei yu2003w@hotmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [nova][placement-api] Is there any
document about openstack-placement-api for installation and configure?
Message-ID:

Content-Type: text/plain; charset="utf-8"

@Chris, Thanks for replying.

When I tried to configure placement-api, I met the following problem:

AH01630: client denied by server configuration: /usr/bin/nova-placement-api

I will read the links you pointed out.

Thanks again,

Jared

On 2017-03-08 23:07, Chris Dent wrote:
On Wed, 8 Mar 2017, Yu Wei wrote:

I'm new to openstack.
I tried to install openstack-ocata.
As placement-api is required since Ocata, is there any detailed document
about how to install and configure placement-api?

There are two different things which might be useful to you. Some
nova "in-tree" docs about placement:

https://docs.openstack.org/developer/nova/placement.html
https://docs.openstack.org/developer/nova/placement_dev.html

and some in progress documents about installing placement:

https://review.openstack.org/#/c/438328/

The latter has some errors that are in the process of being fixed,
so make sure you read the associated comments.


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Message: 21
Date: Wed, 8 Mar 2017 15:35:34 +0000 (GMT)
From: Chris Dent cdent+os@anticdent.org
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [nova][placement-api] Is there any
document about openstack-placement-api for installation and configure?
Message-ID: alpine.OSX.2.20.1703081532490.59117@shine.local
Content-Type: text/plain; charset="utf-8"; Format="flowed"

On Wed, 8 Mar 2017, Yu Wei wrote:

When I tried to configure placement-api, I met the following problem:

AH01630: client denied by server configuration: /usr/bin/nova-placement-api

That can be fixed by doing (somewhere in your apache config):

     Require all granted

but rather than doing that you may wish to move nova-placement-api
to a less global directory and grant access to that directory.
Providing wide access to /usr/bin is not a great idea.

--
Chris Dent ¯\_(ツ)_/¯ https://anticdent.org/
freenode: cdent tw: @anticdent


Message: 22
Date: Thu, 9 Mar 2017 00:39:03 +0900
From: Hirofumi Ichihara ichihara.hirofumi@lab.ntt.co.jp
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [neutron] [infra] Depends-on tag effect
Message-ID: 6664232b-37bc-9a2b-f06e-812318f62b67@lab.ntt.co.jp
Content-Type: text/plain; charset=windows-1252; format=flowed

On 2017/03/08 23:59, Andreas Jaeger wrote:
On 2017-03-08 15:40, ZZelle wrote:

Hi,

iiuc, neutron uses a released version of neutron-lib not neutron-lib
master ... So the change should depends on a change in requirements repo
incrementing neutron-lib version
This is documented also at - together with some other caveats:

https://docs.openstack.org/infra/manual/developers.html#limitations-and-caveats
Thank you for the pointer. I understand.

Hirofumi

Note a depends-on requirements won't work either - you really need to
release it. Or you need to change the test to pull neutron-lib from source,

Andreas

On Wed, Mar 8, 2017 at 3:16 PM, Hirofumi Ichihara
<ichihara.hirofumi@lab.ntt.co.jp
ichihara.hirofumi@lab.ntt.co.jp> wrote:

 Hi,

 I thought that we can post neutron patch depending on neutron-lib
 patch under review.
 However, I saw it doesn't work[1, 2]. In the patches, neutron
 patch[1] has Depends-on tag with neutron-lib patch[2] but the pep8
 and unit test fails because the test doesn't use the neutron-lib patch.

 Please correct me if it's my misunderstanding.

 [1]: https://review.openstack.org/#/c/424340/
 <https://review.openstack.org/#/c/424340/>
 [2]: https://review.openstack.org/#/c/424868/
 <https://review.openstack.org/#/c/424868/>

 Thanks,
 Hirofumi



 __________________________________________________________________________
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Message: 23
Date: Wed, 8 Mar 2017 09:50:34 -0600
From: Lance Bragstad lbragstad@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev]
[cinder][glance][horizon][keystone][nova][qa][swift] Feedback needed:
Removal of legacy per-project vanity domain redirects
Message-ID:

Content-Type: text/plain; charset="utf-8"

From a keystone-perspective, I'm fine killing keystone.openstack.org.
Unless another team member with more context/history has a reason to keep
it around.

On Wed, Mar 8, 2017 at 9:12 AM, Monty Taylor mordred@inaugust.com wrote:

Hey all,

We have a set of old vanity redirect URLs from back when we made a URL
for each project:

cinder.openstack.org
glance.openstack.org
horizon.openstack.org
keystone.openstack.org
nova.openstack.org
qa.openstack.org
swift.openstack.org

They are being served from an old server we'd like to retire. Obviously,
moving a set of http redirects is trivial, but these domains have been
deprecated for about 4 years now, so we figured we'd clean house if we can.

We know that the swift team has previously expressed that there are
links out in the wild pointing to swift.o.o/content that still work and
that they don't want to break anyone, which is fine. (although if the
swift team has changed their minds, that's also welcome)

for the rest of you, can we kill these rather than transfer them?

Thanks!
Monty


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Message: 24
Date: Wed, 8 Mar 2017 15:55:19 +0000
From: Yu Wei yu2003w@hotmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [nova][placement-api] Is there any
document about openstack-placement-api for installation and configure?
Message-ID:

Content-Type: text/plain; charset="utf-8"

It seems that nova-placement-api acts as a CGI module.

Is it?

On 2017-03-08 23:35, Chris Dent wrote:
On Wed, 8 Mar 2017, Yu Wei wrote:

When I tried to configure placement-api, I met the following problem:

AH01630: client denied by server configuration: /usr/bin/nova-placement-api

That can be fixed by doing (somewhere in your apache config):

    Require all granted

but rather than doing that you may wish to move nova-placement-api
to a less global directory and grant access to that directory.
Providing wide access to /usr/bin is not a great idea.


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Message: 25
Date: Wed, 8 Mar 2017 16:02:12 +0000
From: "Scheglmann, Stefan" scheglmann@strato.de
To: "openstack-dev@lists.openstack.org"

Subject: Re: [openstack-dev] [puppet] puppet-cep beaker test
Message-ID: AA214AA9-86CB-4DC6-8BDE-627F9067D6F8@strato.de
Content-Type: text/plain; charset="utf-8"

Hey Alex,

thx for the reply, unfortunately it doesn't seem to work. Adding PUPPET_MAJ_VERSION to the call seems not to have any effect.

Stefan

On Tue, Mar 7, 2017 at 7:09 AM, Scheglmann, Stefan scheglmann@strato.de wrote:

Hi,

currently got some problems running the beaker tests for the puppet-ceph module. Working on OSX using Vagrant version 1.8.6 and VirtualBox version 5.1.14. Call is 'BEAKER_destroy=no BEAKER_debug=1 bundle exec --verbose rspec spec/acceptance'; output in http://pastebin.com/w5ifgrvd

Try running:
PUPPET_MAJ_VERSION=4 BEAKER_destroy=no BEAKER_debug=1 bundle exec
--verbose rspec spec/acceptance

Thanks,
-Alex

Tried this; it just changes the trace a bit. Now it seems like it worked in the first place but then failed for the same reason.
Trace here:

Trace:
An error occurred in a before(:suite) hook.
Failure/Error: raise CommandFailure, "Host '#{self}' exited with #{result.exit_code} running:\n #{cmdline}\nLast #{@options[:trace_limit]} lines of output were:\n#{result.formatted_output(@options[:trace_limit])}"
Beaker::Host::CommandFailure:
Host 'first' exited with 127 running:
ZUUL_REF= ZUUL_BRANCH= ZUUL_URL= PUPPET_MAJ_VERSION= bash openstack/puppet-openstack-integration/install_modules.sh
Last 10 lines of output were:
+ '[' -n 'SHELLOPTS=braceexpand:hashall:interactive-comments:xtrace
if [ -n "$(set | grep xtrace)" ]; then
local enable_xtrace='\''yes'\'';
if [ -n "${enable_xtrace}" ]; then' ']'
+ set +x
--------------------------------------------------------------------------------
| Install r10k |
--------------------------------------------------------------------------------
+ gem install fast_gettext -v '< 1.2.0'
openstack/puppet-openstack-integration/install_modules.sh: line 29: gem: command not found

It seems like the box beaker is using (puppetlabs/ubuntu-14.04-64-nocm) somehow ends up with puppet 4.x installed. I could not exactly pin down how this happens, cause when I spin up a VM just from that base box and install puppet, I end up with 3.4. But during the beaker tests it ends up with puppet 4, and in puppet 4 some paths have changed. /opt/puppetlabs/bin is just for the 'public' applications and the 'private' ones like gem or ruby are in /opt/puppetlabs/puppet/bin. Therefore the openstack/puppet-openstack-integration/install_modules.sh script fails on installation of r10k, cause it cannot find gem, and later on it fails on the r10k call cause it is also installed to /opt/puppetlabs/puppet/bin.
Symlinking gem and r10k in a provisioned machine and rerunning the tests fixes the problem. Currently I am doing all this cause I added some functionality to the puppet-ceph manifests to support bluestore/rocksdb and some additional config params which I would like to see upstream.

Greets Stefan


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Message: 26
Date: Wed, 8 Mar 2017 09:15:04 -0700
From: Alex Schultz aschultz@redhat.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [puppet] puppet-cep beaker test
Message-ID:

Content-Type: text/plain; charset=UTF-8

On Wed, Mar 8, 2017 at 9:02 AM, Scheglmann, Stefan scheglmann@strato.de wrote:
Hey Alex,

thx for the reply, unfortunately it doesn't seem to work. Adding PUPPET_MAJ_VERSION to the call seems not to have any effect.

I just read the bottom part of the original message and it's getting a
14.04 box from puppet-ceph/spec/acceptance/nodesets/default.yml. You
could try changing that to 16.04. For our CI we're using the
nodepool-xenial.yml via BEAKER_set=nodepool-xenial.yml but that
assumes you're running on localhost. You could try grabbing the 1604
configuration from puppet-openstack_extras[0] and putting that in your
spec/acceptance/nodesets folder to see if that works for you. Then you
should be able to run:

PUPPET_MAJ_VERSION=4 BEAKER_destroy=no BEAKER_debug=1
BEAKER_set=ubuntu-server-1604-x64 bundle exec --verbose rspec
spec/acceptance

If you run in to more problems, you may want to try hopping on IRC and
we can help you in #puppet-openstack on freenode.

Thanks,
-Alex

[0] https://github.com/openstack/puppet-openstack_extras/blob/master/spec/acceptance/nodesets/ubuntu-server-1604-x64.yml

Stefan

On Tue, Mar 7, 2017 at 7:09 AM, Scheglmann, Stefan scheglmann@strato.de wrote:

Hi,

currently got some problems running the beaker tests for the puppet-ceph module. Working on OSX using Vagrant version 1.8.6 and VirtualBox version 5.1.14. Call is 'BEAKER_destroy=no BEAKER_debug=1 bundle exec --verbose rspec spec/acceptance'; output in http://pastebin.com/w5ifgrvd

Try running:
PUPPET_MAJ_VERSION=4 BEAKER_destroy=no BEAKER_debug=1 bundle exec
--verbose rspec spec/acceptance

Thanks,
-Alex

Tried this; it just changes the trace a bit. Now it seems like it worked in the first place but then failed for the same reason.
Trace here:

Trace:
An error occurred in a before(:suite) hook.
Failure/Error: raise CommandFailure, "Host '#{self}' exited with #{result.exit_code} running:\n #{cmdline}\nLast #{@options[:trace_limit]} lines of output were:\n#{result.formatted_output(@options[:trace_limit])}"
Beaker::Host::CommandFailure:
Host 'first' exited with 127 running:
ZUUL_REF= ZUUL_BRANCH= ZUUL_URL= PUPPET_MAJ_VERSION= bash openstack/puppet-openstack-integration/install_modules.sh
Last 10 lines of output were:
+ '[' -n 'SHELLOPTS=braceexpand:hashall:interactive-comments:xtrace
if [ -n "$(set | grep xtrace)" ]; then
local enable_xtrace='\''yes'\'';
if [ -n "${enable_xtrace}" ]; then' ']'
+ set +x
--------------------------------------------------------------------------------
| Install r10k |
--------------------------------------------------------------------------------
+ gem install fast_gettext -v '< 1.2.0'
openstack/puppet-openstack-integration/install_modules.sh: line 29: gem: command not found

It seems like the box beaker is using (puppetlabs/ubuntu-14.04-64-nocm) somehow ends up with puppet 4.x installed. I could not exactly pin down how this happens, cause when I spin up a VM just from that base box and install puppet, I end up with 3.4. But during the beaker tests it ends up with puppet 4, and in puppet 4 some paths have changed. /opt/puppetlabs/bin is just for the 'public' applications and the 'private' ones like gem or ruby are in /opt/puppetlabs/puppet/bin. Therefore the openstack/puppet-openstack-integration/install_modules.sh script fails on installation of r10k, cause it cannot find gem, and later on it fails on the r10k call cause it is also installed to /opt/puppetlabs/puppet/bin.
Symlinking gem and r10k in a provisioned machine and rerunning the tests fixes the problem. Currently I am doing all this cause I added some functionality to the puppet-ceph manifests to support bluestore/rocksdb and some additional config params which I would like to see upstream.

Greets Stefan


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Message: 27
Date: Wed, 8 Mar 2017 11:23:50 -0500
From: David Moreau Simard dms@redhat.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [tc][appcat] The future of the App
Catalog
Message-ID:

Content-Type: text/plain; charset=UTF-8

The App Catalog, to me, sounds sort of like a weird message that
OpenStack somehow requires applications to be
packaged/installed/deployed differently.
If anything, perhaps we should spend more effort on advertising that
OpenStack provides bare metal or virtual compute resources and that
apps will work just like in any other place.

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

On Wed, Mar 8, 2017 at 9:41 AM, Jay Pipes jaypipes@gmail.com wrote:
On 03/06/2017 06:26 AM, Thierry Carrez wrote:

Hello everyone,

The App Catalog was created early 2015 as a marketplace of pre-packaged
applications that you can deploy using Murano. Initially a demo by
Mirantis, it was converted into an open upstream project team, and
deployed as a "beta" as apps.openstack.org.

Since then it grew additional categories (Glance images, Heat & Tosca
templates), but otherwise did not pick up a lot of steam. The website
(still labeled "beta") features 45 glance images, 6 Tosca templates, 13
heat templates and 94 murano packages (~30% of which are just thin
wrappers around Docker containers). Traffic stats show around 100 visits
per week, 75% of which only read the index page.

In parallel, Docker developed a pretty successful containerized
application marketplace (the Docker Hub), with hundreds of thousands of
regularly-updated apps. Keeping the App Catalog around (including its
thinly-wrapped Docker container Murano packages) makes us look like we
are unsuccessfully trying to compete with that ecosystem, while
OpenStack is in fact completely complementary.

In the past we have retired projects that were dead upstream. The App
Catalog is not in this case: it has an active maintenance team, which
has been successfully maintaining the framework and accepting
applications. If we end up retiring the App Catalog, it would clearly
not be a reflection on that team performance, which has been stellar
despite limited resources. It would be because the beta was arguably not
successful in building an active marketplace of applications, and
because its continuous existence is not a great fit from a strategy
perspective. Such removal would be a first for our community, but I
think it's now time to consider it.

Before we discuss or decide anything at the TC level, I'd like to
collect everyone's thoughts (and questions) on this. Please feel free to
reply to this thread (or reach out to me privately if you prefer). Thanks
!

Mirantis' position is that the App Catalog was a good idea, but we agree
with you that other application repositories like DockerHub and Quay.io are
both more useful and more actively used.

The OpenStack App Catalog does indeed seem to unnecessarily compete with
those application repositories, and we would support its retirement if that
is what the community would like to do. We'll provide resources and help in
winding anything down if needed.

Best,
-jay


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Message: 28
Date: Wed, 8 Mar 2017 11:23:50 -0500
From: Brian Rosmaita rosmaita.fossdev@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev]
[cinder][glance][horizon][keystone][nova][qa][swift] Feedback needed:
Removal of legacy per-project vanity domain redirects
Message-ID: ddb8d3a0-85b3-fd6f-31e1-5724b51b99c3@gmail.com
Content-Type: text/plain; charset=windows-1252

On 3/8/17 10:12 AM, Monty Taylor wrote:
Hey all,

We have a set of old vanity redirect URLs from back when we made a URL
for each project:

cinder.openstack.org
glance.openstack.org
horizon.openstack.org
keystone.openstack.org
nova.openstack.org
qa.openstack.org
swift.openstack.org

They are being served from an old server we'd like to retire. Obviously,
moving a set of http redirects is trivial, but these domains have been
deprecated for about 4 years now, so we figured we'd clean house if we can.

We know that the swift team has previously expressed that there are
links out in the wild pointing to swift.o.o/content that still work and
that they don't want to break anyone, which is fine. (although if the
swift team has changed their minds, that's also welcome)

for the rest of you, can we kill these rather than transfer them?

My concern is that glance.openstack.org is easy to remember and type, so
I imagine there are links out there that we have no control over using
that URL. So what are the consequences of it 404'ing or "site cannot be
reached" in a browser?

glance.o.o currently redirects to docs.o.o/developer/glance

If glance.o.o failed for me, I'd next try openstack.org (or
www.openstack.org). Those would give me a page with a top bar of links,
one of which is DOCS. If I took that link, I'd get the docs home page.

There's a search bar there; typing in 'glance' gets me
docs.o.o/developer/glance as the first hit.

If instead I scroll past the search bar, I have to scroll down to
"Project-Specific Guides" and follow "Services & Libraries" ->
"Openstack Services" -> "image service (glance) -> docs.o.o/developer/glance

Which sounds kind of bad ... until I type "glance docs" into google.
Right now the first hit is docs.o.o/developer/glance. And all the kids
these days use the google. So by trying to be clever and hack the URL,
I could get lost, but if I just google 'glance docs', I find what I'm
looking for right away.

So I'm willing to go with the majority on this, with the caveat that if
one or two teams keep the redirect, it's going to be confusing to end
users if the redirect doesn't work for other projects.

cheers,
brian

Thanks!
Monty


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Message: 29
Date: Wed, 8 Mar 2017 16:27:56 +0000 (GMT)
From: Chris Dent cdent+os@anticdent.org
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [nova][placement-api] Is there any
document about openstack-placement-api for installation and configure?
Message-ID: alpine.OSX.2.20.1703081625580.59117@shine.local
Content-Type: text/plain; charset="utf-8"; Format="flowed"

On Wed, 8 Mar 2017, Yu Wei wrote:

It seems that nova-placement-api acts as a CGI module.

Is it?

It's a WSGI application module, which is configured and accessed via
some mod wsgi configuration settings, if you're using mod_wsgi with
apache2:

 https://modwsgi.readthedocs.io/en/develop/configuration-directives/WSGIDaemonProcess.html
 https://modwsgi.readthedocs.io/en/develop/configuration-directives/WSGIScriptAlias.html

It's a similar concept with other WSGI servers.
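
For illustration, a minimal WSGI application module looks something like
this (mod_wsgi looks for a callable named 'application' by default):

    # A long-lived callable invoked once per request, unlike a CGI
    # script, which is executed in a new process for every request.
    def application(environ, start_response):
        body = b'placement says hello'
        start_response('200 OK', [('Content-Type', 'text/plain'),
                                  ('Content-Length', str(len(body)))])
        return [body]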

--
Chris Dent ¯\_(ツ)_/¯ https://anticdent.org/
freenode: cdent tw: @anticdent


Message: 30
Date: Wed, 8 Mar 2017 16:31:23 +0000
From: "Daniel P. Berrange" berrange@redhat.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev]
[cinder][glance][horizon][keystone][nova][qa][swift] Feedback needed:
Removal of legacy per-project vanity domain redirects
Message-ID: 20170308163123.GT7470@redhat.com
Content-Type: text/plain; charset=utf-8

On Wed, Mar 08, 2017 at 09:12:59AM -0600, Monty Taylor wrote:
Hey all,

We have a set of old vanity redirect URLs from back when we made a URL
for each project:

cinder.openstack.org
glance.openstack.org
horizon.openstack.org
keystone.openstack.org
nova.openstack.org
qa.openstack.org
swift.openstack.org

They are being served from an old server we'd like to retire. Obviously,
moving a set of http redirects is trivial, but these domains have been
deprecated for about 4 years now, so we figured we'd clean house if we can.

We know that the swift team has previously expressed that there are
links out in the wild pointing to swift.o.o/content that still work and
that they don't want to break anyone, which is fine. (although if the
swift team has changed their minds, that's also welcome)

for the rest of you, can we kill these rather than transfer them?

Does the server have any access log that could provide stats on whether
any of the subdomains are a receiving a meaningful amount of traffic ?
Easy to justify removing them if they're not seeing any real traffic.

If there's any referrer logs present, that might highlight which places
still have outdated links that need updating to kill off remaining
traffic.

Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://entangle-photo.org -o- http://search.cpan.org/~danberr/ :|


Message: 31
Date: Thu, 9 Mar 2017 00:50:56 +0800
From: Jeffrey Zhang zhang.lei.fly@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Cc: openstack openstack@lists.openstack.org
Subject: Re: [openstack-dev] [kolla][ubuntu][libvirt] Is libvirt 2.5.0
in ubuntu cloud archive ocata repo bust
Message-ID:

Content-Type: text/plain; charset="utf-8"

Thanks Corey. But I tried the ocata-proposed repo, and the issue is still happening.

On Wed, Mar 8, 2017 at 10:03 PM, Corey Bryant corey.bryant@canonical.com
wrote:

On Tue, Mar 7, 2017 at 10:28 PM, Jeffrey Zhang zhang.lei.fly@gmail.com
wrote:

The Kolla ubuntu deploy gate is red now. Here is the related bug [0].

libvirt fails to access the console.log file when booting an instance. After
some debugging, I found the following.

Jeffrey, This is likely fixed in ocata-proposed and should be promoted to
ocata-updates soon after testing completes. https://bugs.launchpad.net/
ubuntu/+source/libvirt/+bug/1667033.

Corey

how console.log works

nova creates an empty console.log with nova:nova ownership (this is actually
a workaround for another bug [1]), then libvirt (running as root) changes the
file owner to the qemu process user/group (configured by dynamic_ownership).
Now the qemu process can write logs into this file.

what's wrong now

libvirt 2.5.0 stopped changing the file owner, so qemu/libvirt fails to
write logs into the console.log file

other tests

  • ubuntu + fallback libvirt 1.3.x works [2]
  • ubuntu + libvirt 2.5.0 + changing the qemu process user/group to
    nova:nova works, too [3]
  • centos + libvirt 2.0.0 works; never saw such an issue in centos.

conclusion

I guess there is something wrong in libvirt 2.5.0 with dynamic_ownership

[0] https://bugs.launchpad.net/kolla-ansible/+bug/1668654
[1] https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2922,L2952
[2] https://review.openstack.org/442673
[3] https://review.openstack.org/442850

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me



OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscrib
e
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me


Message: 32
Date: Wed, 8 Mar 2017 12:08:11 -0500
From: James Slagle james.slagle@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [TripleO][Heat] Selectively disabling
deployment resources
Message-ID:

Content-Type: text/plain; charset=UTF-8

On Wed, Mar 8, 2017 at 4:08 AM, Steven Hardy shardy@redhat.com wrote:
On Tue, Mar 07, 2017 at 02:34:50PM -0500, James Slagle wrote:

I've been working on this spec for TripleO:
https://review.openstack.org/#/c/431745/

which allows users to selectively disable Heat deployment resources
for a given server (or servers in the case of a *DeploymentGroup
resource).

Some of the main use cases in TripleO for such a feature are scaling
out compute nodes where you do not need to rerun Puppet (or make any
changes at all) on non-compute nodes, or to exclude nodes from hanging
a stack-update if you know they are unreachable or degraded for some
reason. There are others, but those are 2 of the major use cases.

Thanks for raising this, I know it's been a pain point for some users of
TripleO.

However I think we're conflating two different issues here:

  1. Don't re-run puppet (or yum update) when no other changes have happened

  2. Disable deployment resources when changes have happened

Yea, possibly, but (1) doesn't really solve the use cases in the spec.
It'd certainly be a small improvement, but it's not really what users
are asking for.

(2) is much more difficult to reason about because we in fact have to
execute puppet to fully determine if changes have happened.

I don't really think these two are conflated. For some purposes, the
2nd is just a more abstract definition of the first. For better or
worse, part of the reason people are asking for this feature is
because they don't want to undo manual changes. While that's not
something we should really spend a lot of time solving for, the fact
is that OpenStack architecture allows for horizontally scaling compute
nodes without having to touch every other single node in your deployment,
but TripleO can't take advantage of that.

So, just giving users a way to opt out of the generated unique
identifier triggering the puppet applys and other deployments,
wouldn't help them if they unintentionally changed some other hiera
data that triggers a deployment.

Plus, we have some deployments that are going to execute every time
outside of unique identifiers being generated (hosts-config.yaml).

(1) is actually very simple, and is the default behavior of Heat
(SoftwareDeployment resources never update unless either the config
referenced or the input_values change). We just need to provide an option
to disable the DeployIdentifier/UpdateIdentifier timestamps from being
generated in tripleoclient.

(2) is harder, because the whole point of SoftwareDeploymentGroup is to run
the exact same configuration on a group of servers, with no exceptions.

As Zane mentions (2) is related to the way ResourceGroup works, but the
problem here isn't ResourceGroup per-se, as it would in theory be pretty
easy to reimplement SoftwareDeploymentGroup to generate its nested stack
without inheriting from ResourceGroup (which may be needed if you want a
flag to make existing Deployments in the group immutable).

I'd suggest we solve (1) and do some testing, it may be enough to solve the
"don't change computes on scale-out" case at least?

Possibly, as long as no other deployments are triggered. I think of
the use case more as:

add a compute node(s), don't touch any existing nodes to minimize risk

as opposed to:

add a compute node(s), don't re-run puppet on any existing nodes as I
know that it's not needed

For the scale out case, the desire to minimize risk is a big part of
why other nodes don't need to be touched.

One way to potentially solve (2) would be to unroll the
SoftwareDeploymentGroup resources and instead generate the Deployment
resources via jinja2 - this would enable completely removing them on update
if that's what is desired, similar to what we already do for upgrades to
e.g not upgrade any compute nodes.

Thanks, I hadn't considered that approach, but will look into it. I'd
guess you'd still need a parameter or map data fed into the jinja2
templating, so that it would not generate the deployment resources
based on what was desired to be disabled. Or, this could use
conditionals perhaps.

--
-- James Slagle
--


Message: 33
Date: Wed, 8 Mar 2017 09:33:29 -0800
From: "Armando M." armamig@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [neutron] [infra] Depends-on tag effect
Message-ID:

Content-Type: text/plain; charset="utf-8"

On 8 March 2017 at 07:39, Hirofumi Ichihara <ichihara.hirofumi@lab.ntt.co.jp
wrote:

On 2017/03/08 23:59, Andreas Jaeger wrote:

On 2017-03-08 15:40, ZZelle wrote:

Hi,

iiuc, neutron uses a released version of neutron-lib not neutron-lib
master ... So the change should depends on a change in requirements repo
incrementing neutron-lib version

This is documented also at - together with some other caveats:

https://docs.openstack.org/infra/manual/developers.html#limi
tations-and-caveats

Thank you for the pointer. I understand.

You can do the reverse as documented in 1: i.e. file a dummy patch
against neutron-lib that pulls in both neutron's and neutron-lib changes.
One example is 2

1 https://docs.openstack.org/developer/neutron-lib/review-guidelines.html
2 https://review.openstack.org/#/c/386846/

Hirofumi

Note a depends-on requirements won't work either - you really need to
release it. Or you need to change the test to pull neutron-lib from
source,

Andreas

On Wed, Mar 8, 2017 at 3:16 PM, Hirofumi Ichihara
<ichihara.hirofumi@lab.ntt.co.jp
ichihara.hirofumi@lab.ntt.co.jp> wrote:

 Hi,

 I thought that we can post neutron patch depending on neutron-lib
 patch under review.
 However, I saw it doesn't work[1, 2]. In the patches, neutron
 patch[1] has Depends-on tag with neutron-lib patch[2] but the pep8
 and unit test fails because the test doesn't use the neutron-lib

patch.

 Please correct me if it's my misunderstanding.

 [1]: https://review.openstack.org/#/c/424340/
 <https://review.openstack.org/#/c/424340/>
 [2]: https://review.openstack.org/#/c/424868/
 <https://review.openstack.org/#/c/424868/>

 Thanks,
 Hirofumi



 ___________________________________________________________


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
OpenStack-dev-request@lists.openstack.org?subject:unsubscribe



OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Message: 34
Date: Wed, 8 Mar 2017 12:38:01 -0500
From: Jim Rollenhagen jim@jimrollenhagen.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [ironic] OpenStack client default ironic
API version
Message-ID:

Content-Type: text/plain; charset="utf-8"

On Wed, Mar 8, 2017 at 9:05 AM, Mario Villaplana <mario.villaplana@gmail.com
wrote:

We want to deprecate ironic CLI soon, but I would prefer if that were
discussed on a separate thread if possible, aside from concerns about
versioning in ironic CLI. Feature parity should exist in Pike, then we
can issue a warning in Queens and deprecate the cycle after. More
information is on L56:
https://etherpad.openstack.org/p/ironic-pike-ptg-operations

I'm a bit torn on whether to use the API version coded in the OSC
plugin or not. On one hand, it'd be good to be able to test out new
features as soon as they're available. On the other hand, it's
possible that the client won't know how to parse certain items after a
microversion bump. I think I prefer using the hard-coded version to
avoid breakage, but we'd have to be disciplined about updating the
client when the API version is bumped (if needed). Opinions on this
are welcome. In either case, I think the deprecation warning could
land without specifying that.

I agree, I think we should pin it, otherwise it's one more hump to
overcome when we do want to make a breaking change.

FWIW, nova pins (both clients) to the max the client knows about,
specifically for this reason:
https://github.com/openstack/python-openstackclient/blob/master/openstackclient/compute/client.py#L52-L57
https://github.com/openstack/python-novaclient/blob/master/novaclient/__init__.py#L23-L28

I'll certainly make an RFE when I update the patch later this week,
great suggestion.

I can make a spec, but it might be mostly empty except for the client
impact section. Also, this is a < 40 line change. :)

I tend to think a spec is a bit overkill for this, but I won't deny Ruby's
request.
Ping me when it's up and I'm happy to review it ASAP.

// jim


Message: 35
Date: Wed, 8 Mar 2017 17:42:02 +0000
From: "Fox, Kevin M" Kevin.Fox@pnnl.gov
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [tc][appcat] The future of the App
Catalog
Message-ID:

Content-Type: text/plain; charset="us-ascii"

For the OpenStack Applications Catalog to be successful in its mission, it required other parts of OpenStack to consider the use case a priority. Over the years it became quite clear to me that a significant part of the OpenStack community does not want OpenStack to become a place where cloud native applications would be built/packaged/provided to users using OpenStack's APIs, but instead just a place to run virtual machines on which you might deploy a cloud native platform to handle that use case. As time goes on, and COE's gain multitenancy, I see a big contraction in the number of OpenStack deployments or deployed node count and a shifting of OpenStack based workloads more towards managing pet VM's, as the cloud native stuff moves more and more towards containers/COE's which don't actually need VM's.

This I think will bring the issue to a head in the OpenStack community soon. What is OpenStack? Is it purely an IaaS implementation? It's pretty good at that now, but something that will be very niche soon, I think. Is it a Cloud Operating system? The community seems to have made that a resounding no. Is it an OpenSource competitor to AWS? Today, it's getting further and further behind in that. If nothing changes, that will be impossible.

My 2 cents? I think the world does need an open source implementation of what AWS provides. That can't happen on the path we're all going down now. We're struggling with a division of vision between the two ideologies and a lack of decision around a COE, causing us to spend a huge amount of effort on things like Trove/Sahara/etc to reproduce functionality in AWS, while not being as agile as AWS, so we can't ever make headway. If we want to be an open source AWS competitor, that requires us to make some hard calls: pick a COE (Kubernetes has won that space, I believe), start integrating it quickly, and retool advanced services like Trove/Sahara/etc to target the COE rather than VMs for deployment. This should greatly enhance our ability to produce functional solutions quickly.

But it's ultimately the community who decides what OpenStack will become. If we're ok with the path it's headed down, to basically just be an IaaS, that's fine with me. I'd just like it to be a conscious decision rather than one that just happens. If that's the way it goes, let's just decide on it now, and let the folks that are spinning their wheels move on to a system that will help them make headway in their goals. It will be better for everyone.

Thanks,
Kevin


From: David Moreau Simard [dms@redhat.com]
Sent: Wednesday, March 08, 2017 8:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc][appcat] The future of the App Catalog

The App Catalog, to me, sends a sort of weird message: that
OpenStack somehow requires applications to be
packaged/installed/deployed differently.
If anything, perhaps we should spend more effort on advertising that
OpenStack provides bare metal or virtual compute resources and that
apps will work just like they do anywhere else.

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

On Wed, Mar 8, 2017 at 9:41 AM, Jay Pipes jaypipes@gmail.com wrote:
On 03/06/2017 06:26 AM, Thierry Carrez wrote:

Hello everyone,

The App Catalog was created early 2015 as a marketplace of pre-packaged
applications that you can deploy using Murano. Initially a demo by
Mirantis, it was converted into an open upstream project team, and
deployed as a "beta" as apps.openstack.org.

Since then it grew additional categories (Glance images, Heat & Tosca
templates), but otherwise did not pick up a lot of steam. The website
(still labeled "beta") features 45 glance images, 6 Tosca templates, 13
heat templates and 94 murano packages (~30% of which are just thin
wrappers around Docker containers). Traffic stats show around 100 visits
per week, 75% of which only read the index page.

In parallel, Docker developed a pretty successful containerized
application marketplace (the Docker Hub), with hundreds of thousands of
regularly-updated apps. Keeping the App Catalog around (including its
thinly-wrapped Docker container Murano packages) makes us look like we
are unsuccessfully trying to compete with that ecosystem, while
OpenStack is in fact completely complementary.

In the past we have retired projects that were dead upstream. The App
Catalog is not in this case: it has an active maintenance team, which
has been successfully maintaining the framework and accepting
applications. If we end up retiring the App Catalog, it would clearly
not be a reflection on that team's performance, which has been stellar
despite limited resources. It would be because the beta was arguably not
successful in building an active marketplace of applications, and
because its continued existence is not a great fit from a strategy
perspective. Such removal would be a first for our community, but I
think it's now time to consider it.

Before we discuss or decide anything at the TC level, I'd like to
collect everyone's thoughts (and questions) on this. Please feel free to
reply to this thread (or reach out to me privately if you prefer). Thanks!

Mirantis' position is that the App Catalog was a good idea, but we agree
with you that other application repositories like DockerHub and Quay.io are
both more useful and more actively used.

The OpenStack App Catalog does indeed seem to unnecessarily compete with
those application repositories, and we would support its retirement if that
is what the community would like to do. We'll provide resources and help in
winding anything down if needed.

Best,
-jay


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Message: 36
Date: Wed, 8 Mar 2017 17:52:43 +0000
From: "Kwasniewska, Alicja" alicja.kwasniewska@intel.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [kolla] Proposing duonghq for core
Message-ID: E5DE2A5A-DCA7-4900-BFB7-4849CE6D9DAF@intel.com
Content-Type: text/plain; charset="utf-8"

+1

From: Mauricio Lima mauriciolimab@gmail.com
Reply-To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Date: Wednesday, March 8, 2017 at 5:34 AM
To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [kolla] Proposing duonghq for core

+1

2017-03-08 7:34 GMT-03:00 Christian Berendt berendt@betacloud-solutions.de:
+1

On 8 Mar 2017, at 07:41, Michał Jastrzębski inc007@gmail.com wrote:

Hello,

I'd like to start voting to include Duong (duonghq) in Kolla and
Kolla-ansible core teams. Voting will be open for 2 weeks (ends at
21st of March).

Consider this my +1 vote.

Cheers,
Michal


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Christian Berendt
Chief Executive Officer (CEO)

Mail: berendt@betacloud-solutions.de
Web: https://www.betacloud-solutions.de

Betacloud Solutions GmbH
Teckstrasse 62 / 70190 Stuttgart / Deutschland

Geschäftsführer: Christian Berendt
Unternehmenssitz: Stuttgart
Amtsgericht: Stuttgart, HRB 756139


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:


Message: 37
Date: Wed, 8 Mar 2017 13:17:14 -0500
From: Corey Bryant corey.bryant@canonical.com
To: "OpenStack Development Mailing List (not for usage questions)"

Cc: openstack openstack@lists.openstack.org
Subject: Re: [openstack-dev] [kolla][ubuntu][libvirt] Is libvirt 2.5.0
in ubuntu cloud archive ocata repo bust
Message-ID:

Content-Type: text/plain; charset="utf-8"

On Wed, Mar 8, 2017 at 11:50 AM, Jeffrey Zhang zhang.lei.fly@gmail.com
wrote:

Thanks Corey, but I tried the ocata-proposed repo and the issue is still
happening.

In that case, would you mind opening a bug if you haven't already?

Thanks,
Corey
-------------- next part --------------
An HTML attachment was scrubbed...
URL:


Message: 38
Date: Wed, 8 Mar 2017 18:29:52 +0000
From: Jeremy Stanley fungi@yuggoth.org
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [infra][tripleo] initial discussion for a
new periodic pipeline
Message-ID: 20170308182952.GG12827@yuggoth.org
Content-Type: text/plain; charset=us-ascii

On 2017-03-07 10:12:58 -0500 (-0500), Wesley Hayutin wrote:
The TripleO team would like to initiate a conversation about the
possibility of creating a new pipeline in Openstack Infra to allow
a set of jobs to run periodically every four hours
[...]

The request doesn't strike me as contentious/controversial. Why not
just propose your addition to the zuul/layout.yaml file in the
openstack-infra/project-config repo and hash out any resulting
concerns via code review?
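
For reference, a rough sketch of what such an addition might look like,
modeled loosely on the existing periodic pipeline in zuul/layout.yaml
(check that file for the exact syntax):

    # Hypothetical 4-hourly pipeline; sketch only, not a tested definition.
    pipelines:
      - name: periodic-4hr
        description: Jobs here are triggered on a four-hour timer.
        manager: IndependentPipelineManager
        trigger:
          timer:
            - time: '0 */4 * * *'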
--
Jeremy Stanley


Message: 39
Date: Wed, 8 Mar 2017 13:03:58 -0600
From: Matthew Thode prometheanfire@gentoo.org
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: [openstack-dev] [requirements] pycrypto is dead, long live
pycryptodome... or cryptography...
Message-ID: cba43a52-7c71-5ad0-15c1-5127ff4c302e@gentoo.org
Content-Type: text/plain; charset="utf-8"

So, pycrypto upstream is dead and has been for a while; we should look
at moving off of it for both bugfix and security reasons.

Currently it's used by the following.

barbican, cinder, trove, glance, heat, keystoneauth, keystonemiddleware,
kolla, openstack-ansible, and a couple of other smaller places.

Development of it was forked into pycryptodome, which is supposed to be
a drop-in replacement. The problem is that due to co-installability
requirements we can't have half of the packages out there using pycrypto
and the other half using pycryptodome. We'd need to hard-switch everyone,
as both packages install into the same namespace.

Another alternative would be to use something like cryptography instead;
though it is not a drop-in replacement, the migration could be done
piecemeal.

I'd be interested in hearing about migration plans, especially from the
affected projects.
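
To make the namespace clash and the piecemeal option concrete, a short
sketch (the AES-CTR snippet is illustrative, not code from any of the
projects above):

    # pycrypto and pycryptodome both install the "Crypto" top-level package,
    # so an environment can only contain one of them at a time:
    #   from Crypto.Cipher import AES   # resolves to whichever is installed
    #
    # cryptography uses its own namespace, so callers can migrate piecemeal:
    import os
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, nonce = os.urandom(32), os.urandom(16)
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce),
                 backend=default_backend()).encryptor()
    ciphertext = enc.update(b"secret") + enc.finalize()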

--
Matthew Thode (prometheanfire)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL:


Message: 40
Date: Wed, 8 Mar 2017 20:04:10 +0100
From: Andreas Jaeger aj@suse.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev]
[cinder][glance][horizon][keystone][nova][qa][swift] Feedback needed:
Removal of legacy per-project vanity domain redirects
Message-ID: 9617a5f5-2f01-c713-c1bf-86c6308422f3@suse.com
Content-Type: text/plain; charset="windows-1252"

On 2017-03-08 17:23, Brian Rosmaita wrote:
On 3/8/17 10:12 AM, Monty Taylor wrote:

Hey all,

We have a set of old vanity redirect URLs from back when we made a URL
for each project:

cinder.openstack.org
glance.openstack.org
horizon.openstack.org
keystone.openstack.org
nova.openstack.org
qa.openstack.org
swift.openstack.org

They are being served from an old server we'd like to retire. Obviously,
moving a set of http redirects is trivial, but these domains have been
deprecated for about 4 years now, so we figured we'd clean house if we can.

We know that the swift team has previously expressed that there are
links out in the wild pointing to swift.o.o/content that still work and
that they don't want to break anyone, which is fine. (although if the
swift team has changed their minds, that's also welcome)

for the rest of you, can we kill these rather than transfer them?

My concern is that glance.openstack.org is easy to remember and type, so
I imagine there are links out there that we have no control over using
that URL. So what are the consequences of it 404'ing or "site cannot be
reached" in a browser?

glance.o.o currently redirects to docs.o.o/developer/glance

If glance.o.o failed for me, I'd next try openstack.org (or
www.openstack.org). Those would give me a page with a top bar of links,
one of which is DOCS. If I took that link, I'd get the docs home page.

There's a search bar there; typing in 'glance' gets me
docs.o.o/developer/glance as the first hit.

If instead I scroll past the search bar, I have to scroll down to
"Project-Specific Guides" and follow "Services & Libraries" ->
"Openstack Services" -> "image service (glance) -> docs.o.o/developer/glance

Which sounds kind of bad ... until I type "glance docs" into google.
Right now the first hit is docs.o.o/developer/glance. And all the kids
these days use the google. So by trying to be clever and hack the URL,
I could get lost, but if I just google 'glance docs', I find what I'm
looking for right away.

So I'm willing to go with the majority on this, with the caveat that if
one or two teams keep the redirect, it's going to be confusing to end
users if the redirect doesn't work for other projects.

Very few people know about these URLs at all and there are only a few
places that use it in openstack (I just sent a few patches for those).
If you google for "openstack glance", you won't get this URL at all.

Andreas
--
Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


Message: 41
From: no-reply@openstack.org
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [kolla] kolla 4.0.0.0rc2 (ocata)
Message-ID:

Hello everyone,

A new release candidate for kolla for the end of the Ocata
cycle is available! You can find the source code tarball at:

https://tarballs.openstack.org/kolla/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:

http://git.openstack.org/cgit/openstack/kolla/log/?h=stable/ocata

Release notes for kolla can be found at:

http://docs.openstack.org/releasenotes/kolla/

Message: 42
Date: Wed, 8 Mar 2017 14:11:59 -0500
From: Davanum Srinivas davanum@gmail.com
To: prometheanfire@gentoo.org, "OpenStack Development Mailing List
(not for usage questions)" openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [requirements] pycrypto is dead, long
live pycryptodome... or cryptography...
Message-ID:

Content-Type: text/plain; charset=UTF-8

Matthew,

Please see the last time i took inventory:
https://review.openstack.org/#/q/pycryptodome+owner:dims-v

Thanks,
Dims

On Wed, Mar 8, 2017 at 2:03 PM, Matthew Thode prometheanfire@gentoo.org wrote:
So, pycrypto upstream is dead and has been for a while; we should look
at moving off of it for both bugfix and security reasons.

Currently it's used by the following.

barbican, cinder, trove, glance, heat, keystoneauth, keystonemiddleware,
kolla, openstack-ansible, and a couple of other smaller places.

Development of it was forked into pycryptodome, which is supposed to be
a drop-in replacement. The problem is that due to co-installability
requirements we can't have half of the packages out there using pycrypto
and the other half using pycryptodome. We'd need to hard-switch everyone,
as both packages install into the same namespace.

Another alternative would be to use something like cryptography instead;
though it is not a drop-in replacement, the migration could be done
piecemeal.

I'd be interested in hearing about migration plans, especially from the
affected projects.

--
Matthew Thode (prometheanfire)


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Davanum Srinivas :: https://twitter.com/dims


Message: 43
From: no-reply@openstack.org
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [kolla] kolla-ansible 4.0.0.0rc2 (ocata)
Message-ID:

Hello everyone,

A new release candidate for kolla-ansible for the end of the Ocata
cycle is available! You can find the source code tarball at:

https://tarballs.openstack.org/kolla-ansible/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:

http://git.openstack.org/cgit/openstack/kolla-ansible/log/?h=stable/ocata

Release notes for kolla-ansible can be found at:

http://docs.openstack.org/releasenotes/kolla-ansible/

Message: 44
Date: Wed, 8 Mar 2017 14:17:59 -0500
From: Steve Martinelli s.martinelli@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev]
[cinder][glance][horizon][keystone][nova][qa][swift] Feedback needed:
Removal of legacy per-project vanity domain redirects
Message-ID:

Content-Type: text/plain; charset="utf-8"

On Wed, Mar 8, 2017 at 2:04 PM, Andreas Jaeger aj@suse.com wrote:

Very few people know about these URLs at all and there are only a few
places that use it in openstack (I just sent a few patches for those).

++

I had no idea they existed...
-------------- next part --------------
An HTML attachment was scrubbed...
URL:


Message: 45
Date: Wed, 8 Mar 2017 13:24:50 -0600
From: Matthew Thode prometheanfire@gentoo.org
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [requirements] pycrypto is dead, long
live pycryptodome... or cryptography...
Message-ID: b6af5257-85dd-28be-2629-0ada5af81b7c@gentoo.org
Content-Type: text/plain; charset="utf-8"

I'm aware; IIRC it was brought up when pysaml2 had to be fixed due to a
CVE. This thread is more about looking for a long-term fix.

On 03/08/2017 01:11 PM, Davanum Srinivas wrote:
Matthew,

Please see the last time i took inventory:
https://review.openstack.org/#/q/pycryptodome+owner:dims-v

Thanks,
Dims

On Wed, Mar 8, 2017 at 2:03 PM, Matthew Thode prometheanfire@gentoo.org wrote:

So, pycrypto upstream is dead and has been for a while; we should look
at moving off of it for both bugfix and security reasons.

Currently it's used by the following.

barbican, cinder, trove, glance, heat, keystoneauth, keystonemiddleware,
kolla, openstack-ansible, and a couple of other smaller places.

Development of it was forked into pycryptodome, which is supposed to be
a drop-in replacement. The problem is that due to co-installability
requirements we can't have half of the packages out there using pycrypto
and the other half using pycryptodome. We'd need to hard-switch everyone,
as both packages install into the same namespace.

Another alternative would be to use something like cryptography instead;
though it is not a drop-in replacement, the migration could be done
piecemeal.

I'd be interested in hearing about migration plans, especially from the
affected projects.

--
Matthew Thode (prometheanfire)


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Matthew Thode (prometheanfire)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL:



OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

End of OpenStack-dev Digest, Vol 59, Issue 24
*********************************************
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

responded Mar 9, 2017 by 1392607554 (160 points)  
0 votes

On Wed, Mar 8, 2017 at 7:41 AM, Michał Jastrzębski inc007@gmail.com wrote:
Hello,

I'd like to start voting to include Duong (duonghq) in Kolla and
Kolla-ansible core teams. Voting will be open for 2 weeks (ends at
21st of March).

Consider this my +1 vote.

+1
Martin

Cheers,
Michal


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
responded Mar 9, 2017 by m.andre_at_redhat.co (1,280 points)   1
0 votes

Hello,

I’m not sure who 1392607554@qq.com is. Could you identify yourself for the mailing list, please? Note that only core reviewers may vote on core reviewer nominations. You may well be a core reviewer, so as a core reviewer myself I want to verify that you are indeed one so that your vote is counted. I think I know who you are, but I am not certain.

Thanks
-steve

From: 1392607554 1392607554@qq.com
Reply-To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Date: Wednesday, March 8, 2017 at 8:49 PM
To: OpenStack-dev-request openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [kolla] Proposing duonghq for core

+1

------------------ Original ------------------
From: "OpenStack-dev-request";openstack-dev-request@lists.openstack.org;
Date: Thu, Mar 9, 2017 03:25 AM
To: "openstack-dev"openstack-dev@lists.openstack.org;
Subject: OpenStack-dev Digest, Vol 59, Issue 24

Send OpenStack-dev mailing list submissions to
openstack-dev@lists.openstack.org

To subscribe or unsubscribe via the World Wide Web, visit
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
or, via email, send a message with subject or body 'help' to
openstack-dev-request@lists.openstack.org

You can reach the person managing the list at
openstack-dev-owner@lists.openstack.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of OpenStack-dev digest..."

Today's Topics:

  1. [acceleration] No team meeting today, resume next Wed
    (Zhipeng Huang)
  2. Re: [ironic] OpenStack client default ironic API version
    (Dmitry Tantsur)
  3. Re: [release][tripleo][fuel][kolla][ansible] Ocata Release
    countdown for R+2 Week, 6-10 March (Doug Hellmann)
  4. Re: [tc][appcat][murano][app-catalog] The future of the App
    Catalog (Ian Cordasco)
  5. Re: [telemetry][requirements] ceilometer grenade gate failure
    (gordon chung)
  6. Re: [acceleration] No team meeting today, resume next Wed
    (Harm Sluiman)
  7. [trove] today weekly meeting (Amrith Kumar)
  8. Re: [kolla][ubuntu][libvirt] Is libvirt 2.5.0 in ubuntu cloud
    archive ocata repo bust (Corey Bryant)
  9. Re: [ironic] OpenStack client default ironic API version
    (Mario Villaplana)

    1. [neutron] [infra] Depends-on tag effect (Hirofumi Ichihara)
    2. Re: [nova] Question to clarify versioned notifications
      (Matt Riedemann)
    3. Re: [neutron] [infra] Depends-on tag effect (ZZelle)
    4. Re: [tc][appcat] The future of the App Catalog (Jay Pipes)
    5. [nova][placement-api] Is there any document about
      openstack-placement-api for installation and configure? (Yu Wei)
    6. Re: [neutron] [infra] Depends-on tag effect (Andreas Jaeger)
    7. Re: [TripleO][Heat] Selectively disabling deployment
      resources (James Slagle)
    8. [ironic] Pike PTG report (Dmitry Tantsur)
    9. Re: [nova][placement-api] Is there any document about
      openstack-placement-api for installation and configure? (Chris Dent)
    10. [cinder][glance][horizon][keystone][nova][qa][swift] Feedback
      needed: Removal of legacy per-project vanity domain redirects
      (Monty Taylor)
    11. Re: [nova][placement-api] Is there any document about
      openstack-placement-api for installation and configure? (Yu Wei)
    12. Re: [nova][placement-api] Is there any document about
      openstack-placement-api for installation and configure? (Chris Dent)
    13. Re: [neutron] [infra] Depends-on tag effect (Hirofumi Ichihara)
    14. Re: [cinder][glance][horizon][keystone][nova][qa][swift]
      Feedback needed: Removal of legacy per-project vanity domain
      redirects (Lance Bragstad)
    15. Re: [nova][placement-api] Is there any document about
      openstack-placement-api for installation and configure? (Yu Wei)
    16. Re: [puppet] puppet-cep beaker test (Scheglmann, Stefan)
    17. Re: [puppet] puppet-cep beaker test (Alex Schultz)
    18. Re: [tc][appcat] The future of the App Catalog
      (David Moreau Simard)
    19. Re: [cinder][glance][horizon][keystone][nova][qa][swift]
      Feedback needed: Removal of legacy per-project vanity domain
      redirects (Brian Rosmaita)
    20. Re: [nova][placement-api] Is there any document about
      openstack-placement-api for installation and configure? (Chris Dent)
    21. Re: [cinder][glance][horizon][keystone][nova][qa][swift]
      Feedback needed: Removal of legacy per-project vanity domain
      redirects (Daniel P. Berrange)
    22. Re: [kolla][ubuntu][libvirt] Is libvirt 2.5.0 in ubuntu cloud
      archive ocata repo bust (Jeffrey Zhang)
    23. Re: [TripleO][Heat] Selectively disabling deployment
      resources (James Slagle)
    24. Re: [neutron] [infra] Depends-on tag effect (Armando M.)
    25. Re: [ironic] OpenStack client default ironic API version
      (Jim Rollenhagen)
    26. Re: [tc][appcat] The future of the App Catalog (Fox, Kevin M)
    27. Re: [kolla] Proposing duonghq for core (Kwasniewska, Alicja)
    28. Re: [kolla][ubuntu][libvirt] Is libvirt 2.5.0 in ubuntu cloud
      archive ocata repo bust (Corey Bryant)
    29. Re: [infra][tripleo] initial discussion for a new periodic
      pipeline (Jeremy Stanley)
    30. [requirements] pycrypto is dead, long live pycryptodome... or
      cryptography... (Matthew Thode)
    31. Re: [cinder][glance][horizon][keystone][nova][qa][swift]
      Feedback needed: Removal of legacy per-project vanity domain
      redirects (Andreas Jaeger)
    32. [kolla] kolla 4.0.0.0rc2 (ocata) (no-reply@openstack.org)
    33. Re: [requirements] pycrypto is dead, long live
      pycryptodome... or cryptography... (Davanum Srinivas)
    34. [kolla] kolla-ansible 4.0.0.0rc2 (ocata) (no-reply@openstack.org)
    35. Re: [cinder][glance][horizon][keystone][nova][qa][swift]
      Feedback needed: Removal of legacy per-project vanity domain
      redirects (Steve Martinelli)
    36. Re: [requirements] pycrypto is dead, long live
      pycryptodome... or cryptography... (Matthew Thode)

Message: 1
Date: Wed, 8 Mar 2017 20:22:29 +0800
From: Zhipeng Huang zhipengh512@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: [openstack-dev] [acceleration] No team meeting today, resume
next Wed
Message-ID:

Content-Type: text/plain; charset="utf-8"

Hi team,

As agreed in our PTG/VTG session, we will hold the team meeting two weeks
from now, to give people enough time to prepare the BPs we discussed.

Therefore there will be no team meeting today, and the next meeting is on
next Wed.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:


Message: 2
Date: Wed, 8 Mar 2017 13:40:37 +0100
From: Dmitry Tantsur dtantsur@redhat.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [ironic] OpenStack client default ironic
API version
Message-ID: f33ad8bf-73ef-bfe6-61d4-7f6ec03f8758@redhat.com
Content-Type: text/plain; charset=windows-1252; format=flowed

On 03/07/2017 04:59 PM, Loo, Ruby wrote:
On 2017-03-06, 3:46 PM, "Mario Villaplana" mario.villaplana@gmail.com wrote:

Hi ironic,

At the PTG, an issue regarding the default version of the ironic API
used in our python-openstackclient plugin was discussed. [0] In short,
the issue is that we default to a very old API version when the user
doesn't otherwise specify it. This limits discoverability of new
features and makes the client more difficult to use for deployments
running the latest version of the code.

We came to the following consensus:

1. For a deprecation period, we should log a warning whenever the user
doesn't specify an API version, informing them of this change.

2. After the deprecation period:

a) OSC baremetal plugin will default to the latest available version

I think OSC and ironic CLI have the same behaviour -- are we only interested in OSC or are we interested in both, except that we also want to at some point soon perhaps, deprecate ironic CLI?

I think we should only touch OSC, because of planned deprecation you mention.

Also, by 'latest available version', the OSC plugin knows (or thinks it knows) what the latest version is [1]. Will you be using that, or 'latest'?

It will pass "latest" to the API, so it may end up with a version the client
side does not know about. This is intended, I think. It does have some
consequences if we make breaking changes like removing parameters. As we're not
overly keen on breaking changes anyway, this may not be a huge concern.
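
To illustrate, requesting "latest" over the REST API looks roughly like
this (a sketch using requests; the endpoint and token are placeholders):

    # Sketch: ask ironic for the newest microversion it supports.
    import requests

    resp = requests.get(
        "http://ironic.example.com:6385/v1/nodes",
        headers={
            "X-OpenStack-Ironic-API-Version": "latest",
            "X-Auth-Token": "PLACEHOLDER",
        },
    )
    # The response headers report the microversion actually used, which
    # may be newer than anything this client was written against.
    print(resp.headers.get("X-OpenStack-Ironic-API-Version"))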

b) Specifying just macroversion will default to latest microversion
within that macroversion (example: --os-baremetal-api-version=1 would
default to 1.31 if 1.31 is the last microversion with 1 macroversion,
even if we have API 2.2 supported)

I have a patch up for review with the deprecation warning:
https://review.openstack.org/442153

Do you have an RFE? I'd like a spec for this too please.

Dunno if this change really requires a spec, but if you want one - let's have one :)

We should have an RFE anyway, obviously.

Please comment on that patch with any concerns.

We also have yet to decide what a suitable deprecation period is
for this change, as far as I'm aware. Please respond to this email
with any suggestions on the deprecation period.

Thanks,
Mario


[0] https://etherpad.openstack.org/p/ironic-pike-ptg-operations L30

Thank YOU!

--ruby

[1] https://github.com/openstack/python-ironicclient/blob/f242c6af3b295051019aeabb4ec7cf82eb085874/ironicclient/osc/plugin.py#L29


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Message: 3
Date: Wed, 08 Mar 2017 07:52:23 -0500
From: Doug Hellmann doug@doughellmann.com
To: openstack-dev openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [release][tripleo][fuel][kolla][ansible]
Ocata Release countdown for R+2 Week, 6-10 March
Message-ID: 1488977242-sup-5435@lrrr.local
Content-Type: text/plain; charset=UTF-8

Excerpts from Vladimir Kuklin's message of 2017-03-08 01:20:21 +0300:
Doug

I have proposed the change for Fuel RC2 [0], but it has W-1 set as I am
waiting for the final test results. If everything goes alright, I will
check the Workflow off and this RC2 can be cut as the release.

I've approved the RC2 tag and prepared
https://review.openstack.org/443116 with the final release tag. Please
+1 if that looks OK. I will approve it tomorrow.

Doug

[0] https://review.openstack.org/#/c/442775/

On Tue, Mar 7, 2017 at 3:39 AM, Jeffrey Zhang zhang.lei.fly@gmail.com
wrote:

Sorry for being late, but the Kolla project needs a new release candidate.
I will push it today.

On Tue, Mar 7, 2017 at 6:27 AM, Doug Hellmann doug@doughellmann.com
wrote:

Excerpts from Doug Hellmann's message of 2017-03-06 11:00:15 -0500:

Excerpts from Doug Hellmann's message of 2017-03-02 18:24:12 -0500:

Release Tasks


Liaisons for cycle-trailing projects should prepare their final
release candidate tags by Monday 6 March. The release team will
prepare a patch showing the final release versions on Wednesday 7
March, and PTLs and liaisons for affected projects should +1. We
will then approve the final releases on Thursday 8 March.

We have 13 cycle-trailing deliverables without final releases for Ocata.
All have at least one release candidate, so if no new release candidates
are proposed today I will prepare a patch using these versions as the
final and we will approve that early Wednesday.

If you know that you need a new release candidate, please speak up now.

If you know that you do not need a new release candidate, please also
let me know that.

Thanks!
Doug

$ list-deliverables --series ocata --missing-final -v
fuel                     fuel              11.0.0.0rc1  other  cycle-trailing
instack-undercloud       tripleo           6.0.0.0rc1   other  cycle-trailing
kolla-ansible            kolla             4.0.0.0rc1   other  cycle-trailing
kolla                    kolla             4.0.0.0rc1   other  cycle-trailing
openstack-ansible        OpenStackAnsible  15.0.0.0rc1  other  cycle-trailing
os-apply-config          tripleo           6.0.0.0rc1   other  cycle-with-milestones
os-cloud-config          tripleo           6.0.0.0rc1   other  cycle-with-milestones
os-collect-config        tripleo           6.0.0.0rc1   other  cycle-with-milestones
os-net-config            tripleo           6.0.0.0rc1   other  cycle-with-milestones
os-refresh-config        tripleo           6.0.0.0rc1   other  cycle-with-milestones
tripleo-heat-templates   tripleo           6.0.0.0rc1   other  cycle-trailing
tripleo-image-elements   tripleo           6.0.0.0rc1   other  cycle-trailing
tripleo-puppet-elements  tripleo           6.0.0.0rc1   other  cycle-trailing

I have lined up patches with the final release tags for all 3 projects.
Please review and +1 or propose a new patch with an updated release
candidate.

Ansible: https://review.openstack.org/442138
Kolla: https://review.openstack.org/442137
TripleO: https://review.openstack.org/442129

Doug



OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscrib
e
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Message: 4
Date: Wed, 8 Mar 2017 08:25:21 -0500
From: Ian Cordasco sigmavirus24@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [tc][appcat][murano][app-catalog] The
future of the App Catalog
Message-ID:

Content-Type: text/plain; charset=UTF-8

-----Original Message-----
From: Christopher Aedo doc@aedo.net
Reply: OpenStack Development Mailing List (not for usage questions)

Date: March 7, 2017 at 22:11:22
To: OpenStack Development Mailing List (not for usage questions)

Subject: Re: [openstack-dev] [tc][appcat][murano][app-catalog] The
future of the App Catalog

On Mon, Mar 6, 2017 at 3:26 AM, Thierry Carrez wrote:

Hello everyone,

The App Catalog was created early 2015 as a marketplace of pre-packaged
applications that you can deploy using Murano. Initially a demo by
Mirantis, it was converted into an open upstream project team, and
deployed as a "beta" as apps.openstack.org.

Since then it grew additional categories (Glance images, Heat & Tosca
templates), but otherwise did not pick up a lot of steam. The website
(still labeled "beta") features 45 glance images, 6 Tosca templates, 13
heat templates and 94 murano packages (~30% of which are just thin
wrappers around Docker containers). Traffic stats show around 100 visits
per week, 75% of which only read the index page.

In parallel, Docker developed a pretty successful containerized
application marketplace (the Docker Hub), with hundreds of thousands of
regularly-updated apps. Keeping the App Catalog around (including its
thinly-wrapped Docker container Murano packages) makes us look like we
are unsuccessfully trying to compete with that ecosystem, while
OpenStack is in fact completely complementary.

Without something like Murano "thinly wrapping" docker apps, how would
you propose current users of OpenStack clouds deploy docker apps? Or
any other app for that matter? It seems a little unfair to talk about
murano apps this way when no reasonable alternative exists for easily
deploying docker apps. When I look back at the recent history of how
we've handled containers (nova-docker, magnum, kubernetes, etc) it
does not seem like we're making it easy for the folks who want to
deploy a container on their cloud...

Please understand I am not pleading to keep the Community App Catalog
alive in perpetuity. This just sounds like an unfair point of
comparison. One of the biggest challenges we've faced with the app
catalog since day one is that there is no such thing as a simple
definition of an "OpenStack Application". OpenStack is an IaaS before
anything else, and to my knowledge there is no universally accepted
application deployment mechanism for OpenStack clouds. Heat doesn't
solve that problem as it's very operator-focused, and while being very
popular and used heavily, it's not used as a way to share generic
templates suitable for deploying apps across different clouds. Murano
is not widely adopted (last time I checked it's not available on any
public clouds, though I hear it is actually used on several
university clouds, and it's also used on a few private clouds I'm
aware of.)

As a place to find things that run on OpenStack clouds, the app
catalog did a reasonable job. If anything, the experiment showed that
there is no community looking for a place to share OpenStack-specific
applications. There are definitely communities for PaaS layers (cloud
foundry, mesosphere, docker, kubernetes), but I don't see any
community for openstack-native applications that can be deployed on
any cloud, nor a commonly accepted way to deploy them.

In the past we have retired projects that were dead upstream. The App
Catalog is not in this case: it has an active maintenance team, which
has been successfully maintaining the framework and accepting
applications. If we end up retiring the App Catalog, it would clearly
not be a reflection on that team's performance, which has been stellar
despite limited resources. It would be because the beta was arguably not
successful in building an active marketplace of applications, and
because its continued existence is not a great fit from a strategy
perspective. Such removal would be a first for our community, but I
think it's now time to consider it.

Before we discuss or decide anything at the TC level, I'd like to
collect everyone's thoughts (and questions) on this. Please feel free to
reply to this thread (or reach out to me privately if you prefer). Thanks!

As the former PTL I am obviously a little bit biased. Even though my
focus has shifted and I've stepped away from the app catalog, I had
been spending a lot of time trying to figure out how to make
applications an easy to run thing on OpenStack. I've also been trying
to find a community of people who are looking for that, and it doesn't
seem like they've materialized; possibly because that community
doesn't exist? Or else we just haven't been able to figure out where
they're hiding ;)

The one consideration that is pretty important here is what this would
mean to the Murano community. Those folks have contributed time
and resources to the app catalog project. They've also standardized
on the app catalog as the distribution mechanism, intending to make
the app catalog UI a native component for Murano. We do need to make
sure that if the app catalog is retired, it doesn't hamper or impact
people who have already deployed Murano and are counting on finding
the apps in the app catalog.

All of this is true. But Murano still doesn't have a stable way to
store artifacts. In fact, it seems like Murano relies on a lot of
unstable OpenStack infrastructure. While lots of people have
contributed time, energy, sweat, and tears to the project there are
still plenty of things that make Murano less than desirable. Perhaps
that's why the project has found so few adopters. I'm sure there are
plenty of people who want to use an OpenStack cloud to deploy
applications. In fact, I know there are companies that try to provide
that kind of support via Heat templates. All that said, I don't think
allowing for competition with Murano is a bad thing.

--
Ian Cordasco


Message: 5
Date: Wed, 8 Mar 2017 13:27:12 +0000
From: gordon chung gord@live.ca
To: "openstack-dev@lists.openstack.org"

Subject: Re: [openstack-dev] [telemetry][requirements] ceilometer
grenade gate failure
Message-ID:

Content-Type: text/plain; charset="Windows-1252"

On 07/03/17 11:16 PM, Tony Breeds wrote:
Sure.

I've approved it but it's blocked behind https://review.openstack.org/#/c/442886/1

awesome! thanks Tony!

cheers,

--
gord


Message: 6
Date: Wed, 8 Mar 2017 09:01:00 -0500
From: Harm Sluiman harm.sluiman@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [acceleration] No team meeting today,
resume next Wed
Message-ID: DC117932-5897-4A7F-B521-522E06D2115F@gmail.com
Content-Type: text/plain; charset=us-ascii

Thanks for the update. Unfortunately I could not attend and can't seem to find a summary or anything about what took place. A pointer would be appreciated please ;-)

Thanks for your time
Harm Sluiman
harm.sluiman@gmail.com

On Mar 8, 2017, at 7:22 AM, Zhipeng Huang zhipengh512@gmail.com wrote:

Hi team,

As agreed in our PTG/VTG session, we will hold the team meeting two weeks from now, to give people enough time to prepare the BPs we discussed.

Therefore there will be no team meeting today, and the next meeting is on next Wed.


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Message: 7
Date: Wed, 8 Mar 2017 09:03:21 -0500
From: "Amrith Kumar" amrith.kumar@gmail.com
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [trove] today weekly meeting
Message-ID: 00bf01d29814$bf3fe810$3dbfb830$@gmail.com
Content-Type: text/plain; charset="us-ascii"

While I try to schedule my life to not conflict with the weekly Trove
meeting, it appears that Wednesday afternoon at 1pm is a particularly
popular time for people to want to meet me.

This week and next week are no exceptions. While I tried to avoid these
conflicts, I've been unable to do so (again).

Nikhil (slicknik) has kindly agreed to run the meeting today, same place,
same time as always.

Thanks Nikhil.

-amrith

-------------- next part --------------
An HTML attachment was scrubbed...
URL:


Message: 8
Date: Wed, 8 Mar 2017 09:03:27 -0500
From: Corey Bryant corey.bryant@canonical.com
To: "OpenStack Development Mailing List (not for usage questions)"

Cc: openstack openstack@lists.openstack.org
Subject: Re: [openstack-dev] [kolla][ubuntu][libvirt] Is libvirt 2.5.0
in ubuntu cloud archive ocata repo bust
Message-ID:

Content-Type: text/plain; charset="utf-8"

On Tue, Mar 7, 2017 at 10:28 PM, Jeffrey Zhang zhang.lei.fly@gmail.com
wrote:

Kolla deploy ubuntu gate is red now. here is the related bug[0].

libvirt failed to access the console.log file when booting instance. After
made some debugging, i got following.

Jeffrey, This is likely fixed in ocata-proposed and should be promoted to
ocata-updates soon after testing completes.
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1667033.

Corey

how console.log works

nova creates an empty console.log owned by nova:nova (this is actually a
workaround for another bug[1]), then libvirt (running as root) changes the
file owner to the qemu process user/group (configured by dynamic_ownership).
Now the qemu process can write logs into this file.

what's wrong now

libvirt 2.5.0 stopped changing the file owner, so qemu/libvirt fails to
write logs into the console.log file.

other tests

  • ubuntu + fallback to libvirt 1.3.x works[2]
  • ubuntu + libvirt 2.5.0 + changing the qemu process user/group to
    nova:nova works, too[3]
  • centos + libvirt 2.0.0 works; never saw such an issue on centos

conclusion

I guess there is something wrong in libvirt 2.5.0 with dynamic_ownership.

[0] https://bugs.launchpad.net/kolla-ansible/+bug/1668654
[1] https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2922,L2952
[2] https://review.openstack.org/442673
[3] https://review.openstack.org/442850
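
For context, the workaround tested in [3] amounts to overriding the qemu
process user/group in libvirt's qemu.conf - a sketch with illustrative
values:

    # /etc/libvirt/qemu.conf (sketch; option names are the stock libvirt
    # ones, values are illustrative)
    user = "nova"
    group = "nova"
    dynamic_ownership = 1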

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:


Message: 9
Date: Wed, 8 Mar 2017 09:05:07 -0500
From: Mario Villaplana mario.villaplana@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [ironic] OpenStack client default ironic
API version
Message-ID:

Content-Type: text/plain; charset=UTF-8

We want to deprecate ironic CLI soon, but I would prefer if that were
discussed on a separate thread if possible, aside from concerns about
versioning in ironic CLI. Feature parity should exist in Pike, then we
can issue a warning in Queens and deprecate the cycle after. More
information is on L56:
https://etherpad.openstack.org/p/ironic-pike-ptg-operations

I'm a bit torn on whether to use the API version coded in the OSC
plugin or not. On one hand, it'd be good to be able to test out new
features as soon as they're available. On the other hand, it's
possible that the client won't know how to parse certain items after a
microversion bump. I think I prefer using the hard-coded version to
avoid breakage, but we'd have to be disciplined about updating the
client when the API version is bumped (if needed). Opinions on this
are welcome. In either case, I think the deprecation warning could
land without specifying that.

I'll certainly make an RFE when I update the patch later this week,
great suggestion.

I can make a spec, but it might be mostly empty except for the client
impact section. Also, this is a < 40 line change. :)

Mario

On Tue, Mar 7, 2017 at 10:59 AM, Loo, Ruby ruby.loo@intel.com wrote:
On 2017-03-06, 3:46 PM, "Mario Villaplana" mario.villaplana@gmail.com wrote:

Hi ironic,

At the PTG, an issue regarding the default version of the ironic API
used in our python-openstackclient plugin was discussed. [0] In short,
the issue is that we default to a very old API version when the user
doesn't otherwise specify it. This limits discoverability of new
features and makes the client more difficult to use for deployments
running the latest version of the code.

We came to the following consensus:

1. For a deprecation period, we should log a warning whenever the user
doesn't specify an API version, informing them of this change.

2. After the deprecation period:

a) OSC baremetal plugin will default to the latest available version

I think OSC and ironic CLI have the same behaviour -- are we only interested in OSC or are we interested in both, except that we also want to at some point soon perhaps, deprecate ironic CLI?

Also, by 'latest available version', the OSC plugin knows (or thinks it knows) what the latest version is [1]. Will you be using that, or 'latest'?

b) Specifying just macroversion will default to latest microversion
within that macroversion (example: --os-baremetal-api-version=1 would
default to 1.31 if 1.31 is the last microversion with 1 macroversion,
even if we have API 2.2 supported)

I have a patch up for review with the deprecation warning:
https://review.openstack.org/442153

Do you have an RFE? I'd like a spec for this too please.

Please comment on that patch with any concerns.

We also have yet to decide what a suitable deprecation period is
for this change, as far as I'm aware. Please respond to this email
with any suggestions on the deprecation period.

Thanks,
Mario


[0] https://etherpad.openstack.org/p/ironic-pike-ptg-operations L30

Thank YOU!

--ruby

[1] https://github.com/openstack/python-ironicclient/blob/f242c6af3b295051019aeabb4ec7cf82eb085874/ironicclient/osc/plugin.py#L29


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Message: 10
Date: Wed, 8 Mar 2017 23:16:54 +0900
From: Hirofumi Ichihara ichihara.hirofumi@lab.ntt.co.jp
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron] [infra] Depends-on tag effect
Message-ID: 00bc73ee-06bc-76e2-ea11-dd0b0a321314@lab.ntt.co.jp
Content-Type: text/plain; charset=iso-2022-jp; format=flowed;
delsp=yes

Hi,

I thought that we could post a neutron patch depending on a neutron-lib
patch that is under review.
However, I saw it doesn't work[1, 2]. In the patches, the neutron patch[1]
has a Depends-On tag on the neutron-lib patch[2], but the pep8 and unit
tests fail because they don't use the neutron-lib patch.

Please correct me if it's my misunderstanding.

[1]: https://review.openstack.org/#/c/424340/
[2]: https://review.openstack.org/#/c/424868/
Thanks,
Hirofumi


Message: 11
Date: Wed, 8 Mar 2017 08:33:16 -0600
From: Matt Riedemann mriedemos@gmail.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Question to clarify versioned
notifications
Message-ID: 3e95e057-b332-6c65-231b-f26001f6d5a8@gmail.com
Content-Type: text/plain; charset=windows-1252; format=flowed

On 3/8/2017 4:19 AM, Balazs Gibizer wrote:

Honestly, if searchlight needs to be adapted to the versioned
notifications then the smallest thing to change is to handle the removed
prefix from the event_type. The biggest difference is the format and the
content of the payload. In the legacy notifications the payload was a
simple JSON dict; in the versioned notifications the payload is a
JSON-serialized ovo. Which means quite a different data structure, e.g.
extra keys, deeper nesting, etc.

Cheers,
gibi
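
To illustrate the difference, a versioned payload is an OVO envelope
rather than a bare dict - a minimal sketch (the nova_object.* keys are the
ovo serialization format; the data values are made up):

    payload = {
        "nova_object.name": "InstanceActionPayload",
        "nova_object.namespace": "nova",
        "nova_object.version": "1.0",
        "nova_object.data": {        # nested, with extra keys vs legacy
            "uuid": "11111111-2222-3333-4444-555555555555",  # made up
            "state": "active",
        },
    }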

Heh, yeah, I agree. Thanks for the confirmation and details. I was just
making sure I had this all straight, since I was jumping around between
specs and docs and code quite a bit yesterday piecing this together.
Plus you don't apparently work 20 hours a day, gibi, so I couldn't ask
you in IRC. :)

--

Thanks,

Matt Riedemann


Message: 12
Date: Wed, 8 Mar 2017 15:40:53 +0100
From: ZZelle zzelle@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [neutron] [infra] Depends-on tag effect
Message-ID:

Content-Type: text/plain; charset="utf-8"

Hi,

iiuc, neutron uses a released version of neutron-lib, not neutron-lib
master ... so the change should depend on a change in the requirements
repo incrementing the neutron-lib version

On Wed, Mar 8, 2017 at 3:16 PM, Hirofumi Ichihara <
ichihara.hirofumi@lab.ntt.co.jp> wrote:

Hi,

I thought that we can post neutron patch depending on neutron-lib patch
under review.
However, I saw it doesn't work[1, 2]. In the patches, neutron patch[1] has
a Depends-On tag on the neutron-lib patch[2], but the pep8 and unit tests
fail because they don't use the neutron-lib patch.

Please correct me if it's my misunderstanding.

[1]: https://review.openstack.org/#/c/424340/
[2]: https://review.openstack.org/#/c/424868/

Thanks,
Hirofumi


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:


Message: 13
Date: Wed, 8 Mar 2017 09:41:05 -0500
From: Jay Pipes jaypipes@gmail.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc][appcat] The future of the App
Catalog
Message-ID: c8147907-6d4b-6bac-f9b3-6b8b07d75494@gmail.com
Content-Type: text/plain; charset=windows-1252; format=flowed

On 03/06/2017 06:26 AM, Thierry Carrez wrote:
Hello everyone,

The App Catalog was created early 2015 as a marketplace of pre-packaged
applications that you can deploy using Murano. Initially a demo by
Mirantis, it was converted into an open upstream project team, and
deployed as a "beta" as apps.openstack.org.

Since then it grew additional categories (Glance images, Heat & Tosca
templates), but otherwise did not pick up a lot of steam. The website
(still labeled "beta") features 45 glance images, 6 Tosca templates, 13
heat templates and 94 murano packages (~30% of which are just thin
wrappers around Docker containers). Traffic stats show around 100 visits
per week, 75% of which only read the index page.

In parallel, Docker developed a pretty successful containerized
application marketplace (the Docker Hub), with hundreds of thousands of
regularly-updated apps. Keeping the App Catalog around (including its
thinly-wrapped Docker container Murano packages) makes us look like we
are unsuccessfully trying to compete with that ecosystem, while
OpenStack is in fact completely complementary.

In the past we have retired projects that were dead upstream. The App
Catalog is not in this case: it has an active maintenance team, which
has been successfully maintaining the framework and accepting
applications. If we end up retiring the App Catalog, it would clearly
not be a reflection on that team's performance, which has been stellar
despite limited resources. It would be because the beta was arguably not
successful in building an active marketplace of applications, and
because its continued existence is not a great fit from a strategy
perspective. Such removal would be a first for our community, but I
think it's now time to consider it.

Before we discuss or decide anything at the TC level, I'd like to
collect everyone's thoughts (and questions) on this. Please feel free to
reply to this thread (or reach out to me privately if you prefer). Thanks!

Mirantis' position is that the App Catalog was a good idea, but we agree
with you that other application repositories like DockerHub and Quay.io
are both more useful and more actively used.

The OpenStack App Catalog does indeed seem to unnecessarily compete with
those application repositories, and we would support its retirement if
that is what the community would like to do. We'll provide resources and
help in winding anything down if needed.

Best,
-jay


Message: 14
Date: Wed, 8 Mar 2017 14:59:41 +0000
From: Yu Wei yu2003w@hotmail.com
To: "openstack-dev@lists.openstack.org"

Subject: [openstack-dev] [nova][placement-api] Is there any document
about openstack-placement-api for installation and configure?
Message-ID:

Content-Type: text/plain; charset="utf-8"

Hi Guys,
I'm new to openstack.
I tried to install openstack-ocata.
As placement-api is required since Ocata, is there any detailed document
about how to install and configure placement-api?

Thanks,
Jared


Message: 15
Date: Wed, 8 Mar 2017 15:59:59 +0100
From: Andreas Jaeger aj@suse.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [neutron] [infra] Depends-on tag effect
Message-ID: c141c37b-53df-0983-5857-9980b6e2b16e@suse.com
Content-Type: text/plain; charset="windows-1252"

On 2017-03-08 15:40, ZZelle wrote:
Hi,

iiuc, neutron uses a released version of neutron-lib not neutron-lib
master ... So the change should depends on a change in requirements repo
incrementing neutron-lib version

This is documented - together with some other caveats - at:

https://docs.openstack.org/infra/manual/developers.html#limitations-and-caveats

Note that a Depends-On against the requirements repo won't work either -
you really need to release it. Or you need to change the test to pull
neutron-lib from source.
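
For illustration, a cross-repo dependency is declared in the commit
message footer like this (the Change-Ids below are hypothetical):

    Bump neutron-lib minimum version

    Depends-On: I0123456789abcdef0123456789abcdef01234567
    Change-Id: Ifedcba9876543210fedcba9876543210fedcba98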

Andreas

On Wed, Mar 8, 2017 at 3:16 PM, Hirofumi Ichihara
<ichihara.hirofumi@lab.ntt.co.jp> wrote:

Hi,

I thought that we can post neutron patch depending on neutron-lib
patch under review.
However, I saw it doesn't work[1, 2]. In the patches, neutron
patch[1] has Depends-on tag with neutron-lib patch[2] but the pep8
and unit test fails because the test doesn't use the neutron-lib patch.

Please correct me if it's my misunderstanding.

[1]: https://review.openstack.org/#/c/424340/
[2]: https://review.openstack.org/#/c/424868/

Thanks,
Hirofumi



__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
OpenStack-dev-request@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


Message: 16
Date: Wed, 8 Mar 2017 10:05:45 -0500
From: James Slagle james.slagle@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [TripleO][Heat] Selectively disabling
deployment resources
Message-ID:

Content-Type: text/plain; charset=UTF-8

On Tue, Mar 7, 2017 at 7:24 PM, Zane Bitter zbitter@redhat.com wrote:
On 07/03/17 14:34, James Slagle wrote:

I've been working on this spec for TripleO:
https://review.openstack.org/#/c/431745/

which allows users to selectively disable Heat deployment resources
for a given server (or servers in the case of a *DeploymentGroup
resource).

I'm not completely clear on what this means. You can selectively disable
resources with conditionals. But I think you mean that you want to
selectively disable changes to resources?

Yes, that's right. The reason I can't use conditionals is that I still
want the SoftwareDeploymentGroup resources to be updated, but I may
want to selectively exclude servers from the group that is passed in
via the servers property. E.g., instead of updating the deployment
metadata for all computes, I may want to exclude a single compute
that is temporarily unreachable, without that failing the whole
stack-update.

I started by taking an approach that would be specific to TripleO.
Basically mapping all the deployment resources to a nested stack
containing the logic to selectively disable servers from the
deployment (using yaql) based on a provided parameter value. Here's
the main patch: https://review.openstack.org/#/c/442681/

After considering that complexity, particularly the yaql expression,
I'm wondering if it would be better to add this support natively to
Heat.

I was looking at the restricted_actions key in the resource_registry
and was thinking this might be a reasonable place to add such support.
It would require some changes to how restricted_actions work.

One change would be a method for specifying that restricted_actions
should not fail the stack operation if an action would have otherwise
been triggered. Currently the behavior is to raise an exception and
mark the stack failed if an action needs to be taken but has been
marked restricted. That would need to be tweaked to allow specifying
that we don't want the stack to fail. One thought would be to change
the allowed values of restricted_actions to:

replace_fail
replace_ignore
update_fail
update_ignore
replace
update

where replace and update were synonyms for replace_fail/update_fail to
maintain backwards compatibility.

Anything that involves the resource definition in the template changing but
Heat not modifying the resource is problematic, because that messes with
Heat's internal bookkeeping.

I don't think this case would violate that principle. The template +
environment files would match what Heat has done. After an update, the
two would be in sync as to which servers the updated Deployment resource
was triggered on.

Another change would be to add logic to the Deployment resources
themselves to consider if any restricted_actions have been set on a
Server resource before triggering an updated deployment for a given
server.

Why not just a property, "no_new_deployments_please: true"?

That would actually work and be pretty straightforward I think. We
could have a map parameter with server names and the property that the
user could use to set the value.

The reason why I was initially not considering this route was because
it doesn't allow the user to disable only some deployments for a given
server. It's all or nothing. However, it's much simpler than a totally
flexible option, and it addresses 2 of the largest use cases of this
feature. I'll look into this route a bit more.

--
-- James Slagle
--


Message: 17
Date: Wed, 8 Mar 2017 16:06:59 +0100
From: Dmitry Tantsur dtantsur@redhat.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: [openstack-dev] [ironic] Pike PTG report
Message-ID: 23821f81-9509-3600-6b9c-693984aa0132@redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed

Hi all!

I've finished my Pike PTG report. It is spread over four blog posts:

http://dtantsur.github.io/posts/ironic-ptg-atlanta-2017-1.html
http://dtantsur.github.io/posts/ironic-ptg-atlanta-2017-2.html
http://dtantsur.github.io/posts/ironic-ptg-atlanta-2017-3.html
http://dtantsur.github.io/posts/ironic-ptg-atlanta-2017-4.html

It was a lot of typing, please pardon mistakes. The whole text (in RST format)
is copy-pasted at the end of this message for archiving purposes.

Please feel free to respond here or in the blog comments.

Cheers,
Dmitry

Ongoing work and status updates
===============================


Etherpad: https://etherpad.openstack.org/p/ironic-pike-ptg-ongoing-work.

We spent the first half of Wednesday discussing this. There was a lot of
incomplete work left from Ocata, and some major ongoing work that we did not
even plan to finish in Ocata.

Boot-from-volume
~~~~~~~~~~~~~~~~

Got some progress, most of the Ironic patches are up. Desperately needs review
and testing, though. The Nova part is also lagging behind, and should be
brought to the Nova team's attention.

Actions
mgoddard and dtantsur volunteered to help with testing, while
mjturek, hsiina and crushil volunteered to do some coding.
Goals for Pike
finish the first (iSCSI using iPXE) case and the Nova part.

Networking
~~~~~~~~~~

A lot of progress here during Ocata, completed bonding and attach/detach API.

VLAN-aware instances should work. However, it requires an expensive ToR switch,
supporting VLAN/VLAN and VLAN/VXLAN rewriting, and, of course, ML2 plugin
support. Also, reusing an existing segmentation ID requires more work: we have
no current way to put the right ID in the configdrive.

Actions
vsaienko, armando and kevinbenton are looking into the Neutron
part of the configdrive problem.

Routed networks support requires Ironic to be aware of which physical network(s)
each node is connected to.

Goals for Pike
* model physical networks on Ironic ports,
* update VIF attach logic to no longer attach things to wrong physnets.

We discussed introducing notifications from Neutron to Ironic about events
of interest for us. We are going to use the same model as between Neutron and
Nova: create a Neutron plugin that filters out interesting events and posts
to a new Ironic API endpoint.

Goals for Pike
have this notification system in place.

Finally, we agreed that we need to work on a reference architecture document,
describing the best practices of deploying Ironic, especially around
multi-tenant networking setup.

Actions
jroll to kickstart this document, JayF and mariojv to help.

Rolling upgrades
~~~~~~~~~~~~~~~~

Missed Ocata by a small margin. The code is up and needs reviewing. The CI
is waiting for the multinode job to start working (should be close as well).

Goals for Pike
rolling upgrade Ocata -> Pike.

Driver composition reform
~~~~~~~~~~~~~~~~~~~~~~~~~

Most of the code landed in Ocata already. Some client changes landed in Pike,
some are still on review. As we released Ocata with the driver composition
changes being experimental, we are not ready to deprecate old-style drivers in
Pike. Documentation is also still lacking.

Goals for Pike
* make new-style dynamic drivers the recommended way of writing and using
drivers,
* fill in missing documentation,
* recommend that vendors have hardware types for their hardware, as well
as 3rd-party CI support for it.
Important decisions
* no new classic drivers are accepted in-tree (please check when accepting
specifications),
* no new interface additions for classic drivers (volume_interface is
the last one accepted),
* remove the SSH drivers by Pike final (probably around M3).

Ironic Inspector HA
~~~~~~~~~~~~~~~~~~~

Preliminary work (switch to a real state machine) done in Ocata. Splitting the
service into API and conductor/engine parts correlates with the WSGI
cross-project goal.

We also had a deeper discussion about ironic-inspector architecture earlier
that week, where we were `looking <https://etherpad.openstack.org/p/ironic-pike-ptg-inspector-arch>`_ into
potential future work to make ironic-inspector both HA and multi-tenancy
friendly. It was suggested to split the discovery process (a simple process
to detect MACs and/or power credentials) from the inspection process (the
full process when a MAC is known).

Goals for Pike
* switch locking to tooz (with Redis probably being the default
backend for now; see the sketch below),
* split away API process with WSGI support,
* leader election using tooz for periodic tasks,
* stop messing with iptables and start directly managing dnsmasq
instead (similarly to how Neutron does it),
* try using dnsmasq in active/active configuration with
non-intersecting IP addresses pools from the same subnet.
Actions
Also, sambetts will write a spec on a potential workflow split.
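
As a rough illustration of the tooz-based locking goal above (a minimal
sketch, assuming a Redis backend at a hypothetical URL; this is not the
actual ironic-inspector code)::

    # Hypothetical sketch: one lock per node, taken before processing
    # introspection data, so two workers never touch the same node.
    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'redis://localhost:6379', b'inspector-worker-1')
    coordinator.start()

    with coordinator.get_lock(b'node-introspection-<node-uuid>'):
        pass  # process introspection data for the node here

    coordinator.stop()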

Ironic UI
~~~~~~~~~

The project got some important features implemented, and an RDO package
emerged during Ocata. Still, it desperately needs volunteers for coding and
testing. A `spreadsheet <https://docs.google.com/spreadsheets/d/1petifqVxOT70H2Krz7igV2m9YqgXaAiCHR8CXgoi9a0/edit?usp=sharing>`_
captures the current (as of the beginning of Pike) status of features.

Actions
dtantsur, davidlenwell, bradjones and crushil agreed to
dedicate some time to the UI.

Rescue
~~~~~~

Most of the patches are up, the feature is tested with the CoreOS-based
ramdisk for now. Still, the ramdisk side poses a problem: while using DHCP is
easy, static network configuration seems not. It's especially problematic in
CoreOS. Might be much easier in the DIB-based ramdisk, but we don't support it
officially in the Ironic community.

RedFish driver
~~~~~~~~~~~~~~

We want to get a driver supporting RedFish soon. There was some criticism
raised around the currently proposed python-redfish library. As an alternative,
a `new library <https://github.com/openstack/sushy>`_ was written. It is
lightweight, covered by unit tests, and only contains what Ironic needs.
We agreed to start our driver implementation with it, and switch to the
python-redfish library when/if it is ready to be consumed by us.

We postponed discussing advanced features like nodes composition till after
we get the basic driver in.

Small status updates
~~~~~~~~~~~~~~~~~~~~

  • Of the API evolution initiative, only E-Tag work got some progress. The spec
    needs reviewing now.

  • Node tags work needs review and is close to landing. We decided to discuss
    port tags as part of a separate RFE, if anybody is interested.

  • IPA API versioning also needs reviews, there are several moderately
    contentious points about it. It was suggested that we only support one
    direction of IPA/ironic upgrades to simplify testing. We'll probably only
    support old IPA with new ironic, which is already tested by our grenade job.

CI and testing
==============


Etherpad: https://etherpad.openstack.org/p/ironic-pike-ptg-ci-testing

Missing CI coverage
~~~~~~~~~~~~~~~~~~~

UEFI
Cirros finally released a stable version with UEFI support built in.
A non-voting job is running with partition images, and should be made voting
soon. A test with whole disk images will be introduced as part of the
`standalone tests <https://review.openstack.org/#/c/423556/>`_.
Local bootloader
Requires small enough instance images with Grub2 present (Cirros does not
have it). We agreed to create a new repository with scripts to build
suitable images. Potentially can be shared with other teams (e.g. Neutron).

 Actions: **lucasagomes** and/or **vsaienko** to look into it.

Adopt state
Tests have been up for some time, but have ordering issues with nova-based
tests. It was suggested that TheJulia move them to the standalone tests_.
Root device hints
Not covered by any CI. This will require modifying how we create virtual
machines. The first step is to get size-based hints working. Check two cases:
with size strictly equal to and greater than requested.

 Actions: **dtantsur** to look into it.

Capabilities-based scheduling
This may actually go to Nova gate, not ours. Still, it relies on some code
in our driver, so we'd better cover it to ensure that the placement API
changes don't break it.

 Actions: **vsaienko** to look into it.

Port groups
The same image problem as with local boot - the same action item to create
a repository with build scripts to build our images.
VLAN-aware instances
The same image problem + requires `reworking our network simulation code <https://review.openstack.org/#/c/392959/>`_.
Conductor take over and hash ring
Requires a separate multi-node job.

 Action: **vsaienko** to investigate.

DIB-based IPA image
^^^^^^^^^^^^^^^^^^^

Currently the ironic-agent element to build such an image is in the DIB
repository outside of our control. If we want to properly support it, we need
to gate on its changes, and to gate IPA changes on its job. Some time ago we
had a tentative agreement to move the element to our tree.

It was blocked by the fact that DIB rarely or never removes elements, and does
not have a way to properly de-duplicate elements with the same name.

An obvious solution we are going to propose is to take this element into the IPA
tree under a different name (ironic-python-agent?). The old element will
get deprecated and only critical fixes will be accepted for it.

Action
dtantsur to (re)start this discussion with the TripleO and DIB teams.

API microversions testing
^^^^^^^^^^^^^^^^^^^^^^^^^

We are not sure we have tests covering all microversions. We seem to have API
tests using the fake driver that cover at least some of them. We should start
paying more attention to this part of our testing.

Actions
dtantsur to check if these tests are up-to-date and split them to a
separate CI job.
pas-ha to write API tests for internal API (i.e. lookup/heartbeat).

Global OpenStack goals
~~~~~~~~~~~~~~~~~~~~~~

Splitting away tempest plugins
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It did not end up being a goal for Pike, and there are still some concerns in the
community. Still, as we already apply ugly hacks in our jobs to use the
tempest plugin from master, we agreed to proceed with the split.

To simplify both maintenance and consuming our tests, we agreed to merge
ironic and ironic-inspector plugins. The introspection tests will or will
not run based on ironic-inspector presence.

We propose having a merged core team (i.e. ironic-inspector-core which
already includes ironic-core) for this repository. We trust people who
only have core rights on ironic-inspector to not approve things they're
not authorized to approve.

Python 3 support
^^^^^^^^^^^^^^^^

We've been running Python 3 unit tests for quite some time. Additionally,
ironic-inspector runs a non-voting Python 3 functional test. Ironic has an
experimental job which fails, apparently, because of swift. We can start with
switching this job to the pxe_ipmitool driver (not requiring swift).
Inspector does not have a Python 3 integration test job proposed yet.

Actions
JayF and hurricanerix will drive this work in both ironic and
ironic-inspector.

 **lucasagomes** to check pyghmi and virtualbmc compatibility.

 **krtaylor** and/or **mjturek** to check MoltenIron.

We agreed that Bifrost is out of scope for this task. Its Python 3
compatibility mostly depends on that of Ansible anyway. Similarly, for the UI
we need horizon to be fully Python 3 compatible first.

Important decisions
We recommend that vendors make their libraries compatible with Python 3.
It may become a strict requirement in one of the coming releases.

API behind WSGI container
^^^^^^^^^^^^^^^^^^^^^^^^^

This seems quite straightforward. The work has started to switch ironic CI to
WSGI already. For ironic-inspector it's going to be done as part of the HA
work.

Operations
==========


Etherpad: https://etherpad.openstack.org/p/ironic-pike-ptg-operations

OSC plugin and API versioning
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Currently we default the OSC plugin (and old client too) to a really old API
version. We agreed that this situation is not desired, and that we should take
the same approach as Nova and default to the latest version. We are planning
to announce the change this cycle, both via the ML and via a warning issued
when no version is specified.

Next, in the Queens cycle, we will have to make the change, bearing in mind
that OSC does not support values like latest for API versions. So the plan
is as follows:

  • make the default --os-baremetal-api-version=1 in

https://github.com/openstack/python-ironicclient/blob/f242c6af3b295051019aeabb4ec7cf82eb085874/ironicclient/osc/plugin.py#L67

  • when instantiating the ironic client in the OSC plugin, replace '1' with
    'latest':

https://github.com/openstack/python-ironicclient/blob/f242c6af3b295051019aeabb4ec7cf82eb085874/ironicclient/osc/plugin.py#L41

  • when handling --os-baremetal-api-version=latest, replace it with 1,
    so that it's later replaced with latest again:

https://github.com/openstack/python-ironicclient/blob/f242c6af3b295051019aeabb4ec7cf82eb085874/ironicclient/osc/plugin.py#L85

As a side effect, that will make 1 equivalent to latest as well.
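
A rough sketch of that plan (hypothetical helper functions, not the actual
plugin code)::

    # Hypothetical sketch of the planned version handling in the OSC plugin.
    LATEST = 'latest'
    DEFAULT = '1'

    def normalize_cli_version(requested):
        # OSC itself cannot accept 'latest', so map it to '1' first...
        return DEFAULT if requested == LATEST else requested

    def client_version(requested):
        # ...and swap '1' back to 'latest' when instantiating the client.
        return LATEST if requested == DEFAULT else requested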

It was also suggested to have a new command displaying both server-supported
and client-supported API versions.

Deprecating the standalone ironic CLI in favor of the OSC plugin
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We do not want to maintain two CLIs in the long run. We agreed to start
thinking about deprecating the old ironic command. Main concerns:

  • lack of feature parity,

  • ugly way to work without authentication, for example::

    openstack baremetal --os-url http://ironic --os-token fake

Plan for Pike
* Ensure complete feature parity between the two clients.
* Only use openstack baremetal commands in the documentation.

The actual deprecation is planned for Queens.

RAID configuration enhancements
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A few suggestions were made:

  • Support an ordered list of logical disk definitions. The first possible
    configuration is applied to the node (see the sketch after this list).
    For example:

    • Top of list - RAID 10, but we don't have enough drives
    • Fallback to next preference in list - RAID 1 on a pair of available drives
    • Finally, JBOD or RAID 0 on the only available drive
  • Specify the number of instances of a logical disk definition to create.

  • Specify backing physical disks by stating preference for the smallest, e.g.
    smallest like-sized pair or two smallest disks.

  • Specify location of physical disks, e.g. first two or last two as perceived
    by the hardware, front/rear/internal location.
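
A minimal sketch of the ordered-preference idea from the first bullet (the
logical disk format follows ironic's RAID configuration JSON; the selection
helper and drive counts are assumptions)::

    # Hypothetical sketch: walk the preference list and pick the first
    # logical disk definition that fits the drives the node actually has.
    MIN_DRIVES = {'0': 1, '1': 2, '10': 4}

    preferences = [
        {'size_gb': 100, 'raid_level': '10'},  # first choice
        {'size_gb': 100, 'raid_level': '1'},   # fallback
        {'size_gb': 100, 'raid_level': '0'},   # last resort
    ]

    def pick_raid_config(preferences, available_drives):
        for disk in preferences:
            if available_drives >= MIN_DRIVES[disk['raid_level']]:
                return {'logical_disks': [disk]}
        raise ValueError('no preference fits %d drives' % available_drives)

    print(pick_raid_config(preferences, 2))  # falls back to RAID 1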

Actions
rpioso will write RFE(s)

Smaller topics
~~~~~~~~~~~~~~

Non-abortable clean steps stuck in the clean wait state
We discussed a potential force-abort functionality, but the only thing
we agreed on is to check that all current clean steps are marked as
abortable if they really are.

Status of long-running cleaning operations
There is a request to be able to get status of e.g. disk shredding (which
may take hours). We found out that the current IPA API design essentially
prevents running several commands in parallel. We agreed that we need IPA
API versioning first, and that this work is not a huge priority right now.

OSC command for listing driver and RAID properties
We cannot agree on the exact form of these two commands. The primary
candidates discussed at the PTG were::

     openstack baremetal driver property list <DRIVER>
     openstack baremetal driver property show <DRIVER>

 We agreed to move this to the spec: https://review.openstack.org/439907.

Abandoning an active node
I.e. the opposite of adopt. It's unclear how such an operation would play with
nova; maybe it's only useful for the standalone case.

Future Work
===========


Etherpad: https://etherpad.openstack.org/p/ironic-pike-ptg-future-work.

Neutron event processing
~~~~~~~~~~~~~~~~~~~~~~~~

RFE: https://bugs.launchpad.net/ironic/+bug/1304673, spec:
https://review.openstack.org/343684.

We need to wait for certain events from neutron (like port bindings).
Currently we just wait some time, and hope it went well. We agreed to follow
the same pattern that nova does for neutron to nova notifications.
The neutron part is
https://github.com/openstack/neutron/blob/master/neutron/notifiers/nova.py.
We agreed with the Neutron team that the notifier and the other ironic-specific
stuff for neutron would live in a separate repo under Baremetal governance.
Draft code is https://review.openstack.org/#/c/357780.
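
As a very rough sketch of that pattern (the event filter and the
'/v1/events' endpoint are assumptions for illustration; the real interface
is what the spec above defines)::

    # Hypothetical sketch of a Neutron-side notifier, mirroring
    # neutron/notifiers/nova.py: filter interesting events and POST
    # them to a new Ironic API endpoint.
    import requests

    INTERESTING = {'port.create.end', 'port.update.end', 'port.delete.end'}

    def notify_ironic(event_type, payload, ironic_url='http://ironic:6385'):
        if event_type not in INTERESTING:
            return  # drop events ironic does not care about
        requests.post('%s/v1/events' % ironic_url,
                      json={'events': [{'event': event_type,
                                        'payload': payload}]})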

Splitting node.properties[capabilities] into a separate table
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is something we've planned on for a long time. Currently, it's not possible
to update capabilities atomically, and the format is quite hard to work with:
k1:v1,k2:v2. We discussed moving away from using the word capability. It's
already overused in the OpenStack world, and nova is switching to the notion
of "traits". It also looks like traits will be qualitative-only, while we have
proposals for quantitative capabilities (like gpu_count).
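
For reference, a short sketch of why the flat string format is painful
(plain Python, nothing ironic-specific; the read-modify-write step is where
atomicity is lost)::

    # properties['capabilities'] is a flat "k1:v1,k2:v2" string, so every
    # update is a read-modify-write of the whole field.
    def parse_capabilities(caps):
        return dict(item.split(':', 1) for item in caps.split(',') if item)

    def serialize_capabilities(caps):
        return ','.join('%s:%s' % kv for kv in sorted(caps.items()))

    caps = parse_capabilities('boot_mode:uefi,raid_level:1')
    caps['boot_mode'] = 'bios'  # two concurrent writers can race here
    print(serialize_capabilities(caps))  # boot_mode:bios,raid_level:1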

It was proposed to model a typical CRUD API for traits in Ironic::

 GET /v1/nodes/<NODE>/traits
 POST  /v1/nodes/<NODE>/traits
 GET /v1/nodes/<NODE>/traits/<trait>
 DELETE /v1/nodes/<NODE>/traits/<trait>

In API versions before this addition, we would make
properties/capabilities a transparent proxy to new tables.

It was noted that the database change can be done first, with API change
following it.

Actions
rloo to propose two separate RFEs for database and API parts.

Avoid changing behavior based on properties[capabilities]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Currently our capabilities have a dual role. They serve both for scheduling
(to inform nova of what nodes can do) and for making decisions based on flavor
(e.g. request UEFI boot). It is complicated by the fact that sometimes the
same capability (e.g. UEFI) can be of both types depending on the driver.
This is quite confusing for users, and may be incompatible with future changes
both in ironic and nova.

For things like boot option and (potentially) BIOS setting, we need to be able
to get requests from flavors and/or nova boot arguments without abusing
capabilities for it. Maybe similar to how NUMA support does it:
https://docs.openstack.org/admin-guide/compute-cpu-topologies.html.

For example::

 flavor.extra_specs[traits:has_ssd]=True

(tells the scheduler to find a node with SSD disk; does not change
behavior/config of node)

::

 flavor.extra_specs[configuration:use_uefi]=True

(configures the node to boot UEFI; has no impact on scheduling)

::

 flavor.extra_specs[traits:has_uefi]=True
 flavor.extra_specs[configuration:use_uefi]=True

(tells the scheduler to find a node supporting UEFI; if this support is
dynamic, configures the node to enable UEFI boot).

Actions
jroll to start conversation with nova folks about how/if to have a
replacement for this elsewhere.

 Stop accepting driver features relying on ``properties[capabilities]`` (as
 opposed to ``instance_info[capabilities]``).

Potential actions
* Rename instance_info[capabilities] to
instance_info[configuration] for clarity.

Deploy-time RAID
~~~~~~~~~~~~~~~~

This was discussed at the last design summit. Since then we've got a
`nova spec <https://review.openstack.org/408151>`_, which, however, hasn't got
many reviews so far. The spec continues using block_device_mapping_v2; other
options apparently were not considered.

We discussed how to inform Nova whether or not RAID can be built for
a particular node. Ideally, we need to tell the scheduler about many things:
RAID support, disk number, and disk sizes. We decided that it's overkill, at
least for the beginning. We'll only rely on a "supports RAID" trait for now.

It's still unclear what to do about the local_gb property, but with the
planned Nova changes it may not be required any more.

Advanced partitioning
~~~~~~~~~~~~~~~~~~~~~

There is a desire for flexible partitioning in ironic, both in the case of
partition and whole disk images (in the latter case - partitioning other disks).
Generally, there was no consensus at the PTG. Some people were very much in
favor of this feature, some were quite against it. It's unclear how to pass
partitioning information from Nova. There is a concern that such a feature will
get us too much into OS-specific details. We agreed that someone interested
will collect the requirements, create a more detailed proposal, and we'll
discuss it at the next PTG.

Splitting nodes into separate pools
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This feature is about dedicating some nodes to a tenant, essentially adding a
tenant_id field to nodes. This can be helpful e.g. for a hardware provider to
reserve hardware for a tenant, so that it's always available.

This seems relatively easy to implement in Ironic. We need a new field on
nodes, then only show non-admin users their hardware. A bit trickier to make
it work with Nova. We agreed to investigate passing a token from Nova to
Ironic, as opposed to always using a service user admin token.

Actions
vdrok to work out the details and propose a spec.

Requirements for routed networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We discussed requirements for achieving a routed architecture like
spine-and-leaf. It seems that most of the requirements are already in our
plans. The outstanding items are:

  • Multiple subnets support for ironic-inspector. Can be solved at the
    dnsmasq.conf level; an appropriate change was merged into
    puppet-ironic.

  • Per-node provision and cleaning networks. There is an RFE, somebody just
    has to do the work.

This does not seem to be a Pike goal for us, but many of the dependencies
are planned for Pike.

Configuring BIOS setting for nodes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Preparing a node to be configured to serve a certain role by tweaking its
settings. Currently, it is implemented by the Drac driver in a vendor pass-thru.

We agreed that such a feature would fit cleaning better, rather than
pre-deployment. Thus, it does not depend on deploy steps. It was suggested to
extend the management interface to support passing it an arbitrary JSON with
configuration. Then a clean step would pick it up (similar to RAID).

Actions
rpioso to write a spec for this feature.

Deploy steps
~~~~~~~~~~~~

We discussed the `deploy steps proposal <https://review.openstack.org/412523>`_
in depth. We agreed on partially splitting the deployment procedure into
pluggable bits. We will leave the very core of the deployment - flashing the
image onto a target disk - hardcoded, at least for now. The drivers will be
able to define steps to run before and after this core deployment. Pre- and
post-deployment steps will have different priority ranges, something like::

 0 < pre-max/deploy-min < deploy-max/post-min < infinity

We plan on making partitioning a pre-deploy step, and installing a bootloader
a post-deploy step. We will not allow IPA hardware managers to define deploy
steps, at least for now.
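
A minimal sketch of how those ranges could look in practice (the decorator
and the band boundary are assumptions based on the ordering above, not an
agreed interface)::

    # Hypothetical sketch: pre- and post-deploy steps live in disjoint
    # priority bands around the hardcoded core deployment.
    CORE_DEPLOY_PRIORITY = 100  # assumed band boundary, illustrative only

    def deploy_step(priority):
        def wrapper(func):
            func._deploy_step_priority = priority
            return func
        return wrapper

    @deploy_step(priority=50)   # < 100: runs before the core image flash
    def partition_disks(node):
        pass

    @deploy_step(priority=150)  # > 100: runs after the core image flash
    def install_bootloader(node):
        pass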

Actions
yolanda is planning to work on this feature, rloo and TheJulia
to help.

Authenticating IPA
~~~~~~~~~~~~~~~~~~

IPA HTTP endpoints, and the endpoints Ironic provides for ramdisk callbacks
are completely insecure right now. We hesitated to add any authentication to
them, as any secrets published for the ramdisk to use (be it part of kernel
command line or image itself) are readily available to anyone on the network.

We agreed on several things to look into:

  • A random CSRF-like token to use for each node (see the sketch after this
    list). This will somewhat limit the attack surface by requiring an attacker
    to intercept a token for the specific node, rather than just accessing the
    endpoints.

  • Document splitting out public and private Ironic API as part of our future
    reference architecture guide.

  • Make sure we support TLS between Ironic and IPA, which is particularly
    helpful when virtual media is used (and secrets are not leaked).
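
A tiny sketch of the random per-node token idea from the first bullet (all
names are made up; this is just the shape of the check)::

    # Hypothetical sketch: ironic issues a random token per node, and the
    # ramdisk must echo it back on every callback.
    import hmac
    import secrets

    _tokens = {}

    def issue_token(node_uuid):
        _tokens[node_uuid] = secrets.token_urlsafe(32)
        return _tokens[node_uuid]

    def validate_callback(node_uuid, presented_token):
        expected = _tokens.get(node_uuid, '')
        # constant-time compare avoids leaking the token byte by byte
        return hmac.compare_digest(expected, presented_token)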

Actions
jroll and joanna to look into the random token idea.
jroll to write an RFE for TLS between IPA and Ironic.

Smaller things
~~~~~~~~~~~~~~

Using ansible-networking as an ML2 driver for the ironic-neutron integration work
It was suggested to make it one of the backends for
networking-generic-switch, in addition to netmiko. Potential
concurrency issues when using SSH were raised, and still require a solution.

Extending and standardizing the list of capabilities the drivers may discover
It was proposed to use `os-traits <https://github.com/jaypipes/os-traits>`_
for standardizing qualitative capabilities. jroll will look into
quantitative capabilities.

Pluggable interface for long-running processes
This was proposed as an optional way to mitigate certain problems with
local long-running services, like console. E.g. if a conductor crashes,
its console services keep running. It was noted that this is a bug to be
fixed (TheJulia volunteered to triage it).
The proposed solution involved optionally running processes on a remote
cluster, e.g. k8s. Concerns were voiced at the PTG around complicating the
support matrix and adding more decisions for operators to make.
There was no apparent consensus on implementing this feature due to that.

Setting specific boot device for PXE booting
It was found to be already solved by setting pxe_enabled on ports.
We just need to update ironic-inspector to set this flag.

Priorities and planning
=======================


The suggested priorities list is now finalized in
https://review.openstack.org/439710.

We also agreed on the following priorities for ironic-inspector subteam:

  • Inspector HA (milan)
  • Community goal - python 3.5 (JayF, hurricanerix)
  • Community goal - devstack+apache+wsgi (aarefiev, ovoshchana)
  • Inspector needs to update pxe_enabled flag on ports (dtantsur)

Message: 18
Date: Wed, 8 Mar 2017 15:07:06 +0000 (GMT)
From: Chris Dent cdent+os@anticdent.org
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [nova][placement-api] Is there any
document about openstack-placement-api for installation and configure?
Message-ID: alpine.OSX.2.20.1703081500570.59117@shine.local
Content-Type: text/plain; charset="utf-8"; Format="flowed"

On Wed, 8 Mar 2017, Yu Wei wrote:

I'm new to openstack.
I tried to install openstack-ocata.
As placement-api is required since Ocata, is there any detailed document
about how to install and configure placement-api?

There are two different things which might be useful to you. Some
nova "in-tree" docs about placement:

 https://docs.openstack.org/developer/nova/placement.html
 https://docs.openstack.org/developer/nova/placement_dev.html

and some in progress documents about installing placement:

 https://review.openstack.org/#/c/438328/

The latter has some errors that are in the process of being fixed,
so make sure you read the associated comments.

--
Chris Dent ¯\_(ツ)_/¯ https://anticdent.org/
freenode: cdent tw: @anticdent


Message: 19
Date: Wed, 8 Mar 2017 09:12:59 -0600
From: Monty Taylor mordred@inaugust.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: [openstack-dev]
[cinder][glance][horizon][keystone][nova][qa][swift] Feedback needed:
Removal of legacy per-project vanity domain redirects
Message-ID: 9b5cf68a-39bf-5d33-2f4c-77c5f5ff7f78@inaugust.com
Content-Type: text/plain; charset=utf-8

Hey all,

We have a set of old vanity redirect URLs from back when we made a URL
for each project:

cinder.openstack.org
glance.openstack.org
horizon.openstack.org
keystone.openstack.org
nova.openstack.org
qa.openstack.org
swift.openstack.org

They are being served from an old server we'd like to retire. Obviously,
moving a set of http redirects is trivial, but these domains have been
deprecated for about 4 years now, so we figured we'd clean house if we can.

We know that the swift team has previously expressed that there are
links out in the wild pointing to swift.o.o/content that still work and
that they don't want to break anyone, which is fine. (although if the
swift team has changed their minds, that's also welcome)

for the rest of you, can we kill these rather than transfer them?

Thanks!
Monty


Message: 20
Date: Wed, 8 Mar 2017 15:29:05 +0000
From: Yu Wei yu2003w@hotmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [nova][placement-api] Is there any
document about openstack-placement-api for installation and configure?
Message-ID:

Content-Type: text/plain; charset="utf-8"

@Chris, Thanks for replying.

When I tried to configure placement-api, I met the following problem:

AH01630: client denied by server configuration: /usr/bin/nova-placement-api

I will read the links you pointed out.

Thanks again,

Jared

On 2017年03月08日 23:07, Chris Dent wrote:
On Wed, 8 Mar 2017, Yu Wei wrote:

I'm new to openstack.
I tried to install openstack-ocata.
As placement-api is required since Ocata, is there any detailed document
about how to install and configure placement-api?

There are two different things which might be useful to you. Some
nova "in-tree" docs about placement:

https://docs.openstack.org/developer/nova/placement.html
https://docs.openstack.org/developer/nova/placement_dev.html

and some in progress documents about installing placement:

https://review.openstack.org/#/c/438328/

The latter has some errors that are in the process of being fixed,
so make sure you read the associated comments.


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:


Message: 21
Date: Wed, 8 Mar 2017 15:35:34 +0000 (GMT)
From: Chris Dent cdent+os@anticdent.org
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [nova][placement-api] Is there any
document about openstack-placement-api for installation and configure?
Message-ID: alpine.OSX.2.20.1703081532490.59117@shine.local
Content-Type: text/plain; charset="utf-8"; Format="flowed"

On Wed, 8 Mar 2017, Yu Wei wrote:

When I tried to configure placement-api, I met the following problem:

AH01630: client denied by server configuration: /usr/bin/nova-placement-api

That can be fixed by doing (somewhere in your apache config):

     Require all granted

but rather than doing that you may wish to move nova-placement-api
to a less global directory and grant access to that directory.
Providing wide access to /usr/bin is not a great idea.

--
Chris Dent ¯\_(ツ)_/¯ https://anticdent.org/
freenode: cdent tw: @anticdent


Message: 22
Date: Thu, 9 Mar 2017 00:39:03 +0900
From: Hirofumi Ichihara ichihara.hirofumi@lab.ntt.co.jp
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [neutron] [infra] Depends-on tag effect
Message-ID: 6664232b-37bc-9a2b-f06e-812318f62b67@lab.ntt.co.jp
Content-Type: text/plain; charset=windows-1252; format=flowed

On 2017/03/08 23:59, Andreas Jaeger wrote:
On 2017-03-08 15:40, ZZelle wrote:

Hi,

iiuc, neutron uses a released version of neutron-lib, not neutron-lib
master ... so the change should depend on a change in the requirements repo
incrementing the neutron-lib version
This is also documented - together with some other caveats - at:

https://docs.openstack.org/infra/manual/developers.html#limitations-and-caveats
Thank you for the pointer. I understand.

Hirofumi

Note that a Depends-On on a requirements change won't work either - you
really need to release it. Or you need to change the test to pull
neutron-lib from source.

Andreas

On Wed, Mar 8, 2017 at 3:16 PM, Hirofumi Ichihara
<ichihara.hirofumi@lab.ntt.co.jp
ichihara.hirofumi@lab.ntt.co.jp> wrote:

 Hi,

 I thought that we could post a neutron patch depending on a neutron-lib
 patch under review.
 However, I saw it doesn't work[1, 2]. In the patches, the neutron
 patch[1] has a Depends-on tag with the neutron-lib patch[2], but the pep8
 and unit test jobs fail because the tests don't use the neutron-lib patch.

 Please correct me if it's my misunderstanding.

 [1]: https://review.openstack.org/#/c/424340/
 <https://review.openstack.org/#/c/424340/>
 [2]: https://review.openstack.org/#/c/424868/
 <https://review.openstack.org/#/c/424868/>

 Thanks,
 Hirofumi



 __________________________________________________________________________
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Message: 23
Date: Wed, 8 Mar 2017 09:50:34 -0600
From: Lance Bragstad lbragstad@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev]
[cinder][glance][horizon][keystone][nova][qa][swift] Feedback needed:
Removal of legacy per-project vanity domain redirects
Message-ID:

Content-Type: text/plain; charset="utf-8"

From a keystone perspective, I'm fine with killing keystone.openstack.org,
unless another team member with more context/history has a reason to keep
it around.

On Wed, Mar 8, 2017 at 9:12 AM, Monty Taylor mordred@inaugust.com wrote:

Hey all,

We have a set of old vanity redirect URLs from back when we made a URL
for each project:

cinder.openstack.org
glance.openstack.org
horizon.openstack.org
keystone.openstack.org
nova.openstack.org
qa.openstack.org
swift.openstack.org

They are being served from an old server we'd like to retire. Obviously,
moving a set of http redirects is trivial, but these domains have been
deprecated for about 4 years now, so we figured we'd clean house if we can.

We know that the swift team has previously expressed that there are
links out in the wild pointing to swift.o.o/content that still work and
that they don't want to break anyone, which is fine. (although if the
swift team has changed their minds, that's also welcome)

for the rest of you, can we kill these rather than transfer them?

Thanks!
Monty


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:


Message: 24
Date: Wed, 8 Mar 2017 15:55:19 +0000
From: Yu Wei yu2003w@hotmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [nova][placement-api] Is there any
document about openstack-placement-api for installation and configure?
Message-ID:

Content-Type: text/plain; charset="utf-8"

It seems that nova-placement-api acts as a CGI module.

Is it?

On 2017年03月08日 23:35, Chris Dent wrote:
On Wed, 8 Mar 2017, Yu Wei wrote:

When I tried to configure placement-api, I met the following problem:

AH01630: client denied by server configuration: /usr/bin/nova-placement-api

That can be fixed by doing (somewhere in your apache config):

    Require all granted

but rather than doing that you may wish to move nova-placement-api
to a less global directory and grant access to that directory.
Providing wide access to /usr/bin is not a great idea.


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:


Message: 25
Date: Wed, 8 Mar 2017 16:02:12 +0000
From: "Scheglmann, Stefan" scheglmann@strato.de
To: "openstack-dev@lists.openstack.org"

Subject: Re: [openstack-dev] [puppet] puppet-ceph beaker test
Message-ID: AA214AA9-86CB-4DC6-8BDE-627F9067D6F8@strato.de
Content-Type: text/plain; charset="utf-8"

Hey Alex,

thx for the reply, unfortunately it doesn't seem to work. Adding PUPPET_MAJ_VERSION to the call seems not to have any effect.

Stefan
On Tue, Mar 7, 2017 at 7:09 AM, Scheglmann, Stefan scheglmann@strato.de wrote:

Hi,

currently got some problems running the beaker test for the puppet-ceph module. Working on OSX using Vagrant version 1.8.6 and VirtualBox version 5.1.14. The call is 'BEAKER_destroy=no BEAKER_debug=1 bundle exec --verbose rspec spec/acceptance'; output in http://pastebin.com/w5ifgrvd

Try running:
PUPPET_MAJ_VERSION=4 BEAKER_destroy=no BEAKER_debug=1 bundle exec
--verbose rspec spec/acceptance

Thanks,
-Alex

Tried this; it just changes the trace a bit. Now it seems like it worked in the first place but then failed for the same reason.
Trace here:

Trace:
An error occurred in a before(:suite) hook.
Failure/Error: raise CommandFailure, "Host '#{self}' exited with #{result.exit_code} running:\n #{cmdline}\nLast #{@options[:trace_limit]} lines of output were:\n#{result.formatted_output(@options[:trace_limit])}"
Beaker::Host::CommandFailure:
Host 'first' exited with 127 running:
ZUUL_REF= ZUUL_BRANCH= ZUUL_URL= PUPPET_MAJ_VERSION= bash openstack/puppet-openstack-integration/install_modules.sh
Last 10 lines of output were:
+ '[' -n 'SHELLOPTS=braceexpand:hashall:interactive-comments:xtrace
if [ -n "$(set | grep xtrace)" ]; then
local enable_xtrace='\''yes'\'';
if [ -n "${enable_xtrace}" ]; then' ']'
+ set +x
--------------------------------------------------------------------------------
| Install r10k |
--------------------------------------------------------------------------------
+ gem install fast_gettext -v '< 1.2.0'
openstack/puppet-openstack-integration/install_modules.sh: line 29: gem: command not found

It seems that the box beaker is using (puppetlabs/ubuntu-14.04-64-nocm) somehow ends up with puppet 4.x installed. I could not exactly pin down how this happens, because when I spin up a VM just from that base box and install puppet, I end up with 3.4. But during the beaker tests it ends up with puppet 4, and in puppet 4 some paths have changed. /opt/puppetlabs/bin is just for the 'public' applications and the 'private' ones like gem or ruby are in /opt/puppetlabs/puppet/bin. Therefore the openstack/puppet-openstack-integration/install_modules.sh script fails on installation of r10k, because it cannot find gem, and later on it fails on the r10k call because it is also installed to /opt/puppetlabs/puppet/bin.
Symlinking gem and r10k on a provisioned machine and rerunning the tests fixes the problem. Currently I am doing all this because I added some functionality to the puppet-ceph manifests to support bluestore/rocksdb and some additional config params which I would like to see upstream.

Greets Stefan


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Message: 26
Date: Wed, 8 Mar 2017 09:15:04 -0700
From: Alex Schultz aschultz@redhat.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [puppet] puppet-ceph beaker test
Message-ID:

Content-Type: text/plain; charset=UTF-8

On Wed, Mar 8, 2017 at 9:02 AM, Scheglmann, Stefan scheglmann@strato.de wrote:
Hey Alex,

thx for the reply, unfortunately it doesn't seem to work. Adding PUPPET_MAJ_VERSION to the call seems not to have any effect.

I just read the bottom part of the original message and it's getting a
14.04 box from puppet-ceph/spec/acceptance/nodesets/default.yml. You
could try changing that to 16.04. For our CI we're using the
nodepool-xenial.yml via BEAKER_set=nodepool-xenial.yml, but that
assumes you're running on localhost. You could try grabbing the 1604
configuration from puppet-openstack_extras[0] and putting that in your
spec/acceptance/nodesets folder to see if that works for you. Then you
should be able to run:

PUPPET_MAJ_VERSION=4 BEAKER_destroy=no BEAKER_debug=1
BEAKER_set=ubuntu-server-1604-x64 bundle exec --verbose rspec
spec/acceptance

If you run into more problems, you may want to try hopping on IRC and
we can help you in #puppet-openstack on freenode.

Thanks,
-Alex

[0] https://github.com/openstack/puppet-openstack_extras/blob/master/spec/acceptance/nodesets/ubuntu-server-1604-x64.yml

Stefan

On Tue, Mar 7, 2017 at 7:09 AM, Scheglmann, Stefan scheglmann@strato.de wrote:

Hi,

currently got some problems running the beaker test for the puppet-ceph module. Working on OSX using Vagrant version 1.8.6 and VirtualBox version 5.1.14. The call is 'BEAKER_destroy=no BEAKER_debug=1 bundle exec --verbose rspec spec/acceptance'; output in http://pastebin.com/w5ifgrvd

Try running:
PUPPET_MAJ_VERSION=4 BEAKER_destroy=no BEAKER_debug=1 bundle exec
--verbose rspec spec/acceptance

Thanks,
-Alex

Tried this; it just changes the trace a bit. Now it seems like it worked in the first place but then failed for the same reason.
Trace here:

Trace:
An error occurred in a before(:suite) hook.
Failure/Error: raise CommandFailure, "Host '#{self}' exited with #{result.exit_code} running:\n #{cmdline}\nLast #{@options[:trace_limit]} lines of output were:\n#{result.formatted_output(@options[:trace_limit])}"
Beaker::Host::CommandFailure:
Host 'first' exited with 127 running:
ZUUL_REF= ZUUL_BRANCH= ZUUL_URL= PUPPET_MAJ_VERSION= bash openstack/puppet-openstack-integration/install_modules.sh
Last 10 lines of output were:
+ '[' -n 'SHELLOPTS=braceexpand:hashall:interactive-comments:xtrace
if [ -n "$(set | grep xtrace)" ]; then
local enable_xtrace='\''yes'\'';
if [ -n "${enable_xtrace}" ]; then' ']'
+ set +x
--------------------------------------------------------------------------------
| Install r10k |
--------------------------------------------------------------------------------
+ gem install fast_gettext -v '< 1.2.0'
openstack/puppet-openstack-integration/install_modules.sh: line 29: gem: command not found

It seems that the box beaker is using (puppetlabs/ubuntu-14.04-64-nocm) somehow ends up with puppet 4.x installed. I could not exactly pin down how this happens, because when I spin up a VM just from that base box and install puppet, I end up with 3.4. But during the beaker tests it ends up with puppet 4, and in puppet 4 some paths have changed. /opt/puppetlabs/bin is just for the 'public' applications and the 'private' ones like gem or ruby are in /opt/puppetlabs/puppet/bin. Therefore the openstack/puppet-openstack-integration/install_modules.sh script fails on installation of r10k, because it cannot find gem, and later on it fails on the r10k call because it is also installed to /opt/puppetlabs/puppet/bin.
Symlinking gem and r10k on a provisioned machine and rerunning the tests fixes the problem. Currently I am doing all this because I added some functionality to the puppet-ceph manifests to support bluestore/rocksdb and some additional config params which I would like to see upstream.

Greets Stefan


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Message: 27
Date: Wed, 8 Mar 2017 11:23:50 -0500
From: David Moreau Simard dms@redhat.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [tc][appcat] The future of the App
Catalog
Message-ID:

Content-Type: text/plain; charset=UTF-8

The App Catalog, to me, sounds sort of like a weird message that
OpenStack somehow requires applications to be
packaged/installed/deployed differently.
If anything, perhaps we should spend more effort on advertising that
OpenStack provides bare metal or virtual compute resources and that
apps will work just like in any other place.

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

On Wed, Mar 8, 2017 at 9:41 AM, Jay Pipes jaypipes@gmail.com wrote:
On 03/06/2017 06:26 AM, Thierry Carrez wrote:

Hello everyone,

The App Catalog was created early 2015 as a marketplace of pre-packaged
applications that you can deploy using Murano. Initially a demo by
Mirantis, it was converted into an open upstream project team, and
deployed as a "beta" at apps.openstack.org.

Since then it grew additional categories (Glance images, Heat & Tosca
templates), but otherwise did not pick up a lot of steam. The website
(still labeled "beta") features 45 glance images, 6 Tosca templates, 13
heat templates and 94 murano packages (~30% of which are just thin
wrappers around Docker containers). Traffic stats show around 100 visits
per week, 75% of which only read the index page.

In parallel, Docker developed a pretty successful containerized
application marketplace (the Docker Hub), with hundreds of thousands of
regularly-updated apps. Keeping the App Catalog around (including its
thinly-wrapped Docker container Murano packages) makes us look like we
are unsuccessfully trying to compete with that ecosystem, while
OpenStack is in fact completely complementary.

In the past we have retired projects that were dead upstream. The App
Catalog is not in this case: it has an active maintenance team, which
has been successfully maintaining the framework and accepting
applications. If we end up retiring the App Catalog, it would clearly
not be a reflection on that team's performance, which has been stellar
despite limited resources. It would be because the beta was arguably not
successful in building an active marketplace of applications, and
because its continued existence is not a great fit from a strategy
perspective. Such removal would be a first for our community, but I
think it's now time to consider it.

Before we discuss or decide anything at the TC level, I'd like to
collect everyone's thoughts (and questions) on this. Please feel free to
reply to this thread (or reach out to me privately if you prefer). Thanks!

Mirantis' position is that the App Catalog was a good idea, but we agree
with you that other application repositories like DockerHub and Quay.io are
both more useful and more actively used.

The OpenStack App Catalog does indeed seem to unnecessarily compete with
those application repositories, and we would support its retirement if that
is what the community would like to do. We'll provide resources and help in
winding anything down if needed.

Best,
-jay


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Message: 28
Date: Wed, 8 Mar 2017 11:23:50 -0500
From: Brian Rosmaita rosmaita.fossdev@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev]
[cinder][glance][horizon][keystone][nova][qa][swift] Feedback needed:
Removal of legacy per-project vanity domain redirects
Message-ID: ddb8d3a0-85b3-fd6f-31e1-5724b51b99c3@gmail.com
Content-Type: text/plain; charset=windows-1252

On 3/8/17 10:12 AM, Monty Taylor wrote:
Hey all,

We have a set of old vanity redirect URLs from back when we made a URL
for each project:

cinder.openstack.org
glance.openstack.org
horizon.openstack.org
keystone.openstack.org
nova.openstack.org
qa.openstack.org
swift.openstack.org

They are being served from an old server we'd like to retire. Obviously,
moving a set of http redirects is trivial, but these domains have been
deprecated for about 4 years now, so we figured we'd clean house if we can.

We know that the swift team has previously expressed that there are
links out in the wild pointing to swift.o.o/content that still work and
that they don't want to break anyone, which is fine. (although if the
swift team has changed their minds, that's also welcome)

for the rest of you, can we kill these rather than transfer them?

My concern is that glance.openstack.org is easy to remember and type, so
I imagine there are links out there that we have no control over using
that URL. So what are the consequences of it 404'ing or "site cannot be
reached" in a browser?

glance.o.o currently redirects to docs.o.o/developer/glance

If glance.o.o failed for me, I'd next try openstack.org (or
www.openstack.org). Those would give me a page with a top bar of links,
one of which is DOCS. If I took that link, I'd get the docs home page.

There's a search bar there; typing in 'glance' gets me
docs.o.o/developer/glance as the first hit.

If instead I scroll past the search bar, I have to scroll down to
"Project-Specific Guides" and follow "Services & Libraries" ->
"OpenStack Services" -> "Image service (glance)" -> docs.o.o/developer/glance

Which sounds kind of bad ... until I type "glance docs" into google.
Right now the first hit is docs.o.o/developer/glance. And all the kids
these days use the google. So by trying to be clever and hack the URL,
I could get lost, but if I just google 'glance docs', I find what I'm
looking for right away.

So I'm willing to go with the majority on this, with the caveat that if
one or two teams keep the redirect, it's going to be confusing to end
users if the redirect doesn't work for other projects.

cheers,
brian

Thanks!
Monty


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Message: 29
Date: Wed, 8 Mar 2017 16:27:56 +0000 (GMT)
From: Chris Dent cdent+os@anticdent.org
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [nova][placement-api] Is there any
document about openstack-placement-api for installation and configure?
Message-ID: alpine.OSX.2.20.1703081625580.59117@shine.local
Content-Type: text/plain; charset="utf-8"; Format="flowed"

On Wed, 8 Mar 2017, Yu Wei wrote:

It seems that nova-placement-api acts as a CGI module.

Is it?

It's a WSGI application module, which is configured and accessed via
some mod_wsgi configuration settings, if you're using mod_wsgi with
apache2:

 https://modwsgi.readthedocs.io/en/develop/configuration-directives/WSGIDaemonProcess.html
 https://modwsgi.readthedocs.io/en/develop/configuration-directives/WSGIScriptAlias.html

It's a similar concept with other WSGI servers.
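
For illustration, the smallest possible WSGI application module looks
something like this (a generic sketch, not nova's actual code):

     # what WSGIScriptAlias points at: a module exposing 'application';
     # mod_wsgi calls it once per request, nova-placement-api included.
     def application(environ, start_response):
         body = b'placeholder response\n'
         start_response('200 OK', [('Content-Type', 'text/plain'),
                                   ('Content-Length', str(len(body)))])
         return [body]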

--
Chris Dent ¯\_(ツ)_/¯ https://anticdent.org/
freenode: cdent tw: @anticdent


Message: 30
Date: Wed, 8 Mar 2017 16:31:23 +0000
From: "Daniel P. Berrange" berrange@redhat.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev]
[cinder][glance][horizon][keystone][nova][qa][swift] Feedback needed:
Removal of legacy per-project vanity domain redirects
Message-ID: 20170308163123.GT7470@redhat.com
Content-Type: text/plain; charset=utf-8

On Wed, Mar 08, 2017 at 09:12:59AM -0600, Monty Taylor wrote:
Hey all,

We have a set of old vanity redirect URLs from back when we made a URL
for each project:

cinder.openstack.org
glance.openstack.org
horizon.openstack.org
keystone.openstack.org
nova.openstack.org
qa.openstack.org
swift.openstack.org

They are being served from an old server we'd like to retire. Obviously,
moving a set of http redirects is trivial, but these domains have been
deprecated for about 4 years now, so we figured we'd clean house if we can.

We know that the swift team has previously expressed that there are
links out in the wild pointing to swift.o.o/content that still work and
that they don't want to break anyone, which is fine. (although if the
swift team has changed their minds, that's also welcome)

for the rest of you, can we kill these rather than transfer them?

Does the server have any access logs that could provide stats on whether
any of the subdomains are receiving a meaningful amount of traffic?
Easy to justify removing them if they're not seeing any real traffic.

If there's any referrer logs present, that might highlight which places
still have outdated links that need updating to kill off remaining
traffic.

Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://entangle-photo.org -o- http://search.cpan.org/~danberr/ :|


Message: 31
Date: Thu, 9 Mar 2017 00:50:56 +0800
From: Jeffrey Zhang zhang.lei.fly@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Cc: openstack openstack@lists.openstack.org
Subject: Re: [openstack-dev] [kolla][ubuntu][libvirt] Is libvirt 2.5.0
in ubuntu cloud archive ocata repo bust
Message-ID:

Content-Type: text/plain; charset="utf-8"

Thanks Corey. But I tried the ocata-proposed repo, and the issue is still happening.

On Wed, Mar 8, 2017 at 10:03 PM, Corey Bryant corey.bryant@canonical.com
wrote:

On Tue, Mar 7, 2017 at 10:28 PM, Jeffrey Zhang zhang.lei.fly@gmail.com
wrote:

The Kolla deploy ubuntu gate is red now. Here is the related bug[0].

libvirt failed to access the console.log file when booting an instance. After
some debugging, I got the following.

Jeffrey, This is likely fixed in ocata-proposed and should be promoted to
ocata-updates soon after testing completes. https://bugs.launchpad.net/
ubuntu/+source/libvirt/+bug/1667033.

Corey

how console.log works

nova creates an empty console.log owned by nova:nova (this is actually a
workaround for another bug[1]), then libvirt (running as root) changes the
file owner to the qemu process user/group (configured by dynamic_ownership).
Now the qemu process can write logs into this file.

what's wrong now

libvirt 2.5.0 stopped changing the file owner, so qemu/libvirt fails to
write logs into the console.log file.

other tests

  • ubuntu + falling back to libvirt 1.3.x works[2]
  • ubuntu + libvirt 2.5.0 + changing the qemu process user/group to
    nova:nova works, too[3]
  • centos + libvirt 2.0.0 works; never saw such an issue on centos.

conclusion

I guess something is wrong with dynamic_ownership in libvirt 2.5.0.

[0] https://bugs.launchpad.net/kolla-ansible/+bug/1668654
[1] https://github.com/openstack/nova/blob/master/nova/virt/
libvirt/driver.py#L2922,L2952
[2] https://review.openstack.org/442673
[3] https://review.openstack.org/442850
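
To make the ownership flow above concrete, here is a rough Python sketch
(the path and the qemu user/group names are illustrative and vary by
distro; this is not nova or libvirt code):

import grp
import os
import pwd

# Stand-in for /var/lib/nova/instances/<uuid>/console.log
console_log = "/tmp/console.log"

# Step 1: nova (running as nova:nova) pre-creates an empty file.
open(console_log, "a").close()

# Step 2: with dynamic_ownership enabled, libvirt (running as root) is
# expected to chown the file to the qemu process user/group so qemu can
# append to it; this is the step libvirt 2.5.0 apparently skips here.
# Requires root, and the user/group names differ per distro.
qemu_uid = pwd.getpwnam("libvirt-qemu").pw_uid
qemu_gid = grp.getgrnam("kvm").gr_gid
os.chown(console_log, qemu_uid, qemu_gid)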

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me



OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
-------------- next part --------------
An HTML attachment was scrubbed...
URL:


Message: 32
Date: Wed, 8 Mar 2017 12:08:11 -0500
From: James Slagle james.slagle@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [TripleO][Heat] Selectively disabling
deployment resources
Message-ID:

Content-Type: text/plain; charset=UTF-8

On Wed, Mar 8, 2017 at 4:08 AM, Steven Hardy shardy@redhat.com wrote:
On Tue, Mar 07, 2017 at 02:34:50PM -0500, James Slagle wrote:

I've been working on this spec for TripleO:
https://review.openstack.org/#/c/431745/

which allows users to selectively disable Heat deployment resources
for a given server (or servers in the case of a *DeploymentGroup
resource).

Some of the main use cases in TripleO for such a feature are scaling
out compute nodes where you do not need to rerun Puppet (or make any
changes at all) on non-compute nodes, or to exclude nodes from hanging
a stack-update if you know they are unreachable or degraded for some
reason. There are others, but those are 2 of the major use cases.

Thanks for raising this, I know it's been a pain point for some users of
TripleO.

However I think we're conflating two different issues here:

  1. Don't re-run puppet (or yum update) when no other changes have happened

  2. Disable deployment resources when changes have happened

Yea, possibly, but (1) doesn't really solve the use cases in the spec.
It'd certainly be a small improvement, but it's not really what users
are asking for.

(2) is much more difficult to reason about because we in fact have to
execute puppet to fully determine if changes have happened.

I don't really think these two are conflated. For some purposes, the
2nd is just a more abstract definition of the first. For better or
worse, part of the reason people are asking for this feature is
because they don't want to undo manual changes. While that's not
something we should really spend a lot of time solving for, the fact
is that the OpenStack architecture allows for horizontally scaling compute
nodes without having to touch every other node in your deployment,
but TripleO can't take advantage of that.

So, just giving users a way to opt out of the generated unique
identifier triggering the puppet applies and other deployments
wouldn't help them if they unintentionally changed some other hiera
data that triggers a deployment.

Plus, we have some deployments that are going to execute every time
outside of unique identifiers being generated (hosts-config.yaml).

(1) is actually very simple, and is the default behavior of Heat
(SoftwareDeployment resources never update unless either the config
referenced or the input_values change). We just need to provide an option
to disable the DeployIdentifier/UpdateIdentifier timestamps from being
generated in tripleoclient.

(2) is harder, because the whole point of SoftwareDeploymentGroup is to run
the exact same configuration on a group of servers, with no exceptions.

As Zane mentions, (2) is related to the way ResourceGroup works, but the
problem here isn't ResourceGroup per se, as it would in theory be pretty
easy to reimplement SoftwareDeploymentGroup to generate its nested stack
without inheriting from ResourceGroup (which may be needed if you want a
flag to make existing Deployments in the group immutable).

I'd suggest we solve (1) and do some testing; it may be enough to solve the
"don't change computes on scale-out" case at least?

Possibly, as long as no other deployments are triggered. I think of
the use case more as:

add a compute node(s), don't touch any existing nodes to minimize risk

as opposed to:

add a compute node(s), don't re-run puppet on any existing nodes as I
know that it's not needed

For the scale out case, the desire to minimize risk is a big part of
why other nodes don't need to be touched.

One way to potentially solve (2) would be to unroll the
SoftwareDeploymentGroup resources and instead generate the Deployment
resources via jinja2 - this would enable completely removing them on update
if that's what is desired, similar to what we already do for upgrades to
e.g. not upgrade any compute nodes.

Thanks, I hadn't considered that approach, but I will look into it. I'd
guess you'd still need a parameter or map data fed into the jinja2
templating, so that it would skip generating the deployment resources
that were meant to be disabled. Or perhaps this could use conditionals.
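
As a hedged sketch of that jinja2 approach (the resource names and the
disabled_servers input are hypothetical, not actual tripleo-heat-templates
code), the generation could look like:

import jinja2

# Render one SoftwareDeployment per server role, skipping any role the
# user asked to disable. Everything here is illustrative.
template = jinja2.Template("""\
resources:
{% for server in servers if server not in disabled_servers %}
  {{ server }}Deployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      server: { get_param: {{ server }}Id }
      config: { get_resource: PuppetConfig }
{% endfor %}
""")

print(template.render(servers=["Controller", "Compute"],
                      disabled_servers=["Compute"]))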

--
-- James Slagle


Message: 33
Date: Wed, 8 Mar 2017 09:33:29 -0800
From: "Armando M." armamig@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [neutron] [infra] Depends-on tag effect
Message-ID:

Content-Type: text/plain; charset="utf-8"

On 8 March 2017 at 07:39, Hirofumi Ichihara
<ichihara.hirofumi@lab.ntt.co.jp> wrote:

On 2017/03/08 23:59, Andreas Jaeger wrote:

On 2017-03-08 15:40, ZZelle wrote:

Hi,

IIUC, neutron uses a released version of neutron-lib, not neutron-lib
master... so the change should depend on a change in the requirements repo
incrementing the neutron-lib version.

This is also documented - together with some other caveats - at:

https://docs.openstack.org/infra/manual/developers.html#limitations-and-caveats

Thank you for the pointer. I understand.

You can do the reverse as documented in [1]: i.e. file a dummy patch
against neutron-lib that pulls in both neutron's and neutron-lib's changes.
One example is [2].

[1] https://docs.openstack.org/developer/neutron-lib/review-guidelines.html
[2] https://review.openstack.org/#/c/386846/

Hirofumi

Note that a Depends-On against requirements won't work either - you really
need to release it. Or you need to change the test to pull neutron-lib from
source.

Andreas

On Wed, Mar 8, 2017 at 3:16 PM, Hirofumi Ichihara
<ichihara.hirofumi@lab.ntt.co.jp> wrote:

 Hi,

 I thought that we could post a neutron patch depending on a neutron-lib
 patch under review.
 However, I saw it doesn't work[1, 2]. In the patches, the neutron
 patch[1] has a Depends-On tag with the neutron-lib patch[2], but the pep8
 and unit tests fail because the tests don't use the neutron-lib patch.

 Please correct me if it's my misunderstanding.

 [1]: https://review.openstack.org/#/c/424340/
 [2]: https://review.openstack.org/#/c/424868/

 Thanks,
 Hirofumi





OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:


Message: 34
Date: Wed, 8 Mar 2017 12:38:01 -0500
From: Jim Rollenhagen jim@jimrollenhagen.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [ironic] OpenStack client default ironic
API version
Message-ID:

Content-Type: text/plain; charset="utf-8"

On Wed, Mar 8, 2017 at 9:05 AM, Mario Villaplana
<mario.villaplana@gmail.com> wrote:

We want to deprecate ironic CLI soon, but I would prefer if that were
discussed on a separate thread if possible, aside from concerns about
versioning in ironic CLI. Feature parity should exist in Pike, then we
can issue a warning in Queens and deprecate the cycle after. More
information is on L56:
https://etherpad.openstack.org/p/ironic-pike-ptg-operations

I'm a bit torn on whether to use the API version coded in the OSC
plugin or not. On one hand, it'd be good to be able to test out new
features as soon as they're available. On the other hand, it's
possible that the client won't know how to parse certain items after a
microversion bump. I think I prefer using the hard-coded version to
avoid breakage, but we'd have to be disciplined about updating the
client when the API version is bumped (if needed). Opinions on this
are welcome. In either case, I think the deprecation warning could
land without specifying that.

I agree, I think we should pin it, otherwise it's one more hump to
overcome when we do want to make a breaking change.

FWIW, nova pins (both clients) to the max the client knows about,
specifically for this reason:
https://github.com/openstack/python-openstackclient/blob/master/openstackclient/compute/client.py#L52-L57
https://github.com/openstack/python-novaclient/blob/master/novaclient/__init__.py#L23-L28
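
As an illustrative sketch of that pin-to-known-max negotiation (the names
here are hypothetical, not the actual OSC plugin or nova client code):

# The client pins itself to the newest microversion it was written
# against, and falls back to an older server's maximum when needed.
SUPPORTED_MAX = (1, 31)  # hypothetical pin

def pick_version(server_min, server_max):
    # Tuples compare lexicographically, so (1, 22) < (1, 31).
    if SUPPORTED_MAX < server_min:
        raise RuntimeError("server is too new for this client")
    return min(server_max, SUPPORTED_MAX)

print(pick_version((1, 1), (1, 22)))  # older server -> (1, 22)
print(pick_version((1, 1), (1, 40)))  # newer server -> (1, 31), the pin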

I'll certainly make an RFE when I update the patch later this week,
great suggestion.

I can make a spec, but it might be mostly empty except for the client
impact section. Also, this is a < 40 line change. :)

I tend to think a spec is a bit overkill for this, but I won't deny Ruby's
request.
Ping me when it's up and I'm happy to review it ASAP.

// jim
-------------- next part --------------
An HTML attachment was scrubbed...
URL:


Message: 35
Date: Wed, 8 Mar 2017 17:42:02 +0000
From: "Fox, Kevin M" Kevin.Fox@pnnl.gov
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [tc][appcat] The future of the App
Catalog
Message-ID:

Content-Type: text/plain; charset="us-ascii"

For the OpenStack Applications Catalog to be successful in its mission, it required other parts of OpenStack to consider the use case a priority. Over the years it became quite clear to me that a significant part of the OpenStack community does not want OpenStack to become a place where cloud native applications would be built/packaged/provided to users using OpenStack's APIs, but instead just a place to run virtual machines on which you might deploy a cloud native platform to handle that use case. As time goes on and COEs gain multitenancy, I see a big contraction in the number of OpenStack deployments or deployed node count, and a shifting of OpenStack-based workloads more towards managing pet VMs, as the cloud native stuff moves more and more towards containers/COEs, which don't actually need VMs.

This, I think, will bring the issue to a head in the OpenStack community soon. What is OpenStack? Is it purely an IaaS implementation? It's pretty good at that now, but that will be very niche soon, I think. Is it a cloud operating system? The community seems to have made that a resounding no. Is it an open source competitor to AWS? Today it's getting further and further behind in that, and if nothing changes, catching up will be impossible.

My 2 cents? I think the world does need an open source implementation of what AWS provides. That can't happen on the path we're all going down now. We're struggling with a division of vision between the two ideologies and a lack of decision around a COE, causing us to spend a huge amount of effort on things like Trove/Sahara/etc to reproduce functionality in AWS without being as agile as AWS, so we can't ever make headway. If we want to be an open source AWS competitor, that requires us to make some hard calls: pick a COE (Kubernetes has won that space, I believe), start integrating it quickly, and retool advanced services like Trove/Sahara/etc to target the COE rather than VMs for deployment. This should greatly enhance our ability to produce functional solutions quickly.

But it's ultimately the community who decides what OpenStack will become. If we're ok with the path it's headed down, to basically just be an IaaS, that's fine with me. I'd just like it to be a conscious decision rather than one that just happens. If that's the way it goes, let's just decide on it now and let the folks who are spinning their wheels move on to a system that will help them make headway on their goals. It will be better for everyone.

Thanks,
Kevin


From: David Moreau Simard [dms@redhat.com]
Sent: Wednesday, March 08, 2017 8:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc][appcat] The future of the App Catalog

The App Catalog, to me, sends a sort of weird message: that OpenStack
somehow requires applications to be
packaged/installed/deployed differently.
If anything, perhaps we should spend more effort on advertising that
OpenStack provides bare metal or virtual compute resources and that
apps will work just like they do anywhere else.

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

On Wed, Mar 8, 2017 at 9:41 AM, Jay Pipes jaypipes@gmail.com wrote:
On 03/06/2017 06:26 AM, Thierry Carrez wrote:

Hello everyone,

The App Catalog was created early 2015 as a marketplace of pre-packaged
applications that you can deploy using Murano. Initially a demo by
Mirantis, it was converted into an open upstream project team, and
deployed as a "beta" at apps.openstack.org.

Since then it grew additional categories (Glance images, Heat & Tosca
templates), but otherwise did not pick up a lot of steam. The website
(still labeled "beta") features 45 glance images, 6 Tosca templates, 13
heat templates and 94 murano packages (~30% of which are just thin
wrappers around Docker containers). Traffic stats show around 100 visits
per week, 75% of which only read the index page.

In parallel, Docker developed a pretty successful containerized
application marketplace (the Docker Hub), with hundreds of thousands of
regularly-updated apps. Keeping the App Catalog around (including its
thinly-wrapped Docker container Murano packages) makes us look like we
are unsuccessfully trying to compete with that ecosystem, while
OpenStack is in fact completely complementary.

In the past we have retired projects that were dead upstream. The App
Catalog is not in that situation: it has an active maintenance team, which
has been successfully maintaining the framework and accepting
applications. If we end up retiring the App Catalog, it would clearly
not be a reflection on that team's performance, which has been stellar
despite limited resources. It would be because the beta was arguably not
successful in building an active marketplace of applications, and
because its continued existence is not a great fit from a strategy
perspective. Such removal would be a first for our community, but I
think it's now time to consider it.

Before we discuss or decide anything at the TC level, I'd like to
collect everyone's thoughts (and questions) on this. Please feel free to
reply to this thread (or reach out to me privately if you prefer).
Thanks!

Mirantis' position is that the App Catalog was a good idea, but we agree
with you that other application repositories like DockerHub and Quay.io are
both more useful and more actively used.

The OpenStack App Catalog does indeed seem to unnecessarily compete with
those application repositories, and we would support its retirement if that
is what the community would like to do. We'll provide resources and help in
winding anything down if needed.

Best,
-jay


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Message: 36
Date: Wed, 8 Mar 2017 17:52:43 +0000
From: "Kwasniewska, Alicja" alicja.kwasniewska@intel.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [kolla] Proposing duonghq for core
Message-ID: E5DE2A5A-DCA7-4900-BFB7-4849CE6D9DAF@intel.com
Content-Type: text/plain; charset="utf-8"

+1

From: Mauricio Lima mauriciolimab@gmail.com
Reply-To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Date: Wednesday, March 8, 2017 at 5:34 AM
To: "OpenStack Development Mailing List (not for usage questions)" openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [kolla] Proposing duonghq for core

+1

2017-03-08 7:34 GMT-03:00 Christian Berendt berendt@betacloud-solutions.de:
+1

On 8 Mar 2017, at 07:41, Michał Jastrzębski inc007@gmail.com wrote:

Hello,

I'd like to start voting to include Duong (duonghq) in Kolla and
Kolla-ansible core teams. Voting will be open for 2 weeks (ends at
21st of March).

Consider this my +1 vote.

Cheers,
Michal


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Christian Berendt
Chief Executive Officer (CEO)

Mail: berendt@betacloud-solutions.de
Web: https://www.betacloud-solutions.de

Betacloud Solutions GmbH
Teckstrasse 62 / 70190 Stuttgart / Deutschland

Geschäftsführer: Christian Berendt
Unternehmenssitz: Stuttgart
Amtsgericht: Stuttgart, HRB 756139


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:


Message: 37
Date: Wed, 8 Mar 2017 13:17:14 -0500
From: Corey Bryant corey.bryant@canonical.com
To: "OpenStack Development Mailing List (not for usage questions)"

Cc: openstack openstack@lists.openstack.org
Subject: Re: [openstack-dev] [kolla][ubuntu][libvirt] Is libvirt 2.5.0
in ubuntu cloud archive ocata repo bust
Message-ID:

Content-Type: text/plain; charset="utf-8"

On Wed, Mar 8, 2017 at 11:50 AM, Jeffrey Zhang zhang.lei.fly@gmail.com
wrote:

Thanks Corey, but I tried the ocata-proposed repo and the issue is still
happening.

In that case, would you mind opening a bug if you haven't already?

Thanks,
Corey
-------------- next part --------------
An HTML attachment was scrubbed...
URL:


Message: 38
Date: Wed, 8 Mar 2017 18:29:52 +0000
From: Jeremy Stanley fungi@yuggoth.org
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [infra][tripleo] initial discussion for a
new periodic pipeline
Message-ID: 20170308182952.GG12827@yuggoth.org
Content-Type: text/plain; charset=us-ascii

On 2017-03-07 10:12:58 -0500 (-0500), Wesley Hayutin wrote:
The TripleO team would like to initiate a conversation about the
possibility of creating a new pipeline in Openstack Infra to allow
a set of jobs to run periodically every four hours
[...]

The request doesn't strike me as contentious/controversial. Why not
just propose your addition to the zuul/layout.yaml file in the
openstack-infra/project-config repo and hash out any resulting
concerns via code review?
--
Jeremy Stanley


Message: 39
Date: Wed, 8 Mar 2017 13:03:58 -0600
From: Matthew Thode prometheanfire@gentoo.org
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: [openstack-dev] [requirements] pycrypto is dead, long live
pycryptodome... or cryptography...
Message-ID: cba43a52-7c71-5ad0-15c1-5127ff4c302e@gentoo.org
Content-Type: text/plain; charset="utf-8"

So, pycrypto upstream is dead and has been for a while; we should look
at moving off of it for both bugfix and security reasons.

Currently it's used by the following:

barbican, cinder, trove, glance, heat, keystoneauth, keystonemiddleware,
kolla, openstack-ansible, and a couple of other smaller places.

Development of it was forked into pycryptodome, which is supposed to be
a drop-in replacement. The problem is that due to co-installability
requirements we can't have half of the packages out there using pycrypto
and the other half using pycryptodome. We'd need to hard switch everyone,
as both packages install into the same namespace.

Another alternative would be to use something like cryptography instead;
though it is not a drop-in replacement, the migration could be done
piecemeal.

I'd be interested in hearing about migration plans, especially from the
affected projects.
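
As one hedged example of what a piecemeal migration could look like,
AES-CTR via cryptography in place of a typical Crypto.Cipher.AES usage
(the key and nonce here are throwaway values):

import os

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)    # AES-256 key
nonce = os.urandom(16)  # initial counter block for CTR mode

cipher = Cipher(algorithms.AES(key), modes.CTR(nonce),
                backend=default_backend())
encryptor = cipher.encryptor()
ciphertext = encryptor.update(b"secret payload") + encryptor.finalize()

decryptor = cipher.decryptor()
assert decryptor.update(ciphertext) + decryptor.finalize() == b"secret payload"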

--
Matthew Thode (prometheanfire)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL:


Message: 40
Date: Wed, 8 Mar 2017 20:04:10 +0100
From: Andreas Jaeger aj@suse.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev]
[cinder][glance][horizon][keystone][nova][qa][swift] Feedback needed:
Removal of legacy per-project vanity domain redirects
Message-ID: 9617a5f5-2f01-c713-c1bf-86c6308422f3@suse.com
Content-Type: text/plain; charset="windows-1252"

On 2017-03-08 17:23, Brian Rosmaita wrote:
On 3/8/17 10:12 AM, Monty Taylor wrote:

Hey all,

We have a set of old vanity redirect URLs from back when we made a URL
for each project:

cinder.openstack.org
glance.openstack.org
horizon.openstack.org
keystone.openstack.org
nova.openstack.org
qa.openstack.org
swift.openstack.org

They are being served from an old server we'd like to retire. Obviously,
moving a set of http redirects is trivial, but these domains have been
deprecated for about 4 years now, so we figured we'd clean house if we can.

We know that the swift team has previously expressed that there are
links out in the wild pointing to swift.o.o/content that still work and
that they don't want to break anyone, which is fine. (although if the
swift team has changed their minds, that's also welcome)

for the rest of you, can we kill these rather than transfer them?

My concern is that glance.openstack.org is easy to remember and type, so
I imagine there are links out there that we have no control over using
that URL. So what are the consequences of it 404'ing or "site cannot be
reached" in a browser?

glance.o.o currently redirects to docs.o.o/developer/glance

If glance.o.o failed for me, I'd next try openstack.org (or
www.openstack.org). Those would give me a page with a top bar of links,
one of which is DOCS. If I took that link, I'd get the docs home page.

There's a search bar there; typing in 'glance' gets me
docs.o.o/developer/glance as the first hit.

If instead I scroll past the search bar, I have to scroll down to
"Project-Specific Guides" and follow "Services & Libraries" ->
"OpenStack Services" -> "Image service (glance)" -> docs.o.o/developer/glance

Which sounds kind of bad ... until I type "glance docs" into google.
Right now the first hit is docs.o.o/developer/glance. And all the kids
these days use the google. So by trying to be clever and hack the URL,
I could get lost, but if I just google 'glance docs', I find what I'm
looking for right away.

So I'm willing to go with the majority on this, with the caveat that if
one or two teams keep the redirect, it's going to be confusing to end
users if the redirect doesn't work for other projects.

Very few people know about these URLs at all and there are only a few
places that use them in openstack (I just sent a few patches for those).
If you google for "openstack glance", you won't get this URL at all.

Andreas
--
Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


Message: 41
From: no-reply@openstack.org
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [kolla] kolla 4.0.0.0rc2 (ocata)
Message-ID:

Hello everyone,

A new release candidate for kolla for the end of the Ocata
cycle is available! You can find the source code tarball at:

https://tarballs.openstack.org/kolla/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:

http://git.openstack.org/cgit/openstack/kolla/log/?h=stable/ocata

Release notes for kolla can be found at:

http://docs.openstack.org/releasenotes/kolla/

Message: 42
Date: Wed, 8 Mar 2017 14:11:59 -0500
From: Davanum Srinivas davanum@gmail.com
To: prometheanfire@gentoo.org, "OpenStack Development Mailing List
(not for usage questions)" openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [requirements] pycrypto is dead, long
live pycryptodome... or cryptography...
Message-ID:

Content-Type: text/plain; charset=UTF-8

Matthew,

Please see the last time I took inventory:
https://review.openstack.org/#/q/pycryptodome+owner:dims-v

Thanks,
Dims

On Wed, Mar 8, 2017 at 2:03 PM, Matthew Thode prometheanfire@gentoo.org wrote:
So, pycrypto upstream is dead and has been for a while; we should look
at moving off of it for both bugfix and security reasons.

Currently it's used by the following:

barbican, cinder, trove, glance, heat, keystoneauth, keystonemiddleware,
kolla, openstack-ansible, and a couple of other smaller places.

Development of it was forked into pycryptodome, which is supposed to be
a drop-in replacement. The problem is that due to co-installability
requirements we can't have half of the packages out there using pycrypto
and the other half using pycryptodome. We'd need to hard switch everyone,
as both packages install into the same namespace.

Another alternative would be to use something like cryptography instead;
though it is not a drop-in replacement, the migration could be done
piecemeal.

I'd be interested in hearing about migration plans, especially from the
affected projects.

--
Matthew Thode (prometheanfire)


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Davanum Srinivas :: https://twitter.com/dims


Message: 43
From: no-reply@openstack.org
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [kolla] kolla-ansible 4.0.0.0rc2 (ocata)
Message-ID:

Hello everyone,

A new release candidate for kolla-ansible for the end of the Ocata
cycle is available! You can find the source code tarball at:

https://tarballs.openstack.org/kolla-ansible/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Ocata release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/ocata release
branch at:

http://git.openstack.org/cgit/openstack/kolla-ansible/log/?h=stable/ocata

Release notes for kolla-ansible can be found at:

http://docs.openstack.org/releasenotes/kolla-ansible/

Message: 44
Date: Wed, 8 Mar 2017 14:17:59 -0500
From: Steve Martinelli s.martinelli@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev]
[cinder][glance][horizon][keystone][nova][qa][swift] Feedback needed:
Removal of legacy per-project vanity domain redirects
Message-ID:

Content-Type: text/plain; charset="utf-8"

On Wed, Mar 8, 2017 at 2:04 PM, Andreas Jaeger aj@suse.com wrote:

Very few people know about these URLs at all and there are only a few
places that use them in openstack (I just sent a few patches for those).

++

I had no idea they existed...
-------------- next part --------------
An HTML attachment was scrubbed...
URL:


Message: 45
Date: Wed, 8 Mar 2017 13:24:50 -0600
From: Matthew Thode prometheanfire@gentoo.org
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [requirements] pycrypto is dead, long
live pycryptodome... or cryptography...
Message-ID: b6af5257-85dd-28be-2629-0ada5af81b7c@gentoo.org
Content-Type: text/plain; charset="utf-8"

I'm aware; IIRC it was brought up when pysaml2 had to be fixed due to a
CVE. This thread is looking more for a long-term fix.

On 03/08/2017 01:11 PM, Davanum Srinivas wrote:
Matthew,

Please see the last time I took inventory:
https://review.openstack.org/#/q/pycryptodome+owner:dims-v

Thanks,
Dims

On Wed, Mar 8, 2017 at 2:03 PM, Matthew Thode prometheanfire@gentoo.org wrote:

So, pycrypto upstream is dead and has been for a while; we should look
at moving off of it for both bugfix and security reasons.

Currently it's used by the following:

barbican, cinder, trove, glance, heat, keystoneauth, keystonemiddleware,
kolla, openstack-ansible, and a couple of other smaller places.

Development of it was forked into pycryptodome, which is supposed to be
a drop-in replacement. The problem is that due to co-installability
requirements we can't have half of the packages out there using pycrypto
and the other half using pycryptodome. We'd need to hard switch everyone,
as both packages install into the same namespace.

Another alternative would be to use something like cryptography instead;
though it is not a drop-in replacement, the migration could be done
piecemeal.

I'd be interested in hearing about migration plans, especially from the
affected projects.

--
Matthew Thode (prometheanfire)


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Matthew Thode (prometheanfire)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL:



OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

End of OpenStack-dev Digest, Vol 59, Issue 24



OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
responded Mar 11, 2017 by Steven_Dake_(stdake) (24,540 points)   2 6 20
...