
[openstack-dev] Facing error (HTTP 500) while generating token through KeyStone


Dear All,

Requesting your help on the following issue:

I am trying to deploy a multi-node OpenStack (Liberty) environment,
following the exact instructions given at:
http://docs.openstack.org/liberty/install-guide-ubuntu/

After configuring Keystone, I am trying to verify its operation by
generating a token, but I am facing the following error:
Command:
openstack --os-auth-url http://controller:35357/v3 \
  --os-project-domain-id default --os-user-domain-id default \
  --os-project-name admin --os-username admin \
  --os-auth-type password token issue

Error:
An unexpected error prevented the server from fulfilling your request.
(HTTP 500) (Request-ID: req-c89801db-f8de-457b-8fc5-df0d3c72d44e)

The Keystone log (/var/log/apache2/keystone.log) shows the following:
2016-02-12 11:50:48.044811 2016-02-12 11:50:48.044 2808 INFO
keystone.common.wsgi [req-c89801db-f8de-457b-8fc5-df0d3c72d44e - - - - -]
POST http://controller:5000/v3/auth/tokens
2016-02-12 11:50:48.363340 2016-02-12 11:50:48.362 2808 INFO
keystone.common.kvs.core [req-c89801db-f8de-457b-8fc5-df0d3c72d44e - - - -
-] Using default dogpile sha1_mangle_key as KVS region token-driver
key_mangler
2016-02-12 11:50:55.692962 2016-02-12 11:50:55.692 2808 WARNING
keystone.common.wsgi [req-c89801db-f8de-457b-8fc5-df0d3c72d44e - - - - -]
An unexpected error prevented the server from fulfilling your request.
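The log shows the failing command ultimately POSTs a password-auth body to /v3/auth/tokens. Replaying that request directly (e.g. with curl against http://controller:5000/v3/auth/tokens) helps isolate whether the 500 is client- or server-side. A minimal sketch of the request body, using the names from the command above (the password value is a placeholder):

```python
import json

def v3_password_auth(username, password, user_domain, project, project_domain):
    """Build a Keystone v3 password-auth request body, scoped to a project."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"id": user_domain},
                        "password": password,
                    }
                },
            },
            "scope": {
                "project": {
                    "name": project,
                    "domain": {"id": project_domain},
                }
            },
        }
    }

# Values taken from the failing command; ADMIN_PASS is a placeholder.
body = v3_password_auth("admin", "ADMIN_PASS", "default", "admin", "default")
print(json.dumps(body, indent=2))
```

If the same body still yields a 500 via curl, the problem is server-side; in setups following the Liberty install guide, common culprits include a memcached service that is not running and a missed `keystone-manage db_sync`, though neither is confirmed from this log alone.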

Best Regards
Ankit Agrawal

From: openstack-dev-request@lists.openstack.org
To: openstack-dev@lists.openstack.org
Date: 02/12/2016 04:48 AM
Subject: OpenStack-dev Digest, Vol 46, Issue 32

Send OpenStack-dev mailing list submissions to
openstack-dev@lists.openstack.org

To subscribe or unsubscribe via the World Wide Web, visit

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
or, via email, send a message with subject or body 'help' to
openstack-dev-request@lists.openstack.org

You can reach the person managing the list at
openstack-dev-owner@lists.openstack.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of OpenStack-dev digest..."

Today's Topics:

  1. Re: [all] [tc] "No Open Core" in 2016 (Flavio Percoco)
  2. Re: [Fuel][QA] What is the preferred way to bootstrap a
    baremetal node with Fuel on product CI? (Dennis Dmitriev)
  3. Re: [qa] deprecating Tempest stress framework (Daniel Mellado)
  4. Re: [Fuel] URL of Horizon is hard to find on the dashboard
    (Igor Kalnitsky)
  5. [ironic] More midcycle details (Jim Rollenhagen)
  6. Glance Image signing and verification (Pankaj Mishra)
  7. Re: Glance Image signing and verification (Nikhil Komawar)
  8. Re: [infra] [trove] gate jobs failing with ovh apt mirrors
    (Jeremy Stanley)
  9. Re: [neutron] [ipam] Migration to pluggable IPAM (John Belamaric)

    1. Re: [OpenStack-Infra] Gerrit downtime on Friday 2016-02-12 at
      22:00 UTC (Mateusz Matuszkowiak)
    2. Re: [Fuel][Plugins] Multi release packages (Simon Pasquier)
    3. Re: [infra] [trove] gate jobs failing with ovh apt mirrors
      (Craig Vyvial)
    4. [neutron] Broken Gate tests on branch stable/kilo, Project:
      openstack/neutron (John Joyce (joycej))
    5. Re: [neutron] Broken Gate tests on branch stable/kilo,
      Project: openstack/neutron (Ihar Hrachyshka)
    6. Re: All hail the new per-region pypi, wheel and apt mirrors
      (Matthew Treinish)
    7. Re: [magnum][heat] Bug 1544227 (Hongbin Lu)
    8. [nova] Updating a volume attachment (Shoham Peller)
    9. Re: [nova] Updating a volume attachment (Andrea Rosa)
    10. Re: [neutron] [ipam] Migration to pluggable IPAM (Armando M.)
    11. Re: [neutron] [ipam] Migration to pluggable IPAM (Armando M.)
    12. Re: [nova] Updating a volume attachment (Shoham Peller)
    13. Re: [Nova][Cinder] Multi-attach, determining when to call
      os-brick's connector.disconnect_volume (Walter A. Boring IV)
    14. Re: [infra] [trove] gate jobs failing with ovh apt mirrors
      (Clint Byrum)
    15. [release] Release countdown for week R-7, Feb 15-19
      (Doug Hellmann)
    16. Re: [Fuel][Plugins] Multi release packages (Ilya Kutukov)
    17. Re: [Fuel][Plugins] Multi release packages (Ilya Kutukov)
    18. Re: [neutron] [ipam] Migration to pluggable IPAM (Carl Baldwin)
    19. Re: [neutron] [ipam] Migration to pluggable IPAM (Carl Baldwin)
    20. Re: [neutron] [ipam] Migration to pluggable IPAM (Carl Baldwin)
    21. Multiple delete of network through CLI is not available as of
      now (Monika Parkar)
    22. Re: [neutron] [ipam] Migration to pluggable IPAM (John Belamaric)
    23. [mitaka][hackathon] Mitaka Bug Smash Hackathon in Bay Area
      (March 7-9) (Boris Pavlovic)
    24. [nova] Update on scheduler and resource tracker progress
      (Jay Pipes)
    25. Re: [infra][keystone][kolla][bandit] linters jobs
      (Steven Dake (stdake))
    26. Re: [magnum][heat] Bug 1544227 (Thomas Herve)
    27. [QA][grenade] Create new grenade job (Christopher N Solis)

Message: 1
Date: Thu, 11 Feb 2016 07:33:12 -0430
From: Flavio Percoco flavio@redhat.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] "No Open Core" in 2016
Message-ID: 20160211120312.GF11619@redhat.com
Content-Type: text/plain; charset="utf-8"; Format="flowed"

On 11/02/16 17:31 +0800, Thomas Goirand wrote:

On 02/08/2016 09:54 PM, Flavio Percoco wrote:

Would our votes change if Poppy had support for OpenCDN (imagine it's
being
maintained) even if that solution is terrible?

Let's say it was doing that, and spawning instances containing OpenCDN
running on a multi-datacenter OpenStack deployment, then IMO it would be
a good candidate.

This might be an overkill for cloud admins and there'll be close to no
clouds
running their own CDN.

Oh, that, and ... not using CassandraDB. And yes, this thread is a good
place to have this topic. I'm not sure who replied to me that this thread
wasn't the place to discuss it: I respectfully disagree, since it's
another major blocker, IMO as important as, if not more than, using a free
software CDN solution.

It was me and I disagree. This thread is to talk about the open core
issue. In fact, the first email didn't even call out Poppy to begin with.
For all the remaining issues there's a spec in the governance repo that
can be used. Also, this topic was discussed in the latest TC meeting.

Flavio

--
@flaper87
Flavio Percoco
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <
http://lists.openstack.org/pipermail/openstack-dev/attachments/20160211/4d9d2c6d/attachment-0001.pgp
>


Message: 2
Date: Thu, 11 Feb 2016 14:45:43 +0200
From: Dennis Dmitriev ddmitriev@mirantis.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Fuel][QA] What is the preferred way to
bootstrap a baremetal node with Fuel on product CI?
Message-ID: 56BC8277.9020202@mirantis.com
Content-Type: text/plain; charset="windows-1252"

Thanks to all for answers!

We will keep the Fuel master node on a VM for our testing until some
specific cases require it on bare metal.
Ironic looks like a good tool for PXE provisioning and for managing other
baremetal slaves via IPMI; we will investigate how it could be used in
our testing tools later.

On 02/10/2016 12:43 PM, Vladimir Kuklin wrote:

Folks

I think the easiest and the best option here is to boot iPXE or
pxelinux with NFS and put master node image onto an NFS mount. This
one should work seamlessly.

On Wed, Feb 10, 2016 at 1:36 AM, Andrew Woodward
<awoodward@mirantis.com> wrote:

Unless we hope to gain some insight and specific testing by
installing the ISO on a bare-metal node (like UEFI), I'd propose
that we stop testing things that are well tested elsewhere (a
given ISO produces a working fuel master) and just focus on what
we want to test in this environment. 

Along this line, we could:

a) keep the fuel master node as a VM that is set up with access to the
networks with the BM nodes. We have a good set of tools to build
the master node in a VM already that we can just re-use.

b) use cobbler to control PXE based ISO boot/install, then either
create new profiles in cobbler for various fuel nodes with
different ISOs or replace the single download link. (Make sure you
transfer the image over HTTP, as TFTP will be slow for such a size.)
We have some tools and knowledge around using cobbler, as this is
effectively what fuel does itself.

c) fuel on fuel: as an extension of b, we can just use cobbler on
an existing fuel node to provision another fuel node, either from
ISO or even its own repos (we just need to send a kickstart)

d) you can find servers with good BMC or DRAC that we can issue
remote mount commands to the virtual cd-rom

e) consider using a live-cd approach (long implementation). I've been
asked about supporting this in product, where we start an
environment with live-cd, the master node makes its own home,
and then it can be moved off the live-cd when it's ready
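The HTTP-over-TFTP advice in option (b) is easy to quantify. A back-of-envelope sketch, where the ISO size and throughput figures are assumptions rather than measurements (classic lockstep TFTP with small blocks often lands near ~1 MB/s on a LAN, while plain HTTP can approach gigabit line rate):

```python
ISO_MB = 3000  # assumed: a Fuel ISO is on the order of a few GB

def transfer_minutes(size_mb, throughput_mb_s):
    """Idealized transfer time, ignoring protocol and disk overhead."""
    return size_mb / throughput_mb_s / 60

tftp = transfer_minutes(ISO_MB, 1)    # ~1 MB/s assumed for TFTP
http = transfer_minutes(ISO_MB, 100)  # ~100 MB/s assumed for HTTP on GigE
print(f"TFTP ~{tftp:.0f} min, HTTP ~{http:.1f} min")
# -> TFTP ~50 min, HTTP ~0.5 min
```

Even with generous assumptions for TFTP, the gap is large enough to justify serving the image over HTTP.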


On Tue, Feb 9, 2016 at 10:25 AM Pavlo Shchelokovskyy
<pshchelokovskyy@mirantis.com
<mailto:pshchelokovskyy@mirantis.com>> wrote:

    Hi,

    Ironic also supports running as a standalone service, w/o
    Keystone/Glance/Neutron/Nova etc. integration, deploying images
    from HTTP links. Could that be an option too?

    BTW, there is already an official project under OpenStack
    Baremetal program called Bifrost [0] that, quoting, "automates
    the task of deploying a base image onto a set of known
    hardware using Ironic" by installing and configuring Ironic in
    standalone mode.

    [0] https://github.com/openstack/bifrost

    Cheers,


    On Tue, Feb 9, 2016 at 6:46 PM Dennis Dmitriev
    <ddmitriev@mirantis.com <mailto:ddmitriev@mirantis.com>> wrote:

        Hi all!

        To run system tests on CI on a daily basis using baremetal
        servers
        instead of VMs, Fuel admin node also should be bootstrapped.

        There is no simple way to mount an ISO with Fuel as a CDROM or
        USB device to a baremetal server, so we chose provisioning
        with PXE.

        It could be done in different ways:

        - Configure a libvirt bridge as dnsmasq/tftp server for
        admin/PXE network.
              Benefits: no additional services to be configured.
              Doubts: ISO should be mounted on the CI host (via
        fusefs?); a HTTP
        or NFS server for basic provisioning should be started in
        the admin/PXE
        network (on the CI host);

        - Start a VM that is connected to admin/PXE network, and
        configure
        dnsmasq/tftp there.
              Benefits: no additional configuration on the CI host
        should be
        performed
              Doubts: starting the PXE service becomes a little
        complicated

        - Use Ironic to manage baremetal nodes.
              Benefits: good support for different hardware,
        support for
        provisioning from ISO 'out of the box'.
              Doubts: support for Ironic cannot be implemented in the
        short term, and additional investigation would be needed.

        My question is: what other benefits or doubts have I missed
        for the first two ways? Are there other ways to provision
        baremetal with Fuel that can be automated in the short term?

        Thanks for any suggestions!


        --
        Regards,
        Dennis Dmitriev
        QA Engineer,
        Mirantis Inc. http://www.mirantis.com
        e-mail/jabber: dis.xcom@gmail.com

        OpenStack Development Mailing List (not for usage questions)
        Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
        http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

    -- 
    Dr. Pavlo Shchelokovskyy
    Senior Software Engineer
    Mirantis Inc
    www.mirantis.com


-- 
-- 
Andrew Woodward
Mirantis
Fuel Community Ambassador
Ceph Community 


--
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com
www.mirantis.ru
vkuklin@mirantis.com



--
Regards,
Dennis Dmitriev
QA Engineer,
Mirantis Inc. http://www.mirantis.com
e-mail/jabber: dis.xcom@gmail.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <
http://lists.openstack.org/pipermail/openstack-dev/attachments/20160211/a4d88bff/attachment-0001.html
>


Message: 3
Date: Thu, 11 Feb 2016 13:59:16 +0100
From: Daniel Mellado daniel.mellado.es@ieee.org
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [qa] deprecating Tempest stress framework
Message-ID: 56BC85A4.5080000@ieee.org
Content-Type: text/plain; charset=windows-1252

+1 to that, it was my 2nd to-be-deprecated after javelin ;)

On 11/02/16 at 12:47, Sean Dague wrote:

In order to keep Tempest healthy I feel like it's time to prune things
that are outside of the core mission, especially when there are other
options out there.

The stress test framework in tempest is one of those. It builds on other
things in Tempest, but isn't core to it.

I'd propose that becomes deprecated now, and removed in Newton. If there
are folks that would like to carry it on from there, I think we should
spin it into a dedicated repository and just have it require tempest.

           -Sean

Message: 4
Date: Thu, 11 Feb 2016 15:10:03 +0200
From: Igor Kalnitsky ikalnitsky@mirantis.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Fuel] URL of Horizon is hard to find on
the dashboard
Message-ID:
CACo6NWCoUvPY+naZVhVZM+tubtAndeGtZ-jra=dpB+3MC5AJhg@mail.gmail.com
Content-Type: text/plain; charset=UTF-8

Vitaly,

What about adding some button with "Go" or "Visit" text? Somewhere on
the right side of the line? It'd be easy to understand what to click to
visit the dashboard.

  • Igor

On Thu, Feb 11, 2016 at 1:38 PM, Vitaly Kramskikh
vkramskikh@mirantis.com wrote:

Roman,

With SSL enabled it can still be quite long, as it contains the FQDN. And
we would also need to change the plugin link representation accordingly,
which I don't find acceptable. I think you just got used to the old
interface, where the link to Horizon was part of the deployment task
result message. We've merged a small style update to underline
Horizon/plugin links; I think it should be enough to solve the issue.

2016-02-09 20:31 GMT+07:00 Roman Prykhodchenko me@romcheg.me:

Can't we just display the same link we use in the title?

On 9 Feb 2016, at 14:14, Vitaly Kramskikh vkramskikh@mirantis.com
wrote:

Hi, Roman,

I think the only solution here is to underline the title so it would look
like a link. I don't think it's a good idea to show the full URL because:

If SSL is enabled, there will be 2 links - HTTP and HTTPS.
Plugins can provide their own links for their dashboards, and they would
be shown using exactly the same representation which is used for Horizon.
These links could be quite long.

2016-02-09 20:04 GMT+07:00 Roman Prykhodchenko me@romcheg.me:

Whoops! I forgot to attach the link. Sorry!

  1. http://i.imgur.com/8GaUtDq.png

On 9 Feb 2016, at 13:48, Roman Prykhodchenko me@romcheg.me
wrote:

Hi fuelers!

I'm not sure if it's my personal problem or the UX can be improved a
little, but I've literally spent more than 5 minutes trying to figure out
how to find the URL of Horizon. I've made a screenshot [1] and I suggest
adding a link with the full URL in its text after "The OpenStack dashboard
Horizon is now available". That would make things much more usable.

  • romcheg




--
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.





--
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.




Message: 5
Date: Thu, 11 Feb 2016 05:20:27 -0800
From: Jim Rollenhagen jim@jimrollenhagen.com
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [ironic] More midcycle details
Message-ID: 20160211132027.GE18744@jimrollenhagen.com
Content-Type: text/plain; charset=us-ascii

Hi all,

Our midcycle is next week! Here's everything you need to know.

First and foremost, please RSVP on the etherpad, and add any topics
(with your name!) that you'd like to discuss.
https://etherpad.openstack.org/p/ironic-mitaka-midcycle

Secondly, here are the time slots we'll be meeting at. All times UTC.
February 16 15:00-20:00
February 17 00:00-04:00
February 17 15:00-20:00
February 18 00:00-04:00
February 18 15:00-20:00
February 19 00:00-04:00

Our regular weekly meeting for February 15 is cancelled.

Communications: we'll be using VOIP, IRC, and etherpad.

The VOIP system is provided by the infra team. We'll be in room 7777.
More details: https://wiki.openstack.org/wiki/Infrastructure/Conferencing
You may use a telephone or any SIP client to connect.
We will not be officially recording the audio; however do note that I
can't stop anyone from doing so.

We'll be using #openstack-sprint on Freenode for the main IRC channel
for the meetup. Most of us will also be in #openstack-ironic.
Please note that these channels are publically logged.

The main etherpad for the meetup is here:
https://etherpad.openstack.org/p/ironic-mitaka-midcycle
It also contains all of these details.
We may start additional etherpads for some topics during the meeting;
those will be linked from the main etherpad.

If you need help during the midcycle, here's a good list of people to
ping in IRC, who will be at most of the time slots:
* jroll (me)
* devananda
* jlvillal
* TheJulia

If you have any questions or comments, please reply to this email or
ping me directly in IRC.

Hope to see you all there! :)

// jim


Message: 6
Date: Thu, 11 Feb 2016 19:15:51 +0530
From: Pankaj Mishra pm.mishra167@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: [openstack-dev] Glance Image signing and verification
Message-ID:
CAD1-J_Da7ko+Qr_HiOgG7PmCxbkB3Xb6iix7gmhrqnW90bZVow@mail.gmail.com
Content-Type: text/plain; charset="utf-8"

Hi,

I am new to OpenStack. I want to create an image through the glance CLI,
and I am referring to the blueprint
https://blueprints.launchpad.net/glance/+spec/image-signing-and-verification-support

and I am using the command below to create the image. What are the steps
for Glance image signing and verification using the glance CLI?

glance --os-image-api-version 2 image-create [--architecture <ARCHITECTURE>]
    [--protected [True|False]] [--name <NAME>]
    [--instance-uuid <INSTANCE_UUID>] [--min-disk <MIN_DISK>]
    [--visibility <VISIBILITY>] [--kernel-id <KERNEL_ID>]
    [--tags <TAGS> [<TAGS> ...]] [--os-version <OS_VERSION>]
    [--disk-format <DISK_FORMAT>] [--self <SELF>]
    [--os-distro <OS_DISTRO>] [--id <ID>]
    [--owner <OWNER>] [--ramdisk-id <RAMDISK_ID>]
    [--min-ram <MIN_RAM>] [--container-format <CONTAINER_FORMAT>]
    [--property <key=value>] [--file <FILE>] [--progress]

Could anyone suggest how to execute this command for image signing
and verification?

Thanks & Regards,

Pankaj
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <
http://lists.openstack.org/pipermail/openstack-dev/attachments/20160211/ab72d6f1/attachment-0001.html
>


Message: 7
Date: Thu, 11 Feb 2016 08:51:08 -0500
From: Nikhil Komawar nik.komawar@gmail.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Glance Image signing and verification
Message-ID: 56BC91CC.9090503@gmail.com
Content-Type: text/plain; charset=utf-8

Hi Pankaj,

Here's an example instruction set for that feature.

https://etherpad.openstack.org/p/liberty-glance-image-signing-instructions

Hope it helps.
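For orientation, the flow in those instructions is roughly: hash the image, sign the hash with a private key (e.g. via `openssl dgst -sha256 -sign`), store the signing certificate in the key manager, then pass the signature metadata as image properties at image-create time. A rough sketch of assembling the property set follows; the property names and accepted values changed between releases, so treat every name here as illustrative rather than the exact API:

```python
import base64

def signing_properties(signature_bytes, cert_uuid):
    """Assemble illustrative image-signing properties.

    signature_bytes: the raw RSA signature produced outside this sketch
    (e.g. by openssl); cert_uuid: the key-manager UUID of the certificate.
    Property names are assumptions, not the release-exact keys.
    """
    return {
        "signature": base64.b64encode(signature_bytes).decode(),
        "signature_hash_method": "SHA-256",
        "signature_key_type": "RSA-PSS",
        "signature_certificate_uuid": cert_uuid,
    }

props = signing_properties(b"\x00fake-signature", "11111111-2222-3333-4444-555555555555")
# Each entry would be passed to `glance image-create` as a repeated
# `--property key=value` argument.
for key, value in sorted(props.items()):
    print(f"--property {key}={value}")
```

The etherpad linked above has the authoritative step-by-step commands; this only shows the shape of the metadata involved.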

On 2/11/16 8:45 AM, Pankaj Mishra wrote:

Hi,

I am new to OpenStack. I want to create an image through the glance CLI,
and I am referring to the blueprint

https://blueprints.launchpad.net/glance/+spec/image-signing-and-verification-support

and I am using the command below to create the image. What are the steps
for Glance image signing and verification using the glance CLI?

glance --os-image-api-version 2 image-create [--architecture <ARCHITECTURE>]
    [--protected [True|False]] [--name <NAME>]
    [--instance-uuid <INSTANCE_UUID>]
    [--min-disk <MIN_DISK>] [--visibility <VISIBILITY>]
    [--kernel-id <KERNEL_ID>]
    [--tags <TAGS> [<TAGS> ...]]
    [--os-version <OS_VERSION>]
    [--disk-format <DISK_FORMAT>] [--self <SELF>]
    [--os-distro <OS_DISTRO>] [--id <ID>]
    [--owner <OWNER>] [--ramdisk-id <RAMDISK_ID>]
    [--min-ram <MIN_RAM>]
    [--container-format <CONTAINER_FORMAT>]
    [--property <key=value>] [--file <FILE>]
    [--progress]

Could anyone suggest how to execute this command for image
signing and verification?

Thanks & Regards,

Pankaj



--

Thanks,
Nikhil


Message: 8
Date: Thu, 11 Feb 2016 14:44:24 +0000
From: Jeremy Stanley fungi@yuggoth.org
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [infra] [trove] gate jobs failing with
ovh apt mirrors
Message-ID: 20160211144424.GE2343@yuggoth.org
Content-Type: text/plain; charset=us-ascii

On 2016-02-11 07:00:01 +0000 (+0000), Craig Vyvial wrote:

I started noticing more of the Trove gate jobs failing in the last 24
hours, and I think I've tracked it down to this mirror specifically:
http://mirror.bhs1.ovh.openstack.org/ubuntu/pool/main/p/
It looks like it's missing the python-software-properties package,
causing our gate job to fail.
[...]

I think you're looking for this:

http://mirror.bhs1.ovh.openstack.org/ubuntu/pool/universe/s/software-properties/

[...]

The error there implies that diskimage-builder invocation in your
job is reusing the host's apt sources but not its apt configuration,
and so is expecting the packages on the mirrors to be secure-apt
signed by a trusted key.
--
Jeremy Stanley


Message: 9
Date: Thu, 11 Feb 2016 15:01:15 +0000
From: John Belamaric jbelamaric@infoblox.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] [ipam] Migration to pluggable
IPAM
Message-ID: 1CF4D11A-B29E-4327-8525-AF4F5A8C8B6C@infoblox.com
Content-Type: text/plain; charset="Windows-1252"


John Belamaric
(240) 383-6963

On Feb 11, 2016, at 5:37 AM, Ihar Hrachyshka ihrachys@redhat.com
wrote:

What's the user-visible change in behaviour after the switch? If it's
only an internal implementation change, I don't see why we want to leave
the choice to operators.

It is only internal implementation changes.

The other aspect is the deprecation process. If you add the switch into
the DB migration path then the whole deprecation becomes superseded as the
old IPAM logic should be abandoned immediately after that. But perhaps the
other way of looking at it is that we should make an exception in the
deprecation process.

Salvatore

On 11 February 2016 at 00:19, Carl Baldwin carl@ecbaldwin.net wrote:
On Thu, Feb 4, 2016 at 8:12 PM, Armando M. armamig@gmail.com wrote:

Technically we can make this as sophisticated and seamless as we
want, but
this is a one-off, once it's done the pain goes away, and we won't be
doing
another migration like this ever again. So I wouldn't over engineer
it.

Frankly, I was worried that going the other way was over-engineering
it. It will be more difficult for us to manage this transition.

I'm still struggling to see what makes this particular migration
different than other cases where we change the database schema and the
code a bit and we automatically migrate everyone to it as part of the
routine migration. What is it about this case that necessitates
giving the operator the option?

Carl
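Carl's point about treating this like any other routine migration can be pictured as a plain copy of allocation state from the reference implementation's tables into the pluggable driver's own tables. A toy sketch of that data migration, where every table and field name is illustrative rather than Neutron's actual schema:

```python
# Built-in IPAM state, keyed by subnet: which addresses are allocated.
# (Illustrative shape, not Neutron's real tables.)
builtin_allocations = {
    "subnet-1": ["10.0.0.3", "10.0.0.4"],
    "subnet-2": ["192.168.1.10"],
}

def migrate_to_pluggable(builtin):
    """Copy each subnet's allocations into the pluggable driver's store.

    A real migration would run inside an alembic data migration and
    create the driver's subnet records first; this only shows that the
    operation is a mechanical copy with no operator decision involved.
    """
    pluggable = {}
    for subnet_id, ips in builtin.items():
        pluggable[subnet_id] = {"allocations": sorted(ips)}
    return pluggable

print(migrate_to_pluggable(builtin_allocations)["subnet-2"])
```

If the copy is this mechanical, the argument goes, there is nothing for the operator to choose, which is the crux of the question above.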








Message: 10
Date: Thu, 11 Feb 2016 16:30:28 +0100
From: Mateusz Matuszkowiak mmatuszkowiak@mirantis.com
To: "Elizabeth K. Joseph" lyz@princessleia.com
Cc: OpenStack Development Mailing List
openstack-dev@lists.openstack.org, OpenStack Infra
openstack-infra@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack-Infra] Gerrit downtime on
Friday 2016-02-12 at 22:00 UTC
Message-ID: 0922A746-C0AC-4EF8-A7AF-F49805EA8467@mirantis.com
Content-Type: text/plain; charset="utf-8"

Hello!

I have created a small patch [0] which is about renaming
"openstack/fuel-plugin-astra" to "openstack/fuel-plugin-astara" (one
character was missing).
Please also include it in the Gerrit downtime.

Thanks in advance.

[0] https://review.openstack.org/#/c/279138/ <
https://review.openstack.org/#/c/279138/

Regards,
--
Fuel DevOps
Mateusz Matuszkowiak

On Feb 10, 2016, at 12:05 AM, Elizabeth K. Joseph lyz@princessleia.com
wrote:

Hi everyone,

On Friday, February 12th at 22:00 UTC Gerrit will be unavailable for
about 60 minutes while we rename some projects.

Existing reviews, project watches, etc, should all be carried over.
Currently, we plan on renaming the following projects:

openstack/ceilometer-specs -> openstack/telemetry-specs
openstack/sahara-scenario -> openstack/sahara-tests

This list is subject to change.

If you have any questions about the maintenance, please reply here or
contact us in #openstack-infra on Freenode.

--
Elizabeth Krumbach Joseph || Lyz || pleia2


OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <
http://lists.openstack.org/pipermail/openstack-dev/attachments/20160211/57cf78f3/attachment-0001.html
>


Message: 11
Date: Thu, 11 Feb 2016 16:31:52 +0100
From: Simon Pasquier spasquier@mirantis.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Fuel][Plugins] Multi release packages
Message-ID:
CAOq3GZU1O=eUenRsgbT9BCNxQ5=o7zh9gnaFM5ms_0A0V_5HMg@mail.gmail.com
Content-Type: text/plain; charset="utf-8"

Hi,

On Thu, Feb 11, 2016 at 11:46 AM, Igor Kalnitsky ikalnitsky@mirantis.com
wrote:

Hey folks,

The original idea is to provide a way to build plugins that are
compatible with a few releases. It makes sense to me, because it looks
awful if you need to maintain different branches for different Fuel
releases when there's no difference in the sources. In that case, each
bugfix to deployment scripts requires:

  • backporting the bugfix to other branches (N backports)
  • building new packages for supported releases (N builds)
  • releasing new packages (N releases)

It's somewhat... annoying.

A big +1 on Igor's remark. I've already expressed it in another thread, but
it should be expected that plugin developers want to support 2 consecutive
versions of Fuel for a given version of their plugin.
That being said, I've never had issues doing it with the current plugin
framework. Except when Fuel breaks backward compatibility, but that's
another story...

Simon

However, I'm starting to agree that having an all-in-one RPM when
deployment scripts are different, tasks are different, and roles/volumes
are different probably isn't a good idea. It basically means that your
sources are completely different, and that means you have different
implementations of the same plugin. In that case, in order to avoid a
mess in the source tree, it'd be better to separate such implementations
at the VCS level.

But I'd like to hear more opinions from plugin developers.

  • Igor

On Thu, Feb 11, 2016 at 9:16 AM, Bulat Gaifullin
bgaifullin@mirantis.com wrote:

I agree with Stas: one rpm - one version.

But the plugin builder allows specifying several releases as compatible.
The deployment tasks and repositories can be specified per release; at
the same time, the deployment graph is one for all releases.
Currently it looks like a half-implemented feature. Should we drop this
feature, or should we finish implementing it?

Regards,
Bulat Gaifullin
Mirantis Inc.

On 11 Feb 2016, at 02:41, Andrew Woodward xarses@gmail.com wrote:

On Wed, Feb 10, 2016 at 2:23 PM Dmitry Borodaenko <
dborodaenko@mirantis.com>
wrote:

+1 to Stas, supplanting VCS branches with code duplication is a path
to
madness and despair. The dubious benefits of a cross-release
backwards
compatible plugin binary are not worth the code and infra technical
debt
that such approach would accrue over time.

Supporting multiple fuel releases will likely result in madness as
discussed; however, as we look to support multiple OpenStack releases
from the same version of fuel, this methodology becomes much more
important.

On Wed, Feb 10, 2016 at 07:36:30PM +0300, Stanislaw Bogatkin wrote:

It changes mostly nothing for the case of furious plugin development,
when big parts of the code change from one release to another.

You will have 6 different deployment_tasks directories and 30 slightly
different files in the root directory of the plugin. Also you forgot
about the repositories directory (+6 at least), pre-build hooks (also 6)
and so on.
It will look like hell after just 3 years of development.

Also I can't imagine how to deal with plugin licensing if you have
Apache
for liberty but BSD for mitaka release, for example.

A much easier way to develop a plugin is to keep its source in a VCS like
Git and just make branches for every fuel release. That gives us the
opportunity to not store a bunch of similar but slightly different files
in the repo. There is no reason to drag along all the different versions
of code for a specific release.

On the other hand, there is a pro - your plugin can survive an upgrade if
it supports the new release; no changes needed here.

On Wed, Feb 10, 2016 at 4:04 PM, Alexey Shtokolov <ashtokolov@mirantis.com> wrote:

Fuelers,

We are discussing the idea of extending the multi-release packages for plugins.

Fuel plugin builder (FPB) can create one rpm package for all supported releases (from metadata.yaml), but we can specify only deployment scripts and repositories per release.

Current release definition (in metadata.yaml):
- os: ubuntu
  version: liberty-8.0
  mode: ['ha']
  deployment_scripts_path: deployment_scripts/
  repository_path: repositories/ubuntu

This will result in far too much clutter.
For starters, we should support nested overrides. For example, the author may have already accounted for the changes from one OpenStack version to another; in this case they should only need to define the releases they support, without specifying any additional locations. Later they may determine that they only need to replace packages, or one other file; they should not be required to spell out every location for each release.

Also, at the same time we MUST clean up importing the various yaml files, specifically tasks, volumes, node roles, and network roles. Requiring that they all be maintained in a single file doesn't scale; we don't require it for tasks.yaml in fuel-library, and we should not require it in plugins. We should simply do the same thing as tasks.yaml in library: scan the subtree for specific file names and merge them all together. (This has been expressed multiple times by people with larger plugins.)
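The scan-and-merge behaviour described above could look roughly like this. A minimal sketch, assuming the merge is a simple list concatenation; the real loader would use PyYAML on tasks.yaml files, but plain JSON stands in here to keep the example self-contained:

```python
import json
import os
import tempfile

def collect_tasks(root, filename="tasks.json"):
    """Walk the plugin subtree and merge every per-directory task list
    into one flat list, mirroring how fuel-library merges tasks.yaml."""
    merged = []
    for dirpath, _dirnames, filenames in os.walk(root):
        if filename in filenames:
            with open(os.path.join(dirpath, filename)) as f:
                merged.extend(json.load(f))
    return merged

# Build a tiny plugin tree with two scattered task files.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "roles", "controller"))
with open(os.path.join(root, "tasks.json"), "w") as f:
    json.dump([{"id": "setup"}], f)
with open(os.path.join(root, "roles", "controller", "tasks.json"), "w") as f:
    json.dump([{"id": "deploy"}], f)

tasks = collect_tasks(root)
print(sorted(t["id"] for t in tasks))  # ['deploy', 'setup']
```

With this shape, plugin authors can keep each role's tasks next to the role instead of one monolithic file.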

So the idea [0] is to make releases fully configurable.
Suggested changes for release definition (in metadata.yaml):

  components_path: components_liberty.yaml
  deployment_tasks_path: deployment_tasks_liberty/  # <- folder
  environment_config_path: environment_config_liberty.yaml
  network_roles_path: network_roles_liberty.yaml
  node_roles_path: node_roles_liberty.yaml
  volumes_path: volumes_liberty.yaml

I see one issue: if we change anything for one release (e.g. fix a deployment_tasks typo), revalidation is needed for all releases.

Your pros and cons, please?

[0] https://review.openstack.org/#/c/271417/


WBR, Alexey Shtokolov

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
with best regards,
Stan.


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <
http://lists.openstack.org/pipermail/openstack-dev/attachments/20160211/6368b0b9/attachment-0001.html
>


Message: 12
Date: Thu, 11 Feb 2016 15:43:40 +0000
From: Craig Vyvial cp16net@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [infra] [trove] gate jobs failing with
ovh apt mirrors
Message-ID:
CAOK58XTpsBmbjWafyMsotsa__dfhP7rBTYohdBcttm0BS2NFVQ@mail.gmail.com
Content-Type: text/plain; charset="utf-8"

Jeremy,

Thanks for looking at this. That makes sense but I'm not sure how to
resolve this issue with the current diskimage-builder elements. If anyone
has ideas it would be greatly appreciated.

Thanks,
-Craig Vyvial

On Thu, Feb 11, 2016 at 8:44 AM Jeremy Stanley fungi@yuggoth.org wrote:

On 2016-02-11 07:00:01 +0000 (+0000), Craig Vyvial wrote:

I started noticing more of the Trove gate jobs failing in the last 24 hours, and I think I've tracked it down to this mirror specifically:
http://mirror.bhs1.ovh.openstack.org/ubuntu/pool/main/p/
It looks like it's missing the python-software-properties package, causing our gate job to fail.
[...]

I think you're looking for this:

http://mirror.bhs1.ovh.openstack.org/ubuntu/pool/universe/s/software-properties/


http://logs.openstack.org/50/278050/1/check/gate-trove-functional-dsvm-mysql/e70f5c0/logs/devstack-gate-post_test_hook.txt.gz#_2016-02-11_05_12_01_023

[...]

The error there implies that the diskimage-builder invocation in your job is reusing the host's apt sources but not its apt configuration, and so is expecting the packages on the mirrors to be secure-apt signed by a trusted key.
--
Jeremy Stanley


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <
http://lists.openstack.org/pipermail/openstack-dev/attachments/20160211/79951157/attachment-0001.html
>


Message: 13
Date: Thu, 11 Feb 2016 16:01:21 +0000
From: "John Joyce (joycej)" joycej@cisco.com
To: "openstack-dev@lists.openstack.org"
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron] Broken Gate tests on branch
stable/kilo, Project: openstack/neutron
Message-ID: 98ed36b1c2c54f6b9902a0b13068c8c2@XCH-RTP-013.cisco.com
Content-Type: text/plain; charset="utf-8"

I was trying to cherry-pick a change in Liberty back to stable/kilo:
https://review.openstack.org/#/c/277962/
Many of the gate tests failed, and I noticed that this appears to be the case for most of the reviews going back a while. From a quick check, I did not see anything related to this change that would have caused the failure signatures I was seeing.

Is this a known problem? Is anyone already working on it in any capacity?
John


Message: 14
Date: Thu, 11 Feb 2016 17:08:19 +0100
From: Ihar Hrachyshka ihrachys@redhat.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] Broken Gate tests on branch
stable/kilo, Project: openstack/neutron
Message-ID: A3538DD1-86CB-4051-B0D8-FE75FEF2C087@redhat.com
Content-Type: text/plain; charset=us-ascii; delsp=yes; format=flowed

John Joyce (joycej) joycej@cisco.com wrote:

I was trying to cherry pick a change in Liberty back to stable Kilo:
https://review.openstack.org/#/c/277962/
Many of the gate tests failed and I notice that appears to be the case
with most of the reviews going back a while in the past. From a quick

check I did not see anything related to this change that would have
caused the failure signatures I was seeing.

Is this a known problem? Is anyone already working on it in any
capacity?
John

The issue was due to a fixtures release. Now fixed. You already rechecked, so just wait for new test results.

Ihar


Message: 15
Date: Thu, 11 Feb 2016 11:16:01 -0500
From: Matthew Treinish mtreinish@kortar.org
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] All hail the new per-region pypi, wheel
and apt mirrors
Message-ID: 20160211161601.GA22458@sazabi.kortar.org
Content-Type: text/plain; charset="us-ascii"

On Wed, Feb 10, 2016 at 06:45:25PM -0600, Monty Taylor wrote:

Hey everybody,

tl;dr - We have new AFS-based consistent per-region mirrors of PyPI and APT repos, with additional wheel repos containing pre-built wheels for all the modules in global-requirements.

We've just rolled out a new change that you should mostly never notice - except that jobs should be a bit faster and more reliable.

The underpinning of the new mirrors is AFS, which is a global distributed filesystem developed by Carnegie Mellon back in the 1980s. In a lovely fit of old-is-new-again, the challenges that software had to deal with in the 80s (flaky networks, frequent computer failures) mirror life in the cloud pretty nicely, and the engineering work to solve them winds up being quite relevant.

One of the nice things we get from AFS is the ability to do atomic, consistent releases of new filesystem snapshots to read-only replicas. That means we can build a new version of our mirror content, check it for consistency, and then release it for consumption to all of the consumers at the same time. That's important for the gate, because our "package not found" errors are usually about the mirror state shifting during a test job run.

We've had per-region PyPI mirrors for quite some time (and indeed the gate would largely be dead in the water without them). The improvement from this work is that they're now AFS-based, so we should never have a visible mirror state that's wonky or inconsistent between regions, and we can more easily expand into new cloud regions.

We've added per-region apt mirrors (with yum to come soon) to the mix based on the same concept - we build the new mirror state, then release it. There is one additional way that apt can fail even with consistent mirror states: apt repos purge old versions of packages that are no longer referenced. If a new mirror state rolls out between the time devstack runs apt-get update and the time it tries to apt-get install something, you can get a situation where apt is trying to install a version of a package that is no longer present in the archive. To mitigate this, we're purging our mirror on a delay: our mirror runs every 2 hours; in each run we add new packages and update the index, and then in the next run we delete the packages the previous run made unreferenced. This should make apt errors about packages not found go away.
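The delayed-purge scheme amounts to two-phase garbage collection: a package is deleted only after it has been unreferenced for a full mirror run. A sketch of the idea (function and package names here are illustrative, not the actual infra scripts):

```python
def run_mirror_update(stored, pending_delete, referenced):
    """One 2-hourly mirror run: add newly referenced packages, but
    delete only what the *previous* run marked as unreferenced."""
    # Phase 1: drop packages flagged last run and still unreferenced now.
    stored -= {p for p in pending_delete if p not in referenced}
    # Phase 2: add everything currently referenced by the index.
    stored |= referenced
    # Packages now unreferenced become deletion candidates for next run.
    return stored, stored - referenced

stored = {"foo_1.0", "bar_2.0"}
# Run N: foo_1.1 supersedes foo_1.0, but foo_1.0 survives this run,
# so an apt-get install started mid-run can still fetch it.
stored, pending = run_mirror_update(stored, set(), {"foo_1.1", "bar_2.0"})
assert "foo_1.0" in stored
# Run N+1: foo_1.0 has been unreferenced for a full run; now it is purged.
stored, pending = run_mirror_update(stored, pending, {"foo_1.1", "bar_2.0"})
assert "foo_1.0" not in stored
```

The one-run grace period is what closes the window between apt-get update and apt-get install.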

Last but certainly not least, there are now also wheel repositories of wheels built for all of our Python packages from global-requirements. This is a speed increase and shaves tens of minutes off of a normal devstack run.

This is a big win for everyone. You can see the speed improvement:

http://status.openstack.org/openstack-health/#/test/devstack?end=2016-02-10T15:04:32.039Z&resolutionKey=hour&duration=P1M

It's quite obvious devstack started getting much faster when the wheel mirror was enabled.

With these changes, it means we're writing not only pip.conf but now also sources.list files into the test nodes. If you happen to be doing extra special things with either of those in your jobs, you'll want to make sure you consume the config files we're laying down.

Finally, although all Infra projects are a team effort - a big shout out to Michael Krotscheck and Jim Blair for diving in and getting this finished over the past couple of weeks.

Monty
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: <
http://lists.openstack.org/pipermail/openstack-dev/attachments/20160211/9b0d1285/attachment-0001.pgp


Message: 16
Date: Thu, 11 Feb 2016 16:23:54 +0000
From: Hongbin Lu hongbin.lu@huawei.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum][heat] Bug 1544227
Message-ID:
0957CD8F4B55C0418161614FEC580D6B01BE49AB@YYZEML702-CHM.china.huawei.com

Content-Type: text/plain; charset="us-ascii"

Rabi,

As you observed, I have uploaded two testing patches [1][2] that depend on your fix patch [3] and the reverted patch [4] respectively. An observation is that the test "gate-functional-dsvm-magnum-mesos" failed in [1], but passed in [2]. That implies the reverted patch does resolve an issue (although I am not sure exactly how).

I did notice there are several 404 errors from Neutron, but those errors exist in successful tests as well, so I don't think they are the root cause.

[1] https://review.openstack.org/#/c/278578/
[2] https://review.openstack.org/#/c/278778/
[3] https://review.openstack.org/#/c/278576/
[4] https://review.openstack.org/#/c/278575/

Best regards,
Hongbin

-----Original Message-----
From: Rabi Mishra [mailto:ramishra@redhat.com]
Sent: February-11-16 12:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][heat] Bug 1544227

Hi,

We did some analysis of the issue you are facing.

One of the issues from the heat side is that we convert None (singleton) resource references to 'None' (string), and the translation logic is not ignoring them, though we don't apply translation rules to resource references [1]. We don't see this issue after this patch [2].

The issue you mentioned below with respect to SD and SDG does not look like something to do with this patch. I also see similar issues when you tested with the reverted patch [3].

I also noticed that there are some 404s from Neutron in the engine logs [4] for the test patch. I did not notice them when I tested locally with the templates you had provided.

Having said that, we can still revert the patch if that resolves your issue.

[1]
https://github.com/openstack/heat/blob/master/heat/engine/translation.py#L234

[2] https://review.openstack.org/#/c/278576/
[3]
http://logs.openstack.org/78/278778/1/check/gate-functional-dsvm-magnum-k8s/ea48ba2/console.html#_2016-02-11_03_07_49_039

[4]
http://logs.openstack.org/78/278578/1/check/gate-functional-dsvm-magnum-swarm/51eeb3b/logs/screen-h-eng.txt

Regards,
Rabi

Hi Heat team,

As mentioned in IRC, the magnum gate broke with bug 1544227. Rabi submitted a fix (https://review.openstack.org/#/c/278576/), but it doesn't seem to be enough to unblock the broken gate. In particular, it seems templates with a SoftwareDeploymentGroup resource fail to complete (I have commented on the review above on how to reproduce).

Right now, I prefer to merge the revert patch (https://review.openstack.org/#/c/278575/) to unblock our gate immediately, unless someone can work on a quick fix. We appreciate the help.

Best regards,
Hongbin


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Message: 17
Date: Thu, 11 Feb 2016 18:51:03 +0200
From: Shoham Peller shoham.peller@stratoscale.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova] Updating a volume attachment
Message-ID:
CACKFtusThtMUi2_RAETwRjXGtH0-t4=56u2yS9uE3ZMTWkTVxQ@mail.gmail.com
Content-Type: text/plain; charset="utf-8"

Hi,

Currently there is no way to update a volume attachment's bdm parameters, i.e. the bus type or device name, without detaching and re-attaching the volume while supplying the new parameters. This is obviously not ideal, and if the volume we want to update is the boot volume, detaching to update the bdm is not even possible.

I want to send a spec proposal to allow updating those attachment parameters, while the server is shut off of course.

A question is what you think the right API for this request is.
My first thought is to expand the "os-volume_attachments" API and add a PUT method which will accept new bdm parameters. The problem is that the POST method doesn't accept all the parameters in a bdm, only a device name.

The options I can think of are:
1. Add to the POST and PUT "os-volume_attachments" methods, in the volumeAttachment dict, all the parameters from the bdm that we want the user to be able to update - probably just device_type and bus_type (device_name is already present). They will be optional of course, for backward compatibility.
2. Instead of expanding the "os-volume_attachments" API, expand the other way to attach a volume - through a server "attach" action request. We can add another action - "attachUpdate" - which will get the same parameters as the volume attach, including the bdm parameters, and with that the user can update the bdm.
What do you think about the proposal and about the API dilemma?

Thanks,
Shoham
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <
http://lists.openstack.org/pipermail/openstack-dev/attachments/20160211/9f0e7166/attachment-0001.html
>


Message: 18
Date: Thu, 11 Feb 2016 17:04:42 +0000
From: Andrea Rosa andrea.rosa@hpe.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Updating a volume attachment
Message-ID: 56BCBF2A.3080908@hpe.com
Content-Type: text/plain; charset=windows-1252

Hi

On 11/02/16 16:51, Shoham Peller wrote:

if the volume we want to update is the boot
volume, detaching to update the bdm, is not even possible.

You might be interested in the approved spec [1] we have for Mitaka
(ref. detach boot volume).
Unfortunately the spec was not part of the high-priority list and it
didn't get implemented but we are going to propose it again for Newton.

Regards
--
Andrea Rosa

[1]
http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/detach-boot-volume.html


Message: 19
Date: Thu, 11 Feb 2016 09:04:10 -0800
From: "Armando M." armamig@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] [ipam] Migration to pluggable
IPAM
Message-ID:
CAK+RQeZ+2OSeRKJWznHsG8dVifD=hrNDSWf4nzG5x9NFC6tD0A@mail.gmail.com
Content-Type: text/plain; charset="utf-8"

On 10 February 2016 at 15:19, Carl Baldwin carl@ecbaldwin.net wrote:

On Thu, Feb 4, 2016 at 8:12 PM, Armando M. armamig@gmail.com wrote:

Technically we can make this as sophisticated and seamless as we want, but this is a one-off; once it's done the pain goes away, and we won't be doing another migration like this ever again. So I wouldn't over-engineer it.

Frankly, I was worried that going the other way was over-engineering
it. It will be more difficult for us to manage this transition.

I'm still struggling to see what makes this particular migration
different than other cases where we change the database schema and the
code a bit and we automatically migrate everyone to it as part of the
routine migration. What is it about this case that necessitates
giving the operator the option?

I believe we have more recovery options out of a potentially fatal situation. In fact, the offline script can provide a dry-run option that just validates that the migration will succeed before it is actually performed; I think the size and the number of tables involved in the data migration justify this course of action rather than the other. Think about what Sean said: bugs are always lurking in the dark, and as much as we can strive for correctness, things might go bad. This is not a routine migration, and some operators may not be in a rush to embrace pluggable IPAM, hence I don't think we are in a position to make the decision on their behalf and go through the usual fast-path deprecation process.
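The dry-run idea described above amounts to running the full validation pass without committing anything. Schematically (function and field names are illustrative, not the actual neutron migration script):

```python
def migrate_to_pluggable_ipam(subnets, dry_run=True):
    """Validate every subnet row first; write only when validation is
    clean and dry_run is off. Returns the list of problem subnet ids."""
    problems = [s["id"] for s in subnets
                if s.get("allocation_pools") is None]
    if problems or dry_run:
        return problems              # nothing is written in either case
    for s in subnets:
        s["ipam_driver"] = "internal"  # the actual data migration step
    return []

subnets = [{"id": "a", "allocation_pools": []},
           {"id": "b", "allocation_pools": None}]
# Dry run reports the bad row without touching anything:
assert migrate_to_pluggable_ipam(subnets, dry_run=True) == ["b"]
assert "ipam_driver" not in subnets[0]
```

The operator gets a go/no-go answer before any of the large IPAM tables are modified, which is the recovery option Armando is arguing for.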

Carl


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <
http://lists.openstack.org/pipermail/openstack-dev/attachments/20160211/be3c9d9f/attachment-0001.html
>


Message: 20
Date: Thu, 11 Feb 2016 09:04:47 -0800
From: "Armando M." armamig@gmail.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] [ipam] Migration to pluggable
IPAM
Message-ID:
CAK+RQeayK7mRarVwj114Djb16T4zpKxH19YYeC2cLFztWA4HQQ@mail.gmail.com
Content-Type: text/plain; charset="utf-8"

On 11 February 2016 at 07:01, John Belamaric jbelamaric@infoblox.com
wrote:


John Belamaric
(240) 383-6963

On Feb 11, 2016, at 5:37 AM, Ihar Hrachyshka ihrachys@redhat.com
wrote:

What's the user-visible change in behaviour after the switch? If it's only an internal implementation change, I don't see why we want to leave the choice to operators.

It is only internal implementation changes.

That's not entirely true, is it? There are config variables to change, and it opens up the possibility of a scenario that the operator may not care about.

The other aspect is the deprecation process. If you add the switch into the DB migration path, then the whole deprecation becomes superseded, as the old IPAM logic should be abandoned immediately after that. But perhaps the other way of looking at it is that we should make an exception in the deprecation process.

Salvatore

On 11 February 2016 at 00:19, Carl Baldwin carl@ecbaldwin.net
wrote:
On Thu, Feb 4, 2016 at 8:12 PM, Armando M. armamig@gmail.com wrote:

Technically we can make this as sophisticated and seamless as we
want, but
this is a one-off, once it's done the pain goes away, and we won't
be
doing
another migration like this ever again. So I wouldn't over engineer
it.

Frankly, I was worried that going the other way was over-engineering
it. It will be more difficult for us to manage this transition.

I'm still struggling to see what makes this particular migration
different than other cases where we change the database schema and
the
code a bit and we automatically migrate everyone to it as part of the
routine migration. What is it about this case that necessitates
giving the operator the option?

Carl


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <
http://lists.openstack.org/pipermail/openstack-dev/attachments/20160211/9ab83cef/attachment-0001.html
>


Message: 21
Date: Thu, 11 Feb 2016 19:20:27 +0200
From: Shoham Peller shoham.peller@stratoscale.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Updating a volume attachment
Message-ID:
CACKFtuvbFA4pD2-YjEeOM5yYh5gnyYjrduKZXB4hsZEJ8gda_Q@mail.gmail.com
Content-Type: text/plain; charset="utf-8"

Thank you Andrea for your reply.

I know this spec and it is indeed a viable solution.
However, I think allowing users to update the attachment detail, rather
than detach and re-attach a volume for every change is more robust and
more
convenient.

Also, IMHO it's a better user experience if users can use a single API call instead of a detach API call, polling for the detachment, re-attaching the volume, and polling again for the attachment if they want to power up the VM.
The bdm DB update can happen from nova-api, without RPC'ing to a compute node, and thus return only when the request has been completed fully.

Don't you agree it's needed, even when detaching a boot volume is
possible?

Shoham

On Thu, Feb 11, 2016 at 7:04 PM, Andrea Rosa andrea.rosa@hpe.com wrote:

Hi

On 11/02/16 16:51, Shoham Peller wrote:

if the volume we want to update is the boot
volume, detaching to update the bdm, is not even possible.

You might be interested in the approved spec [1] we have for Mitaka
(ref. detach boot volume).
Unfortunately the spec was not part of the high-priority list and it
didn't get implemented but we are going to propose it again for Newton.

Regards
--
Andrea Rosa

[1]

http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/detach-boot-volume.html


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <
http://lists.openstack.org/pipermail/openstack-dev/attachments/20160211/032e4de7/attachment-0001.html

Message: 22
Date: Thu, 11 Feb 2016 09:31:29 -0800
From: "Walter A. Boring IV" walter.boring@hpe.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining
when to call os-brick's connector.disconnect_volume
Message-ID: 56BCC571.1010102@hpe.com
Content-Type: text/plain; charset=windows-1252; format=flowed

There seem to be a few discussions going on here wrt detaches. One is what to do on the Nova side with calling os-brick's disconnect_volume, and also when to or not to call Cinder's terminate_connection and detach.

My original post was simply to discuss a mechanism to try to figure out the first problem: when should Nova call brick to remove the local volume, prior to calling Cinder to do something.

Nova needs to know if it's safe to call disconnect_volume or not. Cinder already tracks each attachment, and it can return the connection_info for each attachment with a call to initialize_connection. If 2 of those connection_info dicts are the same, it's a shared volume/target. Don't call disconnect_volume if there are any more of those left.
On the Cinder side of things, if terminate_connection/detach is called, the volume manager can find the list of attachments for a volume and compare that to the attachments on a host. The problem is, Cinder doesn't track the host along with the instance_uuid in the attachments table. I plan on allowing that as an API change after microversions lands, so we know how many times a volume is attached/used on a particular host. The driver can decide what to do with it at terminate_connection/detach time. This helps account for the differences in each of the Cinder backends, which we will never get all aligned to the same model. Each array/backend handles attachments differently, and only the driver knows if it's safe to remove the target or not, depending on how many attachments/usages it has on the host itself. This is the same thing as a reference counter, which we don't need, because we have the count in the attachments table, once we allow setting the host and the instance_uuid at the same time.

Walt

On Tue, Feb 09, 2016 at 11:49:33AM -0800, Walter A. Boring IV wrote:

Hey folks,
One of the challenges we have faced with the ability to attach a single volume to multiple instances is how to correctly detach that volume. The issue is a bit complex, but I'll try to explain the problem, and then describe one approach to solving one part of the detach puzzle.

Problem:
When a volume is attached to multiple instances on the same host, there are 2 scenarios here.

1) Some Cinder drivers export a new target for every attachment on a compute host. This means that you will get a new unique volume path on a host, which is then handed off to the VM instance.

2) Other Cinder drivers export a single target for all instances on a compute host. This means that every instance on a single host will reuse the same host volume path.

This problem isn't actually new. It is a problem we already have in Nova even with single attachments per volume. E.g., with NFS and SMBFS there is a single mount set up on the host, which can serve up multiple volumes. We have to avoid unmounting that until no VM is using any volume provided by that mount point. Except we pretend the problem doesn't exist and just try to unmount every single time a VM stops, relying on the kernel failing umount() with EBUSY. Except this has a race condition if one VM is stopping right as another VM is starting.

There is a patch up to try to solve this for SMBFS:

https://review.openstack.org/#/c/187619/

but I don't really much like it, because it only solves it for one driver.

AFAICT, the only real answer here is to have nova record more info
about volume attachments, so it can reliably decide when it is safe
to release a connection on the host.

Proposed solution:
Nova needs to determine whether the volume that's being detached is a shared or non-shared volume. Here is one way to determine that.

Every Cinder volume has a list of its attachments. In those attachments it contains the instance_uuid that the volume is attached to. I presume Nova can find which of the volume attachments are on the same host. Then Nova can call Cinder's initialize_connection for each of those attachments to get the target's connection_info dictionary. This connection_info dictionary describes how to connect to the target on the cinder backend. If the target is shared, then each of the connection_info dicts for each attachment on that host will be identical. Then Nova would know that it's a shared target, and only call os-brick's disconnect_volume if it's the last attachment on that host. I think at most 2 calls to cinder's initialize_connection would suffice to determine if the volume is a shared target. This would only need to be done if the volume is multi-attach capable and if there is more than 1 attachment on the same host where the detach is happening.
As above, we need to solve this more generally than just for multi-attach; even single-attach is flawed today.

Regards,
Daniel


Message: 23
Date: Thu, 11 Feb 2016 09:59:36 -0800
From: Clint Byrum clint@fewbar.com
To: openstack-dev openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [infra] [trove] gate jobs failing with
ovh apt mirrors
Message-ID: 1455213419-sup-4364@fewbar.com
Content-Type: text/plain; charset=UTF-8

Excerpts from Craig Vyvial's message of 2016-02-11 07:43:40 -0800:

Jeremy,

Thanks for looking at this. That makes sense but I'm not sure how to
resolve this issue with the current diskimage-builder elements. If
anyone
has ideas it would be greatly appreciated.

Any job using these images and sources lists will need to add in the 'apt-conf' element and set DIB_APT_CONF=/etc/apt/apt.conf.

Thanks,
-Craig Vyvial

On Thu, Feb 11, 2016 at 8:44 AM Jeremy Stanley fungi@yuggoth.org
wrote:

On 2016-02-11 07:00:01 +0000 (+0000), Craig Vyvial wrote:

I started noticing more of the Trove gate jobs failing in the last
24
hours
and I think i've tracked it down to this mirror specifically.
http://mirror.bhs1.ovh.openstack.org/ubuntu/pool/main/p/
It looks like its missing the python-software-properties package and
causing our gate job to fail.
[...]

I think you're looking for this:

http://mirror.bhs1.ovh.openstack.org/ubuntu/pool/universe/s/software-properties/


http://logs.openstack.org/50/278050/1/check/gate-trove-functional-dsvm-mysql/e70f5c0/logs/devstack-gate-post_test_hook.txt.gz#_2016-02-11_05_12_01_023

[...]

The error there implies that the diskimage-builder invocation in your
job is reusing the host's apt sources but not its apt configuration,
and so is expecting the packages on the mirrors to be secure-apt
signed by a trusted key.
--
Jeremy Stanley


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Message: 24
Date: Thu, 11 Feb 2016 13:25:58 -0500
From: Doug Hellmann doug@doughellmann.com
To: openstack-dev openstack-dev@lists.openstack.org
Subject: [openstack-dev] [release] Release countdown for week R-7, Feb
15-19
Message-ID: 1455215085-sup-8769@lrrr.local
Content-Type: text/plain; charset=UTF-8

Focus


We have 1 more week before the final releases for non-client libraries
for this cycle, and 2 weeks before the final releases for client
libraries. Project teams should be focusing on wrapping up new
feature work in all libraries.

We have 2 more weeks before the Mitaka-3 milestone and overall
feature freeze.

Release Actions


We will be more strictly enforcing the library release freeze before
M3 in 2 weeks. Please review client libraries, integration libraries,
and any other libraries managed by your team and ensure that recent
changes have been released and the global requirements and constraints
lists are up to date with accurate minimum versions and exclusions.

Projects using the cycle-with-intermediary release model need to
produce intermediate releases, if you are going to have one this
cycle. See Thierry's email for details [1].

Liaisons should be familiar with the final release process, documented
in Thierry's email [1]. We have some time to respond to questions
before we get into the crush of actually preparing the release, so
please post follow-ups on the mailing list if you have them.

Review your stable/liberty branches for necessary releases and
submit patches to openstack/releases if you want them.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-February/086152.html

Important Dates


Final release for non-client libraries: Feb 24
Final release for client libraries: Mar 2
Mitaka 3: Feb 29-Mar 4 (includes feature freeze and soft string freeze)

Mitaka release schedule:
http://docs.openstack.org/releases/schedules/mitaka.html


Message: 25
Date: Thu, 11 Feb 2016 21:35:12 +0300
From: Ilya Kutukov ikutukov@mirantis.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Fuel][Plugins] Multi release packages
Message-ID:
CABizYvT1L=hKWwun6Z5skHgrZw3sbQ-fWqkYXrCnCLw4=qQg0g@mail.gmail.com
Content-Type: text/plain; charset="utf-8"

In my opinion, I've seen no example of multiple versions of plugin
software shipped in one package or any other form of bundle. It's not
a common practice.

Anyway, we need to provide the ability to override paths in the
manifest (metadata.yaml).

So plugin developers could use these approaches to provide
multiple-version support:

  • tasks logic (do plugin developers have access to the current
    release info?)
  • hooks in the pre-build process. It's not a big deal to preprocess
    the source folder to build different packages, with scripts that
    add or remove some files or replace some paths.
  • and, perhaps, logic anchors with YACL or another DSL in task
    dependencies; if this functionality is added, it could in theory
    allow using or skipping some graph parts depending on the release.

I think this is already better than nothing and more flexible than
any standardised approach.

On Thu, Feb 11, 2016 at 6:31 PM, Simon Pasquier spasquier@mirantis.com
wrote:

Hi,

On Thu, Feb 11, 2016 at 11:46 AM, Igor Kalnitsky
ikalnitsky@mirantis.com
wrote:

Hey folks,

The original idea is to provide a way to build plugins that are
compatible with a few releases. It makes sense to me, because it
looks awful if you need to maintain different branches for different
Fuel releases when there's no difference in the sources. In that
case, each bugfix to deployment scripts requires:

  • backporting the bugfix to other branches (N backports)
  • building new packages for the supported releases (N builds)
  • releasing new packages (N releases)

It's somewhat... annoying.

A big +1 on Igor's remark. I've already expressed it in another
thread, but it should be expected that plugin developers want to
support 2 consecutive versions of Fuel for a given version of their
plugin.
That being said, I've never had issues doing it with the current
plugin framework. Except when Fuel breaks backward compatibility, but
that's another story...

Simon

However, I'm starting to agree that having an all-in-one RPM when the
deployment scripts are different, the tasks are different, and the
roles/volumes are different probably isn't a good idea. It basically
means that your sources are completely different, and that means you
have different implementations of the same plugin. In that case, in
order to avoid a mess in the source tree, it'd be better to separate
such implementations at the VCS level.

But I'd like to hear more opinions from plugin developers.

  • Igor

On Thu, Feb 11, 2016 at 9:16 AM, Bulat Gaifullin
bgaifullin@mirantis.com wrote:

I agree with Stas: one rpm - one version.

But the plugin builder allows specifying several releases as
compatible. The deployment tasks and repositories can be specified
per release, while the deployment graph is one for all releases.
Currently it looks like a half-implemented feature. Can we drop this
feature, or should we finish implementing it?

Regards,
Bulat Gaifullin
Mirantis Inc.

On 11 Feb 2016, at 02:41, Andrew Woodward xarses@gmail.com wrote:

On Wed, Feb 10, 2016 at 2:23 PM Dmitry Borodaenko
dborodaenko@mirantis.com wrote:

+1 to Stas, supplanting VCS branches with code duplication is a path
to madness and despair. The dubious benefits of a cross-release
backwards-compatible plugin binary are not worth the code and infra
technical debt that such an approach would accrue over time.

Supporting multiple Fuel releases will likely result in madness as
discussed; however, as we look to support multiple OpenStack releases
from the same version of Fuel, this methodology becomes much more
important.

On Wed, Feb 10, 2016 at 07:36:30PM +0300, Stanislaw Bogatkin wrote:

It changes mostly nothing for the case of furious plugin development,
when big parts of the code change from one release to another.

You will have 6 different deployment_tasks directories and 30
slightly different files in the root directory of the plugin. You
also forgot about the repositories directory (+6 at least), pre-build
hooks (also 6) and so on. It will look like hell after just 3 years
of development.

Also, I can't imagine how to deal with plugin licensing if you have
Apache for the liberty release but BSD for the mitaka release, for
example.

A much easier way to develop a plugin is to keep its source in a VCS
like Git and just make a branch for every Fuel release. It will give
us the opportunity to not store a bunch of similar but slightly
different files in the repo. There is no reason to drag all the
different versions of the code along for a specific release.

On the other hand there is a pro - your plugin can survive an upgrade
if it supports the new release; no changes needed here.

On Wed, Feb 10, 2016 at 4:04 PM, Alexey Shtokolov
ashtokolov@mirantis.com
wrote:

Fuelers,

We are discussing the idea of extending the multi-release packages
for plugins.

Fuel plugin builder (FPB) can create one rpm package for all
supported releases (from metadata.yaml), but we can specify only
deployment scripts and repositories per release.

Current release definition (in metadata.yaml):
- os: ubuntu
  version: liberty-8.0
  mode: ['ha']
  deployment_scripts_path: deployment_scripts/
  repository_path: repositories/ubuntu

This will result in far too much clutter.
For starters we should support nested overrides. For example, the
author may have already accounted for the changes between one
OpenStack version and another. In this case they should only need to
define the releases they support and not specify any additional
locations. Later, if they determine that they only need to replace
packages, or one other file, they should not be required to spell out
every location for each release.
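The nested-override idea could work roughly like this (a minimal
sketch: the field names follow the metadata.yaml keys discussed in
this thread, but the merge helper itself is hypothetical):

```python
# Plugin-wide defaults, as they would appear in metadata.yaml
DEFAULTS = {
    "deployment_scripts_path": "deployment_scripts/",
    "repository_path": "repositories/ubuntu",
}

def resolve_release(release):
    """Overlay a release definition on the plugin-wide defaults, so a
    release only needs to list the paths it actually overrides."""
    resolved = dict(DEFAULTS)
    resolved.update(release)
    return resolved

# A release that only overrides the repository path
mitaka = resolve_release({"version": "mitaka-9.0",
                          "repository_path": "repositories/mitaka"})
print(mitaka["deployment_scripts_path"])  # deployment_scripts/ (inherited)
print(mitaka["repository_path"])          # repositories/mitaka (overridden)
```

A release that matches the defaults would then declare nothing but
its version, which is the "no additional locations" case above.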

Also, at the same time we MUST clean up the importing of the various
yaml files - specifically tasks, volumes, node roles, and network
roles. Requiring that they all be maintained in a single file doesn't
scale; we don't require it for tasks.yaml in the fuel library, and we
should not require it in plugins. We should simply do the same thing
as tasks.yaml in the library: scan the subtree for specific file
names and just merge them all together. (This has been expressed
multiple times by people with larger plugins.)
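The tasks.yaml-style behaviour described above - scan the subtree for
files with a specific name and merge their contents - can be sketched
like this (using JSON instead of YAML so the sketch needs only the
standard library; the real fuel-library mechanism differs in detail):

```python
import json
import os
import tempfile

def collect_tasks(root, filename="tasks.json"):
    """Walk the plugin subtree, load every file with the given name,
    and merge the task lists together."""
    merged = []
    for dirpath, _dirs, files in os.walk(root):
        if filename in files:
            with open(os.path.join(dirpath, filename)) as f:
                merged.extend(json.load(f))
    return merged

# Demo: two task files in different subdirectories are merged
root = tempfile.mkdtemp()
for sub, tasks in [("roles/a", [{"id": "task-a"}]),
                   ("roles/b", [{"id": "task-b"}])]:
    d = os.path.join(root, sub)
    os.makedirs(d)
    with open(os.path.join(d, "tasks.json"), "w") as f:
        json.dump(tasks, f)

print(sorted(t["id"] for t in collect_tasks(root)))  # ['task-a', 'task-b']
```

Each role or component keeps its own small task file, and the merge
happens at build/validation time rather than in the repo.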

So the idea [0] is to make releases fully configurable.
Suggested changes for the release definition (in metadata.yaml):

  components_path: components_liberty.yaml
  deployment_tasks_path: deployment_tasks_liberty/  # <- folder
  environment_config_path: environment_config_liberty.yaml
  network_roles_path: network_roles_liberty.yaml
  node_roles_path: node_roles_liberty.yaml
  volumes_path: volumes_liberty.yaml

I see one issue: if we change anything for one release (e.g. a
deployment_task typo), revalidation is needed for all releases.

Your pros and cons, please?

[0] https://review.openstack.org/#/c/271417/


WBR, Alexey Shtokolov


--
with best regards,
Stan.



--


Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community





Message: 26
Date: Thu, 11 Feb 2016 21:53:27 +0300
From: Ilya Kutukov ikutukov@mirantis.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Fuel][Plugins] Multi release packages
Message-ID:
CABizYvQSNzhb1qiCCYbU1ZQbtfss7iTw29g50LpOrT3fQ+A-MQ@mail.gmail.com
Content-Type: text/plain; charset="utf-8"

r/YACL/YAQL/




Message: 27
Date: Thu, 11 Feb 2016 11:53:56 -0700
From: Carl Baldwin carl@ecbaldwin.net
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] [ipam] Migration to pluggable
IPAM
Message-ID:
CALiLy7qQ1kOzYFCmO_bhyMZ86OtuNFFQxnrGqDMWkF=sy-=RMw@mail.gmail.com
Content-Type: text/plain; charset=UTF-8

On Thu, Feb 11, 2016 at 3:20 AM, Salvatore Orlando
salv.orlando@gmail.com wrote:

The difference lies in the process, in my opinion.
If the switch is added into the migration path then we will tell
operators when to switch.
I was suggesting doing it manually because we just don't know if
every operator is happy about doing the switch when upgrading to
Newton, but perhaps it is just me over-worrying about operator
behaviour.

I think this is the point that makes this discussion worth having. It
does help me to hear you state this concern in this way. I'd like to
hear/read some other opinions.

The other aspect is the deprecation process. If you add the switch
into the DB migration path then the whole deprecation becomes
superseded, as the old IPAM logic should be abandoned immediately
after that. But perhaps the other way of looking at it is that we
should make an exception in the deprecation process.

I agree. If we do decide to force the switch then we should
immediately abandon the old code. However, I don't think this should
drive the decision to do the switch with the migration.

Carl


Message: 28
Date: Thu, 11 Feb 2016 11:56:15 -0700
From: Carl Baldwin carl@ecbaldwin.net
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] [ipam] Migration to pluggable
IPAM
Message-ID:
CALiLy7p5JHrwZT+ZqZOJ8vK_HzZK7e0G1R94NqrJ7k1x7rwmrg@mail.gmail.com
Content-Type: text/plain; charset=UTF-8

On Thu, Feb 11, 2016 at 3:32 AM, Ihar Hrachyshka ihrachys@redhat.com
wrote:

Salvatore Orlando salv.orlando@gmail.com wrote:

The difference lies in the process, in my opinion.
If the switch is added into the migration path then we will tell
operators when to switch.
I was suggesting doing it manually because we just don't know if
every operator is happy about doing the switch when upgrading to
Newton, but perhaps it is just me over-worrying about operator
behaviour.

What's the user-visible change in behaviour after the switch? If it's
only an internal implementation change, I don't see why we want to
leave the choice to operators.

This was my thinking exactly. However, I did not do the
implementation, Salvatore did. So, ultimately, I think that he should
be convinced that we're doing the right thing. I value his input
highly.

Carl


Message: 29
Date: Thu, 11 Feb 2016 11:58:16 -0700
From: Carl Baldwin carl@ecbaldwin.net
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] [ipam] Migration to pluggable
IPAM
Message-ID:
CALiLy7reBzPtVbeCRXTP4cWb6g8Zj+gO_N8we_avj=8X6ZC3FA@mail.gmail.com
Content-Type: text/plain; charset=UTF-8

On Thu, Feb 11, 2016 at 10:04 AM, Armando M. armamig@gmail.com wrote:

On 11 February 2016 at 07:01, John Belamaric jbelamaric@infoblox.com
wrote:

It is only internal implementation changes.

That's not entirely true, is it? There are config variables to change
and it
opens up the possibility of a scenario that the operator may not care
about.

You're right. I was thinking that if we handled the switch then we
could obsolete the config variable. Maybe we shot ourselves in the
foot already by having the config option in the first place. Is that
what you're thinking?

Carl


Message: 30
Date: Fri, 12 Feb 2016 00:44:41 +0530
From: Monika Parkar monikaparkar25@gmail.com
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Multiple delete of network through CLI is not
available as of now
Message-ID:
CAJ7mFWn8yXXLywbiDGhXqNJfQxJz3u8eM3akBgAfDUaXUQj-oQ@mail.gmail.com
Content-Type: text/plain; charset="utf-8"

Hello,

I am Monika, new to the OpenStack community, with working experience
in Python. While going through the Neutron use-cases I observed that
we can delete multiple networks at a time through the dashboard, but
the same is not possible through the command line.

But in the Ironic component the multiple-delete feature is available
through the command line.

So may I propose this as a blueprint?

Awaiting your kind response/suggestion.

Thanks & Regards
Monika


Message: 31
Date: Thu, 11 Feb 2016 19:17:39 +0000
From: John Belamaric jbelamaric@infoblox.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] [ipam] Migration to pluggable
IPAM
Message-ID: 69446D19-EB1A-4B4F-8A44-80B177560489@infoblox.com
Content-Type: text/plain; charset="us-ascii"

On Feb 11, 2016, at 12:04 PM, Armando M. armamig@gmail.com wrote:

On 11 February 2016 at 07:01, John Belamaric jbelamaric@infoblox.com
wrote:

It is only internal implementation changes.

That's not entirely true, is it? There are config variables to change
and it opens up the possibility of a scenario that the operator may
not care about.

If we were to remove the non-pluggable version altogether, then the
default for ipam_driver would switch from None to internal. Therefore,
there would be no config file changes needed.

John



Message: 32
Date: Thu, 11 Feb 2016 11:21:27 -0800
From: Boris Pavlovic boris@pavlovic.me
To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [mitaka][hackathon] Mitaka Bug Smash
Hackathon in Bay Area (March 7-9)
Message-ID:
CAD85om2OPMieJ2waaFVzMvBeSDg64FQzQ3rsFq2YufS5bkrtnA@mail.gmail.com
Content-Type: text/plain; charset="utf-8"

Hi stackers,

If you are in the Bay Area and you would like to work together with
your friends from the community on fixing non-trivial bugs, you have
a great chance.

There is going to be special event "Mitaka bug smash".
Here is the full information:
https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Mitaka

If you would like to take part and you are in the Bay Area, please
register here:
https://www.eventbrite.com/e/global-openstack-bug-smash-bay-area-tickets-21241532997?utm_source=eb_email&utm_medium=email&utm_campaign=new_event_email&utm_term=viewmyevent_button

Please also note here which projects you are interested in:
https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Mitaka-BayArea

Best regards,
Boris Pavlovic


Message: 33
Date: Thu, 11 Feb 2016 15:24:04 -0500
From: Jay Pipes jaypipes@gmail.com
To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova] Update on scheduler and resource
tracker progress
Message-ID: 56BCEDE4.5020507@gmail.com
Content-Type: text/plain; charset=utf-8; format=flowed

Hello all,

Performance working group, please pay attention to Chapter 2 in the
details section.

tl;dr
=====

At the Nova mid-cycle, we finalized decisions on a way forward in
redesigning the way that resources are tracked in Nova. This work is a
major undertaking and has implications for splitting out the scheduler
from Nova, for the ability of the placement engine to scale, and for
removing long-standing reporting and race condition bugs that have
plagued Nova for years.

The following blueprint specifications outline the effort, which we are
calling the "resource providers framework":

  • resource-classes (bp MERGED, code MERGED)
  • pci-generate-stats (bp MERGED, code IN REVIEW)
  • resource-providers (bp MERGED, code IN REVIEW)
  • generic-resource-pools (bp IN REVIEW, code TODO)
  • compute-node-inventory (bp IN REVIEW, code TODO)
  • resource-providers-allocations (bp IN REVIEW, code TODO)
  • resource-providers-scheduler (bp IN REVIEW, code TODO)

The group working on this code and doing the reviews are hopeful that
the generic-resource-pools work can be completed in Mitaka, and we also
are going to aim to get the compute-node-inventory work done in Mitaka,
though that will be more of a stretch.

The remainder of the resource providers framework blueprints will be
targeted to Newton. The resource-providers-scheduler blueprint is the
final blueprint required before the scheduler can be fully separated
from Nova.

details
=======

Chapter 1 - How the blueprints fit together
===========================================

A request to launch an instance in Nova involves requests for two
different things: resources and capabilities. Resources are the
quantitative part of the request spec. Capabilities are the qualitative
part of the request.

The resource providers framework is a set of 7 blueprints that
reorganize the way that Nova handles the quantitative side of the
equation. These 7 blueprints are described below.

Compute nodes are a type of resource provider, since they allow
instances to consume some portion of their inventory of various types
of resources. We call these types of resources "resource classes".

resource-classes bp: https://review.openstack.org/256297

The resource-providers blueprint introduces a new set of tables for
storing capacity and usage amounts of all resources in the system:

resource-providers bp: https://review.openstack.org/225546

While all compute nodes are resource providers [1], not all resource
providers are compute nodes. Generic resource pools are resource
providers that have an inventory of a single resource class and that
provide that resource class to consumers that are placed on multiple
compute nodes.

The canonical example of a generic resource pool is a shared storage
system. Currently, a Nova compute node doesn't really know whether the
storage location it uses for storing disk images is a shared
drive/cluster (ala NFS or RBD) or if the storage location is a local
disk drive [2]. The generic-resource-pools blueprint covers the addition
of these generic resource pools, their relation to host aggregates, and
the RESTful API [3] added to control this external resource pool
information.

generic-resource-pools bp: https://review.openstack.org/253187

Within the Nova database schemas [4], capacity and inventory
information is stored in a variety of tables, columns and formats.
vCPU, RAM and DISK capacity information is stored in integer fields,
PCI capacity information is stored in the pci_devices table, NUMA
inventory is stored combined together with usage information in a
JSON blob, etc. The compute-node-inventory blueprint migrates all of
the disparate capacity information from compute nodes into the new
inventory table.

compute-node-inventory bp: https://review.openstack.org/260048
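A sketch of what that migration amounts to: turning the scattered
per-resource columns into uniform inventory rows, one per (provider,
resource class) pair. The record fields and class names here are
illustrative, not the final schema:

```python
from collections import namedtuple

# One row per (provider, resource class): the same shape for vCPU,
# RAM, disk - and eventually PCI devices, NUMA cells, etc.
Inventory = namedtuple("Inventory",
                       "provider resource_class total reserved")

def migrate_compute_node(node):
    """Turn the scattered capacity fields of a legacy compute-node
    record into uniform inventory rows (illustrative only)."""
    return [
        Inventory(node["id"], "VCPU", node["vcpus"], 0),
        Inventory(node["id"], "MEMORY_MB", node["memory_mb"], 512),
        Inventory(node["id"], "DISK_GB", node["local_gb"], 0),
    ]

rows = migrate_compute_node({"id": 1, "vcpus": 16,
                             "memory_mb": 32768, "local_gb": 500})
print([r.resource_class for r in rows])  # ['VCPU', 'MEMORY_MB', 'DISK_GB']
```

Once every resource class is a row of this shape, the scheduler and
resource tracker no longer need per-resource special cases.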

For the PCI resource classes, Nova currently has an entirely different
resource tracker (in /nova/pci/*) that stores an aggregate view of the
PCI resources (grouped by product, vendor, and numa node) in the
compute_nodes.pci_stats field. This information is entirely redundant
information since all fine-grained PCI resource information is stored in
the pci_devices table. This storage of summary information presents a
sync problem. The pci-generate-stats blueprint describes the effort to
remove this storage of summary device pool information and instead
generate this summary information on the fly for the scheduler. This
work is a pre-requisite to having all resource classes managed in a
unified manner in Nova:

pci-generate-stats bp: https://review.openstack.org/240852
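Generating the summary pools on the fly, as the blueprint proposes,
amounts to a group-by over the fine-grained device rows. A sketch
with plain dicts (the field names mirror pci_devices columns, but
this is not the actual Nova code):

```python
from collections import Counter

def generate_pci_pools(devices):
    """Aggregate individual PCI device records into summary pools
    keyed by (product_id, vendor_id, numa_node), instead of storing
    that summary redundantly in compute_nodes.pci_stats."""
    pools = Counter()
    for dev in devices:
        if dev["status"] == "available":
            pools[(dev["product_id"], dev["vendor_id"],
                   dev["numa_node"])] += 1
    return dict(pools)

devices = [
    {"product_id": "0443", "vendor_id": "8086", "numa_node": 0,
     "status": "available"},
    {"product_id": "0443", "vendor_id": "8086", "numa_node": 0,
     "status": "available"},
    {"product_id": "0443", "vendor_id": "8086", "numa_node": 1,
     "status": "allocated"},
]
print(generate_pci_pools(devices))  # {('0443', '8086', 0): 2}
```

Because the pools are derived, not stored, there is no summary table
to drift out of sync with the device records.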

In the same way that capacity fields are scattered among different
tables, columns and formats, so too are the fields that store usage
information. Some fields are in the instances table, some in the
instance_extra table, some information is derived from the pci_devices
table, other bits from a JSON blob field. In short, it's an inconsistent
mess. This mess means that supporting additional types of
resources typically involves adding yet more inconsistency and
conditional logic into the scheduler and nova-compute's resource
tracker. The resource-providers-allocations blueprint involves work to
migrate all usage record information out of the disparate fields in the
current schema and into the allocations table introduced in the
resource-providers blueprint:

resource-providers-allocations bp: https://review.openstack.org/271779

Once all of the inventory (capacity) and allocation (usage) information
has been migrated to the database schema described in the
resource-providers blueprint, Nova will be treating all types of
resources in a generic fashion. The next step is to modify the scheduler
to take advantage of this new resource representation. The
resource-providers-scheduler blueprint undertakes this important step:

resource-providers-scheduler bp: https://review.openstack.org/271823

Chapter 2 - Addressing performance and scale
============================================

One of the significant performance problems with the Nova scheduler is
the fact that for every call to the select_destinations() RPC API method
-- which itself is called at least once every time a launch or migration
request is made -- the scheduler grabs all records for all compute nodes
in the deployment. Once retrieving all these compute node records, the
scheduler runs each through a set of filters to determine which compute
nodes have the required capacity to service the instance's requested
resources. Having the scheduler continually retrieve every compute node
record on each request to select_destinations() is extremely
inefficient. The greater the number of compute nodes, the bigger the
performance and scale problem this becomes.

On a loaded cloud deployment -- say there are 1000 compute nodes and 900
of them are fully loaded with active virtual machines -- the scheduler
is still going to retrieve all 1000 compute node records on every
request to select_destinations() and process each one of those records
through all scheduler filters. Clearly, if we could filter the amount of
compute node records that are returned by removing those nodes that do
not have available capacity, we could dramatically reduce the amount of
work that each call to select_destinations() would need to perform.

The resource-providers-scheduler blueprint attempts to address the above
problem by replacing a number of the scheduler filters that currently
run after the database has returned all compute node records with a
series of WHERE clauses and join conditions on the database
query. The idea here is to winnow the number of returned compute node
results as much as possible. The fewer records the scheduler must
post-process, the faster the performance of each individual call to
select_destinations().

The second major scale problem with the current Nova scheduler design
has to do with the fact that the scheduler does not actually claim
resources on a provider. Instead, the scheduler selects a destination
host to place the instance on and the Nova conductor then sends a
message to that target host which attempts to spawn the instance on its
hypervisor. If the spawn succeeds, the target compute host updates the
Nova database and decrements its count of available resources. These
steps (from nova-scheduler to nova-conductor to nova-compute to
database) all take some not insignificant amount of time. During this
time window, a different scheduler process may pick the exact same
target host for a like-sized launch request. If there is only room on
the target host for one of those size requests [5], one of those spawn
requests will fail and trigger a retry operation. This retry operation
will attempt to repeat the scheduler placement decisions (by calling
select_destinations()).

This retry operation is relatively expensive and needlessly so: if the
scheduler claimed the resources on the target host before sending its
pick back to the conductor, then the chances of producing a retry will
be almost eliminated [6]. The resource-providers-scheduler blueprint
attempts to remedy this second scaling design problem by having the
scheduler write records to the allocations table before sending the
selected target host back to the Nova conductor.
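
The claim-first idea boils down to making the capacity check and the
decrement a single atomic step. A toy in-process sketch follows; Nova's
actual implementation would write allocations rows, not take a Python
lock, so every name here is illustrative:

```python
import threading

# Toy sketch of claim-before-return: the scheduler records the
# allocation before handing its pick to the conductor, so a concurrent
# request for the same host sees the reduced capacity immediately.
class Claims:
    def __init__(self, capacity):
        self._free = dict(capacity)   # host -> free units
        self._lock = threading.Lock()

    def claim(self, host, amount):
        """Atomically check and claim capacity; never double-book."""
        with self._lock:
            if self._free.get(host, 0) < amount:
                return False          # rejected up front, no retry cycle
            self._free[host] -= amount
            return True

claims = Claims({"node1": 4})
first = claims.claim("node1", 4)   # wins the remaining capacity
second = claims.claim("node1", 4)  # would previously have raced and retried
print(first, second)
```

Without the atomic claim, both requests would be handed the same host
and the second would fail only later, at spawn time, triggering a retry.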

Conclusions
===========

Thanks if you've made it this far in this epic email. :) If you have
questions about the plans, please do feel free to respond here or come
find us on Freenode #openstack-nova IRC. Your reviews and comments are
also very welcome on the specs and patches.

Best,
-jay

[1] One might argue that nova-compute daemons that proxy for some other
resource manager like vCenter or Ironic are not actually resource
providers, but just go with me on this one...

[2] This results in a number of resource reporting bugs, including Nova
reporting that the deployment has X times as much disk capacity as it
really does (where X is the number of compute nodes sharing the same
storage location).

[3] The RESTful API in the generic-resource-pools blueprint actually
will be a completely new REST endpoint and service (/placement) that
will be the start of the new extracted scheduler service.

[4] Nova has two database schemas. The first is what is known as the
Child Cell database and contains the majority of database tables. The
second is known as the API database and contains global and top-level
routing tables.

[5] This situation is more common than you might originally think. Any
cloud that runs a pack-first placement strategy with multiple scheduler
daemon processes will suffer from this problem.

[6] Technically, it cannot be eliminated because an out-of-band
operation could theoretically occur (for example, an administrator could
manually -- not through Nova -- launch a virtual machine on the target
host) and therefore introduce some unaccounted-for amount of used
resources for a small window of time between runs of the nova-compute
periodic audit task.


Message: 34
Date: Thu, 11 Feb 2016 20:26:10 +0000
From: "Steven Dake (stdake)" stdake@cisco.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [infra][keystone][kolla][bandit] linters
jobs
Message-ID: D2E23BEA.1670F%stdake@cisco.com
Content-Type: text/plain; charset="iso-8859-1"

Andreas,

Totally understand the overload problem with no short-term workarounds. I
think all engineering in OpenStack is over capacity a bit and folks are
really burning the midnight oil to make sure Mitaka is the best release of
OpenStack yet!

Please feel free to drop by #kolla and ping the core reviewers if you need
any help getting this work reverted until the timing is better.

Regards
-steve

On 2/11/16, 1:50 AM, "Andreas Jaeger" aj@suse.com wrote:

On 2016-02-11 02:50, Joshua Hesketh wrote:

Hey Andreas,

Why not keep pep8 as an alias for the new linters target? Would this
allow for a transition path while work on updating the PTI is done?

pep8 and linters do different work in infra, and infra calls pep8. A
project can have both...

It's more than updating PTI, it's also taking care that linters does the
same tests as pep8 and then updating all projects...

Andreas

Cheers,
Josh

On Thu, Feb 11, 2016 at 6:55 AM, Andreas Jaeger <aj@suse.com> wrote:

Hi,

the pep8 target is our usual target to include style and lint checks
and thus is used besides pep8 also for doc8, bashate, bandit, etc as
documented in the PTI (=Python Test Interface,
http://governance.openstack.org/reference/cti/python_cti.html).

We've had some discussions to introduce a new target called linters as
a better name for this and when I mentioned this in a few discussions,
it resonated with these projects. Unfortunately, I missed the relevance
of the PTI for such a change - and changing the PTI to replace pep8
with linters and then pushing that one through to all projects is more
than I can commit to right now.

I apologize for being too eager and will send patches for official
projects moving them back to pep8, so consider this a heads up and
background about my incoming patches with topic "pti-pep8-linters".

If somebody else wants to do the whole conversion in the future, I can
give pointers on what to do,

Andreas
--
Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126



Message: 35
Date: Thu, 11 Feb 2016 21:30:26 +0100
From: Thomas Herve therve@redhat.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum][heat] Bug 1544227
Message-ID:
CAFsfq7Kgt16VOYpj1jQRGN2Y8tKsWhMyByws9daP3DLhBuZY1A@mail.gmail.com
Content-Type: text/plain; charset=UTF-8

On Thu, Feb 11, 2016 at 5:23 PM, Hongbin Lu hongbin.lu@huawei.com wrote:

Rabi,

As you observed, I have uploaded two testing patches [1][2] that depend
on your fix patch [3] and the reverted patch [4] respectively. An
observation is that the test "gate-functional-dsvm-magnum-mesos" failed in
[1], but passed in [2]. That implies the reverted patch does resolve an
issue (although I am not sure exactly how).

I did notice there are several 404 errors from Neutron, but those errors
exist in successful tests as well so I don't think they are the root
cause.

[1] https://review.openstack.org/#/c/278578/
[2] https://review.openstack.org/#/c/278778/
[3] https://review.openstack.org/#/c/278576/
[4] https://review.openstack.org/#/c/278575/

Hi,

Interestingly, [2] fails with a different error. At some point, we get
the following error:

Unable to find network with name '3bc0ffd2-6c4a-4e46-9a0e-4fbc91920daf'

It doesn't make much sense, because we retrieve that network seconds
before, but suddenly it fails. In the neutron log, you can find this:

SAWarning: The IN-predicate on "ml2_network_segments.network_id" was
invoked with an empty sequence. This results in a contradiction, which
nonetheless can be expensive to evaluate. Consider alternative
strategies for improved performance.

It's possible that patch highlights a bug in neutron.
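
For reference, an empty IN-predicate really is a contradiction: the
query can never match, which would explain a lookup failing for a
network that does exist. A small stand-in sketch (table and column
names are simplified, and SQLAlchemy's exact SQL rendering of an empty
IN varies by version; IN (NULL) is used here as the always-false
condition):

```python
import sqlite3

# Why an IN-predicate over an empty sequence is a red flag: the
# condition can never be true, so the query returns nothing even though
# the row exists.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE networksegments (network_id TEXT)")
conn.execute("INSERT INTO networksegments VALUES ('3bc0ffd2')")

wanted_ids = []  # an upstream step unexpectedly produced no IDs
placeholders = ",".join("?" * len(wanted_ids)) or "NULL"
rows = conn.execute(
    "SELECT network_id FROM networksegments "
    "WHERE network_id IN (%s)" % placeholders, wanted_ids).fetchall()
print(rows)  # [] -- the segment exists but the query cannot find it
```

So the warning points at whatever produced the empty ID sequence, not
at the final lookup itself.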

--
Thomas


Message: 36
Date: Thu, 11 Feb 2016 23:17:24 +0000
From: "Christopher N Solis" cnsolis@us.ibm.com
To: "OpenStack Development Mailing List (not for usage questions)"
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [QA][grenade] Create new grenade job
Message-ID: 201602112317.u1BNHSj2002493@d01av04.pok.ibm.com
Content-Type: text/plain; charset="us-ascii"

An HTML attachment was scrubbed...
URL: <
http://lists.openstack.org/pipermail/openstack-dev/attachments/20160211/622837d3/attachment.html
>



OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

End of OpenStack-dev Digest, Vol 46, Issue 32



