Below is a high-level update of what was discussed during the PTG. If
something is missing, I either forgot about it or didn't get a chance
to sync up with the contributors for a summary, so please reply to
this email thread to add it. However, if you want to continue the
discussion on any of these individual points, please start a new email
thread so we don't end up with a ton of discussions attached to a
single mega-thread.
I apologize in advance because this is pretty long even though I tried
to summarize as much as possible. :)
* Functional job: In order to bring down the failures related to the
OVSDB native interface timeouts, we are going to try reducing the
concurrency, and as a last resort split the OVSDB native interface
tests out into a separate non-voting job.
* Fullstack job: See if reducing concurrency brings down failure rate
since each test is spawning multiple threads for agents/servers/etc.
* Reducing job count: consolidate many existing jobs. For example, run
only multi-node grenade jobs instead of single-node grenade jobs, drop
legacy routing jobs with OVS, etc.
* Jobs for other projects (e.g. dragonflow, tripleo, ironic): we
should consider post-merge triggers for these, since they are only
meant to trace when Neutron introduces a failure into those projects,
and regular Neutron contributors won't pay attention to their check
queue status.
* OVS compilation from source: stop doing this for as many jobs as
possible so we test what is shipped with the distro.
* Bug Triage: the current bug deputy system is working well.
* Gate failures: encourage use of elastic recheck; Ihar enabled the
IRC bot to message the channel about elastic recheck failures.
* Make frequent use of the auto-abandon script so patches with
unanswered negative feedback don't linger as long.
* Un-assign the bug assignee when patches are auto-abandoned.
* Generate email of abandoned patches to bring to wider attention.
* Look into IRC notifications for patches that require core reviewer
feedback that have been sitting for weeks.
* Releases: switch to more frequent release cycle so consumers get
changes faster. Release after weekly meeting if someone requests it.
* Review velocity: solicit more reviews from Neutron and stadium
project cores since they have +2 power. Then drivers can more easily
scan for patches ready to merge for final +W.
* modwsgi support: Add dedicated RPC server, create entry script for
modwsgi, switch to pecan.
* python3 support: switch some tempest jobs to python3 and leave some
on python2 to avoid an explosion of job types while maintaining decent
coverage.
* Storyboard: we can't really adopt this until the other core projects
do since we have a lot of cross-project bugs.
* Tap as a service intends to be included in Pike
* Stadium requirements will now include python3 support since it's a
community-wide goal. Projects should run tempest tests for both
python2 and python3 since python2 support won't be dropped.
* For Pike we will try to synchronize releases.
* Use tempest stable API for our tests (tempest-lib)
* Split out our tempest tests into a separate branchless repo
* DocImpact tag. Always include a one-sentence summary of what docs
need to be added/changed.
* Consider requiring at least some initial documentation from the
feature developer, as a WIP patch to the docs, before allowing the
feature to merge.
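The dual python2/python3 testing expectation for stadium projects
could be expressed in a project's tox.ini along these lines (a sketch
only: the env names and test command are illustrative, and
`neutron_foo` is a made-up package name):

```ini
# Illustrative tox.ini fragment: run the same unit test suite under
# both interpreters, since python2 support is kept while python3
# support is added.
[tox]
envlist = py27,py35

[testenv]
deps = -r{toxinidir}/test-requirements.txt
commands = python -m unittest discover neutron_foo.tests
```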
No downtime upgrades
* Need online-db-migration option to neutron-db-manage like other projects have
* Need mechanism to disable new features in new servers until all old
servers are taken offline.
* Finish adoption of OVO across the code-base by reviewing outstanding patches.
* Consider moving some OVO base classes into neutron-lib so other
neutron projects can start adopting OVO for their custom objects.
* Figure out how to handle case where a different DB table provides an
API field depending on the loaded core plugin (e.g. port bindings).
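As a sketch of what an online data migration looks like (a generic
batched expand/contract pattern, not neutron-db-manage's actual
implementation; the table and column names are made up), the idea is
to backfill a new column a few rows per transaction so the table is
never locked for long while old and new servers keep running:

```python
import sqlite3

def migrate_in_batches(conn, batch=2):
    # Copy the legacy column into the new column, a few rows per
    # transaction, until no unmigrated rows remain.
    while True:
        rows = conn.execute(
            "SELECT id, old_name FROM widgets "
            "WHERE new_name IS NULL LIMIT ?", (batch,)).fetchall()
        if not rows:
            return
        with conn:  # one short transaction per batch
            for rid, old in rows:
                conn.execute(
                    "UPDATE widgets SET new_name = ? WHERE id = ?",
                    (old.upper(), rid))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE widgets (id INTEGER PRIMARY KEY,"
             " old_name TEXT, new_name TEXT)")
conn.executemany("INSERT INTO widgets (old_name) VALUES (?)",
                 [("a",), ("b",), ("c",), ("d",), ("e",)])
migrate_in_batches(conn)
```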
New Network Types
* Explore how a new "network-type" field might be implemented that can
be used to get non-L2 networks where the IP addressing and segment
semantics currently offered by ML2 don't fit well. RFE:
* We could consider making use of the flavor framework for dispatching
calls to different plugins, but this will require some significant
refactoring and considerations for things like different extensions
supported by different plugins.
* Outstanding reviews:
* Figuring out a strategy to load relationships on newly created
sqlalchemy objects to ensure we don't emit queries after the
transaction is closed: https://review.openstack.org/#/c/434454/
* Needs some reviews for the L2 code:
* Once L2 code has landed, we can re-use much of the same logic for
the L3 agent and DHCP agent.
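To illustrate the problem being solved, here is a toy model in plain
Python (not the actual SQLAlchemy API): a relationship resolved lazily
after the transaction closes fails, while one loaded eagerly inside
the transaction remains usable afterwards.

```python
class DetachedInstanceError(Exception):
    """Stand-in for SQLAlchemy's error of the same name."""

class Session:
    def __init__(self):
        self.active = True
    def close(self):
        self.active = False

class Port:
    def __init__(self, session):
        self._session = session
        self._bindings = None  # relationship not loaded yet
    @property
    def bindings(self):
        # Lazy load: only works while the transaction is open.
        if self._bindings is None:
            if not self._session.active:
                raise DetachedInstanceError("transaction already closed")
            self._bindings = ["binding-1"]  # pretend this is a query
        return self._bindings

session = Session()
lazy_port = Port(session)
eager_port = Port(session)
_ = eager_port.bindings  # force the load while the transaction is open
session.close()

eager_port.bindings  # fine: already loaded
try:
    lazy_port.bindings  # emits a "query" after close: raises
except DetachedInstanceError as exc:
    print("lazy load failed:", exc)
```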
Extensions (not)Supported by Loaded ML2 Drivers
* A generic mechanism is being developed by ML2 contributors to ensure
that the necessary drivers in a given port binding are actually
enforcing the extensions requested by the port model (e.g. security
groups, QoS, etc.).
* Will reconcile with QoS-specific approach to validation here:
Neutronclient -> OSC Transition
* Continue migration of commands. Status here:
* Only high/critical priority bug fixes to neutronclient at this
point. No new features.
* Add a new API for nova to retrieve all things related to a port in
one call (e.g. networks, subnets, floating IPs, subports, etc).
* Negotiate and return os-vif objects from Neutron to Nova on port
binding.
* Continue multiple port bindings work for live migration case.
* Avoid making any of these dependent on each other since we need to
ensure os-vif has parity with all of the existing plugging Nova can do
before strictly returning os-vif objects.
* Ironic support for routed networks:
* vlan-aware-vms support addressed by https://review.openstack.org/#/c/436756/
* Allow DVR to work with unbound ports by scheduling to central node.
* Allow DVR to perform floating IP translations at central node if
compute node has no external network access. Pending RFE.
* Allow DVR to burn more external IPs and do SNAT at the compute node.
* Examine offloading east-west routing to openflow.
* This spreadsheet was put together to examine failure cases of
current solutions: https://ethercalc.openstack.org/Pike-Neutron-L3-HA
* There does not appear to be enough interest upstream to develop an
alternative approach using something other than keepalived in-tree. So
an alternative will need to be developed out of tree first and then be
considered for inclusion after it's shown to work well.
* We should examine downgrading the VRRP priority on track script
failure and allowing preemption, to avoid a full outage if the
upstream gateway stops responding to ping.
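The keepalived side of that idea looks roughly like the fragment below
(a sketch only: the script path, interface, priorities, and weight are
illustrative, not what Neutron generates today). A track script pings
the upstream gateway, and on failure the negative weight drops this
node's priority below its peer's, so a healthy backup can preempt:

```
vrrp_script chk_upstream {
    script "/usr/local/bin/ping_gateway.sh"  # e.g. ping -c 1 -W 1 <gw-ip>
    interval 5
    weight -20     # on failure, drop priority 100 below the peer's 90
    fall 3
    rise 2
}

vrrp_instance VR_1 {
    state BACKUP
    interface eth2
    virtual_router_id 51
    priority 100        # peer runs at e.g. 90
    preempt_delay 30    # note: no "nopreempt", so a recovered node
                        # with higher priority takes over again

    track_script {
        chk_upstream
    }
    virtual_ipaddress {
        203.0.113.1/24
    }
}
```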
OVS native interfaces
* OVSDB: split out into a separate repo since we have several projects
consuming it. https://review.openstack.org/#/c/438086/
* OVSFW: switch the devstack default to the OVS native firewall and
implement logic to clean up iptables rules on hybrid bridges so we
have a minimal migration path. Rolling migration to eliminate the
hybrid bridge will work once multiple port bindings are implemented in
Nova.
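For reference, opting a devstack environment into the native OVS
firewall today looks roughly like this local.conf fragment (the
proposal is to make this the default; the exact agent config file path
can vary by setup):

```ini
[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[securitygroup]
firewall_driver = openvswitch
```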
Common Classification Framework
* Spec is here with notes from PTG discussions:
Security Groups Logging
* Target OVS native firewall for logging actions since that will
become the devstack default.
* Spec: https://review.openstack.org/#/c/203509/
* Development of new ML2 type drivers to allow some NFV and tripleO
use cases: QinQ with VM setting inner tag, VLAN filters for
VLAN-transparent networks, QinQ double tag applied to untagged traffic
from VM. All pending RFEs.
* Target OVS native firewall for compute-node filtering since that
will become the devstack default
* FWaaS internal development etherpad:
* Switch is in progress. We need the shim layer in Neutron to not
break old clients. https://review.openstack.org/#/c/418530/
* We have several contributors interested in keeping it maintained and
alive. We just need to get the testing/documentation/etc fixed up to
bring it back into the stadium.