Simplification PTG Recap
This goal started in the May 2017 leadership workshop. We have been
collecting feedback from the community that OpenStack can be complex for
deployers, contributors, and ultimately the people we're all supporting: the
consumers of clouds. This goal is purposely broad in response to that
feedback. As a community, we must work together and, from an objective
standpoint, set proper goals for this never-ending effort.
 - https://wiki.openstack.org/wiki/Governance/Foundation/8Mar2017BoardMeeting
We have a growing thread on this topic, and a dev digest summary.
Let's move the discussion to this thread for better focus.
Let's recognize we're not going to solve this problem with just some group or
some code. It is going to be never-ending.
So far with the etherpad, we have let the community identify some of the
known things that make OpenStack complex. Some areas have more information than
others. Let's start research on the better-identified areas first. We can
always revisit the other identified areas as interest and more information
become available.
The three areas are Installation, Operation, and Upgrade … otherwise known as
Below are the areas, with some snippets from the etherpad and from our
2017 user survey.
 - http://lists.openstack.org/pipermail/openstack-dev/2017-September/thread.html#122075
 - https://www.openstack.org/blog/2017/09/developer-mailing-list-digest-september-23-29-2017/
 - https://www.openstack.org/assets/survey/April2017SurveyReport.pdf
Installation
- Our documentation team is moving toward decentralizing install guides and
more.
- We’ve been bridging the gap between project names and service names with the
project navigator and the service-types-authority repository.
User Survey Feedback
- What we have today are varied installation/deployment models.
- Installation needs to become easier; the architecture is still too complex.
- Installation, particularly around TripleO and HA upgrade deployments, is
still too complex.
- A common deployment and lifecycle management tool/framework would make things
easier. Having every distribution use its own tools (TripleO, Fuel, Crowbar,
...) really doesn’t help. And yes, I know that this is not OpenStack’s fault,
but if the community unites behind one tool (or maybe two), we could put some
pressure on the vendors.
- Automate installation. Require consistent installation between projects.
- Standardized automated deployment methods to minimize the risk of splitting
the developments in vendor-specific branches.
- Deployment is still a nightmare of complexity and riddled with failure unless
you are covered in scars from previous deployments.
- Initial build-up needs to be much easier, such as using a simple scripted
installer that analyzes the hardware and then builds a working OpenStack.
When upgrades become available, it can do a rolling upgrade with zero downtime.
 - https://www.openstack.org/software/project-navigator/
 - http://git.openstack.org/cgit/openstack/service-types-authority/
Upgrade
- It’s easier to burn down clouds than to go from Newton -> Ocata -> etc.
- It’s recognized that things are getting better and will continue to improve,
assuming operators partner with developers, as with the skip-level upgrade
effort.
- More requests for publishing binaries. Let's refer back to our discussion on
publishing binary images, and the dev digest version.
User Survey Feedback
End of Life Upstream
The lifecycle could use a lot of attention. Most large customers move slowly
and thus are running older versions, which are sometimes EOL upstream before
they are even deployed. Doing in-place upgrades is risky business with just
a one- or two-release jump, so the prospect of trying to jump four or five
releases to get to a current, non-EOL version is daunting and results in either
a lot of outages or simply green-fielding new releases and letting the old die
on the vine. This causes significant operational overhead, since getting
tenants to move to a new deploy entirely is a big ask, and you end up operating
multiple clouds.
Containerizing OpenStack Itself
Many organizations appear to be moving toward containerizing their OpenStack
control plane. Continued work on multi-version interoperability would allow
organizations to upgrade a lot more seamlessly and rapidly by deploying
newer-versioned containers in parallel with their existing older-versioned
containers. And it may have a profoundly positive effect on the upgrade and
lifecycle for larger deployments.
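As a sketch of that parallel-deployment idea, assuming a containerized control
plane (the service, image tags, and ports below are hypothetical illustrations,
not official OpenStack images):

```yaml
# Hypothetical compose fragment: an older and a newer version of one API
# service staged side by side, so traffic can shift gradually during an
# upgrade. Image names, tags, and ports are illustrative only.
services:
  glance-api-ocata:
    image: example.org/glance-api:ocata   # existing version, still serving
    ports:
      - "9292:9292"
  glance-api-pike:
    image: example.org/glance-api:pike    # new version, running in parallel
    ports:
      - "9293:9292"
```

A load balancer in front could then move traffic between the two versions
without a hard cutover, which is what makes multi-version interoperability so
valuable here.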
- The biggest challenge is to upgrade the production system since there are
a lot of dependencies and bugs that we are facing.
- Releases need more feature and bugfix backporting.
Longer Development Cycles
Stop coming out with all of these releases. Only do a release once every two
years.
 - https://etherpad.openstack.org/p/queens-PTG-skip-level-upgrades
 - http://lists.openstack.org/pipermail/openstack-dev/2017-May/thread.html#116677
 - https://www.openstack.org/blog/2017/05/openstack-developer-mailing-list-digest-20170526/
Operation
Confusion with OpenStack Client and Project Clients
- OpenStack Client doesn’t entirely support microversions on some Nova
commands.
- Functions that work in OpenStack Client, but not in the project clients
themselves (e.g., Kerberos)
- Given that OpenStack Client is now available and widely used, I still see new
projects being created with their own project clients, which is strange.
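For context on what "microversion support" means here: Nova selects API
behavior per request from a version header. A minimal sketch of building that
header (the header name follows the published microversion guideline; the
helper function itself is ours, not part of any client):

```python
def compute_version_header(microversion):
    """Build the header Nova's API expects for a microversion request.

    Modern Nova accepts the unified 'OpenStack-API-Version' header with a
    'compute <version>' value; a client that omits it gets the minimum
    supported version's behavior. A CLI that never sends this header for a
    given command effectively cannot use newer API features there.
    """
    return {"OpenStack-API-Version": "compute %s" % microversion}

print(compute_version_header("2.53"))
# -> {'OpenStack-API-Version': 'compute 2.53'}
```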
OpenStack Client Needs to Be Better
- The documentation needs to be better, and possibly its interface as well.
- A couple of examples from the current documentation:
- get_rdp_console doesn't even tell me what it returns (many calls there have
the same problem).
- The first object on the page, novaclient.v2.servers.NetworkInterface,
refers to a 'manager' - what's a manager? (The answer is probably that this
isn't user callable, but I'd be fine with it saying that.)
- If people are expected to use these in the right way, let alone use the
right versions, we need to offer them more help than this.
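On the "what's a manager?" question: in novaclient's design, a manager is the
object that performs the API operations for one resource type and wraps raw
payloads into resource objects. A rough illustrative reimplementation of that
pattern (not novaclient's actual code; FakeAPI is a stand-in for the
authenticated HTTP layer):

```python
class Resource:
    """Wraps one API payload; keeps a back-reference to its manager."""
    def __init__(self, manager, info):
        self.manager = manager  # real clients use this for lazy loading
        for key, value in info.items():
            setattr(self, key, value)

class ServerManager:
    """Performs server operations and returns Resource objects."""
    def __init__(self, api):
        self.api = api
    def list(self):
        # Fetch raw server payloads and wrap each one as a Resource.
        return [Resource(self, s) for s in self.api.get("/servers")]

class FakeAPI:
    """Stand-in for the HTTP client, returning canned payloads."""
    def get(self, path):
        return [{"id": "abc123", "name": "web-1", "status": "ACTIVE"}]

servers = ServerManager(FakeAPI()).list()
print(servers[0].name, servers[0].status)  # -> web-1 ACTIVE
```

A one-line note in the docs along these lines would answer the question above
without the reader having to reverse-engineer the class hierarchy.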
- Scaling the infrastructure. How to do this? When? How to detect?
- Ensure high availability. Recommendations?
- Networking: production vs. testing
- Integration with LDAP. Scarcely documented.
 - https://docs.openstack.org/python-novaclient/pike/reference/api/v2/servers.html