[openstack-dev] [simplification] PTG Recap

Simplification PTG Recap

Introduction

This goal was started at the May 2017 leadership workshop [1]. We have been
collecting feedback from the community that OpenStack can be complex for
deployers, contributors, and ultimately the people we're all supporting: the
consumers of our clouds. The goal is purposely broad in response to that
feedback. As a community, we must work together and, from an objective
standpoint, set proper goals for this ongoing effort.

[1] - https://wiki.openstack.org/wiki/Governance/Foundation/8Mar2017BoardMeeting

Moving Forward

We have a growing thread [1] on this topic, along with a dev digest summary
[2]. Let's move the discussion to this thread for better focus.

Let's recognize that we're not going to solve this problem with just one group
or one piece of code. It's going to be a never-ending effort.

So far, the etherpad has let the community identify some of the known things
that make OpenStack complex. Some areas have more information than others.
Let's start researching the better-documented areas first. We can always
revisit the other identified areas as interest and more information are
brought forward.

The three areas are Installation, Operation, and Upgrade … otherwise known as
I.O.U.

Below are the areas, some snippets from the etherpad and then also from our
2017 user survey [3].

[1] - http://lists.openstack.org/pipermail/openstack-dev/2017-September/thread.html#122075
[2] - https://www.openstack.org/blog/2017/09/developer-mailing-list-digest-september-23-29-2017/
[3] - https://www.openstack.org/assets/survey/April2017SurveyReport.pdf

Installation

Etherpad summary

  • Our documentation team is moving toward decentralizing the install guides
    and other documentation.
  • We’ve been bridging the gap between project names and service names with the
    project navigator [1], and service-type-authority repository [2].
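The bridging work above can be pictured as a simple lookup from official
service type to project name, which is roughly what the service-types-authority
repository maintains. The entries below are a small illustrative subset, not
the authoritative data:

```python
# Illustrative subset of the service-type -> project-name mapping that the
# service-types-authority repository maintains (not the authoritative data).
SERVICE_TYPES = {
    "compute": "nova",
    "image": "glance",
    "identity": "keystone",
    "block-storage": "cinder",
    "network": "neutron",
}


def project_for_service(service_type):
    """Return the project name behind an official service type, or None."""
    return SERVICE_TYPES.get(service_type)


print(project_for_service("compute"))  # nova
```

Tools like the project navigator surface exactly this kind of mapping so users
do not need to know that, say, "block-storage" means Cinder.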

User Survey Feedback

  • What we have today is varied installation/deployment models.
  • Need the installation to become easier—the architecture is still too complex
    right now.
  • Installation, particularly around TripleO and HA/upgrade deployments, is
    very complicated.
  • A common deployment and lifecycle management tool/framework would make
    things easier. Having every distribution use its own tools (TripleO, Fuel,
    Crowbar, ...) really doesn’t help. And yes, I know that this is not
    OpenStack’s fault, but if the community unites behind one tool (or maybe
    two), we could put some pressure on the vendors.
  • Automate installation. Require consistent installation between projects.
  • Standardized automated deployment methods to minimize the risk of
    splitting development into vendor-specific branches.
  • Deployment is still a nightmare of complexity and riddled with failure unless
    you are covered in scars from previous deployments.
  • Initial build-up needs to be much easier, such as a simple scripted
    installer that analyzes the hardware and then builds a working OpenStack.
    When upgrades become available, it could do a rolling upgrade with zero
    downtime.

[1] - https://www.openstack.org/software/project-navigator/
[2] - http://git.openstack.org/cgit/openstack/service-types-authority/

Upgrades

Etherpad summary

  • It is easier to burn down clouds than to upgrade from Newton -> Ocata -> etc.
  • It’s recognized that things are getting better and will continue to
    improve, assuming operators partner with developers, as with the
    skip-level upgrade effort [1].
  • More requests on publishing binaries. Let's refer back to our discussion
    on publishing binary images [2] and the dev digest version [3].

User Survey Feedback

End of Life Upstream

The lifecycle could use a lot of attention. Most large customers move slowly
and thus are running older versions, which are EOL upstream sometimes before
they even deploy them. Doing in-place upgrades is risky business with just
a one- or two-release jump, so the prospect of trying to jump 4 or 5 releases
to get to a current, non-EOL version is daunting and results in either a lot of
outages or simply green-fielding new releases and letting the old die on the
vine. This causes significant operational overhead as getting tenants to move
to a new deploy entirely is a big ask, and you end up operating multiple
versions.

Containerizing OpenStack Itself

Many organizations appear to be moving toward containerizing their OpenStack
control plane. Continued work on multi-version interoperability would allow
organizations to upgrade a lot more seamlessly and rapidly by deploying
newer-versioned containers in parallel with their existing older-versioned
containers. And it may have a profoundly positive effect on the upgrade and
lifecycle for larger deployments.
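The parallel-container idea above can be sketched as a toy upgrade planner:
deploy the new-version container for each control-plane service alongside the
old one, then cut over. The service names and release tags are placeholders,
and a real tool (Kolla, TripleO, etc.) would also handle health checks and
traffic cutover:

```python
# Toy sketch of a parallel-container upgrade plan; not a real deployment
# tool. Service names and release tags below are placeholders.
CURRENT = {"nova-api": "newton", "glance-api": "newton"}
TARGET = "ocata"


def upgrade_plan(current, target):
    """Yield (service, old_tag, new_tag) steps for a side-by-side rollout."""
    for service, old_tag in sorted(current.items()):
        yield (service, old_tag, target)


for service, old, new in upgrade_plan(CURRENT, TARGET):
    print(f"deploy {service}:{new} alongside {service}:{old}, then cut over")
```

The point of multi-version interoperability work is that both tags can serve
requests at once, so the cutover step stops being an outage.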

Bugs

  • The biggest challenge is upgrading the production system, since there are
    a lot of dependencies and bugs that we are facing.
  • Releases need more feature and bugfix backporting.

Longer Development Cycles

Stop coming out with all of these releases. Only do a release once every two
years.

[1] - https://etherpad.openstack.org/p/queens-PTG-skip-level-upgrades
[2] - http://lists.openstack.org/pipermail/openstack-dev/2017-May/thread.html#116677
[3] - https://www.openstack.org/blog/2017/05/openstack-developer-mailing-list-digest-20170526/

Operations

Etherpad summary

Confusion with OpenStack Client and Project Clients

  • OpenStack Client doesn’t entirely support microversions for some Nova
    functionality.
  • Some functions work in OpenStack Client but not in the project clients
    themselves (e.g., Kerberos).
  • Given that OpenStack Client is now available and widely used, I still see
    new projects being created with their own project clients, which is
    strange.
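For context on the microversion point, the compute API negotiates a version
per request: the client asks for a microversion and the server accepts it only
if it falls within its supported range. A simplified sketch of that
negotiation (the version bounds here are illustrative, not Nova's actual
range):

```python
# Simplified sketch of compute API microversion negotiation. The supported
# range below is illustrative, not Nova's actual range.
MIN_VERSION = (2, 1)
MAX_VERSION = (2, 53)


def negotiate(requested):
    """Return the accepted microversion string, or None if out of range."""
    major, minor = (int(part) for part in requested.split("."))
    if MIN_VERSION <= (major, minor) <= MAX_VERSION:
        return requested
    return None


print(negotiate("2.53"))  # 2.53
print(negotiate("2.99"))  # None
```

A client that does not let you request a specific microversion (or only does
so for some commands) cannot reach functionality gated behind newer versions,
which is the gap the feedback describes.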
OpenStack Client Needs to Be Better

  • The documentation needs to be better, and possibly its interface.
  • A couple of examples from the current documentation [1]:

    • get_rdp_console doesn't even tell me what it returns (many calls there
      have this issue).
    • The first object on the page, novaclient.v2.servers.NetworkInterface,
      refers to a 'manager' - what's a manager? (The answer is probably that
      this isn't user callable, but I'd be fine with it saying that.)
    • If people are expected to use these in the right way, let alone use the
      right versions, we need to offer them more help than this.
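As a hedged illustration of the kind of docstring the feedback asks for, one
that actually states what the call returns, here is a sketch. The return shape
described is an assumption based on the compute API's remote-console
responses, not authoritative novaclient documentation, and the function body
is a stub:

```python
# Sketch only: shows the kind of docstring the feedback asks for. The
# return shape is an assumption, not authoritative novaclient docs.
def get_rdp_console(server, console_type="rdp-html5"):
    """Request an RDP console for a server.

    Returns a dict shaped like:
        {"console": {"type": "rdp-html5", "url": "..."}}
    where "url" is the address a browser can open to reach the console.
    """
    raise NotImplementedError("sketch only; use python-novaclient in practice")
```

Even a one-line "Returns a dict with the console URL" would answer the
question the current page leaves open.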
More Documentation

  • Scaling the infrastructure: how to do this? When? How to detect the need?
    Recommendations?
  • Ensuring high availability. Recommendations?
  • Networking: production vs. testing
  • Integration with LDAP. Scarcely documented.
  • Quotas
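On the LDAP point, a minimal keystone.conf fragment for an LDAP-backed
identity driver might look like the following. The hostname and DNs are
placeholders, and a real deployment needs more options (bind credentials, TLS,
filters):

```ini
# Illustrative keystone.conf fragment; hostname and DNs are placeholders.
[identity]
driver = ldap

[ldap]
url = ldap://ldap.example.com
user_tree_dn = ou=Users,dc=example,dc=com
user_objectclass = inetOrgPerson
```

Documenting even a baseline fragment like this, plus when to prefer a
domain-specific backend, would address much of the "scarcely documented"
complaint.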

[1] - https://docs.openstack.org/python-novaclient/pike/reference/api/v2/servers.html

--
Mike Perez


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

asked Oct 4, 2017 in openstack-dev by Mike_Perez