
[openstack-dev] [ptg] Simplification in OpenStack


Hey all,

Back in a joint meeting with the TC, UC, Foundation and the Board, it
was decided that one area of OpenStack to focus on was Simplifying
OpenStack. This was intentionally very broad so the community can
kick-start the conversation and help tackle some of the broad feedback
we get.

Unfortunately, yesterday there was a low turnout in the Simplification
room. A group of people from the Swift team, Kevin Fox and Swimingly
were nice enough to start the conversation and give some feedback. You
can see our initial etherpad work here:

https://etherpad.openstack.org/p/simplifying-os

There are efforts happening every day that help with this goal, and our
team has made some documented improvements that can be found in our
report to the board within the etherpad. I would like to use this
opportunity to take a step back and have in-person discussions to
identify which areas of simplification are worthwhile. I'm taking a
break from the room at the moment for lunch, but I encourage people to
meet at 13:30 local time in the simplification room, Level B, in the
Big Thompson room. Thank you!


Mike Perez


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
asked Oct 4, 2017 in openstack-dev by Mike_Perez

34 Responses


Hey all,

The session is over. I'm hanging out near registration if anyone wants
to discuss things. Shout-out to John for coming by to discuss
simplifying dependencies. I welcome more packagers to join the
discussion.

https://etherpad.openstack.org/p/simplifying-os


Mike Perez

responded Sep 12, 2017 by Mike_Perez

Mike,

Great initiative. Unfortunately I wasn't able to attend it, however I
have some thoughts...
You can't simplify OpenStack just by fixing the few issues that are
mostly described in the etherpad.

The TC should work on shrinking the OpenStack use cases and moving
towards a complete, boxed product instead of a bunch of barely related
pieces.

Simple things to improve:
This is going to allow the community to work together, actually get
feedback in a standard way, and incrementally improve quality.

1) There should be one and only one:
1.1) deployment/packaging (maybe Docker) and upgrade mechanism used by
everybody
1.2) monitoring/logging/tracing mechanism used by everybody
1.3) way to configure all services (e.g. the k8s etcd way)
2) Projects must have a standardized interface that allows them to be
used in the same way.
3) Testing & R&D should be performed only against this standard deployment

Hard things to improve:

OpenStack projects were split in a far from ideal way, which leads to
the bunch of gaps that we have now:
1.1) Code & functional duplication: quotas, schedulers, reservations,
health checks, logging, tracing, ...
1.2) Non-optimal workflows (booting a VM takes 400 DB requests) because
data is stored in Cinder, Nova, Neutron, ...
1.3) Lack of resources (as every project is doing the same work on the
same parts again and again)

What we can do:

1) Simplify internal communication
1.1) Instead of AMQP for internal communication inside projects, use
just HTTP with load balancing & retries.
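To make the "HTTP with retries" idea concrete, here is a minimal sketch of a bounded-retry helper; the function, endpoint, and backoff values are illustrative assumptions, not any existing OpenStack API:

```python
import time
import urllib.error
import urllib.request

def call_with_retries(url, attempts=3, backoff=0.5, opener=urllib.request.urlopen):
    """Call an HTTP endpoint, retrying transient failures with exponential backoff.

    `opener` is injectable so the retry policy can be exercised without a
    network; the default is a plain stdlib HTTP call.
    """
    last_error = None
    for attempt in range(attempts):
        try:
            return opener(url)
        except urllib.error.URLError as exc:
            last_error = exc
            time.sleep(backoff * (2 ** attempt))  # wait longer after each failure
    raise last_error
```

A load balancer in front of several service replicas plus a bounded policy like this is the whole proposal in miniature.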

2) Use the API Gateway pattern
2.1) Provides one IP address and one client for the high-level API
2.2) Allows a significant reduction in load on Keystone, because tokens
are checked only in the API gateway
2.3) Simplifies communication between projects (they are now on a
trusted network, no need to check tokens)
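As a hedged sketch of the gateway idea (check the token once, then route by URL prefix to backends on a trusted network), where the service addresses, token store, and header name are all illustrative stand-ins rather than real deployments or APIs:

```python
# Hypothetical API-gateway routing table: path prefix -> internal backend.
ROUTES = {
    "/compute": "http://nova.internal:8774",
    "/volume":  "http://cinder.internal:8776",
    "/network": "http://neutron.internal:9696",
}

# Stand-in for a single Keystone token-validation call.
VALID_TOKENS = {"tok-123": "alice"}

def gateway(path, token):
    """Return (backend_url, forwarded_headers) for a request, or raise."""
    user = VALID_TOKENS.get(token)
    if user is None:
        raise PermissionError("invalid token")  # checked once, here only
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            # Backends trust this header because they sit on a private network,
            # so they never have to call Keystone themselves.
            return backend + path[len(prefix):], {"X-Identity": user}
    raise LookupError("no route for " + path)
```

The point 2.2 makes falls out directly: Keystone sees one validation per request instead of one per service hop.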

3) Fix the OpenStack split
3.1) Move common functionality to separate internal services:
scheduling, logging, monitoring, tracing, quotas, reservations (it
would be even better if this thing had a more or less monolithic
architecture)
3.2) Somehow deal with the fragmentation of resources, e.g. VM, volume
and network data, which is heavily interconnected.

4) Don't be afraid to break things
Maybe it's time for OpenStack 2:

  • In any case, most people provide an API on top of OpenStack for usage
  • In any case, there is no standard and easy way to upgrade

So basically we are not losing anything even if we make
non-backward-compatible changes and completely rethink the architecture
and API.

I know this sounds like science fiction, but I believe the community
will appreciate steps in this direction...

Best regards,
Boris Pavlovic

responded Sep 12, 2017 by boris_at_pavlovic.me

On 09/12/2017 06:53 PM, Boris Pavlovic wrote:
Mike,

Great initiative. Unfortunately I wasn't able to attend it, however I
have some thoughts...
You can't simplify OpenStack just by fixing the few issues that are
mostly described in the etherpad.

The TC should work on shrinking the OpenStack use cases and moving
towards a complete, boxed product instead of a bunch of barely related
pieces.

OpenStack is not a product. It's a collection of projects that represent
a toolkit for various cloud-computing functionality.

Simple things to improve:
This is going to allow the community to work together, actually get
feedback in a standard way, and incrementally improve quality.

1) There should be one and only one:
1.1) deployment/packaging (maybe Docker) and upgrade mechanism used by
everybody

Good luck with that :) The likelihood of the deployer/packager community
agreeing on a single solution is zero.

1.2) monitoring/logging/tracing mechanism used by everybody

Also close to zero chance of agreeing on a single solution. Better to
focus instead on ensuring various service projects are monitorable and
transparent.

1.3) way to configure all services (e.g. the k8s etcd way)

Are you referring to the way to configure k8s services or the way to
configure/setup an application that is running on k8s? If the former,
then there is not a single way of configuring k8s services. If the
latter, there isn't a single way of configuring that either. In fact,
despite Helm being a popular new entrant to the k8s application package
format discussion, k8s itself is decidedly not opinionated about how
an application is configured. Use a CMDB, use Helm, use env variables,
use confd, use whatever. k8s doesn't care.

2) Projects must have a standardized interface that allows them to be
used in the same way.

Give examples of services that communicate over non-standard
interfaces. I don't know of any.

3) Testing & R&D should be performed only against this standard deployment

Sorry, this is laughable. There will never be a standard deployment
because there are infinite use cases that infrastructure supports.
Your definition of what works for GoDaddy is decidedly different from
someone else's definition of what works for them.

Hard things to improve:

OpenStack projects were split in a far from ideal way, which leads to
the bunch of gaps that we have now:
1.1) Code & functional duplication: quotas, schedulers, reservations,
health checks, logging, tracing, ...

There is certainly code duplication in some areas, yes.

1.2) Non-optimal workflows (booting a VM takes 400 DB requests) because
data is stored in Cinder, Nova, Neutron, ...

Sorry, I call bullshit on this. It does not take 400 DB requests to boot
a VM. Also: the DB is not at all the bottleneck in the VM launch
process. You've been saying it is for years with no justification to
back you up. Pointing to a Rally scenario that doesn't reflect a
real-world usage of OpenStack services isn't useful.
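A claim like "booting a VM takes 400 DB requests" is measurable rather than arguable. As a sketch of how one might actually count statements, here is a harness using SQLite's trace hook as a stand-in for instrumenting a real driver; the workload is hypothetical:

```python
import sqlite3

def count_statements(work):
    """Run `work(conn)` against an in-memory DB, return how many SQL statements ran.

    `set_trace_callback` fires once per executed statement; autocommit mode
    avoids implicit BEGINs inflating the count. This is a measurement sketch,
    not a claim about what Nova or Cinder actually do.
    """
    statements = []
    conn = sqlite3.connect(":memory:")
    conn.isolation_level = None                # autocommit: no implicit BEGIN
    conn.set_trace_callback(statements.append) # record every executed statement
    try:
        work(conn)
    finally:
        conn.close()
    return len(statements)
```

Attach the equivalent hook in a real deployment and the "400 requests" figure becomes a number either side can verify.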

1.3) Lack of resources (as every project is doing the same work on the
same parts again and again)

Provide specific examples please.

What we can do:

1) Simplify internal communication
1.1) Instead of AMQP for internal communication inside projects, use
just HTTP with load balancing & retries.

Prove to me that this would solve a problem. First describe what the
problem is, then show me that using AMQP is the source of that problem,
then show me that using HTTP requests would solve that problem.

2) Use the API Gateway pattern
2.1) Provides one IP address and one client for the high-level API
2.2) Allows a significant reduction in load on Keystone, because tokens
are checked only in the API gateway
2.3) Simplifies communication between projects (they are now on a
trusted network, no need to check tokens)

Why is this a problem for OpenStack projects to deal with? If you want a
single IP address for all APIs that your users consume, then simply
deploy all the public-facing services on a single set of web servers and
make each service's root endpoint be a subresource on the root IP/DNS name.

3) Fix the OpenStack split
3.1) Move common functionality to separate internal services:
scheduling, logging, monitoring, tracing, quotas, reservations (it
would be even better if this thing had a more or less monolithic
architecture)

Yes, let's definitely go in the opposite direction of microservices and
loosely coupled domains, which have been the best practice of software
development over the last two decades. While we're at it, let's rewrite
OpenStack projects in COBOL.

3.2) Somehow deal with the fragmentation of resources, e.g. VM, volume
and network data, which is heavily interconnected.

How are these things connected?

4) Don't be afraid to break things
Maybe it's time for OpenStack 2:

  • In any case, most people provide an API on top of OpenStack for usage
  • In any case, there is no standard and easy way to upgrade

So basically we are not losing anything even if we make
non-backward-compatible changes and completely rethink the architecture
and API.

Awesome news. I will keep this in mind when users (like GoDaddy) ask
Nova to never break anything ever and keep behaviour like scheduler
retries that represent giant technical debt.

-jay

responded Sep 13, 2017 by Jay_Pipes

Jay,

Everything you say exactly explains why more and more companies are
leaving OpenStack.

Companies, and actually end users, care only about their own concerns
and how they can get their job done. They want something they can run
and support easily and that solves their problems.

They initially think it's a good idea to take OpenStack as a framework
and build a sort of product on top of it, because it's so open and
large and everybody uses it...

Soon they understand that OpenStack has very complicated operations,
because it's designed not as a product but rather as a framework, and
that the complexity of running OpenStack is similar to developing an
in-house solution. As time goes by they have only a few options: move
to a public cloud or some other private cloud solution...

We as a community can continue saying that the current OpenStack
approach is the best and keep losing customers/users/community, or
change something drastically, like bringing technical leadership to the
OpenStack Foundation that would act as a benevolent dictator, focusing
OpenStack's effort on shrinking use cases, redesigning the architecture
and moving in the right direction...

I know this all sounds like a big change, but let's be honest, the
current situation doesn't look healthy...
By the way, almost all successful projects in open source have a
benevolent dictator, and everybody is OK with that being how things
work...

Awesome news. I will keep this in mind when users (like GoDaddy) ask Nova
to never break anything ever and keep behaviour like scheduler retries that
represent giant technical debt.

I am writing here on my own behalf (using my personal email, if you
haven't noticed). Are we actually Open Source, or Enterprise Source?

Moreover, I don't think that what you say is going to be an issue for
GoDaddy, at least not soon, because we still can't upgrade, because
it's an NP-complete problem (even if you run just the core projects),
which is what my email was about, and I have seen the same stories at a
bunch of other companies...

Yes, let's definitely go in the opposite direction of microservices and
loosely coupled domains, which have been the best practice of software
development over the last two decades. While we're at it, let's rewrite
OpenStack projects in COBOL.

I really don't want to respond to this provocation, because it shifts
the focus from the major topic. But I really can't stop myself ;)

  • There is no silver bullet in programming. For example, would Git or Linux
    be better if they were written using a microservices approach?
  • Microservices are obsolete; you should use the new hype thing called FaaS
    (I am just curious when these FaaS fellows are going to implement modules
    for FaaS, and when they are going to understand that they will actually
    need everything developed in programming languages (OOP, AOP, DI, ...) to
    glue these things together ;) )
  • I was talking about architectural changes, not a programming language, so
    it's a sort of big type mismatch and logically wrong. However, what's
    wrong with COBOL? A program with the right architecture and the right
    algorithms will definitely work better than one written in any other
    language with the wrong architecture and bad algorithms... so I'm not
    sure I understand this point/joke...

Best regards,
Boris Pavlovic

responded Sep 14, 2017 by boris_at_pavlovic.me

On Tue, Sep 12, 2017 at 3:53 PM, Boris Pavlovic boris@pavlovic.me wrote:
Mike,

Great initiative. Unfortunately I wasn't able to attend it, however I
have some thoughts...
You can't simplify OpenStack just by fixing the few issues that are
mostly described in the etherpad.

This is exactly how one gets started, though: by dragging the skeletons
into the light. I, too, was unable to attend due to scheduling, but as
PTL of a project complicated by years of tech debt, from before it was
even an anointed OpenStack project, this topic is of particular
interest to me.

The TC should work on shrinking the OpenStack use cases and moving
towards a complete, boxed product instead of a bunch of barely related
pieces.

I agree and disagree with what you say here. Shrinking use cases misses
the mark by an order of magnitude or three. However, focusing on the
outcome is exactly what needs to happen for everyone to walk away with
the warm fuzzies, upstream and downstream alike.

Simple things to improve:
This is going to allow the community to work together, actually get
feedback in a standard way, and incrementally improve quality.

1) There should be one and only one:
1.1) deployment/packaging (maybe Docker) and upgrade mechanism used by everybody
1.2) monitoring/logging/tracing mechanism used by everybody
1.3) way to configure all services (e.g. the k8s etcd way)
2) Projects must have a standardized interface that allows them to be
used in the same way.
3) Testing & R&D should be performed only against this standard deployment

You keep using that word. This feels like a "you can have it in any
color you like, so long as it's black" argument. That is great for
manufacturing tangible products that sit on a shelf somewhere. Not so
much for a collection of software, already well into its maturation
phase, that is the collective output of hundreds, nay, thousands of
minds. What you propose almost never happens in practice, as nice as it
sounds. The outcome is significantly more important than what people do
to get there. I hereby refer to xkcd #927 on the topic of standards,
only partly in jest.

Hard things to improve:

OpenStack projects were split in a far from ideal way, which leads to
the bunch of gaps that we have now:
1.1) Code & functional duplication: quotas, schedulers, reservations,
health checks, logging, tracing, ...

Yup. Large software projects have some duplication; it's natural and
requires occasional love. It takes people to actively battle the tech
debt, and not everyone has the luxury of a fully dedicated team.

1.2) Non-optimal workflows (booting a VM takes 400 DB requests) because
data is stored in Cinder, Nova, Neutron, ...

SQL is SQL, though, so I don't see what you're getting at. I'm sure
some things need tuning and some queries need optimization, but I hung
up my DBA hat years ago.

1.3) Lack of resources (as every project is doing the same work on the
same parts again and again)

I read that last part to mean people, and not so much technical
limitations. If I've read things correctly with my corporate lens on,
that's a universal pain felt by nearly every specialized field of work,
and OpenStack is by no means unique. Downstream consumers of OpenStack
code are only willing to financially support so many specialists, and
they can support more than the Foundation can. If the problem is
people, convince more people to contribute, since we're remaking the
universe.

What we can do:

1) Simplify internal communication
1.1) Instead of AMQP for internal communication inside projects, use
just HTTP with load balancing & retries.

In my experience, AMQP has mostly sat there in the background until
someone comes along and touches it. We haven't touched the
openstack-ops-messaging cookbook beyond testing enhancements and
deprecations in at least a cycle, because it just works. Retries just
mask an underlying problem. With my operator hat on, I don't want my
client to try N times if the service is intermittently failing.
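The operator concern here, that blind retries hide an unhealthy service, is what a circuit breaker addresses: stop calling after repeated failures instead of trying N more times. A minimal sketch, with the threshold and error handling as illustrative assumptions:

```python
class CircuitBreaker:
    """Fail fast after repeated errors instead of retrying indefinitely.

    After `max_failures` consecutive errors, further calls are refused
    outright, surfacing the unhealthy service instead of masking it.
    """

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    def call(self, func, *args):
        if self.failures >= self.max_failures:
            raise RuntimeError("circuit open: service marked unhealthy")
        try:
            result = func(*args)
        except Exception:
            self.failures += 1   # count consecutive failures
            raise
        self.failures = 0        # any success resets the breaker
        return result
```

A production breaker would also reopen after a cool-down period; this sketch only shows the fail-fast half of the pattern.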

2) Use the API Gateway pattern
2.1) Provides one IP address and one client for the high-level API
2.2) Allows a significant reduction in load on Keystone, because tokens
are checked only in the API gateway
2.3) Simplifies communication between projects (they are now on a
trusted network, no need to check tokens)

I don't see this as something to beholden OpenStack development teams
to implement and maintain, even if people pay for this functionality or
implement it on their own. That's more of a use case, not a feature
request.

3) Fix the OpenStack split
3.1) Move common functionality to separate internal services:
scheduling, logging, monitoring, tracing, quotas, reservations (it
would be even better if this thing had a more or less monolithic
architecture)

No, please, just... no. A monolithic architecture is fine for dev, but
it falls apart prematurely in the lifecycle when you throw the spurs
to it.

3.2) Somehow deal with the fragmentation of resources, e.g. VM, volume and
network data, which is heavily interconnected.

That's for the implementation phase, not development. You can put
volume storage and VMs on the same machine, if you want/need to do so.
This smells like... another use case!

4) Don't be afraid to break things
Maybe it's time for OpenStack 2:

Blue polka dots with green stripes! With a racing stripe! And a
whipped pony on top.

In any case, most people provide an API on top of OpenStack for usage
In any case, there is no standard and easy way to upgrade

So basically we are not losing anything even if we make
backward-incompatible changes and completely rethink the architecture and API.

Quis custodiet ipsos custodes? Who ensures the usage APIs align with
the service APIs, which align with the architecture? What happens when one
group responsible for one API doesn't talk to the other because their
employers changed directions? I'm not convinced an "incremental all
the things" approach can benefit anyone, particularly one that demands
more of people.

I know this sounds like science fiction, but I believe the community will
appreciate steps in this direction...

I'm going to invoke PHK here and show my roots: ahem Quality happens
only when someone is responsible for it. A dramatic sweeping change
from one extreme to the other is just being along for the ride when
the pendulum swings. It's not time to throw in the towel on OpenStack
quite yet. We're all looking for an agreeable positive outcome that
will benefit all of our employers and their customers, but it doesn't
work to profess a Grand Unified Way when there needn't necessarily be
one. I thought there needed to be one, on my flight back from Boston.

Then I ran for PTL a third cycle. :)

Best regards,
Boris Pavlovic


--
Best,
Samuel Cassiba


responded Sep 14, 2017 by s_at_cassiba.com (1,200 points)  
0 votes

OK, I'll bite.

On 09/13/2017 08:56 PM, Boris Pavlovic wrote:
Jay,

All that you say exactly explains the reason why more and more companies
are leaving OpenStack.

All that I say? The majority of what I was "saying" was actually asking
you to back up your statements with actual proof points instead of
making wild conjectures.

Companies and actual end users care only about their own things and how
they can get their job done. They want a thing that they can run and
support easily and that resolves their problems.

No disagreement from me. That said, I fail to see what the above
statement has to do with anything I wrote.

They initially think that it's a good idea to take OpenStack as a
framework and build a sort of product on top of it because it's so open
and large and everybody uses it...

End users of OpenStack don't "build sort of product on top". End users
of OpenStack call APIs or use Horizon to launch VMs, create networks,
volumes, and whatever else those end users need for their own use cases.

Soon they understand that OpenStack has very complicated operations
because it's not designed to be a product but rather a framework, and that
the complexity of running OpenStack is similar to developing an in-house
solution. As time is spent, they have only a few options: move to a public
cloud or some other private cloud solution...

Deployers of OpenStack use the method of installing and configuring
OpenStack that best matches their cultural fit, experience and level of
comfort with underlying technologies and vendors (packages vs. source
vs. images, using a vendor distribution vs. going it alone, Chef vs.
Puppet vs. Ansible vs. SaltStack vs. Terraform, etc.). The way they
configure OpenStack services is entirely dependent on the use cases they
wish to support for their end users. And, to repeat myself, there is NO
SINGLE USE CASE for infrastructure services like OpenStack. Therefore
there is zero chance for a "standard deployment" of OpenStack becoming a
reality.

Just like there are myriad ways of deploying and configuring OpenStack,
there are myriad ways of deploying and configuring k8s. Why? Because
deploying and configuring highly distributed systems is a hard problem
to solve. And maintaining and operating those systems is an even harder
problem to solve.

We as a community can continue saying that the current OpenStack
approach is the best

Nobody is saying that the current OpenStack approach is the best. I
certainly have never said this. All that I have asked is that you
actually back up your statements with proof points that demonstrate how
and why a different approach to building software will lead to specific
improvements in quality or user experience.

and keep losing customers/users/community, or change something
drastically, like bringing technical leadership to the OpenStack Foundation
that is going to act like a benevolent dictator who focuses OpenStack's
effort on shrinking use cases, redesigning the architecture and moving
in the right direction...

What specifically is the "right direction" for OpenStack to take?
Please, as I asked you in the original response, provide actual details
other than "we should have a monolithic application". Provide an
argument as to how and why your direction is "right" for every user of
OpenStack.

When you say "technical leadership", what specifically are you wanting
to see?

I know this all sounds like a big change, but let's be honest, the current
situation doesn't look healthy...
By the way, almost all successful projects in open source have a
benevolent dictator and everybody is OK with how things work...

Who is the benevolent dictator of k8s? Who is the benevolent dictator of
MySQL? Of PostgreSQL? Of etcd?

You have a particularly myopic view of what "successful" is for open
source, IMHO.

Awesome news. I will keep this in mind when users (like GoDaddy) ask
Nova to never break anything ever and keep behaviour like scheduler
retries that represent giant technical debt.

I am writing here on my own behalf (using my personal email, if you
haven't noticed). Are we actually Open Source, or Enterprise Source?

Moreover, I don't think that what you say is going to be an issue for
GoDaddy, at least not soon, because we still can't upgrade, because it's an
NP-complete problem (even if you run just the core projects), which is what
my email was about, and I saw the same stories in a bunch of other companies.....

You continue to speak in hyperbole and generalizations. What
specifically about your recommendations will improve upgradability
and the upgrade story for OpenStack?

Yes, let's definitely go the opposite direction of microservices and
loosely coupled domains which is the best practices of software
development over the last two decades. While we're at it, let's
rewrite OpenStack projects in COBOL.

I really don't want to answer this provocation, because it shifts the
focus from the major topic. But I really can't stop myself ;)

  • There is no silver bullet in programming. For example, would Git or
    Linux be better if it were written using a microservices approach?

I am fully aware that there is no silver bullet in programming. That was
actually my entire point. It is you who continues to espouse various
opinions implying that there is some sort of silver-bullet solution
to OpenStack's problems.

You imply that monolithic architecture will magically solve problems
inherent in highly distributed systems.

You imply that having a benevolent dictator will magically result in a
productized infrastructure platform that meets everyone's needs.

And you imply that using a single deployment/packaging solution (Docker)
will magically solve all issues with upgrades.

Please answer the questions in my original response with some specific
details.

Thanks
-jay

Best regards,
Boris Pavlovic

On Wed, Sep 13, 2017 at 10:44 AM, Jay Pipes <jaypipes@gmail.com> wrote:

On 09/12/2017 06:53 PM, Boris Pavlovic wrote:

    Mike,

    Great initiative; unfortunately I wasn't able to attend it,
    however I have some thoughts...
    You can't simplify OpenStack just by fixing the few issues that are
    mostly described in the etherpad..

    The TC should work on shrinking the OpenStack use cases and moving
    towards a complete product (box) solution instead of a bunch of
    barely related pieces..


OpenStack is not a product. It's a collection of projects that
represent a toolkit for various cloud-computing functionality.

    *Simple things to improve: *
    /This is going to allow the community to work together, actually
    get feedback in a standard way, and incrementally improve quality. /

    1) There should be one and only one:
    1.1) deployment/packaging (maybe Docker) and upgrade mechanism used
    by everybody


Good luck with that :) The likelihood of the deployer/packager
community agreeing on a single solution is zero.

    1.2) monitoring/logging/tracing mechanism used by everybody


Also close to zero chance of agreeing on a single solution. Better
to focus instead on ensuring various service projects are
monitorable and transparent.

    1.3) way to configure all services (e.g. the k8s etcd way)


Are you referring to the way to configure k8s services or the way to
configure/setup an *application* that is running on k8s? If the
former, then there is *not* a single way of configuring k8s
services. If the latter, there isn't a single way of configuring
that either. In fact, despite Helm being a popular new entrant to
the k8s application package format discussion, k8s itself is
decidedly *not* opinionated about how an application is configured.
Use a CMDB, use Helm, use env variables, use confd, use whatever.
k8s doesn't care.
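A short sketch of what that agnosticism looks like from the application side: the app reads settings from an env var, a file (which a ConfigMap, confd, or a Helm template could equally well have written), or a built-in default, and the platform never dictates which source wins. All names here are invented for illustration:

```python
import json
import os

# Hypothetical config loader. Precedence: env var, then a JSON file
# (however it got there), then a built-in default.
DEFAULTS = {"listen_port": 8080, "log_level": "info"}

def load_config(path="app.json", environ=os.environ):
    cfg = dict(DEFAULTS)
    if os.path.exists(path):
        with open(path) as f:
            cfg.update(json.load(f))
    for key, default in DEFAULTS.items():
        env_key = "APP_" + key.upper()
        if env_key in environ:
            # Coerce the string env value to the default's type.
            cfg[key] = type(default)(environ[env_key])
    return cfg
```

The point being illustrated: the deployment tooling can change without the application caring, so "one way to configure everything" is not something the platform needs to impose.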

    2) Projects must have a standardized interface that allows these
    projects to be used in the same way.


Give examples of services that communicate over *non-standard*
interfaces. I don't know of any.

    3) Testing & R&D should be performed only against this standard
    deployment


Sorry, this is laughable. There will never be a standard deployment
because there are infinite use cases that infrastructure supports.
*Your* definition of what works for GoDaddy is decidedly different
from someone else's definition of what works for them.

    *Hard things to improve: *

    OpenStack projects were split in a far from ideal way, which leads
    to a bunch of gaps that we have now:
    1.1) Code & functional duplications:  Quotas, Schedulers,
    Reservations, Health checks, Logging, Tracing, ....


There is certainly code duplication in some areas, yes.

    1.2) Non-optimal workflows (booting a VM takes 400 DB requests)
    because data is stored in Cinder, Nova, Neutron....


Sorry, I call bullshit on this. It does not take 400 DB requests to
boot a VM. Also: the DB is not at all the bottleneck in the VM
launch process. You've been saying it is for years with no
justification to back you up. Pointing to a Rally scenario that
doesn't reflect a real-world usage of OpenStack services isn't useful.

    1.3) Lack of resources (as every project is doing the same work
    again and again on the same parts)


Provide specific examples please.

    What we can do:

    *1) Simplify internal communication *
    1.1) Instead of AMQP for internal communication inside projects,
    use just HTTP, load balancing & retries.


Prove to me that this would solve a problem. First describe what the
problem is, then show me that using AMQP is the source of that
problem, then show me that using HTTP requests would solve that problem.

    *2) Use API Gateway pattern *
    2.1) Provides one high-level API at one IP address with one client
    2.2) Allows a significant reduction in load on Keystone because
    tokens are checked only in the API gateway
    2.3) Simplifies communication between projects (they are now in a
    trusted network, no need to check tokens)


Why is this a problem for OpenStack projects to deal with? If you
want a single IP address for all APIs that your users consume, then
simply deploy all the public-facing services on a single set of web
servers and make each service's root endpoint be a subresource on
the root IP/DNS name.
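That suggestion can be sketched in WSGI terms: one root application dispatches by path prefix to per-service apps, so every API hangs off a single IP/DNS name. The prefixes and service responses below are illustrative placeholders, not real OpenStack endpoint layouts:

```python
def make_service(name):
    """Build a trivial per-service WSGI app (placeholder)."""
    def app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [f"{name}: {environ['PATH_INFO']}".encode()]
    return app

# Hypothetical mapping of subresource prefixes to service apps.
SERVICES = {
    "/compute": make_service("nova"),
    "/volume": make_service("cinder"),
    "/identity": make_service("keystone"),
}

def root_app(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    for prefix, app in SERVICES.items():
        if path.startswith(prefix):
            # Strip the prefix so each service sees its own root path.
            environ = dict(environ, PATH_INFO=path[len(prefix):] or "/")
            return app(environ, start_response)
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"unknown service"]
```

In practice the same effect is usually achieved with a reverse proxy in front of separately deployed services; the routing decision is a deployment choice, not something the projects themselves need to own.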

    *3) Fix the OpenStack split *
    3.1) Move common functionality into separate internal services:
    Scheduling, Logging, Monitoring, Tracing, Quotas, Reservations
    (it would be even better if these had a more or less
    monolithic architecture)


Yes, let's definitely go the opposite direction of microservices and
loosely coupled domains which is the best practices of software
development over the last two decades. While we're at it, let's
rewrite OpenStack projects in COBOL.

    3.2) Somehow deal with the fragmentation of resources, e.g. VM,
    volume and network data, which is heavily interconnected.


How are these things connected?

    *4) Don't be afraid to break things*
    Maybe it's time for OpenStack 2:

       * In any case, most people provide an API on top of OpenStack
    for usage
       * In any case, there is no standard and easy way to upgrade
    So basically we are not losing anything even if we make
    backward-incompatible changes and completely rethink the
    architecture and API.


Awesome news. I will keep this in mind when users (like GoDaddy) ask
Nova to never break anything ever and keep behaviour like scheduler
retries that represent giant technical debt.

-jay

    I know this sounds like science fiction, but I believe the community
    will appreciate steps in this direction...


    Best regards,
    Boris Pavlovic



responded Sep 14, 2017 by Jay_Pipes (59,760 points)   3 11 14
0 votes

Jay,

OK, I'll bite.

This doesn't sound like a constructive discussion. Bye Bye.

Best regards,
Boris Pavlovic

On Thu, Sep 14, 2017 at 8:50 AM, Jay Pipes jaypipes@gmail.com wrote:

OK, I'll bite.

On 09/13/2017 08:56 PM, Boris Pavlovic wrote:

Jay,

All that you say exactly explains the reason why more and more companies
are leaving OpenStack.

All that I say? The majority of what I was "saying" was actually asking
you to back up your statements with actual proof points instead of making
wild conjectures.

Companies and actually end users care only about their things and how can

they get their job done. They want thing that they can run and support
easily and that resolves their problems.

No disagreement from me. That said, I fail to see what the above statement
has to do with anything I wrote.

They initially think that it's a good idea to take a OpenStack as a

Framework and build sort of product on top of it because it's so open and
large and everybody uses...

End users of OpenStack don't "build sort of product on top". End users of
OpenStack call APIs or use Horizon to launch VMs, create networks, volumes,
and whatever else those end users need for their own use cases.

Soon they understand that OpenStack has very complicated operations

because it's not designed to be a product but rather framework and that the
complexity of running OpenStack is similar to development in house solution
and as time is spend they have only few options: move to public cloud or
some other private cloud solution...

Deployers of OpenStack use the method of installing and configuring
OpenStack that matches best their cultural fit, experience and level of
comfort with underlying technologies and vendors (packages vs. source vs.
images, using a vendor distribution vs. going it alone, Chef vs. Puppet vs.
Ansible vs. SaltStack vs. Terraform, etc). The way they configure OpenStack
services is entirely dependent on the use cases they wish to support for
their end users. And, to repeat myself, there is NO SINGLE USE CASE for
infrastructure services like OpenStack. Therefore there is zero chance for
a "standard deployment" of OpenStack becoming a reality.

Just like there are myriad ways of deploying and configuring OpenStack,
there are myriad ways of deploying and configuring k8s. Why? Because
deploying and configuring highly distributed systems is a hard problem to
solve. And maintaining and operating those systems is an even harder
problem to solve.

We as a community can continue saying that the current OpenStack approach

is the best

Nobody is saying that the current OpenStack approach is the best. I
certainly have never said this. All that I have asked is that you actually
back up your statements with proof points that demonstrate how and why a
different approach to building software will lead to specific improvements
in quality or user experience.

and keep loosing customers/users/community, or change something

drastically, like bring technical leadership to OpenStack Foundation
that is going to act like benevolent dictator that focuses OpenStack
effort on shrinking uses cases, redesigning architecture and moving
to the right direction...

What specifically is the "right direction" for OpenStack to take?
Please, as I asked you in the original response, provide actual details
other than "we should have a monolithic application". Provide an argument
as to how and why your direction is "right" for every user of OpenStack.

When you say "technical leadership", what specifically are you wanting to
see?

I know this all sounds like a big change, but let's be honest current
situation doesn't look healthy...
By the way, almost all successful projects in open source have benevolent
dictator and everybody is OK with that's how things works...

Who is the benevolent dictator of k8s? Who is the benevolent dictator of
MySQL? Of PostgreSQL? Of etcd?

You have a particularly myopic view of what "successful" is for open
source, IMHO.

Awesome news. I will keep this in mind when users (like GoDaddy) ask
Nova to never break anything ever and keep behaviour like scheduler
retries that represent giant technical debt.

I am writing here on my behalf (using my personal email, if you haven't
seen), are we actually Open Source? or Enterprise Source?

More over I don't think that what you say is going to be an issue for
GoDaddy, at least soon, because we still can't upgrade, because it's NP
complete problem (even if you run just core projects), which is what my
email was about, and I saw the same stories in bunch of other companies.....

You continue to speak in hyperbole and generalizations. What
specifically about your recommendations will improve the upgrade ability
and story for OpenStack?

Yes, let's definitely go the opposite direction of microservices and
loosely coupled domains which is the best practices of software
development over the last two decades. While we're at it, let's
rewrite OpenStack projects in COBOL.

I really don't want to answer on this provocation, because it shifts the
focus from major topic. But I really can't stop myself ;)

  • There is no sliver bullet in programming. For example, would Git or
    Linux be better if it was written using microservices approach?

I am fully aware that there is no silver bullet in programming. That was
actually my entire point. It is you that continues to espouse various
opinions that imply that there is some sort of silver bullet solution to
OpenStack's problems.

You imply that monolithic architecture will magically solve problems
inherent in highly distributed systems.

You imply that having a benevolent dictator will magically result in a
productized infrastructure platform that meets everyone's needs.

And you imply that using a single deployment/packaging solution (Docker)
will magically solve all issues with upgrades.

Please answer the questions in my original response with some specific
details.

Thanks
-jay

Best regards,

Boris Pavlovic

On Wed, Sep 13, 2017 at 10:44 AM, Jay Pipes <jaypipes@gmail.com > wrote:

On 09/12/2017 06:53 PM, Boris Pavlovic wrote:

    Mike,

    Great intiative, unfortunately I wasn't able to attend it,
    however I have some thoughts...
    You can't simplify OpenStack just by fixing few issues that are
    described in the etherpad mostly..

    TC should work on shrinking the OpenStack use cases and moving
    towards the product (box) complete solution instead of pieces of
    bunch barely related things..


OpenStack is not a product. It's a collection of projects that
represent a toolkit for various cloud-computing functionality.

    *Simple things to improve: *
    /This is going to allow community to work together, and actually
    get feedback in standard way, and incrementally improve quality. /

    1) There should be one and only one:
    1.1) deployment/packaging(may be docker) upgrade mechanism used
    by everybody


Good luck with that :) The likelihood of the deployer/packager
community agreeing on a single solution is zero.

    1.2) monitoring/logging/tracing mechanism used by everybody


Also close to zero chance of agreeing on a single solution. Better
to focus instead on ensuring various service projects are
monitorable and transparent.

    1.3) way to configure all services (e.g. k8 etcd way)


Are you referring to the way to configure k8s services or the way to
configure/setup an *application* that is running on k8s? If the
former, then there is *not* a single way of configuring k8s
services. If the latter, there isn't a single way of configuring
that either. In fact, despite Helm being a popular new entrant to
the k8s application package format discussion, k8s itself is
decidedly *not* opinionated about how an application is configured.
Use a CMDB, use Helm, use env variables, use confd, use whatever.
k8s doesn't care.

    2) Projects must have standardize interface that allows these
    projects to use them in same way.


Give examples of services that communicate over *non-standard*
interfaces. I don't know of any.

    3) Testing & R&D should be performed only against this standard
    deployment


Sorry, this is laughable. There will never be a standard deployment
because there are infinite use cases that infrastructure supports.
*Your* definition of what works for GoDaddy is decidedly different
from someone else's definition of what works for them.

    *Hard things to improve: *

    OpenStack projects were split in far from ideal way, which leads
    to bunch of gaps that we have now:
    1.1) Code & functional duplications:  Quotas, Schedulers,
    Reservations, Health checks, Loggign, Tracing, ....


There is certainly code duplication in some areas, yes.

    1.2) Non optimal workflows (booting VM takes 400 DB requests)
    because data is stored in Cinder,Nova,Neutron....


Sorry, I call bullshit on this. It does not take 400 DB requests to
boot a VM. Also: the DB is not at all the bottleneck in the VM
launch process. You've been saying it is for years with no
justification to back you up. Pointing to a Rally scenario that
doesn't reflect a real-world usage of OpenStack services isn't useful.

    1.3) Lack of resources (as every project is doing again and
    again same work about same parts)


Provide specific examples please.

    What we can do:

    *1) Simplify internal communication *
    1.1) Instead of AMQP for internal communication inside projects
    use just HTTP, load balancing & retries.


Prove to me that this would solve a problem. First describe what the
problem is, then show me that using AMQP is the source of that
problem, then show me that using HTTP requests would solve that

problem.

    *2) Use API Gateway pattern *
    3.1) Provide to use high level API one IP address with one client
    3.2) Allows to significant reduce load on Keystone because
    tokens are checked only in API gateway
    3.3) Simplifies communication between projects (they are now in
    trusted network, no need to check token)


Why is this a problem for OpenStack projects to deal with? If you
want a single IP address for all APIs that your users consume, then
simply deploy all the public-facing services on a single set of web
servers and make each service's root endpoint be a subresource on
the root IP/DNS name.

    *3) Fix the OpenStack split *
    3.1) Move common functionality to separated internal services:
    Scheduling, Logging, Monitoring, Tracing, Quotas, Reservations
    (it would be even better if this thing would have more or less
    monolithic architecture)


Yes, let's definitely go the opposite direction of microservices and
loosely coupled domains which is the best practices of software
development over the last two decades. While we're at it, let's
rewrite OpenStack projects in COBOL.

    3.2) Somehow deal with defragmentation of resources e.g. VM
    Volumes and Networks data which is heavily connected.


How are these things connected?

    *4) Don't be afraid to break things*
    Maybe it's time for OpenStack 2:

       * In any case most of people provide API on top of OpenStack
    for usage
       * In any case there is no standard and easy way to upgrade
    So basically we are not losing anything even if we do not
    backward compatible changes and rethink completely architecture
    and API.


Awesome news. I will keep this in mind when users (like GoDaddy) ask
Nova to never break anything ever and keep behaviour like scheduler
retries that represent giant technical debt.

-jay

    I know this sounds like science fiction, but I believe community
    will appreciate steps in this direction...


    Best regards,
    Boris Pavlovic

    On Tue, Sep 12, 2017 at 2:33 PM, Mike Perez <thingee@gmail.com> wrote:

         Hey all,

         The session is over. I’m hanging near registration if anyone
         wants to discuss things. Shout out to John for coming by on
         discussions with simplifying dependencies. I welcome more
         packagers to join the discussion.

         https://etherpad.openstack.org/p/simplifying-os

         —
         Mike Perez


         On September 12, 2017 at 11:45:05, Mike Perez (thingee@gmail.com) wrote:

         Hey all,

         Back in a joint meeting with the TC, UC, Foundation and The
         Board it was decided as an area of OpenStack to focus was
         Simplifying OpenStack. This intentionally was very broad so the
         community can kick start the conversation and help tackle some
         broad feedback we get.

         Unfortunately yesterday there was a low turn out in the
         Simplification room. A group of people from the Swift team,
         Kevin Fox and Swimingly were nice enough to start the
         conversation and give some feedback. You can see our initial
         ether pad work here:

         https://etherpad.openstack.org/p/simplifying-os

         There are efforts happening everyday helping with this goal,
         and our team has made some documented improvements that can be
         found in our report to the board within the ether pad. I would
         like to take a step back with this opportunity to have in
         person discussions for us to identify what are the area of
         simplifying that are worthwhile. I’m taking a break from the
         room at the moment for lunch, but I encourage people at 13:30
         local time to meet at the simplification room level b in the
         big thompson room. Thank you!


         Mike Perez


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
responded Sep 14, 2017 by boris_at_pavlovic.me

Hey all,

I would like to encourage people from different teams to add items of
things they learned at the PTG about simplifying their own projects.
Maybe we can see some themes that can contribute to community wide
goals?

https://etherpad.openstack.org/p/simplifying-os


Mike Perez

On September 12, 2017 at 15:33:14, Mike Perez (thingee@gmail.com) wrote:
Hey all,

The session is over. I’m hanging near registration if anyone wants to discuss things.
Shout out to John for coming by on discussions with simplifying dependencies. I welcome
more packagers to join the discussion.

https://etherpad.openstack.org/p/simplifying-os


Mike Perez

On September 12, 2017 at 11:45:05, Mike Perez (thingee@gmail.com) wrote:

Hey all,

Back in a joint meeting with the TC, UC, Foundation and The Board it was decided as an area
of OpenStack to focus was Simplifying OpenStack. This intentionally was very broad
so the community can kick start the conversation and help tackle some broad feedback
we get.

Unfortunately yesterday there was a low turn out in the Simplification room. A group
of people from the Swift team, Kevin Fox and Swimingly were nice enough to start the conversation
and give some feedback. You can see our initial ether pad work here:

https://etherpad.openstack.org/p/simplifying-os

There are efforts happening everyday helping with this goal, and our team has made some
documented improvements that can be found in our report to the board within the ether
pad. I would like to take a step back with this opportunity to have in person discussions
for us to identify what are the area of simplifying that are worthwhile. I’m taking a
break
from the room at the moment for lunch, but I encourage people at 13:30 local time to meet
at the simplification room level b in the big thompson room. Thank you!


Mike Perez


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
responded Sep 15, 2017 by Mike_Perez

On 15:53 Sep 12, Boris Pavlovic wrote:
Mike,

Great initiative; unfortunately I wasn't able to attend it, however I have
some thoughts...
You can't simplify OpenStack just by fixing the few issues that are
described in the etherpad...

Definitely agree that it's not going to be a few issues to fix. I purposely was
leading this effort being broad so we can take the comments of OpenStack being
complex, and have a conversation on what that actually means to people.

The feedback from people on the etherpad, as well as the in person discussions
have been valuable in getting those different perspectives. Unfortunately
participation was low, but I'm interested in seeing if we can identify some
themes to have some actual doable objectives.

I appreciate you taking the time in writing up your feedback on this Boris.
I will make sure it's included in the more polished summary I'll be giving the
TC and the Board to act on. Thank you!

--
Mike Perez


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

responded Sep 15, 2017 by Mike_Perez

Wading in a bit late as I've been off-list for a while, but I have thoughts here.

Excerpts from Jay Pipes's message of 2017-09-13 13:44:55 -0400:

On 09/12/2017 06:53 PM, Boris Pavlovic wrote:

Mike,

Great initiative; unfortunately I wasn't able to attend it, however I
have some thoughts...
You can't simplify OpenStack just by fixing the few issues that are
described in the etherpad...

TC should work on shrinking the OpenStack use cases and moving towards
a complete, boxed product solution instead of a bunch of barely related
pieces...

OpenStack is not a product. It's a collection of projects that represent
a toolkit for various cloud-computing functionality.

I think Boris was suggesting that making it a product would simplify it.

I believe there is some effort under way to try this, but my brain
has ceased to remember what that effort is called or how it is being
implemented. Something about common use cases and the exact mix of
projects + configuration to get there, and testing it? Help?

*Simple things to improve:*
This is going to allow the community to work together, actually get
feedback in a standard way, and incrementally improve quality.

1) There should be one and only one:
1.1) deployment/packaging (maybe Docker) and upgrade mechanism used by
everybody

Good luck with that :) The likelihood of the deployer/packager community
agreeing on a single solution is zero.

I think Boris is suggesting that the OpenStack development community
pick one to use, not the packaging and deployer community. The
only common thing dev has in this area is devstack, and that
has allowed dev to largely ignore issues they create because
they're not feeling the pain of the average user who is using
puppet/chef/ansible/tripleo/kolla/in-house-magic to deploy.

1.2) monitoring/logging/tracing mechanism used by everybody

Also close to zero chance of agreeing on a single solution. Better to
focus instead on ensuring various service projects are monitorable and
transparent.

I'm less enthused about this one as well. Monitoring, alerting, defining
business rules for what is broken and what isn't are very org-specific
things.

I also don't think OpenStack fails at this and there is plenty exposed
in clear ways for monitors to be created.

1.3) way to configure all services (e.g. k8 etcd way)

Are you referring to the way to configure k8s services or the way to
configure/setup an application that is running on k8s? If the former,
then there is not a single way of configuring k8s services. If the
latter, there isn't a single way of configuring that either. In fact,
despite Helm being a popular new entrant to the k8s application package
format discussion, k8s itself is decidedly not opinionated about how
an application is configured. Use a CMDB, use Helm, use env variables,
use confd, use whatever. k8s doesn't care.

We do have one way to configure things. Well.. two.

*) Startup-time things are configured in config files.
*) Run-time changeable things are in databases fronted by admin APIs/tools.
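To make the two tiers concrete, here is a generic sketch of that pattern
using only the Python standard library. This is *not* OpenStack's actual
oslo.config machinery; the option names and the `admin_set`/`admin_get`
helpers are made up for illustration.

```python
# Generic sketch of the two-tier configuration pattern described above.
# (Not OpenStack's real oslo.config machinery; all names are invented.)
import configparser
import sqlite3

# Tier 1: startup-time settings come from a config file, read once.
startup = configparser.ConfigParser()
startup.read_string("""
[DEFAULT]
bind_host = 0.0.0.0
bind_port = 8774
""")

# Tier 2: run-time changeable settings live in a database, fronted by
# an admin API (represented here by two plain functions).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE settings (key TEXT PRIMARY KEY, value TEXT)")

def admin_set(key, value):
    """Stand-in for an admin API call that changes a live setting."""
    db.execute("INSERT OR REPLACE INTO settings VALUES (?, ?)", (key, value))

def admin_get(key):
    """Stand-in for an admin API call that reads a live setting."""
    row = db.execute("SELECT value FROM settings WHERE key = ?",
                     (key,)).fetchone()
    return row[0] if row else None

admin_set("quota_instances", "20")
print(startup.get("DEFAULT", "bind_port"))  # startup-time setting: 8774
print(admin_get("quota_instances"))         # run-time setting: 20
```

Changing a Tier 1 value requires a restart of the service; a Tier 2 value
takes effect on the next read, which is exactly the split Clint describes.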

2) Projects must have a standardized interface that allows them to be
used in the same way.

Give examples of services that communicate over non-standard
interfaces. I don't know of any.

Agreed here too. I'd like to see a more clear separation between nova,
neutron, and cinder on the hypervisor, but the way they're coupled now
is standardized.

3) Testing & R&D should be performed only against this standard deployment

Sorry, this is laughable. There will never be a standard deployment
because there are infinite use cases that infrastructure supports.
Your definition of what works for GoDaddy is decidedly different from
someone else's definition of what works for them.

If there were a few well defined product definitions, there could be. It's
not laughable at all to me. devstack and the configs it creates are useful
for lightweight testing, but they're not necessarily representative of
the standard makeup of real-world clouds.

*Hard things to improve: *

OpenStack projects were split in far from ideal way, which leads to
bunch of gaps that we have now:
1.1) Code & functional duplication: Quotas, Schedulers, Reservations,
Health checks, Logging, Tracing, ....

There is certainly code duplication in some areas, yes.

I feel like this de-duplication has been moving at the slow-but-consistent
pace anyone can hope for since it was noticed and oslo was created.

It's now at the things that are really hard to de-dupe like quotas and policy.

1.2) Non-optimal workflows (booting a VM takes 400 DB requests) because
data is stored in Cinder, Nova, Neutron....

Sorry, I call bullshit on this. It does not take 400 DB requests to boot
a VM. Also: the DB is not at all the bottleneck in the VM launch
process. You've been saying it is for years with no justification to
back you up. Pointing to a Rally scenario that doesn't reflect a
real-world usage of OpenStack services isn't useful.

Separation of concerns often beats performance anyway. I do think this
was just Boris's optimization muscle flexing a little too hard.

1.3) Lack of resources (as every project is doing the same work on the
same parts again and again)

Provide specific examples please.

Glance is constantly teetering on the brink of being unmaintained. There
are, in fact, hundreds of open bugs in Nova, with 47 marked as High
importance. Though IMO, that is just the way software works: if we had
enough people to fix everything, we'd think of more things to break first.

What we can do:

*1) Simplify internal communication*
1.1) Instead of AMQP for internal communication inside projects, use
just HTTP, load balancing & retries.

Prove to me that this would solve a problem. First describe what the
problem is, then show me that using AMQP is the source of that problem,
then show me that using HTTP requests would solve that problem.

RabbitMQ is a bottleneck for all projects that use it. There aren't any
really well tested alternatives, and projects that need the scale are
turning to things like Cellsv2 to work around this problem.

Lately I've been wondering more why we don't just replace MySQL+RabbitMQ with
something like etcd3 or zookeeper. They notify you when things change and
offer enough scalability and resilience to failure that it just might work
without sharding being necessary below the thousands-of-hypervisors mark.

But, R&D time is short, so I accept our RabbitMQ overlord until such
time as I can plan a peaceful coup.
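For what it's worth, the "just HTTP, load balancing & retries" idea above
is mostly a generic retry-with-backoff loop. Here's an illustrative
stdlib-only sketch (nothing any OpenStack project actually ships); in a
real deployment the callable would be an HTTP request to one of several
load-balanced endpoints:

```python
# Hypothetical sketch of "HTTP + load balancing & retries": a generic
# retry helper with jittered exponential backoff.
import random
import time

def call_with_retries(func, attempts=3, base_delay=0.01):
    """Call func(), retrying on ConnectionError with exponential backoff."""
    for attempt in range(attempts):
        try:
            return func()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure
            # Jittered exponential backoff before the next try.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.0))

# Stand-in for a flaky downstream service: fails twice, then succeeds.
calls = {"n": 0}
def flaky_service():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service unavailable")
    return "ok"

print(call_with_retries(flaky_service))  # prints "ok" after two retries
```

The trade-off versus AMQP is that the caller now owns delivery semantics:
retries, timeouts, and idempotency all move into client code like this.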

*2) Use API Gateway pattern*
2.1) Provide one IP address with one client for the high-level API
2.2) Allows a significant reduction in load on Keystone, because tokens
are checked only in the API gateway
2.3) Simplifies communication between projects (they are now in a
trusted network, no need to check tokens)

Why is this a problem for OpenStack projects to deal with? If you want a
single IP address for all APIs that your users consume, then simply
deploy all the public-facing services on a single set of web servers and
make each service's root endpoint be a subresource on the root IP/DNS name.

We effectively get this from a user perspective with the single auth_url +
catalog. That said, it might simplify things for users if we didn't need
the catalog part on the user end. Just answer requests to any API from
one place that finds the thing in the catalog for you.
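That "one place that does the catalog lookup for you" amounts to a tiny
routing function. An illustrative sketch, where the catalog shape loosely
mirrors a Keystone service catalog but the gateway logic and the endpoint
addresses are made up:

```python
# Hypothetical gateway front end: clients use one base URL, and the
# gateway resolves the service endpoint from a Keystone-style catalog.
catalog = {
    "compute":  "http://10.0.0.1:8774/v2.1",   # nova (made-up address)
    "network":  "http://10.0.0.2:9696/v2.0",   # neutron (made-up address)
    "volumev3": "http://10.0.0.3:8776/v3",     # cinder (made-up address)
}

def route(path):
    """Map a gateway path like '/compute/servers' to a backend URL."""
    service, _, rest = path.lstrip("/").partition("/")
    if service not in catalog:
        raise KeyError("unknown service: %s" % service)
    return "%s/%s" % (catalog[service], rest)

print(route("/compute/servers"))  # http://10.0.0.1:8774/v2.1/servers
```

The client never sees the catalog; it only needs the gateway's address,
which is the user-experience simplification Boris was after.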

*3) Fix the OpenStack split *
3.1) Move common functionality to separate internal services:
Scheduling, Logging, Monitoring, Tracing, Quotas, Reservations (it would
be even better if these had a more or less monolithic architecture)

Yes, let's definitely go the opposite direction of microservices and
loosely coupled domains which is the best practices of software
development over the last two decades. While we're at it, let's rewrite
OpenStack projects in COBOL.

Actually I think he argued for micro-services. "... separated internal
services." and then argued that a monolithic implementation would
be better.

I personally like separation of concerns, and thus, microservices.

3.2) Somehow deal with the fragmentation of resources, e.g. VM, Volume,
and Network data, which are heavily interconnected.

How are these things connected?

4) Don't be afraid to break things
Maybe it's time for OpenStack 2:

  • In any case, most people provide their own API on top of OpenStack anyway
  • In any case, there is no standard, easy way to upgrade

So basically we lose nothing even if we make backward-incompatible
changes and completely rethink the architecture and API.

Awesome news. I will keep this in mind when users (like GoDaddy) ask
Nova to never break anything ever and keep behaviour like scheduler
retries that represent giant technical debt.

Please don't break anything in OpenStack 1.

Please lets break everything when we start OpenStack 2, but
provide compatibility layers and legacy services for those who are
green-field-challenged.


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
responded Sep 21, 2017 by Clint_Byrum
...