
[openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?


Hey all,

I realize now from the title of the other TripleO/Mistral thread [1] that
the discussion there may have gotten confused. I think using Mistral for
TripleO processes that are obviously workflows - stack deployment, node
registration - makes perfect sense. That thread is exploring practicalities
for doing that, and I think that's great work.

What I inappropriately started to address in that thread was a somewhat
orthogonal point that Dan asked in his original email, namely:

"what it might look like if we were to use Mistral as a replacement for the
TripleO API entirely"

I'd like to create this thread to talk about that; more of a 'should we'
than 'can we'. And to do that, I want to indulge in a thought exercise
stemming from an IRC discussion with Dan and others. All, please correct me
if I've misstated anything.

The IRC discussion revolved around one use case: deploying a Heat stack
directly from a Swift container. With an updated patch, the Heat CLI can
support this functionality natively. Then we don't need a TripleO API; we
can use Mistral to access that functionality, and we're done, with no need
for additional code within TripleO. And, as I understand it, that's the
true motivation for using Mistral instead of a TripleO API: avoiding custom
code within TripleO.

That's definitely a worthy goal... except from my perspective, the story
doesn't quite end there. A GUI needs additional functionality, which boils
down to: understanding the Heat deployment templates in order to provide
options for a user; and persisting those options within a Heat environment
file.
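
To make the 'persisting' half concrete, here is a rough sketch (the
helper name is hypothetical, and I'm assuming PyYAML; the real
interface is exactly what's under discussion):

    import yaml

    def save_user_options(parameters, registry, path='user-environment.yaml'):
        # Persist user-selected deployment options as a standard Heat
        # environment file, using its usual sections.
        environment = {
            'parameter_defaults': parameters,
            'resource_registry': registry,
        }
        with open(path, 'w') as f:
            yaml.safe_dump(environment, f, default_flow_style=False)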

Right away I think we hit a problem. Where does the code for 'understanding
options' go? Much of that understanding comes from the capabilities map
in tripleo-heat-templates [2]; it would make sense to me that responsibility
for that would fall to a TripleO library.

Still, perhaps we can limit the amount of TripleO code. So to give API
access to 'getDeploymentOptions', we can create a Mistral workflow.

Retrieve Heat templates from Swift -> Parse capabilities map
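
As a concrete sketch of the logic such a workflow would wrap (the
function name is hypothetical, assuming python-swiftclient and PyYAML):

    import yaml
    from swiftclient import client as swift_client

    def get_deployment_options(swift_url, token, container):
        # Retrieve the capabilities map from the Swift container...
        conn = swift_client.Connection(preauthurl=swift_url,
                                       preauthtoken=token)
        _, body = conn.get_object(container, 'capabilities_map.yaml')
        # ...and parse it into the options a GUI can present to a user.
        return yaml.safe_load(body)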

Which is fine-ish, except from an architectural perspective
'getDeploymentOptions' violates the abstraction layer between storage and
business logic, a problem that is compounded because 'getDeploymentOptions'
is not the only functionality that accesses the Heat templates and needs
exposure through an API. And, as has been discussed on a separate TripleO
thread, we're not even sure Swift is sufficient for our needs; one possible
consideration right now is allowing deployment from templates stored in
multiple places, such as the file system or git.

Are we going to have duplicate 'getDeploymentOptions' workflows for each
storage mechanism? If we consolidate the storage code within a TripleO
library, do we really need a workflow to call a single function? Is a
thin TripleO API that contains no additional business logic really so bad
at that point?

My gut reaction is to say that proposing Mistral in place of a TripleO API
is to look at the engineering concerns from the wrong direction. The
Mistral alternative comes from a desire to limit custom TripleO code at all
costs. I think that is an extremely dangerous attitude that leads to
compromises and workarounds that will quickly lead to a shaky code base
full of design flaws that make it difficult to implement or extend any
functionality cleanly.

I think the correct attitude is to simply look at the problem we're
trying to solve and find the correct architecture. For these get/set
methods that the API needs, it's pretty simple: storage -> some logic ->
a REST API. Adding a workflow engine on top of that is unneeded, and I
believe that means it's an incorrect solution.

Thanks,
Tzu-Mainn Chen

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-January/083757.html
[2] https://github.com/openstack/tripleo-heat-templates/blob/master/capabilities_map.yaml


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
asked Jan 13, 2016 in openstack-dev by Tzu-Mainn_Chen

71 Responses


On 01/13/2016 10:41 AM, Tzu-Mainn Chen wrote:
[...]
That's definitely a worthy goal... except from my perspective, the story
doesn't quite end there. A GUI needs additional functionality, which boils
down to: understanding the Heat deployment templates in order to provide
options for a user; and persisting those options within a Heat environment
file.

We also have things like profile matching (on review now), which don't
fall into any other project.

responded Jan 13, 2016 by Dmitry_Tantsur

On Wed, 2016-01-13 at 04:41 -0500, Tzu-Mainn Chen wrote:
[...]

The IRC discussion revolved around one use case: deploying a Heat stack
directly from a Swift container. With an updated patch, the Heat CLI can
support this functionality natively. Then we don't need a TripleO API; we
can use Mistral to access that functionality, and we're done, with no need
for additional code within TripleO. And, as I understand it, that's the
true motivation for using Mistral instead of a TripleO API: avoiding custom
code within TripleO.

The true motivation for investigating Mistral was to counter the
assertion that we needed to build our own REST API for workflows in the
TripleO API spec. This:

 "We need a REST API that supports the overcloud deployment workflow."

https://review.openstack.org/#/c/230432/13/specs/mitaka/tripleo-overcloud-deployment-api.rst

In doing that I was trying to wrap some of the existing code in
tripleo-common in Mistral actions, and it occurred to me that some
things (like the ability to deploy heat templates from Swift) might
benefit others more if we put them in the respective client libraries
(like heatclient) instead of carrying them in tripleo-common, where they
only benefit us. Especially since in the heatclient case it already had
a --template-object option to begin with
(https://bugs.launchpad.net/python-heatclient/+bug/1532326).

This follows a pattern we use with other components like Puppet:
instead of creating our own functionality in puppet-tripleo, we try as
much as possible to add the features we need to the individual modules
for each project (puppet-nova, puppet-swift, etc.).

In other words avoiding custom code within TripleO is just good
practice in general, not something that is specific to the Mistral vs
TripleO API discussion. When someone looks at TripleO as a project I
would very much like them to admire our architecture... not what we've
had to build to support it. As an example I'm very glad to see TripleO
be a bit more friendly to config management tooling... rather than
trying to build our own version (os-apply-config, etc.). And adding
code to the right place usually works out better for everyone and can
help build up the larger OpenStack community too.

That's definitely a worthy goal... except from my perspective, the story
doesn't quite end there. A GUI needs additional functionality, which
boils down to: understanding the Heat deployment templates in order to
provide options for a user; and persisting those options within a Heat
environment file.

TripleO API was previously described as what would be our "workflows
API" for TripleO: a place for common workflows that are used by the
CLI and UI together (one code path for all!).

What you describe here sounds like a different sort of idea: a
GUI-helper API. FWIW I think it is totally fine for a GUI to
maintain its own API for caching reasons etc. if the members of that
team find it to be a requirement. Please don't feel that any of the
workflow API comparisons block requirements for building a GUI,
though.

Right away I think we hit a problem. Where does the code for
'understanding options' go? Much of that understanding comes from the
capabilities map in tripleo-heat-templates [2]; it would make sense to
me that responsibility for that would fall to a TripleO library.

Still, perhaps we can limit the amount of TripleO code. So to give API
access to 'getDeploymentOptions', we can create a Mistral workflow.

  Retrieve Heat templates from Swift -> Parse capabilities map

I have no issues with this workflow if it helps the UI team to create
say a cached copy of the capabilities map.

Similar to the above heatclient discussion I would ask why put this
into a TripleO library if we don't have to. Could this code perhaps be
made into a more generic Heat feature? If not immediately, then perhaps
eventually? What I mean is... it would be totally fine to write this
code in say TripleO common with an eye towards moving it eventually to
someplace better like Heat API (proper) or heatclient perhaps.

Regardless of where the code for parsing the capabilities map lives
though I think it would be a fine use of a Mistral workflow.

Which is fine-ish, except from an architectural perspective
'getDeploymentOptions' violates the abstraction layer between storage
and business logic, a problem that is compounded because
'getDeploymentOptions' is not the only functionality that accesses the
Heat templates and needs exposure through an API. And, as has been
discussed on a separate TripleO thread, we're not even sure Swift is
sufficient for our needs; one possible consideration right now is
allowing deployment from templates stored in multiple places, such as
the file system or git.

Are we going to have duplicate 'getDeploymentOptions' workflows for
each storage mechanism? If we consolidate the storage code within a
TripleO library, do we really need a workflow to call a single
function? Is a thin TripleO API that contains no additional business
logic really so bad at that point?

Do we really need our TripleO libraries to support multiple storage
backends at all right now? As a cloud deployer I can store my heat
templates wherever I like... git, svn, cvs... whatever. When I'm ready
to deploy I would then use python-tripleoclient or the UI to "upload"
my local copy of the templates into Swift. At this point I could
either use the UI or python-tripleoclient to drive the rest of the
deployment to completion. The fact that the TripleO workflows use Swift
for storage is actually abstracted from me, such that I don't really
care much.
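
As a sketch of what that "upload" step amounts to (the endpoint and
token are placeholders, assuming python-swiftclient):

    import os
    from swiftclient import client as swift_client

    SWIFT_URL = 'http://undercloud:8080/v1/AUTH_tripleo'  # placeholder
    TOKEN = 'a-keystone-token'                            # placeholder

    conn = swift_client.Connection(preauthurl=SWIFT_URL, preauthtoken=TOKEN)
    conn.put_container('overcloud')
    # Push a local template tree into Swift; from here on, storage is
    # abstracted away from the deployer.
    for root, _, files in os.walk('tripleo-heat-templates'):
        for name in files:
            path = os.path.join(root, name)
            with open(path) as f:
                conn.put_object('overcloud', os.path.relpath(path), f.read())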

My gut reaction is to say that proposing Mistral in place of a TripleO
API is to look at the engineering concerns from the wrong direction.
The Mistral alternative comes from a desire to limit custom TripleO
code at all costs. I think that is an extremely dangerous attitude that
leads to compromises and workarounds that will quickly lead to a shaky
code base full of design flaws that make it difficult to implement or
extend any functionality cleanly.

Similar to what I said above, limiting code in TripleO isn't necessarily
the primary desire here, I think. It is more a question of whether we
can use a generic workflow API like Mistral, or should go and build our
own.

Whether more or less of the code lives in a TripleO common library is
something we can debate and take a step at a time, I think.

I think the correct attitude is to simply look at the problem we're
trying to solve and find the correct architecture. For these get/set
methods that the API needs, it's pretty simple: storage -> some logic ->
a REST API. Adding a workflow engine on top of that is unneeded, and I
believe that means it's an incorrect solution.

Totally agree on finding the correct architecture. What you suggest as
a "dangerous attitude" is actually just trying to ask the questions that
to me have not yet been asked - trying out tools and solutions we may
not have tried yet. (FWIW I found the "dangerous attitude" bit to be a
bit unconstructive to this conversation and would prefer to leave that
verbiage elsewhere.)

If you feel strongly that this particular case of parsing the
capabilities map and managing config settings is not suitably
implemented with a generic workflow tool then perhaps we implement both
and see how it works out. Continue work on creating TripleO API...

I would however like to perhaps see us scale back selling TripleO API
as our generic workflow API. Perhaps there is still a niche where it
helps with some things, and I've got no issue with pursuing this idea. I
will likely continue to ask questions about putting code into the
right place (like whether some of this could be implemented as a Heat API
feature instead). Perhaps this just comes in time, after we've refined
our deployment workflows a bit to accommodate both UI and CLI.

Dan

responded Jan 13, 2016 by Dan_Prince

----- Original Message -----
[...]

I would however like to perhaps see us scale back selling TripleO API
as our generic workflow API. Perhaps there is still a niche where it
helps with some things, and I've got no issue with pursuing this idea. I
will likely continue to ask questions about putting code into the
right place (like whether some of this could be implemented as a Heat API
feature instead). Perhaps this just comes in time, after we've refined
our deployment workflows a bit to accommodate both UI and CLI.

That seems more than fair (and I apologize for the 'dangerous attitude'
comment, which I did not intend to be construed as an attack). I think
I'll give the API spec one more polish and then re-submit, with the hope
that the discussion here brings us much closer to consensus as to the
path forward. Thanks!

Mainn

responded Jan 13, 2016 by Tzu-Mainn_Chen

On Wed, Jan 13, 2016 at 04:41:28AM -0500, Tzu-Mainn Chen wrote:
[...]

Right away I think we hit a problem. Where does the code for 'understanding
options' go? Much of that understanding comes from the capabilities map
in tripleo-heat-templates [2]; it would make sense to me that responsibility
for that would fall to a TripleO library.

Still, perhaps we can limit the amount of TripleO code. So to give API
access to 'getDeploymentOptions', we can create a Mistral workflow.

Retrieve Heat templates from Swift -> Parse capabilities map

Which is fine-ish, except from an architectural perspective
'getDeploymentOptions' violates the abstraction layer between storage and
business logic, a problem that is compounded because 'getDeploymentOptions'
is not the only functionality that accesses the Heat templates and needs
exposure through an API. And, as has been discussed on a separate TripleO
thread, we're not even sure Swift is sufficient for our needs; one possible
consideration right now is allowing deployment from templates stored in
multiple places, such as the file system or git.

Actually, that whole capabilities map thing is a workaround for a missing
feature in Heat, which I have proposed, but am having a hard time reaching
consensus on within the Heat community:

https://review.openstack.org/#/c/196656/

Given that is a large part of what's anticipated to be provided by the
proposed TripleO API, I'd welcome feedback and collaboration so we can move
that forward, vs solving only for TripleO.

Are we going to have duplicate 'getDeploymentOptions' workflows for each
storage mechanism? If we consolidate the storage code within a TripleO
library, do we really need a workflow to call a single function? Is a
thin TripleO API that contains no additional business logic really so bad
at that point?

Actually, this is an argument for making the validation part of the
deployment a workflow - then the interface with the storage mechanism
becomes more easily pluggable vs baked into an opaque-to-operators API.

E.g., in the long term, imagine the capabilities feature exists in Heat;
you then have a pre-deployment workflow that looks something like:

  1. Retrieve golden templates from a template store
  2. Pass templates to Heat, get the capabilities map which defines
     features the user must/may select
  3. Prompt the user for input to select required capabilities
  4. Pass user input to Heat, validate the configuration, get a mapping
     of required options for the selected capabilities (nested validation)
  5. Push the validated pieces ("plan" in TripleO API terminology) to a
     template store

This is a pre-deployment validation workflow, and it's a superset of the
getDeploymentOptions feature you refer to.
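
In rough Python, the shape of it (every call here is hypothetical,
standing in for Heat and template-store interfaces that don't all
exist yet):

    def pre_deployment_workflow(store, heat, user):
        # 1. Retrieve golden templates from a template store
        templates = store.get_templates('golden')
        # 2. Ask Heat for the capabilities map (the proposed feature)
        capabilities = heat.get_capabilities(templates)
        # 3. Prompt the user to select required capabilities
        selection = user.choose(capabilities)
        # 4. Validate the selection and get the options it requires
        options = heat.validate(templates, selection)
        # 5. Push the validated pieces (the "plan") to a template store
        store.put_plan(templates, selection, options)
        return options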

Historically, TripleO has had a major gap wrt workflow, meaning that we've
always implemented it either via shell scripts (tripleo-incubator) or
python code (tripleo-common/tripleo-client, potentially TripleO API).

So I think what Dan is exploring is how we avoid reimplementing a
workflow engine when a project already exists that does exactly that.

My gut reaction is to say that proposing Mistral in place of a TripleO API
is to look at the engineering concerns from the wrong direction. The
Mistral alternative comes from a desire to limit custom TripleO code at all
costs. I think that is an extremely dangerous attitude that leads to
compromises and workarounds that will quickly lead to a shaky code base
full of design flaws that make it difficult to implement or extend any
functionality cleanly.

I think it's not about limiting TripleO code at all costs, it's about
learning from past mistakes, where long-term TripleO specific workarounds
for gaps in other projects have become serious technical debt.

For example, the old merge.py approach to template composition was a
workaround for missing heat features, then Tuskar was another workaround
(arguably) for missing heat features, and now we're again proposing a
long-term workaround for some missing heat features, some of which are
already proposed (referring to the API for capabilities resolution).

I think the correct attitude is to simply look at the problem we're
trying to solve and find the correct architecture. For these get/set
methods that the API needs, it's pretty simple: storage -> some logic ->
a REST API. Adding a workflow engine on top of that is unneeded, and I
believe that means it's an incorrect solution.

What may help is if we can work through the proposed API spec, and
identify which calls can reasonably be considered workflows vs those where
it's really just proxying an API call with some logic?

When we have a defined list of "not workflow" API requirements, it'll
probably be much easier to rationalize over the value of a bespoke API vs
mistral?

Steve


responded Jan 14, 2016 by Steven_Hardy

----- Original Message -----
[...]

I think it's not about limiting TripleO code at all costs, it's about
learning from past mistakes, where long-term TripleO specific workarounds
for gaps in other projects have become serious technical debt.

For example, the old merge.py approach to template composition was a
workaround for missing heat features, then Tuskar was another workaround
(arguably) for missing heat features, and now we're again proposing a
long-term workaround for some missing heat features, some of which are
already proposed (referring to the API for capabilities resolution).

This is an important point, thanks for bringing it up!

I think that I might have a different understanding of the lessons to be
learned from Tuskar's limitations. There were actually two issues that
arose. The first was that Tuskar was far too specific in how it tried to
manipulate Heat pieces. The second - and more serious, from my point of
view - was that there literally was no way for an API-based GUI to
perform the tasks it needed to in order to do the correct manipulation
(environment selection), because there was no Heat API in place for doing
so.

My takeaway from the first issue was that any potential TripleO API in
the future needed to be very low-level, a light skimming on top of the
OpenStack services it uses. The plan creation process that the
tripleo-common library spec describes is just that: a couple of
methods designed to allow a user to create an environment file, which
can then be used for deploying the overcloud.

My takeaway from the second issue was a bit more complicated. A
required feature was missing, and although the proper functionality
needed to enable it in Heat was identified, it was unclear (and remains
unclear) whether that feature truly belonged in Heat. What does a GUI
do then? The GUI could take a cycle off, which is essentially what
happened here; I don't think that's a reasonable solution. We could
hope that we arrive at a 100% foolproof and immutable deployment solution
in the future, arriving at a point where no new features would ever be
needed; I don't think that's a practical hope.

The third solution that came to mind was the idea of creating the
TripleO API. It gives us a place to add in missing features if needed.
And I think it also gives us a useful layer of indirection. The
consumers of TripleO want a stable API, so that a new release doesn't
force them to do a massive update of their code; the TripleO API would
provide that, allowing us to switch code behind the scenes (say, if
the capabilities feature lands in Heat).

I think I kinda view TripleO as a 'best practices' project. Using
OpenStack is a confusing experience, with a million different options
and choices to make. TripleO provides users with an excellent guide.
But the problem is that best practices change, and I think that
perceived instability is dangerous for adoption of TripleO.

So having a TripleO library and its associated API be a 'best practices'
library makes sense to me. It gives consumers a stable platform upon
which to use TripleO, while allowing us to be flexible behind the scenes.
The 'best practice' for Heat capabilities right now is a workaround,
because it hasn't been judged to be suitable to go into Heat itself.
If that changes, we get to shift as well - and all of these changes are
invisible to the API consumer.
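
A toy illustration of the indirection I mean (all names here are
hypothetical):

    class CapabilitiesMapBackend(object):
        # Today's best practice: parse capabilities_map.yaml ourselves.
        def options_for(self, plan):
            return plan.get('capabilities_map', {})

    class HeatCapabilitiesBackend(object):
        # Tomorrow's best practice, if the feature lands in Heat proper.
        def options_for(self, plan):
            raise NotImplementedError('pending the Heat capabilities API')

    class DeploymentOptionsAPI(object):
        # The consumer-facing call stays stable; only the backend
        # swaps out as best practices change.
        def __init__(self, use_native_heat=False):
            self.backend = (HeatCapabilitiesBackend() if use_native_heat
                            else CapabilitiesMapBackend())

        def get_deployment_options(self, plan):
            return self.backend.options_for(plan)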

Mainn

responded Jan 14, 2016 by Tzu-Mainn_Chen

On Thu, 2016-01-14 at 16:04 -0500, Tzu-Mainn Chen wrote:

[...]

The third solution that came to mind was the idea of creating the
TripleO API. It gives us a place to add in missing features if needed.
And I think it also gives us a useful layer of indirection. The
consumers of TripleO want a stable API, so that a new release doesn't
force them to do a massive update of their code; the TripleO API would
provide that, allowing us to switch code behind the scenes (say, if
the capabilities feature lands in Heat).

I think the above example would work equally well in a generic workflow
sort of tool. You could imagine that the inputs to the workflow remain
the same... but rather than running our own code in some interim step
we simply call Heat directly for the capabilities map feature.

So regardless of whether we build our own API or use a generic workflow
tool, I think we still have what I would call a "release valve" to let us
inject some custom code (actions) into the workflow. Like we discussed
last week on IRC I would like to minimize the number of custom actions
we have (with an eye towards things living in the upstream OpenStack
projects), but it is fine to do this either way, and it would work
equally well with Mistral and TripleO API.
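
Such a "release valve" custom action might look roughly like this (a
hypothetical action, and I'm assuming Mistral's current custom-action
interface here):

    import yaml

    from mistral.actions import base

    class ParseCapabilitiesAction(base.Action):
        # Custom TripleO logic injected into an otherwise generic
        # Mistral workflow; ideally this logic moves upstream later.
        def __init__(self, capabilities_yaml):
            self.capabilities_yaml = capabilities_yaml

        def run(self):
            return yaml.safe_load(self.capabilities_yaml)

        def test(self):
            # Dry-run mode; nothing to simulate here.
            return {}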

I think I kinda view TripleO as a 'best practices' project. Using
OpenStack is a confusing experience, with a million different options
and choices to make. TripleO provides users with an excellent guide.
But the problem is that best practices change, and I think that
perceived instability is dangerous for adoption of TripleO.

So having a TripleO library and its associated API be a 'best practices'
library makes sense to me. It gives consumers a stable platform upon
which to use TripleO, while allowing us to be flexible behind the
scenes. The 'best practice' for Heat capabilities right now is a
workaround, because it hasn't been judged to be suitable to go into
Heat itself. If that changes, we get to shift as well - and all of
these changes are invisible to the API consumer.

I mentioned this in my "Driving workflows with Mistral" thread, but with
regards to stability I view, say, Heat's v1 API or Mistral's v2 API as
both being way more stable than what we could ever achieve with TripleO
API. The real trick to API stability with something like Heat or
Mistral is how we manage the inputs and outputs to Stacks and Workflows
themselves. So long as we are mindful of this I can't imagine an end user
(say a GUI writer or whoever) would really care whether they POST to
Mistral or something we've created. The nice thing about using other
OpenStack projects like Heat or Mistral is that they very likely have
better community and documentation around these things than we would
ever have.

The more I look at using Mistral for some of the cases that have been
brought up, the more it seems to make sense for a lot of the workflows
we need. I don't believe we can achieve better stability by creating
what sounds more and more like a shim/proxy API rather than using the
versioned APIs that OpenStack already provides.
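
For instance, kicking off a deployment workflow through the versioned
Mistral client might look like this (the workflow name and inputs are
hypothetical, assuming python-mistralclient):

    from mistralclient.api import client as mistral_client

    TOKEN = 'a-keystone-token'  # placeholder
    mistral = mistral_client.client(
        mistral_url='http://undercloud:8989/v2', auth_token=TOKEN)
    # The stability contract lives in the workflow's declared
    # inputs/outputs, not in a bespoke REST layer.
    execution = mistral.executions.create(
        'tripleo.deploy_plan', workflow_input={'container': 'overcloud'})
    print(execution.state)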

There may be some corner cases where a "GUI helper" API comes into play
for some sort of caching or something. I'm not blocking anyone from
creating these sorts of features if they need them. And again if it is
something that could be added to an upstream OpenStack project like
Heat or Mistral I would look there first. So perhaps Zaqar for
websockets instead of rolling our own, this sort of thing.

What does concern me is that we are overstating what TripleO API should
actually contain, should we choose to pursue it. Initially it was
positioned as the "TripleO workflow API". I think we now agree that we
probably shouldn't put all of our workflows behind it. So if our stance
has changed, would it make sense to compile a new list of what we
believe belongs behind our own TripleO API vs. what we consider
workflows?

Dan

responded Jan 15, 2016 by Dan_Prince

----- Original Message -----
On Thu, 2016-01-14 at 16:04 -0500, Tzu-Mainn Chen wrote:

----- Original Message -----

On Wed, Jan 13, 2016 at 04:41:28AM -0500, Tzu-Mainn Chen wrote:


Right away I think we hit a problem.  Where does the code for
'understanding options' go?  Much of that understanding comes from the
capabilities map in tripleo-heat-templates [2]; it would make sense to
me that responsibility for that would fall to a TripleO library.

Still, perhaps we can limit the amount of TripleO code.  So to give API
access to 'getDeploymentOptions', we can create a Mistral workflow.

  Retrieve Heat templates from Swift -> Parse capabilities map

Which is fine-ish, except from an architectural perspective
'getDeploymentOptions' violates the abstraction layer between storage
and business logic, a problem that is compounded because
'getDeploymentOptions' is not the only functionality that accesses the
Heat templates and needs exposure through an API.  And, as has been
discussed on a separate TripleO thread, we're not even sure Swift is
sufficient for our needs; one possible consideration right now is
allowing deployment from templates stored in multiple places, such as
the file system or git.

Actually, that whole capabilities map thing is a workaround for a
missing feature in Heat, which I have proposed, but am having a hard
time reaching consensus on within the Heat community:

https://review.openstack.org/#/c/196656/

Given that this is a large part of what's anticipated to be provided by
the proposed TripleO API, I'd welcome feedback and collaboration so we
can move that forward, vs solving only for TripleO.
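
For anyone following along without the templates handy, here's roughly
the shape of the data in question; the excerpt is trimmed and
illustrative rather than a faithful copy of capabilities-map.yaml:

    import yaml

    # Trimmed, illustrative excerpt of a capabilities map; the real file
    # in tripleo-heat-templates carries more topics and metadata.
    CAPABILITIES_MAP = """
    root_template: overcloud.yaml
    topics:
      - title: Basic Configuration
        environment_groups:
          - title: Network Isolation
            environments:
              - file: environments/network-isolation.yaml
                title: Enable network isolation
    """

    def list_environments(raw_yaml):
        # Flatten the map into the environment files a GUI could offer.
        data = yaml.safe_load(raw_yaml)
        for topic in data.get('topics', []):
            for group in topic.get('environment_groups', []):
                for env in group.get('environments', []):
                    yield env['file'], env['title']

    for path, title in list_environments(CAPABILITIES_MAP):
        print(path, '-', title)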

Are we going to have duplicate 'getDeploymentOptions' workflows for each
storage mechanism?  If we consolidate the storage code within a TripleO
library, do we really need a workflow to call a single function?  Is a
thin TripleO API that contains no additional business logic really so
bad at that point?

Actually, this is an argument for making the validation part of the
deployment a workflow - then the interface with the storage mechanism
becomes more easily pluggable vs baked into an opaque-to-operators API.

E.g., in the long term, imagine the capabilities feature exists in Heat;
you then have a pre-deployment workflow that looks something like:

  1. Retrieve golden templates from a template store
  2. Pass templates to Heat, get a capabilities map which defines
     features the user must/may select
  3. Prompt user for input to select required capabilities
  4. Pass user input to Heat, validate the configuration, get a mapping
     of required options for the selected capabilities (nested
     validation)
  5. Push the validated pieces ("plan" in TripleO API terminology) to a
     template store

This is a pre-deployment validation workflow, and it's a superset of the
getDeploymentOptions feature you refer to.
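
In client-library terms, the skeleton of that workflow might look
something like the sketch below. It assumes the capabilities feature
existed in Heat (it doesn't yet, so steps 2 and 4 are approximated with
today's template validation call), and endpoints, credentials and
object names are placeholders:

    from heatclient import client as heat_client
    from swiftclient import client as swift_client

    # Authenticated clients; endpoints/credentials are placeholders.
    swift = swift_client.Connection(
        authurl='http://undercloud:5000/v2.0', user='admin', key='PASSWORD')
    heat = heat_client.Client(
        '1', endpoint='http://undercloud:8004/v1/TENANT_ID',
        token='KEYSTONE_TOKEN')

    def pre_deployment_validate(container, user_env):
        # 1. Retrieve the golden template from the template store (Swift)
        _headers, template = swift.get_object(container, 'overcloud.yaml')

        # 2./4. The closest thing today: ask Heat to validate the template
        #       plus the user's chosen environment; the capabilities
        #       lookup itself is the missing feature discussed above.
        heat.stacks.validate(template=template, environment=user_env)

        # 5. Push the validated plan back to the template store
        swift.put_object(container, 'user-environment.yaml',
                         contents=user_env)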

Historically, TripleO has had a major gap wrt workflow, meaning that
we've always implemented it either via shell scripts (tripleo-incubator)
or python code (tripleo-common/tripleo-client, potentially TripleO API).

So I think what Dan is exploring is, how do we avoid reimplementing a
workflow engine, when a project exists which already does that.


I think it's not about limiting TripleO code at all costs, it's about
learning from past mistakes, where long-term TripleO-specific
workarounds for gaps in other projects have become serious technical
debt.

For example, the old merge.py approach to template composition was a
workaround for missing heat features, then Tuskar was another workaround
(arguably) for missing heat features, and now we're again proposing a
long-term workaround for some missing heat features, some of which are
already proposed (referring to the API for capabilities resolution).

This is an important point, thanks for bringing it up!

I think that I might have a different understanding of the lessons to be
learned from Tuskar's limitations.  There were actually two issues that
arose.  The first was that Tuskar was far too specific in how it tried
to manipulate Heat pieces.  The second - and more serious, from my point
of view - was that there literally was no way for an API-based GUI to
perform the tasks it needed to in order to do the correct manipulation
(environment selection), because there was no Heat API in place for
doing so.

My takeaway from the first issue was that any potential TripleO API in
the future needed to be very low-level, a light skimming on top of the
OpenStack services it uses.  The plan creation process that the
tripleo-common library spec describes is just that: a couple of methods
designed to allow a user to create an environment file, which can then
be used for deploying the overcloud.
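
To show how little that is, here is a minimal sketch of what such a
method would emit - a plain Heat environment file (the parameter names
and the registry mapping below are illustrative, not a real plan):

    import yaml

    # The 'plan creation' methods essentially persist the user's choices
    # as a Heat environment file; keys below are illustrative.
    user_choices = {
        'parameter_defaults': {
            'ControllerCount': 3,
            'ComputeCount': 2,
        },
        'resource_registry': {
            # map an (illustrative) resource type to its implementation
            'OS::TripleO::Example': 'templates/example.yaml',
        },
    }

    with open('user-environment.yaml', 'w') as f:
        yaml.safe_dump(user_choices, f, default_flow_style=False)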

My takeaway from the second issue was a bit more complicated.  A
required feature was missing, and although the proper functionality
needed to enable it in Heat was identified, it was unclear (and remains
unclear) whether that feature truly belonged in Heat.  What does a GUI
do then?  The GUI could take a cycle off, which is essentially what
happened here; I don't think that's a reasonable solution.  We could
hope that we arrive at a 100% foolproof and immutable deployment
solution in the future, arriving at a point where no new features would
ever be needed; I don't think that's a practical hope.

The third solution that came to mind was the idea of creating the
TripleO API.  It gives us a place to add in missing features if needed.
And I think it also gives us a useful layer of indirection.  The
consumers of TripleO want a stable API, so that a new release doesn't
force them to do a massive update of their code; the TripleO API would
provide that, allowing us to switch code behind the scenes (say, if the
capabilities feature lands in Heat).

I think the above example would work equally well in a generic workflow
sort of tool. You could imagine that the inputs to the workflow remain
the same... but rather than running our own code in some interim step
we simply call Heat directly for the capabilities map feature.

So regardless of whether we build our own API or use a generic workflow
tool, I think we still have what I would call a "release valve" to let
us inject some custom code (actions) into the workflow. Like we
discussed last week on IRC, I would like to minimize the number of
custom actions we have (with an eye towards things living in the
upstream OpenStack projects) but it is fine to do this either way, and
it would work equally well w/ either Mistral or a TripleO API.


I wonder if it would be helpful to get operator feedback here - show
them the advantages/disadvantages of both options and get a sense of
what might be useful/necessary for them to use TripleO effectively?

Mainn




responded Jan 18, 2016 by Tzu-Mainn_Chen

On 18.1.2016 19:49, Tzu-Mainn Chen wrote:
I wonder if it would be helpful to get operator feedback here - show
them the advantages/disadvantages of both options and get a sense of
what might be useful/necessary for them to use TripleO effectively?

(I'm going off on a tangent a bit, but please bear with me; I'm using
all that to support the point in the end. The implications of building a
TripleO API touch on various topics.)

Yes, I think we should gather operator feedback. We already got some,
but we should gather more whenever possible.

One kind of (negative) feedback I've heard is that overcloud management
is too much of a "blackbox" compared to what operators are used to. The
feedback I recall was that it's hard to tell what is going to happen
when running an overcloud stack update, and that we cannot re-execute
the software config management independently.

Building another umbrella API to rule the already largely umbrella-like
deployment process (think of all the responsibilities that lie within
the tripleo-heat-templates codebase, and within the single 'overcloud'
Heat stack) would probably make matters more blackboxy and go further in
the direction of "I feel like I don't know what's happening to my cloud
when I use the management tool".

What I think could improve the situation for operators is trying to
chunk up what we already have into smaller, more independently operable
parts. The split-stack approach already discussed in the TripleO meeting
and on #tripleo could help with this. Essentially, separating our
hardware management from our software config management - being able to
re-apply software configuration without being afraid of having nodes
accidentally re-provisioned from scratch.

In general I think TripleO could be a little more "UNIXy" - composed of
smaller parts that make sense on their own, transparent to the operator,
more modular and modifiable, and in effect more receptive to how varied
real-world deployment environments are (various Neutron and Cinder
plugins, Keystone backends, composable sets of services, custom node
types, etc.).

A workflow persisted in a data-like fashion is probably more modifiable
by the operator than the Python code of a REST API. We've seen hard
assumptions cause problems in the past. (Think of the unoverridable CLI
parameters issue we used to have, and how we had to move to a model of
"the CLI provides its values, but you can always override them or
provide additional ones with an environment file if needed", which we
now use extensively.) I'm a bit concerned that building a new REST API
on top of everything would impose new rigid assumptions that could cause
more harm than good in the end. I'm concerned that it would be usable
only for very basic deployments, while the world of real deployments has
its own pace and requirements not fitting the "best practices" as
defined by the API, having to bypass the API far too often and slowly
pushing it into abandonment over time.
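
That override model is essentially "later values win" when the
environment files are merged; a toy illustration (the parameter names
are just examples):

    # Toy illustration of the override model: the CLI's defaults come
    # first and operator-supplied environment files are merged on top,
    # so the operator's values win. Parameter names are examples only.
    cli_defaults = {'parameter_defaults': {'NtpServer': 'pool.ntp.org'}}
    operator_env = {'parameter_defaults': {'NtpServer': '10.0.0.1',
                                           'TimeZone': 'UTC'}}

    merged = dict(cli_defaults['parameter_defaults'])
    merged.update(operator_env['parameter_defaults'])
    print(merged)  # {'NtpServer': '10.0.0.1', 'TimeZone': 'UTC'}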

My mind is probably biased towards the operator feedback that resonated
with me the most; I've heard pro-blackbox opinions too (though not from
operators yet, IIRC). So take what I wrote just as my 2 cents, but I
think it's necessary to consider the above issues when thinking about
the implications of building a TripleO API.

Regarding the non-workflow kind of features we need for empowering a
GUI, wouldn't those be useful for normal (tenant) Heat stack deployments
in the overcloud too? It sounds to me that features like "driving a Heat
stack deployment with the same powers from CLI or GUI", "updating a
CLI-created stack from GUI and vice versa", and "understanding/parsing
what the configuration options of my Heat templates are" are all
features that are not specific to TripleO, and could be useful for
tenant Heat stacks too. So perhaps these should be implemented in Heat?
If that can't happen fast enough, then we might need to put some
workarounds in place for now, but it might be better if we didn't
advertise those as a stable solution.

Jirka

responded Jan 20, 2016 by Jiří Stránský

On 20 January 2016 at 10:03, Jiří Stránský jistr@redhat.com wrote:


One kind of (negative) feedback I've heard is that overcloud management
is too much of a "blackbox" compared to what operators are used to. The
feedback I recall was that it's hard to tell what is going to happen
when running an overcloud stack update, and that we cannot re-execute
the software config management independently.

Building another umbrella API to rule the already largely umbrella-like
deployment process (think of all the responsibilities that lie within
the tripleo-heat-templates codebase, and within the single 'overcloud'
Heat stack) would probably make matters more blackboxy and go further in
the direction of "I feel like I don't know what's happening to my cloud
when I use the management tool".

I completely agree that we want to make the tool less of a blackbox. I
am not convinced that Mistral will do this (do tripleo-heat-templates
make things less blackbox-y because they are YAML that users can look
at? Maybe for some users, but they still confuse me!). However, given
that I think we all agree Mistral is a good fit for some of the workflow
tasks (introspection, deploying, etc.), I think it is a good idea to see
if Mistral will work well, or well enough, for the other tasks we need
(essentially some template introspection/processing). It will certainly
be more obvious what is going on if all the actions are in Mistral and
not split between it and a custom API.
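
For what it's worth, the custom pieces we'd register are small. A sketch
of such an action against the Mistral action interface (the class name
and the parsing are illustrative, and the base-class import path has
moved around between releases):

    import yaml
    from mistral.actions import base  # import path varies by release

    class ParseCapabilitiesMap(base.Action):
        """Illustrative custom action: turn capabilities-map.yaml text
        into the list of environment files a GUI would offer."""

        def __init__(self, capabilities_yaml):
            self.capabilities_yaml = capabilities_yaml

        def run(self):
            data = yaml.safe_load(self.capabilities_yaml)
            return [env['file']
                    for topic in data.get('topics', [])
                    for group in topic.get('environment_groups', [])
                    for env in group.get('environments', [])]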

What I think could improve the situation for operators is trying to
chunk up what we already have into smaller, more independently operable
parts. The split-stack approach already discussed in the TripleO meeting
and on #tripleo could help with this. Essentially, separating our
hardware management from our software config management - being able to
re-apply software configuration without being afraid of having nodes
accidentally re-provisioned from scratch.

+1, this would be a very valuable change for the project generally.

responded Jan 20, 2016 by Dougal_Matthews

----- Original Message -----
On 18.1.2016 19:49, Tzu-Mainn Chen wrote:

----- Original Message -----

On Thu, 2016-01-14 at 16:04 -0500, Tzu-Mainn Chen wrote:

----- Original Message -----

On Wed, Jan 13, 2016 at 04:41:28AM -0500, Tzu-Mainn Chen wrote:

Hey all,

I realize now from the title of the other TripleO/Mistral thread
[1] that
the discussion there may have gotten confused. I think using
Mistral for
TripleO processes that are obviously workflows - stack
deployment, node
registration - makes perfect sense. That thread is exploring
practicalities
for doing that, and I think that's great work.

What I inappropriately started to address in that thread was a
somewhat
orthogonal point that Dan asked in his original email, namely:

"what it might look like if we were to use Mistral as a
replacement for the
TripleO API entirely"

I'd like to create this thread to talk about that; more of a
'should we'
than 'can we'. And to do that, I want to indulge in a thought
exercise
stemming from an IRC discussion with Dan and others. All, please
correct
me
if I've misstated anything.

The IRC discussion revolved around one use case: deploying a Heat
stack
directly from a Swift container. With an updated patch, the Heat
CLI can
support this functionality natively. Then we don't need a
TripleO API; we
can use Mistral to access that functionality, and we're done,
with no need
for additional code within TripleO. And, as I understand it,
that's the
true motivation for using Mistral instead of a TripleO API:
avoiding custom
code within TripleO.

That's definitely a worthy goal... except from my perspective,
the story
doesn't quite end there. A GUI needs additional functionality,
which boils
down to: understanding the Heat deployment templates in order to
provide
options for a user; and persisting those options within a Heat
environment
file.

Right away I think we hit a problem. Where does the code for
'understanding
options' go? Much of that understanding comes from the
capabilities map
in tripleo-heat-templates [2]; it would make sense to me that
responsibility
for that would fall to a TripleO library.

Still, perhaps we can limit the amount of TripleO code. So to
give API
access to 'getDeploymentOptions', we can create a Mistral
workflow.

Retrieve Heat templates from Swift -> Parse capabilities map

Which is fine-ish, except from an architectural perspective
'getDeploymentOptions' violates the abstraction layer between
storage and
business logic, a problem that is compounded because
'getDeploymentOptions'
is not the only functionality that accesses the Heat templates
and needs
exposure through an API. And, as has been discussed on a
separate TripleO
thread, we're not even sure Swift is sufficient for our needs;
one possible
consideration right now is allowing deployment from templates
stored in
multiple places, such as the file system or git.

Actually, that whole capabilities map thing is a workaround for a
missing
feature in Heat, which I have proposed, but am having a hard time
reaching
consensus on within the Heat community:

https://review.openstack.org/#/c/196656/

Given that is a large part of what's anticipated to be provided by
the
proposed TripleO API, I'd welcome feedback and collaboration so we
can move
that forward, vs solving only for TripleO.

Are we going to have duplicate 'getDeploymentOptions' workflows
for each
storage mechanism? If we consolidate the storage code within a
TripleO
library, do we really need a workflow to call a single
function? Is a
thin TripleO API that contains no additional business logic
really so bad
at that point?

Actually, this is an argument for making the validation part of the
deployment a workflow - then the interface with the storage
mechanism
becomes more easily pluggable vs baked into an opaque-to-operators
API.

E.g, in the long term, imagine the capabilities feature exists in
Heat, you
then have a pre-deployment workflow that looks something like:

  1. Retrieve golden templates from a template store
  2. Pass templates to Heat, get capabilities map which defines
    features user
    must/may select.
  3. Prompt user for input to select required capabilites
  4. Pass user input to Heat, validate the configuration, get a
    mapping of
    required options for the selected capabilities (nested validation)
  5. Push the validated pieces ("plan" in TripleO API terminology) to
    a
    template store

This is a pre-deployment validation workflow, and it's a superset
of the
getDeploymentOptions feature you refer to.

Historically, TripleO has had a major gap wrt workflow, meaning
that we've
always implemented it either via shell scripts (tripleo-incubator)
or
python code (tripleo-common/tripleo-client, potentially TripleO
API).

So I think what Dan is exploring is, how do we avoid reimplementing
a
workflow engine, when a project exists which already does that.

My gut reaction is to say that proposing Mistral in place of a
TripleO API
is to look at the engineering concerns from the wrong
direction. The
Mistral alternative comes from a desire to limit custom TripleO
code at all
costs. I think that is an extremely dangerous attitude that
leads to
compromises and workarounds that will quickly lead to a shaky
code base
full of design flaws that make it difficult to implement or
extend any
functionality cleanly.

I think it's not about limiting TripleO code at all costs, it's
about
learning from past mistakes, where long-term TripleO specific
workarounds
for gaps in other projects have become serious technical debt.

For example, the old merge.py approach to template composition was
a
workaround for missing heat features, then Tuskar was another
workaround
(arguably) for missing heat features, and now we're again proposing
a
long-term workaround for some missing heat features, some of which
are
already proposed (referring to the API for capabilities
resolution).

This is an important point, thanks for bringing it up!

I think that I might have a different understanding of the lessons to
be
learned from Tuskar's limitations. There were actually two issues
that
arose. The first was that Tuskar was far too specific in how it
tried to
manipulated Heat pieces. The second - and more serious, from my
point of
view - was that there literally was no way for an API-based GUI to
perform the tasks it needed to in order to do the correct
manipulation
(environment selection), because there was no Heat API in place for
doing
so.

My takeaway from the first issue was that any potential TripleO API
in
the future needed to be very low-level, a light skimming on top of
the
OpenStack services it uses. The plan creation process that the
tripleo-common library spec describes is that: it's just a couple of
methods designed to allow a user to create an environment file, which
can then be used for deploying the overcloud.

My takeaway from the second issue was a bit more complicated. A
required feature was missing, and although the proper functionality
needed to enable it in Heat was identified, it was unclear (and
remains
unclear) whether that feature truly belonged in Heat. What does a
GUI
do then? The GUI could take a cycle off, which is essentially what
happened here; I don't think that's a reasonable solution. We could
hope that we arrive at a 100% foolproof and immutable deployment
solution
in the future, arriving at a point where no new features would ever
be
needed; I don't think that's a practical hope.

The third solution that came to mind was the idea of creating the
TripleO API. It gives us a place to add in missing features if
needed.
And I think it also gives us a useful layer of indirection. The
consumers of TripleO want a stable API, so that a new release doesn't
force them to do a massive update of their code; the TripleO API
would
provide that, allowing us to switch code behind the scenes (say, if
the capabilities feature lands in Heat).

I think the above example would work equally well in a generic workflow
sort of tool. You could image that the inputs to the workflow remain
the same... but rather than running our own code in some interim step
we simply call Heat directly for the capabilities map feature.

So regardless of whether we build our own API or use a generic workflow
tool, I think we still have what I would call a "release valve" to let us
inject some custom code (actions) into the workflow. As we discussed
last week on IRC, I would like to minimize the number of custom actions
we have (with an eye towards things living in the upstream OpenStack
projects), but it is fine to do this either way, and it would work
equally well with Mistral or a TripleO API.
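
For what it's worth, a minimal sketch of such a custom action,
assuming Mistral's Python action plugin interface
(mistral.actions.base.Action); the Swift-fetching helper is a
hypothetical stand-in, not real code:

    # Sketch of a custom Mistral action acting as the "release valve":
    # TripleO-specific logic wrapped so a workflow can call it as one step.
    import yaml

    from mistral.actions import base


    def _fetch_capabilities_yaml(container):
        # Hypothetical helper: a real action would read
        # capabilities-map.yaml from the plan's Swift container here.
        raise NotImplementedError


    class GetCapabilitiesAction(base.Action):
        """Return the parsed capabilities map for a deployment plan."""

        def __init__(self, container):
            self.container = container

        def run(self):
            return yaml.safe_load(_fetch_capabilities_yaml(self.container))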

I think I kinda view TripleO as a 'best practices' project. Using
OpenStack is a confusing experience, with a million different options
and choices to make. TripleO provides users with an excellent guide.
But the problem is that best practices change, and I think that
perceived instability is dangerous for adoption of TripleO.

So having a TripleO library and its associated API be a 'best practices'
library makes sense to me. It gives consumers a stable platform upon
which to use TripleO, while allowing us to be flexible behind the scenes.
The 'best practice' for Heat capabilities right now is a workaround,
because it hasn't been judged to be suitable to go into Heat itself.
If that changes, we get to shift as well - and all of these changes are
invisible to the API consumer.

I mentioned this in my "Driving workflows with Mistral" thread, but with
regard to stability, I view, say, Heat's v1 API or Mistral's v2 API as
both being far more stable than anything we could ever achieve with a
TripleO API. The real trick to API stability with something like Heat or
Mistral is how we manage the inputs and outputs to Stacks and Workflows
themselves. So long as we are mindful of this, I can't imagine an end
user (say, a GUI writer or whoever) would really care whether they POST
to Mistral or to something we've created. The nice thing about using
other OpenStack projects like Heat or Mistral is that they very likely
have a better community and documentation around these things than we
would ever have.
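
To make that concrete, here is roughly what "POST to Mistral" looks
like from a consumer's side using python-mistralclient; the endpoint,
token, workflow name, and inputs are all placeholders:

    # Sketch of triggering a workflow through Mistral's v2 API; the
    # stable contract the consumer relies on is the workflow's declared
    # inputs and outputs. Connection details and names are made up.
    from mistralclient.api import client

    mistral = client.client(mistral_url='http://undercloud:8989/v2',
                            auth_token='TOKEN')

    execution = mistral.executions.create(
        'tripleo.deploy_plan',                      # hypothetical workflow
        workflow_input={'plan_name': 'overcloud'})

    print(execution.id, execution.state)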

The more I look at using Mistral for some of the cases that have been
brought up, the more it seems to make sense for a lot of the workflows
we need. I don't believe we can achieve better stability by creating
what sounds more and more like a shim/proxy API rather than using the
versioned APIs that OpenStack already provides.

There may be some corner cases where a "GUI helper" API comes into play
for some sort of caching or the like. I'm not blocking anyone from
creating these sorts of features if they need them. And again, if it is
something that could be added to an upstream OpenStack project like
Heat or Mistral, I would look there first - so perhaps Zaqar for
WebSockets instead of rolling our own, that sort of thing.

What does concern me is that we are overstating what a TripleO API should
actually contain, should we choose to pursue it. Initially it was
positioned as the "TripleO workflow API". I think we now agree that we
probably shouldn't put all of our workflows behind it. So if our stance
has changed, would it make sense to compile a new list of what we
believe belongs behind our own TripleO API vs. what we consider
workflows?

I wonder if it would be helpful to get operator feedback here - show them
the advantages/disadvantages of both options and get a sense of what
might be useful/necessary for them to use TripleO effectively.

(I'm going off on a bit of a tangent, but please bear with me; I'm using
all of this to support the point in the end. The implications of building
a TripleO API touch on various topics.)

Yes, I think we should gather operator feedback. We already have some,
but we should gather more whenever possible.

One kind of (negative) feedback I've heard is that overcloud management
is too much of a "black box" compared to what operators are used to. The
feedback I recall was that it's hard to tell what is going to happen
when running an overcloud stack update, and that we cannot re-execute
the software config management independently.

Building another umbrella API to rule the already largely umbrella-like
deployment process (think of all the responsibilities that lie within the
tripleo-heat-templates codebase and within the single 'overcloud' Heat
stack) would probably make matters more black-boxy, and go further in the
direction of "I feel like I don't know what's happening to my cloud when
I use the management tool".

What I think could improve the situation for operators is trying to
chunk up what we already have into smaller, more independently operable
parts. The split-stack approach already discussed at the TripleO meeting
and on #tripleo could help with this: essentially separating our
hardware management from our software config management, and being able
to re-apply software configuration without being afraid of having nodes
accidentally re-provisioned from scratch.

In general I think TripleO could be a little more "UNIXy" - composed of
smaller parts that make sense on their own, transparent to the operator,
more modular and modifiable, and in effect more receptive to how varied
real-world deployment environments are (various Neutron and Cinder
plugins, Keystone backends, a composable set of services, custom node
types, etc.).

A workflow persisted in a data-like fashion is probably more modifiable
by the operator than the Python code of a REST API. We've seen hard
assumptions cause problems in the past. (Think of the unoverridable CLI
parameters issue we used to have, and how we had to move to a model of
"the CLI provides its values, but you can always override them or
provide additional ones with an environment file if needed", which we
now use extensively.) I'm a bit concerned that building a new REST API
on top of everything would impose new rigid assumptions that could cause
more harm than good in the end. I'm concerned that it would be usable
only for very basic deployments, while the world of real deployments has
its own pace and requirements that don't fit the "best practices" as
defined by the API, forcing operators to bypass the API far too often
and slowly pushing it into abandonment over time.
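
As a side note, the override model mentioned above boils down to a
simple merge order; a tiny sketch, with the merge function and the
exact parameter values invented for the example:

    # Illustration of the "CLI provides its values, but you can always
    # override them" model: CLI defaults are merged first, user-supplied
    # environment files last, so user values always win.
    def merge_parameter_defaults(cli_defaults, user_environments):
        merged = dict(cli_defaults)
        for env in user_environments:
            merged.update(env.get('parameter_defaults', {}))
        return merged


    cli_defaults = {'NeutronPublicInterface': 'nic1',
                    'NtpServer': 'pool.ntp.org'}
    user_env = {'parameter_defaults': {'NeutronPublicInterface': 'bond0'}}

    print(merge_parameter_defaults(cli_defaults, [user_env]))
    # {'NeutronPublicInterface': 'bond0', 'NtpServer': 'pool.ntp.org'}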

My mind is probably biased towards the operator feedback that resonated
with me the most; I've heard pro-blackbox opinions too (though not from
operators yet, IIRC). So take what I wrote as just my 2 cents, but I
think it's necessary to consider the above issues when thinking about
the implications of building a TripleO API.

Those are completely valid points, thanks for bringing them up!

I think I should step back a bit and express my views from a different
perspective (which I probably should have done far earlier). I've been
working with a GUI application that deploys OpenStack using TripleO. The
deprecation of Tuskar was somewhat disruptive, but it was understood that
there would be workarounds while we worked towards a more permanent
solution.

What this application wants from TripleO is a set of APIs that gives its
developers confidence that they can use TripleO without the risk of
having to fundamentally change their codebase if TripleO changes.
TripleO needs to guarantee support for its deployment practices and to
have a formal deprecation process if it moves to a different
architecture.

The proposed TripleO API spec differs from Tuskar in that (in my view)
it's far lower-level. There are operations to get/set various types of
deployment parameters, and there's an operation to deploy the result.
None of that precludes us from, say, eventually adding options to allow
the API to distinguish between hardware management and software config,
or adding an operation to apply the software config.

Another reason this is important is that it addresses your point below:

Regarding the non-workflow kinds of features we need for empowering a
GUI, wouldn't those be useful for normal (tenant) Heat stack deployments
in the overcloud too? It sounds to me that features like "driving a Heat
stack deployment with the same powers from CLI or GUI", "updating a
CLI-created stack from GUI and vice versa", and "understanding/parsing
the configuration options of my Heat templates" are all features that
are not specific to TripleO, and could be useful for tenant Heat stacks
too. So perhaps these should be implemented in Heat? If that can't
happen fast enough, then we might need to put some workarounds in place
for now, but it might be better if we didn't advertise those as a stable
solution.

I think TripleO benefits from controlling access to these operations,
simply because it allows the underlying TripleO architecture to change
without forcing integrators to change all their API calls. For example,
let's say we create a TripleO API that gets/sets parameters in the form
of a Heat environment file, and then deploys through Heat. If we want
to move to having the deployment driven through a Mistral workflow, we
can change the underlying code - write the parameters into a Mistral
environment, drive the deployment through Mistral - without affecting
the outward-facing API.
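
To illustrate the kind of indirection I mean - a minimal, entirely
hypothetical sketch where the outward-facing call stays fixed while the
backend moves from Heat to Mistral; every name here is invented:

    # Sketch of a stable TripleO entry point whose implementation can be
    # swapped without affecting callers; all helpers are hypothetical.
    def _deploy_via_heat(plan_name, parameters):
        # Would write a Heat environment file and create the stack.
        raise NotImplementedError


    def _deploy_via_mistral(plan_name, parameters):
        # Would store parameters in a Mistral environment and start the
        # deployment workflow instead.
        raise NotImplementedError


    def deploy_plan(plan_name, parameters, backend='heat'):
        """Outward-facing call; its signature is the stability contract."""
        if backend == 'mistral':
            return _deploy_via_mistral(plan_name, parameters)
        return _deploy_via_heat(plan_name, parameters)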

Mainn

Jirka

Mainn

Dan

Mainn

I think the correct attitude is to simply look at the problem we're
trying to solve and find the correct architecture. For these get/set
methods that the API needs, it's pretty simple: storage -> some logic ->
a REST API. Adding a workflow engine on top of that is unneeded, and I
believe that means it's an incorrect solution.
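
For what that shape could look like in practice - a bare-bones sketch
using Flask purely for illustration, with invented routes and helpers:

    # Minimal sketch of "storage -> some logic -> a REST API" with no
    # workflow engine in between; Flask is used only for illustration.
    from flask import Flask, jsonify

    app = Flask(__name__)


    def _load_templates(plan_name):
        # Hypothetical storage layer: Swift, git, or the filesystem.
        raise NotImplementedError


    def _extract_options(templates):
        # Hypothetical logic layer: parse the capabilities map.
        raise NotImplementedError


    @app.route('/v1/plans/<plan_name>/options')
    def get_deployment_options(plan_name):
        return jsonify(_extract_options(_load_templates(plan_name)))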

What may help is if we can work through the proposed API spec and
identify which calls can reasonably be considered workflows vs. those
where it's really just proxying an API call with some logic.

When we have a defined list of "not workflow" API requirements, it'll
probably be much easier to rationalize about the value of a bespoke API
vs. Mistral.

Steve




OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
responded Jan 20, 2016 by Tzu-Mainn_Chen
...