
[openstack-dev] [keystone] Token providers and Fernet as the default


Hello! I enjoyed very much listening in on the default token provider
work session last week in Austin, so thanks everyone for participating
in that. I did not speak up then, because I wasn't really sure of this
idea that has been bouncing around in my head, but now I think it's the
case and we should consider this.

Right now, Keystones without fernet keys are issuing UUID tokens. These
tokens will be in the database, and valid, for however long the token
TTL is.

The moment that one changes the configuration, keystone will start
rejecting these tokens. This will cause disruption, and I don't think
that is fair to the users who will likely be shown new bugs in their
code at a very unexpected moment.

I wonder if one could merge UUID and Fernet into a provider which
handles this transition gracefully:

if self.fernet_keys:
    return self.issue_fernet_token()
else:
    return self.issue_uuid_token()

And in the validation, do the same, but also with an eye toward keeping
the UUID tokens alive:

if self.fernet_keys:
    try:
        self.validate_fernet_token()
    except InvalidFernetFormatting:
        self.validate_uuid_token()
else:
    self.validate_uuid_token()

So that while one is rolling out new keystone nodes and syncing fernet
keys, all tokens issued would validate properly, with minimal extra
cost to support both (basically just a number of UUID tokens will need
to be parsed twice, once as Fernet, and once as UUID).

Thoughts? I think doing this would make changing the default fairly
uncontroversial.
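Fleshed out a little, the pseudo code above might look like the sketch below. Every name here, from HybridProvider down to the stub token helpers, is made up for illustration and is not a real Keystone class or API; the stubs just stand in for the real format-specific code paths.

```python
class InvalidFernetFormatting(Exception):
    """Raised when a token is not a well-formed Fernet payload."""


class HybridProvider:
    """Hypothetical provider that emits Fernet when keys exist, UUID otherwise."""

    def __init__(self, fernet_keys=None):
        self.fernet_keys = fernet_keys or []

    def issue_token(self, user_id):
        # Emit Fernet only once keys have been synced to this node.
        if self.fernet_keys:
            return self.issue_fernet_token(user_id)
        return self.issue_uuid_token(user_id)

    def validate_token(self, token):
        # Prefer Fernet, but keep previously issued UUID tokens alive.
        if self.fernet_keys:
            try:
                return self.validate_fernet_token(token)
            except InvalidFernetFormatting:
                return self.validate_uuid_token(token)
        return self.validate_uuid_token(token)

    # Stubs standing in for the real format-specific code paths.
    def issue_fernet_token(self, user_id):
        return "fernet:" + user_id

    def issue_uuid_token(self, user_id):
        return "uuid:" + user_id

    def validate_fernet_token(self, token):
        if not token.startswith("fernet:"):
            raise InvalidFernetFormatting(token)
        return {"user_id": token.split(":", 1)[1], "format": "fernet"}

    def validate_uuid_token(self, token):
        return {"user_id": token.split(":", 1)[1], "format": "uuid"}
```

The only extra cost is the failed Fernet parse for each leftover UUID token, which is exactly the "parsed twice" overhead described above.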


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
asked May 2, 2016 in openstack-dev by Clint_Byrum

16 Responses


On Mon, May 2, 2016 at 5:26 PM, Clint Byrum clint@fewbar.com wrote:

Hello! I enjoyed very much listening in on the default token provider
work session last week in Austin, so thanks everyone for participating
in that. I did not speak up then, because I wasn't really sure of this
idea that has been bouncing around in my head, but now I think it's the
case and we should consider this.

Right now, Keystones without fernet keys, are issuing UUID tokens. These
tokens will be in the database, and valid, for however long the token
TTL is.

The moment that one changes the configuration, keystone will start
rejecting these tokens. This will cause disruption, and I don't think
that is fair to the users who will likely be shown new bugs in their
code at a very unexpected moment.

This will reduce the interruption and will also as you said possibly catch
bugs. We had bugs in some custom python code that didn't get a new token
when the keystone server returned certain code, but we found all those in
our dev environment.

From an operational POV, I can't imagine that any operators will go to work
one day and find out that they have a new token provider because of a new
default. Wouldn't the settings in keystone.conf be under some kind of
config management? I don't know what distros do with new defaults however,
maybe that would be the surprise?
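If the provider is pinned explicitly under config management, the default flip becomes a non-event. A minimal keystone.conf fragment (assuming the short provider name used in this era of Keystone) might look like:

```ini
[token]
# Pin the provider explicitly so a changed default can't surprise you.
provider = fernet
```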

I wonder if one could merge UUID and Fernet into a provider which
handles this transition gracefully:

if self.fernet_keys:
    return self.issue_fernet_token()
else:
    return self.issue_uuid_token()

And in the validation, do the same, but also with an eye toward keeping
the UUID tokens alive:

if self.fernet_keys:
    try:
        self.validate_fernet_token()
    except InvalidFernetFormatting:
        self.validate_uuid_token()
else:
    self.validate_uuid_token()

So that while one is rolling out new keystone nodes and syncing fernet
keys, all tokens issued would validated properly, with minimal extra
cost to support both (basically just a number of UUID tokens will need
to be parsed twice, once as Fernet, and once as UUID).

Thoughts? I think doing this would make changing the default fairly
uncontroversial.


responded May 2, 2016 by Matt_Fischer

Comments inline...

On Mon, May 2, 2016 at 7:39 PM, Matt Fischer matt@mattfischer.com wrote:

On Mon, May 2, 2016 at 5:26 PM, Clint Byrum clint@fewbar.com wrote:

Hello! I enjoyed very much listening in on the default token provider
work session last week in Austin, so thanks everyone for participating
in that. I did not speak up then, because I wasn't really sure of this
idea that has been bouncing around in my head, but now I think it's the
case and we should consider this.

Right now, Keystones without fernet keys, are issuing UUID tokens. These
tokens will be in the database, and valid, for however long the token
TTL is.

The moment that one changes the configuration, keystone will start
rejecting these tokens. This will cause disruption, and I don't think
that is fair to the users who will likely be shown new bugs in their
code at a very unexpected moment.

This will reduce the interruption and will also as you said possibly catch
bugs. We had bugs in some custom python code that didn't get a new token
when the keystone server returned certain code, but we found all those in
our dev environment.

From an operational POV, I can't imagine that any operators will go to
work one day and find out that they have a new token provider because of a
new default. Wouldn't the settings in keystone.conf be under some kind of
config management? I don't know what distros do with new defaults however,
maybe that would be the surprise?

With respect to upgrades, assuming we default to Fernet tokens in the
Newton release, it's only an issue if the deployer has no token format
specified (since it defaulted to UUID pre-Newton) and relied on the
default after the upgrade (since it switches to Fernet in Newton).

I'm glad Matt outlines his reasoning above since that is nearly exactly
what Jesse Keating said at the Fernet token work session we had in Austin.
The straw man we came up with, of a deployer who just upgrades without
checking the config files, is just that: a straw man. Upgrades are well
planned and thought out before being performed. None of the operators in
the room saw this as an issue. We opened a bug to prevent keystone from
starting if fernet setup had not been run, and Fernet is the
selected/defaulted token provider option:
https://bugs.launchpad.net/keystone/+bug/1576315

For all new installations, deploying your cloud will now have two extra
steps, running "keystone-manage fernet_setup" and "keystone-manage
fernet_rotate". We will update the install guide docs accordingly.
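Those two bootstrap steps could be scripted as below. The --keystone-user and --keystone-group flags are how packaged installs typically set key-file ownership; treat both the flags and the user/group names as assumptions and check your distribution's docs.

```
# Illustrative bootstrap for a fresh install; adjust the user/group
# to whatever owns your keystone processes.
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone
```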

With all that said, we do intend to default to Fernet tokens for the Newton
release.

I wonder if one could merge UUID and Fernet into a provider which
handles this transition gracefully:

if self.fernet_keys:
    return self.issue_fernet_token()
else:
    return self.issue_uuid_token()

And in the validation, do the same, but also with an eye toward keeping
the UUID tokens alive:

if self.fernet_keys:
    try:
        self.validate_fernet_token()
    except InvalidFernetFormatting:
        self.validate_uuid_token()
else:
    self.validate_uuid_token()

This just seems sneaky/wrong to me. I'd rather see a failure here than
switch token formats on the fly.

So that while one is rolling out new keystone nodes and syncing fernet

keys, all tokens issued would validated properly, with minimal extra
cost to support both (basically just a number of UUID tokens will need
to be parsed twice, once as Fernet, and once as UUID).

Thoughts? I think doing this would make changing the default fairly
uncontroversial.


responded May 3, 2016 by s.martinelli_at_gmai

Excerpts from Matt Fischer's message of 2016-05-02 16:39:02 -0700:

On Mon, May 2, 2016 at 5:26 PM, Clint Byrum clint@fewbar.com wrote:

Hello! I enjoyed very much listening in on the default token provider
work session last week in Austin, so thanks everyone for participating
in that. I did not speak up then, because I wasn't really sure of this
idea that has been bouncing around in my head, but now I think it's the
case and we should consider this.

Right now, Keystones without fernet keys, are issuing UUID tokens. These
tokens will be in the database, and valid, for however long the token
TTL is.

The moment that one changes the configuration, keystone will start
rejecting these tokens. This will cause disruption, and I don't think
that is fair to the users who will likely be shown new bugs in their
code at a very unexpected moment.

This will reduce the interruption and will also as you said possibly catch
bugs. We had bugs in some custom python code that didn't get a new token
when the keystone server returned certain code, but we found all those in
our dev environment.

From an operational POV, I can't imagine that any operators will go to work
one day and find out that they have a new token provider because of a new
default. Wouldn't the settings in keystone.conf be under some kind of
config management? I don't know what distros do with new defaults however,
maybe that would be the surprise?

"Production defaults" is something we used to mention a lot. One would
hope you can run a very nice Keystone with only the required settings
such as database connection details.

Agreed that upgrades will be conscious decisions by operators, no doubt!

However, the operator is not the one who gets the surprise. It is the
user who doesn't expect their tokens to be invalidated until their TTL
is up. The cloud changes when the operator decides it changes. And if
that is in the middle of something important, the operator has just
induced unnecessary complication on the user.


responded May 3, 2016 by Clint_Byrum

On Mon, May 2, 2016 at 6:26 PM, Clint Byrum clint@fewbar.com wrote:

Hello! I enjoyed very much listening in on the default token provider
work session last week in Austin, so thanks everyone for participating
in that. I did not speak up then, because I wasn't really sure of this
idea that has been bouncing around in my head, but now I think it's the
case and we should consider this.

Right now, Keystones without fernet keys, are issuing UUID tokens. These
tokens will be in the database, and valid, for however long the token
TTL is.

The moment that one changes the configuration, keystone will start
rejecting these tokens. This will cause disruption, and I don't think
that is fair to the users who will likely be shown new bugs in their
code at a very unexpected moment.

I wonder if one could merge UUID and Fernet into a provider which
handles this transition gracefully:

if self.fernet_keys:

This would have to check that there's an active fernet key and not just a
staging one. You'll want to push out a staging key to all the nodes first
to enable fernet validation before pushing out the active key to enable
token creation. Maybe there's a trick to getting keystone-manage
fernet_setup to only set up a staging key, or you just copy that key around.

Also, we could have keystone keep track of whether there are any uuid
tokens, since there's no need to query the database every time we get an
invalid token just to see an empty table.
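The staging-versus-active check could be sketched like this. It leans on the key-repository convention where file "0" is the staging key and higher-numbered files are active/primary keys; the function name and the idea of probing the repository directly are illustrative, not actual Keystone code.

```python
import os


def has_primary_key(key_repository):
    """True if the repository holds an active key, not just staging key '0'.

    By convention the staging key is the file named "0"; any key with a
    higher index can actually be used to issue (encrypt) tokens.
    """
    try:
        names = os.listdir(key_repository)
    except OSError:
        # Missing or unreadable repository: definitely no active key.
        return False
    indexes = [int(name) for name in names if name.isdigit()]
    return any(index > 0 for index in indexes)
```

A provider could then gate token *issuance* on has_primary_key() while allowing *validation* as soon as the staging key lands.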

- Brant

    return self.issue_fernet_token()
else:
    return self.issue_uuid_token()

And in the validation, do the same, but also with an eye toward keeping
the UUID tokens alive:

if self.fernet_keys:
    try:
        self.validate_fernet_token()
    except InvalidFernetFormatting:
        self.validate_uuid_token()
else:
    self.validate_uuid_token()

So that while one is rolling out new keystone nodes and syncing fernet
keys, all tokens issued would validated properly, with minimal extra
cost to support both (basically just a number of UUID tokens will need
to be parsed twice, once as Fernet, and once as UUID).

Thoughts? I think doing this would make changing the default fairly
uncontroversial.



--
- Brant


responded May 3, 2016 by Brant_Knudson

Excerpts from Steve Martinelli's message of 2016-05-02 19:56:15 -0700:

Comments inline...

On Mon, May 2, 2016 at 7:39 PM, Matt Fischer matt@mattfischer.com wrote:

On Mon, May 2, 2016 at 5:26 PM, Clint Byrum clint@fewbar.com wrote:

Hello! I enjoyed very much listening in on the default token provider
work session last week in Austin, so thanks everyone for participating
in that. I did not speak up then, because I wasn't really sure of this
idea that has been bouncing around in my head, but now I think it's the
case and we should consider this.

Right now, Keystones without fernet keys, are issuing UUID tokens. These
tokens will be in the database, and valid, for however long the token
TTL is.

The moment that one changes the configuration, keystone will start
rejecting these tokens. This will cause disruption, and I don't think
that is fair to the users who will likely be shown new bugs in their
code at a very unexpected moment.

This will reduce the interruption and will also as you said possibly catch
bugs. We had bugs in some custom python code that didn't get a new token
when the keystone server returned certain code, but we found all those in
our dev environment.

From an operational POV, I can't imagine that any operators will go to
work one day and find out that they have a new token provider because of a
new default. Wouldn't the settings in keystone.conf be under some kind of
config management? I don't know what distros do with new defaults however,
maybe that would be the surprise?

With respect to upgrades, assuming we default to Fernet tokens in the
Newton release, it's only an issue if the the deployer has no token format
specified (since it defaulted to UUID pre-Newton), and relied on the
default after the upgrade (since it'll switches to Fernet in Newton).

Assume all users are using defaults.

I'm glad Matt outlines his reasoning above since that is nearly exactly
what Jesse Keating said at the Fernet token work session we had in Austin.
The straw man we come up with of a deployer that just upgrades without
checking then config files is just that, a straw man. Upgrades are well
planned and thought out before being performed. None of the operators in
the room saw this as an issue. We opened a bug to prevent keystone from
starting if fernet setup had not been run, and Fernet is the
selected/defaulted token provider option:
https://bugs.launchpad.net/keystone/+bug/1576315

Right, I responded there, but just to be clear, this is not about
operators being inconvenienced, it is about users.

For all new installations, deploying your cloud will now have two extra
steps, running "keystone-manage fernet_setup" and "keystone-manage
fernet_rotate". We will update the install guide docs accordingly.

With all that said, we do intend to default to Fernet tokens for the Newton
release.

Great! They are supremely efficient and I love that we're moving
forward. However, users really do not care about something that just
makes the operator's life easier if it causes all of their stuff to blow
up in non-deterministic ways (since their new jobs won't have that fail,
it will be a really fun day in the debug chair).

I wonder if one could merge UUID and Fernet into a provider which
handles this transition gracefully:

if self.fernet_keys:
    return self.issue_fernet_token()
else:
    return self.issue_uuid_token()

And in the validation, do the same, but also with an eye toward keeping
the UUID tokens alive:

if self.fernet_keys:
    try:
        self.validate_fernet_token()
    except InvalidFernetFormatting:
        self.validate_uuid_token()
else:
    self.validate_uuid_token()

This just seems sneaky/wrong to me. I'd rather see a failure here than
switch token formats on the fly.

You say "on the fly"; I say "when the operator has configured things
fully".

Perhaps we have different perspectives. How is accepting what we
previously emitted and told the user would be valid sneaky or wrong?
Sounds like common sense due diligence to me.

Anyway, the idea could use a few kicks, and I think perhaps a better
way to state what I'm thinking is this:

When the operator has configured a new token format to emit, they should
also be able to allow any previously emitted formats to be validated to
allow users a smooth transition to the new format. We can then make the
default behavior for one release cycle to emit Fernet, and honor both
Fernet and UUID.

Perhaps ignore the other bit that I put in there about switching formats
just because you have fernet keys. Let's say the new pseudo code only
happens in validation:

try:
    self.validate_fernet_token()
except NotAFernetToken:
    self.validate_uuid_token()

I fight for the users -- Tron


responded May 3, 2016 by Clint_Byrum

On 05/03/2016 09:55 AM, Clint Byrum wrote:
Excerpts from Steve Martinelli's message of 2016-05-02 19:56:15 -0700:

Comments inline...

On Mon, May 2, 2016 at 7:39 PM, Matt Fischer matt@mattfischer.com wrote:

On Mon, May 2, 2016 at 5:26 PM, Clint Byrum clint@fewbar.com wrote:

Hello! I enjoyed very much listening in on the default token provider
work session last week in Austin, so thanks everyone for participating
in that. I did not speak up then, because I wasn't really sure of this
idea that has been bouncing around in my head, but now I think it's the
case and we should consider this.

Right now, Keystones without fernet keys, are issuing UUID tokens. These
tokens will be in the database, and valid, for however long the token
TTL is.

The moment that one changes the configuration, keystone will start
rejecting these tokens. This will cause disruption, and I don't think
that is fair to the users who will likely be shown new bugs in their
code at a very unexpected moment.

This will reduce the interruption and will also as you said possibly catch
bugs. We had bugs in some custom python code that didn't get a new token
when the keystone server returned certain code, but we found all those in
our dev environment.

From an operational POV, I can't imagine that any operators will go to
work one day and find out that they have a new token provider because of a
new default. Wouldn't the settings in keystone.conf be under some kind of
config management? I don't know what distros do with new defaults however,
maybe that would be the surprise?

With respect to upgrades, assuming we default to Fernet tokens in the
Newton release, it's only an issue if the the deployer has no token format
specified (since it defaulted to UUID pre-Newton), and relied on the
default after the upgrade (since it'll switches to Fernet in Newton).

Assume all users are using defaults.

I'm glad Matt outlines his reasoning above since that is nearly exactly
what Jesse Keating said at the Fernet token work session we had in Austin.
The straw man we come up with of a deployer that just upgrades without
checking then config files is just that, a straw man. Upgrades are well
planned and thought out before being performed. None of the operators in
the room saw this as an issue. We opened a bug to prevent keystone from
starting if fernet setup had not been run, and Fernet is the
selected/defaulted token provider option:
https://bugs.launchpad.net/keystone/+bug/1576315

Right, I responded there, but just to be clear, this is not about
operators being inconvenienced, it is about users.

For all new installations, deploying your cloud will now have two extra
steps, running "keystone-manage fernet_setup" and "keystone-manage
fernet_rotate". We will update the install guide docs accordingly.

With all that said, we do intend to default to Fernet tokens for the Newton
release.

Great! They are supremely efficient and I love that we're moving
forward. However, users really do not care about something that just
makes the operator's life easier if it causes all of their stuff to blow
up in non-deterministic ways (since their new jobs won't have that fail,
it will be a really fun day in the debug chair).

I wonder if one could merge UUID and Fernet into a provider which
handles this transition gracefully:

if self.fernet_keys:
    return self.issue_fernet_token()
else:
    return self.issue_uuid_token()

And in the validation, do the same, but also with an eye toward keeping
the UUID tokens alive:

if self.fernet_keys:
    try:
        self.validate_fernet_token()
    except InvalidFernetFormatting:
        self.validate_uuid_token()
else:
    self.validate_uuid_token()

This just seems sneaky/wrong to me. I'd rather see a failure here than
switch token formats on the fly.

You say "on the fly" I say "when the operator has configured things
fully".

Perhaps we have different perspectives. How is accepting what we
previously emitted and told the user would be valid sneaky or wrong?
Sounds like common sense due diligence to me.

Anyway, the idea could use a few kicks, and I think perhaps a better
way to state what I'm thinking is this:

When the operator has configured a new token format to emit, they should
also be able to allow any previously emitted formats to be validated to
allow users a smooth transition to the new format. We can then make the
default behavior for one release cycle to emit Fernet, and honor both
Fernet and UUID.

Perhaps ignore the other bit that I put in there about switching formats
just because you have fernet keys. Let's say the new pseudo code only
happens in validation:

try:
    self.validate_fernet_token()
except NotAFernetToken:
    self.validate_uuid_token()

I was actually thinking of a different migration strategy, exactly the
opposite: for a while, run with the uuid tokens, but store the Fernet
body. After a while, switch from validating the uuid token body to the
stored Fernet. Finally, switch to validating the Fernet token from the
request. That way, we always have only one token provider, and the
migration can happen step by step.

It will not help someone who migrates from Icehouse to Ocata. Then
again, the dual plan you laid out above will not either; at some point,
people will have to dump the token table to make major migrations.

I fight for the users -- Tron


responded May 3, 2016 by Adam_Young

If we were to write a uuid/fernet hybrid provider, it would only be
expected to support something like stable/liberty to stable/mitaka, right?
This is something that we could contribute to stackforge, too.

On Tue, May 3, 2016 at 9:21 AM, Adam Young ayoung@redhat.com wrote:

On 05/03/2016 09:55 AM, Clint Byrum wrote:

Excerpts from Steve Martinelli's message of 2016-05-02 19:56:15 -0700:

Comments inline...

On Mon, May 2, 2016 at 7:39 PM, Matt Fischer matt@mattfischer.com
wrote:

On Mon, May 2, 2016 at 5:26 PM, Clint Byrum clint@fewbar.com wrote:

Hello! I enjoyed very much listening in on the default token provider

work session last week in Austin, so thanks everyone for participating
in that. I did not speak up then, because I wasn't really sure of this
idea that has been bouncing around in my head, but now I think it's the
case and we should consider this.

Right now, Keystones without fernet keys, are issuing UUID tokens.
These
tokens will be in the database, and valid, for however long the token
TTL is.

The moment that one changes the configuration, keystone will start
rejecting these tokens. This will cause disruption, and I don't think
that is fair to the users who will likely be shown new bugs in their
code at a very unexpected moment.

This will reduce the interruption and will also as you said possibly
catch
bugs. We had bugs in some custom python code that didn't get a new token
when the keystone server returned certain code, but we found all those
in
our dev environment.

From an operational POV, I can't imagine that any operators will go to
work one day and find out that they have a new token provider because
of a
new default. Wouldn't the settings in keystone.conf be under some kind
of
config management? I don't know what distros do with new defaults
however,
maybe that would be the surprise?

With respect to upgrades, assuming we default to Fernet tokens in the
Newton release, it's only an issue if the the deployer has no token
format
specified (since it defaulted to UUID pre-Newton), and relied on the
default after the upgrade (since it'll switches to Fernet in Newton).

Assume all users are using defaults.

I'm glad Matt outlines his reasoning above since that is nearly exactly

what Jesse Keating said at the Fernet token work session we had in
Austin.
The straw man we come up with of a deployer that just upgrades without
checking then config files is just that, a straw man. Upgrades are well
planned and thought out before being performed. None of the operators in
the room saw this as an issue. We opened a bug to prevent keystone from
starting if fernet setup had not been run, and Fernet is the
selected/defaulted token provider option:
https://bugs.launchpad.net/keystone/+bug/1576315

Right, I responded there, but just to be clear, this is not about
operators being inconvenienced, it is about users.

For all new installations, deploying your cloud will now have two extra
steps, running "keystone-manage fernet_setup" and "keystone-manage
fernet_rotate". We will update the install guide docs accordingly.

With all that said, we do intend to default to Fernet tokens for the
Newton
release.

Great! They are supremely efficient and I love that we're moving
forward. However, users really do not care about something that just
makes the operator's life easier if it causes all of their stuff to blow
up in non-deterministic ways (since their new jobs won't have that fail,
it will be a really fun day in the debug chair).

I wonder if one could merge UUID and Fernet into a provider which

handles this transition gracefully:

if self.fernet_keys:
    return self.issue_fernet_token()
else:
    return self.issue_uuid_token()

And in the validation, do the same, but also with an eye toward keeping
the UUID tokens alive:

if self.fernet_keys:
    try:
        self.validate_fernet_token()
    except InvalidFernetFormatting:
        self.validate_uuid_token()
else:
    self.validate_uuid_token()

This just seems sneaky/wrong to me. I'd rather see a failure here than
switch token formats on the fly.

You say "on the fly" I say "when the operator has configured things
fully".

Perhaps we have different perspectives. How is accepting what we
previously emitted and told the user would be valid sneaky or wrong?
Sounds like common sense due diligence to me.

Anyway, the idea could use a few kicks, and I think perhaps a better
way to state what I'm thinking is this:

When the operator has configured a new token format to emit, they should
also be able to allow any previously emitted formats to be validated to
allow users a smooth transition to the new format. We can then make the
default behavior for one release cycle to emit Fernet, and honor both
Fernet and UUID.

Perhaps ignore the other bit that I put in there about switching formats
just because you have fernet keys. Let's say the new pseudo code only
happens in validation:

try:
    self.validate_fernet_token()
except NotAFernetToken:
    self.validate_uuid_token()

I was actually thinking of a different migration strategy, exactly the
opposite: for a while, run with the uuid tokens, but store the Fernet
body. After while, switch from validating the uuid token body to the
stored Fernet. Finally, switch to validating the Fernet token from the
request. That way, we always have only one token provider, and the
migration can happen step by step.

It will not help someone that migrates from Icehouse to Ocata. Then again,
the dual plan you laid out above will not either; at some point, people
will have to dump the token table to make major migrations.

I fight for the users -- Tron


responded May 3, 2016 by Lance_Bragstad

On 05/03/2016 08:55 AM, Clint Byrum wrote:
Excerpts from Steve Martinelli's message of 2016-05-02 19:56:15 -0700:

Comments inline...

On Mon, May 2, 2016 at 7:39 PM, Matt Fischer matt@mattfischer.com wrote:

On Mon, May 2, 2016 at 5:26 PM, Clint Byrum clint@fewbar.com wrote:

Hello! I enjoyed very much listening in on the default token provider
work session last week in Austin, so thanks everyone for participating
in that. I did not speak up then, because I wasn't really sure of this
idea that has been bouncing around in my head, but now I think it's the
case and we should consider this.

Right now, Keystones without fernet keys, are issuing UUID tokens. These
tokens will be in the database, and valid, for however long the token
TTL is.

The moment that one changes the configuration, keystone will start
rejecting these tokens. This will cause disruption, and I don't think
that is fair to the users who will likely be shown new bugs in their
code at a very unexpected moment.

This will reduce the interruption and will also as you said possibly catch
bugs. We had bugs in some custom python code that didn't get a new token
when the keystone server returned certain code, but we found all those in
our dev environment.

From an operational POV, I can't imagine that any operators will go to
work one day and find out that they have a new token provider because of a
new default. Wouldn't the settings in keystone.conf be under some kind of
config management? I don't know what distros do with new defaults however,
maybe that would be the surprise?

With respect to upgrades, assuming we default to Fernet tokens in the
Newton release, it's only an issue if the deployer has no token format
specified (since it defaulted to UUID pre-Newton), and relied on the
default after the upgrade (since it switches to Fernet in Newton).

Assume all users are using defaults.

I'm glad Matt outlines his reasoning above since that is nearly exactly
what Jesse Keating said at the Fernet token work session we had in Austin.
The straw man we come up with of a deployer that just upgrades without
checking the config files is just that, a straw man. Upgrades are well
planned and thought out before being performed. None of the operators in
the room saw this as an issue. We opened a bug to prevent keystone from
starting if fernet setup had not been run, and Fernet is the
selected/defaulted token provider option:
https://bugs.launchpad.net/keystone/+bug/1576315

Right, I responded there, but just to be clear, this is not about
operators being inconvenienced, it is about users.

I have confusion.

token format isn't really a thing users care about, like, ever. A token
is an opaque blob you get from authenticating, and sometimes it expires
and you have to reauthenticate. That re-auth must be accounted for in
all of your user code, or else you'll have random sads (if you use
keystoneauth it's handled for you, if you don't, it's on you).

If the operator rolls out fernet where it was uuid, the worst thing that
will happen is that a token will "expire" before it needed to. As much
as I'm normally a fountain for user indignation and rage ... I'm not
sure end-users have any issues here.

For all new installations, deploying your cloud will now have two extra
steps, running "keystone-manage fernet_setup" and "keystone-manage
fernet_rotate". We will update the install guide docs accordingly.
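To illustrate why rotation is what keeps older tokens valid, here is a
hedged stdlib-only analogue (not keystone code, and not the real Fernet
format): validation tries every key in the ring while only the newest key
signs, which is the property "keystone-manage fernet_rotate" relies on.

```python
import hashlib
import hmac

# Hypothetical illustration: keys[0] is always the newest signing key;
# validation accepts a signature from any key still in the ring, so
# tokens issued before a rotation keep working until the key ages out.
keys = [b"key-0"]

def sign(payload: bytes) -> bytes:
    return hmac.new(keys[0], payload, hashlib.sha256).digest()

def validate(payload: bytes, sig: bytes) -> bool:
    return any(
        hmac.compare_digest(hmac.new(k, payload, hashlib.sha256).digest(), sig)
        for k in keys
    )

old_sig = sign(b"token")   # issued before the rotation below
keys.insert(0, b"key-1")   # a rotation promotes a fresh signing key
```

After the rotation, `validate(b"token", old_sig)` still succeeds, while new
signatures come from the new key.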

With all that said, we do intend to default to Fernet tokens for the Newton
release.

Great! They are supremely efficient and I love that we're moving
forward. However, users really do not care about something that just
makes the operator's life easier if it causes all of their stuff to blow
up in non-deterministic ways (since their new jobs won't have that fail,
it will be a really fun day in the debug chair).

I wonder if one could merge UUID and Fernet into a provider which
handles this transition gracefully:

    if self.fernet_keys:
        return self._issue_fernet_token()
    else:
        return self._issue_uuid_token()

And in the validation, do the same, but also with an eye toward keeping
the UUID tokens alive:

    if self.fernet_keys:
        try:
            self._validate_fernet_token()
        except InvalidFernetFormatting:
            self._validate_uuid_token()
    else:
        self._validate_uuid_token()
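Spelled out as a small runnable sketch (hypothetical class and method names,
not the real keystone provider interface), the issuing half of the merged
provider would just dispatch on whether fernet keys are present:

```python
import uuid

# Minimal sketch of the merged-provider idea: issue Fernet only when keys
# exist on disk, otherwise fall back to UUID. The "fernet:" prefix is a
# stand-in for a real Fernet blob, purely for illustration.
class TransitionalProvider:
    def __init__(self, fernet_keys=None):
        self.fernet_keys = fernet_keys or []

    def _issue_fernet_token(self):
        return "fernet:" + uuid.uuid4().hex

    def _issue_uuid_token(self):
        return uuid.uuid4().hex

    def issue(self):
        if self.fernet_keys:
            return self._issue_fernet_token()
        return self._issue_uuid_token()
```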

This just seems sneaky/wrong to me. I'd rather see a failure here than
switch token formats on the fly.

You say "on the fly" I say "when the operator has configured things
fully".

Perhaps we have different perspectives. How is accepting what we
previously emitted and told the user would be valid sneaky or wrong?
Sounds like common sense due diligence to me.

I agree - I see no reason we can't validate previously emitted tokens.
But I don't agree strongly, because re-authing on invalid token is a
thing users do hundreds of times a day. (these aren't oauth API Keys or
anything)

Anyway, the idea could use a few kicks, and I think perhaps a better
way to state what I'm thinking is this:

When the operator has configured a new token format to emit, they should
also be able to allow any previously emitted formats to be validated to
allow users a smooth transition to the new format. We can then make the
default behavior for one release cycle to emit Fernet, and honor both
Fernet and UUID.

Perhaps ignore the other bit that I put in there about switching formats
just because you have fernet keys. Let's say the new pseudo code only
happens in validation:

    try:
        self._validate_fernet_token()
    except NotAFernetToken:
        self._validate_uuid_token()
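The cheap part of that fallback is that a Fernet token is recognizable
before any crypto happens: it is URL-safe base64 whose first decoded byte
is the version marker 0x80. A hedged stdlib sketch (hypothetical helper
names; a real provider would decrypt and check expiry rather than return a
label):

```python
import base64
import uuid

class NotAFernetToken(Exception):
    pass

def validate_token(token):
    """Try the Fernet path first; fall back to UUID on format mismatch."""
    try:
        raw = base64.urlsafe_b64decode(token.encode())
        if not raw or raw[0] != 0x80:
            raise NotAFernetToken(token)
        return "fernet"  # real code would decrypt and verify expiry here
    except (ValueError, NotAFernetToken):
        return "uuid"    # hand off to the persisted-token validation path
```

A 32-character UUID hex string can never decode to a leading 0x80 byte, so
the double-parse cost is just one failed base64/version check per UUID token.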

I fight for the users -- Tron


responded May 3, 2016 by Monty_Taylor (22,780 points)   2 5 7
0 votes

Excerpts from Monty Taylor's message of 2016-05-03 07:59:21 -0700:

On 05/03/2016 08:55 AM, Clint Byrum wrote:

Perhaps we have different perspectives. How is accepting what we
previously emitted and told the user would be valid sneaky or wrong?
Sounds like common sense due diligence to me.

I agree - I see no reason we can't validate previously emitted tokens.
But I don't agree strongly, because re-authing on invalid token is a
thing users do hundreds of times a day. (these aren't oauth API Keys or
anything)

Sure, one should definitely not be expecting everything to always work
without errors. On this we agree for sure. However, when we decide to
intentionally induce errors in a way we have not before, we
should weigh the cost of avoiding that against the cost of having it
happen. Consider this strawman:

  • User gets token, it says "expires_at Now+4 hours"
  • User starts a brief set of automation tasks in their system
    that does not use python and has not failed with invalid tokens thus
    far.
  • Keystone nodes are all updated at one time (AMAZING cloud ops team)
  • User's automation jobs fail at next OpenStack REST call
  • User begins debugging, wasting hours of time figuring out that
    their tokens, which they stored and which their records show should
    still be valid, were rejected.

And now they have to refactor their app, because this may happen again,
and they have to make sure that invalid token errors can bubble up to the
layer that has the username/password, or accept rolling back and
retrying the whole thing.

I'm not saying anybody has this system, I'm suggesting we're putting
undue burden on users with an unknown consequence. Falling back to UUID
for a while has a known cost of a little bit of code and checking junk
tokens twice.


responded May 3, 2016 by Clint_Byrum (40,940 points)   4 5 9
0 votes

Excerpts from Adam Young's message of 2016-05-03 07:21:52 -0700:

On 05/03/2016 09:55 AM, Clint Byrum wrote:

When the operator has configured a new token format to emit, they should
also be able to allow any previously emitted formats to be validated to
allow users a smooth transition to the new format. We can then make the
default behavior for one release cycle to emit Fernet, and honor both
Fernet and UUID.

Perhaps ignore the other bit that I put in there about switching formats
just because you have fernet keys. Let's say the new pseudo code only
happens in validation:

    try:
        self._validate_fernet_token()
    except NotAFernetToken:
        self._validate_uuid_token()

I was actually thinking of a different migration strategy, exactly the
opposite: for a while, run with the uuid tokens, but store the Fernet
body. After a while, switch from validating the uuid token body to the
stored Fernet. Finally, switch to validating the Fernet token from the
request. That way, we always have only one token provider, and the
migration can happen step by step.

It will not help someone that migrates from Icehouse to Ocata. Then
again, the dual plan you laid out above will not either; at some point,
people will have to dump the token table to make major migrations.

Your plan has a nice aspect that it allows validating Fernet tokens on
UUID-configured nodes too, which means operators don't have to be careful
to update all nodes at one time. So I think what you describe above is
an even better plan.

Either way, the point is to avoid an immediate mass token invalidation
event on change of provider.
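The staged strategy quoted above could be sketched roughly like this
(hypothetical names; a toy in-memory dict stands in for the token table,
and a labeled string stands in for a Fernet body):

```python
import uuid

# From day one the "table" stores both the UUID handle and a Fernet-style
# payload; each migration phase only changes which copy validation trusts.
token_table = {}  # uuid handle -> stand-in Fernet payload

def issue():
    handle = uuid.uuid4().hex
    token_table[handle] = "fernet-payload-for-" + handle
    return handle

def validate(token, phase):
    if phase == 1:
        # Phase 1: classic UUID validation; the Fernet body is only stored.
        return token in token_table
    if phase == 2:
        # Phase 2: still keyed by UUID, but validate the stored Fernet body.
        return token_table.get(token, "").startswith("fernet-payload")
    # Phase 3: the request carries the Fernet payload itself and the
    # table is no longer consulted.
    return token.startswith("fernet-payload")
```

Each phase flip is a config change, not a mass invalidation, which is the
property both plans are after.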


responded May 3, 2016 by Clint_Byrum (40,940 points)   4 5 9
...