
[openstack-dev] [marconi] RabbitMQ (AMQP 0.9) driver for Marconi


In the last few days I attempted to implement a RabbitMQ (AMQP 0.9) storage driver for Marconi. These are the take-aways from this experiment. At a high level, it showed that the current Marconi APIs cannot be mapped onto the AMQP 0.9 abstractions. In fact, it is currently not even possible to support a subset of functionality that would allow both message publication and consumption.

  1. Marconi exposes HTTP APIs that allow messages to be listed without consuming them. This API cannot be implemented on top of AMQP 0.9, which implements strict queueing semantics.

  2. Marconi exposes HTTP APIs that allow random access to messages by ID. This API cannot be implemented on top of AMQP 0.9, which does not allow random access to messages and has no message ID concept in its model.

  3. Marconi exposes HTTP APIs that allow queues to be created, deleted, and listed. Queue creation and deletion can be implemented with AMQP 0.9, but listing queues is not possible with AMQP. However, listing queues can be implemented by accessing the RabbitMQ management plugin over the proprietary HTTP APIs that RabbitMQ exposes.

  4. Marconi's message publishing APIs return server-assigned message IDs. Message IDs are absent from the AMQP 0.9 model, so the result of message publication cannot provide them.

  5. Marconi's message consumption API creates a "claim ID" for a set of consumed messages, up to a limit. In the AMQP 0.9 model (as well as in SQS and Azure Queues), the "claim ID" maps onto the concept of a "delivery tag", which has a 1-1 relationship with a message. Since there is no way to represent the 1-N mapping between claim ID and messages in the AMQP 0.9 model, it effectively restricts consumption to one message per claim ID. This in turn prevents the benefits of batch consumption.

  6. Marconi's message consumption acknowledgment requires both the claim ID and the message ID to be provided. The message ID concept is missing from AMQP 0.9. In order to implement this API, assuming the artificial 1-1 restriction of the claim-message mapping from #5 above, one could require that messageID === claimID. This is really a workaround.

  7. RabbitMQ message acknowledgment MUST be sent over the same AMQP channel instance on which the message was originally received. This requires that the two Marconi HTTP calls that receive and acknowledge a message are affinitized to the same Marconi backend. It either substantially complicates the driver implementation (server-side reverse proxying of requests) or adds new requirements onto the Marconi deployment (server affinity through load balancing); see the sketch after this list.

  8. Marconi currently does not support an HTTP API that allows a message to be consumed with immediate acknowledgement (such an API is in the pipeline, however). Even though such an API would not support the at-least-once guarantee, combined with the restriction from #7 it means that there is currently no way for a RabbitMQ-based driver to implement any form of message consumption using today's HTTP API.
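
To make points 5-7 concrete, here is a minimal sketch in Python using the pika client against a local RabbitMQ broker (the queue name "demo" is made up for illustration). It shows that the only handle AMQP 0.9 hands back on consumption is a per-channel delivery tag, and that the acknowledgment must go out on the very channel that delivered the message:

    import pika

    # Assumes a RabbitMQ broker on localhost and a queue named "demo".
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="demo", durable=True)

    method, properties, body = channel.basic_get(queue="demo", auto_ack=False)
    if method is not None:
        # AMQP 0.9 exposes no server-assigned message ID; the delivery tag
        # below is only meaningful on this particular channel.
        delivery_tag = method.delivery_tag
        # Acknowledging on any other channel (e.g. from a different Marconi
        # web head) fails with a channel error, hence the affinity problem
        # described in #7.
        channel.basic_ack(delivery_tag=delivery_tag)

    connection.close()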

If Marconi aspires to support a range of implementation choices for the HTTP APIs it prescribes, the HTTP APIs will likely need to be re-factored and simplified. The key issues are the APIs that allow messages to be looked up without consuming them, the explicit modeling of message IDs (unnecessary in systems with strict queueing semantics), and the acknowledgment (claim) model, which is different from most acknowledgment models out there (SQS, Azure Queues, AMQP).

I believe Marconi would benefit from a small set of core HTTP APIs that reflect strict messaging semantics, providing scenario parity with SQS or Azure Storage Queues.

Thanks,
Tomasz Janczuk
@tjanczuk
HP

asked Jun 10, 2014 in openstack-dev by Janczuk,_Tomasz

4 Responses


On 10/06/14 18:12 +0000, Janczuk, Tomasz wrote:
In the last few days I attempted to implement a RabbitMQ (AMQP 0.9) storage driver for Marconi. These are the take-aways from this experiment. At a high level, it showed that the current Marconi APIs cannot be mapped onto the AMQP 0.9 abstractions. In fact, it is currently not even possible to support a subset of functionality that would allow both message publication and consumption.

First and foremost, thank you!

This is a great summary. I've been meaning to do the same for quite
some time and never got to it.

Based on your findings, I'm raising some questions that are not about
supporting AMQP drivers but about having a better, more reasonable API.

  1. Marconi exposes HTTP APIs that allow messages to be listed without consuming them. This API cannot be implemented on top of AMQP 0.9, which implements strict queueing semantics.

I believe this is quite an important endpoint for Marconi. It's not
about listing messages but about getting a batch of messages. Whether it
is through claims or not doesn't really matter. What matters is giving
the user the opportunity to get a set of messages, do some work, and
decide what to do with those messages afterwards.

  2. Marconi exposes HTTP APIs that allow random access to messages by ID. This API cannot be implemented on top of AMQP 0.9, which does not allow random access to messages and has no message ID concept in its model.

We recently discussed the fate of this endpoint[0]. We can follow up
there or later on when we start discussing v2.

  3. Marconi exposes HTTP APIs that allow queues to be created, deleted, and listed. Queue creation and deletion can be implemented with AMQP 0.9, but listing queues is not possible with AMQP. However, listing queues can be implemented by accessing the RabbitMQ management plugin over the proprietary HTTP APIs that RabbitMQ exposes.

We came really close to getting rid of queues, but at the summit we
decided to keep them around. One of the reasons to keep queues is their
metadata - rarely used but still useful for some use cases.

Monitoring and UIs may be another interesting use case for keeping
queues around as a first-class resource.

I must admit that keeping queues around still bugs me a bit but I'll
get over it.

  4. Marconi's message publishing APIs return server-assigned message IDs. Message IDs are absent from the AMQP 0.9 model, so the result of message publication cannot provide them.

I think this is related to whatever we decide for #2.

  5. Marconi's message consumption API creates a "claim ID" for a set of consumed messages, up to a limit. In the AMQP 0.9 model (as well as in SQS and Azure Queues), the "claim ID" maps onto the concept of a "delivery tag", which has a 1-1 relationship with a message. Since there is no way to represent the 1-N mapping between claim ID and messages in the AMQP 0.9 model, it effectively restricts consumption to one message per claim ID. This in turn prevents the benefits of batch consumption.

  6. Marconi's message consumption acknowledgment requires both the claim ID and the message ID to be provided. The message ID concept is missing from AMQP 0.9. In order to implement this API, assuming the artificial 1-1 restriction of the claim-message mapping from #5 above, one could require that messageID === claimID. This is really a workaround.

These two points represent quite a change in the way Marconi works and a
trade-off in terms of batch consumption (as you mentioned). I believe we
can support both. For example, claimID+suffix, where the suffix points to
a specific claimed message.
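
For what it's worth, a tiny sketch of the claimID+suffix idea; the helper names and the "." separator are invented here and are not part of any Marconi API:

    def message_handle(claim_id, index):
        # One opaque handle per claimed message, derived from the claim.
        return "{0}.{1}".format(claim_id, index)

    def split_handle(handle):
        # Recover the claim ID and the per-message suffix from the handle.
        claim_id, _, suffix = handle.rpartition(".")
        return claim_id, int(suffix)

A consumer could then acknowledge or delete a single message with just that one handle, while batch operations keep using the bare claim ID.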

I don't want to start an extended discussion about this here, but let's
keep in mind that we may be able to support both. I personally think
Marconi's claims are reasonable as they are, which means I currently
like them better than SQS's.

  7. RabbitMQ message acknowledgment MUST be sent over the same AMQP channel instance on which the message was originally received. This requires that the two Marconi HTTP calls that receive and acknowledge a message are affinitized to the same Marconi backend. It either substantially complicates the driver implementation (server-side reverse proxying of requests) or adds new requirements onto the Marconi deployment (server affinity through load balancing).

Nothing to do here. I guess a combination with a persistent-transport
protocol would help, but we don't want to make the storage driver depend
on the transport.

  8. Marconi currently does not support an HTTP API that allows a message to be consumed with immediate acknowledgement (such an API is in the pipeline, however). Even though such an API would not support the at-least-once guarantee, combined with the restriction from #7 it means that there is currently no way for a RabbitMQ-based driver to implement any form of message consumption using today's HTTP API.

If Marconi aspires to support a range of implementation choices for the HTTP APIs it prescribes, the HTTP APIs will likely need to be re-factored and simplified. The key issues are the APIs that allow messages to be looked up without consuming them, the explicit modeling of message IDs (unnecessary in systems with strict queueing semantics), and the acknowledgment (claim) model, which is different from most acknowledgment models out there (SQS, Azure Queues, AMQP).

I believe Marconi would benefit from a small set of core HTTP APIs that reflect strict messaging semantics, providing scenario parity with SQS or Azure Storage Queues.

Simplifying the API is definitely something the team wants to do. v2.0
of the API seems a good target for this simplification. Let's iterate
over the existing endpoints, write everything down on an etherpad, and
evaluate everything.

One thing I do want to say is that despite Marconi being akin to SQS
and Azure Queues, it doesn't aim to be just like them. Let's learn from
their experience and make the changes that we consider best for the end
user.

Flavio

[0] https://www.mail-archive.com/openstack-dev at lists.openstack.org/msg25385.html

--
@flaper87
Flavio Percoco

responded Jun 11, 2014 by Flavio_Percoco

Thanks Flavio, some comments inline below.

On 6/11/14, 5:15 AM, "Flavio Percoco" wrote:

  1. Marconi exposes HTTP APIs that allow messages to be listed without
    consuming them. This API cannot be implemented on top of AMQP 0.9, which
    implements strict queueing semantics.

I believe this is quite an important endpoint for Marconi. It's not
about listing messages but about getting a batch of messages. Whether it
is through claims or not doesn't really matter. What matters is giving
the user the opportunity to get a set of messages, do some work, and
decide what to do with those messages afterwards.

The sticky point here is that this Marconi endpoint allows messages to
be obtained without consuming them in the traditional messaging system
sense: the messages remain visible to other consumers. It could be argued
that such semantics can be implemented on top of AMQP by first getting the
messages and then immediately releasing them for consumption by others,
before the Marconi call returns. However, even that is only possible for
messages that are at the front of the queue - the "paging" mechanism using
markers cannot be supported.
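
For illustration, the closest AMQP 0.9 approximation of that "peek" behaviour looks roughly like the sketch below with pika (broker address and queue name are assumptions), and it only ever sees the head of the queue:

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    method, properties, body = channel.basic_get(queue="demo", auto_ack=False)
    if method is not None:
        # Inspect the payload, then immediately requeue the message so
        # other consumers can still receive it.
        channel.basic_reject(delivery_tag=method.delivery_tag, requeue=True)

    connection.close()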

  3. Marconi exposes HTTP APIs that allow queues to be created,
    deleted, and listed. Queue creation and deletion can be implemented with
    AMQP 0.9, but listing queues is not possible with AMQP. However, listing
    queues can be implemented by accessing the RabbitMQ management plugin
    over the proprietary HTTP APIs that RabbitMQ exposes.

We came really close to getting rid of queues, but at the summit we
decided to keep them around. One of the reasons to keep queues is their
metadata - rarely used but still useful for some use cases.

Monitoring and UIs may be another interesting use case for keeping
queues around as a first-class resource.

I must admit that keeping queues around still bugs me a bit but I'll
get over it.

I suspect the metadata requirements will ultimately weigh a lot on this
decision, and I understand Marconi so far has not really had a smoking
gun case for queue metadata. In particular, if and when Marconi introduces
an authentication/authorization model, the ACLs will need to be stored and
managed somewhere. Queue metadata is a natural place to configure
per-queue security. Both SQS and Azure Storage Queues model it that way.
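
Purely as an illustration of that use case, per-queue ACLs could ride along in the queue metadata document; the "_acl" layout below is invented, not an existing Marconi (or SQS/Azure) schema:

    # Hypothetical queue metadata carrying an ACL next to other settings.
    queue_metadata = {
        "ttl": 300,
        "_acl": {
            "producers": ["billing-service"],
            "consumers": ["billing-workers"],
        },
    }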

  5. Marconi's message consumption API creates a "claim ID" for a set of
    consumed messages, up to a limit. In the AMQP 0.9 model (as well as in SQS
    and Azure Queues), the "claim ID" maps onto the concept of a "delivery
    tag", which has a 1-1 relationship with a message. Since there is no way
    to represent the 1-N mapping between claim ID and messages in the AMQP 0.9
    model, it effectively restricts consumption to one message per claim ID.
    This in turn prevents the benefits of batch consumption.

  6. Marconi's message consumption acknowledgment requires both the claim ID
    and the message ID to be provided. The message ID concept is missing from
    AMQP 0.9. In order to implement this API, assuming the artificial 1-1
    restriction of the claim-message mapping from #5 above, one could require
    that messageID === claimID. This is really a workaround.

These two points represent quite a change in the way Marconi works and a
trade-off in terms of batch consumption (as you mentioned). I believe we
can support both. For example, claimID+suffix, where the suffix points to
a specific claimed message.

I don't want to start an extended discussion about this here, but let's
keep in mind that we may be able to support both. I personally think
Marconi's claims are reasonable as they are, which means I currently
like them better than SQS's.

What are the advantages of the Marconi model for claims over the SQS and
Azure Queue model for acknowledgements?

I think the SQS and Azure Queue model is both simpler and more flexible.
But the key advantage is that it has been around for a while, has been
proven to work, and people understand it.

  1. SQS and Azure require only one concept to acknowledge a message
    (receipt handle/pop receipt), as opposed to Marconi's two concepts
    (message ID + claim ID). The SQS/Azure model is simpler (see the sketch
    after this list).

  2. Similarly to Marconi, SQS and Azure allow individual claimed messages
    to be deleted. This is a wash.

  3. SQS and Azure allow a batch of messages, up to a particular receipt
    handle/pop receipt, to be deleted. This is more flexible than Marconi's
    mechanism of deleting all messages associated with a particular claim,
    and works very well for the most common scenario of in-order message
    delivery.
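
As a reference point, here is roughly what the SQS flow looks like with boto3 (the queue URL is a placeholder): a batch receive hands back one ReceiptHandle per message, and that single handle is all a delete needs:

    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example"

    response = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,   # batch consumption
        VisibilityTimeout=30,     # the "claim" is implicit in the receipt
    )
    for message in response.get("Messages", []):
        # ... process message["Body"] ...
        sqs.delete_message(
            QueueUrl=queue_url,
            ReceiptHandle=message["ReceiptHandle"],  # the only handle needed
        )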

  7. RabbitMQ message acknowledgment MUST be sent over the same AMQP
    channel instance on which the message was originally received. This
    requires that the two Marconi HTTP calls that receive and acknowledge a
    message are affinitized to the same Marconi backend. It either
    substantially complicates the driver implementation (server-side reverse
    proxying of requests) or adds new requirements onto the Marconi
    deployment (server affinity through load balancing).

Nothing to do here. I guess a combination with a persistent-transport
protocol would help, but we don't want to make the storage driver depend
on the transport.

The bigger issue here is that the storage layer abstraction has been
designed with a focus on a set of atomic, independent operations, with no
way to correlate them or share any state between them. This works well for
transport layers that are operation-centered, like HTTP. This abstraction
is likely to be insufficient when one wants to use a connection-oriented
transport (e.g. WebSocket, AMQP, MQTT), or a storage driver based on a
connection-oriented protocol (e.g. AMQP). Without explicit modeling of
state that spans several atomic calls to the storage driver, in particular
around the lifetime of the connection, the current storage abstraction
cannot satisfy these configurations well.
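
To illustrate why this matters for a RabbitMQ driver, here is a deliberately naive sketch; the class and method names are invented and this is not Marconi's driver interface. The per-claim state (channel plus delivery tag) has to live somewhere between the claim call and the ack call, and the ack call has to land on the same process:

    import uuid
    import pika

    class NaiveRabbitDriver(object):
        def __init__(self, params):
            self._connection = pika.BlockingConnection(params)
            self._claims = {}  # claim_id -> (channel, delivery_tag)

        def claim_message(self, queue):
            channel = self._connection.channel()
            method, _props, body = channel.basic_get(queue=queue, auto_ack=False)
            if method is None:
                channel.close()
                return None
            claim_id = uuid.uuid4().hex
            # This state must survive until the acknowledging HTTP request
            # arrives, and that request must reach this same process.
            self._claims[claim_id] = (channel, method.delivery_tag)
            return claim_id, body

        def ack(self, claim_id):
            channel, delivery_tag = self._claims.pop(claim_id)
            channel.basic_ack(delivery_tag=delivery_tag)
            channel.close()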

Simplifying the API is definitely something the team wants to do. v2.0
of the API seems a good target for this simplification. Let's iterate
over the existing endpoints, write everything down on an etherpad, and
evaluate everything.

One thing I do want to say is that despite Marconi being akin to SQS
and Azure Queues, it doesn't aim to be just like them. Let's learn from
their experience and make the changes that we consider best for the end
user.

It is perfectly valid to aspire to do something better than SQS or Azure
Queues. On the flip side, there is also a cost associated with doing
things differently, introducing concepts, models, or patterns that are new
and that impose new learning on customers. Is the new thing 10x better
than the old thing? If yes, folks will learn it. If not, folks will think
twice. As I said before on IRC, I think it is fine to reinvent a wheel,
but it had better be a super-duper-hands-down-better wheel.

I don't think Marconi should "compete" with SQS or Azure. It should learn
from them by embracing their proven models and concepts. The reason I think
Marconi is interesting is that I expect it to compete with RabbitMQ,
ActiveMQ, [name-your-own]. Be 10x better by supporting multi-tenancy and
HTTP. That would be a better wheel for me.

Tomasz Janczuk
@tjanczuk
HP

responded Jun 11, 2014 by Janczuk,_Tomasz

On 11/06/14 18:01 +0000, Janczuk, Tomasz wrote:
Thanks Flavio, some comments inline below.

On 6/11/14, 5:15 AM, "Flavio Percoco" wrote:

  1. Marconi exposes HTTP APIs that allow messages to be listed without
    consuming them. This API cannot be implemented on top of AMQP 0.9, which
    implements strict queueing semantics.

I believe this is quite an important endpoint for Marconi. It's not
about listing messages but about getting a batch of messages. Whether it
is through claims or not doesn't really matter. What matters is giving
the user the opportunity to get a set of messages, do some work, and
decide what to do with those messages afterwards.

The sticky point here is that this Marconi endpoint allows messages to
be obtained without consuming them in the traditional messaging system
sense: the messages remain visible to other consumers. It could be argued
that such semantics can be implemented on top of AMQP by first getting the
messages and then immediately releasing them for consumption by others,
before the Marconi call returns. However, even that is only possible for
messages that are at the front of the queue - the "paging" mechanism using
markers cannot be supported.

What matters is whether the listing functionality is useful or not.
Let's not think about it as "listing" or "paging" but about getting
batches of messages that are still available for others to process in
parallel. As mentioned in my previous email, AMQP has been a good way
to analyze the extra set of features Marconi exposes in the API, but I
don't want to base the usability choice on whether traditional
messaging systems support it and how it could be implemented there.

  5. Marconi's message consumption API creates a "claim ID" for a set of
    consumed messages, up to a limit. In the AMQP 0.9 model (as well as in SQS
    and Azure Queues), the "claim ID" maps onto the concept of a "delivery
    tag", which has a 1-1 relationship with a message. Since there is no way
    to represent the 1-N mapping between claim ID and messages in the AMQP 0.9
    model, it effectively restricts consumption to one message per claim ID.
    This in turn prevents the benefits of batch consumption.

  6. Marconi's message consumption acknowledgment requires both the claim ID
    and the message ID to be provided. The message ID concept is missing from
    AMQP 0.9. In order to implement this API, assuming the artificial 1-1
    restriction of the claim-message mapping from #5 above, one could require
    that messageID === claimID. This is really a workaround.

These two points represent quite a change in the way Marconi works and a
trade-off in terms of batch consumption (as you mentioned). I believe we
can support both. For example, claimID+suffix, where the suffix points to
a specific claimed message.

I don't want to start an extended discussion about this here, but let's
keep in mind that we may be able to support both. I personally think
Marconi's claims are reasonable as they are, which means I currently
like them better than SQS's.

What are the advantages of the Marconi model for claims over the SQS and
Azure Queue model for acknowledgements?

I think the SQS and Azure Queue model is both simpler and more flexible.
But the key advantage is that it has been around for a while, has been
proven to work, and people understand it.

  1. SQS and Azure require only one concept to acknowledge a message
    (receipt handle/pop receipt), as opposed to Marconi's two concepts
    (message ID + claim ID). The SQS/Azure model is simpler.

TBH, I'm not exactly sure where you're going with this. I mean, the
model may look simpler, but it's not necessarily better or easier to
implement. Keeping both messages and claims separate in terms of IDs
and management is flexible and powerful enough, IMHO. But I'm probably
missing your point.

I don't believe requiring messageID+claimID to delete a specific
claimed message is hard.

  2. Similarly to Marconi, SQS and Azure allow individual claimed messages
    to be deleted. This is a wash.

Calling it a wash is neither helpful nor friendly. Why do you think it
is a wash?

Claiming a message does not delete the message, which means consumers
may want to delete it before the claim is released. Do you have a
better way to do it?

  3. SQS and Azure allow a batch of messages, up to a particular receipt
    handle/pop receipt, to be deleted. This is more flexible than Marconi's
    mechanism of deleting all messages associated with a particular claim,
    and works very well for the most common scenario of in-order message
    delivery.

Pop semantics are on their way into the codebase. Limited claim deletes
sound like an interesting thing; let's talk about it. Want to submit a
new spec?

  7. RabbitMQ message acknowledgment MUST be sent over the same AMQP
    channel instance on which the message was originally received. This
    requires that the two Marconi HTTP calls that receive and acknowledge a
    message are affinitized to the same Marconi backend. It either
    substantially complicates the driver implementation (server-side reverse
    proxying of requests) or adds new requirements onto the Marconi
    deployment (server affinity through load balancing).

Nothing to do here. I guess a combination with a persistent-transport
protocol would help, but we don't want to make the storage driver depend
on the transport.

The bigger issue here is that the storage layer abstraction has been
designed with a focus on a set of atomic, independent operations, with no
way to correlate them or share any state between them. This works well for
transport layers that are operation-centered, like HTTP. This abstraction
is likely to be insufficient when one wants to use a connection-oriented
transport (e.g. WebSocket, AMQP, MQTT), or a storage driver based on a
connection-oriented protocol (e.g. AMQP). Without explicit modeling of
state that spans several atomic calls to the storage driver, in particular
around the lifetime of the connection, the current storage abstraction
cannot satisfy these configurations well.

This is the current status, true. However, it was not always like
that. We decided to go this way until we actually needed to support
other transports (YAGNI). The time to revisit this is, perhaps,
coming.

That said, I agree that some things should change in the storage to
support other transports. But this is a different discussion from the
changes in the API to support other storage drivers.

Flavio

--
@flaper87
Flavio Percoco

responded Jun 13, 2014 by Flavio_Percoco

Thanks Flavio, inline.

On 6/13/14, 1:37 AM, "Flavio Percoco" wrote:

On 11/06/14 18:01 +0000, Janczuk, Tomasz wrote:
Thanks Flavio, some comments inline below.

On 6/11/14, 5:15 AM, "Flavio Percoco" wrote:

  1. Marconi exposes HTTP APIs that allow messages to be listed without
    consuming them. This API cannot be implemented on top of AMQP 0.9, which
    implements strict queueing semantics.

I believe this is quite an important endpoint for Marconi. It's not
about listing messages but about getting a batch of messages. Whether it
is through claims or not doesn't really matter. What matters is giving
the user the opportunity to get a set of messages, do some work, and
decide what to do with those messages afterwards.

The sticky point here is that this Marconi endpoint allows messages to
be obtained without consuming them in the traditional messaging system
sense: the messages remain visible to other consumers. It could be argued
that such semantics can be implemented on top of AMQP by first getting the
messages and then immediately releasing them for consumption by others,
before the Marconi call returns. However, even that is only possible for
messages that are at the front of the queue - the "paging" mechanism using
markers cannot be supported.

What matters is whether the listing functionality is useful or not.
Let's not think about it as "listing" or "paging" but about getting
batches of messages that are still available for others to process in
parallel. As mentioned in my previous email, AMQP has been a good way
to analyze the extra set of features Marconi exposes in the API, but I
don't want to base the usability choice on whether traditional
messaging systems support it and how it could be implemented there.

This functionality is very useful in a number of scenarios. It has
traditionally been the domain of database systems - flexible access to
data is what DBs excel at (select top 1000 * from X order by create_date).
The vast majority of existing messaging systems have a much more
constrained and prescriptive way of accessing data than a database. Why
does this functionality need to be part of Marconi? What are the benefits
of listing messages in Marconi that cannot be realized with a plain
database?

In other words, if I need to access my data in that way, why would I use
Marconi rather than a DB?

  5. Marconi's message consumption API creates a "claim ID" for a set of
    consumed messages, up to a limit. In the AMQP 0.9 model (as well as in SQS
    and Azure Queues), the "claim ID" maps onto the concept of a "delivery
    tag", which has a 1-1 relationship with a message. Since there is no way
    to represent the 1-N mapping between claim ID and messages in the AMQP 0.9
    model, it effectively restricts consumption to one message per claim ID.
    This in turn prevents the benefits of batch consumption.

  6. Marconi's message consumption acknowledgment requires both the claim ID
    and the message ID to be provided. The message ID concept is missing from
    AMQP 0.9. In order to implement this API, assuming the artificial 1-1
    restriction of the claim-message mapping from #5 above, one could require
    that messageID === claimID. This is really a workaround.

These two points represent quite a change in the way Marconi works and a
trade-off in terms of batch consumption (as you mentioned). I believe we
can support both. For example, claimID+suffix, where the suffix points to
a specific claimed message.

I don't want to start an extended discussion about this here, but let's
keep in mind that we may be able to support both. I personally think
Marconi's claims are reasonable as they are, which means I currently
like them better than SQS's.

What are the advantages of the Marconi model for claims over the SQS and
Azure Queue model for acknowledgements?

I think the SQS and Azure Queue model is both simpler and more flexible.
But the key advantage is that it has been around for a while, has been
proven to work, and people understand it.

  1. SQS and Azure require only one concept to acknowledge a message
    (receipt handle/pop receipt), as opposed to Marconi's two concepts
    (message ID + claim ID). The SQS/Azure model is simpler.

TBH, I'm not exactly sure where you're going with this. I mean, the
model may look simpler, but it's not necessarily better or easier to
implement. Keeping both messages and claims separate in terms of IDs
and management is flexible and powerful enough, IMHO. But I'm probably
missing your point.

I don't believe requiring messageID+claimID to delete a specific
claimed message is hard.

It may not be hard. It is just more complex than it needs to be to
accomplish the same task.

  2. Similarly to Marconi, SQS and Azure allow individual claimed messages
    to be deleted. This is a wash.

Calling it a wash is neither helpful nor friendly. Why do you think it
is a wash?

Claiming a message does not delete the message, which means consumers
may want to delete it before the claim is released. Do you have a
better way to do it?

By "wash" I mean neither SQS/Azure or Marconi offers something the other
does not related to this aspect. I don't see a better way of doing it,
this is pretty standard.

  3. SQS and Azure allow a batch of messages, up to a particular receipt
    handle/pop receipt, to be deleted. This is more flexible than Marconi's
    mechanism of deleting all messages associated with a particular claim,
    and works very well for the most common scenario of in-order message
    delivery.

Pop semantics are on their way into the codebase. Limited claim deletes
sound like an interesting thing; let's talk about it. Want to submit a
new spec?

I will be happy to propose a spec, but I'd rather take a more
comprehensive approach than addressing this single issue. We have
discussed a number of problems both on this thread and outside of it.
Perhaps all this learning should come together as a V2 of the HTTP API.

  7. RabbitMQ message acknowledgment MUST be sent over the same AMQP
    channel instance on which the message was originally received. This
    requires that the two Marconi HTTP calls that receive and acknowledge a
    message are affinitized to the same Marconi backend. It either
    substantially complicates the driver implementation (server-side reverse
    proxying of requests) or adds new requirements onto the Marconi
    deployment (server affinity through load balancing).

Nothing to do here. I guess a combination with a persistent-transport
protocol would help, but we don't want to make the storage driver depend
on the transport.

The bigger issue here is that the storage layer abstraction has been
designed with a focus on a set of atomic, independent operations, with no
way to correlate them or share any state between them. This works well for
transport layers that are operation-centered, like HTTP. This abstraction
is likely to be insufficient when one wants to use a connection-oriented
transport (e.g. WebSocket, AMQP, MQTT), or a storage driver based on a
connection-oriented protocol (e.g. AMQP). Without explicit modeling of
state that spans several atomic calls to the storage driver, in particular
around the lifetime of the connection, the current storage abstraction
cannot satisfy these configurations well.

This is the current status, true. However, it was not always like
that. We decided to go this way until we actually needed to support
other transports (YAGNI). The time to revisit this is, perhaps,
coming.

That said, I agree that some things should change in the storage to
support other transports. But this is a different discussion from the
changes in the API to support other storage drivers.

Yes, this is a separate discussion. I will start a new thread to talk
about this.

Flavio

--
@flaper87
Flavio Percoco


responded Jun 13, 2014 by Janczuk,_Tomasz