
[Openstack-operators] Experience with Cinder volumes as root disks?

0 votes

In our process of standing up an OpenStack internal cloud we are facing the question of ephemeral storage vs. Cinder volumes for instance root disks.

As I look at public clouds such as AWS and Azure, the norm is to use persistent volumes for the root disk. AWS started out with images booting onto ephemeral disk, but soon after they released Elastic Block Storage and ever since the clear trend has been to EBS-backed instances, and now when I look at their quick-start list of 33 AMIs, all of them are EBS-backed. And I'm not even sure one can have anything except persistent root disks in Azure VMs.

Based on this and a number of other factors, I think we want the normal / default behavior for our users to be booting onto Cinder-backed volumes instead of onto ephemeral storage. But when I look at OpenStack, its design point appears to be booting images onto ephemeral storage, and while it is possible to boot an image onto a new volume, this is clumsy (I haven't found a way to make this the default behavior) and we are experiencing performance problems (that admittedly we have not yet run to ground).

So ...

  • Are other operators routinely booting onto Cinder volumes instead of ephemeral storage?

  • What has been your experience with this; any advice?

Conrad Kimball
Associate Technical Fellow
Chief Architect, Enterprise Cloud Services
Application Infrastructure Services / Global IT Infrastructure / Information Technology & Data Analytics
conrad.kimball@boeing.com
P.O. Box 3707, Mail Code 7M-TE
Seattle, WA 98124-2207
Bellevue 33-11 bldg, office 3A6-3.9
Mobile: 425-591-7802


OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
asked Aug 2, 2017 in openstack-operators by Kimball,_Conrad (180 points)   1 1 3

19 Responses

0 votes

On 08/01/2017 08:50 AM, Kimball, Conrad wrote:

·Are other operators routinely booting onto Cinder volumes instead of ephemeral
storage?

It's up to the end-user, but yes.

·What has been your experience with this; any advice?

It works fine. With Horizon you can do it in one step (select the image but
tell it to boot from volume) but with the CLI I think you need two steps (make
the volume from the image, then boot from the volume). The extra steps are a
moot point if you are booting programmatically (from a custom script or
something like heat).
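For reference, the two-step CLI flow Chris mentions looks roughly like this (image, flavor, network, and volume names below are placeholders; assumes the unified OpenStackClient against a configured cloud):

```shell
# Step 1: create a bootable volume from a Glance image.
openstack volume create --image cirros-0.3.5 --size 20 --bootable boot-vol

# Step 2: boot an instance from that volume.
openstack server create --volume boot-vol --flavor m1.small \
    --network private my-server
```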

I think that generally speaking the default is to use ephemeral storage because
it's:

a) cheaper
b) "cloudy" in that if anything goes wrong you just spin up another instance

On the other hand, booting from volume does allow for faster migrations since it
avoids the need to transfer the boot disk contents as part of the migration.

Chris


responded Aug 1, 2017 by Chris_Friesen (20,420 points)   3 16 24
0 votes

Just my two cents here but we started out using mostly Ephemeral storage in
our builds and looking back I wish we hadn't. Note we're using Ceph as a
backend so my response is tailored towards Ceph's behavior.

The major pain point is snapshots. When you snapshot a Nova volume, an RBD
snapshot occurs and is very quick and uses very little additional storage,
however the snapshot is then copied into the images pool and in the process
is converted from a snapshot to a full size image. This takes a long time
because you have to copy a lot of data and it takes up a lot of space. It
also causes a great deal of IO on the storage and means you end up with a
bunch of "snapshot images" creating clutter. On the other hand volume
snapshots are near instantaneous without the other drawbacks I've mentioned.

On the plus side for ephemeral storage: resizing the root disk works
better. As long as your image is configured properly it's just a matter of
initiating a resize and letting the instance reboot to grow the root disk.
When using volumes as your root disk you instead have to shut down the
instance, grow the volume, and boot again.
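As a sketch, the two flows John describes look roughly like this (server, flavor, and volume names are placeholders; the volume-grow step mirrors his shutdown/grow/boot description for clouds where attached volumes can't be extended):

```shell
# Ephemeral root disk: resize to a flavor with a bigger disk, then confirm.
openstack server resize --flavor m1.large my-server
openstack server resize --confirm my-server

# Volume-backed root disk: stop, extend the volume, start again.
openstack server stop my-server
cinder extend my-boot-volume 40        # new size in GB
openstack server start my-server
```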

I hope this helps! If anyone on the list knows something I don't know
regarding these issues please chime in. I'd love to know if there's a
better way.

Regards,

John Petrini

On Tue, Aug 1, 2017 at 10:50 AM, Kimball, Conrad conrad.kimball@boeing.com
wrote:

In our process of standing up an OpenStack internal cloud we are facing
the question of ephemeral storage vs. Cinder volumes for instance root
disks.

As I look at public clouds such as AWS and Azure, the norm is to use
persistent volumes for the root disk. AWS started out with images booting
onto ephemeral disk, but soon after they released Elastic Block Storage and
ever since the clear trend has been to EBS-backed instances, and now when I
look at their quick-start list of 33 AMIs, all of them are EBS-backed. And
I’m not even sure one can have anything except persistent root disks in
Azure VMs.

Based on this and a number of other factors I think we want our user
normal / default behavior to boot onto Cinder-backed volumes instead of
onto ephemeral storage. But then I look at OpenStack and its design point
appears to be booting images onto ephemeral storage, and while it is
possible to boot an image onto a new volume this is clumsy (haven’t found a
way to make this the default behavior) and we are experiencing performance
problems (that admittedly we have not yet run to ground).

So …

· Are other operators routinely booting onto Cinder volumes
instead of ephemeral storage?

· What has been your experience with this; any advice?

Conrad Kimball

Associate Technical Fellow

Chief Architect, Enterprise Cloud Services

Application Infrastructure Services / Global IT Infrastructure /
Information Technology & Data Analytics

conrad.kimball@boeing.com

P.O. Box 3707, Mail Code 7M-TE

Seattle, WA 98124-2207

Bellevue 33-11 bldg, office 3A6-3.9

Mobile: 425-591-7802 <(425)%20591-7802>


OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
responded Aug 1, 2017 by John_Petrini (1,880 points)   4 5
0 votes

Hi Conrad,

We boot to ephemeral disk by default, but our ephemeral disk is Ceph
RBD just like our Cinder volumes.

Using Ceph for Cinder volume and Glance image storage, it is possible
to very quickly create new persistent volumes from Glance images,
because on the backend it's just a CoW snapshot operation (even though
we use separate pools for ephemeral disks, persistent volumes, and
images). This is also what happens for ephemeral booting, which is much
faster than copying the image to local disk on the hypervisor first, so
we get quick starts and relatively easy live migrations (which we use
for maintenance like hypervisor reboots and reinstalls).

I don't know how to make it the "default", but Ceph definitely makes
it faster. Other backends I've used basically mount the raw storage
volume, download the image, then 'dd' it into place, which is
painfully slow.
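The CoW behavior is visible from the Ceph side: a cloned boot disk reports the Glance image snapshot as its parent. A sketch (pool names and the truncated image IDs below are hypothetical):

```shell
# Inspect an instance's RBD disk; a CoW clone lists its parent image.
rbd -p vms info f1b3c4d5_disk
#   ...
#   parent: images/5c7d9e2f@snap    <- clone of the Glance image snapshot

# Show actual (thin-provisioned) space used vs. provisioned size.
rbd -p vms du f1b3c4d5_disk
```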

As to why ephemeral rather than volume-backed by default: it's much
easier to boot many copies of the same thing and be sure they're the
same using ephemeral storage and images or snapshots. Volume-backed
instances tend to drift.

That said, working in a research lab, many of my users go for the more
"pet"-like persistent VM workflow. We just manage it with docs and
education, though there is always someone who misses the red flashing
"ephemeral means it gets deleted when you turn it off" sign and is
sad.

-Jon

On Tue, Aug 01, 2017 at 02:50:45PM +0000, Kimball, Conrad wrote:
responded Aug 1, 2017 by jon_at_csail.mit.edu (4,720 points)   1 4 7
0 votes

At Overstock we do both, in different clouds. Our preferred option is a Ceph backend for Nova ephemeral storage. We like it because it is fast to boot and makes resize easy. Our use case doesn’t require snapshots nor do we have a need for keeping the data around if a server needs to be rebuilt. It may not work for other people, but it works well for us.

In some of our other clouds, where we don't have Ceph available, we do use Cinder volumes for booting VMs off of backend SAN services. It works OK, but there are a few pain points in regard to disk resizing - it's a bit of a cumbersome process compared to the experience with Nova ephemeral. Depending on the solution used, creating the volume for boot can take much, much longer, and that can be annoying. On the plus side, Cinder does allow you to do QoS to limit I/O, whereas I do not believe that's an option with Nova ephemeral. And, again depending on the Cinder solution employed, the disk I/O for this kind of setup can be significantly better than some other options, including Nova ephemeral with a Ceph backend.
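The Cinder QoS mechanism works by attaching QoS specs to a volume type; a sketch (the spec name, type name, and limit values are made up):

```shell
# Create front-end (hypervisor-enforced) I/O limits as a QoS spec.
openstack volume qos create --consumer front-end \
    --property total_iops_sec=500 \
    --property total_bytes_sec=104857600 limit-io

# Associate the spec with a volume type; volumes of that type inherit it.
openstack volume qos associate limit-io my-volume-type
```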

Bottom line: it depends what you need, as both options work well and there are people doing both out there in the wild.

Good luck!

On Aug 1, 2017, at 9:14 AM, John Petrini jpetrini@coredial.com wrote:

responded Aug 1, 2017 by Mike_Smith (2,280 points)   1 4
0 votes

On Tue, Aug 01, 2017 at 11:14:03AM -0400, John Petrini wrote:

On the plus side for ephemeral storage; resizing the root disk of images
works better. As long as your image is configured properly it's just a
matter of initiating a resize and letting the instance reboot to grow the
root disk. When using volumes as your root disk you instead have to
shutdown the instance, grow the volume and boot.

Some good news there: starting with the Pike release, you will now be
able to extend an attached volume, as long as both Cinder and Nova are
at Pike or later.
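With both services at Pike, extending an in-use volume becomes a single call; it requires requesting a new enough Cinder API microversion (the volume ID below is a placeholder):

```shell
# Extending attached volumes needs Cinder API microversion 3.42 or later.
cinder --os-volume-api-version 3.42 extend <volume-id> 40   # new size in GB
```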

Sean


responded Aug 1, 2017 by Sean_McGinnis (11,820 points)   2 3 6
0 votes

One other thing to think about - I think at least starting with the Mitaka
release, we added a feature called image volume cache. So if you create a
boot volume, the first time you do so it takes some time as the image is
pulled down and written to the backend volume.

With image volume cache enabled, that still happens on the first volume
creation of the image. But then any subsequent volume creations on that
backend for that image will be much, much faster.

This is something that needs to be configured. Details can be found here:

https://docs.openstack.org/cinder/latest/admin/blockstorage-image-volume-cache.html
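For reference, enabling the cache is a per-backend cinder.conf setting plus the internal tenant credentials (the backend section name, UUIDs, and size limits below are placeholders; option names per the linked docs):

```ini
[DEFAULT]
cinder_internal_tenant_project_id = <project-uuid>
cinder_internal_tenant_user_id = <user-uuid>

[rbd-backend]
image_volume_cache_enabled = True
image_volume_cache_max_size_gb = 200
image_volume_cache_max_count = 50
```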

Sean

On Tue, Aug 01, 2017 at 10:47:26AM -0500, Sean McGinnis wrote:
responded Aug 1, 2017 by Sean_McGinnis (11,820 points)   2 3 6
0 votes

On 08/01/2017 11:14 AM, John Petrini wrote:
Just my two cents here but we started out using mostly Ephemeral storage
in our builds and looking back I wish we hadn't. Note we're using Ceph
as a backend so my response is tailored towards Ceph's behavior.

The major pain point is snapshots. When you snapshot a Nova volume, an
RBD snapshot occurs and is very quick and uses very little additional
storage, however the snapshot is then copied into the images pool and in
the process is converted from a snapshot to a full size image. This
takes a long time because you have to copy a lot of data and it takes up
a lot of space. It also causes a great deal of IO on the storage and
means you end up with a bunch of "snapshot images" creating clutter. On
the other hand volume snapshots are near instantaneous without the other
drawbacks I've mentioned.

On the plus side for ephemeral storage; resizing the root disk of images
works better. As long as your image is configured properly it's just a
matter of initiating a resize and letting the instance reboot to grow
the root disk. When using volumes as your root disk you instead have to
shutdown the instance, grow the volume and boot.

I hope this helps! If anyone on the list knows something I don't know
regarding these issues please chime in. I'd love to know if there's a
better way.

I'd just like to point out that the above is exactly the right way to
think about things.

Don't boot from volume (i.e. don't use a volume as your root disk).

Instead, separate the operating system from your application data. Put
the operating system on a small disk image (small == fast boot times),
use a config drive for injectable configuration and create Cinder
volumes for your application data.

Detach and attach the application data Cinder volume as needed to your
server instance. Make your life easier by not coupling application data
and the operating system together.
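That attach/detach pattern is just a couple of CLI calls (server and volume names are placeholders):

```shell
# Create a data volume and attach it to a running instance.
openstack volume create --size 100 app-data
openstack server add volume my-server app-data

# Later, move the same data to a replacement instance.
openstack server remove volume my-server app-data
openstack server add volume my-new-server app-data
```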

Best,
-jay


responded Aug 1, 2017 by Jay_Pipes (59,760 points)   3 11 14
0 votes

·What has been your experience with this; any advice?

It works fine. With Horizon you can do it in one step (select the image but
tell it to boot from volume) but with the CLI I think you need two steps
(make the volume from the image, then boot from the volume). The extra
steps are a moot point if you are booting programmatically (from a custom
script or something like heat).

One thing to keep in mind when using Horizon for this - there's currently
no way in Horizon to specify the volume type you would like to use for
creating this boot volume. So it will always only use the default volume
type.

That may be fine if you only have one, but if you have multiple backends,
or multiple settings controlled by volume types, then you will probably
want to use the CLI method for creating your boot volumes.
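On the CLI, the volume type is an explicit flag at volume-creation time (the type, image, and other names below are placeholders):

```shell
# Pick a non-default volume type when creating the boot volume.
openstack volume create --type ssd-backend --image ubuntu-16.04 \
    --size 20 --bootable boot-vol

openstack server create --volume boot-vol --flavor m1.small my-server
```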

There has been some discussion about creating a Nova driver to just use
Cinder for ephemeral storage. There are some design challenges with how
to best implement that, but if operators are interested, it would be
great to hear that at the Forum and elsewhere so we can help raise the
priority of that between teams.

Sean


responded Aug 1, 2017 by Sean_McGinnis (11,820 points)   2 3 6
0 votes

Strictly speaking, I don't think this is the case anymore for Mitaka or later. Snapshotting Nova instances does take more space as the image is flattened, but the dumb download-then-upload back into Ceph has been cut out. With careful attention paid to discard/TRIM, I believe you can maintain the thin provisioning properties of RBD. The workflow is explained here: https://www.sebastien-han.fr/blog/2015/10/05/openstack-nova-snapshots-on-ceph-rbd/
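For discard to reach RBD, the guest disk needs a bus that supports it; with libvirt/KVM that is typically virtio-scsi, selected via Glance image properties. A sketch (the image name is a placeholder; property names per the image metadata docs):

```shell
openstack image set \
    --property hw_scsi_model=virtio-scsi \
    --property hw_disk_bus=scsi \
    my-image
# Also set hw_disk_discard = unmap in nova.conf's [libvirt] section;
# guests can then run "fstrim" so deleted blocks are released in the pool.
```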

On Aug 1, 2017, at 11:14 AM, John Petrini jpetrini@coredial.com wrote:

responded Aug 1, 2017 by Mike_Lowe (1,060 points)   3 3
0 votes

Yes, from Mitaka onward the snapshot happens at the RBD level, which is
fast. It's the flattening and uploading of the image to Glance that's the
major pain point. Still, it's worlds better than the qemu snapshots to
local disk prior to Mitaka.

John Petrini

Platforms Engineer // CoreDial, LLC // coredial.com
751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
P: 215.297.4400 x232 // F: 215.297.4401 // E: jpetrini@coredial.com


On Tue, Aug 1, 2017 at 2:53 PM, Mike Lowe jomlowe@iu.edu wrote:

responded Aug 1, 2017 by John_Petrini (1,880 points)   4 5
...