
[Openstack] [Fuel] storage question. (Fuel 10 Newton deploy with storage nodes)


I've been learning a bit more about storage. Let me share what I think I know
and ask a more specific question. Please correct me if I am off on any of it.

Glance images and Cinder volumes are traditionally stored on the storage
node. Ephemeral volumes (Nova-managed, traditionally on the compute node)
are the copy of the Glance image that has been copied to the compute node
and booted as an instance's vHD. Cinder volumes can (among other things) be
added to an instance as additional storage besides this Glance image.

In Fuel I set the "Ceph RBD for volumes (Cinder)" and "Ceph RBD for images
(Glance)" settings, which will set up Glance and Cinder to use the Ceph OSD
storage nodes.
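
(For reference, and not Fuel-specific: those two checkboxes generally
correspond to the standard Ceph RBD backend options in glance-api.conf and
cinder.conf. The pool and user names below are common defaults, not
necessarily what Fuel writes out.)

# glance-api.conf
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf

# cinder.conf
[DEFAULT]
enabled_backends = ceph
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
# libvirt secret UUID for the cinder Ceph user (placeholder)
rbd_secret_uuid = 00000000-0000-0000-0000-000000000000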

But I am not sure what the setting "Ceph RBD for ephemeral volumes
(Nova)" will do.

Would selecting it move the running instances' vHDs off the hypervisors and
onto the storage nodes? (i.e., move ephemeral storage from local disk to over
the network?)

Thanks

--jim

On Thu, Aug 24, 2017 at 12:14 PM, Jim Okken jim@jokken.com wrote:

Hi all,

We have a pretty complicated storage setup and I am not sure how to
configure Fuel for deployment of the storage nodes. I'm using Fuel
10/Newton. Plus I'm a bit confused by some of the storage aspects
(image/Glance, volume/Cinder, ephemeral/?).

We have 3 nodes dedicated to be storage nodes, for HA.

We're using Fibre Channel extents and need to use the Ceph filesystem.

I’ll try to simplify the storage situation at first to ask my initial
question without too many details.

We have a fast and a slow storage location. Management tells me they want
the slow location for the Glance images and the fast location for the place
where the instances actually run. (Assume compute nodes with slow hard
drives but access to a fast Fibre Channel volume.)

Where is "the place where the instances actually run"? It isn't via Glance
or Cinder, is it?

When I configure the storage for a Ceph OSD node I see volume settings for
Base System, Ceph, and Ceph Journal. (I see my slow-storage and my
fast-storage disks.)

When I configure the storage for a compute node I see volume settings for
Base System and Virtual Storage. Is this ephemeral storage? How does a
Virtual Storage volume here compare to the Ceph volume on the Ceph OSD node?

I have seen an OpenStack instance whose .xml file on the compute node
shows the vHD as a Ceph path (i.e. rbd:compute/f63e4d30-7706-40be-8eda-b74e91b9dac1_disk).
Is this Ceph local to the compute node or Ceph on the storage node? (Is
this ephemeral storage?)
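
(For context: when Nova puts ephemeral disks on RBD, the instance's libvirt
XML typically describes a network disk that points at the Ceph monitors
rather than at a local file, roughly like the sketch below. The monitor
address, username, and secret UUID here are placeholders.)

<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <auth username='compute'>
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <source protocol='rbd' name='compute/f63e4d30-7706-40be-8eda-b74e91b9dac1_disk'>
    <host name='192.168.0.10' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>

So an rbd: path in the XML generally means the disk lives in the Ceph
cluster and is accessed over the network, not on the compute node's local
disk.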

Thanks for any help you might have, I’m a bit confused

thanks

-- Jim


asked Sep 6, 2017 in openstack by Jim_Okken

6 Responses


Ceph is basically a ‘swiss army knife of storage’. It can play multiple roles in an OpenStack deployment, which is one reason why it is so popular among this crowd. It can be used as storage for:

  • nova ephemeral disks (Ceph RBD)
  • replacement for swift (Ceph Object)
  • cinder volume backend (Ceph RBD)
  • glance image backend (Ceph RBD)
  • gnocchi metrics storage (Ceph Object)
  • generic filesystem (CephFS)

…and probably a few more that I’m missing.

The combination of Ceph as the backend for Glance and Nova ephemeral and/or Cinder volumes is gorgeous because it’s an ‘instance clone’ of the Glance image into the disk/volume, which means very fast VM provisioning. Some people boot instances off of Nova ephemeral storage, some prefer to boot off of Cinder volumes. It depends on whether you want features like QoS (I/O limiting), snapshots, and backup, and whether you want the data to ‘persist’ as a volume after the VM that uses it is removed or to disappear when the VM is deleted (i.e. ‘ephemeral’).
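
A concrete way to see the difference from the CLI (the image and flavor names below are placeholders):

# Boot from a Nova ephemeral disk: the root disk is cloned from the image
# and is deleted together with the VM.
openstack server create --image cirros --flavor m1.small vm-ephemeral

# Boot from a Cinder volume: the root disk is a volume that persists
# independently of the VM.
openstack volume create --image cirros --size 10 vol1
openstack server create --volume vol1 --flavor m1.small vm-volume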

I’m not a ‘Fuel/Mirantis guy’ so I can’t tell you specifically what those options in their installer do, but generally Ceph storage is housed on separate servers dedicated to Ceph regardless of how you want to use it. Some people colocate Ceph onto their compute nodes and have them perform double duty (i.e. ‘hyperconverged’).

Hopefully this gives you a little bit of information regarding how Ceph is used.

Mike Smith
Lead Cloud System Architect
Overstock.com

responded Aug 25, 2017 by Mike_Smith

Thanks Mike for the info.

Yes, I do want very fast VM provisioning and all the useful features that
come with having all three of Glance/Cinder/ephemeral in Ceph on the storage nodes.

But I can't afford to have my vHD (either as a Cinder volume or as an
ephemeral volume) over the network on the storage node.

Do any Fuel experts know exactly what the "Ceph RBD for ephemeral volumes
(Nova)" option in Fuel 10 does?
Does it move the running instances' vHDs off the hypervisors and onto the
storage nodes? (i.e., move ephemeral from local I/O to network I/O?)

thanks!

-- Jim

responded Aug 25, 2017 by Jim_Okken

Hi Jim,

"Ceph RBD for ephemeral volumes (Nova)" means Nova ephemeral storage is
placed on Ceph. That means every VM's vHD I/O will go over the network to
the Ceph cluster.

If you don't select it, Nova ephemeral storage is placed on the local disk
of the compute node. (For example, if an instance is created on the
"node-1" compute node, its vHD file will live on node-1's disk.)

But note that if you create instances on ephemeral storage and the
ephemeral space is not on shared storage (like Ceph), you can't evacuate
those instances if the compute node crashes.
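
For what it's worth, outside of Fuel this option usually boils down to the
[libvirt] RBD settings in nova.conf on each compute node, roughly like the
sketch below. The pool name "compute" is only a guess based on the
rbd:compute/... path quoted earlier; the user and secret UUID are placeholders.

[libvirt]
# store ephemeral/root disks as RBD images instead of local files
images_type = rbd
images_rbd_pool = compute
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = compute
rbd_secret_uuid = 00000000-0000-0000-0000-000000000000

If images_type is left at its default, the disks stay as local files under
/var/lib/nova/instances on the compute node.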

By the way, here are a few tips that may be helpful if you decide to build
Ceph OSDs as the OpenStack storage backend.

  1. Normally 1G Ethernet is enough for many cases, but I suggest running
    the storage network on its own Ethernet port (see the ceph.conf sketch
    after this list). If you are still worried, you can also build an LACP
    bond for the storage network.

  2. For the disks that will be used for Ceph, it's better not to build RAID
    on them; one OSD per disk gives better performance.

  3. It's better to use an SSD for the Ceph journal; it will speed up OSD
    performance.
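
For tip 1, the usual way to split the traffic is to give Ceph separate
public and cluster networks in ceph.conf (the subnets below are just
placeholders):

[global]
# network the OpenStack/client traffic uses
public_network = 10.0.1.0/24
# network the OSDs use for replication and recovery
cluster_network = 10.0.2.0/24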

Just my 2 cents, and sorry about my bad English.

Eddie.

responded Aug 30, 2017 by Eddie_Yen

Hi all,

Can you offer any insight into this failure I get when deploying 2 compute
nodes using Fuel 10, please? (The controller etc. nodes are all deployed and working.)

fuel_agent.cmd.agent PartitionNotFoundError: Partition
/dev/mapper/3600c0ff0001ea00f521fa45901000000-part2 not found after
creation
fuel_agent.cmd.agent [-] Partition
/dev/mapper/3600c0ff0001ea00f521fa45901000000-part2 not found after creation

ls -al /dev/mapper

600c0ff0001ea00f521fa45901000000 -> ../dm-0

600c0ff0001ea00f521fa45901000000-part1 -> ../dm-1

600c0ff0001ea00f521fa45901000000p2 -> ../dm-2

Why the 2nd partition was created and actually named "...000p2" rather than
"...000-part2" is beyond me.

More logging if it helps, lots of failures:

2017-09-01 18:42:32 ERR puppet-user[3642]: /bin/bash
"/etc/puppet/shellmanifests/provision56_command.sh" returned 255 instead
of one of [0]

2017-09-01 18:42:32 NOTICE puppet-user[3642]:
(/Stage[main]/Main/Exec[provision56shell]/returns) Partition
/dev/mapper/3600c0ff0001ea00f5d1fa45901000000-part2 not found after creation

2017-09-01 18:42:32 NOTICE puppet-user[3642]:
(/Stage[main]/Main/Exec[provision56shell]/returns) Unexpected error

2017-09-01 18:42:32 NOTICE puppet-user[3642]:
(/Stage[main]/Main/Exec[provision56shell]/returns) /bin/bash: warning:
setlocale: LC_ALL: cannot change locale (en_US.UTF-8)

2017-09-01 18:42:31 WARNING systemd-udevd[4982]: Process
'/sbin/kpartx -u -p -part /dev/dm-0' failed with exit code 1.

2017-09-01 18:42:31 INFO multipathd[1012]: dm-3: remove map
(uevent)

2017-09-01 18:42:31 WARNING systemd-udevd[4964]: Process
'/usr/bin/partx -d --nr 1-1024 /dev/sdc' failed with exit code 1.

2017-09-01 18:42:31 WARNING systemd-udevd[4963]: Process
'/usr/bin/partx -d --nr 1-1024 /dev/sdb' failed with exit code 1.

2017-09-01 18:42:31 ERR multipath: /dev/sda: can't store
path info

2017-09-01 18:42:30 WARNING systemd-udevd[4889]: Process
'/sbin/kpartx -u -p -part /dev/dm-0' failed with exit code 1.

2017-09-01 18:42:29 INFO multipathd[1012]: dm-3: remove map
(uevent)

2017-09-01 18:42:29 WARNING systemd-udevd[4866]: Process
'/usr/bin/partx -d --nr 1-1024 /dev/sdb' failed with exit code 1.

2017-09-01 18:42:29 WARNING systemd-udevd[4867]: Process
'/usr/bin/partx -d --nr 1-1024 /dev/sdc' failed with exit code 1.

2017-09-01 18:42:29 ERR multipath: /dev/sda: can't store
path info

2017-09-01 18:42:28 WARNING systemd-udevd[4791]: Process
'/sbin/kpartx -u -p -part /dev/dm-0' failed with exit code 1.

2017-09-01 18:42:28 INFO multipathd[1012]: dm-3: remove map
(uevent)

2017-09-01 18:42:28 WARNING systemd-udevd[4773]: Process
'/usr/bin/partx -d --nr 1-1024 /dev/sdb' failed with exit code 1.

2017-09-01 18:42:28 WARNING systemd-udevd[4774]: Process
'/usr/bin/partx -d --nr 1-1024 /dev/sdc' failed with exit code 1.

2017-09-01 18:42:28 ERR multipath: /dev/sda: can't store
path info

2017-09-01 18:42:28 INFO multipathd[1012]: dm-2: remove map
(uevent)

2017-09-01 18:42:27 WARNING systemd-udevd[4655]: Process
'/usr/bin/partx -d --nr 1-1024 /dev/sdc' failed with exit code 1.

2017-09-01 18:42:27 WARNING systemd-udevd[4654]: Process
'/usr/bin/partx -d --nr 1-1024 /dev/sdb' failed with exit code 1.

2017-09-01 18:42:27 ERR multipath: /dev/sda: can't store
path info

2017-09-01 18:42:26 WARNING systemd-udevd[4576]: Process
'/usr/bin/partx -d --nr 1-1024 /dev/sdb' failed with exit code 1.

2017-09-01 18:42:26 WARNING systemd-udevd[4577]: Process
'/usr/bin/partx -d --nr 1-1024 /dev/sdc' failed with exit code 1.

2017-09-01 18:42:26 ERR multipath: /dev/sda: can't store
path info

2017-09-01 18:42:26 INFO multipathd[1012]: dm-2: remove map
(uevent)

2017-09-01 18:42:25 NOTICE nailgun-agent: I,
[2017-09-01T18:42:21.541001 #3601] INFO -- : Wrote data to file
'/etc/nailgun_uid'. Data: 56

2017-09-01 18:42:24 WARNING systemd-udevd[4114]: Process
'/usr/bin/partx -d --nr 1-1024 /dev/sdb' failed with exit code 1.

2017-09-01 18:42:24 WARNING systemd-udevd[4115]: Process
'/usr/bin/partx -d --nr 1-1024 /dev/sdc' failed with exit code 1.

2017-09-01 18:42:24 ERR multipath: /dev/sda: can't store
path info

2017-09-01 18:42:24 INFO multipathd[1012]: dm-2: remove map
(uevent)

2017-09-01 18:42:24 NOTICE nailgun-agent: I,
[2017-09-01T18:42:20.153616 #3601] INFO -- : API URL is
https://10.20.243.1:8443/api

2017-09-01 18:42:24 ERR multipath: /dev/sda: can't store
path info

2017-09-01 18:42:24 WARNING systemd-udevd[3965]: Process
'/usr/bin/partx -d --nr 1-1024 /dev/sdc' failed with exit code 1.

2017-09-01 18:42:24 WARNING systemd-udevd[3964]: Process
'/usr/bin/partx -d --nr 1-1024 /dev/sdb' failed with exit code 1.

2017-09-01 18:42:22 NOTICE puppet-user[3642]: Compiled catalog for
bootstrap.dialogic.com in environment production in 0.05 seconds

2017-09-01 18:42:22 NOTICE puppet-user[3642]: (Scope(Class[main]))
MODULAR: provision_56

2017-09-01 18:42:06 INFO CRON[3598]: (root) CMD (flock -w 0 -o
/var/lock/nailgun-agent.lock -c "/usr/bin/nailgun-agent 2>&1 | tee -a
/var/log/nailgun-agent.log | /usr/bin/logger -t nailgun-agent")

2017-09-01 18:41:57 NOTICE nailgun-agent: I,
[2017-09-01T18:41:53.155954 #3109] INFO -- : Wrote data to file
'/etc/nailgun_uid'. Data: 56

2017-09-01 18:41:55 NOTICE nailgun-agent: I,
[2017-09-01T18:41:51.503429 #3109] INFO -- : API URL is
https://10.20.243.1:8443/api

2017-09-01 18:41:09 NOTICE nailgun-agent: I,
[2017-09-01T18:41:04.963763 #2606] INFO -- : Wrote data to file
'/etc/nailgun_uid'. Data: 56

2017-09-01 18:41:07 NOTICE nailgun-agent: I,
[2017-09-01T18:41:03.312223 #2606] INFO -- : API URL is
https://10.20.243.1:8443/api

2017-09-01 18:41:05 INFO CRON[2603]: (root) CMD (flock -w 0 -o
/var/lock/nailgun-agent.lock -c "/usr/bin/nailgun-agent 2>&1 | tee -a
/var/log/nailgun-agent.log | /usr/bin/logger -t nailgun-agent")

2017-09-01 18:40:37 INFO systemd[1]: Started The Marionette
Collective.

2017-09-01 18:40:37 WARNING systemd[1]:
mcollective.service: Supervising process 2583 which is not our child. We'll
most likely not notice when it exits.

thanks


responded Sep 1, 2017 by Jim_Okken

Hi

Can you describe your disk configuration and partitioning?

responded Sep 4, 2017 by Eddie_Yen

Thanks for the help once again, Eddie!

I'm sure you remember I have that Fibre Channel SAN configuration.

This system has a 460GB disk mapped to it from the Fibre Channel SAN. As
far as I can tell this disk isn't much different to the OS than a local
SATA drive.
There is also an internal 32GB USB/flash drive in this system which isn't
even shown in the Fuel 10 GUI.

In the bootstrap OS I see:

ls /dev/disk/by-path:
pci-0000:00:14.0-usb-0:3.1:1.0-scsi-0:0:0:0
pci-0000:09:00.0-fc-0x247000c0ff25ce6d-lun-12
pci-0000:09:00.0-fc-0x207000c0ff25ce6d-lun-12

both those xxx-lun-12 devices are the same drive.

I also see one /dev/dm-X device
lsblk /dev/dm-0
NAME MAJ:MIN RM SIZE RO TYPE
MOUNTPOINT
3600c0ff0001ea00f5d1fa45901000000 252:0 0 429.3G 0 mpath

there are 3 /dev/sdX devices

1.
(parted) select /dev/sda
Using /dev/sda
(parted) print
Model: HP iLO Internal SD-CARD (scsi)
Disk /dev/sdd: 32.1GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags

2.
(parted) select /dev/sdb
Using /dev/sdb
(parted) print
Error: /dev/sdb: unrecognised disk label
Model: HP MSA 2040 SAN (scsi)
Disk /dev/sdb: 461GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

3.
(parted) select /dev/sdc
Using /dev/sdc
(parted) print
Error: /dev/sdc: unrecognised disk label
Model: HP MSA 2040 SAN (scsi)
Disk /dev/sdc: 461GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

/dev/sdb and /dev/sdc are the same disk.
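
A quick way to confirm that sdb and sdc are two paths to the same LUN, and
which dm device they collapse into:

# both sd devices should appear as paths under the same multipath map
multipath -ll

# the tree view should list sdb and sdc as children of the 3600c0ff... mpath device
lsblk -o NAME,TYPE,SIZE,MOUNTPOINT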

I see this bug, but I wouldn't know how to even start applying a patch, if
it even applies to my situation:
https://bugs.launchpad.net/fuel/+bug/1652788

thanks!

-- Jim

responded Sep 6, 2017 by Jim_Okken
...