
[Openstack] Create instance fails on creating block device - Block Device Mapping is Invalid

0 votes

I'm trying out my newly installed OpenStack system and I'm having
trouble starting my first instance.

----- s n i p -----
Build of instance 5193c2d9-0aaf-4f84-b108-f6884d97b571 aborted: Block Device Mapping is Invalid.
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1926, in _do_build_and_run_instance
    filter_properties)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2083, in _build_and_run_instance
    'create.error', fault=e)
  File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2048, in _build_and_run_instance
    block_device_mapping) as resources:
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2206, in _build_resources
    reason=e.format_message())
----- s n i p -----

Cleaning the irrelevant stuff out of the logs, I see:

----- s n i p -----
INFO cinder.api.v2.volumes Create volume of 5 GB
INFO cinder.volume.api Volume created successfully.
INFO cinder.volume.flows.manager.create_volume Volume 6b1dace4-78e1-452b-a455-c0fc882374f3: being created as image with specification: {'status': u'creating', 'image_location': (None, None), 'volume_size': 5, 'volume_name': 'volume-6b1dace4-78e1-452b-a455-c0fc882374f3', 'image_id': u'8c15b5e8-9a67-4784-ad7a-0b1cc7b0bdec', 'image_service': <cinder.image.glance.GlanceImageService object at 0x7fa4f31d8ad0>, 'image_meta': {'status': u'active', 'name': u'fedora23', 'deleted': False, 'container_format': u'docker', 'created_at': datetime.datetime(2016, 6, 15, 20, 38, 43, tzinfo=<iso8601.Utc>), 'disk_format': u'qcow2', 'updated_at': datetime.datetime(2016, 6, 15, 20, 38, 45, tzinfo=<iso8601.Utc>), 'id': u'8c15b5e8-9a67-4784-ad7a-0b1cc7b0bdec', 'owner': u'd524c8dfd9e9449798ebac9b025f8de6', 'min_ram': 0, 'checksum': u'38d62e2e1909c89f72ba4d5f5c0005d5', 'min_disk': 0, 'is_public': True, 'deleted_at': None, 'properties': {u'hypervisor_type': u'docker', u'architecture': u'x86_64'}, 'size': 234363392}}
INFO cinder.image.image_utils Image download 223.00 MB at 35.35 MB/s
WARN manila.context [-] Arguments dropped when creating context: {u'read_only': False, u'domain': None, u'show_deleted': False, u'user_identity': u'- - - - -', u'project_domain': None, u'resource_uuid': None, u'user_domain': None}.
WARN manila.context [-] Arguments dropped when creating context: {u'read_only': False, u'domain': None, u'show_deleted': False, u'user_identity': u'- - - - -', u'project_domain': None, u'resource_uuid': None, u'user_domain': None}.
INFO cinder.image.image_utils Converted 3072.00 MB image at 31.59 MB/s
INFO cinder.volume.flows.manager.create_volume Volume volume-6b1dace4-78e1-452b-a455-c0fc882374f3 (6b1dace4-78e1-452b-a455-c0fc882374f3): created successfully
INFO cinder.volume.manager Created volume successfully.
INFO cinder.api.v2.volumes Delete volume with id: 6b1dace4-78e1-452b-a455-c0fc882374f3
INFO cinder.volume.api Delete volume request issued successfully.
INFO eventlet.wsgi.server 10.0.4.5 "DELETE /v2/d524c8dfd9e9449798ebac9b025f8de6/volumes/6b1dace4-78e1-452b-a455-c0fc882374f3 HTTP/1.1" status: 202 len: 211 time: 0.1300900
INFO cinder.volume.targets.iscsi Skipping remove_export. No iscsi_target is presently exported for volume: 6b1dace4-78e1-452b-a455-c0fc882374f3
INFO cinder.volume.utils Performing secure delete on volume: /dev/mapper/blade_center-volume--6b1dace4--78e1--452b--a455--c0fc882374f3
----- s n i p -----

Full log at http://bayour.com/misc/openstack_instance_create-log.txt.

The web GUI says (this might be from another test, but I always
get the same message):

----- s n i p -----
Error: Failed to perform requested operation on instance
"jessie-test", the instance has an error status: Please try again
later [Error: Build of instance a4e1deaa-cdf0-4fc7-8c54-579868c962c3
aborted: Block Device Mapping is Invalid.].
----- s n i p -----

I can see nothing in this that would make it fail!
The only thing that caught my eye was that it isn't removing
the iSCSI target, because there isn't one...

This is (most of) my cinder.conf file:

----- s n i p -----
[DEFAULT]
my_ip = 10.0.4.1
storage_availability_zone = nova
default_availability_zone = nova
enabled_backends = lvm
iscsi_target_prefix = iqn.2010-10.org.openstack:
iscsi_ip_address = $my_ip
iscsi_port = 3260
iscsi_iotype = blockio
iscsi_write_cache = on
volume_group = blade_center
scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = blade_center
iscsi_protocol = iscsi
iscsi_helper = tgtadm
----- s n i p -----
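Since several option names in this thread were mangled by the mail archive (underscores stripped, names broken across lines), it can be worth a quick syntax check on the real file. A minimal sketch, run here against an inline copy rather than /etc/cinder/cinder.conf:

```shell
# Flag any line that is neither a [section] header, a key = value pair,
# nor blank -- e.g. an option name broken across two lines.
conf=$(cat <<'EOF'
[DEFAULT]
my_ip = 10.0.4.1
enabled_backends = lvm

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = blade_center
iscsi_helper = tgtadm
EOF
)
bad=$(printf '%s\n' "$conf" | grep -cEv '^\[[^]]+\]$|^[A-Za-z0-9_]+ *=|^$')
echo "malformed lines: $bad"
```

Anything nonzero points at a line that cinder's config parser will choke on or silently ignore.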

PS. Creating the instance from an already existing, empty
volume didn't work either. Same message, and even less
information in the log.
--
As soon as you find a product that you really like,
they will stop making it.
- Wilson's Law


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
asked Jun 16, 2016 in openstack by Turbo_Fredriksson (8,980 points)   7 12 15

34 Responses

0 votes

I also had some trouble getting volume-backed instances to boot. I use
the Xen hypervisor and found out that the instance was assigned a device
name of "vda" (the default) instead of "xvda"; I filed a bug
report for this. Do you have the nova-compute logs? I can't find them at your
link. They should give a hint about the device name or other possible
causes. Since the volume is created but immediately destroyed, I guess
nova has a problem with the block device.

Regards,
Eugen


--
Eugen Block voice : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail : eblock@nde.ag

     Vorsitzende des Aufsichtsrates: Angelika Mozdzen
       Sitz und Registergericht: Hamburg, HRB 90934
               Vorstand: Jens-U. Mozdzen
                USt-IdNr. DE 814 013 983
responded Jun 17, 2016 by Eugen_Block (3,740 points)   2 2
0 votes

On Jun 17, 2016, at 1:12 PM, Eugen Block wrote:

Do you have the nova-compute logs?

They don't say a thing, so I'm guessing it never gets
that far.

If I'm quick, I can see the LVM volume being created
successfully (which the log also indicates).

responded Jun 17, 2016 by Turbo_Fredriksson (8,980 points)   7 12 15
0 votes

Then I would turn on debug logging for cinder and see if there is more
information on why it's deleting the volumes before attaching them. I
don't even see an attempt to attach it. If it works, these steps
should show up:

  • Created volume successfully.
  • Initialize volume connection completed successfully.
  • Attach volume completed successfully.
  • Deleted volume successfully.
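The milestones above can be grepped straight out of the cinder-volume log to see how far the flow gets. A sketch using an inline excerpt that mirrors this thread (create succeeds, no attach ever happens); point the `log` variable at your real log file in practice:

```shell
# Sample cinder-volume log excerpt (stand-in for the real file).
log=$(cat <<'EOF'
INFO cinder.volume.manager Created volume successfully.
INFO cinder.volume.api Delete volume request issued successfully.
EOF
)
missing=0
for step in 'Created volume successfully' \
            'Initialize volume connection completed successfully' \
            'Attach volume completed successfully' \
            'Deleted volume successfully'; do
  if printf '%s\n' "$log" | grep -q "$step"; then
    echo "found:   $step"
  else
    echo "MISSING: $step"
    missing=$((missing + 1))
  fi
done
echo "$missing of 4 steps missing"
```

In the failing case described here, everything after "Created volume successfully" comes up missing, which narrows the problem to the initialize/attach phase driven by nova.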

Regards,
Eugen

responded Jun 17, 2016 by Eugen_Block (3,740 points)   2 2
0 votes

On Jun 17, 2016, at 2:38 PM, Eugen Block wrote:

I don't even see the attempt to attach it. If it works, these steps should be processed:

Neither can I! And running with debugging doesn't
show anything either :(

The log literally says (no changes, no additions or removals!):

----- s n i p -----
2016-06-17 15:12:39.335 8046 DEBUG cinder.volume.manager [req-b9a6a699-c178-427b-b1f2-bf62dce2578e 067a3363c5984f81a7cfa2eda5d1ebf3 d524c8dfd9e9449798ebac9b025f8de6 - - -] Task 'cinder.volume.flows.manager.create_volume.CreateVolumeOnFinishTask;volume:create, create.end' (54ad7ada-38eb-4fe1-9efd-53e6f2d35f26) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/lib/python2.7/dist-packages/taskflow/listeners/logging.py:189
2016-06-17 15:12:39.583 8046 INFO cinder.volume.flows.manager.create_volume [req-b9a6a699-c178-427b-b1f2-bf62dce2578e 067a3363c5984f81a7cfa2eda5d1ebf3 d524c8dfd9e9449798ebac9b025f8de6 - - -] Volume 8892642a-6e12-48ae-ba94-8cc897a4acd5 (8892642a-6e12-48ae-ba94-8cc897a4acd5): created successfully
2016-06-17 15:12:39.585 8046 DEBUG cinder.volume.manager [req-b9a6a699-c178-427b-b1f2-bf62dce2578e 067a3363c5984f81a7cfa2eda5d1ebf3 d524c8dfd9e9449798ebac9b025f8de6 - - -] Task 'cinder.volume.flows.manager.create_volume.CreateVolumeOnFinishTask;volume:create, create.end' (54ad7ada-38eb-4fe1-9efd-53e6f2d35f26) transitioned into state 'SUCCESS' from state 'RUNNING' with result 'None' _task_receiver /usr/lib/python2.7/dist-packages/taskflow/listeners/logging.py:178
2016-06-17 15:12:39.589 8046 DEBUG cinder.volume.manager [req-b9a6a699-c178-427b-b1f2-bf62dce2578e 067a3363c5984f81a7cfa2eda5d1ebf3 d524c8dfd9e9449798ebac9b025f8de6 - - -] Flow 'volume_create_manager' (45f829ab-d4bd-480a-a78f-c5e8eaa38598) transitioned into state 'SUCCESS' from state 'RUNNING' _flow_receiver /usr/lib/python2.7/dist-packages/taskflow/listeners/logging.py:140
2016-06-17 15:12:39.591 8046 INFO cinder.volume.manager [req-b9a6a699-c178-427b-b1f2-bf62dce2578e 067a3363c5984f81a7cfa2eda5d1ebf3 d524c8dfd9e9449798ebac9b025f8de6 - - -] Created volume successfully.
[NOTE: Here everything was a-ok!!]
2016-06-17 15:12:43.378 8046 DEBUG oslo_messaging.drivers.amqpdriver [-] received message msg_id: None reply to None __call__ /usr/lib/python2.7/dist-packages/oslo_messaging/drivers/amqpdriver.py:201
[NOTE: And here it starts deleting the volume!]
2016-06-17 15:12:43.383 8046 DEBUG oslo_concurrency.lockutils [req-661beca9-723a-4565-99b2-70b5ed862eca 067a3363c5984f81a7cfa2eda5d1ebf3 d524c8dfd9e9449798ebac9b025f8de6 - - -] Lock "8892642a-6e12-48ae-ba94-8cc897a4acd5-delete_volume" acquired by "cinder.volume.manager.lvo_inner2" :: waited 0.001s inner /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:273
2016-06-17 15:12:43.614 8046 INFO cinder.volume.targets.iscsi [req-661beca9-723a-4565-99b2-70b5ed862eca 067a3363c5984f81a7cfa2eda5d1ebf3 d524c8dfd9e9449798ebac9b025f8de6 - - -] Skipping remove_export. No iscsi_target is presently exported for volume: 8892642a-6e12-48ae-ba94-8cc897a4acd5
2016-06-17 15:12:43.615 8046 DEBUG oslo_concurrency.processutils [req-661beca9-723a-4565-99b2-70b5ed862eca 067a3363c5984f81a7cfa2eda5d1ebf3 d524c8dfd9e9449798ebac9b025f8de6 - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvs --noheadings --unit=g -o vg_name,name,size --nosuffix blade_center/8892642a-6e12-48ae-ba94-8cc897a4acd5 execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:326
2016-06-17 15:12:43.780 8046 DEBUG oslo_concurrency.processutils [req-661beca9-723a-4565-99b2-70b5ed862eca 067a3363c5984f81a7cfa2eda5d1ebf3 d524c8dfd9e9449798ebac9b025f8de6 - - -] CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvs --noheadings --unit=g -o vg_name,name,size --nosuffix blade_center/8892642a-6e12-48ae-ba94-8cc897a4acd5" returned: 0 in 0.166s execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:356
2016-06-17 15:12:43.783 8046 DEBUG oslo_concurrency.processutils [req-661beca9-723a-4565-99b2-70b5ed862eca 067a3363c5984f81a7cfa2eda5d1ebf3 d524c8dfd9e9449798ebac9b025f8de6 - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvdisplay --noheading -C -o Attr blade_center/8892642a-6e12-48ae-ba94-8cc897a4acd5 execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:326
2016-06-17 15:12:43.948 8046 DEBUG oslo_concurrency.processutils [req-661beca9-723a-4565-99b2-70b5ed862eca 067a3363c5984f81a7cfa2eda5d1ebf3 d524c8dfd9e9449798ebac9b025f8de6 - - -] CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvdisplay --noheading -C -o Attr blade_center/8892642a-6e12-48ae-ba94-8cc897a4acd5" returned: 0 in 0.165s execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:356
2016-06-17 15:12:43.950 8046 INFO cinder.volume.utils [req-661beca9-723a-4565-99b2-70b5ed862eca 067a3363c5984f81a7cfa2eda5d1ebf3 d524c8dfd9e9449798ebac9b025f8de6 - - -] Performing secure delete on volume: /dev/mapper/blade_center-8892642a--6e12--48ae--ba94--8cc897a4acd5
----- s n i p -----
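One thing worth noting in the log above: the delete runs under a different request id (req-661beca9-...) than the create (req-b9a6a699-...), so it appears to be triggered by a separate incoming RPC call rather than by the create flow itself. Extracting the ids makes this easy to spot; a sketch over an inline excerpt:

```shell
# Two-line stand-in for the real cinder-volume log.
log=$(cat <<'EOF'
2016-06-17 15:12:39.591 8046 INFO cinder.volume.manager [req-b9a6a699-c178-427b-b1f2-bf62dce2578e ...] Created volume successfully.
2016-06-17 15:12:43.383 8046 DEBUG oslo_concurrency.lockutils [req-661beca9-723a-4565-99b2-70b5ed862eca ...] Lock acquired for delete_volume
EOF
)
# Pull out every request id and deduplicate.
ids=$(printf '%s\n' "$log" | grep -oE 'req-[0-9a-f-]+' | sort -u)
printf '%s\n' "$ids"
n=$(printf '%s\n' "$ids" | wc -l)
echo "distinct request ids: $n"
```

Grepping nova's logs for the second request id would show which service issued the delete.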
--
Imagine you're an idiot and then imagine you're in
the government. Oh, sorry. Now I'm repeating myself
- Mark Twain

responded Jun 17, 2016 by Turbo_Fredriksson (8,980 points)   7 12 15
0 votes

Hi list,

I am seeing a strange behaviour of my cloud and could use some help on this.
I have a project containing 2 VMs: one is running in an external
network, the other is in a tenant network with a floating IP. The security
group allows ping and ssh.
Now there are several ways to break or restore the connectivity but I
can't find the cause.

  1. Boot a new instance on the same compute node (but different
    project, no matter if same or different network). Connectivity to both
    existing VMs is lost, however, from within the instance I can still
    get out! Restarting neutron-linuxbridge-agent gets it right again.

  2. During the state of broken connectivity changing the
    security-group-rules (adding one rule or deleting a rule) for the
    default sec-group has the same effect, although
    neutron-linuxbridge-agent is not restarted after that, but the VMs are
    reachable again.

  3. Different project, different network, same compute node: deleting a
    running instance also leads to a connectivity loss for the existing VMs.

  4. In a way I was able to reproduce this issue: on a different compute
    node and different project I launched an instance in the same external
    network last Friday. The instance was reachable, I shut it down. Today
    I booted it again, it was not reachable. Restarting the
    linuxbridge-agent fixed it again.

I took a look at iptables and compared the output when the instances
are reachable and when they are not. Somehow the neutron rules aren't
there. Following the rule tree to the bottom, it leads to a DROP rule
for all packets.

---cut here---
compute1:~ # iptables -L FORWARD -nv|more
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target                    prot opt in   out  source     destination
    0     0 nova-filter-top           all  --  *    *    0.0.0.0/0  0.0.0.0/0
    0     0 nova-compute-FORWARD      all  --  *    *    0.0.0.0/0  0.0.0.0/0

compute1:~ # systemctl restart openstack-neutron-linuxbridge-agent.service

compute1:~ # iptables -L FORWARD -nv|more
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target                    prot opt in   out  source     destination
   14  1176 neutron-filter-top        all  --  *    *    0.0.0.0/0  0.0.0.0/0
   14  1176 neutron-linuxbri-FORWARD  all  --  *    *    0.0.0.0/0  0.0.0.0/0
    0     0 nova-filter-top           all  --  *    *    0.0.0.0/0  0.0.0.0/0
    0     0 nova-compute-FORWARD      all  --  *    *    0.0.0.0/0  0.0.0.0/0
---cut here---
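The symptom above (nova chains present, neutron chains gone until the agent restarts) can be detected with a simple grep over the FORWARD chain listing. A sketch, run here against a pasted sample; on a real compute node you would pipe `iptables -L FORWARD -n` (as root) into it instead:

```shell
# Target names from the broken state in the thread: only nova chains.
forward=$(cat <<'EOF'
nova-filter-top
nova-compute-FORWARD
EOF
)
if printf '%s\n' "$forward" | grep -q '^neutron-'; then
  state="neutron chains present"
else
  state="neutron chains MISSING"
fi
echo "$state"
```

Running this from cron or a monitoring check would at least catch the broken state before users report lost connectivity.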

What is going on with neutron? I've been seeing this for about two weeks now; I
updated all nodes last Friday but the problem still exists.

Any help is appreciated!

Regards,
Eugen

responded Jun 20, 2016 by Eugen_Block (3,740 points)   2 2
0 votes

Is it possible to create an empty volume? Without nova or glance, just
a volume. If that works and the volume is not deleted immediately, you
could try to attach it to a running instance to see if nova can handle
it.
Do you see the iSCSI session on your compute node?
Then you could try to create a volume from an image; that way you see
if glance and cinder are working together properly. If that also works,
it could be an issue with nova, maybe some misconfiguration.

responded Jun 20, 2016 by Eugen_Block (3,740 points)   2 2
0 votes

On Jun 20, 2016, at 1:01 PM, Eugen Block wrote:

Is it possible to create an empty volume?

Yes.

try to attach it to a running instance

Can't start any instances because I can't create volumes.. :(

Do you see the iscsi session on your compute node?

No.

Then you could try to create a volume from an image, that way you see if glance and cinder are working properly together.

How do I do that from the shell?
--
Michael Jackson is not going to buried or cremated
but recycled into shopping bags so he can remain white,
plastic and dangerous for kids to play with.

responded Jun 20, 2016 by Turbo_Fredriksson (8,980 points)   7 12 15
0 votes

Can't start any instances because I can't create volumes

Can't you boot an instance without cinder? You could edit nova.conf to
use the local file system, just to have a running instance. If that works
you can switch to another backend.

How do i do that from the shell?

cinder create --image <image-id> --name <volume-name> <size-in-GB>

Do you see the iscsi session on your compute node?

No.

Try debugging your iSCSI connection, maybe first without OpenStack. If
you aren't able to log in to a session, then OpenStack will also fail, I
guess...

In my environment, I first tried to get all services running and
working without external backends; cinder, glance and nova all ran on
local storage. Then I tried other backends for cinder (iSCSI); now all
services use ceph.

responded Jun 20, 2016 by Eugen_Block (3,740 points)   2 2
0 votes

On Jun 20, 2016, at 3:27 PM, Eugen Block wrote:

Can't you boot an instance without cinder?

Don't know, can I??

You could edit nova.conf to use local file system, just to have a running instance. If that works you can switch to another backend.

How?

cinder create --image --name

I'll try that, thanx. How do you do that with the "openstack" command?

Try debugging your iscsi connection, maybe first without openstack.

From what I can see, it doesn't even start sharing via iSCSI..

In my environment, I first tried to get all services running and working without external backends, cinder, glance and nova all ran on local storage.

Didn't even know you could do that. Thought you HAD to use cinder/swift..

Please point me to a faq/howto/doc on how to do that, thanx!

Then I tried other backends for cinder (iscsi), now all services use ceph.

ceph?
--
Life sucks and then you die

responded Jun 20, 2016 by Turbo_Fredriksson (8,980 points)   7 12 15
0 votes

Can't you boot an instance without cinder?

Don't know, can I??

Well, you should ;-) How do you try to boot your instance, from the CLI or
Horizon? If it's Horizon, you would have to NOT click the
"Create New Volume --> Yes" option ;-) If it's the CLI, it's sufficient to
execute "nova boot --flavor <flavor> --image <image> --nic
net-id=<net-id> <instance-name>" (the --nic part is only needed if you
have multiple networks available).
This way you avoid creating a volume.

You could edit nova.conf
How?

It's usually the default, although I'm really not an expert in
OpenStack. But if you simply set up nova on the control and compute
nodes following an install guide, it should get you there.
I followed
http://docs.openstack.org/mitaka/install-guide-obs/nova-controller-install.html; there aren't many options to configure and it defaults to local file
storage.
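For reference, local (non-cinder) instance storage with the libvirt driver boils down to a couple of nova.conf defaults on the compute node; the snippet below is an illustrative sketch, not taken from this thread, and these are the stock defaults anyway:

```ini
[DEFAULT]
# Where nova keeps ephemeral instance disks on the compute node
instances_path = /var/lib/nova/instances

[libvirt]
# Local qcow2 files instead of a volume backend
images_type = qcow2
```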

From what I can see, it doesn't even start sharing via iSCSI

You should try to fix that before you try to use it with openstack.

Didn't even know you could do that. Thought you HAD to use cinder/swift..

Please point me to a faq/howto/doc on how to do that, thanx!

I used this guide:
http://docs.openstack.org/mitaka/install-guide-obs/environment-networking-storage-cinder.html
In the section about block storage it says "Block storage node
(Optional)", so you wouldn't have to, but I guess it makes sense in
the long term. But as I already said, first you should try to get an
instance running at all before using another backend.

Regards,
Eugen

responded Jun 21, 2016 by Eugen_Block (3,740 points)   2 2
...