
[Openstack] Create instance fails on creating block device - Block Device Mapping is Invalid

0 votes

I'm trying out my newly installed Openstack system and I'm having
problems starting my first instance.

----- s n i p -----
Build of instance 5193c2d9-0aaf-4f84-b108-f6884d97b571 aborted: Block Device Mapping is Invalid.
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1926, in _do_build_and_run_instance
    filter_properties)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2083, in _build_and_run_instance
    'create.error', fault=e)
  File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2048, in _build_and_run_instance
    block_device_mapping) as resources:
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2206, in _build_resources
    reason=e.format_message())
----- s n i p -----

Cleaning all the irrelevant stuff out of the logs, I see:

----- s n i p -----
INFO cinder.api.v2.volumes Create volume of 5 GB
INFO cinder.volume.api Volume created successfully.
INFO cinder.volume.flows.manager.create_volume Volume 6b1dace4-78e1-452b-a455-c0fc882374f3: being created as image with specification: {'status': u'creating', 'image_location': (None, None), 'volume_size': 5, 'volume_name': 'volume-6b1dace4-78e1-452b-a455-c0fc882374f3', 'image_id': u'8c15b5e8-9a67-4784-ad7a-0b1cc7b0bdec', 'image_service': <cinder.image.glance.GlanceImageService object at 0x7fa4f31d8ad0>, 'image_meta': {'status': u'active', 'name': u'fedora23', 'deleted': False, 'container_format': u'docker', 'created_at': datetime.datetime(2016, 6, 15, 20, 38, 43, tzinfo=<iso8601.Utc>), 'disk_format': u'qcow2', 'updated_at': datetime.datetime(2016, 6, 15, 20, 38, 45, tzinfo=<iso8601.Utc>), 'id': u'8c15b5e8-9a67-4784-ad7a-0b1cc7b0bdec', 'owner': u'd524c8dfd9e9449798ebac9b025f8de6', 'min_ram': 0, 'checksum': u'38d62e2e1909c89f72ba4d5f5c0005d5', 'min_disk': 0, 'is_public': True, 'deleted_at': None, 'properties': {u'hypervisor_type': u'docker', u'architecture': u'x86_64'}, 'size': 234363392}}
INFO cinder.image.image_utils Image download 223.00 MB at 35.35 MB/s
WARN manila.context [-] Arguments dropped when creating context: {u'read_only': False, u'domain': None, u'show_deleted': False, u'user_identity': u'- - - - -', u'project_domain': None, u'resource_uuid': None, u'user_domain': None}.
WARN manila.context [-] Arguments dropped when creating context: {u'read_only': False, u'domain': None, u'show_deleted': False, u'user_identity': u'- - - - -', u'project_domain': None, u'resource_uuid': None, u'user_domain': None}.
INFO cinder.image.image_utils Converted 3072.00 MB image at 31.59 MB/s
INFO cinder.volume.flows.manager.create_volume Volume volume-6b1dace4-78e1-452b-a455-c0fc882374f3 (6b1dace4-78e1-452b-a455-c0fc882374f3): created successfully
INFO cinder.volume.manager Created volume successfully.
INFO cinder.api.v2.volumes Delete volume with id: 6b1dace4-78e1-452b-a455-c0fc882374f3
INFO cinder.volume.api Delete volume request issued successfully.
INFO eventlet.wsgi.server 10.0.4.5 "DELETE /v2/d524c8dfd9e9449798ebac9b025f8de6/volumes/6b1dace4-78e1-452b-a455-c0fc882374f3 HTTP/1.1" status: 202 len: 211 time: 0.1300900
INFO cinder.volume.targets.iscsi Skipping remove_export. No iscsi_target is presently exported for volume: 6b1dace4-78e1-452b-a455-c0fc882374f3
INFO cinder.volume.utils Performing secure delete on volume: /dev/mapper/blade_center-volume--6b1dace4--78e1--452b--a455--c0fc882374f3
----- s n i p -----

Full log at http://bayour.com/misc/openstack_instance_create-log.txt.

The web GUI says (this might be from another test, but I always
get the same):

----- s n i p -----
Error: Failed to perform requested operation on instance
"jessie-test", the instance has an error status: Please try again
later [Error: Build of instance a4e1deaa-cdf0-4fc7-8c54-579868c962c3
aborted: Block Device Mapping is Invalid.].
----- s n i p -----

I can see nothing in this that would make it fail!
The only thing that caught my eye was that it isn't removing
the iSCSI target, because there isn't one..
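For reference, this is roughly what one entry of the "block_device_mapping_v2" list that Nova validates looks like when booting from a volume. This is a sketch from the Nova API field names, not taken from the thread; the UUID is the "test" volume that appears later, used purely for illustration:

```python
# Minimal sketch of one block-device-mapping-v2 entry, the structure
# Nova rejects with "Block Device Mapping is Invalid" when something
# about it (or the referenced volume) is wrong.
bdm = {
    "source_type": "volume",        # disk comes from an existing volume
    "destination_type": "volume",   # and stays a volume on the instance
    "uuid": "c16975ad-dd45-41d7-b0a9-cbd0849f80e4",  # illustrative volume ID
    "boot_index": 0,                # 0 marks the boot disk
    "delete_on_termination": False,
}

# A common cause of the error is the referenced volume never reaching
# the "available" state, or the mapping fields being inconsistent.
assert bdm["source_type"] in ("volume", "image", "snapshot", "blank")
```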

This is (most of) my cinder.conf file:

----- s n i p -----
[DEFAULT]
my_ip = 10.0.4.1
storage_availability_zone = nova
default_availability_zone = nova
enabled_backends = lvm
iscsi_target_prefix = iqn.2010-10.org.openstack:
iscsi_ip_address = $my_ip
iscsi_port = 3260
iscsi_iotype = blockio
iscsi_write_cache = on
volume_group = blade_center
scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = blade_center
iscsi_protocol = iscsi
iscsi_helper = tgtadm
----- s n i p -----

PS. Creating the instance from an already existing, empty
volume didn't work either. Same message, and even less
information in the log.
--
As soon as you find a product that you really like,
they will stop making it.
- Wilson's Law


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
asked Jun 16, 2016 in openstack by Turbo_Fredriksson (8,980 points)   7 13 18

34 Responses

0 votes

How can I create a local volume?

You have probably configured your cinder.conf to use lvm as backend:

control1:~ # grep -r enabled_backends /etc/cinder/
/etc/cinder/cinder.conf:#enabled_backends = lvm
/etc/cinder/cinder.conf:enabled_backends = rbd   --> that's what I use currently

I'm not sure if it would work, it's been a while since I used local
storage, but if you just comment the enabled_backends option out and
restart the cinder services, I believe it would create local volumes.
Still, I would postpone volumes for now if you want to bring an
instance up at all, and try to get nova to work with glance first.
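The suggested edit would look something like this in /etc/cinder/cinder.conf (a sketch; it assumes restarting the cinder services afterwards):

```ini
[DEFAULT]
# Comment the backend list out to fall back to default volume handling;
# restart cinder-api, cinder-scheduler and cinder-volume after the change.
#enabled_backends = lvm
```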

Ok, that's different! I'm not running Glance on my Compute, only on
my Control.

Glance is not supposed to run on a compute node, it runs on a control
node. Reading the error message it seems that you have configured your
glance host as it tries to connect, but do you also have configured
the endpoints according to
http://docs.openstack.org/draft/install-guide-debconf/debconf/debconf-api-endpoints.html? What's the output of "openstack endpoint list | grep
glance"?

[waited a little while]

How long did you wait? Timeout problem? Make sure that nothing blocks
the requests (proxy?), what response do you get if you execute
control1:~ # curl http://:9292

Which of these should I run on the Compute and which one on the Control?

On top of every "install and configure" page there is a statement
where to install the required services, for example the glance page
says:

"This section describes how to install and configure the Image
service, code-named glance, on the controller node."

Or if you continue to the compute service, which has several
components, it differs between control and compute node:

"This section describes how to install and configure the Compute
service, code-named nova, on the controller node."

and

"This section describes how to install and configure the Compute
service on a compute node."

Now, this might be a stupid question, but it actually only occurred
to me just now when I was looking at that missing net error.

I don't think this should be a problem if you have at least a subnet
assigned to the network, which is true in your case. I just tested
that, the instance boots into a newly created network without any
further configuration. So in your case it's the missing connection to
glance, if you fix that we'll see what's next ;-)

Quoting Turbo Fredriksson turbo@bayour.com:

Now that my authentication problems seem to be fixed, it's back on
track with trying to boot my first instance..

On Jun 21, 2016, at 3:17 PM, Cynthia Lopes wrote:

If not, the command is: openstack volume create --size (size in GB) --image
(image name or id) volume_name

Just for info the cinder command was not exact, it should be: cinder create
--image-id * *--display-name

Thanx.

I agree with Eugen that you should make sure you can create a volume and
attach to a VM to help understand what your problem is.

Ok, so I created an empty, bootable volume. Worked just fine it seems.

I then used that when creating the instance (from Horizon).

Still the same error - Block Device Mapping is Invalid.

----- s n i p -----
bladeA01b:~# openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID                                   | Display Name | Status    | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| c16975ad-dd45-41d7-b0a9-cbd0849f80e4 | test         | available |    5 |             |
+--------------------------------------+--------------+-----------+------+-------------+
bladeA01b:~# openstack volume show test
+--------------------------------+--------------------------------------+
| Field                          | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | true                                 |
| consistencygroup_id            | None                                 |
| created_at                     | 2016-06-22T20:48:31.000000           |
| description                    |                                      |
| encrypted                      | False                                |
| id                             | c16975ad-dd45-41d7-b0a9-cbd0849f80e4 |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | test                                 |
| os-vol-host-attr:host          | bladeA01b@lvm#LVM_iSCSI              |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 2985b96e27f048cd92a18db0dd03aa23     |
| properties                     |                                      |
| replication_status             | disabled                             |
| size                           | 5                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | available                            |
| type                           | None                                 |
| updated_at                     | 2016-06-22T20:48:48.000000           |
| user_id                        | 0b7e5b0653084efdad5d67b66f2cf949     |
+--------------------------------+--------------------------------------+
----- s n i p -----

If I understand you correctly, this is a Cinder volume, right? Because of
the "@lvm.." part?
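As an aside, the os-vol-host-attr:host value encodes three parts, host@backend#pool. A quick sketch of how it decomposes (a hypothetical helper for illustration, not part of the OpenStack client):

```python
def split_volume_host(host):
    """Split a Cinder "os-vol-host-attr:host" value into its parts.

    The format is "<host>@<backend>#<pool>", e.g. the value shown
    above, "bladeA01b@lvm#LVM_iSCSI": host running cinder-volume,
    backend section name from cinder.conf, and the pool name.
    """
    node, _, rest = host.partition("@")
    backend, _, pool = rest.partition("#")
    return node, backend, pool

node, backend, pool = split_volume_host("bladeA01b@lvm#LVM_iSCSI")
# node = "bladeA01b", backend = "lvm", pool = "LVM_iSCSI"
```

So yes: "@lvm" means the volume was created by the [lvm] backend section of cinder.conf, i.e. a Cinder LVM volume.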

How can I create a local volume?

Looking under "System Information -> Block Storage Services" I see only
Cinder services..

----- s n i p -----
Name Host Zone Status State Last Updated
cinder-backup bladeA01b nova Enabled Up 0 minutes
cinder-scheduler bladeA01b nova Enabled Up 0 minutes
cinder-volume bladeA01b@lvm nova Enabled Up 0 minutes
cinder-volume bladeA01b@nfs nova Enabled Down 4 hours, 13 minutes
----- s n i p -----

This guide explains about ephemeral storage options:
https://platform9.com/support/openstack-tutorial-storage-options-and-use-cases/

Thanx, I've read something similar so I'm aware of the differences and
what they do. This one I'm going to read in more detail, because it HAD
more detail! :)

Usually you can specify the directory where VM instance disks will be
stored on the compute node with the nova.conf option 'instances_path'
in the [DEFAULT] section.

It was commented out, but just for the sake of it I un-commented it..
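For reference, the option mentioned above lives in nova.conf on the compute node; a sketch with the usual packaged default path (an assumption, check your install):

```ini
[DEFAULT]
# Directory where ephemeral instance disks are kept on the compute node.
# /var/lib/nova/instances is the common packaged default ($state_path/instances).
instances_path = /var/lib/nova/instances
```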

Thanx. That was actually halfway to actually being "documentation".
I'll bookmark that.

The command to create the VM with an ephemeral disk (nova local storage and
not cinder) is:
openstack server create --image (image id or name) --flavor (flavor id or
name) vm_name

----- s n i p -----
bladeA01b:/var/tmp# wget --quiet
http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
bladeA01b:/var/tmp# openstack image create --public --protected \
    --disk-format qcow2 --container-format docker \
    --property architecture=x86_64 --property hypervisor_type=docker \
    --file cirros-0.3.4-x86_64-disk.img cirros

+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6                     |
| container_format | docker                                               |
| created_at       | 2016-06-22T21:23:03Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/d4d913c3-21f3-4e7d-932c-2cb35c8131e8/file |
| id               | d4d913c3-21f3-4e7d-932c-2cb35c8131e8                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros                                               |
| owner            | 2985b96e27f048cd92a18db0dd03aa23                     |
| properties       | architecture='x86_64', hypervisor_type='docker'      |
| protected        | True                                                 |
| schema           | /v2/schemas/image                                    |
| size             | 13287936                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2016-06-22T21:23:04Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+
bladeA01b:/var/tmp# openstack server create --image cirros --flavor
m1.tiny test3
Multiple possible networks found, use a Network ID to be more
specific. (HTTP 409) (Request-ID:
req-381a6df8-cd8b-474a-89c4-8a5935b3d7f8)
bladeA01b:/var/tmp# openstack network list
+--------------------------------------+------------+--------------------------------------+
| ID                                   | Name       | Subnets                              |
+--------------------------------------+------------+--------------------------------------+
| fb1a3653-44d9-4f98-a357-c87406a8ea47 | physical   | 5e3ea098-975d-460c-b313-61c11b2175d3 |
| 2bb7b8e2-188f-4e46-bf4d-ef5ec81ddb4d | network-99 | 6ef5d993-2796-4adf-a724-eae5f5d1cc53 |
+--------------------------------------+------------+--------------------------------------+
bladeA01b:/var/tmp# openstack server create --image cirros --flavor
m1.tiny --nic net-id=2bb7b8e2-188f-4e46-bf4d-ef5ec81ddb4d test3
+--------------------------------------+------------------------------------------------+
| Field                                | Value                                          |
+--------------------------------------+------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                         |
| OS-EXT-AZ:availability_zone          | nova                                           |
| OS-EXT-SRV-ATTR:host                 | None                                           |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                           |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000003                              |
| OS-EXT-STS:power_state               | 0                                              |
| OS-EXT-STS:task_state                | scheduling                                     |
| OS-EXT-STS:vm_state                  | building                                       |
| OS-SRV-USG:launched_at               | None                                           |
| OS-SRV-USG:terminated_at             | None                                           |
| accessIPv4                           |                                                |
| accessIPv6                           |                                                |
| addresses                            |                                                |
| adminPass                            | whateversecret                                 |
| config_drive                         |                                                |
| created                              | 2016-06-22T21:26:55Z                           |
| flavor                               | m1.tiny (5936ba55-7d76-4b80-8b3a-73b458b306f2) |
| hostId                               |                                                |
| id                                   | 860613fe-3834-4f72-909b-5fb4b7ff2932           |
| image                                | cirros (d4d913c3-21f3-4e7d-932c-2cb35c8131e8)  |
| key_name                             | None                                           |
| name                                 | test3                                          |
| os-extended-volumes:volumes_attached | []                                             |
| progress                             | 0                                              |
| project_id                           | 2985b96e27f048cd92a18db0dd03aa23               |
| properties                           |                                                |
| security_groups                      | [{u'name': u'default'}]                        |
| status                               | BUILD                                          |
| updated                              | 2016-06-22T21:26:55Z                           |
| user_id                              | 0b7e5b0653084efdad5d67b66f2cf949               |
+--------------------------------------+------------------------------------------------+
[waited a little while]
bladeA01b:/var/tmp# openstack server show test3 | grep fault
| fault | {u'message': u'Build of instance 860613fe-3834-4f72-909b-5fb4b7ff2932 aborted: Cannot load repository file: Connection to glance host http://10.0.4.3:9292 failed: Error finding address for http://10.0.4.3:9292/v1/images/d4d913c3-21f3-4e7d-932c-2cb35c8131e8: HTTPConnecti', u'code': 500, u'details': u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1926, in _do_build_and_run_instance\n    filter_properties)\n  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2083, in _build_and_run_instance\n    \'create.error\', fault=e)\n  File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 221, in __exit__\n    self.force_reraise()\n  File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in force_reraise\n    six.reraise(self.type_, self.value, self.tb)\n  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2067, in _build_and_run_instance\n    instance=instance)\n  File "/usr/lib/python2.7/contextlib.py", line 35, in __exit__\n    self.gen.throw(type, value, traceback)\n  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2244, in _build_resources\n    reason=six.text_type(exc))\n', u'created': u'2016-06-22T21:27:28Z'} |
----- s n i p -----

Ok, that's different! I'm not running Glance on my Compute, only on
my Control.

Which of these should I run on the Compute and which one on the Control?

The documentation (one of many I follow:
http://docs.openstack.org/draft/install-guide-debconf/common/get_started_image_service.html) doesn't say, only which ones to install
on the Control.

----- s n i p -----
bladeA03b:/etc/nova# apt-cache search glance | grep ^glance
glance - OpenStack Image Registry and Delivery Service - Daemons
glance-api - OpenStack Image Registry and Delivery Service - API server
glance-common - OpenStack Image Registry and Delivery Service - common files
glance-glare - OpenStack Artifacts - API server
glance-registry - OpenStack Image Registry and Delivery Service -
registry server
----- s n i p -----

Currently, I have all of them only on the Control..

Concerning the flavor, I think the flavor you use should have the same disk
size as the disk.

Ok, I'll keep that in mind, thanx.

Now, this might be a stupid question, but it actually only occurred
to me just now when I was looking at that missing net error. I haven't
really set up my network, just "winged" it. I'm pretty sure it's not
even close to working (I need to do more studying in the matter - I
still don't have a clue about how things are supposed to work in/on
the OpenStack side of things).

I've postponed it because I desperately need ANY success story - creating an
instance, even if it won't technically work, would help a lot with that.
I figured it should at least TRY to start.. And I ASSUME (!!) that as long
as the Control can talk to the Compute and "tell" it what to do (such as
"attach this volume/image"), it should at least be able to be created.
I'm guessing the networking (Neutron) in OS is for the instance, not for
administration etc. Or did I misunderstand (the little I've read and
actually understood about it :)?
--
Thinking before you speak is like wiping your arse
before you shit.
- Arne Anka



--
Eugen Block voice : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail : eblock@nde.ag

     Vorsitzende des Aufsichtsrates: Angelika Mozdzen
       Sitz und Registergericht: Hamburg, HRB 90934
               Vorstand: Jens-U. Mozdzen
                USt-IdNr. DE 814 013 983


responded Jun 23, 2016 by Eugen_Block (3,740 points)   2 2
0 votes

On Jun 23, 2016, at 12:26 PM, Eugen Block wrote:

/etc/cinder/cinder.conf:enabled_backends = rbd --> that's what I use currently

"rbd"?

I'm not sure if it would work, it's been a while since I used local storage, but if you just comment the enabled_backend option out and restart cinder services, I believe it would create local volumes.

Shouldn't it be enough just to "disable" those services/backends?

I guess I have to, because just commenting that out didn't help; they still
show as enabled and running.

But even after disabling them, they still show as "status=disabled,state=up"
with a "cinder service-list".. ?

Ok, that's different! I'm not running Glance on my Compute, only on my Control.

Glance is not supposed to run on a compute node, it runs on a control node.

Ok, good! I thought I missed something fundamental.

What's the output of "openstack endpoint list | grep glance"?

| 57b10556b7bf47eaa019c603a0f6b34f | europe-london | glance | image | True | public | http://10.0.4.1:9292
| 8672f6de1673470d93ab6ccee1c1a2bb | europe-london | glance | image | True | internal | http://10.0.4.1:9292
| e45c3e83fe744e7db949cdd89dfe5654 | europe-london | glance | image | True | admin | http://10.0.4.1:9292

That's the Control node..

[waited a little while]

How long did you wait?

10-15 seconds perhaps. At least less than (half?) a minute..

"This section describes how to install and configure the Image service, code-named glance, on the controller node."

It is not obvious from that that that (!! :) should only be done on the
Controller! It just says "do this on the controller". It does not make it
clear that you shouldn't do something on the Compute as well.

"This section describes how to install and configure the Compute service, code-named nova, on the controller node."
"This section describes how to install and configure the Compute service on a compute node."

Neither of which distinguishes the different parts - what if I
have/want a separate compute and control node? It does not
make things obvious!

And that's why I have a problem with HOWTOs! They assume too much.
And a BAD HOWTO (which all of them on Openstack are!) doesn't even
attempt to explain the different options you have, so if you deviate
even the very slightest, you're f**ked!

There's a HUMONGOUS difference between a "HOWTO" and "Documentation"!

Timeout problem? Make sure that nothing blocks the requests (proxy?), what response do you get if you execute
control1:~ # curl http://:9292

I was doing that ON the Control. Worked just fine.

And the Control and Compute are on the same switch.
--
It is when you smell your own excrement
that you begin to wonder who you really are.
- Arne Anka

responded Jun 23, 2016 by Turbo_Fredriksson (8,980 points)   7 13 18
0 votes

On Jun 23, 2016, at 2:10 PM, Turbo Fredriksson wrote:

But even after disabling them, they still show as "status=disabled,state=up"
with a "cinder service-list".. ?

I tried anyway, but creating a volume (empty or from an image) left
the host field empty. And the status was Error!

So apparently I need to figure out a way to configure "local storage".

The LVM volume shows:

----- s n i p -----
bladeA01b:~# openstack volume show test | grep host
| os-vol-host-attr:host | bladeA01b@lvm#LVM_iSCSI |
----- s n i p -----
--
Build a man a fire, and he will be warm for the night.
Set a man on fire and he will be warm for the rest of his life.

responded Jun 23, 2016 by Turbo_Fredriksson (8,980 points)   7 13 18
0 votes

I'm starting to think that it might have something to do with the
networking after all:

----- s n i p -----
2016-06-23 15:52:13.775 25419 DEBUG nova.compute.manager [req-87a08e39-96ac-4c23-96dd-5227c972b865 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: d75bd127-c554-4d79-bb9e-157c752628f4] Instance network_info: |[VIF({'profile': {}, 'ovs_interfaceid': u'9c23c0b8-1e96-4e73-b048-55c7380b2425', 'preserve_on_delete': False, 'network': Network({'bridge': 'br-provider', 'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 'type': 'fixed', 'floating_ips': [], 'address': u'10.99.0.4'})], 'version': 4, 'meta': {'dhcp_server': u'10.99.0.2'}, 'dns': [IP({'meta': {}, 'version': 4, 'type': 'dns', 'address': u'10.0.0.254'})], 'routes': [], 'cidr': u'10.99.0.0/24', 'gateway': IP({'meta': {}, 'version': 4, 'type': 'gateway', 'address': u'10.99.0.1'})})], 'meta': {'injected': False, 'tenant_id': u'2985b96e27f048cd92a18db0dd03aa23', 'mtu': 1458}, 'id': u'2bb7b8e2-188f-4e46-bf4d-ef5ec81ddb4d', 'label': u'network-99'}), 'devname': u'tap9c23c0b8-1e', 'vnic_type': u'normal', 'qbh_params': None, 'meta': {}, 'details': {u'port_filter': True, u'ovs_hybrid_plug': True}, 'address': u'fa:16:3e:4c:04:17', 'active': False, 'type': u'ovs', 'id': u'9c23c0b8-1e96-4e73-b048-55c7380b2425', 'qbg_params': None})]| _allocate_network_async /usr/lib/python2.7/dist-packages/nova/compute/manager.py:1572
2016-06-23 15:52:13.776 25419 DEBUG nova.compute.claims [req-87a08e39-96ac-4c23-96dd-5227c972b865 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: d75bd127-c554-4d79-bb9e-157c752628f4] Aborting claim: [Claim: 1024 MB memory, 5 GB disk] abort /usr/lib/python2.7/dist-packages/nova/compute/claims.py:120
2016-06-23 15:52:13.777 25419 DEBUG oslo_concurrency.lockutils [req-87a08e39-96ac-4c23-96dd-5227c972b865 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.abort_instance_claim" :: waited 0.000s inner /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:273
2016-06-23 15:52:14.017 25419 DEBUG oslo_concurrency.lockutils [req-87a08e39-96ac-4c23-96dd-5227c972b865 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] Lock "compute_resources" released by "nova.compute.resource_tracker.abort_instance_claim" :: held 0.240s inner /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:285
2016-06-23 15:52:14.018 25419 DEBUG nova.compute.manager [req-87a08e39-96ac-4c23-96dd-5227c972b865 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: d75bd127-c554-4d79-bb9e-157c752628f4] Build of instance d75bd127-c554-4d79-bb9e-157c752628f4 aborted: Block Device Mapping is Invalid. _build_and_run_instance /usr/lib/python2.7/dist-packages/nova/compute/manager.py:2081
2016-06-23 15:52:14.019 25419 DEBUG nova.compute.utils [req-87a08e39-96ac-4c23-96dd-5227c972b865 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: d75bd127-c554-4d79-bb9e-157c752628f4] Build of instance d75bd127-c554-4d79-bb9e-157c752628f4 aborted: Block Device Mapping is Invalid. notify_about_instance_usage /usr/lib/python2.7/dist-packages/nova/compute/utils.py:284
2016-06-23 15:52:14.020 25419 ERROR nova.compute.manager [req-87a08e39-96ac-4c23-96dd-5227c972b865 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: d75bd127-c554-4d79-bb9e-157c752628f4] Build of instance d75bd127-c554-4d79-bb9e-157c752628f4 aborted: Block Device Mapping is Invalid.
----- s n i p -----


responded Jun 23, 2016 by Turbo_Fredriksson (8,980 points)   7 13 18
0 votes

"rbd"?

It's a different storage backend, something like a network RAID. But
don't mind it right now ;-)

But even after disabling them, they're still show as
"status=disabled,state=up"

They are running because you didn't stop the services, you just
disabled them. You could stop them for now if you don't intend to use
cinder until you get an instance up and running, but I would take care
of cinder after that. It doesn't affect you while trying to boot an
instance on local storage, because cinder is not required for that.

From your latest logs I assume that you are still trying to boot from
volume; I recommend ignoring cinder for now and focusing on launching an
instance at all. Have you fixed your glance issue? Because that is
required, otherwise it won't work at all.

responded Jun 23, 2016 by Eugen_Block (3,740 points)   2 2
0 votes

I think it's possible we're far away from the correct path ;-)
It's not mentioned at all in the openstack lbaas V2 documentation,
but I think it's necessary to install Octavia on the controller machine first.
Then configure neutron on all compute nodes to support lbaas ...
Someone please correct me, if I'm wrong on this....

Cheers
Yngvi

-----Original Message-----
From: Turbo Fredriksson [mailto:turbo@bayour.com]
Sent: 23. júní 2016 13:36
To: openstack@lists.openstack.org
Subject: Re: [Openstack] Create instance fails on creating block device - Block Device Mapping is Invalid

On Jun 23, 2016, at 2:10 PM, Turbo Fredriksson wrote:

But even after disabling them, they're still show as "status=disabled,state=up"
with a "cinder service-list".. ?

I tried anyway, but creating a volume (empty or from an image) gave the host field as empty. And the status was Error!

So apparently I need to figure out a way to configure "local storage".

The LVM volume shows:

----- s n i p -----
bladeA01b:~# openstack volume show test | grep host
| os-vol-host-attr:host | bladeA01b@lvm#LVM_iSCSI |
----- s n i p -----
--
Build a man a fire, and he will be warm for the night.
Set a man on fire and he will be warm for the rest of his life.


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


responded Jun 23, 2016 by Yngvi_Páll_Þorfinnss (3,100 points)   1 4 7
0 votes

On Jun 23, 2016, at 4:30 PM, Eugen Block wrote:

They are running because you didn't stop the services, you just disabled them.

I kind of expected a disable to stop the service.. But what if I want to
stop only ONE service (of several)? For example the "nfs" backend, but leave
the "lvm" one online. I can't shut down cinder-volume, that would stop all of them..
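As far as I can tell there is no per-backend stop in cinder: "service-disable" only takes the backend out of scheduling, the cinder-volume process keeps serving all configured backends. If you really want to take ONE backend (say nfs) down and keep the other, I believe the only way is to remove it from "enabled_backends" in cinder.conf and restart cinder-volume. A sketch (section and driver names are my guesses, based on host strings like "bladeA01b@lvm#LVM_iSCSI" below):

```ini
[DEFAULT]
# was: enabled_backends = lvm,nfs
enabled_backends = lvm

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = LVM_iSCSI
```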

You could stop them for now if you don't intend to use cinder until you get an instance up and running, but I would take care of cinder after that. It doesn't affect you while trying to boot an instance on local storage, because cinder is not required for that.

Well. I can create Cinder volumes without any problems it seems:

bladeA01b:~# openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| 0a0929e6-cf4d-40b3-9ba3-9575290993e6 | test2 | available | 5 | |
| c16975ad-dd45-41d7-b0a9-cbd0849f80e4 | test | available | 5 | |
+--------------------------------------+--------------+-----------+------+-------------+
bladeA01b:~# openstack volume show test | grep host
| os-vol-host-attr:host | bladeA01b@lvm#LVM_iSCSI
bladeA01b:~# openstack volume show test2 | grep host
| os-vol-host-attr:host | bladeA01b@nfs#nfsbackend

From your latest logs I assume that you are still trying to boot from volume; I recommend ignoring cinder for now and focusing on launching an instance at all.

That doesn't seem to be possible. I've looked over some of the code for
Cinder, and if you don't configure "enabled_backends", then no volume
can be created. Well, they can be created, but they end up in an Error state
right away!

Have you fixed your glance issue?

I don't know. I don't know what's wrong with it :(

But I touch one thing and two other things break :( :( :(. Dang, dang, dang,
I'm getting really tired of this s**t!

----- s n i p -----
2016-06-23 17:00:21.511 18347 INFO cinder.api.openstack.wsgi [req-35b7eb35-997c-4149-a975-aba921d86182 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] HTTP exception thrown: Volume test2 could not be found.
2016-06-23 17:00:21.513 18347 INFO cinder.api.openstack.wsgi [req-35b7eb35-997c-4149-a975-aba921d86182 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] http://10.0.4.1:8776/v2/2985b96e27f048cd92a18db0dd03aa23/volumes/test2 returned with HTTP 404
2016-06-23 17:00:21.514 18347 INFO eventlet.wsgi.server [req-35b7eb35-997c-4149-a975-aba921d86182 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] 10.0.4.1 "GET /v2/2985b96e27f048cd92a18db0dd03aa23/volumes/test2 HTTP/1.1" status: 404 len: 419 time: 0.6278081
2016-06-23 17:00:21.524 18347 INFO cinder.api.openstack.wsgi [req-a37ee53f-6910-4411-8260-dd615a722318 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] GET http://10.0.4.1:8776/v2/2985b96e27f048cd92a18db0dd03aa23/volumes/detail?all_tenants=1&name=test2
2016-06-23 17:00:21.600 18347 INFO cinder.volume.api [req-a37ee53f-6910-4411-8260-dd615a722318 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] Get all volumes completed successfully.
2016-06-23 17:00:21.609 18347 INFO cinder.api.openstack.wsgi [req-a37ee53f-6910-4411-8260-dd615a722318 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] http://10.0.4.1:8776/v2/2985b96e27f048cd92a18db0dd03aa23/volumes/detail?all_tenants=1&name=test2 returned with HTTP 200
2016-06-23 17:00:21.611 18347 INFO eventlet.wsgi.server [req-a37ee53f-6910-4411-8260-dd615a722318 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] 10.0.4.1 "GET /v2/2985b96e27f048cd92a18db0dd03aa23/volumes/detail?all_tenants=1&name=test2 HTTP/1.1" status: 200 len: 1607 time: 0.0912130
----- s n i p -----

And yet it's right there! See top of email.
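For what it's worth, the 404 followed by the `detail?name=test2` search in the log above looks like normal client behaviour rather than an error: the client first tries the argument as a volume ID, and only on a 404 falls back to a search by name. A toy sketch of that lookup order (function names here are made up, not the real client internals):

```python
# Toy model of the client-side lookup: try the argument as an ID first
# (that's the 404 in the log), then fall back to a name search.
def find_volume(get_by_id, search_by_name, name_or_id):
    vol = get_by_id(name_or_id)            # GET /volumes/test  -> 404 for a name
    if vol is not None:
        return vol
    matches = search_by_name(name_or_id)   # GET /volumes/detail?name=test
    if len(matches) != 1:
        raise LookupError(name_or_id)
    return matches[0]

# Fake backend standing in for the cinder API:
volumes = {"8dbd3b7c-e36b-433f-a3b0-d701f63f63c2": {"name": "test"}}
get_by_id = lambda key: volumes.get(key)
search_by_name = lambda n: [v for v in volumes.values() if v["name"] == n]

print(find_volume(get_by_id, search_by_name, "test"))  # resolved via the name fallback
```

So a 404 on `GET /volumes/<name>` in the API log doesn't by itself mean the volume is missing.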
--
Realizing your own importance is like getting a mite
to understand that it is only visible under a microscope
- Arne Anka

responded Jun 23, 2016 by Turbo_Fredriksson (8,980 points)   7 13 18
0 votes

----- s n i p -----
2016-06-23 23:08:25.277 25887 INFO cinder.api.openstack.wsgi [req-9d2bc683-7599-4539-92a6-b8e8503591c8 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] GET http://10.0.4.1:8776/v2/2985b96e27f048cd92a18db0dd03aa23/volumes/test
2016-06-23 23:08:25.278 25887 DEBUG cinder.api.openstack.wsgi [req-9d2bc683-7599-4539-92a6-b8e8503591c8 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] Empty body provided in request get_body /usr/lib/python2.7/dist-packages/cinder/api/openstack/wsgi.py:936
2016-06-23 23:08:25.278 25887 DEBUG cinder.api.openstack.wsgi [req-9d2bc683-7599-4539-92a6-b8e8503591c8 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] Calling method '>' _process_stack /usr/lib/python2.7/dist-packages/cinder/api/openstack/wsgi.py:1092
2016-06-23 23:08:25.362 25887 INFO cinder.api.openstack.wsgi [req-9d2bc683-7599-4539-92a6-b8e8503591c8 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] HTTP exception thrown: Volume test could not be found.
2016-06-23 23:08:25.363 25887 INFO cinder.api.openstack.wsgi [req-9d2bc683-7599-4539-92a6-b8e8503591c8 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] http://10.0.4.1:8776/v2/2985b96e27f048cd92a18db0dd03aa23/volumes/test returned with HTTP 404
2016-06-23 23:08:25.366 25887 INFO eventlet.wsgi.server [req-9d2bc683-7599-4539-92a6-b8e8503591c8 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] 10.0.4.1 "GET /v2/2985b96e27f048cd92a18db0dd03aa23/volumes/test HTTP/1.1" status: 404 len: 418 time: 0.8508980
----- s n i p -----

and yet:

----- s n i p -----
bladeA01b:~# openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| 8dbd3b7c-e36b-433f-a3b0-d701f63f63c2 | test | available | 5 | |
+--------------------------------------+--------------+-----------+------+-------------+
----- s n i p -----

That's with the "admin" user+password etc though..
--
There are no dumb questions,
unless a customer is asking them.
- Unknown

responded Jun 23, 2016 by Turbo_Fredriksson (8,980 points)   7 13 18
0 votes

Sorry for this long mail - I think the original problem is now fixed.
I'm including the whole work/test log for posterity, in case someone
has anything to comment on it, or I've missed something..

After six, seven hours of debugging and modifying the code to output
more information, I've found out this:

When running

openstack server create --volume test --flavor m1.tiny \
--nic net-id=2bb7b8e2-188f-4e46-bf4d-ef5ec81ddb4d --wait test

I get this on the Compute:

----- s n i p -----
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [req-37cfbeac-324c-4077-8056-2efc62e80b3f 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: efe13dfa-79be-49a1-8113-04830463b545] Instance failed block device setup
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] Traceback (most recent call last):
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1754, in _prep_block_device
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] wait_func=self._await_block_device_map_created)
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] File "/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 518, in attach_block_devices
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] map(_log_and_attach, block_device_mapping)
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] File "/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 516, in _log_and_attach
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] bdm.attach(*attach_args, **attach_kwargs)
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] File "/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 54, in wrapped
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] ret_val = method(obj, context, *args, **kwargs)
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] File "/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 261, in attach
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] connector = virt_driver.get_volume_connector(instance)
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] File "/usr/lib/python2.7/dist-packages/nova/virt/driver.py", line 1375, in get_volume_connector
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] raise NotImplementedError()
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] NotImplementedError
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545]
2016-06-23 23:27:34.742 10716 DEBUG keystoneauth.session [req-37cfbeac-324c-4077-8056-2efc62e80b3f 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] RESP: [200] Content-Type: application/json Content-Length: 639 X-Openstack-Request-Id: req-701bcf9e-e2bb-41d1-b49c-935da83f6653 Date: Thu, 23 Jun
----- s n i p -----

Following that backwards, I come to attach_block_devices():

----- s n i p -----
[..]
    else:
        LOG.info(LI('Booting with blank volume at %(mountpoint)s'),
                 {'mountpoint': bdm['mount_device']},
                 context=context, instance=instance)

    bdm.attach(*attach_args, **attach_kwargs)       (L516)

[..]
    connector = virt_driver.get_volume_connector(instance)      (L261)
[..]
    def get_volume_connector(self, instance):
        """Get connector information for the instance for attaching to volumes.

        Connector information is a dictionary representing the ip of the
        machine that will be making the connection, the name of the iscsi
        initiator and the hostname of the machine as follows::

            {
                'ip': ip,
                'initiator': initiator,
                'host': hostname
            }
        """
        raise NotImplementedError()

----- s n i p -----

So I think I've found a bug. It seems you can't attach an empty volume like I've been
trying to for days now. OR I've missed some configuration somewhere..
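The failure mode is easy to reproduce in miniature: the base ComputeDriver raises NotImplementedError from get_volume_connector(), so any virt driver that doesn't override it (apparently including this nova-docker driver) can't support volume attach. A stripped-down sketch; the classes below are stand-ins for the real nova code, not the real thing:

```python
# Stand-ins mimicking the inheritance pattern seen in the traceback above.

class ComputeDriver:
    """Plays the role of nova.virt.driver.ComputeDriver."""
    def get_volume_connector(self, instance):
        # The base class deliberately leaves this to subclasses.
        raise NotImplementedError()

class DockerDriver(ComputeDriver):
    """Plays the role of the nova-docker driver: no override here."""
    pass

def attach_volume(virt_driver, instance):
    # Mirrors the call at nova/virt/block_device.py line 261.
    return virt_driver.get_volume_connector(instance)

try:
    attach_volume(DockerDriver(), instance="test")
except NotImplementedError:
    print("boot-from-volume not supported by this virt driver")
```

So it's not that the volume is broken; the hypervisor driver simply never implemented the connector hook.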

However, when running:

openstack server create --image cirros --flavor m1.tiny \
--nic net-id=2bb7b8e2-188f-4e46-bf4d-ef5ec81ddb4d --wait test

I on the other hand get:

----- s n i p -----
2016-06-23 23:39:09.326 10716 DEBUG nova.compute.manager [req-bb159c37-3033-4a36-a21a-c57acb487de2 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python2.7/dist-packages/nova/compute/manager.py:2059
2016-06-23 23:39:09.330 10716 DEBUG novadocker.virt.docker.driver [req-bb159c37-3033-4a36-a21a-c57acb487de2 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] Image name "cirros" does not exist, fetching it... _pull_missing_image /usr/local/lib/python2.7/dist-packages/novadocker/virt/docker/driver.py:384
2016-06-23 23:39:09.332 10716 DEBUG novadocker.virt.docker.driver [req-bb159c37-3033-4a36-a21a-c57acb487de2 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] Fetching image with id d4d913c3-21f3-4e7d-932c-2cb35c8131e8 from glance _pull_missing_image /usr/local/lib/python2.7/dist-packages/novadocker/virt/docker/driver.py:415
2016-06-23 23:39:09.392 10716 ERROR nova.image.glance [req-bb159c37-3033-4a36-a21a-c57acb487de2 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] Error contacting glance server 'http://10.0.4.3:9292' for 'data', done trying.
2016-06-23 23:39:09.392 10716 ERROR nova.image.glance Traceback (most recent call last):
2016-06-23 23:39:09.392 10716 ERROR nova.image.glance File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 250, in call
2016-06-23 23:39:09.392 10716 ERROR nova.image.glance result = getattr(client.images, method)(*args, **kwargs)
2016-06-23 23:39:09.392 10716 ERROR nova.image.glance File "/usr/lib/python2.7/dist-packages/glanceclient/v1/images.py", line 148, in data
2016-06-23 23:39:09.392 10716 ERROR nova.image.glance % urlparse.quote(str(image_id)))
2016-06-23 23:39:09.392 10716 ERROR nova.image.glance File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 275, in get
2016-06-23 23:39:09.392 10716 ERROR nova.image.glance return self._request('GET', url, **kwargs)
2016-06-23 23:39:09.392 10716 ERROR nova.image.glance File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 256, in _request
2016-06-23 23:39:09.392 10716 ERROR nova.image.glance raise exc.CommunicationError(message=message)
2016-06-23 23:39:09.392 10716 ERROR nova.image.glance CommunicationError: Error finding address for http://10.0.4.3:9292/v1/images/d4d913c3-21f3-4e7d-932c-2cb35c8131e8: HTTPConnectionPool(host='10.0.4.3', port=9292): Max retries exceeded with url: /v1/images/d4d913c3-21f3-4e7d-932c-2cb35c8131e8 (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fc32ac35110>: Failed to establish a new connection: [Errno 111] ECONNREFUSED',))
2016-06-23 23:39:09.392 10716 ERROR nova.image.glance
2016-06-23 23:39:09.395 10716 WARNING novadocker.virt.docker.driver [req-bb159c37-3033-4a36-a21a-c57acb487de2 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] Cannot load repository file: Connection to glance host http://10.0.4.3:9292 failed: Error finding address for http://10.0.4.3:9292/v1/images/d4d913c3-21f3-4e7d-932c-2cb35c8131e8: HTTPConnectionPool(host='10.0.4.3', port=9292): Max retries exceeded with url: /v1/images/d4d913c3-21f3-4e7d-932c-2cb35c8131e8 (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fc32ac35110>: Failed to establish a new connection: [Errno 111] ECONNREFUSED',))
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] Traceback (most recent call last):
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] File "/usr/local/lib/python2.7/dist-packages/novadocker/virt/docker/driver.py", line 417, in _pull_missing_image
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] instance['user_id'], instance['project_id'])
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] File "/usr/lib/python2.7/dist-packages/nova/virt/images.py", line 110, in fetch
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] IMAGE_API.download(context, image_href, dest_path=path)
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] File "/usr/lib/python2.7/dist-packages/nova/image/api.py", line 182, in download
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] dst_path=dest_path)
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 383, in download
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] _reraise_translated_image_exception(image_id)
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 682, in _reraise_translated_image_exception
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] six.reraise(new_exc, None, exc_trace)
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 381, in download
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] image_chunks = self._client.call(context, 1, 'data', image_id)
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 269, in call
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] server=str(self.api_server), reason=six.text_type(e))
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] GlanceConnectionFailed: Connection to glance host http://10.0.4.3:9292 failed: Error finding address for http://10.0.4.3:9292/v1/images/d4d913c3-21f3-4e7d-932c-2cb35c8131e8: HTTPConnectionPool(host='10.0.4.3', port=9292): Max retries exceeded with url: /v1/images/d4d913c3-21f3-4e7d-932c-2cb35c8131e8 (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fc32ac35110>: Failed to establish a new connection: [Errno 111] ECONNREFUSED',))
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777]
2016-06-23 23:39:09.398 10716 ERROR nova.compute.manager [req-bb159c37-3033-4a36-a21a-c57acb487de2 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] Instance failed to spawn
----- s n i p -----

This was because of a missing "api_servers" in nova.conf. Setting that (using trial
and error) to:

api_servers = http://control:9292

made it work. The comment in the config file says:

These should be fully qualified urls of the form "scheme://hostname:port[/path]"

However, with a path it won't work. Granted, 'endpoint list' DOES say
"http://10.0.4.1:9292", so I guess the path is optional and for special configurations.
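For reference, this is where the option ended up in my nova.conf on the compute node (as far as I understand the config layout, it belongs in the [glance] section):

```ini
[glance]
# scheme://host:port only -- adding a path broke it for me
api_servers = http://control:9292
```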

Now it seems to go further:

----- s n i p -----
2016-06-24 00:03:09.622 14217 DEBUG novadocker.virt.docker.driver [req-867eddb0-5367-425a-8189-cdf3e5293855 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] Loading repository file into docker cirros _pull_missing_image /usr/local/lib/python2.7/dist-packages/novadocker/virt/docker/driver.py:419
2016-06-24 00:03:10.100 14217 DEBUG keystoneauth.session [req-867eddb0-5367-425a-8189-cdf3e5293855 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] RESP: [201] Content-Type: application/json Content-Length: 871 X-Openstack-Request-Id: req-17ccf052-d52c-401f-9cc7-3a0819e4c3fa Date: Thu, 23 Jun 2016 23:03:08 GMT Connection: keep-alive
RESP BODY: {"port": {"status": "DOWN", "binding:host_id": "bladeA03b", "description": "", "allowed_address_pairs": [], "extra_dhcp_opts": [], "updated_at": "2016-06-23T23:03:08", "device_owner": "compute:nova", "port_security_enabled": true, "binding:profile": {}, "fixed_ips": [{"subnet_id": "6ef5d993-2796-4adf-a724-eae5f5d1cc53", "ip_address": "10.99.0.40"}], "id": "db8bec43-9ba0-4276-9623-1a17f5857a06", "security_groups": ["c39cbc1f-99cf-4c1a-98b2-ec4f56481ccf"], "device_id": "3dd1ff63-24ea-435d-a036-e99a42ebf1b5", "name": "", "admin_state_up": true, "network_id": "2bb7b8e2-188f-4e46-bf4d-ef5ec81ddb4d", "dns_name": null, "binding:vif_details": {"port_filter": true, "ovs_hybrid_plug": true}, "binding:vnic_type": "normal", "binding:vif_type": "ovs", "tenant_id": "2985b96e27f048cd92a18db0dd03aa23", "mac_address": "fa:16:3e:64:4e:18", "created_at": "2016-06-23T23:03:08"}}
http_log_response /usr/lib/python2.7/dist-packages/keystoneauth1/session.py:277
2016-06-24 00:03:10.101 14217 DEBUG nova.network.neutronv2.api [req-867eddb0-5367-425a-8189-cdf3e5293855 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: 3dd1ff63-24ea-435d-a036-e99a42ebf1b5] Successfully created port: db8bec43-9ba0-4276-9623-1a17f5857a06 _create_port /usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py:261
2016-06-24 00:03:10.102 14217 DEBUG oslo_concurrency.lockutils [req-867eddb0-5367-425a-8189-cdf3e5293855 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] Acquired semaphore "refresh_cache-3dd1ff63-24ea-435d-a036-e99a42ebf1b5" lock /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:215
2016-06-24 00:03:10.103 14217 DEBUG nova.network.neutronv2.api [req-867eddb0-5367-425a-8189-cdf3e5293855 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: 3dd1ff63-24ea-435d-a036-e99a42ebf1b5] _get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py:910
2016-06-24 00:03:10.121 14217 DEBUG keystoneauth.session [req-867eddb0-5367-425a-8189-cdf3e5293855 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] REQ: curl -g -i -X GET http://10.0.4.1:9696/v2.0/ports.json?tenant_id=2985b96e27f048cd92a18db0dd03aa23&device_id=3dd1ff63-24ea-435d-a036-e99a42ebf1b5 -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}262b2c831c6ea94c09cd20bb956858e6c71671b2" http_log_request /usr/lib/python2.7/dist-packages/keystoneauth1/session.py:248
2016-06-24 00:03:10.169 14217 WARNING novadocker.virt.docker.driver [req-867eddb0-5367-425a-8189-cdf3e5293855 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: 3dd1ff63-24ea-435d-a036-e99a42ebf1b5] Cannot load repository file: ('Connection aborted.', error(32, 'Broken pipe'))
----- s n i p -----

And now I'm stuck again. Looking at the information of the instance, it now says:

No valid host was found. There are not enough hosts available.

Although:

----- s n i p -----
bladeA01b:~# openstack endpoint list | grep nova
| a5e36f0b933c4e4da7a5737d00e7230b | europe-london | nova | compute | True | internal | http://10.0.4.1:8774/v2/%(tenant_id)s |
| b7a8e4623fbd456fb008527f9c51995f | europe-london | nova | compute | True | admin | http://10.0.4.1:8774/v2/%(tenant_id)s |
| c3b5eda8124b4e4186f919a7944d1290 | europe-london | nova | compute | True | public | http://10.0.4.1:8774/v2/%(tenant_id)s |
----- s n i p -----

responded Jun 23, 2016 by Turbo_Fredriksson (8,980 points)   7 13 18
0 votes

On Jun 24, 2016, at 12:32 AM, Turbo Fredriksson wrote:

And now I'm stuck again. Looking at the information of the instance, it now says:

No valid host was found. There are not enough hosts available.

Looking closer at the Controllers logs, I see:

----- s n i p -----
2016-06-24 00:29:38.835 20278 INFO glance.registry.api.v1.images [req-55daa3fe-c192-41a4-a5c0-0b6e076a4bcf 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] Image cirros not found
2016-06-24 00:29:38.837 20278 INFO eventlet.wsgi.server [req-55daa3fe-c192-41a4-a5c0-0b6e076a4bcf 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] 127.0.0.1 - - [24/Jun/2016 00:29:38] "GET /images/cirros HTTP/1.1" 404 242 0.251989
2016-06-24 00:29:38.852 20307 ERROR glance.registry.client.v1.client [req-55daa3fe-c192-41a4-a5c0-0b6e076a4bcf 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] Registry client request GET /images/cirros raised NotFound
----- s n i p -----
--
System administrators motto:
You're either invisible or in trouble.
- Unknown

responded Jun 23, 2016 by Turbo_Fredriksson (8,980 points)   7 13 18
...