
[Openstack] Cinder - no availability zones

0 votes

I'm back to trying to get Cinder volumes to work. I think
I've nailed it down to the fact that there are no availability
zones:

bladeA01:~# cinder availability-zone-list
+------+--------+
| Name | Status |
+------+--------+
+------+--------+

However, after two hours of googling, no one seems to know how
to create a zone in Cinder.

All "they" say is to add the zone in cinder.conf under
DEFAULT/storageavailabilityzone and DEFAULT/defaultavailabilityzone.

I did that weeks ago, but still nothing (although previously I
didn't explicitly look for it):

[DEFAULT]
storage_availability_zone = nova
default_availability_zone = nova
allow_availability_zone_fallback = true
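
After editing cinder.conf, the services need a restart to pick the
options up; roughly, assuming Debian-style init scripts:

service cinder-scheduler restart
service cinder-volume restart
cinder availability-zone-list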

"They" also mention host aggregates:

bladeA01:~# openstack aggregate list
+----+-------+-------------------+
| ID | Name | Availability Zone |
+----+-------+-------------------+
| 6 | infra | nova |
| 7 | devel | nova |
| 8 | build | nova |
| 9 | tests | nova |
+----+-------+-------------------+

I'm not sure what kind of availability zones these are, but
I have something (I'm guessing Nova zones):

bladeA01:~# openstack availability zone list
+-----------+-------------+
| Zone Name | Zone Status |
+-----------+-------------+
| internal | available |
| nova | available |
| nova | available |
| nova | available |
+-----------+-------------+
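
Depending on the client version, the openstack client can split these
per service, which would at least show whether Cinder reports any zone
at all; I believe the flags are:

openstack availability zone list --compute   # Nova zones
openstack availability zone list --volume    # Cinder zones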
--
To realize one's own significance is like getting a mite
to understand that he is only visible under a microscope
- Arne Anka


asked Jul 12, 2016 in openstack by Turbo_Fredriksson (8,980 points)

9 Responses

0 votes

Can you send this output?

cinder service-list

Also, when you create a volume, what happens?
Is there any error?

responded Jul 12, 2016 by Brent_Troge (2,440 points)
0 votes

On Jul 12, 2016, at 9:32 PM, Brent Troge wrote:

cinder service-list

bladeA01:~# cinder service-list
+------------------+--------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+--------------+------+---------+-------+----------------------------+-----------------+
| cinder-backup | bladeA01 | nova | enabled | up | 2016-07-12T20:32:44.000000 | - |
| cinder-scheduler | bladeA01 | nova | enabled | up | 2016-07-12T20:32:43.000000 | - |
| cinder-volume | bladeA01@lvm | nova | enabled | up | 2016-07-12T20:32:39.000000 | - |
| cinder-volume | bladeA01@nfs | nova | enabled | up | 2016-07-12T20:32:39.000000 | - |
+------------------+--------------+------+---------+-------+----------------------------+-----------------+

Also, when you create a volume, what happens?
Is there any error?

If I create a volume in Horizon, it just says "Error".

If I create one from the shell:

cinder create --name test1 --volume-type lvm \
--availability-zone nova 10
[..]
bladeA01:~# cinder list
+--------------------------------------+----------+-------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+----------+-------+------+-------------+----------+-------------+
| 88087aa1-208c-44f1-bf33-0cda7b757274 | error | test1 | 10 | lvm | false | |
| e8f50273-f62e-4cad-8066-725067e062f8 | deleting | test5 | 0 | lvm | true | |
+--------------------------------------+----------+-------+------+-------------+----------+-------------+

The logs say:

==> /var/log/cinder/cinder-scheduler.log <==
2016-07-12 21:40:19.868 15552 DEBUG cinder.scheduler.base_filter [req-6aa3569a-f6d4-4131-a2ad-7ed9feb83791 4b0e25c70d2b4ad6ba4c50250f2f0b0b 04ee0e71babe4fd7aa16c3f64a8fca89 - - -] Starting with 0 host(s) get_filtered_objects /usr/lib/python2.7/dist-packages/cinder/scheduler/base_filter.py:79
2016-07-12 21:40:19.869 15552 INFO cinder.scheduler.base_filter [req-6aa3569a-f6d4-4131-a2ad-7ed9feb83791 4b0e25c70d2b4ad6ba4c50250f2f0b0b 04ee0e71babe4fd7aa16c3f64a8fca89 - - -] Filter AvailabilityZoneFilter returned 0 host(s)

Same thing if I don't use the availability-zone..
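
A quick way to pull just the scheduler's filter decisions out of that
log (the path is the one from the Debian packaging above):

grep -E 'Starting with|returned' /var/log/cinder/cinder-scheduler.log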
--
Life sucks and then you die

responded Jul 12, 2016 by Turbo_Fredriksson (8,980 points)
0 votes

This looks to be an issue with your LVM configuration.

On your volume host, do you see any errors? Look in the Cinder logs as
well as the system logs.

Can you also send your LVM backend configuration?
responded Jul 12, 2016 by Brent_Troge (2,440 points)
0 votes

Sometimes I like to stop the cinder-volume service, then run it manually:

service cinder-volume stop

Once the service is stopped, run it by hand:

cinder-volume

Then send your create command and see what errors are thrown back in the
cinder-volume terminal.
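
For example, pointing it at the stock Debian config path and (assuming
the packaged binary accepts oslo.log's flags) forcing debug output:

service cinder-volume stop
cinder-volume --config-file /etc/cinder/cinder.conf --debug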

responded Jul 12, 2016 by Brent_Troge (2,440 points)
0 votes

On Jul 12, 2016, at 10:17 PM, Brent Troge wrote:

This looks to be an issue with your LVM configuration.
On your volume host, do you see any errors?

None! It looks like everything is perfectly fine..

Can you also send your LVM backend configuration?

https://github.com/FransUrbo/openstack_bladecenter/blob/master/configs-control/etc/cinder/cinder.conf
--
God gave man both a penis and a brain,
but unfortunately not enough blood supply
to run both at the same time.
- R. Williams

responded Jul 12, 2016 by Turbo_Fredriksson (8,980 points)
0 votes

On Jul 12, 2016, at 10:27 PM, Brent Troge wrote:

send your create command, and see what errors are thrown back in the
cinder-volume terminal

It didn't say a thing about the create itself. A second or two after
the create command finished, the only output was this:

2016-07-12 22:36:39.151 16418 DEBUG oslo_service.periodic_task [req-08907182-b887-4461-8d3e-61b43ba681f0 - - - - -] Running periodic task VolumeManager._publish_service_capabilities run_periodic_tasks /usr/lib/python2.7/dist-packages/oslo_service/periodic_task.py:215
2016-07-12 22:36:39.151 16418 DEBUG cinder.manager [req-08907182-b887-4461-8d3e-61b43ba681f0 - - - - -] Notifying Schedulers of capabilities ... _publish_service_capabilities /usr/lib/python2.7/dist-packages/cinder/manager.py:168
2016-07-12 22:36:39.153 16418 DEBUG oslo_messaging._drivers.amqpdriver [req-08907182-b887-4461-8d3e-61b43ba681f0 - - - - -] CAST unique_id: 2cf38a9df5154068bde11d66d847af22 FANOUT topic 'cinder-scheduler' _send /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:443
2016-07-12 22:36:39.156 16418 DEBUG oslo_service.periodic_task [req-08907182-b887-4461-8d3e-61b43ba681f0 - - - - -] Running periodic task VolumeManager._report_driver_status run_periodic_tasks /usr/lib/python2.7/dist-packages/oslo_service/periodic_task.py:215
2016-07-12 22:36:39.156 16418 DEBUG cinder.volume.drivers.lvm [req-08907182-b887-4461-8d3e-61b43ba681f0 - - - - -] Updating volume stats _update_volume_stats /usr/lib/python2.7/dist-packages/cinder/volume/drivers/lvm.py:189
2016-07-12 22:36:39.157 16418 DEBUG oslo_concurrency.processutils [req-08907182-b887-4461-8d3e-61b43ba681f0 - - - - -] Running cmd (subprocess): env LC_ALL=C vgs --noheadings --unit=g -o name,size,free,lv_count,uuid --separator : --nosuffix blade_center execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:344
2016-07-12 22:36:39.178 16418 DEBUG oslo_concurrency.processutils [req-08907182-b887-4461-8d3e-61b43ba681f0 - - - - -] CMD "env LC_ALL=C vgs --noheadings --unit=g -o name,size,free,lv_count,uuid --separator : --nosuffix blade_center" returned: 0 in 0.021s execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:374
2016-07-12 22:36:39.179 16418 DEBUG oslo_concurrency.processutils [req-08907182-b887-4461-8d3e-61b43ba681f0 - - - - -] Running cmd (subprocess): env LC_ALL=C lvs --noheadings --unit=g -o vg_name,name,size --nosuffix blade_center execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:344
2016-07-12 22:36:39.199 16418 DEBUG oslo_concurrency.processutils [req-08907182-b887-4461-8d3e-61b43ba681f0 - - - - -] CMD "env LC_ALL=C lvs --noheadings --unit=g -o vg_name,name,size --nosuffix blade_center" returned: 0 in 0.021s execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:374
^C2016-07-12 22:36:52.631 16411 INFO oslo_service.service [-] Caught SIGINT signal, instantaneous exiting
2016-07-12 22:36:52.631 16423 INFO oslo_service.service [-] Caught SIGINT signal, instantaneous exiting
2016-07-12 22:36:52.631 16418 INFO oslo_service.service [-] Caught SIGINT signal, instantaneous exiting
--
There are no dumb questions,
unless a customer is asking them.
- Unknown

responded Jul 12, 2016 by Turbo_Fredriksson (8,980 points)
0 votes

On Jul 12, 2016, at 11:33 PM, Brent Troge wrote:

From your volume server, send the output of this:

vgscan

bladeA01:~# vgscan
Reading volume groups from cache.
Found volume group "blade_center" using metadata type lvm2
--
Build a man a fire, and he will be warm for the night.
Set a man on fire and he will be warm for the rest of his life.

responded Jul 12, 2016 by Turbo_Fredriksson (8,980 points)
0 votes

Can you also run 'pvscan' on the volume server and send that output?

Does your scheduler even inventory the volume server?

Do you see any references to 'free_capacity' in your scheduler logs?
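
That is, something along these lines (the log path is the same Debian
one as earlier in the thread):

pvscan
grep free_capacity /var/log/cinder/cinder-scheduler.log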

responded Jul 12, 2016 by Brent_Troge (2,440 points)
0 votes

On Jul 12, 2016, at 11:54 PM, Brent Troge wrote:

Can you also run 'pvscan' on the volume server and send that output?

Does your scheduler even inventory the volume server?

Do you see any references to 'free_capacity' in your scheduler logs?

With a lot of trial and error (and really reading every
single character of the debug/log output from a restart and
a create), it might have been this:

2016-07-12 23:44:16.711 9199 DEBUG cinder.scheduler.filters.capabilities_filter [req-ae7c0c47-9d08-4d81-a94b-56a4feaf2922 4b0e25c70d2b4ad6ba4c50250f2f0b0b 04ee0e71babe4fd7aa16c3f64a8fca89 - - -] extra_spec requirement 'LVM_iSCSI' does not match 'LVM' _satisfies_extra_specs /usr/lib/python2.7/dist-packages/cinder/scheduler/filters/capabilities_filter.py:59

The "LVMiSCSI" was part of the '[zol]' driver and there was no
"volume
backend_name" set for the '[lvm]' one..
https://github.com/FransUrbo/openstack_bladecenter/commit/63fe97399bdd0a49fcf30dcf670cf40e25b4306c
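
For reference, the scheduler's CapabilitiesFilter matches the volume
type's volume_backend_name extra spec against the volume_backend_name
each backend section reports, so the two have to agree; a minimal
sketch (illustrative values, not my exact config):

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = blade_center
volume_backend_name = LVM

# and the matching extra spec on the 'lvm' volume type:
cinder type-key lvm set volume_backend_name=LVM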

After fixing the config file, I can now create a dummy volume and a
volume from an image, AND the expected "nova" availability zone shows
up in Horizon!

However, I now (again!) get "Block Device Mapping is Invalid." when
trying to create an instance with a volume from an image. The volume
is created, but then deleted, and the instance create fails.

I'm going to continue this tomorrow and read the logs for that
more closely.
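
If anyone has seen that one, pointers are welcome; my plan is to start
by grepping the Nova logs for the mapping failure (paths assume the
same Debian packaging as above):

grep -i "block device mapping" /var/log/nova/nova-api.log \
    /var/log/nova/nova-compute.log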

Thanx for the help!
--
To realize one's own significance is like getting a mite
to understand that he is only visible under a microscope
- Arne Anka

responded Jul 12, 2016 by Turbo_Fredriksson (8,980 points)