Questions in openstack-operators


[Openstack-operators] I think there is a missing dependency in openstack-nova-spicehtml5proxy

I am running Centos 6.5

I ran into a problem with openstack-nova-spicehtml5proxy. It wouldn't
start. I did some investigating and discovered that

root@controller1-prod.controller1-prod:/var/log# cat /tmp/spicehtml5proxy.txt
WARNING: no 'numpy' module, HyBi protocol will be slower
Can not find spice html/js/css files at /usr/share/spice-html5.
root@controller1-prod.controller1-prod:/var/log#

I got this by modifying /etc/init.d/openstack-nova-spicehtml5proxy, changing

daemon --user nova --pidfile $pidfile "$exec --logfile $logfile &>/dev/null & echo \$! > $pidfile"

to

daemon --user nova --pidfile $pidfile "$exec --logfile $logfile &>/tmp/spicehtml5proxy.txt & echo \$! > $pidfile"

which I think ought to be a permanent change, because there is useful
information coming out of stdout. Alternatively:

daemon --user nova --pidfile $pidfile "$exec --logfile $logfile &>>$logfile & echo \$! > $pidfile"

To resolve the underlying problem, I had to install
spice-html5-0.1.4-1.el6.noarch, which is easy to do:

yum install spice-html5-0.1.4-1.el6.noarch
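
As a quick sanity check (my own sketch, not from the package docs), you can
confirm the package populates the directory the proxy was complaining about;
exact file names may vary by spice-html5 version:

rpm -ql spice-html5 | grep '^/usr/share/spice-html5' | head
ls -l /usr/share/spice-html5/spice.html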

I then started the openstack-nova-spicehtml5proxy service:

service openstack-nova-spicehtml5proxy start
I think the problem is that there is a missing dependency in package
openstack-nova-console-2014.1.1-2.el6.noarch

root@controller1-prod.controller1-prod:~# yum deplist openstack-nova-console-2014.1.1-2.el6.noarch
Loaded plugins: fastestmirror, priorities, security
Repository sl-release-el6 is listed more than once in the configuration
Loading mirror speeds from cached hostfile
* epel: mirror.pnl.gov
14079 packages excluded due to repository priority protections
Finding dependencies:
package: openstack-nova-console.noarch 2014.1.1-2.el6
  dependency: openstack-nova-common = 2014.1.1-2.el6
   provider: openstack-nova-common.noarch 2014.1.1-2.el6
  dependency: /usr/bin/python
   provider: python.x86_64 2.6.6-52.el6
  dependency: /bin/sh
   provider: bash.x86_64 4.1.2-15.el6_4
  dependency: python-websockify
   provider: python-websockify.noarch 0.5.1-1.el6
root@controller1-prod.controller1-prod:~#

spice-html5-0.1.4-1.el6.noarch should be in that list.

I'm very much an OpenStack newbie, so it may be that I am not going
through proper channels. I appreciate your patience with me.

--
Jeff Silverman
Systems Engineer
(253) 459-2318 (c)

[Openstack-operators] keystone is throwing Authorization Failed: 'module' object is not callable errors

I did something to keystone, I'm not sure what.

root@controller1-prod.controller1-prod:~# keystone role-list
Authorization Failed: 'module' object is not callable
root@controller1-prod.controller1-prod:~#
root@controller1-prod.controller1-prod:~# keystone role-get admin
Authorization Failed: 'module' object is not callable
root@controller1-prod.controller1-prod:~#

I have envars OS_USERNAME, OS_PASSWORD, OS_TENANT defined. OS_AUTH_URL has
a URL:
root@controller1-prod.controller1-prod:~# curl -i http://controller1-prod.sea.opencandy.com:35357/v2.0
HTTP/1.1 200 OK
Vary: X-Auth-Token
Content-Type: application/json
Date: Fri, 01 Aug 2014 21:10:47 GMT
Transfer-Encoding: chunked

{"version": {"status": "stable", "updated": "2012-10-13T17:42:56Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}, {"base": "application/xml", "type": "application/vnd.openstack.identity-v2.0+xml"}], "id": "v2.0", "links": [{"href": "http://controller1-prod.sea.opencandy.com:35357/v2.0/", "rel": "self"}, {"href": "http://docs.openstack.org/api/openstack-identity-service/2.0/content/", "type": "text/html", "rel": "describedby"}, {"href": "http://docs.openstack.org/api/openstack-identity-service/2.0/identity-dev-guide-2.0.pdf", "type": "application/pdf", "rel": "describedby"}]}}root@controller1-prod.controller1-prod:~#

I have been poking at keystone with pdb to try to find the point where the
exception is raised, with little success. Maybe I am incompetent as a
Python programmer.

I have discovered that keystoneclient does a call to the identity server to
get a token - I think. I tried to simulate the call using curl.

root@controller1-prod.controller1-prod:~# curl -i http://controller1-prod.sea.opencandy.com:35357/v2.0/tokens
HTTP/1.1 404 Not Found
Vary: X-Auth-Token
Content-Type: application/json
Date: Fri, 01 Aug 2014 20:26:00 GMT
Transfer-Encoding: chunked

{"error": {"message": "The resource could not be found.", "code": 404,
"title": "Not Found"}}

One of the things I find frustrating is that the code assumes that any error
is an authorization problem, which means that any bug is handled and doesn't
percolate up the stack. There seems to be no way to get the debugger to
halt on a handled exception. In client.py there is

except Exception as e:
    raise exceptions.AuthorizationFailure("Authorization Failed: %s" % e)

which makes debugging a challenge.

I think that the exception is in the call to a.get_auth_ref(self.session).
I think that the problem is that a, a Password object, is not callable.
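
(As an aside, and purely as an illustration of the error class rather than
of the keystone code itself, this TypeError is what Python raises when a
module ends up being called as if it were a function or class:)

python -c 'import json; json("{}")'
# TypeError: 'module' object is not callable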

(Pdb) print callable(a)
False
(Pdb)
(Pdb) list
168                                      token=token,
169                                      trust_id=trust_id,
170                                      tenant_id=project_id or tenant_id,
171                                      tenant_name=project_name or tenant_name)
172
173  ->         return a.get_auth_ref(self.session)
174         except (exceptions.AuthorizationFailure, exceptions.Unauthorized):
175             _logger.debug("Authorization Failed.")
176             raise
177         except exceptions.EndpointNotFound:
178             msg = 'There was no suitable authentication url for this request'

(Pdb) pp vars(a)
{'auth_ref': None,
 'auth_url': 'http://controller1-prod.sea.opencandy.com:35357/v2.0',
 'password': "XXXXXXXXXXX",
 'tenant_id': None,
 'tenant_name': 'admin',
 'token': None,
 'trust_id': None,
 'username': 'admin'}
(Pdb)

I instrumented the code to see if I could get a better handle on the
exception getting thrown:

(Pdb) list 165,184
165         a = v2_auth.Auth.factory(auth_url,
166                                  username=username,
167                                  password=password,
168                                  token=token,
169                                  trust_id=trust_id,
170                                  tenant_id=project_id or tenant_id,
171                                  tenant_name=project_name or tenant_name)
172
173         try:
174             return a.get_auth_ref(self.session)
175         except Exception as e:
176             print "Hit an exception %s" % e
177             pdb.set_trace()
178  ->         raise
179         except (exceptions.AuthorizationFailure, exceptions.Unauthorized):
180             _logger.debug("Authorization Failed.")
181             raise
182         except exceptions.EndpointNotFound:
183             msg = 'There was no suitable authentication url for this request'
184             raise exceptions.AuthorizationFailure(msg)

(Pdb) c
Hit an exception 'module' object is not callable
> /usr/lib/python2.6/site-packages/keystoneclient/v2_0/client.py(178)get_raw_token_from_identity_service()
-> raise

Not sure what to do next.

Jeff

--
Jeff Silverman
Systems Engineer
(253) 459-2318 (c)

The keystone client does indeed hide failures from you and wrap them, which makes it annoying to debug; see https://bugs.launchpad.net/python-keystoneclient/+bug/1210625. If you use --debug, however, you can see the exact call you are attempting and how to repro it with curl. To get a token you need to POST; I figure the default action for curl is a GET, which may be why you are having issues with your curl command.

Here is a curl request to get a token.

keystone --debug token-get
DEBUG:keystoneclient.session:REQ: curl -i -X POST http://example.com:5000/v2.0/tokens -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-keystoneclient" -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "myPassword"}}}'

More debugging hints:

If you still have problems the server-side logs are generally way more useful. You can enable debug in the config file and then run keystone by hand (after stopping it) by doing /usr/bin/keystone-all. That will generally provide better feedback.
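
A rough sketch of that, assuming the stock RDO/CentOS service name and
config path:

service openstack-keystone stop
# set debug = True (and optionally verbose = True) in /etc/keystone/keystone.conf
sudo -u keystone /usr/bin/keystone-all
# then re-run the failing client command and watch the console output
# and /var/log/keystone/keystone.log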

Also, :35357 is the service endpoint, for which I usually use a service token; is there a reason you're using that and not the standard :5000?
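
For example, a sketch using the host name from the mail above (the public
API normally listens on 5000):

export OS_AUTH_URL=http://controller1-prod.sea.opencandy.com:5000/v2.0
keystone --debug token-get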

From: Jeff Silverman
Date: Friday, August 1, 2014 3:35 PM
To: "openstack-operators at lists.openstack.org"
Subject: [Openstack-operators] keystone is throwing Authorization Failed: 'module' object is not callable errors


[Openstack-operators] OpenStack Community Weekly Newsletter (July 25 - Aug 1)

  Brace yourself, DevStack Ceph is here!
  <http://techs.enovance.com/6572/brace-yourself-devstack-ceph-is-here>

It's already a legend: after 7 months and 42 (forty-two) patch sets,
Sebastien Han's patch https://review.openstack.org/#/c/65113/ got merged
into DevStack. The patch bootstraps a Ceph cluster and then configures
the OpenStack services Glance, Cinder, Cinder backup and Nova. A toast
to Sebastien's persistence and to all DevStack maintainers who provided
help.

  Coding all summer long in OpenStack
  <http://opensource.com/business/14/7/coding-all-summer-long-openstack>

The end of Google Summer of Code
https://www.google-melange.com/gsoc/homepage/google/gsoc2014 (GSoC) is
near, and intern Victoria Martinez de la Cruz shares her experience as
an OpenStack intern:
http://opensource.com/business/14/7/coding-all-summer-long-openstack

  Keystone team looking for feedback

The Keystone team is looking for feedback from the community on what
type of Keystone token is being used in your OpenStack deployments. This
is to help us understand the use of the different providers and, where
possible, the reasoning behind why that token provider was chosen.
Please respond to the survey:
https://www.surveymonkey.com/s/NZNDH3M

  Third Party CI group meeting summary:

At this week's meeting the Third-Party group continued to discuss a new
testing terminology proposal and a proposal for recheck syntax.
There was also a summary review of the proposed initial draft for a
sharing of best practices and a good discussion on using templates for
configuration files. Anyone deploying a third-party test system or
interested in easing third-party involvement is welcome to attend the
meetings. Minutes of ThirdParty meetings
https://wiki.openstack.org/wiki/Meetings/ThirdParty are carefully logged.

The Road To Paris 2014 -- Deadlines and Resources

During the Paris Summit there will be a working session for the Women of
OpenStack to frame up more defined goals and line out a blueprint for
the group moving forward. We encourage all women in the community to
complete this very short survey
https://www.surveymonkey.com/s/V39BL7H to provide input for the group.

Ask OpenStack https://ask.openstack.org/ is the go-to destination for
OpenStack users. Interesting questions waiting for answers:

sanu m
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person9a554f48-83bb-486a-9342-f7fec031100e
Mahalakshmi
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person4eb5f2ad-1353-4554-9243-ffbd553d6fe6

Sitaram Dontu
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person2687a11a-6ec8-4839-b654-33d7c7c7dd8e
Petteri Tammiaho
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person933d5dee-8f10-4bfa-bdb1-a1aa31f19b8d

Renee
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persone7728684-ba53-41e6-b855-41d791d9a196
Oleksii Chuprykov
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personc45bff67-ccaa-45d3-8ac5-b52023249c74

Marcus V R Nascimento
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person833f5d98-08db-4a09-a6b2-9dcef79a7c74
Liyi Meng
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person850f51c4-4cae-4817-bd1e-4d2493e2f8b8

Fausto Marzi
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personf13b0611-80d3-417b-8d63-8d29f1292caf
JC Delay
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personb11d918d-6b31-47e3-81cd-e3bd4cb8de00

Jeff Kramer
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person0c73669b-8b43-4307-90e0-1beaa9be324f
Chelsea Winfree
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person622fa055-cf1c-4f30-9b76-693483c1a35f

lisa michele marino
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person2692e588-c96c-471c-96a6-1373320b8914
Vlad Okhrimenko
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person73df8941-d347-46e0-8691-a41ad1783f77

Maria Abrahms
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persone3e65c4d-6749-4e4e-9c48-035ca5c6b452
Vitaly Gridnev
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person32ca98e9-4810-4df4-a52c-b336fad77802

Alexey Miroshkin
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person35751925-0511-4db7-9d04-4de421a5f01a
Vladyslav Drok
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person966a2444-3b6f-4ec9-b1a2-9b296cd5666f

Simeon Monov
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person545f65ff-0d83-4f5a-ae91-78b3e315177e
Pramita Gautam
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personfb8a9ae4-1d44-4a22-80f7-5b01d9264a4f

EliQiao
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personc3122726-c0bd-48ad-953b-b3c50c999938
Murali Balcha
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person6a2facf0-9c6c-444b-83ee-6b6b6b29cb96

Andreas Steil
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persona5c3298c-9931-4197-b07c-97351ce0610e
Mike Heald
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person16e9d020-8297-4a69-943c-af96c63f4e7e

Alin Gabriel Serdean
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personcfe4f095-a18f-4a0c-9dbd-9f141d124147
John Schwarz
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person4df3c8be-ee55-466d-8258-a023594bc5b1

Lin Yang
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person1be15bea-e230-40f6-b72d-bc7981c71a1d
Gergely Szalay
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person3fca39a5-e0b1-4db6-8ec4-700a866bc0bb

Geraint North
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person89d4550a-95c6-433b-89f8-e5306028d5c7
Forest Romain
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persona34ed416-2a9a-4686-94b0-12efc64064c8

Georges Dubus
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person35ad6967-3497-4500-b1e5-943aa33c6b5a
Eddie Lin
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personfeac58d6-d758-4188-ae0f-af9cc988e54a

Stephane Miller
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person8e7b2b7a-303e-4aa0-a994-0049d1153926
Tim Kelsey
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person6364f8df-a6d5-4326-8e7f-55d564d2b74f

Sunil G
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person7709adae-ddb7-4e3d-a0b3-20cedc074ae8
Sitaram Dontu
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person2687a11a-6ec8-4839-b654-33d7c7c7dd8e

Will Foster
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person039398b9-6e8e-4abf-9ebe-9b572f67709d
ryszard chojnacki
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person9bd6b927-27fe-4f8e-a61b-becfcecb6748

Walter Heck
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personf6ee577c-4281-4d27-8c7c-fc1bf02aa714
Stephane Miller
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person8e7b2b7a-303e-4aa0-a994-0049d1153926

Mithil Arun
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person6f700eb1-7c22-48cf-805f-d640686d639b
PORTE Loïc
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person126c6cd6-4d89-4941-b125-437fd1c5526d

Kieran Forde
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person9352dd7e-3669-43f3-9672-b10f8b383407
Robbie Harwood
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person1ee18507-7a75-4a26-be33-f35948080596

Sanjay Kumar Singh
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person1ea86116-4809-477d-9057-5112cd852acc
Mingyan Bao
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person85fb985a-df67-47e0-b676-105bf1c224bc

Fausto Marzi
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personf13b0611-80d3-417b-8d63-8d29f1292caf
Daniel Shirley
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person2e3006b3-6066-48d8-8922-b4f4f7ca020a

Martin Seidl
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personeaed93d3-bbb5-4206-92aa-dfa2d3b3637b

Latest Activity In Projects

Do you want to see at a glance the bugs filed and solved this week?
Latest patches submitted for review? Check out the individual project
pages on OpenStack Activity Board -- Insights
http://activity.openstack.org/data/display/OPNSTK2/.

The weekly newsletter is a way for the community to learn about all the
various activities occurring on a weekly basis. If you would like to add
content to a weekly update or have an idea about this newsletter, please
leave a comment.


[Openstack-operators] Summit Session Voting

How are folks getting the word out for their session ideas? I submitted 4
including one I'm really passionate about. Is there a rule against sharing
links to our ideas with the mailing list here? As a start-up, we don't have
hundreds of employee votes for our own company unfortunately so it looks
like creativity will be helpful!

Mahalo,
Adam

Adam Lawson

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072

Please vote for me :)

How Non profit community Can support openstack? Speaker (Asmaa Ibrahim)

https://www.openstack.org/vote-paris/Presentation/how-non-profit-community-can-support-openstack

On Sun, Aug 3, 2014 at 8:22 AM, Adam Lawson wrote:

Alright Gary. As they say, hopefully the end will justify the means!

Also, if folks aren't seeing what they're wanting to see, I would love to
hear what they have been waiting for or what they would find
interesting at future Summits!

Adam Lawson
AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072

On Sat, Aug 2, 2014 at 9:25 PM, Gary Kotton wrote:

Hi,
Feel free to share. :)
Thanks
Gary

From: Adam Lawson
Date: Saturday, August 2, 2014 at 9:02 PM
To: "openstack at lists.openstack.org", "openstack-operators at lists.openstack.org"
Subject: [Openstack] Summit Session Voting


--
Thanks,

Asmaa

[Openstack-operators] Now you know there are BIG opportunities for OpenStack for Carrier SDN/NFV, what's next...

Stackers:

At the Hong Kong and Atlanta OpenStack Summits, you have probably heard about Software Defined Networking and Network Function Virtualization, and the pivotal role that OpenStack plays in both technologies.

In the upcoming Paris OpenStack Summit, there is a new track just to cover "Telco Strategies". As an innovator in networking for both carriers and cloud service providers, Juniper Networks is also leading the SDN/NFV wave as the first networking vendor to open source its production-ready SDN/NFV platform, OpenContrail (see what Cloudwatt says about OpenContrail: http://www.lightreading.com/carrier-sdn/cloudwatt-builds-snoop-proof-cloud/d/d-id/710181). Juniper is also among the first batch of AT&T Domain 2.0 vendors.

Check out our proposals to the Paris OpenStack Summit and cast your vote for Juniper and our partner/customer sessions, and explore opportunities in cloud networking and service automation with OpenStack! Check out my blog at:

http://forums.juniper.net/t5/SDN-Era/Calling-all-Juniper-Fans-Show-your-support-and-vote-for-our/ba-p/251638

Or vote by clicking on the following links:

Beginning Level Technology Introduction Sessions:

Hands-on Workshops:

Network Function Virtualization related sessions:

Joint Session with Canonical

Joint Session with Amdocs:

Joint session with Hitachi:

Cloud Orchestration Sessions:
Joint session with Cloudwatt:

Cloud Security Sessions:

Analytics and Telemetry

OpenStack Applications:

Sessions proposed by Symantec on Contrail:

Session Proposed by TCP on Contrail:

Chloe Jian Ma
Director, SDN Product Marketing
Juniper Networks
O +1.408.936.6432
C +1.408.835.8686
mailto:ChloeMa at juniper.net

From: Asmaa Ibrahim <asmaa.ibrahim12 at gmail.com>
Date: Sunday, August 3, 2014 at 3:39 AM
To: Adam Lawson
Cc: Gary Kotton, "openstack-operators at lists.openstack.org", openstack
Subject: Re: [Openstack-operators] [Openstack] Summit Session Voting


[Openstack-operators] August 25-26 - Ops Meetup Registration Open

Anyone wanna car pool between downtown and the event? :)

https://etherpad.openstack.org/p/SAT-ops-meetup-car-pool

On 16/07/14 04:42, Anne Gentle wrote:

Hi Dan,
I'm updating the etherpad with more hotel info in a new "Where should I
stay?" section.

There's no group hotel I know of. Close to the address, there are just
two hotels, and not much to do around a renovated shopping mall. :) So
I'll recommend the Valencia on the Riverwalk or the aLoft that's a bit
closer to the Rackspace office. I'll put details and links in the 'pad.
Anne

On Mon, Jul 14, 2014 at 8:16 AM, Dan Radez wrote:

I noticed the reference to where to stay is gone,
is there any group hotel or should we just find somewhere close to the
address of the meetup?

Dan


On 07/13/2014 11:35 PM, Tom Fifield wrote:
 > Hi all,
 >
 > I want to finalise the agenda and start advertising it very soon. If
 > you're interested in running a session for the ops event, please
 >
 > 1) register
 >
http://www.eventbrite.com/e/openstack-ops-mid-cycle-meetup-tickets-12149171499
 >
 >
 > 2) edit https://etherpad.openstack.org/p/SAT-ops-meetup and add your
 > name under volunteers
 >
 >
 > Regards,
 >
 >
 > Tom
 >
 >
 > On 08/07/14 16:06, Tom Fifield wrote:
 >> If you run an OpenStack cloud, please consider joining us on
August 25 &
 >> 26 in San Antonio for the next Ops Meetup!
 >>
 >> Registration is now open. Please pass the link to other
OpenStack ops
 >> folk you know:
 >>
 >>
http://www.eventbrite.com/e/openstack-ops-mid-cycle-meetup-tickets-12149171499
 >>
 >>
 >>
 >>
 >> ***Please register by July 29th***
 >>
 >>
 >> The OpenStack Ops Mid-Cycle Meetup will provide a chance to continue
 >> conversations and dive into working groups initiated at the
first Ops
 >> Meetup in March 2014, and followed by the Ops Meetup tracks
during the
 >> Atlanta Summit May 2014. It is intended to be a forum for people
who are
 >> currently running OpenStack clouds to congregate, swap best
practices,
 >> share ideas and give feedback. The format will primarily consist of
 >> round table / working groups / discussion sessions, with only a
small
 >> number of presentations. The event has the following goals:
 >>
 >>      Gather feedback on the issues that come up in running
OpenStack and
 >> work to communicate this throughout the community
 >>      Create a forum in which to share best practices and
architectures
 >> between interested parties
 >>      Increase constructive, proactive involvement from those running
 >> clouds
 >>
 >> More details to come on the agenda, but you can see initial
input and
 >> notes at: https://etherpad.openstack.org/p/SAT-ops-meetup
 >>
 >> The event is free to attend, but please RSVP by July 29th, so we can
 >> plan for food and space.
 >>
 >> Thank you to Rackspace for hosting this event!!
 >>
 >> ***Note***: This event assumes OpenStack ops knowledge and is _not_
 >> appropriate for beginners, or a place to learn about OpenStack. The
 >> event contains relatively few 'presentations' and is mostly a
 >> discussion-style event. To find other OpenStack events in your area,
 >> please visit www.openstack.org/events
<http://www.openstack.org/events>.
 >>
 >>
 >> Regards,
 >>
 >>
 >> Tom
 >>
 >>
 >> On 07/07/14 23:04, Dan Radez wrote:
 >>> TShirt Signup at the bottom of the etherpad:
 >>> https://etherpad.openstack.org/p/SAT-ops-meetup
 >>>
 >>> RDO is ordering trystack t-shirts
 >>> I put a signup on the etherpad.
 >>> Put your name down by the end of the week (2014-07-11) and we should
be able
 >>> to bring a shirt to San Antonio for everyone who puts their
name down.
 >>>
 >>> Dan Radez
 >>>
 >>>
 >>> On 06/26/2014 03:49 AM, Tom Fifield wrote:
 >>>> Hi all,
 >>>>
 >>>> Thanks to those of you who have put some thoughts down on the
etherpad.
 >>>>
 >>>> There's still a small amount of time for those who haven't -
please
 >>>> check it out at:
 >>>>
 >>>> https://etherpad.openstack.org/p/SAT-ops-meetup
 >>>>
 >>>>
 >>>> I'm also going to start looking for moderators soon. If you are
 >>>> interested in moderating a session, let me know which ones :)
 >>>>
 >>>>
 >>>> Regards,
 >>>>
 >>>>
 >>>> Tom
 >>>>
 >>>>
 >>>> On 19/06/14 11:02, Tom Fifield wrote:
 >>>>> Hi all,
 >>>>>
 >>>>> We've come a long way since our first small meeting in San
Jose: I've
 >>>>> looked at the survey feedback around the Ops Meetup at the
Atlanta
 >>>>> summit and it was amazingly successful. That's all thanks to you.
 >>>>>
 >>>>> It's now time to organise our next meeting. Details forthcoming,
 >>>>> but we
 >>>>> know that it's going to be over two days, August 25th and
26th in San
 >>>>> Antonio, Texas, USA, thanks to Rackspace.
 >>>>>
 >>>>> However, first - we need to work on the content we'd like to see.
 >>>>>
 >>>>> The strongest response to the summit survey was a desire for
sessions
 >>>>> that really "connect" the ops/dev feedback loop full-circle,
and see
 >>>>> change actually happening. So this time, in addition to
 >>>>> feedback-gathering sessions, I'm hoping to see at least a few
real
 >>>>> working sessions, much more "development" focused. Perhaps
"blueprint
 >>>>> review" might be one of these, or maybe the "enterprise"
working group
 >>>>> would like to have a working meeting.
 >>>>>
 >>>>> Aside from this, it turns out that people actually liked the
topics we
 >>>>> picked for the summit ops meetup - well done! The more
 >>>>> presentation-style architecture "show and tell" was also
enjoyed, but
 >>>>> people wanted to change up the format a bit. Two back-to-back
hours
 >>>>> was
 >>>>> too long :)
 >>>>>
 >>>>> Please take a moment to jot down your ideas on how to shape
this next
 >>>>> meeting on the following etherpad:
 >>>>>
 >>>>> https://etherpad.openstack.org/p/SAT-ops-meetup
 >>>>>
 >>>>>
 >>>>> Regards,
 >>>>>
 >>>>>
 >>>>> Tom
 >>>>

Hi Aaron,

As best I can tell*, the lines about policy were added after the draft
agenda derived from the etherpad was created and sent to the mailing
list - as such, no time was scheduled to discuss "policy/congress".

In terms of changing that at this late stage, as the agenda has since
been relatively widely broadcast, I'd probably be looking for some
community consensus on this mailing list as to which session should be
replaced.

Just a random suggestion: on a brief check, I couldn't find a single
thread about congress on the ops ML (or, even, the general list). If
feedback to guide development is the name of the game, perhaps you could
introduce the project and ask a few questions on this list?

Regards,

Tom

* The time slider on the etherpad is only going back to Aug 5th, but I
  did go through the entire etherpad with a fine-toothed comb before
  making the schedule sheet and do not recall seeing these entries.

On 05/08/14 07:09, Aaron Rosen wrote:
Hi Tom,

We added Policy/congress to the etherpad
(https://etherpad.openstack.org/p/SAT-ops-meetup) a while back, do you
know which day that will be mapped to to be talked about?

Thanks,

Aaron


[Openstack-operators] Normal user operated Live Migration

Hi,
I have found that it is impossible for a normal user to (live-)migrate
her/his virtual instances from a compute node (whether failed or not)
to another one:

ERROR: Live migration of instance b2c60d5a-831e-4fa6-856c-ddf3d8d287ce
to host compute-01.cloud.pd.infn.it failed (HTTP 400) (Request-ID:
req-d32ebc3f-d74a-41a3-821e-149009ea2cbb)

In the controller node I see this in the conductor.log:

2014-08-04 08:56:35.988 2346 ERROR nova.openstack.common.rpc.common
[req-d32ebc3f-d74a-41a3-821e-149009ea2cbb
ca9b92d86e184def8e4d651ced8f67eb a6c9f4d7e973430db7f9615fe2a2bfec]
['Traceback (most recent call last):\n', ' File
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py",
line 461, in _process_data\n **args)\n', ' File
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py",
line 172, in dispatch\n result = getattr(proxyobj, method)(ctxt,
**kwargs)\n', ' File
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/common.py",
line 439, in inner\n return catch_client_exception(exceptions, func,
*args, **kwargs)\n', ' File
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/common.py",
line 420, in catch_client_exception\n return func(*args,
**kwargs)\n', ' File
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 645,
in migrate_server\n block_migration, disk_over_commit)\n', ' File
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 747,
in live_migrate\n ex, request_spec, self.db)\n', ' File
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 719,
in _live_migrate\n block_migration, disk_over_commit)\n', ' File
"/usr/lib/python2.6/site-packages/nova/conductor/tasks/live_migrate.py",
line 205, in execute\n return task.execute()\n', ' File
"/usr/lib/python2.6/site-packages/nova/conductor/tasks/live_migrate.py",
line 59, in execute\n self._check_host_is_up(self.source)\n', ' File
"/usr/lib/python2.6/site-packages/nova/conductor/tasks/live_migrate.py",
line 90, in _check_host_is_up\n service =
db.service_get_by_compute_host(self.context, host)\n', ' File
"/usr/lib/python2.6/site-packages/nova/db/api.py", line 151, in
service_get_by_compute_host\n return
IMPL.service_get_by_compute_host(context, host)\n', ' File
"/usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.py", line 107,
in wrapper\n nova.context.require_admin_context(args[0])\n', ' File
"/usr/lib/python2.6/site-packages/nova/context.py", line 195, in
require_admin_context\n raise exception.AdminRequired()\n',
'AdminRequired: User does not have admin privileges\n']

This seems to be strictly related to other restrictions (like "nova
host-list", or the missing compute node in the output of "nova show").
I tried to live-migrate a VM using the admin user and it worked fine. But
of course admin doesn't see the regular user's VMs.

I'm wondering what live migration is for, if it cannot be used by
regular users ... If a compute node fails there seems to be no chance to
restore your work seamlessly.

Even after modification of /etc/nova/policy.json, the situation hasn't
changed.
After a quick search on Google, I found this change
https://review.openstack.org/#/c/26972/: I tried to implement the fix
(el_context.elevate()), but for example the command "nova host-list"
still fails with a privilege problem.
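
(For what it's worth: the relevant API-layer rule in that era's default
/etc/nova/policy.json is sketched below, but the traceback above fails in
nova/db/sqlalchemy/api.py's require_admin_context wrapper, a DB-layer
check that policy.json does not control.)

grep migrateLive /etc/nova/policy.json
# "compute_extension:admin_actions:migrateLive": "rule:admin_api",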

Is there a way to perform the important task of VM migration ?

thanks,

 Alvise

Going out on a limb here.
I think the intent of live migration was for the operators to be able to perform scheduled maintenance on a compute node, not really something a user would be savvy to.
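
For reference, the operator-side workflow being described is roughly the
following (a sketch, run with admin credentials; the target host name here
is made up):

nova host-list
nova live-migration b2c60d5a-831e-4fa6-856c-ddf3d8d287ce compute-02.cloud.pd.infn.it
nova show b2c60d5a-831e-4fa6-856c-ddf3d8d287ce | grep ":host"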


[Openstack-operators] Browser dependent weirdness in dashboard

I rebooted the controller node to test if the services would come back. openstack-nova-api failed to start because:
2014-07-30 13:28:52.356 4651 ERROR nova.wsgi [-] Could not bind to 0.0.0.0:8775
2014-07-30 13:28:52.357 4651 CRITICAL nova [-] error: [Errno 98] Address already in use
nova-api-metadata has already grabbed port 8775.

I also noticed this before:
The nova api daemon will also start the metadata part if 'metadata' is also listed in enabled_apis= in nova.conf.
If you also start the metadata agent daemon you will have conflicts.
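
A sketch of the two mutually exclusive setups being described here (the
option name is from nova.conf; init-script names differ per distro):

grep ^enabled_apis /etc/nova/nova.conf
# Either: enabled_apis = ec2,osapi_compute          and run the standalone
#         nova-api-metadata service, or
#         enabled_apis = ec2,osapi_compute,metadata and keep the standalone
#         metadata service disabled.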

I am back to troubleshooting the dashboard, and here is where it gets weird. It doesn't work, but it doesn't work differently ...
Just some general recommendations for troubleshooting the dashboard:
I would first start with making sure that all APIs work.
So make sure keystone, nova, glance, cinder & neutron (if you use neutron) work when you use the CLI.
If it still does not work, set the dashboard to debug; usually there are some pointers to what is going on.
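
A minimal sketch of those CLI sanity checks (Icehouse-era clients), plus
where the dashboard debug switch usually lives on a RHEL/CentOS install:

keystone token-get
nova list
glance image-list
cinder list
neutron net-list
# dashboard: set DEBUG = True in /etc/openstack-dashboard/local_settings
# and restart httpd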

Cheers,
Robert van Leeuwen

[Openstack-operators] OpenStack Summit voting

Please vote for me :)

How Non profit community Can support openstack? Speaker (Asmaa Ibrahim)

https://www.openstack.org/vote-paris/Presentation/how-non-profit-community-can-support-openstack

--
Thanks,

Asmaa

Check out my proposal too
Guest Image Lifecycle
https://www.openstack.org/vote-paris/Presentation/guest-image-lifecycle



[Openstack-operators] Can I run a dummy firewall on Openstack

Hi,

I want to run a dummy firewall on an OpenStack single-node setup. Could
anyone please help with how to proceed? Do I need to write a driver or
plugin to launch my dummy firewall on OpenStack? If so, can anyone help
me get started on a firewall driver or plugin?

Thanks,
Shiva

[Openstack-operators] Slow DNS resolver causing problems for OSLO

Hello.

In our installation all endpoints are defined by DNS names. The zone is
hosted by two DNS servers, and both of them are listed in
/etc/resolv.conf on each node.

Today one of the two DNS servers failed badly due to a hardware
malfunction and stopped replying to any network traffic.

Theoretically, this should cause only small delays in operations - the
second DNS server is alive and replies normally.

But in practice I found that every component of OpenStack started to
cripple, up to the level of 500 errors from the APIs (nova, neutron,
etc). I haven't finished reading all the logs, but it seems that slow DNS
resolution is causing congestion in the connection pools of the
components, probably towards keystone for token validation.

It was not just 'slow', it was pure '500 errors' from the APIs, problems
with nova/neutron/ceilometer interoperation and so on.

I'll continue to read logs tomorrow, but it is really strange.

Any ideas/comments?

On 5 August 2014 11:34, Jeff Silverman wrote:

George,

The way DNS works is that the resolver queries the first nameserver listed in /etc/resolv.conf. That name server has 15 seconds to respond. Why so long? Because the name server may have to do a recursive search, and that used to take a lot of time. If the name server doesn't respond in 15 seconds, then the resolver tries the second name server. It also has 15 seconds to resolve the name. If it doesn't respond, then the resolver tries the third name server. The bottom line is that the resolver may take up to 45 seconds to resolve the name. That's a long time.

This is configurable using options timeout in /etc/resolv.conf. E.g. setting

options timeout:2

will cause the timeout to be two seconds, which may be more
appropriate given modern network speeds.
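
For example, a resolv.conf along these lines (the nameserver addresses are
placeholders; "attempts" and "rotate" are optional extras beyond the
timeout change suggested above):

# /etc/resolv.conf
nameserver 10.0.0.53
nameserver 10.0.0.54
options timeout:2 attempts:2 rotate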

Regards,
Marcus.

[Openstack-operators] I shot myself in the foot trying to install the block storage service

I am in
http://docs.openstack.org/icehouse/install-guide/install/yum/content/cinder-controller.html
step 8. The problem began when I didn't notice that the word controller
was in italics in the documentation. I wish it were in ALL CAPS, such as
for the CINDER_PASS. However, I went back and repeated the commands
correctly, I think, and I am still getting a 500 internal server error.

I used keystone with the --debug switch and tried using the generated curl
command:

root at controller1-prod.controller1-prod:~# curl -i -X POST
http://controller1-prod.sea.opencandy.com:35357/v2.0/tokens -H
"Content-Type: application/json" -H "Accept: application/json" -H
"User-Agent: python-keystoneclient" \
-d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username":
"admin", "password": "f8d67f756e057918ef25dca59de79778"}}}'
HTTP/1.1 500 Internal Server Error
Vary: X-Auth-Token
Content-Type: application/json
Date: Tue, 05 Aug 2014 23:51:10 GMT
Transfer-Encoding: chunked

{"error": {"message": "Malformed endpoint URL (see ERROR log for details):
http://controller:8776/v1/%(tenant_id)", "code": 500, "title": "Internal
Server Error"}}root at controller1-prod.controller1-prod:~#

from /var/log/keystone/keystone.log:

2014-08-05 16:51:10 ERROR [keystone.catalog.core] Malformed endpoint
http://controller:8776/v1/%(tenant_id) - incomplete format (are you missing
a type notifier ?)
2014-08-05 16:51:10 WARNING [keystone.common.wsgi] Malformed endpoint URL
(see ERROR log for details): http://controller:8776/v1/%(tenant_id)

I looked at the transfer with wireshark and I confirm that the service
endpoint is really sending http://controller:8776 in the error message. I
don't know what port 8776 or why the server thinks that there is traffic on
port 8776.

Whatever I did wrong, it has messed up keystone but good:

keystone catalog

Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/keystoneclient/v20/client.py",
line 172, in get
rawtokenfromidentityservice
return a.getauthref(self.session)
File
"/usr/lib/python2.6/site-packages/keystoneclient/auth/identity/v2.py", line
84, in getauthref
authenticated=False)
File "/usr/lib/python2.6/site-packages/keystoneclient/session.py", line
334, in post
return self.request(url, 'POST', **kwargs)
File "/usr/lib/python2.6/site-packages/keystoneclient/utils.py", line
324, in inner
return func(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/keystoneclient/session.py", line
275, in request
raise exceptions.from_response(resp, method, url)
InternalServerError: Malformed endpoint URL (see ERROR log for details):
http://controller:8776/v1/%(tenant_id) (HTTP 500)
not all arguments converted during string formatting
root at controller1-prod.controller1-prod:/var/log/httpd#

I found something similar on launchpad, bug 1291672
https://bugs.launchpad.net/openstack-manuals/+bug/1291672 which was closed
for lack of activity.

Thank you

Jeff Silverman

--
Jeff Silverman
Systems Engineer
(253) 459-2318 (c)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20140805/8154bc16/attachment.html

Yes, you've messed it up good. When the endpoint URL is bad, Keystone cannot do anything. I filed a bug https://bugs.launchpad.net/keystone/+bug/1347862 on this last week and the fix is being backported. The only resolution that I know of is to fix the endpoint in the Keystone database. Based on what I see, I think you're missing an "s" on the end of that: %(foo)s is how Python does string substitution. Here's how I fixed it before:

http://www.mattfischer.com/blog/?p=532
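
For illustration, the missing "s" is just Python dict-style %-formatting; a
quick interpreter session (the tenant id value is only a placeholder)
reproduces the "incomplete format" complaint:

>>> good = 'http://controller:8776/v1/%(tenant_id)s'
>>> good % {'tenant_id': 'abc123'}   # placeholder tenant id
'http://controller:8776/v1/abc123'
>>> bad = 'http://controller:8776/v1/%(tenant_id)'
>>> bad % {'tenant_id': 'abc123'}
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: incomplete format

The practical fix is the one described above: correct the stored cinder
endpoint URL (in the Keystone database, or by deleting and recreating the
endpoint) so the path ends in %(tenant_id)s.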


[Openstack-operators] How to disable ipv6 in openvswitch on l3 machine?

Hi,

We use SL6 (RDO icehouse) and that does not have "ip6tables-save" and "ip6tables-restore"
On our l3 agent machine the openvswitch fails with creating ports due to the unavailability of those commands.
Due to this the ports are not properly setup, they are created but missing the tag.
I also noticed this does not apply to the openvswitch on the compute nodes.

There is an "old" bug that describes this but was never closed here:
https://bugs.launchpad.net/neutron/+bug/1203611

Is there a way to disable it in a config file somewhere?
I've tried to set "use_ipv6 = False" in various config files but it does not help.
I've now cheated a bit by creating the commands (empty scripts) which does work but is an ugly hack :)

Any recommendations how to fix this?
Looking at the code I start to get the feeling this is not configurable... :(

Thx,
Robert van Leeuwen

On 8/8/2014 3:08 AM, Robert van Leeuwen wrote:
Hi,

We use SL6 (RDO icehouse) and that does not have "ip6tables-save" and "ip6tables-restore"
On our l3 agent machine the openvswitch fails with creating ports due to the unavailability of those commands.
Due to this the ports are not properly setup, they are created but missing the tag.
I also noticed this does not apply to the openvswitch on the compute nodes.

There is an "old" bug that describes this but was never closed here:
https://bugs.launchpad.net/neutron/+bug/1203611

Is there a way to disable it in a config file somewhere?
I've tried to set "use_ipv6 = False" in various config files but it does not help.
I've now cheated a bit by creating the commands (empty scripts) which does work but is an ugly hack :)

Any recommendations how to fix this?
Looking at the code I start to get the feeling this is not configurable... :(

It seems not. There is a bug open for it:
https://bugs.launchpad.net/neutron/+bug/1352893

There is a fix in progress.

[Openstack-operators] OpenStack Community Weekly Newsletter (Aug 1 – 8)

  OpenStack and NUMA placement
  <http://thoughtsoncloud.com/2014/08/openstack-numa-placement/>

Nonuniform memory access (NUMA) is a memory architecture that provides
different access times depending on which processor is being used. This
is a useful feature for improving the performance of virtualized guests.
Guests can be optimized to use specific NUMA nodes when provisioning
resources. On most modern hardware, one can specify which NUMA nodes a
guest can use for virtualization. As an example, by improving
performance and reducing latency, the network functions virtualization
(NFV) use cases can really take advantage of it. Tiago Rodrigues de
Mello http://thoughtsoncloud.com/2014/08/openstack-numa-placement/
wrote a report on the current plans to improve NUMA placement and
asked for comments on his blog.

  How do companies do OpenStack?
  <http://maffulli.net/2014/08/04/how-do-companies-do-openstack/>

Yours truly is going around these days asking "how does your company do
OpenStack?" to collect best practices and notable mistakes from various
leaders of OpenStack's corporate community. I'm hoping to build a "how
to" manual to help managers build better dev teams, more effective at
collaborating while shipping products to their customers. This is an
effort that goes hand-in-hand with training new developers with Upstream
Training and other initiatives aimed at sustaining OpenStack growth.
Email me, please, I'd love to hear more stories.

  Ops Mid-Cycle Meetup -- August 25/26
  <http://www.openstack.org/blog/2014/07/ops-mid-cycle-meetup-august-2526/>

Are you running an OpenStack cloud? Come down to San Antonio on August
25-26th and hang out with others who do as well.
http://www.eventbrite.com/e/openstack-ops-mid-cycle-meetup-tickets-12149171499

The Road To Paris 2014 -- Deadlines and Resources

During the Paris Summit there will be a working session for the Women of
OpenStack to frame up more defined goals and line out a blueprint for
the group moving forward. We encourage all women in the community to
complete this very short survey
https://www.surveymonkey.com/s/V39BL7H to provide input for the group.

Ask OpenStack https://ask.openstack.org/ is the go-to destination for
OpenStack users. Interesting questions waiting for answers:

Welcome Victoria Martínez de la Cruz (vkmc) to Zaqar's core team
http://lists.openstack.org/pipermail/openstack-dev/2014-August/041724.html

Welcome (back) Jay Pipes to nova-core
http://lists.openstack.org/pipermail/openstack-dev/2014-August/041775.html

Claire Delcourt
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persondf8d1009-189d-4850-9ecd-eb059760aedd
Koichi Yoshigoe
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personb7ad4981-ee82-4a96-b4c4-33c3127dafb7

Ravi Sankar Penta
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personb99221da-9c47-42c5-84c2-631ba517796f
HAYASHI Yusuke
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persone4ffacf8-a78f-4598-94da-746d1a17ed92

Sridhar Ramaswamy
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person3388c86c-109e-4802-b791-9685c55e93cf
Winnie Tsang
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persone9810d03-205e-40fb-8969-4466aecdf5aa

???? ???????????
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person6855aef1-5955-4f7b-9ed3-c9e289d90ca2
Wee Loong
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person0760fb5c-b232-43e1-a8e8-4cef0a092cb3

Zsolt Dudás
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person6aae4605-23ad-4bec-81cd-2b58af900ddb
Sunu Engineer
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person3152ee1e-3d7d-46b6-a8a4-d3f5beb2e04b

Robbie Harwood
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person1ee18507-7a75-4a26-be33-f35948080596
Sam Betts
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persond7515652-5760-42e8-8140-8a59e29395a2

Vitaly Gridnev
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person32ca98e9-4810-4df4-a52c-b336fad77802
John Davidge
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person49b47de0-c068-4c1f-9f42-1579c29a90c7

Victor A. Ying
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person21f464bc-0851-4054-9bf4-d9a216098ba3
Jacek Świderski
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personc278c5e7-ef4e-422d-9a0c-64625b2951ba

Sergey Nuzhdin
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person1d703cf3-cd6a-4bc4-a060-1924ce9f6581
Aishwarya Thangappa
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person88bbb684-2706-4f00-94c2-9d1528b39e6c

Sridhar Ramaswamy

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person3388c86c-109e-4802-b791-9685c55e93cf

Mike Smith

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person34f3d38a-508f-416e-92f8-464573f538f8

Di Xu

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person0720cd72-0e53-43ac-9334-8c8035ff39f8

Subrahmanyam Ongole

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person7548125f-1f03-420e-bee0-dfeb14cbf296

John Trowbridge

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personcf393e4d-dd1a-4b76-bc1c-c73a3a79acec

Emily Hugenbruch

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persona9e0d3af-7b1c-46a6-8db6-c210e69227f5

Patrizio Tufarolo

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person8042db5d-e67e-48a6-a300-d260889c4b44

Yaling Fan

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persona2075d41-dc12-42e4-a0e2-62518c77899f

Robin Wang

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persone9b1326f-f31a-44ab-bd99-c573bf36a04f

Joseph Davis

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person96174e9c-4877-4ee6-962c-1fd09e9b0def

Ambroise CHRISTEA

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person05a2241e-79e7-419a-87dc-5a07780efcfc

Alexey Miroshkin

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person35751925-0511-4db7-9d04-4de421a5f01a

Pavlov Andrey

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person454b3687-c59d-4731-92c6-aa87513a81a8

Aviram Bar-Haim

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personabd8a592-896b-4235-b2e9-198280b2e9cc

Latest Activity In Projects

Do you want to see at a glance the bugs filed and solved this week?
Latest patches submitted for review? Check out the individual project
pages on OpenStack Activity Board -- Insights
http://activity.openstack.org/data/display/OPNSTK2/.

/The weekly newsletter is a way for the community to learn about all the
various activities occurring on a weekly basis. If you would like to add
content to a weekly update or have an idea about this newsletter, please
leave a comment./


[Openstack-operators] Flavor ID mismatch

Hi,
I've created a new flavor in my OpenStack cloud, called xsmall. When I started a new instance with that flavor, I found that it is connected to a non-existent flavor ID, which resolves to my new xsmall flavor:

~>nova flavor-list
+--------------------------------------+------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+--------------------------------------+------------+-----------+------+-----------+------+-------+-------------+-----------+
| 1229a10f-5f13-4218-ba21-47a49cb99ae9 | m1.medium | 2048 | 10 | 0 | | 2 | 1.0 | True |
| 4b5afdbf-8dc3-4b30-938c-864e2da27f00 | m1.small | 1024 | 5 | 0 | | 1 | 1.0 | True |
| 677e1cc5-0046-4b22-9b17-57483b24030f | m1.xsmall | 512 | 3 | 0 | | 1 | 1.0 | True |
| 9eb9489a-f116-4282-9ba9-035a9408b3a9 | m1.tiny | 256 | 1 | 0 | | 1 | 1.0 | True |
| a5618e34-9118-42ec-a404-143f4cf14f86 | m1.large | 4096 | 40 | 0 | | 2 | 1.0 | True |
| ad49139e-e609-4d18-9962-cfdcf211c235 | m1.xxlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
| dfdd707d-5a3f-42a2-be3e-864b548bc536 | m1.xlarge | 8192 | 80 | 0 | | 4 | 1.0 | True |
+--------------------------------------+------------+-----------+------+-----------+------+-------+-------------+-----------+

~>nova flavor-show m1.xsmall
+----------------------------+--------------------------------------+
| Property | Value |
+----------------------------+--------------------------------------+
| name | m1.xsmall |
| ram | 512 |
| OS-FLV-DISABLED:disabled | False |
| vcpus | 1 |
| extra_specs | {} |
| swap | |
| os-flavor-access:is_public | True |
| rxtx_factor | 1.0 |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 3 |
| id | 677e1cc5-0046-4b22-9b17-57483b24030f |
+----------------------------+--------------------------------------+

~>nova show 698d493b-0866-4979-9101-e94a2aed9ebc
+--------------------------------------+----------------------------------------------------------+
| Property | Value |
+--------------------------------------+----------------------------------------------------------+
| status | ACTIVE |
| updated | 2014-08-08T10:17:33Z |
| OS-EXT-STS:task_state | None |
| OS-EXT-SRV-ATTR:host | compute-01.cloud.pd.infn.it |
| key_name | None |
| image | Fedora 20 - x64 (910417c9-de55-45e1-849d-d84f095c8a87) |
| hostId | 6384c4d4e4d8c298b66ce2f2528f335a16071d6dc2456236e511bba4 |
| OS-EXT-STS:vm_state | active |
| OS-EXT-SRV-ATTR:instance_name | instance-00000026 |
| OS-SRV-USG:launched_at | 2014-08-08T10:14:25.000000 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | compute-01.cloud.pd.infn.it |
| flavor | m1.xsmall (904db668-b018-4563-81a7-189d2206b47e) |
| id | 698d493b-0866-4979-9101-e94a2aed9ebc |
| security_groups | [{u'name': u'default'}] |
| OS-SRV-USG:terminated_at | None |
| user_id | 052be1ed98024ae1acb18ff692deef5a |
| name | test-698d493b-0866-4979-9101-e94a2aed9ebc |
| created | 2014-08-08T10:12:54Z |
| tenant_id | fb310b4d52f14fbbaf337d803576c7e0 |
| OS-DCF:diskConfig | MANUAL |
| metadata | {} |
| os-extended-volumes:volumes_attached | [] |
| accessIPv4 | |
| accessIPv6 | |
| progress | 0 |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-AZ:availability_zone | nova |
| Admin network | 10.0.1.10 |
| config_drive | |
+--------------------------------------+----------------------------------------------------------+

as you can see "nova show? shows an ID for xsmall which is different than that one of the output of flavor-list or flavor-show.
While it seems not to be a problem for the end-user who uses the command-line, it becomes a 'semi-serious' problem for 3rd party clients using the nova APIs (v2).

Is this a known issue or am I doing something wrong ?

Alvise
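
For illustration, here is a minimal python-novaclient sketch of where a v2
API client would see the two different IDs; the credentials and auth URL are
placeholders, and the server UUID is the one from the nova show output above.

from novaclient.v1_1 import client

# Placeholder credentials and auth URL
nova = client.Client('admin', 'secret', 'admin',
                     'http://keystone.example.com:5000/v2.0')

server = nova.servers.get('698d493b-0866-4979-9101-e94a2aed9ebc')
print(server.flavor['id'])                      # ID embedded in the server record
print(nova.flavors.find(name='m1.xsmall').id)   # ID reported by flavor-list/flavor-show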

[Openstack-operators] Accessing Swift via S3-API(boto) using Keystone authentication.

Hi,

I am having a hard time accessing my Swift store via boto. The command line and dashboard work fine, and I have included the s3token middleware as well. I am using the Havana release.

The error in the keystone log file is:

2014-08-11 22:28:39.915 22694 WARNING keystone.common.wsgi [-] Could not find credential, 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6

I am not sure about the "accesskeyid" and "secretaccesskey", using following:

awsaccesskeyid=':',
aws
secretaccesskey=''

is this correct?

proxy-service.conf


[pipeline:main]
pipeline = catch_errors cache healthcheck swift3 s3token authtoken keystone proxy-server

[filter:s3token]
paste.filter_factory = keystoneclient.middleware.s3_token:filter_factory
auth_port = 35357
auth_host = keystone-hostname
auth_protocol = http

[filter:swift3]
use = egg:swift3#swift3

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true

[filter:cache]
use = egg:swift#memcache
memcache_servers = 10.0.11.245:11211

[filter:catch_errors]
use = egg:swift#catch_errors

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:keystone]
use = egg:swift#keystoneauth
operator_roles = admin,member,SwiftOperator
is_admin = true
cache = swift.cache

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory


keystone.conf (s3_extension is not in the public_api)
[filter:s3_extension]
paste.filter_factory = keystone.contrib.s3:S3Extension.factory

[pipeline:public_api]
pipeline = stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug ec2_extension user_crud_extension public_service

[pipeline:admin_api]
pipeline = stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug stats_reporting ec2_extension s3_extension crud_extension admin_service


Any hint will be appreciated.

/Salman.


Use the "keystone ec2-credentials-list" to get the access id and
secret_key. And use the same to access Swift S3.

$ keystone ec2-credentials-list

+--------+----------------------------------+----------------------------------+
| tenant | access | secret |
+--------+----------------------------------+----------------------------------+
| admin | bf0551537a98b5 | 16e5cd7fc623f405991 |
| demo | 2efbca009372433 | ab1eac986c589231a4 |
+--------+----------------------------------+----------------------------------+
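
As a sketch, a boto connection using those EC2 credentials might look like
the following; the proxy host and port are placeholders for your own Swift
proxy endpoint.

from boto.s3.connection import S3Connection, OrdinaryCallingFormat

# Access and secret come from `keystone ec2-credentials-list` above
conn = S3Connection(aws_access_key_id='ACCESS_FROM_EC2_CREDENTIALS',
                    aws_secret_access_key='SECRET_FROM_EC2_CREDENTIALS',
                    host='swift-proxy.example.com',   # placeholder proxy host
                    port=8080,                        # placeholder proxy port
                    is_secure=False,
                    calling_format=OrdinaryCallingFormat())

for bucket in conn.get_all_buckets():
    print(bucket.name)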


[Openstack-operators] cinder-volume isn't starting - cause and cure

I am working from
http://docs.openstack.org/icehouse/install-guide/install/yum/content/cinder-verify.html
step 3. When I go to create a new volume, I get an error in the cinder
list command.

I checked the directory /var/log/cinder and it is empty. I found a
suggestion that it had to be owned by user cinder so I tried chown
cinder:cinder /var/log/cinder. No joy.

I did some more checking. The daemon dies within a few seconds of starting.

root at storage5-prod.storage5-prod:~# service openstack-cinder-volume start
Starting openstack-cinder-volume: [ OK ]
root at storage5-prod.storage5-prod:~# service openstack-cinder-volume status
openstack-cinder-volume dead but pid file exists
root at storage5-prod.storage5-prod:~#

Looking in /var/log/messages, I see:

Aug 12 16:47:24 storage5-prod abrt: detected unhandled Python exception in
'/usr/bin/cinder-volume'
Aug 12 16:47:24 storage5-prod abrtd: New client connected
Aug 12 16:47:24 storage5-prod abrt-server[7029]: Saved Python crash dump of
pid 7017 to /var/spool/abrt/pyhook-2014-08-12-16:47:24-7017
Aug 12 16:47:24 storage5-prod abrtd: Directory
'pyhook-2014-08-12-16:47:24-7017' creation detected
Aug 12 16:47:24 storage5-prod abrtd: Package 'openstack-cinder' isn't
signed with proper key
Aug 12 16:47:24 storage5-prod abrtd: 'post-create' on
'/var/spool/abrt/pyhook-2014-08-12-16:47:24-7017' exited with 1
Aug 12 16:47:24 storage5-prod abrtd: Deleting problem directory
'/var/spool/abrt/pyhook-2014-08-12-16:47:24-7017'

I wish the abrtd wouldn't delete the problem directory - that makes the
problem tough to track down.

I diagnosed the problem by modifying /etc/init.d/openstack-cinder-volume:

root at storage5-prod.storage5-prod:/etc/init.d# diff openstack-cinder-volume openstack-cinder-volume_ORIGINAL
43c43
< daemon --user cinder --pidfile $pidfile "$exec --config-file $distconfig --config-file $config --logfile $logfile &>>/tmp/openstack-cinder-volume.txt & echo \$! > $pidfile"
---
> daemon --user cinder --pidfile $pidfile "$exec --config-file $distconfig --config-file $config --logfile $logfile &>/dev/null & echo \$! > $pidfile"
root at storage5-prod.storage5-prod:/etc/init.d#

then looked at /tmp/openstack-cinder-volume.txt and found:
root at storage5-prod.storage5-prod:/etc/init.d# cat /tmp/openstack-cinder-volume.txt
/usr/lib64/python2.6/site-packages/Crypto/Util/number.py:57:
PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using
libgmp >= 5 to avoid timing attack vulnerability.
  warn("Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to
avoid timing attack vulnerability.", PowmInsecureWarning)
Traceback (most recent call last):
  File "/usr/bin/cinder-volume", line 58, in <module>
    version=version.version_string())
  File "/usr/lib/python2.6/site-packages/oslo/config/cfg.py", line 1638, in __call__
    raise ConfigFilesNotFoundError(self._namespace._files_not_found)
oslo.config.cfg.ConfigFilesNotFoundError: Failed to read some config files:
/etc/cinder/cinder.conf

Ah hah!

I solved the problem by chown cinder:cinder /etc/cinder/cinder.conf

root at storage5-prod.storage5-prod:~# ls -ld /etc/cinder/
drwxr-xr-x 3 cinder cinder 4096 Aug 12 16:41 /etc/cinder/
root at storage5-prod.storage5-prod:~# ls -ld /etc/cinder/cinder.conf
-rw-r----- 1 root root 60893 Aug 12 15:55 /etc/cinder/cinder.conf
root at storage5-prod.storage5-prod:~# chown cinder:cinder
/etc/cinder/cinder.conf
root at storage5-prod.storage5-prod:~# ls -l /etc/cinder/cinder.conf
-rw-r----- 1 cinder cinder 60893 Aug 12 15:55 /etc/cinder/cinder.conf
You have new mail in /var/spool/mail/root
root at storage5-prod.storage5-prod:~#

root at storage5-prod.storage5-prod:~# service openstack-cinder-volume start
Starting openstack-cinder-volume: [ OK ]
root at storage5-prod.storage5-prod:~# service openstack-cinder-volume status
openstack-cinder-volume (pid 7370) is running...
root at storage5-prod.storage5-prod:~# service openstack-cinder-volume status
openstack-cinder-volume (pid 7370) is running...
root at storage5-prod.storage5-prod:~# sleep 5; service
openstack-cinder-volume status
openstack-cinder-volume (pid 7370) is running...
root at storage5-prod.storage5-prod:~#

The bottom line is that I would like to make a suggestion: OpenStack is
very complicated and has a lot of moving parts. Furthermore, a lot of
people are using it in enterprise-critical applications. Put some thought
into troubleshooting techniques. In particular, don't redirect stderr to
/dev/null; instead, write it somewhere.

While I have your undivided attention, there is another error message
coming out of stderr that you ought to think about:

/usr/lib64/python2.6/site-packages/Crypto/Util/number.py:57:
PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using
libgmp >= 5 to avoid timing attack vulnerability.
  warn("Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to
avoid timing attack vulnerability.", PowmInsecureWarning)

Thank you

Jeff

--
Jeff Silverman
Systems Engineer
(253) 459-2318 (c)

[Openstack-operators] Recover stalled live-migration

Hi guys,

What's the best way to recover a stalled live-migration?

Scenario:
Using shared storage (ceph & NFS in our case), we triggered some
live-migrations. During the l-m, all of the VMs stalled with the migration
never completing. In this case the qemu-kvm instances processes did migrate
to the new target however, the l-m could not complete. (The underlying
reason that l-m could not complete was due to a systemic neutron issue.)

I've typically used "nova reset-state --active" when an l-m doesn't finish,
but in this case, that kept the state of the l-m on the original node
though the qemu was on the target node. Is there any way to stitch them
back together after the fact (aside from just hacking the mysql database)?

Your thoughts and/or experience with this would be useful. (This occurred
in a testbed and ultimately we terminated the instances PRIOR to finding
out that it was neutron that was causing the lack-of-forward-progress.
Likely restarting neutron in this case would have allowed the l-ms to
complete.)

[Openstack-operators] IceHouse Neutron ML2 Plugin with Vlans

Hi Team,

I am trying to create a multi node POC environment using Icehouse, I have
followed Openstack Icehouse Linux doc

http://docs.openstack.org/icehouse/install-guide/install/yum/content/section_neutron-networking.html

to install and configure Neutron with the ML2 plugin. I am confused by that
doc, as it says we need to run neutron services on the controller node and
just install the OVS plugin on the compute node. If we have a separate
neutron node, why should we run neutron services on the controller node?

Can someone point me to a working document for configuring Neutron as a
separate node with the ML2 plugin + VLAN configuration?

Is it the latest log entry, after 4 PM? In the morning I had an issue
connecting to neutron and it is now resolved.

On Wed, Aug 13, 2014 at 6:54 PM, Edgar Magana <edgar.magana at workday.com>
wrote:

I see the same error over and over:

RescheduledException: Build of instance
dc48b2c6-e6f9-4952-b864-e96bd7f2699d was re-scheduled: Connection to
neutron failed: [Errno 111] ECONNREFUSED

It seems that nova does not know how to reach neutron; maybe the endpoint
configuration is wrong

Edgar

From: raju <raju.roks at gmail.com>
Date: Wednesday, August 13, 2014 at 3:26 PM

To: Edgar Magana <edgar.magana at workday.com>
Cc: "openstack-operators at lists.openstack.org" <
openstack-operators at lists.openstack.org>
Subject: Re: [Openstack-operators] IceHouse Neutron ML2 Plugin with Vlans

I have this vif config in my compute node nova.conf file

vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver
vif_plugging_is_fatal=True
vif_plugging_timeout=300

If I remove the above config I am able to create an instance, but it is unable to get an IP
from DHCP. If I enable the above config I am getting the below error:

"Error: Failed to launch instance "test200": Please try again later
[Error: Virtual Interface creation failed]."

here is the compute log file

http://filebin.ca/1WjOZLfVHZ1y

Thanks,
Raju

On Wed, Aug 13, 2014 at 5:29 PM, Edgar Magana <edgar.magana at workday.com>
wrote:

That configuration should definitely work.
Let's take a look at the logs. Can you paste a link to your logs?

Edgar

From: raju <raju.roks at gmail.com>
Date: Wednesday, August 13, 2014 at 2:10 PM
To: Edgar Magana <edgar.magana at workday.com>
Cc: "openstack-operators at lists.openstack.org" <
openstack-operators at lists.openstack.org>
Subject: Re: [Openstack-operators] IceHouse Neutron ML2 Plugin with Vlans

Hi Edgar,

I appreciate your quick reply. I have deployed neutron-server, ovs, ml2,
l3 and dhcp on the neutron node and configured my neutron node as the
endpoint, and my compute node is just running ovs and dhcp. Will this work?
I am getting "unable to create virtual interface" VIF errors while spinning
up instances.

I previously tested with just 2 nodes (controller and compute) using
packstack and it worked fine. I am unable to figure out what manual
configuration I need to do at the controller/neutron/compute end to make
VLANs work using the ml2 plugin.

Thanks,
Raju

On Wed, Aug 13, 2014 at 4:55 PM, Edgar Magana <edgar.magana at workday.com>
wrote:

Raju,

Take a look to this link:

http://docs.openstack.org/icehouse/install-guide/install/yum/content/ch_overview.html#architecture_example-architectures

Basically, the Controller will run Neutron Server (API) configure for
the ML2 plugin and the Network Node will run the following ones:

  • L2 OVS or LB agent (you can choose the one that you want)
  • L3 agent
  • DHCP agent

The compute node only needs to run the

  • L2 OVS or LB agent

    Cheers,

    Edgar



[Openstack-operators] Neutron: Why OVS bridges (br-int, br-tun) are integrated in Icehouse?

Hello,
I want to know why the bridges are integrated. I saw that GRE tunnels are also established on br-int. I dumped the flow tables on br-int as below (summarized):


table=0, n_packets=29, n_bytes=2938, send_flow_rem in_port=3,dl_src=fa:16:3e:9e:dc:5f actions=set_field:0x3->tun_id,goto_table:10
table=0, n_packets=22, n_bytes=2308, send_flow_rem in_port=4,dl_src=fa:16:3e:41:24:1d actions=set_field:0x3->tun_id,goto_table:10
table=0, n_packets=70, n_bytes=8286, send_flow_rem tun_id=0x3,in_port=2 actions=goto_table:20
table=0, n_packets=0, n_bytes=0, send_flow_rem priority=8192,in_port=3 actions=drop
table=0, n_packets=0, n_bytes=0, send_flow_rem priority=8192,in_port=4 actions=drop
table=0, n_packets=415, n_bytes=37765, send_flow_rem dl_type=0x88cc actions=CONTROLLER:56
table=10, n_packets=0, n_bytes=0, send_flow_rem priority=8192,tun_id=0x3 actions=goto_table:20
table=10, n_packets=12, n_bytes=1860, send_flow_rem priority=16384,tun_id=0x3,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=output:2,goto_table:20
table=10, n_packets=24, n_bytes=2084, send_flow_rem tun_id=0x3,dl_dst=fa:16:3e:4c:a5:a9 actions=output:2,goto_table:20
table=10, n_packets=15, n_bytes=1302, send_flow_rem tun_id=0x3,dl_dst=fa:16:3e:e8:f6:86 actions=output:2,goto_table:20
table=20, n_packets=39, n_bytes=3386, send_flow_rem priority=8192,tun_id=0x3 actions=drop
table=20, n_packets=21, n_bytes=2956, send_flow_rem priority=16384,tun_id=0x3,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=output:4,output:3
table=20, n_packets=18, n_bytes=2102, send_flow_rem tun_id=0x3,dl_dst=fa:16:3e:41:24:1d actions=output:4
table=20, n_packets=25, n_bytes=2732, send_flow_rem tun_id=0x3,dl_dst=fa:16:3e:9e:dc:5f actions=output:3

Then I assumed that by enabling the OF1.3 protocol on OVS, they can use a pipelining process by numbering the flow tables.
But I could not work out what the advantage of integrating both bridges is.
Is there any performance improvement?

Regards
Taeheum Na


M.S. candidate of Networked Computing Systems Lab.
School of Information and Communications
GIST (Gwangju Inst. of Sci. and Tech.)
E-mail: thna at nm.gist.ac.kr
Phone: +82-10-2238-9424
Office: +82-62-715-2273


[Openstack-operators] memcached redundancy

Hello,

I have an OpenStack cloud with two HA cloud controllers. Each controller
runs the standard controller components: glance, keystone, nova minus
compute and network, cinder, horizon, mysql, rabbitmq, and memcached.

Everything except memcached is accessed through haproxy and everything is
working great (well, rabbit can be finicky ... I might post about that if
it continues).

The problem I currently have is how to effectively work with memcached in
this environment. Since all components are load balanced, they need access
to the same memcached servers. That's solved by the ability to specify
multiple memcached servers in the various openstack config files.

But if I take a server down for maintenance, I notice a 2-3 second delay in
all requests. I've confirmed it's memcached by editing the list of
memcached servers in the config files and the delay goes away.

I'm wondering how people deploy memcached in environments like this? Are
you using some type of memcached replication between servers? Or if a
memcached server goes offline are you reconfiguring OpenStack to remove the
offline memcached server?

Thanks,
Joe

Hi Clint,

Thank you for your input.

If I understand you correctly, the core cause seems to be internal to
OpenStack? If that's true, I will create a bug report about this. I'm
guessing Oslo would be the correct project to file the bug?

Thanks,
Joe

On Thu, Aug 21, 2014 at 12:19 PM, Clint Byrum wrote:


I've seen a few responses to this that show a massive misunderstanding
of how memcached is intended to work.

Memcached should never need to be load balanced at the connection
level. It has a consistent hash ring based on the keys to handle
load balancing and failover. If you have 2 servers, and 1 is gone,
the failover should happen automatically. This gets important when you
have, say, 5 memcached servers as it means that given 1 failed server,
you retain n-1 RAM for caching.

What I suspect is happening is that we're not doing that right by
either not keeping persistent connections, or retrying dead servers
too aggressively.

In fact, it looks like the default one used in oslo-incubator's
'memorycache', the 'memcache' driver, will by default retry dead servers
every 30 seconds, and wait 3 seconds for a timeout, which probably
matches the behavior you see. None of the places I looked in Nova seem
to allow passing in a different deadretry or timeout. In my experience,
you probably want something like dead
retry == 600, so only one slow
operation every 10 minutes per process (so if you have 10 nova-api's
running, that's 10 requests every 10 minutes).

It is also possible that some of these objects are being re-created on
every request, as is common if caching is implemented too deep inside
"middleware" and not at the edges of a solution. I haven't dug deep
enough in, but suffice to say, replicating and load balancing may be the
cheaper solution to auditing the code and fixing it at this point.
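
As a sketch of those knobs, the python-memcached client that the oslo
memorycache code wraps accepts dead_retry and socket_timeout directly; the
server addresses below are placeholders.

import memcache

# dead_retry: seconds before a dead server is retried (default 30)
# socket_timeout: seconds to wait before marking a server dead (default 3)
mc = memcache.Client(['192.0.2.11:11211', '192.0.2.12:11211'],
                     dead_retry=600,
                     socket_timeout=1)

mc.set('token-abc123', 'cached-value', time=300)
print(mc.get('token-abc123'))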



[Openstack-operators] how can I make a new VM use the project name as sub-domain ?

hi all,

am running openstack in a flat-network with just 1 IP range / network and
would like to have the VMs named after the project, e.g.:

project name: dev

host name: redis1
domain: acme.com

resulting name: redis1.dev.acme.com

at the moment i'd just get a result of redis1.acme.com - but do want/need
the sub-domains as

project: dev
and
project: internal
and
project: frontend
etc.

may use the same name for things they work on ... how is this accomplished
? Or where do I have to look at (in the code) to try and make this a
default ?

Thanks a lot!
Alex

I would try to look at what's available in the metadata; if the project name isn't there, then I would pass something in the user-data at boot time, and set up a script to use the user-data to accomplish this.
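
As a sketch of that approach, assuming the project name is passed as the
entire plain-text user-data at boot, an in-instance script along these lines
could build the FQDN (the acme.com domain is the example from the question):

import socket
import subprocess
import urllib2

# Assumption: user-data holds just the project name, e.g. "dev"
project = urllib2.urlopen('http://169.254.169.254/latest/user-data',
                          timeout=5).read().strip()
short_name = socket.gethostname().split('.')[0]    # e.g. "redis1"
fqdn = '%s.%s.acme.com' % (short_name, project)    # -> redis1.dev.acme.com

subprocess.call(['hostname', fqdn])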


[Openstack-operators] I have finally created an instance, and it works! However, there is no ethernet card

People,

I have brought up an instance, and I can connect to it using my browser! I
am so pleased.

However, my instance doesn't have an ethernet device, only a loopback
device. My management wants me to use a provider network, which I
understand to mean that my instances will have IP addresses in the same
space as the controller, block storage, and compute node administrative
addresses. However, I think that discussing addressing is premature until
I have a working virtual ethernet card.

I am reading through
http://docs.openstack.org/icehouse/install-guide/install/yum/content/neutron-ml2-compute-node.html
and I think that the ML2 plugin is what I need. However, I think I do not
want a network type of GRE, because that encapsulates the packets and I
don't have anything to un-encapsulate them.

Thank you

Jeff

--
Jeff Silverman
Systems Engineer
(253) 459-2318 (c)

By "defined a network space for your instances", does that mean going
through the process as described in
http://docs.openstack.org/icehouse/install-guide/install/yum/content/neutron-ml2-compute-node.html
?

I got part way through that when I realized that the procedure was going to
bridge packets through neutron. That's not what I want. I want the
packets to go directly to the physical router. For example, I have two
tenants, with IP addresses 10.50.15.80/24 and 10.50.15.90/24, and the
router is at 10.50.15.1. There is a nice picture of what I am trying to do
at
http://docs.openstack.org/openstack-ops/content/network_troubleshooting.html#nova_network_traffic_in_cloud
. But if the hypervisor doesn't present a virtual device to the guests,
then nothing else is going to happen. The network troubleshooting guide
http://docs.openstack.org/openstack-ops/content/network_troubleshooting.html#nova_network_traffic_in_cloud
does not explain what to do if the virtual NIC is missing.

Thank you

Jeff

On Fri, Aug 15, 2014 at 9:38 AM, Abel Lopez wrote:

Curious if you've defined a network space for your instances. If you're
using the traditional flat network, this is known as the 'fixed_address'
space.
If you're using neutron, you would need to create a network and a subnet
(and router with gateway, etc). You'd then assign the instance to a network
at launch time.

On Aug 15, 2014, at 9:17 AM, Jeff Silverman wrote:

<ipa.png>
For those of you that can't see pictures:
$ sudo ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

I suspect that the issue is that the hypervisor is not presenting a
virtual ethernet card.

Thank you

Jeff

On Thu, Aug 14, 2014 at 6:57 PM, Nhan Cao wrote:

can you show output of command:
ip a


--
Jeff Silverman
Systems Engineer
(253) 459-2318 (c)

[Openstack-operators] unable to access Horizon dashboard with public IP

Hi Team,

I am unable to access the Dashboard via the public IP; it stopped working suddenly. I am able to access it from the private IP.

[error] [client xx.xx.xx.xx] Script timed out before returning headers: django.wsgi, referer: http://xx.xx.xx.xx/dashboard/project/
[error] [client xx.xx.xx.xx] Script timed out before returning headers: django.wsgi, referer: http://xx.xx.xx.xx/dashboard/project/instances/03-a5ce-4c17-bd5d-cde9adf068d2/
[error] [client xx.xx.xx.xx] Script timed out before returning headers: django.wsgi, referer: http://xx.xx.xx.xx/dashboard/project/
[error] [client xx.xx.xx.xx] Script timed out before returning headers: django.wsgi, referer: http://xx.xx.xx.xx/dashboard/project/
[error] [client xx.xx.xx.xx] Script timed out before returning headers: django.wsgi
[warn] [client xx.xx.xx.xx] incomplete redirection target of '/dashboard/' for URI '/' modified to 'http://xx.xx.xx.xx/dashboard/'
[warn] [client xx.xx.xx.xx] incomplete redirection target of '/dashboard/' for URI '/' modified to 'http://xx.xx.xx.xx/dashboard/'
[error] [client xx.xx.xx.xx] File does not exist: /var/www/html/favicon.ico
[error] [client xx.xx.xx.xx] File does not exist: /var/www/html/favicon.ico
[warn] [client xx.xx.xx.xx] incomplete redirection target of '/dashboard/' for URI '/' modified to 'http://xx.xx.xx.xx/dashboard/'

[Openstack-operators] OpenStack Community Weekly Newsletter (Aug 8 - 15)

  Patchwork Onion delivers stability & innovation: the graphics that
  explains how we determine OpenStack Core
  <http://robhirschfeld.com/2014/08/12/patchwork-onion/>

The OpenStack board, through the DefCore committee
http://bit.ly/DefCore, has been working to define "core" for
commercial users
http://robhirschfeld.com/2014/05/20/defcore-three-cases/ using a
combination of minimum required capabilities (APIs) and code (Designated
Sections). These minimums are decided on a per project basis so it can
be difficult to visualize the impact on the overall effect on the
Integrated Release. Rob Hirschfeld and Joshua McKenty have created the
patchwork onion graphic to help illustrate how core relates to the
integrated release.

  OpenStack Upstream Training in Paris
  <http://www.openstack.org/blog/2014/08/openstack-upstream-training-in-paris/>

We're doing it again, bigger: the OpenStack Foundation is delivering a
training program to accelerate the speed at which new OpenStack
developers are successful at integrating their own roadmap into that of
the OpenStack project. If you're a new OpenStack contributor or plan on
becoming one soon, you should sign up for the next OpenStack Upstream
Training in Paris, November 1-2.
http://openstackupstream.eventbrite.com/ Participation is strongly
advised also for first time participants to OpenStack Design Summit.
We're doing it again before the Paris Summit, as we did in Atlanta
http://www.openstack.org/blog/2014/05/openstack-upstream-training-in-atlanta-a-big-success/,
only bigger.

  Using gerrymander, a client API and command line tool for gerrit
  <http://lists.openstack.org/pipermail/openstack-dev/2014-August/043085.html>

Over the past couple of months Daniel P. Berrange has worked on creating
a python library and command line tool for dealing with gerrit.
Gerrymander was born out of his frustration with the inefficiency of
handling reviews through the gerrit web UI. Reviewers can use
gerrymander to query gerrit for reviews and quickly get a list of
comments filtering out comments by bots.

The Road To Paris 2014 -- Deadlines and Resources

During the Paris Summit there will be a working session for the Women of
OpenStack to frame up more defined goals and line out a blueprint for
the group moving forward. We encourage all women in the community to
complete this very short survey
https://www.surveymonkey.com/s/V39BL7H to provide input for the group.

Ask OpenStack https://ask.openstack.org/ is the go-to destination for
OpenStack users. Interesting questions waiting for answers:

Louis Taylor
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person2e1a714a-b6c4-48f8-a03d-b0ce21677ac7
wudx05
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person78a6d2d9-e1b0-4601-81cf-1297a36920a6

Puneet Arora
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person0236b8b6-3be0-4a4d-b5a0-a4158e593f62
Miguel Grinberg
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personf7989751-dec2-49fa-95f7-459f96f10775

Sandro Tosi
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person316bf764-1009-43b8-a4b4-31c9a89378e3
Piet Delaney
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person71f6bd79-0775-47e7-a921-7a65da38619b

Sam Betts
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persond7515652-5760-42e8-8140-8a59e29395a2
Ilia Meerovich
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person43db3208-b9c3-424b-9001-b81cfdc4efed

Philippe Jeurissen
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person7b6c184d-f866-438d-8f81-4b45d2cb8fdd
Chirag Shahani
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persone650c9ff-f4e6-4593-828f-9258d0ebd276

Amit Gandhi
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person48926aa4-d554-4521-acb5-778f03984885
venkata anil
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persone2ae6149-2c32-48c5-911b-771089e139f4

Takashi Sogabe
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person2665a618-3bd8-415a-80b2-31fd6585db70
Takashi Sogabe
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person2665a618-3bd8-415a-80b2-31fd6585db70

Veena
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person3496eef1-9301-4864-aae9-fa389d50b8a7
Saro Chandra Bhooshan
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persona1b27b46-5f66-4e91-83de-fc378acad922

Juergen Brendel
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person9353aad7-417d-4ce1-90dd-6b0373294767
selvakumar
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person9009e2cf-5ded-437b-bcf0-502327b95076

Jin Dong
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personc9299569-e869-491f-9ffa-3b9bbf7319aa
sandhya
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persone090f2cc-eeba-48e2-b721-6c056be8733a

Qijing Li
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person69b6451c-32f2-47db-a4cf-af4e4252f889
Tarun Jain
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personeacf5557-5740-4265-92e4-a391f4e4f02f

Mathieu Losmede

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personaf6c2f03-5bb2-43be-84b4-c7f502a446a2

Lakshminarayanan Krishnan

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person5fc4b409-a4a4-4f05-bc7f-da1ff274262f

Latest Activity In Projects

Do you want to see at a glance the bugs filed and solved this week?
Latest patches submitted for review? Check out the individual project
pages on OpenStack Activity Board -- Insights
http://activity.openstack.org/data/display/OPNSTK2/.

/The weekly newsletter is a way for the community to learn about all the
various activities occurring on a weekly basis. If you would like to add
content to a weekly update or have an idea about this newsletter, please
leave a comment./

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20140815/95643776/attachment-0001.html

[Openstack-operators] Neutron nf_conntrack performance

Hey other ops!

I'm having a serious problem with my neutron router getting spin-locked in
nf_conntrack_tuple_taken.
Has anybody else experienced it?
As the incoming request rate goes up, nf_conntrack_tuple_taken runs very
hot on CPU0, causing ksoftirqd/0 to run at 100%. At that point internal
pings on the GRE network go sky high and it's game over.

Ubuntu 14.04/Icehouse 2014.1.1 on an ibm x3550 with 4 10G intel nics.
eth0 - Mgt
eth1 - GRE
eth2 - Public
eth3 - unused

Any help very much appreciated!

BR,
Stuart
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20140815/91bf033f/attachment.html

Hey John

Thanks for replying. I just checked, ipv6 is already turned off :(
Any other places I could look?

BR,
Stuart

On Sat, Aug 16, 2014 at 6:52 AM, John Edrington wrote:

Hi Stuart,

I had a similar sounding problem on CentOS running icehouse (ksoftirqd/0
at 100%, tcpdump showed GRE router advertisement flood). The workaround I
found was to disable ipv6 by editing /etc/sysctl.conf.

John

Sent from my mobile device.

----- Reply message -----
From: "Stuart Fox"
To: "openstack-operators at lists.openstack.org" <
openstack-operators at lists.openstack.org>
Subject: [Openstack-operators] Neutron nf_conntrack performance
Date: Sat, Aug 16, 2014 2:44 AM

Hey other ops!

I'm having a serious problem with my neutron router getting spin-locked in
nf_conntrack_tuple_taken.
Has anybody else experienced it?
As the incoming request rate goes up, nf_conntrack_tuple_taken runs
very hot on CPU0, causing ksoftirqd/0 to run at 100%. At that point internal
pings on the GRE network go sky high and it's game over.

Ubuntu 14.04/Icehouse 2014.1.1 on an ibm x3550 with 4 10G intel nics.
eth0 - Mgt
eth1 - GRE
eth2 - Public
eth3 - unused

Any help very much appreciated!

BR,
Stuart

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
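
A hedged aside for readers hitting similar conntrack pressure: the knobs usually
inspected first are the conntrack table limit and its hash table size. The values
below are illustrative only, not a recommendation from this thread:

  # current usage vs. limit
  sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
  # number of hash buckets backing the conntrack table
  cat /sys/module/nf_conntrack/parameters/hashsize

  # example: raise both if the table is saturating (persist the sysctl in /etc/sysctl.conf)
  sysctl -w net.netfilter.nf_conntrack_max=1048576
  echo 262144 > /sys/module/nf_conntrack/parameters/hashsize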

[Openstack-operators] How can I select host node to create VM?

Hello,
When I create an instance, I want to choose the host node.
I saw that by default, VMs are scheduled by the filter scheduler, which only considers compute resources.
I want to apply a scheduling algorithm on OpenStack that also considers the substrate network.
To do this, I have to monitor the utilization of compute/network resources.
For now, I'm planning to use Open vSwitch commands to learn the network situation.

Do you have any comment for me? (monitor/scheduling)

Regards
Taeheum Na


M.S. candidate of Networked Computing Systems Lab.
School of Information and Communications
GIST (Gwangju Inst. of Sci. and Tech.)
E-mail: thna at nm.gist.ac.kr
Phone: +82-10-2238-9424
Office: +82-62-715-2273

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20140818/470f4ef4/attachment.html

Basically, what you need is to write your own compute scheduler based on networking information.
This is not the first time somebody has proposed a networking-based scheduler. I would recommend looking back at previous OpenStack Summits, searching for sessions related to the compute scheduler topic, and then contacting the presenter(s) directly. It could save you a lot of time and development effort.

As for the networking information itself, Ceilometer can give you a lot of valuable data.

Cheers,

Edgar

From: Taeheum Na >
Date: Sunday, August 17, 2014 at 8:55 PM
To: "openstack-operators at lists.openstack.org" >
Subject: [Openstack-operators] How can I select host node to create VM?

Hello,
When I create an instance, I want to choose the host node.
I saw that by default, VMs are scheduled by the filter scheduler, which only considers compute resources.
I want to apply a scheduling algorithm on OpenStack that also considers the substrate network.
To do this, I have to monitor the utilization of compute/network resources.
For now, I'm planning to use Open vSwitch commands to learn the network situation.

Do you have any comment for me? (monitor/scheduling)

Regards
Taeheum Na


M.S. candidate of Networked Computing Systems Lab.
School of Information and Communications
GIST (Gwangju Inst. of Sci. and Tech.)
E-mail: thna at nm.gist.ac.kr
Phone: +82-10-2238-9424
Office: +82-62-715-2273

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
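
One common way to plug such logic into Nova is as a custom scheduler filter. A
minimal sketch of the wiring in nova.conf for the Icehouse filter scheduler -- the
NetworkUtilizationFilter name and its module path are hypothetical placeholders
for whatever class you implement; only the option names are standard:

  # /etc/nova/nova.conf on the node running nova-scheduler
  [DEFAULT]
  # keep the built-in filters available, and register the custom one
  scheduler_available_filters = nova.scheduler.filters.all_filters
  scheduler_available_filters = mypkg.scheduler.network_filter.NetworkUtilizationFilter
  # enable it alongside the usual defaults
  scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,NetworkUtilizationFilter

  # restart the scheduler so the new filter is loaded (service name varies by distro)
  service openstack-nova-scheduler restart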

[Openstack-operators] Operations summit

I wanted to post a few items on the upcoming operator summit
- I am going to moderate the Tuesday 10:30am Deploy/Config/Upgrade session. Any ideas on content are welcome.
- I would like to add Congress / Policy to the Tuesday 1:30pm session alongside Puppet, Chef, Salt, and Ansible. I think we are still missing someone to represent Salt.
- I believe the Monday 10:30am Network session will be on the nova-network to Neutron migration path. Any ideas on content are welcome.

I am going to think some on the Deploy/Config/Upgrade session agenda and post it here for early discussion.

~ sean

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20140818/3ca3c098/attachment.html

[Openstack-operators] Operations summit (networking)

Hi Folks,

Following Sean's initiative, I also would like to open the mailing lists to receive suggestions about the networking session on Monday.
I will be moderating that one, so please send your requests about the topics to cover. This is a full feedback session, which means zero slides; it is all about collecting feedback for the development team.

Thanks,

Edgar
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20140818/fdec646b/attachment.html

Mark McClain is also around for the Monday, so we should be able to
get him into the room if we schedule this wisely as well.

Michael

On Tue, Aug 19, 2014 at 7:37 AM, Michael Still wrote:
I will be coming to the ops meetup. My plan was mostly to float around
being helpful where I can, but I'm happy to take questions in a
nova-network to neutron upgrade session. It's one of the topics from
our mid-cycle meetup that I haven't covered yet, so there might also
be a write-up of our discussion before then.

For reference, the nova mid-cycle meetup is being summarized in a
series of blog posts. The ones posted so far are:

http://www.stillhq.com/openstack/juno/000005.html -- social issues
http://www.stillhq.com/openstack/juno/000006.html -- containers
http://www.stillhq.com/openstack/juno/000007.html -- ironic
http://www.stillhq.com/openstack/juno/000009.html -- db2 support
http://www.stillhq.com/openstack/juno/000010.html -- cells
http://www.stillhq.com/openstack/juno/000011.html -- bug management
http://www.stillhq.com/openstack/juno/000012.html -- scheduler

I still have slots; nova-network to neutron; tasks API; nova v3 API;
and hypervisor CI to go.

Michael

On Tue, Aug 19, 2014 at 6:01 AM, Joe Topjian wrote:

Hi Edgar,

A nova-network to neutron migration path topic is definitely of interest. In
that same category, how about a discussion of the longevity of nova-network:
it was recently unfrozen, how long can ops expect or want to use it in
production?

Maybe a poll of what kind of network configurations are currently being used
or would like to be used. If there are common ones, effort could go into the
docs to make sure they are well described.

ML2 discussion is obvious, but perhaps any questions and feedback on it
would be resolved by your deep dive?

Thanks,
Joe

On Mon, Aug 18, 2014 at 9:10 AM, Edgar Magana <edgar.magana at workday.com>
wrote:

Hi Folks,

Following Sean's initiative, I also would like to open the mailing lists
to receive suggestions about the networking session on Monday.
I will be moderating that one, so please send your requests
about the topics to cover. This is a full feedback session, which means zero
slides; it is all about collecting feedback for the development team.

Thanks,

Edgar


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

--
Rackspace Australia

--
Rackspace Australia

[Openstack-operators] Operations summit (storage)

I'll be moderating the storage session on Tuesday. As Edgar and Sean have mentioned, these sessions are for feedback, not presentations. And all the feedback will go back to the development teams

So what do you want to make sure is covered in the storage session?

I am very interested to get some feedback from all of you on Swift and some of the changes the Technical Committee is asking us to consider. But to be clear, as a "storage" track, we'll cover all storage-related projects in OpenStack.

--John

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 801 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20140818/70315403/attachment.pgp

[Openstack-operators] OpenStack-operators Digest, Vol 46, Issue 20

I figured out how to give the instance a virtual NIC. It is getting an IP
address from our DHCP, I can ssh in, and I can surf the web.

root at compute1-prod.compute1-prod:/var/log/neutron# ifconfig VLAN_15
VLAN_15   Link encap:Ethernet  HWaddr 00:25:90:5B:AA:A0
inet addr:10.50.15.48 Bcast:10.50.15.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8355781 errors:0 dropped:0 overruns:0 frame:0
TX packets:1644788 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:5040893113 (4.6 GiB) TX bytes:211825561 (202.0 MiB)

root at compute1-prod.compute1-prod:/var/log/neutron#
/etc/init.d/openvswitch start

ovs-vsctl list-br # you should see nothing
ovs-vsctl add-br br-int
ovs-vsctl list-br
ovs-vsctl add-br VLAN_15
ovs-vsctl add-br VLAN_20
ovs-vsctl list-br

Connect the open virtual switch to the instance

root at compute1-prod.compute1-prod:/var/log/neutron# virsh
attach-interface instance-00000006 bridge VLAN_15
Interface attached successfully

root at compute1-prod.compute1-prod:/var/log/neutron#

I rebooted the guest and voila!

$ ifconfig eth0
eth0 Link encap:Ethernet HWaddr 52:54:00:8E:E5:25
inet addr:10.50.15.239 Bcast:10.50.15.255 Mask:255.255.255.0
inet6 addr: fe80::5054:ff:fe8e:e525/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1404 errors:0 dropped:0 overruns:0 frame:0
TX packets:256 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:82015 (80.0 KiB) TX bytes:27940 (27.2 KiB)
Interrupt:10 Base address:0x6000

$
The IP address came from our dhcpd and works fine. dhcpd also
provided a default router and populated /etc/resolv.conf

My lead asked me to add ports to the VLAN bridges, since there are two NICs which
are bonded together using LACP:

root at compute1-prod.compute1-prod:/var/log/neutron# ovs-vsctl list-ports VLAN_15
root at compute1-prod.compute1-prod:/var/log/neutron# ovs-vsctl list-ports VLAN_20
root at compute1-prod.compute1-prod:/var/log/neutron# ovs-vsctl list-ports br-int
root at compute1-prod.compute1-prod:/var/log/neutron# ovs-vsctl add-port VLAN_15 bond0.15
root at compute1-prod.compute1-prod:/var/log/neutron# ovs-vsctl add-port VLAN_20 bond0.20
root at compute1-prod.compute1-prod:/var/log/neutron# ovs-vsctl list-ports VLAN_15
bond0.15
root at compute1-prod.compute1-prod:/var/log/neutron# ovs-vsctl list-ports VLAN_20
bond0.20
root at compute1-prod.compute1-prod:/var/log/neutron#

root at compute1-prod.compute1-prod:/var/log/neutron# virsh dumpxml 3

...







...

root at compute1-prod.compute1-prod:/var/log/neutron#

So I know that the hypervisor is working okay. This is exactly what I
want: I want the compute node to connect direct to the physical
switch/router. I don't want to go through neutron, which I think will
be a bottleneck. I think I have good enough security upstream to
protect the openstack. But how do I do this configuration using
neutron and open virtual switch? Or is this the way it is supposed to
be done?

Thank you

Jeff

On Mon, Aug 18, 2014 at 5:00 AM, <
openstack-operators-request at lists.openstack.org> wrote:

Send OpenStack-operators mailing list submissions to
openstack-operators at lists.openstack.org

To subscribe or unsubscribe via the World Wide Web, visit

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

or, via email, send a message with subject or body 'help' to
openstack-operators-request at lists.openstack.org

You can reach the person managing the list at
openstack-operators-owner at lists.openstack.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of OpenStack-operators digest..."

Today's Topics:

  1. How can I select host node to create VM? (Taeheum Na)
  2. Re: I have finally created an instance, and it works!
    However, there is no ethernet card (Andreas Scheuring)
  3. Operations summit (Sean Roberts)

Message: 1
Date: Mon, 18 Aug 2014 12:55:58 +0900
From: Taeheum Na
To: "openstack-operators at lists.openstack.org"

Subject: [Openstack-operators] How can I select host node to create
VM?
Message-ID:
<
0145D5DF1C56174FABB6EB834BC4909C03FEBD1915AC at NML2007.netmedia.kjist.ac.kr>

Content-Type: text/plain; charset="us-ascii"

Hello,
When I create an instance, I want to choose the host node.
I saw that by default, VMs are scheduled by the filter scheduler, which only
considers compute resources.
I want to apply a scheduling algorithm on OpenStack that also considers the
substrate network.
To do this, I have to monitor the utilization of compute/network
resources.
For now, I'm planning to use Open vSwitch commands to learn the network situation.

Do you have any comment for me? (monitor/scheduling)

Regards
Taeheum Na


M.S. candidate of Networked Computing Systems Lab.
School of Information and Communications
GIST (Gwangju Inst. of Sci. and Tech.)
E-mail: thna at nm.gist.ac.kr
Phone: +82-10-2238-9424
Office: +82-62-715-2273

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <
http://lists.openstack.org/pipermail/openstack-operators/attachments/20140818/470f4ef4/attachment-0001.html
>


Message: 2
Date: Mon, 18 Aug 2014 09:04:26 +0200
From: Andreas Scheuring
To: "openstack-operators at lists.openstack.org"

Subject: Re: [Openstack-operators] I have finally created an instance,
and it works! However, there is no ethernet card
Message-ID: <1408345466.4188.6.camel at oc5515017671.ibm.com>
Content-Type: text/plain; charset="UTF-8"

Assuming you're running a libvirt based hypervisor (e.g. kvm). Could you
please dump the libvirt xml of your instance?

You can get it this way

virsh list --all
--> shows a list of all virtual servers running on your hypervisor

virsh dumpxml <id>
--> dumps the xml of your vm, addressed by the id (first column). The id
does not correlate with the UUID! So if you're not sure which list entry
belongs to your instance, just stop all the others via openstack so that only
one is running.

PS: on some systems you have to sudo when using virsh.

There should be a subtag of devices called 'interface', with type 'bridge' or
'network' or something like that, representing your eth interface

Andreas

On Fri, 2014-08-15 at 11:43 -0700, Jeff Silverman wrote:

I have been surfing the internet, and one of the ideas that comes to
mind is modifying the /etc/neutron/agent.ini file on the compute
nodes. In the agent.ini file, there is a comment near the top that is
almost helpful:

L3 requires that an interface driver be set. Choose the one that
best matches your plugin.

The only plugin I know about is ml2. I have no idea if that is right
for me or not. And I have no idea how to choose the interface driver that
best matches my plugin.

Thank you

Jeff

On Fri, Aug 15, 2014 at 10:26 AM, Jeff Silverman
wrote:
By "defined a network space for your instances", does that
mean going through the process as described
in
http://docs.openstack.org/icehouse/install-guide/install/yum/content/neutron-ml2-compute-node.html
?

    I got part way through that when I realized that the procedure
    was going to bridge packets through neutron.  That's not what
    I want.  I want the packets to go directly to the physical
    router.  For example, I have two tenants, with IP addresses
    10.50.15.80/24 and 10.50.18.15.90/24, and the router is at
    10.50.15.1.  There is a nice picture of what I am trying to do
    at

http://docs.openstack.org/openstack-ops/content/network_troubleshooting.html#nova_network_traffic_in_cloud
. But if the hypervisor doesn't present a virtual device to the guests,
then nothing else is going to happen. The network troubleshooting guide
http://docs.openstack.org/openstack-ops/content/network_troubleshooting.html#nova_network_traffic_in_cloud
does not explain what to do if the virtual NIC is missing.

    Thank you


    Jeff




    On Fri, Aug 15, 2014 at 9:38 AM, Abel Lopez
    <alopgeek at gmail.com> wrote:
            Curious if you've defined a network space for your
            instances. If you're using the traditional
            flat_network, this is known as the 'fixed_address'
            space.
            If you're using neutron, you would need to create a
            network and a subnet (and router with gateway, etc).
            You'd then assign the instance to a network at launch
            time.



            On Aug 15, 2014, at 9:17 AM, Jeff Silverman
            <jeff at sweetlabs.com> wrote:

            > <ip_a.png>
            >
            > ?
            >
            > For those of you that can't see pictures:
            > $ sudo ip a
            > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc
            > noqueue
            >     link/loopback 00:00:00:00:00:00 brd
            > 00:00:00:00:00:00
            >     inet 127.0.0.1/8 scope host lo
            >     inet6 ::1/128 scope host
            >         valid_lft forever preferred_1ft forever
            >
            >
            > I suspect that the issue is that the hypervisor is
            > not presenting a virtual ethernet card.
            >
            >
            > Thank you
            >
            >
            >
            >
            > Jeff
            >
            >
            >
            >
            > On Thu, Aug 14, 2014 at 6:57 PM, Nhan Cao
            > <nhanct92 at gmail.com> wrote:
            >         can you show output of command:
            >         ip a
            >
            >
            >
            >
            >
            >
            >         2014-08-15 7:41 GMT+07:00 Jeff Silverman
            >         <jeff at sweetlabs.com>:
            >                 People,
            >
            >
            >                 I have brought up an instance, and I
            >                 can connect to it using my browser!
            >                  I am so pleased.
            >
            >
            >                 However, my instance doesn't have an
            >                 ethernet device, only a loopback
            >                 device.   My management wants me to
            >                 use a provider network, which I
            >                 understand to mean that my instances
            >                 will have IP addresses in the same
            >                 space as the controller, block
            >                 storage, and compute node
            >                 administrative addresses.  However,
            >                 I think that discussing addressing
            >                 is premature until I have a working
            >                 virtual ethernet card.
            >
            >
            >                 I am reading
            >                 through

http://docs.openstack.org/icehouse/install-guide/install/yum/content/neutron-ml2-compute-node.html
and I think that the ML2 plugin is what I need. However, I think I do not
want a network type of GRE, because that encapsulates the packets and I
don't have anything to un-encapsulate them.

            Thank you




            Jeff




            --
            Jeff Silverman
            Systems Engineer
            (253) 459-2318 (c)

            OpenStack-operators mailing list

OpenStack-operators at lists.openstack.org
>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

--
Jeff Silverman
Systems Engineer
(253) 459-2318 (c)


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

    --
    Jeff Silverman
    Systems Engineer
    (253) 459-2318 (c)

--
Jeff Silverman
Systems Engineer
(253) 459-2318 (c)


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

Message: 3
Date: Mon, 18 Aug 2014 00:34:22 -0700
From: Sean Roberts
To: openstack-operators at lists.openstack.org
Subject: [Openstack-operators] Operations summit
Message-ID:
Content-Type: text/plain; charset="us-ascii"

I wanted to post a few items on the upcoming operator summit
- I am going to moderate the Tuesday 10:30am Deploy/Config/Upgrade
session. Any ideas on content are welcome.
- I would like to add Congress / Policy to the Tuesday 1:30pm session
alongside Puppet, Chef, Salt, and Ansible. I think we are still missing
someone to represent Salt.
- I believe the Monday 10:30am Network session will be on the nova-network
to Neutron migration path. Any ideas on content are welcome.

I am going to think some on the Deploy/Config/Upgrade session agenda and
post it here for early discussion.

~ sean

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <
http://lists.openstack.org/pipermail/openstack-operators/attachments/20140818/3ca3c098/attachment-0001.html
>



OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

End of OpenStack-operators Digest, Vol 46, Issue 20


--
Jeff Silverman
Systems Engineer
(253) 459-2318 (c)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20140818/7506e506/attachment-0001.html

Hi Jeff,
what do you mean when you say "I don't want to go through
neutron"? Does it mean you're running nova-network?

Usually, when you configure openvswitch as the vswitch, neutron should
create the network ports for you. There's not much config required to
use it. You just need the ml2 core plugin and the openvswitch mechanism
driver configured.

This is a VLAN config I set up some time ago in neutron. But I'm not
sure about nova-network!

neutron.conf:
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

ml2_conf.ini:
[ml2]
type_drivers = vlan,flat
mechanism_drivers = openvswitch
tenant_network_types = vlan

[ml2_type_vlan]
network_vlan_ranges = phys-data:1000:2999

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[database]
connection = mysql://root:password at 127.0.0.1/neutron_ml2?charset=utf8

[ovs]
local_ip = 9.152.150.204
bridge_mappings = phys-data:br-data,phys-ex:br-ex
tenant_network_types = vlan

[agent]
root_helper = sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
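
For the direct-to-physical-switch setup Jeff describes, what neutron calls a
provider network is usually the answer. A hedged sketch against the config
above, reusing the phys-data mapping and Jeff's VLAN 15 addressing (names are
illustrative; DHCP is disabled because an external dhcpd is already handing out
leases):

  neutron net-create vlan15-net --shared \
      --provider:network_type vlan \
      --provider:physical_network phys-data \
      --provider:segmentation_id 15
  neutron subnet-create vlan15-net 10.50.15.0/24 \
      --name vlan15-subnet --gateway 10.50.15.1 --disable-dhcp

Instances booted on that network get a port on the mapped bridge tagged with
VLAN 15 and reach the physical router directly, without the neutron L3 agent in
the path.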

On Mon, 2014-08-18 at 16:33 -0700, Jeff Silverman wrote:
I figured out how to give the instance a virtual NIC. It is getting
an IP address from our DHCP, I can ssh in, and I can surf the web.

root at compute1-prod.compute1-prod:/var/log/neutron# ifconfig VLAN_15
VLAN_15   Link encap:Ethernet  HWaddr 00:25:90:5B:AA:A0
inet addr:10.50.15.48 Bcast:10.50.15.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

      RX packets:8355781 errors:0 dropped:0 overruns:0 frame:0
      TX packets:1644788 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0 
      RX bytes:5040893113 (4.6 GiB)  TX bytes:211825561 (202.0 MiB)

root at compute1-prod.compute1-prod:/var/log/neutron# /etc/init.d/openvswitch start

ovs-vsctl list-br # you should see nothing
ovs-vsctl add-br br-int
ovs-vsctl list-br
ovs-vsctl add-br VLAN_15
ovs-vsctl add-br VLAN_20

ovs-vsctl list-br

Connect the open virtual switch to the instance

root at compute1-prod.compute1-prod:/var/log/neutron# virsh attach-interface instance-00000006 bridge VLAN_15
Interface attached successfully

root at compute1-prod.compute1-prod:/var/log/neutron#

I rebooted the guest and voila!

$ ifconfig eth0
eth0 Link encap:Ethernet HWaddr 52:54:00:8E:E5:25
inet addr:10.50.15.239 Bcast:10.50.15.255 Mask:255.255.255.0

      inet6 addr: fe80::5054:ff:fe8e:e525/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:1404 errors:0 dropped:0 overruns:0 frame:0
      TX packets:256 errors:0 dropped:0 overruns:0 carrier:0

      collisions:0 txqueuelen:1000 
      RX bytes:82015 (80.0 KiB)  TX bytes:27940 (27.2 KiB)
      Interrupt:10 Base address:0x6000 

$
The IP address came from our dhcpd and works fine. dhcpd also provided a default router and populated /etc/resolv.conf

My lead asked me to add ports to the VLAN bridges since there are two NICs which are bonded together using LACP:

root at compute1-prod.compute1-prod:/var/log/neutron# ovs-vsctl list-ports VLAN_15
root at compute1-prod.compute1-prod:/var/log/neutron# ovs-vsctl list-ports VLAN_20

root at compute1-prod.compute1-prod:/var/log/neutron# ovs-vsctl list-ports br-int
root at compute1-prod.compute1-prod:/var/log/neutron# ovs-vsctl add-port VLAN_15 bond0.15
root at compute1-prod.compute1-prod:/var/log/neutron# ovs-vsctl add-port VLAN_20 bond0.20

root at compute1-prod.compute1-prod:/var/log/neutron# ovs-vsctl list-ports VLAN_15
bond0.15
root at compute1-prod.compute1-prod:/var/log/neutron# ovs-vsctl list-ports VLAN_20
bond0.20
root at compute1-prod.compute1-prod:/var/log/neutron#

root at compute1-prod.compute1-prod:/var/log/neutron# virsh dumpxml 3

...




  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</interface>

...

root at compute1-prod.compute1-prod:/var/log/neutron#

So I know that the hypervisor is working okay. This is exactly what I want: I want the compute node to connect direct to the physical switch/router. I don't want to go through neutron, which I think will be a bottleneck. I think I have good enough security upstream to protect the openstack. But how do I do this configuration using neutron and open virtual switch? Or is this the way it is supposed to be done?

Thank you

Jeff

On Mon, Aug 18, 2014 at 5:00 AM,
wrote:
Send OpenStack-operators mailing list submissions to
openstack-operators at lists.openstack.org

To subscribe or unsubscribe via the World Wide Web, visit

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

or, via email, send a message with subject or body 'help' to
openstack-operators-request at lists.openstack.org

You can reach the person managing the list at
openstack-operators-owner at lists.openstack.org

When replying, please edit your Subject line so it is more
specific
than "Re: Contents of OpenStack-operators digest..."


Today's Topics:

1. How can I select host node to create VM? (Taeheum Na)
2. Re: I have finally created an instance, and it works!
However, there is no ethernet card (Andreas Scheuring)
3. Operations summit (Sean Roberts)


----------------------------------------------------------------------

Message: 1
Date: Mon, 18 Aug 2014 12:55:58 +0900
From: Taeheum Na
To: "openstack-operators at lists.openstack.org"

Subject: [Openstack-operators] How can I select host node to
create
VM?
Message-ID:

<0145D5DF1C56174FABB6EB834BC4909C03FEBD1915AC at NML2007.netmedia.kjist.ac.kr>

Content-Type: text/plain; charset="us-ascii"

Hello,
When I create an instance, I want to choose the host node.
I saw that by default, VMs are scheduled by the filter scheduler,
which only considers compute resources.
I want to apply a scheduling algorithm on OpenStack that also
considers the substrate network.
To do this, I have to monitor the utilization of
compute/network resources.
For now, I'm planning to use Open vSwitch commands to learn the network
situation.

Do you have any comment for me? (monitor/scheduling)

Regards
Taeheum Na
****************************************************
M.S. candidate of Networked Computing Systems Lab.
School of Information and Communications
GIST (Gwangju Inst. of Sci. and Tech.)
E-mail: thna at nm.gist.ac.kr
Phone: +82-10-2238-9424
Office: +82-62-715-2273

-------------- next part --------------
An HTML attachment was scrubbed...
URL:


------------------------------

Message: 2
Date: Mon, 18 Aug 2014 09:04:26 +0200
From: Andreas Scheuring
To: "openstack-operators at lists.openstack.org"

Subject: Re: [Openstack-operators] I have finally created an
instance,
and it works! However, there is no ethernet card
Message-ID: <1408345466.4188.6.camel at oc5515017671.ibm.com>
Content-Type: text/plain; charset="UTF-8"

Assuming you're running a libvirt based hypervisor (e.g. kvm).
Could you
please dump the libvirt xml of your instance?

You can get it this way

virsh list --all
--> shows a list of all virtual servers running on your
hypervisor

virsh dumpxml <id>
--> dumps the xml of your vm, addressed by the id (first
column). The id
does not correlate with the UUID! So if you're not sure which
list entry
belongs to your instance, just stop all the others via openstack so
that only
one is running.

PS: on some systems you have to sudo when using virsh.


There should be a subtag of devices called 'interface', with type
'bridge' or 'network' or something like that, representing your eth
interface


Andreas


On Fri, 2014-08-15 at 11:43 -0700, Jeff Silverman wrote:

I have been surfing the internet, and one of the ideas that
comes to
mind is modifying the /etc/neutron/agent.ini file on the
compute
nodes. In the agent.ini file, there is a comment near the
top that is
almost helpful:

L3 requires that an interface driver be set. Choose the one that
best matches your plugin.

The only plugin I know about is ml2. I have no idea if that is right
for me or not. And I have no idea how to choose the interface driver that
best matches my plugin.

Thank you

Jeff

On Fri, Aug 15, 2014 at 10:26 AM, Jeff Silverman

wrote:
By "defined a network space for your instances",
does that
mean going through the process as described
in
http://docs.openstack.org/icehouse/install-guide/install/yum/content/neutron-ml2-compute-node.html?

    I got part way through that when I realized that the
    procedure
    was going to bridge packets through neutron.  That's
    not what
    I want.  I want the packets to go directly to the
    physical
    router.  For example, I have two tenants, with IP
    addresses
    10.50.15.80/24 and 10.50.18.15.90/24.and the router
    is at
    10.50.15.1.  There is a nice picture of what I am
    trying to do
    at
    http://docs.openstack.org/openstack-ops/content/network_troubleshooting.html#nova_network_traffic_in_cloud .  But if the hypervisor doesn't present a virtual device to the guests, then nothing else is going to happen.  The network troubleshooting guide http://docs.openstack.org/openstack-ops/content/network_troubleshooting.html#nova_network_traffic_in_cloud does not explain what to do if the virtual NIC is missing.




    Thank you


    Jeff




    On Fri, Aug 15, 2014 at 9:38 AM, Abel Lopez
    <alopgeek at gmail.com> wrote:
            Curious if you've defined a network space for your
            instances. If you're using the traditional
            flat_network, this is known as the 'fixed_address'
            space.
            If you're using neutron, you would need to create a
            network and a subnet (and router with gateway, etc).
            You'd then assign the instance to a network at launch
            time.



            On Aug 15, 2014, at 9:17 AM, Jeff Silverman
            <jeff at sweetlabs.com> wrote:

            > <ip_a.png>
            >
            > ?
            >
            > For those of you that can't see pictures:
            > $ sudo ip a
            > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436
    qdisc
            > noqueue
            >     link/loopback 00:00:00:00:00:00 brd
            > 00:00:00:00:00:00
            >     inet 127.0.0.1/8 scope host lo
            >     inet6 ::1/128 scope host
            >         valid_lft forever preferred_1ft
    forever
            >
            >
            > I suspect that the issue is that the
    hypervisor is
            > not presenting a virtual ethernet card.
            >
            >
            > Thank you
            >
            >
            >
            >
            > Jeff
            >
            >
            >
            >
            > On Thu, Aug 14, 2014 at 6:57 PM, Nhan Cao
            > <nhanct92 at gmail.com> wrote:
            >         can you show output of command:
            >         ip a
            >
            >
            >
            >
            >
            >
            >         2014-08-15 7:41 GMT+07:00 Jeff
    Silverman
            >         <jeff at sweetlabs.com>:
            >                 People,
            >
            >
            >                 I have brought up an
    instance, and I
            >                 can connect to it using my
    browser!
            >                  I am so pleased.
            >
            >
            >                 However, my instance
    doesn't have an
            >                 ethernet device, only a
    loopback
            >                 device.   My management
    wants me to
            >                 use a provider network,
    which I
            >                 understand to mean that my
    instances
            >                 will have IP addresses in
    the same
            >                 space as the controller,
    block
            >                 storage, and compute node
            >                 administrative addresses.
    However,
            >                 I think that discussing
    addressing
            >                 is premature until I have
    a working
            >                 virtual ethernet card.
            >
            >
            >                 I am reading
            >                 through
    http://docs.openstack.org/icehouse/install-guide/install/yum/content/neutron-ml2-compute-node.html and I think that the ML2 plugin is what I need.  However, I think I do not want a network type of GRE, because that encapsulates the packets and I don't have anything to un-encapsulate them.
            >
            >
            >                 Thank you
            >
            >
            >
            >
            >                 Jeff
            >
            >
            >
            >
            >                 --
            >                 Jeff Silverman
            >                 Systems Engineer
            >                 (253) 459-2318 (c)
            >
            >
            >
            >
            >
     _______________________________________________
            >                 OpenStack-operators
    mailing list
            >
     OpenStack-operators at lists.openstack.org
            >
     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
            >
            >
            >
            >
            >
            >
            >
            > --
            > Jeff Silverman
            > Systems Engineer
            > (253) 459-2318 (c)
            >
            >
            >
    _______________________________________________
            > OpenStack-operators mailing list
            > OpenStack-operators at lists.openstack.org
            >
    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
            >






    --
    Jeff Silverman
    Systems Engineer
    (253) 459-2318 (c)

--
Jeff Silverman
Systems Engineer
(253) 459-2318 (c)


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators





------------------------------

Message: 3
Date: Mon, 18 Aug 2014 00:34:22 -0700
From: Sean Roberts
To: openstack-operators at lists.openstack.org
Subject: [Openstack-operators] Operations summit
Message-ID:
Content-Type: text/plain; charset="us-ascii"

I wanted to post a few items on the upcoming operator summit
- I am going to moderate the Tuesday 10:30am
Deploy/Config/Upgrade session. Any ideas on content are
welcome.
- I would like to add Congress / Policy to the Tuesday 1:30pm
session alongside Puppet, Chef, Salt, and Ansible. I think we
are still missing someone to represent Salt.
- I believe the Monday 10:30am Network session will be on the
nova-network to Neutron migration path. Any ideas on content
are welcome.

I am going to think some on the Deploy/Config/Upgrade session
agenda and post it here for early discussion.

~ sean

-------------- next part --------------
An HTML attachment was scrubbed...
URL:


------------------------------

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


End of OpenStack-operators Digest, Vol 46, Issue 20
***************************************************

--
Jeff Silverman
Systems Engineer
(253) 459-2318 (c)

[Openstack-operators] Extends Ceilometer's plugin but not effective

Hello, guys.
Recently, I wrote a plugin for Ceilometer (OpenStack version: Icehouse).
I added "instance_cost =
ceilometer.compute.pollsters.billing:DaoliBillingPollster" to the file
"ceilometer-2014.1.1-py2.6.egg-info/entry_points.txt".
I also added a new file named "billing.py" located in
"ceilometer/compute/pollsters/". But I found that "billing.py" is not
loaded when I restart all the ceilometer services, like ceilometer-api,
ceilometer-compute and so on. I don't know which ceilometer service I
should restart to get "billing.py" loaded. Could someone
help me?

Thanks a lot!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20140819/db069830/attachment.html
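
A hedged pointer on the above: compute pollsters are discovered through the
ceilometer.poll.compute entry-point group and are only imported by the compute
agent, so openstack-ceilometer-compute (not ceilometer-api) is the service that
has to be restarted. A sketch of the stanza, assuming that group name and using
the poster's own module and class (note the underscores the archive stripped):

  # in the installed entry_points.txt (or, better, in the package's setup.cfg
  # under [entry_points], followed by a reinstall)
  [ceilometer.poll.compute]
  instance_cost = ceilometer.compute.pollsters.billing:DaoliBillingPollster

  # then restart only the compute agent so it imports billing.py
  service openstack-ceilometer-compute restart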

[Openstack-operators] I'd like to prohibit XML format.

Hi,

As everybody knows, the OpenStack APIs currently support both
XML and JSON formats.

I'd like to prohibit the XML format through OpenStack settings.
Is there any way to prohibit the XML format?
(Of course, I know that a proxy server (e.g. Apache) can achieve this.)

Best regards,
Rikimaru
--
Rikimaru Honjo
NTT Software Corporation
E-Mail : honjo.rikimaru at po.ntts.co.jp
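
One hedged illustration of the proxy approach Rikimaru mentions, using Apache
mod_rewrite in front of the API endpoint (a sketch, not an OpenStack setting;
note the Accept check is deliberately strict and will also reject clients that
merely list XML among acceptable types):

  RewriteEngine On
  # refuse requests that send or explicitly ask for XML
  RewriteCond %{HTTP:Content-Type} application/xml [NC,OR]
  RewriteCond %{HTTP:Accept}       application/xml [NC]
  RewriteRule ^ - [R=406,L]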

[Openstack-operators] Consuming vSphere via OpenStack API

We're working on a project with a customer where they want to use the OpenStack API as their interaction demarc. Obviously the first things that pop into my head are: how are you going to deploy to individual datastores? How are you going to be able to deploy to specific port groups? Can you deploy from vSphere templates rather than using Glance?

Thoughts? Ideas?


Eric Sarakaitis
Sr. Systems Engineer
419.303.4624 - mobile
513.841.6329 - desk
Eric.Sarakaitis at cbts.net<mailto:Eric.Sarakaitis at cbts.net>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: C86F6A8F-AD86-4727-A68E-CEFEBB87AD8B[21].png
Type: image/png
Size: 25386 bytes
Desc: C86F6A8F-AD86-4727-A68E-CEFEBB87AD8B[21].png
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20140821/17f90b52/attachment.png

[Openstack-operators] Vmware datastore backend storage for glance

Hello,

I have an openstack all-in-one on RHEL 7.

I configured my glance-api.conf file to use a datastore from my vcenter as a backend for images.

I restarted the glance-api service and it came up good with no errors, so I uploaded the image through the horizon dashboard, the image is a flat-VMDK of 2008R2 template.

My questions are:

1) Where can I see the progress of the upload through the CLI on the OpenStack node?

2) Why can't I see the image being uploaded to the datastore via the vsphere client?

Regards,

Ohad

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20140821/5fb504ec/attachment.html

Thanks Chris,

I have an issue: when I upload via Horizon, I get an upload progress bar at the bottom, but I'm stuck in the same window and can't go anywhere.
When the upload has finished, I receive a blank page, and after going to the datastore I configured as the backend, I couldn't find the image!
Does someone have any idea what could have gone wrong? I followed the exact steps to configure the datastore, but I still can't upload images to it.

Regards,
Ohad

From: Chris Buccella [mailto:chris.buccella at antallagon.com]
Sent: Thursday, August 21, 2014 7:30 PM
To: Baruch, Ohad
Cc: openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] Vmware datastore backend storage for glance

1) Where can I see the progress through the CLI of the openstack node of the uploading process?
If you start the upload from the CLI, you can pass --progress to glance image-create to see a progress bar. I don't think the glance CLI has a way to see the upload progress if you initiated the upload through horizon.
-Chris

On Thu, Aug 21, 2014 at 10:40 AM, Baruch, Ohad <Ohad.Baruch at emc.com<mailto:Ohad.Baruch at emc.com>> wrote:

Hello,

I have an openstack all-in-one on RHEL 7.

I configured my glance-api.conf file to use a datastore from my vcenter as a backend for images.

I restarted the glance-api service and it came up good with no errors, so I uploaded the image through the horizon dashboard, the image is a flat-VMDK of 2008R2 template.

My questions are:

1) Where can I see the progress through the CLI of the openstack node of the uploading process?

2) Why can't I see the image being uploaded to the datastore via the vsphere client?

Regards,

Ohad


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
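
Building on the --progress tip, a hedged example of driving the whole upload
from the CLI so the transfer can be watched; the image name is made up and the
vmware_* properties should be adjusted to match the template:

  glance image-create --name win2008r2-template \
      --disk-format vmdk --container-format bare \
      --property vmware_disktype="preallocated" \
      --property vmware_adaptertype="lsiLogic" \
      --progress --file win2008r2-flat.vmdk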

[Openstack-operators] Operations summit (database)

Database needs have already been an important topic at past
Operator-focused events in California [1] and Atlanta [2]. As I will be
moderating the database discussion on Tuesday at the OpenStack Operations
Summit, I wanted to kickstart the conversation. Please share any topics
you'd like to cover, questions, ideas, needs, etc. related to the database
core of OpenStack.

Best,
Matt

[1] March 2014 Operators Mini Summit:
https://etherpad.openstack.org/p/operators-feedback-mar14
[2] April 2014 OpenStack Summit:
https://etherpad.openstack.org/p/juno-summit-ops-database
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20140822/fb2a5581/attachment.html

[Openstack-operators] OpenStack Community Weekly Newsletter (Aug 15 – 22)

  Call for Proposals: Open Source Ecosystem Design Sessions at the
  Paris Summit
  <http://www.openstack.org/blog/2014/08/call-for-proposals-open-source-ecosystem-design-sessions-at-the-paris-summit/>

We're continuing the Open Source Ecosystem Design Sessions at the
OpenStack Summit Paris (Nov 3-7, 2014). It's an opportunity to foster
the projects and communities that don't fall under the umbrella of the
OpenStack Foundation, but are actively being used and developed within
the greater ecosystem. We were inspired by everyone who came together to
plan the next development cycle on their storage, configuration
management, orchestration and networking projects. If you have an open
source project that's related to OpenStack and has a significant
community of contributors, we invite you to submit a proposal
https://docs.google.com/a/openstack.org/forms/d/1j76uPTPevOph1LzADs8RBsu6VJBdlFtSMWe5TPszqW8/viewform.

  OpenStack Ceilometer and the Gnocchi experiment
  <http://julien.danjou.info/blog/2014/openstack-ceilometer-the-gnocchi-experiment>

Julien Danjou http://julien.danjou.info/blog/ reports on what's
happening within the OpenStack Telemetry program, with a retrospective
on Ceilometer and its drawbacks.

  Your baby is ugly! Picking which code is required for Commercial
  Core. <http://robhirschfeld.com/2014/08/18/ugly-baby/>

Entertaining post by Rob Hirschfeld http://robhirschfeld.com/ about
DefCore. "There's no point in sugar-coating this: selecting API and code
sections for core requires making hard choices and saying no
http://robhirschfeld.com/2014/08/12/patchwork-onion/. DefCore makes
this fair by 1) defining principles for selection, 2) going slooooowly
to limit surprises and 3) being transparent in operation."

  Who Are the Most Influential People of OpenStack?
  <http://www.metacloud.com/influential-people-openstack/>

Fun game by the fun folks at Metacloud. They're printing and giving away
in Paris custom-printed decks of cards that include the most influential
people of OpenStack. The best part? The community gets to decide who is
featured in the deck!

The Road To Paris 2014 -- Deadlines and Resources

During the Paris Summit there will be a working session for the Women of
OpenStack to frame up more defined goals and line out a blueprint for
the group moving forward. We encourage all women in the community to
complete this very short survey
https://www.surveymonkey.com/s/V39BL7H to provide input for the group.

Ask OpenStack https://ask.openstack.org/ is the go-to destination for
OpenStack users. Interesting questions waiting for answers:

Welcome Michael Bayer
https://review.openstack.org/#/q/owner:%22Michael+Bayer+%253Cmike_mp%2540zzzcomputing.com%253E%22,n,z
to oslo-db core reviewers

Vlad Okhrimenko
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person73df8941-d347-46e0-8691-a41ad1783f77
Yu Zhang
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persone499c1c4-165a-4cec-b939-0ab9077d31b9

Simona Iuliana Toader
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person9b8b2e2e-987d-41c2-880f-487d76d18e50
Taku Fukushima
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personaf6e3146-6baa-472a-8817-1c7e1a705bb5

Sergey Lupersolsky
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person4378e09c-8ba1-4c7d-9307-6db9b8c2790b
Simon Leinen
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personf2d8f84d-e7e9-485e-8c4d-fd26b6ce2866

Johnu George
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person5da67528-ea48-4a6a-a998-8671f9ae3ae4
Shaun McDowell
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person52a3030e-bdc2-4f9d-baf2-91a070eb9eb6

Brent Roskos
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person728dd0f4-9969-45a7-8b5e-33890ab26ddf
RedBaron
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persona3f66e5a-854d-42ca-ac13-da3845fd41df

Bhavani Shankar
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person805d55f7-52d3-466f-b1a6-3ec777c6e8d7
Jesse Proudman
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persone2bf7508-53ae-45f7-b4d4-effea68b2788

Alexis GUNST
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personceb2b889-a963-4a0d-b6ef-50ec963a8401
Jesse J. Cook
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personb6106cdb-e5c9-4fb4-b0dd-3cb00451fd2e

Adelina Tuvenie
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person72362383-e8aa-49e2-a137-b6e03f694532
Edouard Outin
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person6587fdae-2efc-4d61-ba0c-181f3c06ebea

Doug Baer
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person308ae4b0-501b-4307-bfc9-acba62332568
justin thomas simms
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personb0f011c8-322d-4126-aee4-e4fc2aef37ff

Angus Thomas
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persond948a073-f5d9-4fe8-af81-8295f2f7d8b7
Tom Holtzen
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persondba747c7-b07e-448c-bd3d-729bbcae72eb

avi berger
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person875214a9-cb82-40fe-9c9e-80f2ea6d9454
TARDIVEL
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personce05d6f8-2eba-4733-9ae0-663b443fb83e

Dorin Paslaru
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persond763be36-755a-42e9-b980-a2a20f3d8af7
Samer Deeb
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personad952e96-0860-4abb-a3ab-4ed914fdfbfb

Aishwarya Thangappa
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person88bbb684-2706-4f00-94c2-9d1528b39e6c
Puneet Arora
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person0236b8b6-3be0-4a4d-b5a0-a4158e593f62

phanipawan

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personbfef769e-bf1e-434f-810f-6d653a35b73f

Venkatasubramanian Ramachandran

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person187b7b7e-4104-432c-af0f-338fb49c950c

Tom?? Nov?c(ik

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person0abe1517-8388-4448-82d6-41fc99ce5924

Kyle Stevenson

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person955500ce-eeb1-4330-8720-8fb9543d1f89

Can ZHANG

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personf394347f-f45f-42fe-9207-1809ab7628e2

Doug Baer

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person308ae4b0-501b-4307-bfc9-acba62332568

Clinton Knight

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personc83c18d6-be3b-440f-8ab6-bf2d4f92bc67

???? ???????????

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person6855aef1-5955-4f7b-9ed3-c9e289d90ca2

Steven Tan

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person9b736ede-8525-4167-9aae-86303f20eb18

Rob Cresswell

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person39683670-586a-4441-906a-9e0a8167f5e4

yuriy brodskiy

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person57171c72-da4d-4b83-9b08-c05a61d30c4d

avi berger

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person875214a9-cb82-40fe-9c9e-80f2ea6d9454

Oscar Romero

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personf440882f-76b1-48e2-a60b-538661d64b85

Louis Taylor

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person2e1a714a-b6c4-48f8-a03d-b0ce21677ac7

Latest Activity In Projects

Do you want to see at a glance the bugs filed and solved this week?
Latest patches submitted for review? Check out the individual project
pages on OpenStack Activity Board -- Insights
http://activity.openstack.org/data/display/OPNSTK2/.

/The weekly newsletter is a way for the community to learn about all the
various activities occurring on a weekly basis. If you would like to add
content to a weekly update or have an idea about this newsletter, please
leave a comment./

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20140822/1c7a0f05/attachment.html
-------------- next part --------------
A non-text attachment was scrubbed...
Name: icon_smile.gif
Type: image/gif
Size: 174 bytes
Desc: not available
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20140822/1c7a0f05/attachment.gif

[Openstack-operators] VMware datastore backend for glance

Hi everyone,

I'm having a problem during my image upload; I get an HTTP 400 error:
"400 Bad Request"
"Client disconnected before sending all data to backend"
When looking at the logs I see:
"File "/usr/lib/python2.7/site-packages/eventlet/greenio.py", line 295, in send"
"totalsent += fd.send(data[totalsent:], flags)"
"error: [Errno 32] Broken pipe"

It seems that the session with vSphere works fine, but for some reason it doesn't send any data.
Can anyone please assist?

Regards,
Ohad
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20140825/3cc0163d/attachment.html

[Openstack-operators] Iperf isn't work between VMs which are placed in other host

Hello, I tried to measure network performance between VMs connected through a GRE tunnel, so I configured neutron (ML2/GRE).
Then I verified TCP performance between VMs placed on the same host.
But I could not make an iperf connection between VMs on different hosts, although ping is OK.
Below is the current security group configuration. I think I allow all the traffic needed to run iperf.
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| udp | 1 | 65535 | 0.0.0.0/0 | |
| tcp | 1 | 65535 | 0.0.0.0/0 | |
| icmp | -1 | -1 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
Do you have any comment?

Regards
Taeheum Na


M.S. candidate of Networked Computing Systems Lab.
School of Information and Communications
GIST (Gwangju Inst. of Sci. and Tech.)
E-mail: thna at nm.gist.ac.kr
Phone: +82-10-2238-9424
Office: +82-62-715-2273


We usually serve a reduced MTU to instances via a DHCP option:

(at network node for neutron)
/etc/neutron/dnsmasq-neutron.conf:
dhcp-option-force=26,1476
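For completeness, that dnsmasq snippet only takes effect if the DHCP agent is pointed at the file; a minimal sketch, assuming the stock agent config path (adjust for your distro), with a restart of the DHCP agent afterwards:

# /etc/neutron/dhcp_agent.ini (assumed path)
[DEFAULT]
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf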

On 08/26/2014 05:19 AM, Taeheum Na wrote:

Hello, David

Your comment was exactly correct!!

I referred to the following blog, then adjusted the MTU on the VM.

And currently it works well.

http://techbackground.blogspot.kr/2013/06/path-mtu-discovery-and-gre.html

Thank you for your comment.

It helped save me time.

From: medberry at gmail.com [mailto:medberry at gmail.com] On Behalf Of David Medberry
Sent: Tuesday, August 26, 2014 1:40 AM
To: Taeheum Na
Cc: openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] Iperf isn't work between VMs
which are placed in other host

What MTU are you using? (Try different ping sizes to see if this
problem also occurs with large pings.) You will likely want to limit
the VMs to a 1454-byte MTU and re-run your tests. Google for GRE OpenStack
MTU for more info.


[Openstack-operators] consolidation options for nova scheduler

Good day.

I can't find any option for the nova scheduler to consolidate new instances
onto a few hosts instead of 'spreading' them across all available hosts.

Simple example: let's say we have 10 hosts, each with 10 GB of memory
for instances, and flavors of 3 GB and 5 GB of RAM. If we run 20 new
instances, they will consume about 6 GB per host and we will not be able to
run a new instance with 6 GB of RAM (even though we have 10*4 = 40 GB of free
memory across the computes, none of the hosts has more than 4 GB free).

Is there a nice way to tell OpenStack to 'consolidate'? Thanks!

Hi,
IIUC you probably want to set ram_weight_multiplier to a negative number.

From the OpenStack documentation [1]:

By default, the scheduler spreads instances across all hosts evenly. Set
the ram_weight_multiplier option to a negative number if you prefer
stacking instead of spreading. Use a floating-point value.

Simon

[1]
http://docs.openstack.org/trunk/config-reference/content/section_compute-scheduler.html
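As an illustration (not from the thread itself), the stacking behaviour would be configured on the scheduler node roughly like this; the option name is the documented one, the -1.0 value is just an example, and nova-scheduler needs a restart to pick it up:

# nova.conf on the controller/scheduler node
[DEFAULT]
# negative weight: hosts with less free RAM score higher, so instances stack
ram_weight_multiplier = -1.0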


[Openstack-operators] High Availability Guide team

Hi all,
Cross-posting to -docs and -operators.

At the Ops mid-cycle meetup this week, Matt Griffin, David Medberry, and
Sriram Subramanian offered to start a review team for the High Availability
Guide. It needs some updates and it's best if the review team works similar
to the Security Guide -- subject matter experts working with core docs team
members for reviews. So I'm proposing we pull it into its own repo just
like the Security Guide. Any reasons not to? I think the next steps are:

  1. Propose a patch to openstack/governance to show the repo is governed by
    the Docs program.
  2. Propose a patch that sets up a separate review team starting with Matt,
    David, and Sriram. We think Emilien would be interested too, sound good?
    Any others? We can have members of openstack-docs-core as well, similar to
    the Security Guide.
  3. Propose a patch that gets that guide building separately in a different
    repo.
  4. Start working on the four bugs already logged [1] and log more as
    needed.

Any other subtasks? Any interested parties?

Thanks,
Anne

  1. https://bugs.launchpad.net/openstack-manuals/+bugs/?field.tag=ha-guide

I'm interested in helping too.

From: John Dewey
Date: Tuesday, August 26, 2014 at 14:03
To: Anne Gentle
Cc: David Medberry <david.medberry at canonical.com>, openstack-operators at lists.openstack.org, openstack-docs at lists.openstack.org, Sriram Subramanian
Subject: Re: [Openstack-operators] High Availability Guide team

Definitely interested in helping.


[Openstack-operators] slides for Linuxcon/CloudOpen talk: "Running OpenStack at Scale: Beyond the Private Cloud PoC"

Hi all,

Marcos Garcia suggested I post my slides from the Linuxcon/CloudOpen
talk I gave last Friday, so here they are:

https://drive.google.com/file/d/0B5cZ2y527ClxSXh4bERLam5YY2s/edit?usp=sharing

Feel free to make any suggestions or point out any corrections you might
have.

Thanks,
Dan

--
Dan Yocum
Sr. Systems Engineer
OpenShift | PaaS by Red Hat
dyocum at redhat.com

Thanks Dan. Nice write-up. -dave


[Openstack-operators] Neutron metadata slow

What do you mean by 'slow'? Transfer speed or time to reply?

Check the link between the neutron metadata proxy and nova-api. It can get stuck
due to slow DNS or token validation.
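A quick way to tell which of the two it is, from inside an affected instance (169.254.169.254 is the standard metadata address; the path is just an example):

# time a single metadata request from inside the instance
time curl -s http://169.254.169.254/latest/meta-data/hostname

If the first byte takes several seconds to arrive, the delay is usually on the proxy/nova-api/keystone side rather than in the transfer itself.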

On 08/27/2014 05:23 AM, Nhan Cao wrote:

hi guys,
I have a problem.
When I create a new tenant router per project, my Ubuntu instance fetches
metadata very slowly. I don't see any metadata-related errors in the service logs.

Any hints would be much appreciated!



[Openstack-operators] Operators IRC Meeting

At the Operators Summit we had a discussion about getting an Operators IRC
meeting together. An opportunity for the ops to come together and have a
real time conversation. We were also thinking about having a "guest" star
(Michael I'm looking at you) so we can talk about specific things
like...Nova.

I'm opening this up to the group; there was some really positive feedback
at the Summit, so how about here?

--
Best Regards,
JJ Asghar
c: 512.619.0722 t: @jjasghar irc: j^2

+1


[Openstack-operators] neutron network configuration: linux bridges

I found something strange I can't solve by myself.

Sometimes I get a configuration where VM traffic reaches the GRE
tunnels via the following sequence: tap->qbr->qvb->qvo->br-int->br-tun->eth0.
It is described here:
http://docs.openstack.org/grizzly/openstack-network/admin/content/under_the_hood_openvswitch.html

But sometimes I get a very different configuration:
tap->br-int->br-tun->eth0. It works, and works fine. And I just
can't find the option, configuration difference, or combination of both
that causes switching between the 'simple' and 'normal' scheme. I feel like
I don't understand something basic and important.

Any ideas? Thanks!

On 08/28/2014 02:57 PM, Assaf Muller wrote:


It depends on the vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver
option in nova.conf.

In my case, in both configurations (with qbr/qvb/qvo and without), nova-compute
starts without a vif_driver defined. I enabled debug (in both cases) and
the debug config printout says
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver, and
there is no mention of a plain 'vif_driver'. Are there any other options related
to this? And what values can vif_driver take other than
nova.virt.libvirt.vif.LibvirtGenericVIFDriver?

Thanks!
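One detail the thread leaves implicit: with the generic VIF driver, whether the hybrid qbr/qvb/qvo plumbing gets created usually depends on the firewall configuration rather than on the VIF driver itself (the extra Linux bridge exists so iptables security-group rules have somewhere to attach). A hedged sketch of the Havana/Icehouse-era nova.conf knobs involved; verify the exact names and class paths against your release:

# nova.conf on the compute node (illustrative)
[DEFAULT]
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver
# hybrid path (tap->qbr->qvb->qvo->br-int) when an iptables-based driver is in use:
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
# direct path (tap->br-int) when no per-port iptables filtering is done:
# firewall_driver = nova.virt.firewall.NoopFirewallDriver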

[Openstack-operators] Night-time migration of VMs, OpenStack

Tonight I would like to migrate the following VMs to the newly upgraded openstack-02 server. A VM may be temporarily unreachable because of this (1-10 minutes). After this migration we will shut the server down and upgrade it (+1x CPU, +148 GB RAM). For all VMs I will make sure that they start back up and that the network works.

server: openstack-03
| ac69bef4-ad6e-49a8-9062-5f22dfae34c9 | GiraffPlus CTX | ACTIVE | GiraffPlus=10.32.19.5, 10.10.43.115
| 45ec86c1-d0c4-4841-a654-28bb193883bb | GiraffPlus Mongo 1 | ACTIVE | GiraffPlus=10.32.19.4, 10.10.43.89
| 7a367f28-277c-43d1-8d09-8814591a6aad | GiraffPlus Mongo 3 | ACTIVE | GiraffPlus=10.32.19.7, 10.10.43.90
| 789be527-56aa-4bcd-ab1b-2d605efeae55 | GiraffPlus XL Web | ACTIVE | GiraffPlus=10.32.19.3, 10.10.43.85
| 1fa04175-40ee-49f7-9889-f374a77eed4d | WD20-gateway | ACTIVE | XNautica=10.32.17.18, 10.10.43.68
| 11e2f0aa-ff20-4212-827e-280c1d14799e | Wirecloud | ACTIVE | finesce-interna=10.32.27.6; finesce-dmz-network=10.32.28.4, 10.10.43.120
| e9229360-9101-4452-8b3e-ffce0032fbd5 | dmt-db | ACTIVE | dmt-lan=10.32.10.9, 10.10.43.111
| f830877e-3325-480d-bedd-de1c8b78cad5 | dmt-logging | ACTIVE | dmt-lan=10.32.10.10
| b5c1b8a8-c783-4377-82fe-b0148c8e9f51 | logger | ACTIVE | olaii=10.32.13.107, 10.10.43.116
| 43f44f20-12df-4784-91bb-5b359ed2d9c9 | mongodb | ACTIVE | XNautica=10.32.17.11, 10.10.43.109
| 894774bb-007c-4124-a29c-2c2dfdd3ccea | olaii-api | ACTIVE | olaii=10.32.13.106, 10.10.43.65
| 5a1eb773-4fa7-4a2f-b52a-a7a642833f0a | olaii-solr | ACTIVE | olaii=10.32.13.104, 10.10.43.99
| a3eed585-5dd6-4b75-94fa-854a7e002db3 | phov-mongo | ACTIVE | gaea-crunchers=10.32.21.9, 172.16.95.6
| 150db7ae-2531-475b-bf2b-a1f5e3cee286 | speu-websvc-analytics | ACTIVE | speu-net=10.32.23.5, 172.16.95.2
| d8193a12-06ca-43c7-a2d4-95024773461d | speu-x-portal | ACTIVE | speu-net=10.32.23.4, 172.16.95.33
| fbcea9e2-3b48-423e-ab25-cc5af17727e2 | teltonika-nosql | ACTIVE | cloudscale=10.32.11.102, 10.10.43.71
| a8a79a32-dab1-4acc-9c1f-e0e583516971 | xmarine-stage | ACTIVE | XNautica=10.32.17.16, 10.10.43.105

Regards,
Robert

Server upgraded.

Tonight I will re-add the openstack-03 server to the Ceph storage cluster, which will cause degradation/resync of the storage cluster. During that time the VMs will be quite slow (and possibly intermittently unreachable); it should not take more than 1-2 hours.

This disruption will be noticeable on all VMs.

  • Robert

----- Original Message -----
From: "Robert Plestenjak" <robert.plestenjak at xlab.si>
To: "Ales Stimec" <ales.stimec at xlab.si>, "Ales Cernivec" <ales.cernivec at xlab.si>, "Stas - XLAB" <stas.strozak at xlab.si>, "Boris Savič" <boris.savic at xlab.si>, "Matej Artač" <matej.artac at xlab.si>, "Uroš Trebec" <uros.trebec at xlab.si>, "Gregor Berginc" <gregor.berginc at xlab.si>, "Jure Polutnik" <jure.polutnik at xlab.si>, "Simon Ivanček" <simon.ivansek at xlab.si>, "Anze Brvar" <anze.brvar at xlab.si>
Cc: "Justin Cinkelj" <justin.cinkelj at xlab.si>, openstack-operators at lists.openstack.org
Sent: Friday, August 29, 2014 7:41:51 AM
Subject: Re: Night-time migration of VMs, OpenStack

Migration completed, but I would like to point out a "new feature" of the latest bugfix of OpenStack Quantum networking.

OpenStack now also filters traffic between VMs within the same project. Because of this, you need to add to the Security Groups the ports over which VMs within a project communicate with each other (thanks Matej A.); see the sketch below for an example of such a rule.

  • Robert
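For illustration only (not part of the original notice): a same-project allow rule can reference the security group itself instead of a CIDR, so members of the group can reach each other. The group ID and port range here are placeholders:

# allow all TCP between instances that belong to the same security group
neutron security-group-rule-create --direction ingress --protocol tcp \
  --port-range-min 1 --port-range-max 65535 \
  --remote-group-id <SECGROUP_ID> <SECGROUP_ID>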


[Openstack-operators] Migrating instances from two different environments

Hi,

I need to migrate my running instances from Grizzly to Icehouse. Are there
any tools to do this? Please let me know the process.

Thanks George, will try that

On Thu, Aug 28, 2014 at 12:40 PM, George Shuklin <george.shuklin at gmail.com>
wrote:


Instances only, or whole infrastructure? If instances only, make
snapshots, download them from g-stack and upload to i-stack, then create
new instances from those snapshots.
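A rough sketch of that snapshot round-trip with the era CLI clients; names, IDs, the flavor, and the disk format are placeholders (match --disk-format to what the snapshot actually is):

# on the Grizzly cloud: snapshot the instance and download the image
nova image-create <INSTANCE_ID> migrate-snap
glance image-download --file migrate-snap.img <IMAGE_ID>

# on the Icehouse cloud: upload the image and boot from it
glance image-create --name migrate-snap --disk-format qcow2 \
  --container-format bare --file migrate-snap.img
nova boot --image migrate-snap --flavor <FLAVOR> migrated-instance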



[Openstack-operators] Ceph crashes with larger clusters and denser hardware

One of my colleagues here at Comcast just returned from the Operators
Summit and mentioned that multiple folks experienced Ceph instability with
larger clusters. I wanted to send out a note and save headache for some
folks.

If you up the number of threads per OSD, there are situations where many
threads could be quickly spawned. You must up the max number of PIDs
available to the OS, otherwise you essentially get fork bombed. Every
single Ceph process will crash, and you might see a message in your shell
about "Cannot allocate memory".

In your sysctl.conf:

For Ceph

kernel.pid_max=4194303

Then run "sysctl -p". In 5 days on a lab Ceph box, we have mowed through
nearly 2 million PIDs. There's a tracker about this to add it to the
ceph.com docs.

Warren
@comcastwarren
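To sanity-check the change (not from the original post; just plain sysctl/ps, and the thresholds are yours to pick):

# confirm the new limit and get a rough count of kernel tasks (threads) in use
sysctl kernel.pid_max
ps -eLf --no-headers | wc -l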

What version of ceph was this seen on?


[Openstack-operators] Graceful Swift reloads

There isn't much documentation about caveats of the 'swift-init reload' command, which is meant to allow existing connections to finish up before restarting.

However, I've observed this creating a number of additional processes while those connections finish up (in some cases, nearly double the amount of normal processes). Since the documentation doesn't define in detail what occurs, and the code path behind this behavior is a bit rough, two questions:

1) Does it restart a single child process only after all connections to it are complete/closed, or does it fire up the normal amount of reloaded child processes, and let the existing processes finish up? (The latter would explain the bloat in process count)

2) Are any options in place to at least keep total connection count in check such that a proxy can't consume more backend storage connections than it normally would have the capacity to?

Ultimately I'm less concerned about process count and more concerned about total connections to backend storage nodes. Even though frontend connection capacity may not be bloated as a result of those additional processes, long running/keepalive connections may still be able to cause a single proxy to consume a higher than normal amount of connections to backend storage nodes.

More clarification and details on what exactly occurs with the reload command would be most appreciated.

Thanks,
Brian
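For anyone wanting to observe the behaviour themselves, a hedged sketch (standard swift-init and pgrep invocations; the process-count bloat described above shows up as a temporary jump that drains back down as old workers finish):

# graceful reload of the proxy workers, then watch the worker count
swift-init proxy-server reload
watch -n 5 'pgrep -fc swift-proxy-server'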


Any insights from others using this command?


[Openstack-operators] Increase MTU from VM to VM, pass through physical Network devic

Dear experts,

My issue is that:

  1. I set up OpenStack in multi Nodes model, with ML2 Plugin and GRE mode.

  2. I created 2 VMs in the same Compute node. (1 Web and 1 DB).

  3. My app needs full support for 1500-byte packets.

How can I configure OpenStack to do that?


I already configured:

  1. Set MTU of Ethernet_interface in VM

VM1# ifconfig eth0 mtu 1600

VM2# ifconfig eth0 mtu 1600

root at cuong-vm-01:~# netstat -i

Kernel Interface table

Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg

eth0 1600 0 15958 0 0 0 10317 0 0 0 BMRU

lo 65536 0 0 0 0 0 0 0 0 0 LRU

root at cuong-vm-02:~# netstat -i

Kernel Interface table

Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg

eth0 1600 0 15936 0 0 0 10009 0 0 0 BMRU

lo 65536 0 0 0 0 0 0 0 0 0 LRU

  1. Set MTU of every port in BR-INT to 1600Bytes

root at controller:~# netstat -i

Kernel Interface table

Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg

br-ex 1500 0 21680 0 0 0 2928 0 0 0 BRU

br-int 1600 0 308 0 0 0 6 0 0 0 BRU

br-tun 1500 0 0 0 0 0 6 0 0 0 BRU

eth0 1500 0 483582 0 2 0 22859 0 0 0 BMPRU

eth1 1500 0 220 0 0 0 6 0 0 0 BMRU

lo 65536 0 174389 0 0 0 174389 0 0 0 LRU

qbr480003c8-57 1500 0 29 0 0 0 6 0 0 0 BMRU

qbr59db81a4-93 1500 0 46 0 0 0 6 0 0 0 BMRU

qvb480003c8-57 1500 0 15907 0 0 0 9877 0 0 0 BMPRU

qvb59db81a4-93 1500 0 15773 0 0 0 10087 0 0 0 BMPRU

qvo480003c8-57 1600 0 9877 0 0 0 15907 0 0 0 BMPRU

qvo59db81a4-93 1600 0 10087 0 0 0 15773 0 0 0 BMPRU

tap480003c8-57 1500 0 9995 0 0 0 15912 0 0 0 BMRU

tap59db81a4-93 1500 0 10169 0 0 0 15764 0 0 0 BMRU

virbr0 1500 0 0 0 0 0 0 0 0 0 BMU

However, I couldn't ping from VM01 to VM02 without fragmentation:

root at cuong-vm-01:~# traceroute --mtu 172.16.10.13

traceroute to 172.16.10.13 (172.16.10.13), 30 hops max, 65000 byte packets

1 * F=1600 * *

2 * * *

3 * * *

4 * *^C


root at cuong-vm-02:~# ping -s 1500 -M do 172.16.10.12

PING 172.16.10.12 (172.16.10.12) 1500(1528) bytes of data.

Thank you very much.


The tap interfaces from QEMU, and the veth pair from the qbr linux bridge
to the br-int OVS integration bridge also need to be configured for jumbo
frames. This article may help to get the lay of the land:
http://openstack.redhat.com/Networking_in_too_much_detail It's a good idea
to configure all interfaces on an L2 segment with the same MTU (at least
all the interfaces with IP addresses, and ensure any L2 interfaces have the
same or larger MTU) otherwise path MTU detection doesn't work.

Dustin Lundquist
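Not from the reply itself, but the era-appropriate knobs for getting the tap/veth plumbing created with a larger MTU are roughly these (option names existed around Havana/Icehouse; the paths and the 1600 value are illustrative, so verify against your release):

# nova.conf on compute nodes: MTU for the tap/veth devices nova creates
[DEFAULT]
network_device_mtu = 1600

# neutron OVS agent config (e.g. /etc/neutron/plugins/ml2/ml2_conf.ini), [agent] section
[agent]
veth_mtu = 1600

Combined with the dhcp-option-force trick earlier in this archive, the instances and the underlying plumbing end up agreeing on the MTU.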


[Openstack-operators] OpenStack Community Weekly Newsletter (Aug 22 – 29)

  The results are in – Manila is an OpenStack incubated project!
  <http://netapp.github.io/openstack/2014/08/26/manila-incubated/>

OpenStack Technical Committee voted Manila into official incubation
status. The TC formally accepted the creation of shared file systems
program, as well as the incubation of the Manila project.

  OpenStack as Layers
  <https://dague.net/2014/08/26/openstack-as-layers/>

Sean Dague https://dague.net/ has a good suggestion for representing
OpenStack's pieces and how they stack up together to realize
Infrastructure As A Service. His post expands on the timely topic of
what should be part of an integrated release.

  Keystone is not an authentication service
  <https://blog-nkinder.rhcloud.com/?p=130>

Nathan Kinder https://blog-nkinder.rhcloud.com/ dives deeper into
Keystone. While many would argue that Keystone in the OpenStack world has
something to do with authentication, Nathan argues that Keystone's main
purpose is authorization within an OpenStack deployment. While it's true
that Keystone can perform authentication itself, Nathan argues that
doesn't mean that it should be used for authentication.

The Road To Paris 2014 – Deadlines and Resources

During the Paris Summit there will be a working session for the Women of
OpenStack to frame up more defined goals and line out a blueprint for
the group moving forward. We encourage all women in the community to
complete this very short survey
https://www.surveymonkey.com/s/V39BL7H to provide input for the group.

Ask OpenStack https://ask.openstack.org/ is the go-to destination for
OpenStack users. Interesting questions waiting for answers:

Atin Ruia
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person0cd4614e-fcf9-4e52-826b-1ae79cd373f5
Santosh Kumar
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person6c67d551-5a86-4630-8a61-0b98923395f8

abhiram moturi
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personcd7a14f7-b7f4-4e4f-9dcf-1ddd8aa685d3
Matt Rutkowski
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person622312d4-fe86-4ad3-981e-62a86a401097

Peter Krempa
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personf0e0952b-b695-4706-8856-05f0dcbd9ada
Carl Bader
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personc49e989f-476e-418c-b5c3-ea85a6766ff3

Kentaro TANAKA
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person333a8e95-0bcc-48d7-89e5-4d8f8e41c64c
Veena
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person3496eef1-9301-4864-aae9-fa389d50b8a7

Eric Blake
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person4da361f1-46e4-45c0-b32a-99963bea7bf3
Nolan Brubaker
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personb7c978c6-8814-4c36-bb9c-c6c11dde2146

Artem Osadchiy
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personb696ac09-1b17-407e-81a7-a4800ab832ed
Neeti Dahiya
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person82b9c3e5-cc17-40b8-ab1c-8d2936950d40

Shanthakumar K
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persondd7dbdf2-fd33-4aa9-a3e5-00fc9a225119
Mikhail S Medvedev
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person2aaa62e8-ca2c-46a6-9b60-a5fda70038e8

Sanja Nosan
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person45e13d5f-9e3c-4937-bc8b-74c57a1c5b3d
Abhishek Talwar
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personca4162f2-9152-48b4-9d4b-0a3cfc214cd5

Prasoon Telang
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personea476f36-585f-4196-a366-01b62311eb8c
Yukinori Sagara
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person4215b71b-74d2-442c-9036-dd3ea1d6485c

Oscar Romero
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personf440882f-76b1-48e2-a60b-538661d64b85
Ted Ross
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person8cae659a-7285-46f1-bc2f-e76bf3273aa8

Jeremy Moffitt
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person93eb4929-a375-4a67-9ff9-f19b9c3f2289
Prabhakar Kudva
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personb83270c5-ed0c-4aba-9ff8-a7d9eae76fe5

Csaba Henk
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persona725fdc0-607c-4301-960a-f743a579be2c
Eric Blake
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person4da361f1-46e4-45c0-b32a-99963bea7bf3

Santosh Kumar
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person6c67d551-5a86-4630-8a61-0b98923395f8
abhiram moturi
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personcd7a14f7-b7f4-4e4f-9dcf-1ddd8aa685d3

Randy Perryman

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personf00b6323-d129-4ff2-a73c-6d4b2ed5f194

Kent Wang

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person3479fcec-45d3-4fce-8a15-0b5853d84815

Johnu George

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person5da67528-ea48-4a6a-a998-8671f9ae3ae4

Jeffrey Calcaterra

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persona4d0d56d-3e34-4cde-820c-1a7bb59b9acf

Grace Yu

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persona972c324-1b7c-49a9-af7a-a0a0bee90049

David Mahony

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person7233ed47-1f13-419b-8e40-34576bbcccc9

/The weekly newsletter is a way for the community to learn about all the
various activities occurring on a weekly basis. If you would like to add
content to a weekly update or have an idea about this newsletter, please
leave a comment./

[Openstack-operators] Openstack upgrade

Hello,

I'm now using the OpenStack Havana release and I wonder how I should do
an upgrade. Can I upgrade directly from Havana to e.g. Juno (when it is
stable), or should I first upgrade everything to Icehouse and then to
Juno?

--
Regards,
Sławek Kapłoński
slawek at kaplonski.pl

--
My public GPG key can be downloaded from:
http://kaplonski.pl/files/slawek_kaplonski.pub.key


Skipping versions may be hard, especially if you want a live update without
downtime for instances or with minimal downtime on reboot.

Skips are less tested (and may not be supported at all). I think a
step-by-step upgrade is much better.

... And one more piece of advice: do not upgrade OpenStack on the first release. They
usually fix tons of bugs after the release.

[Openstack-operators] Openstack Manual install with nova-network

Hi guys,

I have a 2-node (1 controller + 1 compute node) architecture that I installed manually; all of my compute services, including nova-network, are on the compute node.
I tried logging in to the dashboard but I received this message:

ConnectionFailed at /admin/
Connection to neutron failed: Maximum attempts reached
Request Method:

GET

Request URL:

http:///dashboard/admin/

Django Version:

1.6.5

Exception Type:

ConnectionFailed

Exception Value:

Connection to neutron failed: Maximum attempts reached

Exception Location:

/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py in retry_request, line 1228

I don't even have a neutron client or server installed, and I configured my nova.conf to use nova networking.
This is really frustrating, can somebody please assist?

Regards,
Ohad


Hi Vishal,

Thanks for the quick response. The python-neutronclient is a dependency of the dashboard installation; when I try to remove it, it says it will remove the dashboard installation as well.
Do you know of a way to force OpenStack not to use the neutron client, but to use nova-network?
And for your other question, in nova.conf: network_api_class = nova.network.api.API

Regards,
Ohad

From: vishal yadav [mailto:vishalcdac07 at gmail.com]
Sent: Tuesday, September 02, 2014 3:57 PM
To: Baruch, Ohad
Cc: openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] Openstack Manual install with nova-network

Looking at the "exception location" seems like you have python-neutronclient installed. Would you please check below:
1) Use rpm -qa (REL based) or dpkg -l (debian based) to look for the package python-neutronclient. If so please remove python client for neutron.
2) What is value of parameter networkapiclass in nova.conf on compute node? Please check if it is set correctly.

Thanks,
Vishal
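One more thing worth checking, offered as a hedged aside since the thread never confirms the root cause: Horizon decides whether to call neutron based on the Keystone service catalog, so a leftover "network" service/endpoint registered in Keystone will make it try neutron even on a nova-network deployment. A diagnostic with the era keystone CLI:

# look for a stale "network" (neutron) entry in the catalog
keystone service-list
keystone endpoint-list
# if one exists and you really run nova-network, remove it:
# keystone endpoint-delete <ENDPOINT_ID>
# keystone service-delete <SERVICE_ID>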


[Openstack-operators] neutron debug messages

Hi guys,

I've disabled all debugging and verbose configs, but I still get/see this
in my logs:

2014-09-02 11:45:08.247 50862 DEBUG neutron.agent.metadata.namespace_proxy
[-] Request: GET /2008-02-01/meta-data/security-groups HTTP/1.0 Accept: */*

Any idea how to disable these from being logged or at least set the correct
logging level ?

Thanks
Alex

Make sure you have set debug = False both in neutron.conf and in the agent
configuration (metadata_agent.ini in this case, judging from the example message).

Thanks,
Vishal
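For concreteness, a minimal sketch of what that looks like (stock config paths assumed; both files need the setting and the metadata agent needs a restart afterwards):

# /etc/neutron/neutron.conf and /etc/neutron/metadata_agent.ini
[DEFAULT]
debug = False
verbose = False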


[Openstack-operators] Ceilometer - cinder meters missing

Hello. I'm setting up metering for the first time. I'm using Icehouse, and
- after following the instructions
http://docs.openstack.org/developer/ceilometer/install/manual.html#installing-the-notification-agent
- I'm now seeing samples from most of the services (nova, neutron, swift,
and glance). But I'm not seeing volume-related meters (and their samples).
I have the notification_driver and control_exchange variables set in
cinder.conf. Any suggestions on how to fix (or debug) this?

Thank you Arne. That was it.

On Tue, Sep 2, 2014 at 12:39 PM, Arne Wiebalck <Arne.Wiebalck at cern.ch>
wrote:

Hi,

You may need to add the cinder-volume-usage-audit script to your crontab:

https://github.com/openstack/cinder/blob/master/bin/cinder-volume-usage-audit

HTH,
Arne
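For reference, that audit script is what emits the periodic volume.exists notifications ceilometer turns into samples, so it is typically run from cron; an illustrative entry (path and hourly interval are assumptions, and the --send_actions flag may vary by release):

# run the volume usage audit hourly on a cinder node
0 * * * * /usr/bin/cinder-volume-usage-audit --send_actions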


[Openstack-operators] Monitoring RabbitMQ

Hi all,

The topic of RabbitMQ related issues came up at last week's operator
meetup in San Antonio. I meant to give some more detail to a couple of
the attendees on Tuesday but was unable to attend.

Here's a blog post that goes over a few of the ways to help with
issues that could be RabbitMQ related:

http://virtualandy.wordpress.com/2014/09/02/operating-openstack-monitoring-rabbitmq/

Thanks,

-AH
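Alongside the blog post, a couple of plain rabbitmqctl one-liners are often enough for a first look (the columns are standard rabbitmqctl info items; the thresholds are up to you):

# queues with backlogs or unacked messages, deepest last
rabbitmqctl list_queues name messages messages_unacknowledged consumers | sort -k2 -n | tail
# total client connections
rabbitmqctl list_connections name state | wc -l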

Nice writeup Andy, thanks for sharing!

[Openstack-operators] RabbitMQ issues since upgrading to Icehouse

What release were you running before Icehouse?
I'm curious if you purged/deleted queues during the upgrade.
It might be useful to start fresh with your rabbit, like completely trashing your mnesia during a maintenance window (obviously with your services stopped) so the services recreate the queues at startup.
Also, was kombu upgraded along with your OpenStack release?
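To spot the symptom Sam describes below without restarting anything, a hedged one-liner (standard rabbitmqctl columns; matching on the reply_ prefix is just the oslo.messaging reply-queue naming convention):

# reply_ queues with unacked messages or more than one consumer
rabbitmqctl list_queues name messages_unacknowledged consumers | \
  awk '$1 ~ /^reply_/ && ($2 > 0 || $3 > 1)'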

On Aug 25, 2014, at 4:17 PM, Sam Morrison wrote:

Hi,

Since upgrading to Icehouse we have seen increased issues with messaging relating to RabbitMQ.

  1. We often get reply_xxxxxx queues starting to fill up with unacked messages. To fix this we need to restart the offending service. Usually nova-api or nova-compute.

  2. If you kill a node so as to force an ungraceful disconnect of rabbit, the connection 'object' still sticks around in rabbit. Starting the service again means there are now 2 consumers: the new one and the phantom old one. This then leads to messages piling up in the unacked queue. This feels like a rabbit bug to me but just thought I'd mention it here too.

We have a setup that includes icehouse computes and havana computes in the same cloud and we only see this on the icehouse computes. This is using Trusty and RabbitMQ 3.3.4

Has anyone seen anything like this too?

Thanks,
Sam




[Openstack-operators] hostname on vms not correct until reboot

Hi,

I'm using OpenStack together with puppet and mcollective (VM images are
pre-provisioned), but when a new VM is being spun up, it seems that the host
is being wrongly identified as host-i-p-g-o-e-s-h-e-r-e instead of its
actual hostname that was set by cloud-init / puppet.

Am sure there is a trick to make this work properly ?

Note that after I reboot the VM (no changes made) the host comes up with
the correct name in mcollective. Am just wondering if there is something I
forgot to set in cloud-init or if someone had the same issue and knows a
proper fix (not rebooting) ?

Thanks
Alex

Hi Abel,

Good point! Checked it and you're right! There are actually another two
services that come up before cloud-init (I just assumed it's the next
service after network init no matter what) ... thanks for the pointer! :)

alex
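As a side note (not from the thread), the reboot-at-the-end workaround can often be replaced by having cloud-init set the hostname and then bounce only the affected agent. A hedged cloud-config sketch with illustrative names; fqdn, preserve_hostname, and runcmd are standard cloud-init keys, and runcmd runs late in boot after the hostname is in place:

#cloud-config
# set the hostname/FQDN early in boot; restart mcollective once it's correct
fqdn: web01.example.com
preserve_hostname: false
runcmd:
  - [ service, mcollective, restart ]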

On 10 September 2014 23:27, Abel Lopez wrote:

That screams of boot order.
It sounds like cloud-init needs to start sooner, like immediately after
network initialization.
Are these home-grown images?

On Sep 10, 2014, at 11:52 AM, Alex Leonhardt <aleonhardt.py at gmail.com>
wrote:

Hi Abel,

It's actually correct when logging in, but some services, like mcollective,
are reporting the wrong FQDN until we reboot the VM, so as a temporary fix
we added a reboot at the end of cloud-init.

Any better way to get this sorted properly though?

Alex
On 8 Sep 2014 16:20, "Abel Lopez" wrote:

Yes. I think this is similar to changing hostname on an interactive
shell, you won't see your prompt change until you logout/in. Seems to be a
new feature, maybe something in cloud-init 0.7.5? IIRC it used to be
correct at first login.

On Monday, September 8, 2014, Alex Leonhardt <aleonhardt.py at gmail.com>
wrote:

Hi all,

did anyone have the same problem ?

Alex



[Openstack-operators] Lbaas and Fwaas on prod environment

Hi Team,

I have a running Havana environment on RHEL and I want to add LBaaS and
FWaaS now. Can anyone guide me through the configuration or point me to good
documentation?

Hi Raju:

Below are some (hopefully) useful links:

https://openstack.redhat.com/LBaaS
https://openstack.redhat.com/Load_Balance_OpenStack_API
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/4/html/Installation_and_Configuration_Guide/Configuring_Load_Balancing_as_a_Service_LBaas.html

Regards,


JuanFra Rodriguez Cardoso
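As a rough orientation before diving into those guides: both features are enabled as neutron service plugins plus an agent-side driver. A hedged Havana-era sketch; class paths moved around between releases, so treat these as illustrative and follow the Red Hat docs above for the authoritative values:

# /etc/neutron/neutron.conf
service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPlugin,neutron.services.firewall.fwaas_plugin.FirewallPlugin

# LBaaS additionally needs the haproxy-backed neutron-lbaas-agent configured
# and running on a network node.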



[Openstack-operators] Is it possible to convert an raw image or a qcow image or a qcow2 image to a volume without importing it

Hi. We have a bunch of images that range in size from 10 GB to 20 GB. We
have to move them across a network before we can import them using glance.

Is there a way to convert an image to a volume before importing it, please?

Many thanks,

Jeff

--
Jeff Silverman
Systems Engineer
(253) 459-2318 (c)

Let me see if I understand correctly?
You have large images you want to import into glance; I assume they are RAW format. You could use qemu-img convert -O qcow2 {raw image} {compressed file} to convert them to a much smaller size. What do you need the volumes for? If you're using cinder, the image would need to already exist in glance before you can use it for a volume.
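A sketch of that convert-then-upload flow, with a volume created at the end for completeness; the file names, the image UUID, and the 20 GB size are placeholders:

# shrink the raw image, upload it to glance, then build a cinder volume from it
qemu-img convert -O qcow2 server01.raw server01.qcow2
glance image-create --name server01 --disk-format qcow2 \
  --container-format bare --file server01.qcow2
cinder create --image-id <IMAGE_UUID> --display-name server01-vol 20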


[Openstack-operators] Fwd: Unexpected error in OpenStack Nova

Hi,
After successful installation of both keystone and nova, I tried to execute
the 'nova list' command with the following env variables (my deployment model
is single-machine deployment):
export OS_USERNAME=admin
export OS_PASSWORD=...
export OS_TENANT_NAME=service
export OS_AUTH_URL=http://10.0.0.1:5000

But the following unknown error was occurred:
ERROR: (HTTP
300)

My nova.conf has the following configuration to connect to keystone:
[keystone_authtoken]
auth_uri = localhost:5000
auth_host = 10.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = novapass

How can I solve the problem?
Thanks in advance.

Hate to nitpick, but I notice that you're setting auth_uri to localhost, but auth_host to 10.0.0.1.
When I set up services, I specify a bind_host so that the service is only listening on the interface I want. You may want to run lsof -i :5000 to see whether 127.0.0.1 is bound, or, just for my OCD, set them to the same value.

Once you've done that, try setting debug=true and see if the logs tell you more.
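
For what it's worth, a minimal sketch of a consistent [keystone_authtoken]
section (addresses taken from your mail; adjust to taste):

[keystone_authtoken]
auth_uri = http://10.0.0.1:5000
auth_host = 10.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = novapass

Also, an HTTP 300 (Multiple Choices) on the client side often just means the
auth URL has no API version on it; assuming the v2.0 API, something like
export OS_AUTH_URL=http://10.0.0.1:5000/v2.0 may help.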

On Sep 4, 2014, at 12:29 AM, Hossein Zabolzadeh wrote:

Hi,
After a successful installation of both keystone and nova, I tried to execute the 'nova list' command with the following environment variables (my deployment model is a single-machine deployment):
export OS_USERNAME=admin
export OS_PASSWORD=...
export OS_TENANT_NAME=service
export OS_AUTH_URL=http://10.0.0.1:5000

But the following error occurred:
ERROR: (HTTP 300)

My nova.conf has the following configuration to connect to keystone:
[keystone_authtoken]
auth_uri = localhost:5000
auth_host = 10.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = novapass

How can I solve the problem?
Thanks in advance.


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Hypervisor free memory recommendations

Hi guys, I'm running a Grizzly cloud with Ubuntu 12.04+KVM. I'd like to
know if there's any kind of recommended free RAM for the Hypervisor. I know
there's a nova variable called "reserved_host_memory_mb" but don't know what a proper value would be.

Regards

--
Pavlik Salles Juan José
Blog - http://viviendolared.blogspot.com

On 09/04/2014 01:51 PM, Juan José Pavlik Salles wrote:
Hi Jay, I do agree about 10% being too much memory on big nodes, but
right now we are using small ones (too small, if you ask me). These new
nodes are 16GB, so if I reserve 4 GB for the dom0 I'd be losing 25% of
the available RAM. I was thinking about something like: if you have
less than 32GB, give 10% of it to the dom0, and if you have more than
32GB, go with 4GB for the dom0.

I'd go with something like this:

Host RAM               dom0/reserved host RAM
==================     ======================
16 - 32 GB             2 GB
32 - 64 GB             2.75 GB
64 - 128 GB            3.50 GB
128 - 256 GB           4.25 GB
256+ GB                5.50 GB

If you have heavy packing of VMs (lots of tiny or small VMs), you may
want to add a half GB to the above, but not much more than that, IMO.
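
For reference, a minimal sketch of how that reservation is applied (the
2048 value is just the 16-32 GB row above; set it per compute node):

# /etc/nova/nova.conf on each compute node
reserved_host_memory_mb = 2048

Then restart nova-compute (e.g. service nova-compute restart on Ubuntu)
so the scheduler sees the new value.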

Maybe different environments will need
different rules, but this should work in most standard deployments I'd
say. Jay, you mentioned that big nodes running many VMs don't need more
than 4GB of dedicated RAM; haven't you ever had any swapping situations
in that kind of scenario?

No, not on the compute nodes, no. On the controller nodes, yes, but
that's a totally different thing :)

Best,
-jay

2014-09-04 14:26 GMT-03:00 Jay Pipes:

There's not really any need for 10% in my experience. Giving
dom0/bare metal around 3-4GB is perfectly fine for the vast majority
of scenarios, even when there's a hundred or more VMs on the box.
Most compute node server hardware nowadays should have 128-512GB of
RAM available, and 4GB for the host is more than enough.

-jay


On 09/04/2014 12:45 PM, Juan José Pavlik Salles wrote:

    Hi Tomasz, thanks for your answer. I'll start with 10% and see what
    happens. Thanks again!

    2014-09-04 13:37 GMT-03:00 Tomasz Napierala <tnapierala at mirantis.com>:

         On 04 Sep 2014, at 18:04, Juan José Pavlik Salles
         <jjpavlik at gmail.com> wrote:

          > Hi guys, I'm running a Grizzly cloud with Ubuntu 12.04+KVM. I'd
          like to know if there's any kind of recommended free RAM for the
          Hypervisor. I know there's a nova variable called
          > "reserved_host_memory_mb" but don't know what a proper value would be.

         Check on a deployed compute node that has no running VMs, add
         some margin, say 10%, and you should be fine. Usually compute
         nodes are not consuming extra memory besides the VMs.

         Regards,
         --
         Tomasz 'Zen' Napierala
         Sr. OpenStack Engineer
         tnapierala at mirantis.com

    --
    Pavlik Salles Juan José
    Blog - http://viviendolared.blogspot.com

    _______________________________________________
    OpenStack-operators mailing list
    OpenStack-operators at lists.openstack.org
    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

--
Pavlik Salles Juan José
Blog - http://viviendolared.blogspot.com

[Openstack-operators] Survey on guest images

Hey guys,
I have a brief couple of questions about glance and your guest images preferences.
I would love to get community input on the following couple of questions
https://docs.google.com/forms/d/10GkCXr3ovHH2WvMiPbnHPCZIvVzPa4i5h_tgsyt_5Y4/viewform?usp=send_form

Please take a look, I'll be collecting the data for my presentation in Paris.

[Openstack-operators] OpenStack Community Weekly Newsletter (Aug 29 – Sep 5)

  Latest Technical Committee Updates
  <http://www.openstack.org/blog/2014/09/latest-technical-committee-updates/>

The OpenStack Technical Committee meets weekly to work through requests
for incubation, to review technical issues happening in currently
integrated projects, and to represent the technical contributors to
OpenStack. We have about a month remaining with our current crew and
elections coming soon. Read the summary of latest meetings to find out
about defcore, gap analysis and projects in incubation.

  OpenStack DefCore Process Flow: Community Feedback Cycles for Core
  [6 points + chart]
  <http://robhirschfeld.com/2014/09/02/defcore-process-flow/>

DefCore (https://wiki.openstack.org/wiki/Governance/DefCoreCommittee) is
an OpenStack Foundation Board managed process "that sets base
requirements by defining 1) capabilities, 2) code and 3) must-pass tests
(http://robhirschfeld.com/2014/08/12/patchwork-onion/) for all OpenStack™
products. This definition uses community resources and involvement to
drive interoperability by creating the minimum standards for products
labeled OpenStack™." Rob Hirschfeld (http://robhirschfeld.com/) details
in a blog post what "community resources and involvement" entails. Check
out the upcoming DefCore Community Meetings Sep 10 & Sep 11:
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045229.html

The Road To Paris 2014 – Deadlines and Resources

During the Paris Summit there will be a working session for the Women of
OpenStack to frame up more defined goals and line out a blueprint for
the group moving forward. We encourage all women in the community to
complete this very short survey
(https://www.surveymonkey.com/s/V39BL7H) to provide input for the group.

Ask OpenStack https://ask.openstack.org/ is the go-to destination for
OpenStack users. Interesting questions waiting for answers:

Welcome to Trove-core: Amrith Kumar.

Ilia Meerovich
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person43db3208-b9c3-424b-9001-b81cfdc4efed
Cesar Mojica
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person498150ea-7cb2-44f0-b20a-37558dcb6654

John McDonough
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person75685785-3626-494e-96d1-a7f2aa68575a
Bartosz Fic
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person918081a2-cca4-43d1-af28-4764648c40c9

Gerard Garcia
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person15fbaef9-0eda-4ce0-9fa1-dfda99ca632c
Adrien Vergé
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person326739f8-2d1e-45fa-8da3-47dbb79bb2a5

Colleen Murphy
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person0a76f15f-9c3e-4393-b2df-ac0b397c96bf
Yi Ming Yin
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person360b4b7a-61c5-4203-a613-a51189548b40

Antoine Abélard
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person53849081-4355-4275-8dc0-a72b51c901ee
Yanping Qu
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person11d07316-fe8c-4a00-a6a4-60629af86bd9

Miguel Grinberg
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personf7989751-dec2-49fa-95f7-459f96f10775
Timothy Okwii
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person9b2c59ce-386e-4b70-ba8b-017afd23c383

Brian Moss
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person91b2f7ae-dfd2-4a87-9af5-7e9a96d0ad87
Sarvesh Ranjan
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personb0d6c5c3-1e05-4999-89cc-0e70e7d133de

Karen Noel
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person55915aa5-5b7d-4c9a-b9a2-578519cbf72b
Rishabh
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personae9959f5-fa57-4bcb-b84c-5d9a1d6a559b

Saksham Varma
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person1d1318f3-96b4-4cb3-8d3e-94df766c8414
Komei Shimamura
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personab0f8be6-d154-4514-b0bd-04c473164320

Can ZHANG
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personf394347f-f45f-42fe-9207-1809ab7628e2
Prasoon Telang
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personea476f36-585f-4196-a366-01b62311eb8c

Chirag Shahani
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persone650c9ff-f4e6-4593-828f-9258d0ebd276
Steve Lewis
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personf7c581df-00bb-4e8a-b024-f087ea49cc2d

Christian Fetzer
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person9e01c38b-dd1d-4b85-a7ff-44534167afb2
Srinivas Sakhamuri
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personfb98e7f6-cc53-4004-b7fe-45d821bfbc7b

Juan Zuluaga
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personf4f991c1-be87-4b03-9be5-ec2c27da3696
Patrick Amor
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person6c9f3293-be09-4c95-88c7-ec422283d5e9

Yukinori Sagara
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person4215b71b-74d2-442c-9036-dd3ea1d6485c
Om Prakash Pandey
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persona8998c1a-848b-4841-a967-e09f7700ad24

Peter Krempa
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personf0e0952b-b695-4706-8856-05f0dcbd9ada
Srini
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person92e68fb1-7577-42de-9bc0-26dfdd5fd408

Matt Kovacs
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personafb05ea0-f3a9-4b3f-8164-67c5935f739b

/The weekly newsletter is a way for the community to learn about all the
various activities occurring on a weekly basis. If you would like to add
content to a weekly update or have an idea about this newsletter, please
leave a comment./

[Openstack-operators] Neutron metering agent

Hi,

A quick query if anyone is using the neutron metering agent to measure
traffic in/out of routers:

When adding the metering label and rules, on some, but not all, of our
routers, I get a traceback in the metering_agent.log pointing at
"TRACE neutron.openstack.common.rpc.amqp TypeError: cannot concatenate
'str' and 'NoneType' objects".

After that error, the router no longer passes traffic other than ping
until we stop the metering agent, and restart the L3 agent.

This doesn't seem to affect routers with only one subnet, and only one
router in the tenant - but I may be completely misunderstanding the
whole thing here. It appears as soon as we get a router with more than
one network (plus gateway) we get trouble.

We're using Havana on Trusty, and the current Cloud Archive packages.

The rules are a simple egress and ingress, 0.0.0.0/0, both on the same
meter-label.
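
For reference, the labels and rules are created with something like this
(Havana neutron CLI; the label name is a placeholder):

neutron meter-label-create tenant-traffic
neutron meter-label-rule-create tenant-traffic 0.0.0.0/0 --direction ingress
neutron meter-label-rule-create tenant-traffic 0.0.0.0/0 --direction egress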

Anyone had similar experiences, or got tips for diagnosing this?

I will be reading the code over the next few days :)

Yep, it is a bug.

I reported it, and one person closed it (hate you, devstack), but now it seems
to be in the process of being fixed. There is maybe a 25% chance it will be
backported to Icehouse and a 5% chance for Havana.

This bug happens if you have more than one network node. If routers are
in the database but not present on the network node running the metering
agent, it will fail to do anything.

We managed to scramble together a patch to work around the problem, but it is
rather drastic - we skip all errors in that place of the code and continue.

If you want I can post the patch, but you are going to need to rebuild the
neutron package (a rather annoying process).

Here is my bug report:
https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1286209

On 09/07/2014 10:06 AM, Xav Paice wrote:
Hi,

A quick query if anyone is using the neutron metering agent to measure
traffic in/out of routers:

When adding the metering label and rules, on some, but not all, of our
routers, I get a traceback in the metering_agent.log pointing at
"TRACE neutron.openstack.common.rpc.amqp TypeError: cannot concatenate
'str' and 'NoneType' objects".

After that error, the router no longer passes traffic other than ping
until we stop the metering agent, and restart the L3 agent.

This doesn't seem to affect routers with only one subnet, and only one
router in the tenant - but I may be completely misunderstanding the
whole thing here. It appears as soon as we get a router with more than
one network (plus gateway) we get trouble.

We're using Havana on Trusty, and the current Cloud Archive packages.

The rules are a simple egress and ingress, 0.0.0.0/0, both on the same
meter-label.

Anyone had similar experiences, or got tips for diagnosing this?

I will be reading the code over the next few days :)


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

[Openstack-operators] Glance vmware backend

Hi all,

I configured my glance backend to use a Vmware datastore, and added the cluster that sees this datastore as a hypervisor to the compute node.
I then deployed an instance using an ISO image I uploaded to the datastore via Glance, but I saw something weird.
The Glance service kind of ignores the situation and still copies the whole image to the designated instance datastore, and after that it deploys the instance and mounts the ISO.

1) Why not just deploy the instance and mount the ISO straight from the other datastore?

2) Is there a way to make glance not copy the whole image to the other datastore and simply deploy over the network?

3) Does anyone know how to make glance aware that the images are stored on a Vmware datastore and that it deploys those images to a Vmware datastore, at such a level that it can leverage the VAAI functionality, or does it always deploy over the network?

Best regards,
Ohad


[Openstack-operators] How do I troubleshoot problems booting an image?

I am having trouble booting a server. It will start booting, perhaps for
several minutes, and then it will go into an error state.

root at controller1-prod.sea:/home/jeff# nova list
+--------------------------------------+-----------------------+--------+----------------------+-------------+----------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------------------+--------+----------------------+-------------+----------------------+
| 72cc091c-3ebc-40c5-b075-32295003edc9 | jeff5_app1-pokki-dev3 | BUILD | block_device_mapping | NOSTATE | VLAN_40=10.50.40.93 |
| 6633c7f1-5aba-4b49-8781-bd1d74b1b72e | jeff_app1-pokki-dev3 | ACTIVE | - | Running | VLAN_40=10.50.40.92 |
| 4f45c14d-57bf-4dc8-9e96-d7934816fa6e | observium1-prod | ACTIVE | - | Running | VLAN_15=10.50.15.144 |
+--------------------------------------+-----------------------+--------+----------------------+-------------+----------------------+
root at controller1-prod.sea:/home/jeff#
root at controller1-prod.sea:/home/jeff# nova list
+--------------------------------------+-----------------------+--------+------------+-------------+----------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------------------+--------+------------+-------------+----------------------+
| 72cc091c-3ebc-40c5-b075-32295003edc9 | jeff5_app1-pokki-dev3 | ERROR | - | NOSTATE | |
| 6633c7f1-5aba-4b49-8781-bd1d74b1b72e | jeff_app1-pokki-dev3 | ACTIVE | - | Running | VLAN_40=10.50.40.92 |
| 4f45c14d-57bf-4dc8-9e96-d7934816fa6e | observium1-prod | ACTIVE | - | Running | VLAN_15=10.50.15.144 |
+--------------------------------------+-----------------------+--------+------------+-------------+----------------------+
root at controller1-prod.sea:/home/jeff#

I have tried using the --debug switch to the nova command, and I see the
API calls go by. They all have 200 status codes.

I have tried fgrep'ing the ID in /var/log/* with the -r switch, and I see
lots of GETs with 200 status returns. On one of the tests, I was
experimenting with using a network port, and I see that something is
deleting the network port, but this latest attempt didn't use a network
port.

What I am looking for is a pointer to a general troubleshooting process.
If everybody who is having problems sends a message to a mailing list -
well, that doesn't scale very well.

Thank you

Jeff

--
Jeff Silverman
Systems Engineer
(253) 459-2318 (c)

Hi Jeff, great question, of course docs are the answer. :)

On Mon, Sep 8, 2014 at 4:03 PM, Jeff Silverman wrote:

I am having troubles booting an server. It will start booting, perhaps
for several minutes, and then it will go into an error state.

root at controller1-prod.sea:/home/jeff# nova list
+--------------------------------------+-----------------------+--------+----------------------+-------------+----------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------------------+--------+----------------------+-------------+----------------------+
| 72cc091c-3ebc-40c5-b075-32295003edc9 | jeff5_app1-pokki-dev3 | BUILD | block_device_mapping | NOSTATE | VLAN_40=10.50.40.93 |
| 6633c7f1-5aba-4b49-8781-bd1d74b1b72e | jeff_app1-pokki-dev3 | ACTIVE | - | Running | VLAN_40=10.50.40.92 |
| 4f45c14d-57bf-4dc8-9e96-d7934816fa6e | observium1-prod | ACTIVE | - | Running | VLAN_15=10.50.15.144 |
+--------------------------------------+-----------------------+--------+----------------------+-------------+----------------------+
root at controller1-prod.sea:/home/jeff#
root at controller1-prod.sea:/home/jeff# nova list
+--------------------------------------+-----------------------+--------+------------+-------------+----------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------------------+--------+------------+-------------+----------------------+
| 72cc091c-3ebc-40c5-b075-32295003edc9 | jeff5_app1-pokki-dev3 | ERROR | - | NOSTATE | |
| 6633c7f1-5aba-4b49-8781-bd1d74b1b72e | jeff_app1-pokki-dev3 | ACTIVE | - | Running | VLAN_40=10.50.40.92 |
| 4f45c14d-57bf-4dc8-9e96-d7934816fa6e | observium1-prod | ACTIVE | - | Running | VLAN_15=10.50.15.144 |
+--------------------------------------+-----------------------+--------+------------+-------------+----------------------+
root at controller1-prod.sea:/home/jeff#

I have tried using the --debug switch to the nova command, and I see the
API calls go by. They all have 200 status codes.

I have tried fgrep'ing the ID in /var/log/* with the -r switch, and I see
lots of GETs with 200 status returns. On one of the tests, I was
experimenting with using a network port, and I see that something is
deleting the network port, but this latest attempt didn't use a network
port.

What I am looking for is a pointer to a general troubleshooting process.
If everybody who is having problems sends a message to a mailing list -
well, that doesn't scale very well.

We use both the documentation and an Ask site for supporting users. Here's
the documentation that should help you:

http://docs.openstack.org/admin-guide-cloud/content/section_compute-troubleshooting.html

Also, looking on previously asked questions on http://ask.openstack.org,
there's a pointer to this blog entry:
http://virtual2privatecloud.com/?p=312
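
As a first step, something like the following usually surfaces the actual
fault (UUID taken from your listing; paths assume a default packaged install):

nova show 72cc091c-3ebc-40c5-b075-32295003edc9   # the "fault" field holds the error
grep -r 72cc091c /var/log/nova/                  # scheduler/conductor logs on the controller
# and check /var/log/nova/nova-compute.log on the compute node it landed on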

Hope that helps.
Anne

Thank you

Jeff

--
Jeff Silverman
Systems Engineer
(253) 459-2318 (c)


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] OpenStack-operators Digest, Vol 47, Issue 9

Hi team,
I clicked Project/Instances and got this error:
Error: Unable to retrieve instance size information.

Then the instance's "Size" went to "Not available",

and the instance became inaccessible.

Can anyone help? Thanks!

FYI:
Version: Icehouse

Yours sincerely,
-Brant

-----Original Message-----
From: openstack-operators-request at lists.openstack.org
[mailto:openstack-operators-request at lists.openstack.org]
Sent: Monday, September 8, 2014 08:00 PM
To: openstack-operators at lists.openstack.org;
openstack-operators at lists.openstack.org
Subject: OpenStack-operators Digest, Vol 47, Issue 9

Send OpenStack-operators mailing list submissions to
openstack-operators at lists.openstack.org

To subscribe or unsubscribe via the World Wide Web, visit

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

or, via email, send a message with subject or body 'help' to
openstack-operators-request at lists.openstack.org

You can reach the person managing the list at
openstack-operators-owner at lists.openstack.org

When replying, please edit your Subject line so it is more specific than
"Re: Contents of OpenStack-operators digest..."

Today's Topics:

  1. Re: Neutron metering agent (George Shuklin)
  2. Re: Neutron metering agent (Xav Paice)
  3. Glance vmware backend (Baruch, Ohad)

Message: 1
Date: Sun, 07 Sep 2014 15:06:46 +0300
From: George Shuklin <george.shuklin at gmail.com>
To: openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] Neutron metering agent
Message-ID: <540C4A56.8030308 at gmail.com>
Content-Type: text/plain; charset=windows-1252; format=flowed

Yep, it is bug.

I report it, and one guy close in (hate you, devstack), but now it seems be
in the process of fixing. But there is an 25% chance it will be ported to
icehouse and 5% change for havana.

This bug happens if you got more than one network node. If routers are in
database, but not present on network node running metering agent, it will
fail to do anything.

We manage to scramble patch to work with problem, but it is rather drastic -
we skip all errors in that place of code and continue.

If you want I cant post patch, but you gonna need to rebuild neutron package
(rather annoying process).

Here my bugreport:
https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1286209

On 09/07/2014 10:06 AM, Xav Paice wrote:

Hi,

A quick query if anyone is using the neutron metering agent to measure
traffic in/out of routers:

When adding the metering label and rules, on some, but not all, of our
routers, I get a traceback in the metering_agent.log pointing at
"TRACE neutron.openstack.common.rpc.amqp TypeError: cannot concatenate
'str' and 'NoneType' objects".

After that error, the router no longer passes traffic other than ping
until we stop the metering agent, and restart the L3 agent.

This doesn't seem to affect routers with only one subnet, and only one
router in the tenant - but I may be completely misunderstanding the
whole thing here. It appears as soon as we get a router with more
than one network (plus gateway) we get trouble.

We're using Havana on Trusty, and the current Cloud Archive packages.

The rules are a simple egress and ingress, 0.0.0.0/0, both on the same
meter-label.

Anyone had similar experiences, or got tips for diagnosing this?

I will be reading the code over the next few days :)


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

Message: 2
Date: Mon, 08 Sep 2014 13:04:12 +1200
From: Xav Paice
To: openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] Neutron metering agent
Message-ID: <540D008C.3050603 at gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

On 08/09/14 00:06, George Shuklin wrote:

Yep, it is bug.

I report it, and one guy close in (hate you, devstack), but now it
seems be in the process of fixing. But there is an 25% chance it will
be ported to icehouse and 5% change for havana.

This bug happens if you got more than one network node. If routers are
in database, but not present on network node running metering agent,
it will fail to do anything.

We manage to scramble patch to work with problem, but it is rather
drastic - we skip all errors in that place of code and continue.

If you want I cant post patch, but you gonna need to rebuild neutron
package (rather annoying process).

Here my bugreport:
https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1286209

Thanks George, although that's not quite the same behaviour we're seeing,
it's one that is going to hit us when we switch on the second L3 node
(currently we're using Pacemaker but it's too slow to re-schedule the
routers during failover). Having seen your bug report I don't think we'll
make any changes in that respect till after our upgrade to Icehouse at least.

In our particular case, we have just one (running) L3 agent, and it appears
fine when we add metering labels right up to one of our special
tenants/routers, and somehow the iptables rules are not being applied
correctly.

My biggest trouble with this is reproducing it in our test environment - so
far:
- create a bunch of tenants, routers and networks, and attach instances to
the networks
- start neutron-metering-agent
- add metering label for each tenant (one per tenant)
- add 2 x metering rules for each metering label, one for ingress and one
for egress, both to/from 0.0.0.0/0

In most cases this works fine, and I can see samples in ceilometer plus
traffic is forwarded correctly.
In some cases, I can get ICMP to the instances but no other traffic (e.g.
http or ssh).

When there is a problem, the metering agent log gets the error listed below.

I guess I'd better open a bug report at least, if no one else is seeing this.
I was kind of hoping someone might tell me I'm an idiot and doing it wrong
:)

On 09/07/2014 10:06 AM, Xav Paice wrote:

Hi,

A quick query if anyone is using the neutron metering agent to
measure traffic in/out of routers:

When adding the metering label and rules, on some, but not all, of
our routers, I get a traceback in the metering_agent.log pointing at
"TRACE neutron.openstack.common.rpc.amqp TypeError: cannot
concatenate 'str' and 'NoneType' objects".

After that error, the router no longer passes traffic other than ping
until we stop the metering agent, and restart the L3 agent.

This doesn't seem to affect routers with only one subnet, and only
one router in the tenant - but I may be completely misunderstanding
the whole thing here. It appears as soon as we get a router with
more than one network (plus gateway) we get trouble.

We're using Havana on Trusty, and the current Cloud Archive packages.

The rules are a simple egress and ingress, 0.0.0.0/0, both on the
same meter-label.

Anyone had similar experiences, or got tips for diagnosing this?

I will be reading the code over the next few days :)


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

Message: 3
Date: Mon, 8 Sep 2014 08:12:32 +0000
From: "Baruch, Ohad" <Ohad.Baruch at emc.com>
To: "openstack-operators at lists.openstack.org"

Subject: [Openstack-operators] Glance vmware backend
Message-ID:

Content-Type: text/plain; charset="us-ascii"

Hi all,

I configured my glance backend to use a Vmware datastore, and added the
cluster that sees this datastore as a hypervisor to the compute node.
I then deployed an instance using an ISO image I uploaded to the datastore
via Glance, but I saw something weird.
The Glance service kinda ignores the situation and still copies the whole
image to the designated instance datastore, and after that he deploys the
instance and mounts the ISO.

1) Why not just deploy the instance and mount the ISO straight from the
other datastore?

2) Is there a way to make glance not copy the whole image to the other
datastore and simply deploy over the network?

3) Does anyone know how to make glance know that the images are stored
on a Vmware datastore and that it deploys those images to a Vmware datastore
in such a level that he can leverage the VAAI functionality, or does he
always deploy through the networking?

Best regards,
Ohad




OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

End of OpenStack-operators Digest, Vol 47, Issue 9


[Openstack-operators] some problem with nova

Hi
I have a weird problem with nova-compute.
On one of my nodes, nova-compute refuses to create instances, with this error:

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1329, in _build_instance
    set_access_ip=set_access_ip)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 393, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1741, in _spawn
    LOG.exception(_('Instance failed to spawn'), instance=instance)
  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1738, in _spawn
    block_device_info)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2287, in spawn
    block_device_info)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3709, in _create_domain_and_network
    network_info)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/firewall.py", line 285, in setup_basic_filtering
    self.nwfilter.setup_basic_filtering(instance, network_info)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/firewall.py", line 113, in setup_basic_filtering
    self._ensure_static_filters()
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/firewall.py", line 213, in _ensure_static_filters
    self._define_filter(self.nova_no_nd_reflection_filter)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/firewall.py", line 233, in _define_filter
    self._conn.nwfilterDefineXML(xml)
  File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 179, in doit
    result = proxy_call(self._autowrap, f, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 139, in proxy_call
    rv = execute(f, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 77, in tworker
    rv = meth(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/libvirt.py", line 3836, in nwfilterDefineXML
    if ret is None: raise libvirtError('virNWFilterDefineXML() failed', conn=self)
libvirtError: operation failed: filter 'nova-no-nd-reflection' already
exists with uuid 89637f16-774a-4b8c-857b-08819c6fcf41

P.S.
I am using nova-network instead of neutron.

Thanks in advance.

On Wed, Sep 10, 2014 at 6:00 AM, Ehsan Qarekhani
wrote:

Hi
I have some weird problem with nova-compute .
in one of my nodes nova compute refuse to create instance whit this error

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1329, in _build_instance
    set_access_ip=set_access_ip)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 393, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1741, in _spawn
    LOG.exception(_('Instance failed to spawn'), instance=instance)
  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1738, in _spawn
    block_device_info)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2287, in spawn
    block_device_info)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3709, in _create_domain_and_network
    network_info)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/firewall.py", line 285, in setup_basic_filtering
    self.nwfilter.setup_basic_filtering(instance, network_info)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/firewall.py", line 113, in setup_basic_filtering
    self._ensure_static_filters()
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/firewall.py", line 213, in _ensure_static_filters
    self._define_filter(self.nova_no_nd_reflection_filter)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/firewall.py", line 233, in _define_filter
    self._conn.nwfilterDefineXML(xml)
  File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 179, in doit
    result = proxy_call(self._autowrap, f, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 139, in proxy_call
    rv = execute(f, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 77, in tworker
    rv = meth(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/libvirt.py", line 3836, in nwfilterDefineXML
    if ret is None: raise libvirtError('virNWFilterDefineXML() failed', conn=self)
libvirtError: operation failed: filter 'nova-no-nd-reflection' already
exists with uuid 89637f16-774a-4b8c-857b-08819c6fcf41

I think this has to do with spoofing protection. From the code: "This
filter protects false positives on IPv6 Duplicate Address Detection (DAD)."

Any chance you're reusing IP addresses?
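
If it turns out the stale filter definition itself is what blocks the spawn,
one thing you could try (carefully, and only if no running instances still
reference it) is clearing it and letting nova-compute redefine it:

virsh nwfilter-list
virsh nwfilter-undefine nova-no-nd-reflection
service nova-compute restart    # or openstack-nova-compute, depending on distro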

Anne

P.S
i am using nova-network instead of neutron.

tanks in advance.


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] anyone using RabbitMQ with active/active mirrored queues?

Hi,

The OpenStack high availability guide seems to be a bit ambiguous about
whether RabbitMQ should be configured active/standby or
active/active...both methods are described.

Has anyone tried using active/active with mirrored queues as recommended
by the RabbitMQ developers? If so, what problems did you run into?

Thanks,
Chris

On 12/09/14 04:15, Chris Friesen wrote:
Hi,

The OpenStack high availability guide seems to be a bit ambiguous about
whether RabbitMQ should be configured active/standby or
active/active...both methods are described.

Has anyone tried using active/active with mirrored queues as recommended
by the RabbitMQ developers? If so, what problems did you run into?

Thanks,
Chris


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

Hi Chris,

We do Active/Active RabbitMQ with mirrored queues and Precise/Havana.
We faced a lot of failover problems where clients weren't figuring out
that their connections were dead. The TCP keepalives mentioned in the
following bug seemed to help a lot.
https://bugs.launchpad.net/nova/+bug/856764 The moral of our story is
to make sure you are monitoring the sanity of your agents.
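
For reference, the pieces involved are roughly these (a mirrored-queue
policy on the Rabbit side plus the oslo/nova options; hostnames are
placeholders):

rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'

# nova.conf (and the other services talking to Rabbit)
rabbit_hosts = rabbit1:5672,rabbit2:5672
rabbit_ha_queues = True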

Cheers,
James

--
James Dempsey
Senior Cloud Engineer
Catalyst IT Limited
+64 4 803 2264
--

[Openstack-operators] OpenStack Swift configuration

Hi,

We're trying to evaluate OpenStack Swift and would like to understand the best/optimal settings for a swift cluster with the following hardware:

  • Three dual-socket 24-core Intel Xeon servers with 64GB DRAM

  • 5 SSDs for storage cluster

I configured the swift-proxy-server on one server and the storage cluster (i.e. account, container and object servers) on another server with 5 SSDs.
Tweaking the number of workers in the proxy and object servers gives some improvement in GET op/s, but latencies increase as well.
For instance, 3500 op/s at a latency of 18 ms (with 8 proxy workers and 16 object workers) vs 7200 op/s at a latency of 35 ms (with 40 proxy workers and 40 object workers). Please note that it's a pretty small workload and it's likely most of the requests are served from memory (filesystem cache).
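
For context, the worker counts mentioned above are just the [DEFAULT]
workers settings in the respective configs, e.g.:

# /etc/swift/proxy-server.conf
[DEFAULT]
workers = 16

# /etc/swift/object-server.conf (similarly account-server.conf and container-server.conf)
[DEFAULT]
workers = 16

Apply with swift-init proxy-server restart and swift-init object-server restart.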

Couple of questions based on my understanding so far.

  1. Is it advisable to enable HT on proxy server and/or storage servers?

  2. I read that a single proxy-server worker can handle only 1024 simultaneous requests and it's good to assign one worker per CPU core. So, would that mean that theoretically the maximum requests that can be serviced simultaneously is 24,576 (on a 24-core server)? If yes, is there any way to go beyond this number?

  3. Is it okay to run container, account and object servers on the same server? Should they each be assigned separate disks?

  4. What's the optimal #workers for container, account and object servers? Is it equivalent to #CPU cores on the server?

  5. When benchmarking with a particular IO size, should I be modifying chunk sizes for proxy and object servers accordingly?

Highly appreciate any inputs and tuning guidelines/suggestions in this regard.

Thanks,
Sushma


PLEASE NOTE: The information contained in this electronic mail message is intended only for the use of the designated recipient(s) named above. If the reader of this message is not the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify the sender by telephone or e-mail (as shown above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies).


[Openstack-operators] Cinder + FC + Hitachi storage 110

Hi experts,

I intend to deploy OpenStack Icehouse using:
1. My existing Hitachi Storage 110.
2. VMs going directly to the storage instead of going through Cinder.
3. The FC protocol.
When I check https://wiki.openstack.org/wiki/CinderSupportMatrix
I only see HUS with Havana and iSCSI.

So please help me:
1. Which OpenStack version can I use with Hitachi and the FC protocol?
2. In addition, how can I configure OpenStack to make VMs go directly
to external storage (using iSCSI)? Because if all storage traffic goes through
Cinder, Cinder will be overloaded.

Thanks guys,

There are really two answers to your 2nd question.
First, when booting new VMs, Cinder doesn't come into play unless you're explicitly selecting "Boot from Volume". Typically, the default is to use "Ephemeral Instances", which are simply file-backed virtual disks on your hypervisor host. If you wanted to leverage your Hitachi for this, you could export a LUN to your compute node, mount it under /var/lib/nova/instances, and all the virtual disk files would live there.

I wouldn't suggest this personally, because it limits the flexibility of nova compute. You could also use your storage system with an NFS export, which would be usable by more compute nodes.

That's the first answer, regarding VMs on storage.

Cinder is the equivalent of Amazon's EBS; it's virtual block storage for your VMs. And while you can use it to run VMs on, it's not the primary use of it, nor is it a requirement.
The original versions used only iSCSI, NFS was added later, and FC has been showing up in certain drivers (the EMC driver has FC).
If you want to use your Hitachi for Cinder, you may want to check with your vendor to see if that's in the works. Typically backend storage vendors write drivers for Cinder (as with NetApp and EMC).
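
A minimal sketch of the ephemeral-on-NFS variant mentioned above (the
export name and server are placeholders; instances_path is the nova
option that points at that directory and defaults to the value shown):

# on each compute node
mount -t nfs storage01:/export/nova_instances /var/lib/nova/instances

# /etc/nova/nova.conf
instances_path = /var/lib/nova/instances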

On Sep 11, 2014, at 10:51 AM, Tong Manh Cuong wrote:

Hi experts,

I intend to deploy OpenStack icehouse using
1. My existing Hitachi Storage 110.
2. VM go directly to Storage instead of going through cinder.
3. FC Protocol.
When I check: https://wiki.openstack.org/wiki/CinderSupportMatrix
I just saw HUS with Havana and iSCSI.

So please help me:
1. Which OpenStack version I can use with Hitachi with FC Protocol.
2. In addition, how can I configure OpenStack to make VM directly go
to external Storage (use iSCSI). Because if all Storage traffic go to
Cinder, Cinder will be overloaded.

Thanks guys,


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Kernel and KVM version

Hi,

Some compute nodes are frequently having kernel panics and, as far as I can
see, the kernel panics are related to KVM.

Do you guys have any recommendation about the kernel and KVM versions to be
used in a production environment?

Thanks,

Flávio

More info, please!
What distro? What kernel version? What version of KVM? What version of OpenStack?

On Sep 11, 2014, at 12:26 PM, Flávio Ramalho <f.ramalhoo at gmail.com> wrote:

Hi,

Some compute nodes are frequently having kernel panic and, as far as I see, the kernel
panics are related with KVM.

Do you guys have any recommendation about the kernel and KVM version to be used in
an production environment?

Thanks,

Flávio


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Nova HTTPConnectionPool Error

Hi there,
After a successful installation of both keystone and glance, my Nova
service didn't work. The following error occurred when I executed
'nova list':


ERROR: HTTPConnectionPool(host='localhost', port=8774): Max retries
exceeded with url: /v2/19934884vr78as87437483bb1/servers/detail
(caused by : [errno 111] Connection Refused)

Can someone help me to fix it?
Thanks in advance.

I fixed it. But now in my nova-compute.log a new error message is shown:
unexpected error while running command. command: sudo nova-rootwrap
/etc/nova/rootwrap.conf iptables-restore -c
exit code: 2
stdout: ''
stderr: "iptables-restore v1.4.21: iptables-restore: unable to
initialize table 'nat'

How can I fix it?
My iptable_filter kernel module is also loaded.
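
One thing worth checking for the "unable to initialize table 'nat'" error:
the nat table has its own kernel module, separate from iptable_filter.
Something like this may be all that's needed (Ubuntu-style paths assumed):

modprobe iptable_nat
lsmod | grep iptable               # should now list iptable_filter and iptable_nat
echo iptable_nat >> /etc/modules   # so it survives a reboot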

On 9/12/14, Razique Mahroua <razique.mahroua at gmail.com> wrote:
Check your nova.conf to make sure:
A- You are not using any credentials
B- You are and they match the ones you are using for RabbitMQ
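
In other words, the relevant nova.conf bits look something like this
(defaults shown; they must match what rabbitmqctl list_users reports):

[DEFAULT]
rabbit_host = localhost
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = guest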

On Sep 11, 2014, at 13:27, Hossein Zabolzadeh wrote:

My nova-compute.log contains:
Connecting to AMQP server on localhost:5672
ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server
localhost:6572 closed the connection. Check login credentials: Socket
closed

On 9/12/14, Razique Mahroua <razique.mahroua at gmail.com> wrote:

Hi, look into /var/log/nova/nova-compute.log to understand why the
service isn't started!

On Sep 11, 2014, at 13:16, Hossein Zabolzadeh
wrote:

Hi there,
After successful installation of both keystone and glance, my Nova
service didn't work. The following error was occured when I executed:
'nova list'


ERROR: HTTPConnectionPool(host='localhost', port=8774): Max retries
exceeded with url: /v2/19934884vr78as87437483bb1/servers/detail
(caused by : [errno 111] Connection Refused)

Can someone help me to fix it?
Thanks in advance.


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

[Openstack-operators] Visualization & Control Information for OpenStack operators

Hello OpenStack Operators,

I would like to ask if anyone sees a need for full-scale visualization via
gaming quality 360-degree graphics as well as immediate control of all your
OpenStack resources, including interdependencies. Please let me know if
this sounds interesting to achieve. We would like to help.

Stacey King

+1 916 206 7860

sking at real-status.com

Real Status Ltd.

www.real-status.com

That's pretty neat, hadn't seen this before. Looks like it would be handy for a NOC/Ops team.

On Sep 11, 2014, at 3:31 PM, Stacey King wrote:

Hello OpenStack Operators,

I would like to ask if anyone sees a need for full-scale visualization via gaming quality 360-degree graphics as well as immediate control of all your OpenStack resources, including interdependencies. Please let me know if this sounds interesting to achieve. We would like to help.

Stacey King
+1 916 206 7860
sking at real-status.com
Real Status Ltd.
www.real-status.com


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Problems to create EC2 credentials in Havana

Hi all

I'm facing difficulties creating the EC2 admin credentials. I've made
the following attempts:

1) Exporting OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT
/keystone user-list ---> OK//
//keystone ec2-credentials-create
--tenant-id=6b30ac57ffa64b19a82ab7d0ca046aad
--user-id=01ea712f08834cffaca81ec741797f13//
//WARNING: Bypassing authentication using a token & endpoint
(authentication credentials are being ignored).//
//Invalid OpenStack Identity credentials./

So, I decided to unset OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT and try 2).

2) Exporting OS_AUTH_URL, OS_PASSWORD
/keystone user-list and keystone ec2-credentials-create report the same
error
Unable to establish connection to https:// ..../.

I've already configured /admin_user/ and /admin_password/ inside
keystone.conf but it doesn't take effect.

Any idea?

Regards
Miguel.

--
/Miguel Angel Díaz Corchero/
/System Administrator / Researcher/
/c/ Sola nº 1; 10200 TRUJILLO, SPAIN/
/Tel: +34 927 65 93 17 Fax: +34 927 32 32 37/

CETA-Ciemat logo http://www.ceta-ciemat.es/


Confidencialidad:
Este mensaje y sus ficheros adjuntos se dirige exclusivamente a su destinatario y puede contener información privilegiada o confidencial. Si no es vd. el destinatario indicado, queda notificado de que la utilización, divulgación y/o copia sin autorización está prohibida en virtud de la legislación vigente. Si ha recibido este mensaje por error, le rogamos que nos lo comunique inmediatamente respondiendo al mensaje y proceda a su destrucción.

Disclaimer:
This message and its attached files is intended exclusively for its recipients and may contain confidential information. If you received this e-mail in error you are hereby notified that any dissemination, copy or disclosure of this communication is strictly prohibited and may be unlawful. In this case, please notify us by a reply and delete this email and its contents immediately.



Great! Simple and effective

Thanks
Miguel.

El 13/09/14 03:42, Anne Gentle escribió:

The easiest way to get all the creds you need is to download them from
the dashboard as described in
http://docs.openstack.org/openstack-ops/content/lay_of_the_land.html#get_creds

You get cacert.pem, cert.pem, ec2rc.sh, and pk.pem in a bundle that way.

On Fri, Sep 12, 2014 at 2:04 AM, Miguel A Diaz Corchero
<miguelangel.diaz at externos.ciemat.es
<mailto:miguelangel.diaz at externos.ciemat.es>> wrote:

Hi all

I'm facing difficulties to create the EC2 admin credentials. I've
done the following tries:

1) Exporting OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT
/keystone user-list ---> OK//
//keystone ec2-credentials-create
--tenant-id=6b30ac57ffa64b19a82ab7d0ca046aad
--user-id=01ea712f08834cffaca81ec741797f13//
//WARNING: Bypassing authentication using a token & endpoint
(authentication credentials are being ignored).//
//Invalid OpenStack Identity credentials./

So, I decided to unset OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT
and try 2).

2) Exporting OS_AUTH_URL, OS_PASSWORD
/keystone user-list and keystone ec2-credentials-create report the
same error
Unable to establish connection to https:// ..../.

There's a difference between
"Invalid OpenStack Identity credentials" and "Unable to establish
connection to..." -- I see you triple-checked the configuration, is
the keystone service running?

I've already configured /admin_user/ and /admin_password/ inside
keystone.conf but it doesn't take effect.

You do have to restart the service to get the configuration to change.
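
e.g., depending on the distro packaging, something like:

service openstack-keystone restart    # RHEL/CentOS
service keystone restart              # Ubuntu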

Could be a few things... but definitely look into the dashboard
approach for getting those EC2 creds.

Anne

Any idea?

Regards
Miguel.


--
/Miguel Angel Díaz Corchero/
/*System Administrator / Researcher*/
/c/ Sola nº 1; 10200 TRUJILLO, SPAIN/
/Tel: +34 927 65 93 17 Fax: +34 927 32 32 37/

CETA-Ciemat logo

/
----------------------------
Confidencialidad:
Este mensaje y sus ficheros adjuntos se dirige exclusivamente a su destinatario y puede contener información privilegiada o
confidencial. Si no es vd. el destinatario indicado, queda notificado de que la utilización, divulgación y/o copia sin autorización está
prohibida en virtud de la legislación vigente. Si ha recibido este mensaje por error, le rogamos que nos lo comunique
inmediatamente respondiendo al mensaje y proceda a su destrucción.
  
Disclaimer:
This message and its attached files is intended exclusively for its recipients and may contain confidential information. If you received
this e-mail in error you are hereby notified that any dissemination, copy or disclosure of this communication is strictly prohibited and
may be unlawful. In this case, please notify us by a reply and delete this email and its contents immediately.
----------------------------
/


_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
<mailto:OpenStack-operators at lists.openstack.org>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Grizzly: live-migration with attached volume?

I am fairly new to Openstack having just inherited a Grizzly system with
NFS storage. "nova live-migration" is working unless the VM has an attached
volume. When it doesn't work, the VM does move but it is not possible to
connect to it and the CPU load on that VM goes up to the number of cores
assigned to it. That is, for a VM with 8 cores in it, the load for that
qemu-system-x86_64 process goes up to about 800%. Is this a known problem
with Grizzly?

Here is a description of our system:

three machines set up with HA for Quantum (with Open vSwitch), Cinder,
RabbitMQ, and Nova. These nodes are all compute nodes too. We also have
another machine that is just compute.

Cinder and Glance storage is all NFS connected to each machine with the
same mount points.

The command I am using to do the migration is:

nova live-migration $UUID compute3

In some cases the migration has worked when a volume has been attached.
Most of the time however, it does not work.

Thanks,

Steve

--


Steve Cousins Supercomputer Engineer/Administrator
Advanced Computing Group University of Maine System
244 Neville Hall (UMS Data Center) (207) 561-3574
Orono ME 04469 steve.cousins at maine.edu

On Fri, Sep 12, 2014 at 1:44 PM, Stephen Cousins
<steve.cousins at maine.edu> wrote:

Does anyone live-migrate VM's successfully when volumes are attached?

My setup is significantly different so probably not terribly helpful,
but to answer the question I did do live migrations with Volumes
attached on my Grizzly cloud (since upgraded to Icehouse). We don't
have shared storage for the ephemeral disks and our Volumes are iSCSI
attached (from an equallogic san), so something like:

nova live-migration --block-migrate []

I wouldn't say it was 100%, but the volume piece wasn't a problem. Well,
once I figured out that in my case I needed to flip some bits on the SAN
side to allow multiple logins to the same target, since the source and
destination are both connected during a migration. I wouldn't
expect that to be an issue with NFS, though.

-Jon
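For reference, combining that flag with the command quoted earlier in the thread would look roughly like this (a sketch only; whether --block-migrate is appropriate depends on whether your ephemeral disks are on shared storage):

# --block-migrate also copies the ephemeral disk, for setups without shared instance storage
nova live-migration --block-migrate $UUID compute3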

[Openstack-operators] segmentation of management and data plane

Hey everyone - we've been doing some OpenStack deployments on different
datacenter architectures in our lab. Many of the architectures we work
with divide the management plane from the data plane by VRF. I.e. there
is no routing between tenant VM traffic and the management interfaces of
the datacenter infrastructure. Applying this same model to OpenStack we
put the OpenStack services onto the management network. Tenant users are
given access to the management VRF so they can utilize the APIs. Tenant
VM traffic is kept in the data VRF. That works pretty well with a couple
exceptions.

So the first exception we noticed was the nova metadata service. Here,
the tenant VM needs access to an API on the management VRF. You can solve
this via config drive, or through neutron's metadata proxy. Not too bad.
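As a rough illustration of those two workarounds (option and file names are the usual nova/neutron ones for this era, and the flavor/image/VM names are placeholders, so verify against your release):

# 1) config drive: metadata is attached to the instance as a small disk,
#    so the guest never needs to reach the metadata API over the network
nova boot --flavor m1.small --image cirros --config-drive true my-vm
#    (or force it for all instances with force_config_drive = always in nova.conf)
# 2) neutron metadata proxy: the dhcp/l3 agents proxy 169.254.169.254 to nova-api,
#    e.g. enable_isolated_metadata = True in dhcp_agent.ini on the network node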

The second exception we hit to this policy is Swift. Swift APIs actually
need to be accessed by Tenant VMs. Not a big deal, just put the swift
servers into the data management VRF. But if you are wanting to use Ceph,
and have Ceph be your storage for Swift, Cinder, and Glance, now
you've got a problem b/c you need access to the cluster from both VRFs.

I'm just working through this stuff in my lab, so I'm hoping to get some
feedback from the real world. Has anyone setup their OpenStack cluster
with the management and data planes segmented by VRF? And did you have to
run into this or any other issue of needing traffic to cross between VRFs?
If so, how did you work around it? Dual-homed servers? Fusion router?
Something more elegant?

Thx,
britt

[Openstack-operators] OpenStack Community Weekly Newsletter (Sep 5 – 12)

  Dox, a tool that runs Python (or other) tests in a Docker container
  <http://blog.chmouel.com/2014/09/08/dox-a-tool-that-run-python-or-others-tests-in-a-docker-container/>

What if there was a tool that allowed you to use Docker containers to do
the automatic testing for OpenStack? The idea of dox is to behave somewhat
like the tox tool (https://pypi.python.org/pypi/tox), but to run the tests
in Docker containers instead.

  What's Coming in OpenStack Networking for the Juno Release
  <http://redhatstackblog.redhat.com/2014/09/11/whats-coming-in-openstack-networking-for-juno-release/>

As the Juno development cycle ramps up, now is a good time to review
some of the key changes we saw in Neutron during this exciting cycle and
have a look at what is coming up in the next upstream major release,
which is set to debut in October
(https://wiki.openstack.org/wiki/Juno_Release_Schedule).

  Horizon's new features introduced in the Juno cycle
  <http://www.matthias-runge.de/2014/09/08/horizon-juno-cycle-features/>

Matthias Runge (http://www.matthias-runge.de/) gives an overview of what
happened during Horizon's Juno development cycle. Horizon's blueprints
page on Launchpad (https://blueprints.launchpad.net/horizon/juno) lists
31 implemented new features, which may be grouped into sub-topics:
Sahara-Dashboard, RBAC, JavaScript unbundling, look-and-feel
improvements and more. If you're curious about what's coming, read the
full post:
http://www.matthias-runge.de/2014/09/08/horizon-juno-cycle-features/.

The Road To Paris 2014 – Deadlines and Resources

During the Paris Summit there will be a working session for the Women of
OpenStack to frame up more defined goals and line out a blueprint for
the group moving forward. We encourage all women in the community to
complete this very short survey
https://www.surveymonkey.com/s/V39BL7H to provide input for the group.

Ask OpenStack https://ask.openstack.org/ is the go-to destination for
OpenStack users. Interesting questions waiting for answers:

Robb Romans
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person04cd05b4-6485-41ba-b9cc-1d1874f35e8a
Jim West
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person50159549-86c3-46da-9859-643c3ee12285

Rob Cresswell
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person39683670-586a-4441-906a-9e0a8167f5e4
Huai Jiang
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person830e7ce5-805c-4f8c-ae25-7e7e7e4a3804

Martin André
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person754ac0db-adaa-4acb-a599-2420887b3c81
Abhishek Asthana
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person9e65a32b-e83b-46b5-b798-f892069b3fe5

Tony Campbell
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person719031a2-0413-40f1-ae99-c0bfaea33b89
Zura Isakadze
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person3dce7adf-6de4-4ced-adcd-9dcda9e00c12

Srinivas Sakhamuri
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personfb98e7f6-cc53-4004-b7fe-45d821bfbc7b
Robb Romans
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person04cd05b4-6485-41ba-b9cc-1d1874f35e8a

Isaias
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personf4887618-e19b-44e2-aeff-ca97a85093f7
Jeremy Moffitt
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person93eb4929-a375-4a67-9ff9-f19b9c3f2289

Stig Telfer
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person5c0fc204-75f5-4f9d-a9a6-7b5760254d36
Eduard Biceri-Matei
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person471cfde8-f372-4662-ac72-c50aef642872

Sarvesh Ranjan
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personb0d6c5c3-1e05-4999-89cc-0e70e7d133de
Tom Barron
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person5e663ab2-0967-40d6-86cf-f2d288b743c1

Hongbin Lu
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person9374b038-f163-4027-8274-a9177a5017bd
Szymon Wróblewski
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person9fb7806d-d054-4eb7-ba63-05586113366d

Timothy Okwii
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person9b2c59ce-386e-4b70-ba8b-017afd23c383
Saksham Varma
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person1d1318f3-96b4-4cb3-8d3e-94df766c8414

Thomas Järvstrand
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personebd0c704-a990-419d-b718-715ebf039ee7
Mike Fedosin
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personf2f8df54-d2dc-412b-92af-924c7501d819

Kyle Stevenson
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person955500ce-eeb1-4330-8720-8fb9543d1f89

Komei Shimamura
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personab0f8be6-d154-4514-b0bd-04c473164320

Dave Chen
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personf4518217-228c-42c7-941e-15b4c662936c

Aidan McGinley
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person4507f987-e2d7-4a2b-870d-116708f1cab5

Rishabh
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personae9959f5-fa57-4bcb-b84c-5d9a1d6a559b

The weekly newsletter is a way for the community to learn about all the
various activities occurring on a weekly basis. If you would like to add
content to a weekly update or have an idea about this newsletter, please
leave a comment.

[Openstack-operators] Questions about waitlisting/queuing provisioning requests due to capacity

Hi All,

Hopefully, this is the appropriate group to post these questions. My
organization operates a public cloud which is always at capacity due to the
demand, and we're looking at OpenStack approaches to handle the capacity
issue.

  • What successful strategies have other groups used to deal with highly
    utilized clouds? Obviously, increasing a monetary price for resources is
    one approach, but barring that, what are other methods?
  • Are there any existing schedulers, extensions, openstack projects, or
    any forthcoming blueprints that provide the ability to wait list or queue a
    request if resources are at capacity until resources are available to
    satisfy that request. I'm particularly interested in nova.
  • Are there any commercial products that provide this wait list or
    queuing functionality?

I came across Blazar, and it (and the notion of reservations) is somewhat
related to wait lists, but not precisely:

https://wiki.openstack.org/wiki/Blazar

One particular statement for delayed or scheduled reservations that
suggests a mismatch for wait lists or queues is:

"In this reservation type lease is created successfully if Blazar thinks
there will be enough resources to process provisioning later (otherwise
this request returns failure status)"

What if the cloud is always full?

Thanks in advance for the responses.

Edwin

There were some studies done by INFN into how this could work. See http://indico.cern.ch/event/272791/session/0/contribution/6/material/slides/1.pdf for a description. Two approaches, with a modified scheduler or potentially a blazar lease type.

This is a particular challenge in the research and HPC private clouds where high levels of utilisation are very typical.

Tim


[Openstack-operators] Install Guide for Ironic driver

Hello,
can someone point me to a documentation URL to install the Ironic driver
in an Icehouse release?

I can't find anything here:
http://docs.openstack.org/icehouse/install-guide/install/yum/content/

thank you,

 Alvise


Thank you Anne,
I've a question about that doc. It says in the middle:

"Configure Compute Service to use the Bare Metal Service
The Compute Service needs to be configured to use the Bare Metal
Service's driver. The configuration file for the Compute Service is
typically located at /etc/nova/nova.conf. This configuration file must
be modified on the Compute Service's controller nodes and compute nodes."

I'm puzzled: if I configure the controller node's nova.conf using

compute_driver=nova.virt.ironic.IronicDriver

I presume that I can then no longer instantiate 'regular' virtual machines,
but only real nodes. Am I wrong?

Can the two methods (virtual and baremetal) be mixed in the same cloud?
I mean: instantiate virtual machines on some compute nodes, and baremetal
nodes using other compute nodes?
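For what it's worth, one common pattern (sketched below; host names are placeholders, and the authoritative settings are in the Ironic install guide) is to dedicate one or more nova-compute services to Ironic and leave the others on libvirt, since compute_driver is set per nova-compute process:

# nova.conf on the compute node(s) dedicated to baremetal:
#     compute_driver = nova.virt.ironic.IronicDriver
# nova.conf on the compute nodes that keep running regular VMs:
#     compute_driver = libvirt.LibvirtDriver
# then steer baremetal flavors to the right hosts, e.g. with a host aggregate
nova aggregate-create baremetal
nova aggregate-add-host baremetal ironic-compute-1    # placeholder host name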

thank you,

 Alvise

On 09/16/2014 03:40 PM, Anne Gentle wrote:
Hi Alvise,

You want a more specific guide,
http://docs.openstack.org/developer/ironic/deploy/install-guide.html
is for you.

Anne Gentle
Content Stacker
anne at openstack.org


[Openstack-operators] Cinder block storage HA

Hi guys, I'm trying to put some HA on our cinder service; we have the
following scenario:

-Real backends: an EMC Clariion (SATA drives) and an HP StoreVirtual P4000 (SAS
drives); these two backends export two big LUNs to our (one and only, right
now) cinder server.
-Once these big LUNs are imported in the cinder server, two different VG
are created for two different cinder LVM drivers (cinder-volumes-1 and
cinder-volumes-2). This way I have two different storage resources to give
to my tenants.

What I want is to deploy a second cinder server to act as failover of the
first one. Both servers are identical. So far I'm running a few tests with
isolated VMs.

-I installed corosync+pacemaker in 2 VMs and added a virtual IP (a rough
sketch of that resource follows this list).
-Imported a LUN into the VMs with iSCSI and created a VG.
-Exported an LV with tgt. More or less the same scenario we have in
production.
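For context, the VIP piece of a test setup like this usually boils down to a single pacemaker resource, roughly (crm shell syntax; the address, netmask and NIC are illustrative):

# a floating IP that pacemaker moves to whichever node is still alive
crm configure primitive cinder-vip ocf:heartbeat:IPaddr2 \
    params ip=192.168.0.100 cidr_netmask=24 nic=eth0 \
    op monitor interval=10s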

If one of the VMs dies, the second one picks up the virtual IP through which
tgt is exporting the LUN, and the iSCSI session doesn't die. Here you can see
part of the logs on the host where the LUN is being imported:

Sep 16 14:29:50 borrar-nfs kernel: [86630.416160] connection1:0: ping
timeout of 5 secs expired, recv timeout 5, last rx 4316547395, last ping
4316548646, now 4316549900
Sep 16 14:29:50 borrar-nfs kernel: [86630.418938] connection1:0: detected
conn error (1011)
Sep 16 14:29:51 borrar-nfs iscsid: Kernel reported iSCSI connection 1:0
error (1011) state (3)
Sep 16 14:29:53 borrar-nfs iscsid: connection1:0 is operational after
recovery (1 attempts)

This test was really simple, just one 1GB LUN but it worked ok, even when
the failover was tested during a writing operation.

So it seems to be a good-enough solution so far, but there are a few things
that worry me a bit:

-Timeouts? How much time do I have to detect the problem and move the IP to
the new node before the iSCSI connections die? I think I could play a
little bit with timeo.noop_out_timeout in iscsid.conf (see the sketch after
this list).
-What if there was a write operation going on while a node failed, and what
if that operation never reached the real backends? Could I end up with
inconsistencies in the volume FS? Any recommendations?
-If I create a volume in cinder, the proper target file is created in
/var/lib/cinder/volumes/volume-*, but I need the file to be created on
both cinder nodes in case one of them fails. What would be a proper solution
for this? Shared storage for the directory? SVN?
-Should both servers be running tgt at the same time, or should I only
start tgt on the failover server once the virtual IP has moved?
Any comments or suggestions will be more than appreciated. Thanks!

--
Pavlik Salles Juan José
Blog - http://viviendolared.blogspot.com

Hi Abel, I thought about trying it, but we had MANY performance problems
with the EMC because of running too many LUNs; that's why we'd like to avoid
that scenario. It might seem the best solution, but we don't want to go that
way again.

2014-09-16 15:20 GMT-03:00 Abel Lopez :

Have you tried using the native Emc drivers? That way cinder only acts as
a broker between your instances and the storage back end, and you don't
need to worry about your cinder-volume service being HA. (As much)


--
Pavlik Salles Juan José
Blog - http://viviendolared.blogspot.com

[Openstack-operators] InstanceInfoCacheNotFound: Info cache for instance X could not be found.

Hi,
I'm observing exactly the same problem.
But in my case it is happening every time a VM is deleted.

I'm using icehouse.

Any idea?

regards,
Belmiro

On Fri, Jul 18, 2014 at 10:22 AM, Alex Leonhardt <aleonhardt.py at gmail.com>
wrote:

Hi All,

I keep seeing this in the logs when deleting an instance (and it takes
significantly longer than on other hypervisors):

2014-07-18 09:16:50.453 23230 ERROR nova.virt.driver [-] Exception
dispatching event <nova.virt.event.LifecycleEvent object at 0x3382650>:
Info cache for instance {uuid goes here} could not be found.
Traceback (most recent call last):

File "/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line
597, in objectdispatch
return getattr(target, method)(context, *args, **kwargs)

File "/usr/lib/python2.6/site-packages/nova/objects/base.py", line 151,
in wrapper
return fn(self, ctxt, *args, **kwargs)

File "/usr/lib/python2.6/site-packages/nova/objects/instance.py", line
500, in refresh
self.info_cache.refresh()

File "/usr/lib/python2.6/site-packages/nova/objects/base.py", line 151,
in wrapper
return fn(self, ctxt, *args, **kwargs)

File
"/usr/lib/python2.6/site-packages/nova/objects/instanceinfocache.py",
line 103, in refresh
self.instance_uuid)

File "/usr/lib/python2.6/site-packages/nova/objects/base.py", line 112,
in wrapper
result = fn(cls, context, *args, **kwargs)

File
"/usr/lib/python2.6/site-packages/nova/objects/instanceinfocache.py",
line 70, in getbyinstanceuuid
instance
uuid=instance_uuid)

InstanceInfoCacheNotFound: Info cache for instance {uuid goes here} could
not be found.

I had a look around and there used to be a bug that got fixed. I can't seem
to see this happening on a different HV (let's call it stack1) that runs
the same version of OpenStack; however, that HV (stack1) was provisioned
with packstack, whereas the one reporting the error above (stack2) was
provisioned via my own puppet module/manifest ...

When deleting the VM from stack2, it also takes significantly longer than
on stack1 ...

Anyone got any ideas? I don't see the same happening on stack1.

Thanks all!
Alex



This is so long ago I can't even remember how it got fixed. FWIW, we're
running Icehouse quite happily now; it may have been some API endpoint
setting that was missing or wrong. Could you post your nova config so I
can compare it to what we use right now?

Alex


[Openstack-operators] nova-network manual installation

Hi all,

I have an OpenStack Icehouse environment with 1 controller and 1 compute node running all the compute services, including nova-network.
The compute node is using VMware as a hypervisor.
I am able to boot instances with no problem, but my instances don't get any IP address.
The only time I am able to get an IP address onto them is when I create a network using nova-manage and then use the boot command with the --net-id flag, assigning it the network I just created.
I tried doing it directly in the nova.conf but it doesn't work.
This is my networking configuration section in my nova.conf file:

network_manager = nova.network.manager.FlatDHCPManager
network_size = 254
allow_same_net_traffic = False
send_arp_for_ha = True
share_dhcp_address = True
force_dhcp_release = True
flat_network_bridge = br100
flat_interface = ens224
vlan_interface = ens192
public_interface = ens192
default_floating_pool = public
dhcpbridge_flagfile = /etc/nova/nova.conf
fixed_range =
enabled_apis = ec2,osapi_compute
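For reference, the nova-manage plus --net-id workaround described above looks roughly like this (the label, CIDR and UUID are placeholders, and the flag names should be double-checked against your Icehouse nova-manage):

# create a flat network that nova-network will put on br100
nova-manage network create --label private --fixed_range_v4 10.0.0.0/24 \
    --bridge br100 --bridge_interface ens224 --num_networks 1 --network_size 254
# boot an instance attached explicitly to that network
nova boot --flavor m1.small --image cirros --nic net-id=<network-uuid> test-vm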

These are my compute node's NICs:
ens192 HWaddr 00:50:56:84:4c:2a
inet addr:10.192.168.130 Bcast:10.64.95.255 Mask:255.255.255.0
inet6 addr: fe80::250:56ff:fe84:4c2a/64 Scope: link
UP BROADCAST RUNNING MULTICAST

ens224 HWaddr 00:50:56:84:96:48
UP BROADCAST RUNNING MULTICAST

br100 HWaddr 00:50:56:84:66:72
inet addr:10.0.0.1 Bcast:10.0.0.255 Mask:255.255.255.0
inet6 addr: fe80::250:56ff:fe84:6672/64 Scope: link
UP BROADCAST RUNNING MULTICAST

The br100 is connected to a br100 portgroup I created in the vCenter.
The ens192 is connected to a management portgroup I created in the vCenter.

I'm really frustrated, I tried everything, please assist.

Regards,
Ohad

[Openstack-operators] Glance client installation problem on OpenSuse 13.1

Hi guys, I am new to OpenStack installation. When I install the OpenStack
glance client on openSUSE 13.1, I run into the problem below.

controller:~ # pip install python-glanceclient
Downloading/unpacking python-glanceclient
  Downloading python-glanceclient-0.14.0.tar.gz (118kB): 118kB downloaded
  Running setup.py egg_info for package python-glanceclient
    [pbr] Excluding argparse: Python 2.6 only dependency
    [pbr] Processing SOURCES.txt
    warning: LocalManifestMaker: standard file '-c' not found
    warning: no previously-included files found matching '.gitignore'
    warning: no previously-included files found matching '.gitreview'
    warning: no previously-included files matching '*.pyc' found anywhere in distribution
Requirement already satisfied (use --upgrade to upgrade): pbr>=0.6,!=0.7,<1.0 in /usr/lib/python2.7/site-packages (from python-glanceclient)
Requirement already satisfied (use --upgrade to upgrade): Babel>=1.3 in /usr/lib/python2.7/site-packages (from python-glanceclient)
Requirement already satisfied (use --upgrade to upgrade): PrettyTable>=0.7,<0.8 in /usr/lib/python2.7/site-packages (from python-glanceclient)
Requirement already satisfied (use --upgrade to upgrade): python-keystoneclient>=0.9.0 in /usr/lib/python2.7/site-packages (from python-glanceclient)
Downloading/unpacking pyOpenSSL>=0.11 (from python-glanceclient)
  Downloading pyOpenSSL-0.14.tar.gz (128kB): 128kB downloaded
  Running setup.py egg_info for package pyOpenSSL
    warning: no previously-included files matching '*.pyc' found anywhere in distribution
    no previously-included directories found matching 'doc/_build'
Requirement already satisfied (use --upgrade to upgrade): requests>=1.1 in /usr/lib/python2.7/site-packages (from python-glanceclient)
Downloading/unpacking warlock>=1.0.1,<2 (from python-glanceclient)
  Downloading warlock-1.1.0.tar.gz
  Running setup.py egg_info for package warlock
Requirement already satisfied (use --upgrade to upgrade): six>=1.7.0 in /usr/lib/python2.7/site-packages (from python-glanceclient)
Requirement already satisfied (use --upgrade to upgrade): pip>=1.0 in /usr/lib/python2.7/site-packages (from pbr>=0.6,!=0.7,<1.0->python-glanceclient)
Requirement already satisfied (use --upgrade to upgrade): pytz>=0a in /usr/lib/python2.7/site-packages (from Babel>=1.3->python-glanceclient)
Requirement already satisfied (use --upgrade to upgrade): iso8601>=0.1.9 in /usr/lib/python2.7/site-packages (from python-keystoneclient>=0.9.0->python-glanceclient)
Requirement already satisfied (use --upgrade to upgrade): netaddr>=0.7.6 in /usr/lib/python2.7/site-packages (from python-keystoneclient>=0.9.0->python-glanceclient)
Requirement already satisfied (use --upgrade to upgrade): oslo.config>=1.2.0 in /usr/lib/python2.7/site-packages (from python-keystoneclient>=0.9.0->python-glanceclient)
Downloading/unpacking cryptography>=0.2.1 (from pyOpenSSL>=0.11->python-glanceclient)
  Downloading cryptography-0.5.4.tar.gz (320kB): 320kB downloaded
  Running setup.py egg_info for package cryptography
    Package libffi was not found in the pkg-config search path.
    Perhaps you should add the directory containing `libffi.pc'
    to the PKG_CONFIG_PATH environment variable
    No package 'libffi' found
    [the libffi message is repeated several times]
    c/_cffi_backend.c:13:17: fatal error: ffi.h: No such file or directory
     #include <ffi.h>
                     ^
    compilation terminated.
    Traceback (most recent call last):
      File "<string>", line 16, in <module>
      File "/tmp/pip_build_root/cryptography/setup.py", line 174, in <module>
        "test": PyTest,
      File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
        _setup_distribution = dist = klass(attrs)
      File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 265, in __init__
        self.fetch_build_eggs(attrs.pop('setup_requires'))
      File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 289, in fetch_build_eggs
        parse_requirements(requires), installer=self.fetch_build_egg
      File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 618, in resolve
        dist = best[req.key] = env.best_match(req, self, installer)
      File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 862, in best_match
        return self.obtain(req, installer) # try and download/install
      File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 874, in obtain
        return installer(requirement)
      File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 339, in fetch_build_egg
        return cmd.easy_install(req)
      File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 630, in easy_install
        return self.install_item(spec, dist.location, tmpdir, deps)
      File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 660, in install_item
        dists = self.install_eggs(spec, download, tmpdir)
      File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 856, in install_eggs
        return self.build_and_install(setup_script, setup_base)
      File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1137, in build_and_install
        self.run_setup(setup_script, setup_base, args)
      File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1125, in run_setup
        raise DistutilsError("Setup script exited with %s" % (v.args[0],))
    distutils.errors.DistutilsError: Setup script exited with error: command 'gcc' failed with exit status 1
    Complete output from command python setup.py egg_info:
    [the same libffi / ffi.h errors and the same traceback are repeated here]
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_root/cryptography
Storing complete log in /root/.pip/pip.log

Please help me. How can I solve this problem?

Thanks All


On 09/18/2014 06:07 AM, mohib at qmail.com.bd wrote:
distutils.errors.DistutilsError: Setup script exited with error: command
'gcc' failed with exit status 1

Because of some dependencies of python-glanceclient you need to install
some development packages; here you are missing gcc and python-devel
(and, judging from the log, the libffi headers).

If you do not have to use the latest version of python-glanceclient, I
think it is better to use the packaged version, as described by Marcus.
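On openSUSE 13.1 that would be roughly the following (package names may differ slightly between repositories):

# build dependencies for the cryptography/cffi chain that pip tries to compile
zypper install gcc python-devel libffi-devel
pip install python-glanceclient
# or, if the packaged version is recent enough for you, skip pip entirely
zypper install python-glanceclient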

Christian.

--
Christian Berendt
Cloud Solution Architect
Mail: berendt at b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

[Openstack-operators] Glance Image Upload error

Hi,

I am facing a glance image upload problem on an OpenStack installation on
openSUSE 13.1. When I run

controller:/tmp/images # glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 \
> --container-format bare --is-public True --progress < cirros-0.3.2-x86_64-disk.img
[==>                           ] 8%

the image upload does not progress past 8%.

Please help me. How can I solve this problem?

Thanks All


Which backend do you use for the glance images?
Check whether you have enough storage space to upload the whole image.
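A quick way to check, assuming the default file store (the paths below are the usual defaults and may differ in your setup):

df -h /var/lib/glance/images          # free space where glance keeps uploaded images
tail -n 50 /var/log/glance/api.log    # the API log usually says why an upload stopped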


[Openstack-operators] Openstack and mysql galera with haproxy

Hello,

Is anyone here using OpenStack with MySQL Galera and haproxy? Have you had
any problems with that?
Today I set up such an HA infrastructure for the database (two MySQL servers
in a Galera cluster, and haproxy on the controller and neutron nodes; haproxy
connects to one of the Galera servers with a round-robin algorithm). Generally
everything is working fine, but I have a few problems:
1. I have a lot of messages like:
WARNING neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server
has gone away: (2006, 'MySQL server has gone away')
2. I have (most on neutron) many errors like:
OperationalError: (OperationalError) (2013, 'Lost connection to MySQL server
during query') 'UPDATE ml2_port_bindings SET vif_type=%s, driver=%s,
segment=%s WHERE ml2_port_bindings.port_id =
3. Also errors:
StaleDataError: UPDATE statement on table 'ports' expected to update 1 row(s);
0 were matched.
4. and errors:
DBDeadlock: (OperationalError) (1213, 'Deadlock found when trying to get lock;
try restarting transaction') 'UPDATE ipavailabilityranges SET first_ip=%s WHERE
ipavailabilityranges.allocation_pool_id =

The SQL queries in the examples are incidental; the same problem occurs with
other queries as well (like deleting ports).
The strange thing is that those problems did not happen when I had one MySQL
server and everything connected to that one server. Do you maybe have the same
problems? Do you know what the reason and the solution could be?


Best regards
Sławek Kapłoński
slawek at kaplonski.pl

Hello,

On Monday, 22 September 2014 at 22:02:26, Sławek Kapłoński wrote:

Hello,

Answers below


Best regards
Sławek Kapłoński
slawek at kaplonski.pl

On Monday, 22 September 2014 at 13:41:51, Jay Pipes wrote:

Hi Peter, Sławek, answers inline...

On 09/22/2014 08:12 AM, Peter Boros wrote:

Hi,

StaleDataError is not given by MySQL, but rather SQLAlchemy. After a
quick look, it seems like SQLAlchemy gets this, if the update updated
different number of rows then it expected. I am not sure what is the
expectation based on, perhaps soembody can chime in and we can put
this together. What is the transaction isolation level you are running
on?

The transaction isolation level is REPEATABLE_READ, unless Sławek has
changed the defaults (unlikely).

For sure I didn't change it

For the timeout setting in neutron: that's a good way to approach it
too; you can even be more aggressive and set it to a few seconds. In
MySQL, making connections is very cheap (at least compared to other
databases), so an idle timeout of a few seconds for a connection is
typical.
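For what it's worth, the knob being discussed is the SQL idle timeout of the OpenStack services themselves; a rough sketch of what changing it looks like (the option name and section vary by release and service, e.g. sql_idle_timeout in older configs versus idle_timeout under [database] in newer ones, and the service names depend on your packaging):

# e.g. in neutron.conf / nova.conf, value in seconds (illustrative):
#   sql_idle_timeout = 30
service neutron-server restart    # restart so the new connection-pool settings take effect
service nova-api restart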

On Mon, Sep 22, 2014 at 12:35 PM, Sławek Kapłoński

wrote:

Hello,

Thanks for your explanations. I thought so, and I have now decreased the
idle connection timeout in neutron and nova. Now, when the master server
comes back to the cluster, in less than one minute all connections are
made to the master node again, because the old connections made to the
backup node are closed. So for now it looks almost perfect, but when I
test the cluster now (with the master node active and all connections
established to this node), in neutron I still sometimes see errors like:
StaleDataError: UPDATE statement on table 'ports' expected to update 1
row(s); 0 were matched.

and also today I found errors like:
2014-09-22 11:38:05.715 11474 INFO sqlalchemy.engine.base.Engine [-]
ROLLBACK 2014-09-22 11:38:05.784 11474 ERROR
neutron.openstack.common.db.sqlalchemy.session [-] DB exception
wrapped.
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session Traceback (most recent
call
last):
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session File
"/usr/lib/python2.7/dist-
packages/neutron/openstack/common/db/sqlalchemy/session.py", line 524,
in
_wrap
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session return f(*args,
**kwargs) 2014-09-22 11:38:05.784 11474 TRACE

From looking up the code, it looks like you are using Havana [1]. The

code in the master branch of Neutron now uses oslo.db, not
neutron.openstack.common.db, so this issue may have been resolved in
later versions of Neutron.

Yes, I'm using Havana and I have no possibility to upgrade quickly to
Icehouse (about the master branch I don't even want to think :)). Are you
telling me that this problem will still exist in Havana and can't be
fixed in that release?

[1]
https://github.com/openstack/neutron/blob/stable/havana/neutron/openstack/common/db/sqlalchemy/session.py#L524

neutron.openstack.common.db.sqlalchemy.session File
"/usr/lib/python2.7/dist-
packages/neutron/openstack/common/db/sqlalchemy/session.py", line 718,
in
flush 2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session return
super(Session,
self).flush(*args, **kwargs)
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session File
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line
1818,
in
flush
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session self._flush(objects)
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session File
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line
1936,
in
_flush
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session
transaction.rollback(_capture_exception=True)
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session File
"/usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line
58, in __exit__
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session
compat.reraise(exc_type,
exc_value, exc_tb)
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session File
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line
1900,
in
_flush
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session
flush_context.execute()
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session File
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line
372, in execute
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session rec.execute(self)
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session File
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line
525, in execute
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session uow
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session File
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line
64, in save_obj
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session table, insert)
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session File
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line
541, in _emit_insert_statements
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session execute(statement,
multiparams)
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session File
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 662,
in
execute
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session params)
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session File
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 761,
in
_execute_clauseelement
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session compiled_sql,
distilled_params
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session File
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 874,
in
_execute_context
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session context)
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session File
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line
1027,
in
_handle_dbapi_exception
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session
util.reraise(*exc_info)
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session File
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 856,
in
_execute_context
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session context)
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session File
"/usr/lib/python2.7/dist-packages/sqlalchemy/connectors/mysqldb.py",
line
60, in do_executemany
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session rowcount =
cursor.executemany(statement, parameters)
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session File
"/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 206, in
executemany
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session r = r +
self.execute(query, a)
2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session TypeError: unsupported
operand type(s) for +: 'int' and 'NoneType'

Hmm, this is just bad coding in the MySQLdb driver, frankly. It is
assuming a call to Cursor.execute() will return an integer, but it can
return None in some circumstances. See code here:

http://sourceforge.net/p/mysql-python/code/ci/8041cc6df636b9c42d52e01b727aa98b43f3632c/tree/MySQLdb/MySQLdb/cursors.py

Note that MySQLdb1 (the above, which is what is packaged in your Linux
distro I believe) is old and buggy. The maintainer has moved on to
MySQLdb2, which has a different call interface in this part of the code.

Ok, I checked that code and I found that in fact the "execute" method
sometimes returns None, and when it is called from executemany there is a
problem. A simple change in the executemany method to:

if not m:
    r = 0
    for a in args:
        iter_r = self.execute(query, a)
        if iter_r:
            r = r + iter_r
    return r

works for me in the tests I have made so far, but I don't know what the
result of that change will be over a longer time period, or why the
execute method returns None in the first place :/

You wrote that MySQLdb2 has a different call interface in this part of the
code. Would changing to MySQLdb2 fix the problem or not? If yes, can you
explain how I can change to it? Should I just install a different package,
or do I need to set something in a config file somewhere? (I'm using
Ubuntu 12.04.)

2014-09-22 11:38:05.784 11474 TRACE
neutron.openstack.common.db.sqlalchemy.session

And I have to investigate why it is happening, because with a single
MySQL server I have no such errors :/

Not sure, frankly. The code is executing many INSERT or UPDATE
statements in a single block. The MySQL connection is clearly getting
borked on one of those attempts and results in the traceback you see
above.

I'm not 100% sure, but on a much bigger cluster with one database server
this problem never happened even once, so I suppose it is somehow related
to Galera, which I now want to use.

Today I checked that the problem always occurs when neutron tries to delete a
row from the ipavailabilityranges table, and when I moved the whole database
back to the old (single) MySQL server the problem was the same. So it is not
related to Galera or haproxy. This problem is strange to me, but I think the
small patch to the python-mysqldb library (as I wrote in a previous email)
solves it :)

best,
-jay


Best regards
Sławek Kapłoński
slawek at kaplonski.pl

On Monday, 22 September 2014 at 11:18:27, Peter Boros wrote:

Hi,

Let me answer this and one of your previous questions in one because
they are related.

Earlier you wrote:

I made such modifications today in my infra and generally it looks
better now. I don't see deadlocks. But I have one more problem with
that: generally it works fine when the main node is active, but when
this node is down, haproxy connects to one of the backup nodes. All is
still ok, but the problem is when the main node comes up again: all
new connections are made to the main node, while the connections which
were made to the backup node are still active, so neutron (or nova) is
using connections to two servers, and then there are problems with
deadlocks again.
Do you know how to prevent such a situation?

This is because of how haproxy works. Haproxy's load balancing is TCP
level, once the TCP connection is established, haproxy has nothing to
do with it. If the MySQL application (neutron in this case), uses
persistent connections, at the time of failing over, haproxy doesn't
make an extra decision upon failover, because a connection is already
established. This can be mitigated by using haproxy 1.5 and defining
the backend with on-marked-down shutdown-sessions, this will kill the
connections at the TCP level on the formerly active node. Or in case
of graceful failover, include killing connections in the failover
script on the formerly active node. The application in this case will
get error 1 or 2 you described.

From your description error 1 and 2 are related to killing
connections. Case 1 (MySQL server has gone away) happens when the
connection was killed (but not at the MySQL protocol level) while it
was idle, and the application is attempting to re-use it. In this case
the correct behaviour would be re-establishing the connection. Error 2
is the same thing, but while the connection was actually doing
something, reconnecting and retrying is the correct behaviour. These
errors are not avoidable, if the node dies non-gracefully. A server
can for example lose power while doing the transaction, in this case
the transaction will be aborted, and the application will get one of
the errors described above. The application has to know that the data
is not written, since it didn't do commit or database didn't
acknowledge the commit.
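To make that concrete, a minimal haproxy 1.5 sketch for an active/passive MySQL pair might look like the following (addresses and timeouts are illustrative; the relevant parts are the backup flag and on-marked-down shutdown-sessions):

# append (or merge) into /etc/haproxy/haproxy.cfg, then reload haproxy
cat >> /etc/haproxy/haproxy.cfg <<'HAPROXY'
listen mysql-cluster
    bind 0.0.0.0:3306
    mode tcp
    timeout client 90s    # keep these a bit above the services' SQL idle timeout
    timeout server 90s
    server db1 10.0.0.11:3306 check on-marked-down shutdown-sessions
    server db2 10.0.0.12:3306 check backup on-marked-down shutdown-sessions
HAPROXY
service haproxy reload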

On Sat, Sep 20, 2014 at 12:11 AM, Sławek Kapłoński

wrote:

Hello,

New questions below


Best regards
Sławek Kapłoński
slawek at kaplonski.pl

On Thursday, 18 September 2014 at 09:45:21, Clint Byrum wrote:

Excerpts from Sławek Kapłoński's message of 2014-09-18 09:29:27
-0700:

Hello,

Is anyone here using openstack with mysql galera and haproxy? Have
You
got
any problems with that?
I was today installed such ha infra for database (two mysql servers
in
galera cluster and haproxy on controller and neutron node, this
haproxy
is connecting to one of galera servers with round robin algorithm).
Generally all is working fine but I have few problems:
1. I have a lot of messages like:
WARNING neutron.openstack.common.db.sqlalchemy.session [-] Got
mysql
server
has gone away: (2006, 'MySQL server has gone away')
2. I have (most on neutron) many errors like:
OperationalError: (OperationalError) (2013, 'Lost connection to
MySQL
server during query') 'UPDATE ml2_port_bindings SET vif_type=%s,
driver=%s, segment=%s WHERE ml2_port_bindings.port_id =

1 and 2 look like timeout issues. Check haproxy's timeouts. They
need
to be just a little longer than MySQL's connection timeouts.

After I made the cluster ACTIVE/PASSIVE and changed sql_idle_timeout in
neutron and nova, problem 1 looks solved. Unfortunately I found that when
I'm deleting a port from neutron I still sometimes get errors like in 2.
I haven't checked the nova logs closely yet, so I'm not sure whether it
happens only in neutron or in both.
Do you maybe know why it happens in neutron? It did not happen when I had
a single MySQL node without haproxy and Galera, so I suppose haproxy or
Galera is responsible for that problem :/

  1. Also errors:
    StaleDataError: UPDATE statement on table 'ports' expected to
    update
    1
    row(s); 0 were matched.
  2. and errors:
    DBDeadlock: (OperationalError) (1213, 'Deadlock found when trying
    to
    get
    lock; try restarting transaction') 'UPDATE ipavailabilityranges SET
    first_ip=%s WHERE ipavailabilityranges.allocation_pool_id =

3 and 4 are a known issue. Our code doesn't always retry
transactions,
which is required to use Galera ACTIVE/ACTIVE. Basically, that
doesn't
work.

You can use ACTIVE/PASSIVE, and even do vertical partitioning where
one of the servers is ACTIVE for Nova, but another one is ACTIVE for
Neutron. But AFAIK, ACTIVE/ACTIVE isn't being tested and the work
hasn't
been done to make the concurrent transactions work properly.



Best regards
Sławek Kapłoński
slawek at kaplonski.pl

[Openstack-operators] OpenStack Community Weekly Newsletter (Sep 12 – 19)

  The Kilo Design Summit in Paris
  <http://ttx.re/kilo-design-summit.html>

In less than two months the OpenStack development community will gather
in Paris to discuss the details of the Kilo development cycle. It starts
after the keynotes on the Tuesday of the summit week, and ends at the
end of the day on the Friday. We decided on a number of changes to the
Design Summit organization in order to make it an even more productive
time for all of us.

Reports from Previous Events

During the Paris Summit there will be a working session for the Women of
OpenStack to frame up more defined goals and lay out a blueprint for
the group moving forward. We encourage all women in the community to
complete this very short survey
https://www.surveymonkey.com/s/V39BL7H to provide input for the group.

Ask OpenStack https://ask.openstack.org/ is the go-to destination for
OpenStack users. Interesting questions waiting for answers:

Welcome Nejc Saje to ceilometer-core
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045537.html
and Radoslav to the oslo-vmware core team
http://lists.openstack.org/pipermail/openstack-dev/2014-September/046017.html
and StevenK to the tripleo core team
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045474.html

m-k-k
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person01e3a7e4-a466-4bdf-8b92-a4c5210b7de2
Rafael Rivero
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person18662939-4a9d-4e76-88b8-79a2b1f7455f

Loa
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person01a3fd06-bb42-410f-8d76-bd70f849c52b
Billy Olsen
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person63e870fa-b8d6-4303-b02c-5d6d2cd371e5

Dave McCowan
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personb5cd8fa7-3ff1-4fed-b073-c7ae4c58c013
Zoltán Lajos Kis
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personddef94f1-349a-4b1e-b3d9-8af81ef735b7

Zhai, Edwin
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person149e65e2-1533-4ba6-91cd-6b70a02ce874
Yogesh
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person3fa9a8ce-4bec-409c-9aaf-6bd3d82f2280

Syed Ismail Faizan Barmawer
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person31aed408-fb7d-4e59-9067-be1152a4e303
Rajaneesh Singh
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personc1c74d7e-8f40-41ac-ae51-3621a135d325

Neill Cox
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person737df28c-86f2-489f-a245-1565bf69a33e
Barnaby Court
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person2303be14-3102-4e0c-b1cb-518a9af0c3f9

Satoru Moriya
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person6195ecf9-c867-46ee-8168-54978107315f
Ari
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person904d063e-4759-4169-b861-1fbca02c5bf1

Jamie Finnigan
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person02de814e-472c-472c-a76d-2de24d77f880
Gary Hessler
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persona024b6b6-a08a-44ce-b736-7157c72c2202

Dan Sneddon
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personf596c60e-0613-4127-8c6a-a6c241050cb4
Chardon Gerome
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person15ed28c9-0fbb-42ef-b350-f74b9a2e4db1

Brian Tully

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person6820c666-3812-4878-aa82-9c438f0c1ee7

woody

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person4954e97f-7bf3-458b-a233-3d9e1c74f67e

Martin André

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person754ac0db-adaa-4acb-a599-2420887b3c81

The weekly newsletter is a way for the community to learn about all the
various activities occurring on a weekly basis. If you would like to add
content to a weekly update or have an idea about this newsletter, please
leave a comment.

[Openstack-operators] OpenStack Dashboard is not Accessible

Hi,
My OpenStack installation was done successfully (I installed the
dashboard on my controller node), but the dashboard is only accessible
from the controller node (via curl -k https://localhost/horizon), and I
cannot access the dashboard from other computers on the network.
The error in my browser is: Connection Timed Out
Any idea how to resolve this?
Thanks in advance...

Hi Hossein, have you fixed the problem? You should check that apache is
listening on 0.0.0.0 instead of just 127.0.0.1.
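
A couple of quick checks along those lines (assuming Apache serving Horizon
over HTTPS on the standard port; adjust for your distro):

# confirm which address Apache is bound to
netstat -plnt | grep -E ':(80|443) '

# in the Apache config, the bind should not be limited to localhost, e.g.
Listen 0.0.0.0:443

# and make sure the controller's host firewall allows the port through
iptables -I INPUT -p tcp --dport 443 -j ACCEPT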

2014-09-20 16:58 GMT-03:00 Hossein Zabolzadeh :

Hi,
My openstack installation was successfuly done(I installed the
dashboard on my controller node). But the dashboard is only accessible
from the controller node(via curl -k https://localhost/horizon), and I
can not access to dashboard from other computer on the network.
Error on my browser: Connection Time Out
Any idea to resolve?
Thanks in advance...


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

--
Pavlik Salles Juan José
Blog - http://viviendolared.blogspot.com

[Openstack-operators] multi_instance_display_name_template - use 10 char/digit uuid only?

hi guys,

am wondering if when setting nova.conf

multi_instance_display_name_template = %(name)s-%(uuid)s

it'd be possible to do

multi_instance_display_name_template = %(name)s-%(uuid.hex[:10])s

Any ideas ?

Alex

[Openstack-operators] Default security group for all tenant

Hello,

Is it possible to add a "default" security group with defined rules to all
instances in all tenants? I'm thinking about a group with rules that users
can't change and only an admin can, for example to block some connections
for all users.


Best regards
Sławek Kapłoński
slawek at kaplonski.pl

Hello Sławek,

Nova currently has API endpoints[1] for setting up a set of rules that will be created as a new tenant/project's "default" security group. I believe work is being done in neutron to support such things, but I am not sure if it made it into Icehouse or if it is even on the schedule for Juno.

This API more or less sets up a "template", however, and doesn't allow end users to modify it. You may be able to modify policies to achieve what you're after, but I am not certain.

./JRH

1: http://docs.openstack.org/developer/nova/api/nova.api.openstack.compute.contrib.security_group_default_rules.html
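
For reference, with that nova extension enabled the client side looks roughly
like this (the rules themselves are just examples):

nova secgroup-list-default-rules
nova secgroup-add-default-rule icmp -1 -1 0.0.0.0/0
nova secgroup-add-default-rule tcp 22 22 0.0.0.0/0

As far as I recall these rules are only copied into the "default" group of
tenants created afterwards, and only apply to nova-network style security
groups; existing groups are left untouched.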

On Sep 23, 2014, at 4:07 PM, Sławek Kapłoński wrote:

Hello,

Is it possible to add "default" security group with defined rules to all
instances and all groups? I'm thinking about group with rules that user can't
change and only admin can. For example to block some connections for all
users.


Best regards
Sławek Kapłoński
slawek at kaplonski.pl
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

[Openstack-operators] AUDIT VCPUS: -30 ?

I have a Grizzly system and I'm trying to figure out why VM's aren't able
to be migrated (using "nova live-migration $UUID $NODE") from one node to
another. The error message on the node that it is being migrated from is:

ERROR nova.virt.libvirt.driver [-] [instance:
0522b23c-5c2d-4c45-a66b-24c4c3f4ba9c] Live Migration failure: internal
error process exited while connecting to monitor: W: kvm binary is
deprecated, please use qemu-system-x86_64 instead

There is no message on the node that it is supposed to be migrating to.

It was working fine for a while and then it fails. While investigating, I
see AUDIT messages in nova-compute.log:

2014-09-23 18:00:45.274 4608 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 80797
2014-09-23 18:00:45.274 4608 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 31369
2014-09-23 18:00:45.274 4608 AUDIT nova.compute.resource_tracker [-] Free VCPUS: -30

The system has 16 cores and has "cpu_allocation_ratio=8.0" in nova.conf, so
it should have a capacity of 128 VCPUs (right?). Checking with nova-manage:

root at compute3:~# nova-manage service describe_resources test3
HOST       PROJECT      cpu    mem(mb)    hdd
test3      (total)       16     128925    32330
test3      (used_now)    46      48128      960
test3      (used_max)    46      47104      960
.
.
.

It looks like it is calculating Free VCPUS by subtracting "used_now" from
"total": 16 - 46 = -30. Is it somehow using this to decide that the node
should not take more VMs? If so, I don't know why it allowed it to get as
low as -30.
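
If that reading is right, the arithmetic would be roughly (a sketch, on the
assumption that the AUDIT line ignores cpu_allocation_ratio while the
scheduler's CoreFilter applies it):

free VCPUS (audit log)  = physical cores - used_now              = 16 - 46   = -30
CoreFilter capacity     = physical cores * cpu_allocation_ratio  = 16 * 8.0  = 128
still schedulable       = 128 - 46                               = 82

which would explain why the value is allowed to go negative without the
scheduler refusing new instances.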

Can anyone explain what is going on? Is there other information I can look
at to diagnose why the live-migration is failing?

Thanks a lot,

Steve

--


Steve Cousins Supercomputer Engineer/Administrator
Advanced Computing Group University of Maine System
244 Neville Hall (UMS Data Center) (207) 561-3574
Orono ME 04469 steve.cousins at maine.edu

[Openstack-operators] Renaming a compute node

Hi guys,

Let's say I wish to rename a compute node. How should I proceed?

Does someone have a script lying around for that purpose? =)

BTW, I found a bunch of values in the database but I'm confused: some
refer to the hostname, others to the FQDN. I never figured out what the
best practice is: should everything refer to the FQDN or the hostname?

--
Mathieu

-----Original Message-----
From: Mathieu Gagn? [mailto:mgagne at iweb.com]
Sent: 24 September 2014 01:28
To: openstack-operators at lists.openstack.org
Subject: [Openstack-operators] Renaming a compute node

Hi guys,

Lets say I wish to rename a compute node. How should I proceed?

Is there someone will a script lying around for that purpose? =)

How about nova rename ? You'll need to do some work to get the VM hostname etc. changed though as I don't think cloud init will do all of this for you.

$ nova help rename
usage: nova rename <server> <name>

Rename a server.

Positional arguments:
  <server>  Name (old name) or ID of server.
  <name>    New name for the server.

Tim

BTW, I found a bunch of values in the database but I'm confused: some refer to
the hostname, others are the FQDN. I never figured what's the best practice:
should everything refer to the FQDN or the hostname?

--
Mathieu


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

[Openstack-operators] IceHouse - neutron setup

Hello
We are testing OpenStack on Ubuntu 14.04.1 + IceHouse and trying to select
the optimal network setup with neutron.
In our previous setup with Havana we used Open vSwitch + VLANs.
IceHouse comes by default with the ML2 plugin. Does anybody have experience
with ML2? How stable is it?
We would like to support the following network options:
1) Multiple private VLANs with floating IPs where needed
2) Multiple public VLANs
We would appreciate any advice about the neutron configuration that allows this.

Thanks,

Olga
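
Not an answer from the thread, but as a rough sketch of the kind of ML2
settings involved for VLAN tenant and provider networks (physnet names,
bridge names and VLAN ranges are pure placeholders to adapt):

# ml2_conf.ini
[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch

[ml2_type_vlan]
network_vlan_ranges = physnet1:100:199

[ovs]
bridge_mappings = physnet1:br-vlan

# a shared public/provider VLAN could then be created along the lines of
neutron net-create public-vlan-200 --shared --router:external=True \
  --provider:network_type vlan --provider:physical_network physnet1 \
  --provider:segmentation_id 200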


[Openstack-operators] limit num instance-type per host

Hi all,

I'm trying to wrap my head around whether it's possible with the existing
scheduler filters, to put a limit per host on the number of instances per
instance-type/flavor? I don't think this is possible with the existing
filters or weights, but it seems like a fairly common requirement.

The issue I'm thinking of using this for is that of instance-to-host
fragmentation in homogenous deployments, where there is a tendency as the
zone approaches capacity to hit a utilisation ceiling - there are rarely
any "gaps" large enough for high vcpu count instances. I'm guessing that
limiting the number of smaller instances per host would help to alleviate
this.

Looks like knocking up such a filter wouldn't be too hard, just want to
check whether there is another way...?

--
Cheers,
~Blairo

Hello Blair

IMO it's a matter of capacity planning and design to minimize
fragmentation. I don't know of any mechanisms to filter this and solve
(i.e. re-balance) a posteriori. Blueprints exist, though, to allow
re-scheduling and re-balancing.

Maybe I'm wrong and there are indeed some scheduler filters out of the
box...

Anyways, I recommend you to read this:
http://rhsummit.files.wordpress.com/2014/04/deterministic-capacity-planning-for-openstack-final.pdf
and open this spreadsheet:
https://github.com/noslzzp/cloud-resource-calculator , it will show you
the optimal flavor configuration for minimal fragmentation using
different variables (vcpu, RAM, disk)

Regards

On 2014-09-24 11:08 AM, Blair Bethwaite wrote:
Hi all,

I'm trying to wrap my head around whether it's possible with the
existing scheduler filters, to put a limit per host on the number of
instances per instance-type/flavor? I don't think this is possible
with the existing filters or weights, but it seems like a fairly
common requirement.

The issue I'm thinking of using this for is that of instance-to-host
fragmentation in homogenous deployments, where there is a tendency as
the zone approaches capacity to hit a utilisation ceiling - there are
rarely any "gaps" large enough for high vcpu count instances. I'm
guessing that limiting the number of smaller instances per host would
help to alleviate this.

Looks like knocking up such a filter wouldn't be too hard, just want
to check whether there is another way...?

--
Cheers,
~Blairo


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

--

Marcos Garcia
Technical Sales Engineer

PHONE: (514) 907-0068 - EMAIL: marcos.garcia at enovance.com - SKYPE: enovance-marcos.garcia
ADDRESS: 127 St-Pierre - Montréal (QC) H2Y 2L6, Canada - WEB: www.enovance.com


[Openstack-operators] rbd ephemeral storage, very slow deleting...

Hi All,

Just started experimenting with RBD (ceph) back end for ephemeral
storage on some of my compute nodes.

I have it launching instances just fine, but when I try to delete
them, libvirt shows the instances are gone, but OpenStack lists them
in the 'deleting' state and the rbd process on the hypervisor spins madly
at about 300% cpu ...

...and now approx 18min later they have finally fully terminated, why so long?

-Jon

There was a bug in Havana where it would create the underlying RBD volume at 1024 times the actual size. We didn't notice this until we started deleting instances and they took forever.
Could be the case with you too?

See https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1219658
https://review.openstack.org/#/q/I3ec53b3617d52f75784ebb3b0dad92ca815f8876,n,z

I don't think this made it into Havana sadly.

Sam
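
One way to check for that is to look at the image sizes directly in Ceph
(the pool name and the <uuid>_disk naming are assumptions that depend on
your nova configuration):

rbd -p vms ls -l
rbd -p vms info <instance-uuid>_disk

If the reported size is wildly larger than the flavor's disk, the sizing bug
above is the likely cause of the slow deletes.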

On 25 Sep 2014, at 5:45 am, Jonathan Proulx wrote:

Hi All,

Just started experimenting with RBD (ceph) back end for ephemeral
storage on some of my compute nodes.

I have it launching instances just fine, but when I try and delete
them libvirt shows the instances are gone, but OpensStack lists them
in 'deleting' state and the rbd process on the hypervisor spins madly
at about 300% cpu ...

...and now approx 18min later they have finally fully terminated, why so long?

-Jon


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

[Openstack-operators] Confusion about security groups after upgrade from grizzly to icehouse

Hi all,

we've recently upgraded our setup from grizzly to icehouse. In grizzly,
we've created plenty of security-groups which disappeared in icehouse
(neutron security-group-list && nova secgroup-list).

We used quantum in grizzly and found our security groups in
nova.security_groups in MySQL. Now these tables seem to be ignored.
We use the OVSNeutronPluginV2 and IptablesFirewallDriver.

I am stuck here and any help pointing out to the
problem/misconfiguration is highly appreciated.

Thanks and Regards
Oliver

It happened to me when I upgraded from grizzly to havana. I don't remember
having fixed it back in the day; I just recreated them.

On 09/25/2014 04:36 PM, Oliver Böttcher wrote:
Hi all,

we've recently upgraded our setup from grizzly to icehouse. In grizzly,
we've created plenty of security-groups which disappeared in icehouse
(neutron security-group-list && nova secgroup-list).

We used quantum in grizzly and found our security groups in
nova.security_groups in MySQL. Now these tables seem to be ignored.
We use the OVSNeutronPluginV2 and IptablesFirewallDriver.

I am stuck here and any help pointing out to the
problem/misconfiguration is highly appreciated.

Thanks and Regards
Oliver


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

--
1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333

[Openstack-operators] Nodes and configurations management in Puppet

Hi,

Some of you use Puppet to manage your OpenStack infrastructure.

  • How do you manage your node definitions?
    Do you have an external ENC?
    Or plain site.pp, Puppet Enterprise, theforeman, etc. ?

  • How about your configuration?
    Do you use Hiera? Or do you rely on the ENC to manage them?

My question is related to the complexity that managing multiple
OpenStack environments (staging/production), regions and cells involves
over time.

Is there a magical way to manage node definitions and especially
configurations so you guys don't have a heart attack each time you have to
dig into them? How about versioning?

To answer my own questions and start the discussion:

I don't use an external ENC. The site.pp manifest has been the one used
since day one. Since we have a strong host naming convention, I didn't
see the limit of this model (yet). Regex has been a good friend so far.

As for configurations, Hiera is used to organize them with a hierarchy
to manage environment- and region-specific configurations:

  • "environments/%{::environment}/regions/%{::openstack_region}/common"
  • "environments/%{::environment}/common"
  • common

I'm still exploring solutions for cells.

How about you guys?

--
Mathieu

Hi Clayton,

Thanks for sharing your experience and use of Puppet with OpenStack.

I feel less alone now. :)

On 2014-09-25 1:42 PM, Clayton O'Neill wrote:
We have a single default node definition. We have a custom fact that
determines the node's role based on it's hostname, then includes a
derived class ("include role::${::role}") based off of that
information. Since we're using something very similar to Craig Dunn's
roles/profiles pattern, and we store all the configuration in hiera, I
feel like we're getting most of the benefits of an ENC without having to
maintain one.

That's one hell of a clever idea you got there! :D
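
For anyone wanting to copy that pattern, here is a rough sketch of a
hostname-derived role implemented as a Facter external fact rather than a
custom Ruby fact (path, naming convention and role names are assumptions);
site.pp can then simply do include "role::${::role}":

#!/bin/bash
# /etc/facter/facts.d/role.sh (must be executable)
# Emit a "role" fact derived from the short hostname.
case "$(hostname -s)" in
  compute*)  echo "role=compute" ;;
  control*)  echo "role=controller" ;;
  network*)  echo "role=network" ;;
  *)         echo "role=generic" ;;
esac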

Our hiera hierarchy is pretty gross right now, since it's layered on top
of what came with puppet_openstack_builder originally. We don't use
puppet environments. Since we have a puppet master per site &
environment, we haven't had a need for them yet and I'm afraid of this
bug: http://projects.puppetlabs.com/issues/12173

Thanks for pointing that one out. I didn't know about that particular bug.

--
Mathieu

[Openstack-operators] [glance] how to update the contents of an image

I'm trying to update the contents of an image, but it looks like it is
not working at all.

First I upload a test image:

---snip---

dd if=/dev/urandom of=testing.img bs=1M count=10

glance image-create --disk-format raw --container-format bare --name

TESTING --file testing.img
---snap---

Now I want to overwrite the contents of this image:

---snip---

dd if=/dev/urandom of=testing.img bs=1M count=20

glance image-update --file testing.img TESTING

---snap---

After this call the size of the image is still the same like before
(10485760 bytes).

I do not have issues in the logfiles of glance-api and glance-registry.

What am I doing wrong?

Is it not possible to update the contents of an image?

Christian.

--
Christian Berendt
Cloud Solution Architect
Mail: berendt at b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

Glance images are immutable. In order to "update" one, you should do as you
are doing, but rename the old image and then upload the updated content as a
new image. Take note of the new UUID as well.
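
In practice that amounts to something like the following (names are
placeholders):

# move the old image out of the way, then upload the new content as a new image
glance image-update --name TESTING-old TESTING
glance image-create --disk-format raw --container-format bare --name TESTING --file testing.img

Anything that still references the old image by UUID keeps working, since the
old image and its UUID continue to exist under the new name.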

On Friday, September 26, 2014, Christian Berendt
wrote:

I'm trying to update the contents of an image, but it looks like it is
not working at all.

First I upload a test image:

---snip---

dd if=/dev/urandom of=testing.img bs=1M count=10

glance image-create --disk-format raw --container-format bare --name

TESTING --file testing.img
---snap---

Now I want to overwrite the contents of this image:

---snip---

dd if=/dev/urandom of=testing.img bs=1M count=20

glance image-update --file testing.img TESTING

---snap---

After this call the size of the image is still the same like before
(10485760 bytes).

I do not have issues in the logfiles of glance-api and glance-registry.

What am I doing wrong?

Is it not possible to update the contents of an image?

Christian.

--
Christian Berendt
Cloud Solution Architect
Mail: berendt at b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] nova rescue issue: boots to non-rescue disk

Hi,

I am running OpenStack nova 2.17.0 on Ubuntu 14.04.1 with libvirt 1.2.2
and QEMU 2.0.0 (Debian 2.0.0+dfsg-2ubuntu1.5).

When attempting to nova rescue an instance, the instance boots with
updated libvirt XML but ends up booting to the old, non-rescue disk
(second disk in XML) 80% of the time. Occasionally after several reboots
(for example, via ctrl+alt+del in VNC) I can randomly get it to boot to
the rescue disk.

The XML looks like this:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writeback'/>
  <source file='/var/lib/nova/instances/INSTANCE_ID/disk.rescue'/>
  <target dev='vda' bus='virtio'/>
  <alias name='virtio-disk0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04'

function='0x0'/>







And the command line:

/usr/bin/qemu-system-x86_64 -name instance-00000a65 -S -machine
pc-i440fx-trusty,accel=kvm,usb=off -m 512 -realtime mlock=off -smp
1,sockets=1,cores=1,threads=1 -uuid e9019a9b-03de-47e9-b372-673305ea5c66
-smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack
Nova,version=2014.1.2,serial=44454c4c-3700-1056-8053-b6c04f504e31,uuid=e9019a9b-03de-47e9-b372-673305ea5c66
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-00000a65.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet
-no-shutdown -boot strict=on -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
file=/var/lib/nova/instances/e9019a9b-03de-47e9-b372-673305ea5c66/disk.rescue,if=none,id=drive-virtio-disk0,format=qcow2,cache=writeback
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive
file=/var/lib/nova/instances/e9019a9b-03de-47e9-b372-673305ea5c66/disk,if=none,id=drive-virtio-disk1,format=qcow2,cache=writeback
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk1,id=virtio-disk1
-netdev tap,fd=35,id=hostnet0,vhost=on,vhostfd=36 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:ac:13:08,bus=pci.0,addr=0x3
-chardev
file,id=charserial0,path=/var/lib/nova/instances/e9019a9b-03de-47e9-b372-673305ea5c66/console.log
-device isa-serial,chardev=charserial0,id=serial0 -chardev
pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1
-device usb-tablet,id=input0 -vnc 127.0.0.1:0 -k en-us -device
cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6

So everything looks like it should boot to the rescue disk (strict boot
on and the boot index set for the desired device), yet usually it still ends
up booting to the old OS. I tried manually setting the boot index on the
second device to 2 and it still fails.

Any ideas what's going on with this? All the packages are also
up-to-date on the host node; in this case the VM has Ubuntu 14.04 OS but
I don't think that affects the boot order.

Thanks,
- Favyen Bastani

[Openstack-operators] [Openstack] [OSSA 2014-031] Admin-only network attributes may be reset to defaults by non-privileged users (CVE-2014-6414)

Means no fixes for havana?

Rather boring...

On 09/29/2014 05:10 PM, Grant Murphy wrote:

OpenStack Security Advisory: OSSA-2014-031
CVE: CVE-2014-6414
Date: September 29, 2014

Title: Admin-only network attributes may be reset to defaults by non-privileged users
Reporter: Elena Ezhova (Mirantis)
Products: Neutron
Versions: up to 2013.2.4 and 2014.1 versions up to 2014.1.2

Description:
Elena Ezhova from Mirantis reported a vulnerability in Neutron. By updating a network
attribute with a default value a non-privileged user may reset admin-only network
attributes. This may lead to unexpected behavior with security implications for
operators with a custom policy.json, or in some extreme cases network outages
resulting in denial of service. All deployments using neutron networking are
affected by this flaw.

Juno (development branch) fix:
https://review.openstack.org/114531

Icehouse fix:
https://review.openstack.org/123849

Notes:
This fix will be included in the Juno release 2014.2.0 and in
future 2014.1.3 release.

References:
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-6414
https://launchpad.net/bugs/1357379

--
Grant Murphy
OpenStack Vulnerability Management Team


Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack at lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


On 2014-09-30 08:02:31 +0800 (+0800), gustavo panizzo (gfa) wrote:
icehouse will be supported 18 months IIRC

15 months actually.

I don't have a link here; it was mentioned in Thierry's presentation (mid-cycle
state of the project) a few months ago.

https://wiki.openstack.org/wiki/StableBranchRelease#Planned_stable.2Ficehouse_releases

Specifically, its final 2014.1.5 point release is planned for some
time in July. Note this was agreed to in the stable branch
maintenance session at the design summit in Atlanta and is
contingent on sufficient manpower materializing to keep Icehouse
working and testable for the duration. If you feel strongly that
this should happen, please help make that possible in whatever way
you are able.
--
Jeremy Stanley

[Openstack-operators] Architecture Opinions

Hey folks,

So I've been through a number of POC and smaller deployment clusters since
Folsom, but I'm now working on one that should be a good bit larger and
have a much greater need for HA. The diagrams that I originally drew up
that seemed reasonable a few months ago are in conflict with a lot of the
reference architectures I'm seeing now, and I could really use some
feedback on people actually doing this in production right now.

The architecture I intended to deploy was this:

2 HAProxy nodes to load balance active / active APIs and Horizon
2 HAProxy nodes to load balance a Galera Mysql cluster
2 Control nodes with all API services
3 Galera / MySQL nodes
3 MongoDB nodes running replica sets for Ceilometer
2 Neutron Nodes running Active / Passive L3/DHCP/LBaaS agents (hopefully
active active L3 in Juno)
3 Ceph-Mon nodes
3 Ceph-OSD nodes (hosting Cinder, Glance, and potentially instance storage)
X number of compute nodes depending on the requirement

The reference architectures I'm seeing out of Redhat and Mirantis among
others seem to like putting all of the above eggs except the Storage into 3
identical baskets. This just feels bad and painful to me and like it would
lead to very badly performing everything. Am I totally just stuck in the
past with how I'm thinking of setting all this up?

Any and all feedback would be greatly appreciated

Thanks!

-Erik

On 29 September 2014 23:28, Erik McCormick
wrote:

The architecture I intended to deploy was this:

2 HAProxy nodes to load balance active / active APIs and Horizon
2 HAProxy nodes to load balance a Galera Mysql cluster
2 Control nodes with all API services
3 Galera / MySQL nodes
3 MongoDB nodes running replica sets for Ceilometer
2 Neutron Nodes running Active / Passive L3/DHCP/LBaaS agents (hopefully
active active L3 in Juno)
3 Ceph-Mon nodes
3 Ceph-OSD nodes (hosting Cinder, Glance, and potentially instance storage)
X number of compute nodes depending on the requirement

The reference architectures I'm seeing out of Redhat and Mirantis among
others seem to like putting all of the above eggs except the Storage into 3
identical baskets. This just feels bad and painful to me and like it would
lead to very badly performing everything. Am I totally just stuck in the
past with how I'm thinking of setting all this up?

Depending on the size of your deployment, it's safe enough to combine
almost everything. It does also depend on the hardware you have available.

I would recommend ensuring that:

1) ceph-mon's and ceph-osd's are not hosted on the same server - they both
demand plenty of cpu cycles
2) ceilometer's storage into mongodb is demanding, so it's best to ensure
that mongodb is on different storage to galera
3) neutron's l3 agent active/passive configuration works just fine - it's
probably best to keep them on their own servers in order to handle the
appropriate throughput required
4) ceph-osd's should not run on your compute or OpenStack controller nodes
- kvm/osd contention on the cpu will cause all sort of odd issues
5) instance storage on ceph doesn't work very well if you're trying to use
the kernel module or cephfs - make sure you're using ceph volumes as the
underlying storage (I believe this has been patched in for Juno)
6) neutron has the ability to include proper ha for dhcp agents - we
currently (in a Grizzly environment) have a script that creates a DHCP
service on every network, but I believe that beyond Grizzly the HA story
has been sorted out

HTH,

Jesse

[Openstack-operators] DB sync (Havana -> Icehouse). Unknown column 'instances.ephemeral_key_uuid'

Hi,
I'm testing the nova DB sync (Havana -> Icehouse) in a copy of my
production databases
and I'm getting the following "CRITICAL" in the nova-manage log for the
230 -> 231 migration.
Looking at this particular migration, it creates a new column
"instances.ephemeral_key_uuid", however the query complains that the
column doesn't exist!
Anyway, this doesn't block the DB sync and at the end this column is added
to the DB.

Anyone already in Icehouse also experienced this problem?
Is this harmful?

thanks,
Belmiro

2014-09-30 14:44:08.973 32035 INFO migrate.versioning.api [-] 230 -> 231...

2014-09-30 14:45:05.248 32152 CRITICAL nova
[req-0f394ba5-32ff-4f15-afea-92bc5cc06dcf None None] OperationalError:
(OperationalError) (1054, "Unknown column 'instances.ephemeral_key_uuid' in
'field list'") 'SELECT instances.created_at AS instances_created_at,
instances.updated_at AS instances_updated_at, instances.deleted_at AS
instances_deleted_at, instances.deleted AS instances_deleted, instances.id
AS instances_id, instances.user_id AS instances_user_id,
instances.project_id AS instances_project_id, instances.image_ref AS
instances_image_ref, instances.kernel_id AS instances_kernel_id,
instances.ramdisk_id AS instances_ramdisk_id, instances.hostname AS
instances_hostname, instances.launch_index AS instances_launch_index,
instances.key_name AS instances_key_name, instances.key_data AS
instances_key_data, instances.power_state AS instances_power_state,
instances.vm_state AS instances_vm_state, instances.task_state AS
instances_task_state, instances.memory_mb AS instances_memory_mb,
instances.vcpus AS instances_vcpus, instances.root_gb AS instances_root_gb,
instances.ephemeral_gb AS instances_ephemeral_gb,
instances.ephemeral_key_uuid AS instances_ephemeral_key_uuid,
instances.host AS instances_host, instances.node AS instances_node,
instances.instance_type_id AS instances_instance_type_id,
instances.user_data AS instances_user_data, instances.reservation_id AS
instances_reservation_id, instances.scheduled_at AS instances_scheduled_at,
instances.launched_at AS instances_launched_at, instances.terminated_at AS
instances_terminated_at, instances.availability_zone AS
instances_availability_zone, instances.display_name AS
instances_display_name, instances.display_description AS
instances_display_description, instances.launched_on AS
instances_launched_on, instances.locked AS instances_locked,
instances.locked_by AS instances_locked_by, instances.os_type AS
instances_os_type, instances.architecture AS instances_architecture,
instances.vm_mode AS instances_vm_mode, instances.uuid AS instances_uuid,
instances.root_device_name AS instances_root_device_name,
instances.default_ephemeral_device AS instances_default_ephemeral_device,
instances.default_swap_device AS instances_default_swap_device,
instances.config_drive AS instances_config_drive, instances.access_ip_v4 AS
instances_access_ip_v4, instances.access_ip_v6 AS instances_access_ip_v6,
instances.auto_disk_config AS instances_auto_disk_config,
instances.progress AS instances_progress, instances.shutdown_terminate AS
instances_shutdown_terminate, instances.disable_terminate AS
instances_disable_terminate, instances.cell_name AS instances_cell_name,
instances.internal_id AS instances_internal_id, instances.cleaned AS
instances_cleaned, instance_info_caches_1.created_at AS
instance_info_caches_1_created_at, instance_info_caches_1.updated_at AS
instance_info_caches_1_updated_at, instance_info_caches_1.deleted_at AS
instance_info_caches_1_deleted_at, instance_info_caches_1.deleted AS
instance_info_caches_1_deleted, instance_info_caches_1.id AS
instance_info_caches_1_id, instance_info_caches_1.network_info AS
instance_info_caches_1_network_info, instance_info_caches_1.instance_uuid
AS instance_info_caches_1_instance_uuid, security_groups_1.created_at AS
security_groups_1_created_at, security_groups_1.updated_at AS
security_groups_1_updated_at, security_groups_1.deleted_at AS
security_groups_1_deleted_at, security_groups_1.deleted AS
security_groups_1_deleted, security_groups_1.id AS security_groups_1_id,
security_groups_1.name AS security_groups_1_name,
security_groups_1.description AS security_groups_1_description,
security_groups_1.user_id AS security_groups_1_user_id,
security_groups_1.project_id AS security_groups_1_project_id \nFROM
instances LEFT OUTER JOIN instance_info_caches AS instance_info_caches_1 ON
instance_info_caches_1.instance_uuid = instances.uuid LEFT OUTER JOIN
security_group_instance_association AS
security_group_instance_association_1 ON
security_group_instance_association_1.instance_uuid = instances.uuid AND
instances.deleted = %s LEFT OUTER JOIN security_groups AS security_groups_1
ON security_groups_1.id =
security_group_instance_association_1.security_group_id AND
security_group_instance_association_1.deleted = %s AND
security_groups_1.deleted = %s \nWHERE instances.deleted = %s' (0, 0, 0, 0)

2014-09-30 14:58:56.928 32035 INFO migrate.versioning.api [-] done

2014-09-30 14:58:56.930 32035 INFO migrate.versioning.api [-] 231 -> 232...

On Tue, Sep 30, 2014 at 9:07 AM, Belmiro Moreira
<moreira.belmiro.email.lists at gmail.com> wrote:
Hi,
I'm testing the nova DB sync (Havana -> Icehouse) in a copy of my production
databases
and I'm getting the following "CRITICAL" in nova-manage log for the 230->
231 migration.
Looking at this particular migration it creates a new column
"instances.ephemeralkeyuuid"
however it's complaining that the column doesn't exist!
Anyway, this doesn't block the DB sync and at the end this column is added
to the DB.

Anyone already in Icehouse also experienced this problem?
Is this harmful?

I migrated from Havana to Icehouse in early August and did not see
this issue. I did have trouble with some of my tables not being in
UTF8 but pretty sure that was it. My database has been rolling
forward since Essex and I don't have the exact notes, but I know on
previous updates I've hit some interesting corner cases because of
previous migrations not being 100% the same as a fresh install of the
last release. Perhaps not the most useful data point, but that was my
experience.
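
For anyone checking their own copy before migrating, the character set issue
Jon mentions can be spotted with something like:

mysql -e "SELECT table_name, table_collation FROM information_schema.tables WHERE table_schema = 'nova' AND table_collation NOT LIKE 'utf8%';"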

I think I still have a copy of my Havana DB on line so I could
"describe

-Jon

[Openstack-operators] Migration from openvswitch to linuxbridge

Hi,

Just wondering:
Did anyone do a migration from ovs to linuxbridge?

We are considering this to reduce complexity and improve performance:
- Fewer moving parts = less that can go wrong
- We have hypervisors with ovs eating a significant amount of cpu cycles (more than one cpu core)

However, it seems that there is more to it than just changing the config on the hypervisors and (hard) rebooting the instances.

From what I have gathered there is stuff saved in the databases:
* neutron - ml2_port_bindings - This seems to be easily updated with some SQL statements (see the sketch below).
* nova - instance_info_caches - This looks like it is going to be a problem, as the whole network stuff is in here.
Although it is called a cache, it does not seem to be updated if you delete the content :(
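
Not something I have tested, but the kind of SQL statement meant above would
look roughly like this (back up the neutron DB first; the vif_type/driver
values for linuxbridge are assumptions to verify against a freshly bound
port on a linuxbridge node):

mysql neutron -e "UPDATE ml2_port_bindings SET vif_type = 'bridge', driver = 'linuxbridge' WHERE vif_type = 'ovs';"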

Anyone tried this before? :)

Cheers,
Robert van Leeuwen

For high CPU usage from openvswitchd there is a simple solution: upgrade
to anything like 1.11 or higher. There is a huge problem with ovs 1.4,
1.9 and 1.10 - they suck at real-world network activity. All newer versions
(2.0, 2.1) are much better and work great.

You can find more detail in google with keyword 'megaflow ovs'.

On 09/30/2014 05:34 PM, Robert van Leeuwen wrote:
Hi,

Just wondering:
Did anyone do a migration from ovs to linuxbridge?

We are considering this due to reducing complexity and improving performance:
- Fewer moving parts = less that can go wrong
- We have hypervisores with the ovs eating a significant amount of cpu cycles (more than one cpu core)

However, it seems that there is more to it then just changing the config on the hypervisors and (hard) rebooting the instances.
From what I have gathered there is stuff saved in the databases:
* neutron - ml2_port_bindings - This seems to be easily updated with some SQL statements.
* nova - instance_info_caches - This looks like it is going to be a problem, the whole network stuff is in here.
Although it is called cache it does not seem to be updated if you delete the content :(

Anyone tried this before? :)

Cheers,
Robert van Leeuwen


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

[Openstack-operators] [keystone] performance issues getting tokens since icehouse upgrade

Hi,

Since upgrading to Icehouse, we've observed some random and unpredictable cases where keystone fails to respond to requests for new tokens (POST /v2.0/tokens). Our clients timeout at 1 minute, so we don't know if it isn't responding at all or if it would eventually return. A restart of keystone solves the problem.

Has anyone else seen anything similar?

We are currently putting some debugging measures in place to get to the bottom of the issue, but we just wanted to check if anyone else is experiencing this.

/Craig J

[Openstack-operators] Architecture Opinions

I'd like to clarify a few things, specifically related to Ceph usage, in
less of a rushed response. :)

Note - my production experience has only been with Ceph Dumpling. Plenty of
great patches which resolve many of the issues I've experienced have
landed, so YMMV.

On 30 September 2014 15:06, Jesse Pretorius <jesse.pretorius at gmail.com>
wrote:

I would recommend ensuring that:

1) ceph-mon's and ceph-osd's are not hosted on the same server - they both
demand plenty of cpu cycles

The ceph-mon will generally not use much CPU. If a whole chassis is lost,
you'll see it spike heavily, but it'll drop off again after the rebuild is
complete. I would still recommend keeping at least one ceph-mon on a host
that isn't hosting OSD's. The mons are where all clients get the data
location details from, so at least one really needs to be available no
matter what happens.

And, FYI, I would definitely recommend implementing separate networks for
client access and the storage back-end. This can allow you to ensure that
your storage replication traffic is separated and you can tune the QoS for
each differently.

5) instance storage on ceph doesn't work very well if you're trying to use
the kernel module or cephfs - make sure you're using ceph volumes as the
underlying storage (I believe this has been patched in for Juno)

cephfs, certainly in Dumpling, is not production ready - our experiment
with using it in production was quickly rolled back when one of the client
servers lost connection to the ceph-mds for some reason and the storage on
it became inaccessible. The client connection to the mds in Dumpling isn't
as resilient as the client connection for the block device.

By 'use the kernel module' I mean create an image and mounting it to the
server through the ceph block device kernel module, then building a file
system on it and using it like you would any network-based storage.
We found that when using one image as shared storage between servers,
updates from one server weren't always visible quickly enough (within a
minute) on the other server. If you choose to use a single image per
server, then only mount server2's image on server1 in a disaster recovery
situation then it should be just fine.
We did find that mounting a file system using the kernel module would tend
to cause a kernel panic when trying to disconnect the storage. Note that
there have been several improvements in the revisions after Dumpling,
including some bug fixes for issues that look similar to what we
experienced.

By "make sure you're using ceph volumes as the underlying storage" I meant
that each instance root disk should be stored as its own Ceph Image in a
storage pool. This can be facilitated directly from nova by using
'images_type=rbd' in nova.conf which became available in OpenStack Havana.
Support for using RBD for Ephemeral disks as well finally landed in Juno
(see https://bugs.launchpad.net/nova/+bug/1226351), as did support for
copy-on-write cloning (see
https://blueprints.launchpad.net/nova/+spec/rbd-clone-image-handler) which
rounds out the feature set for using an RBD back-end quite nicely. :)
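
For completeness, the nova.conf settings being referred to look roughly like
this in the Juno-era layout (pool and user names are assumptions; earlier
releases used differently named options under [DEFAULT]):

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt secret uuid>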

On Wed, Oct 1, 2014 at 4:08 AM, Jesse Pretorius <jesse.pretorius at gmail.com>
wrote:

I'd like to clarify a few things, specifically related to Ceph usage, in
less of a rushed response. :)

Note - my production experience has only been with Ceph Dumpling. Plenty
of great patches which resolve many of the issues I've experienced have
landed, so YMMV.

On 30 September 2014 15:06, Jesse Pretorius <jesse.pretorius at gmail.com>
wrote:

I would recommend ensuring that:

1) ceph-mon's and ceph-osd's are not hosted on the same server - they
both demand plenty of cpu cycles

The ceph-mon will generally not use much CPU. If a whole chassis is lost,
you'll see it spike heavily, but it'll drop off again after the rebuild is
complete. I would still recommend keeping at least one ceph-mon on a host
that isn't hosting OSD's. The mons are where all clients get the data
location details from, so at least one really needs to be available no
matter what happens.

At the beginning when things are small (few OSD) I'm intending to run mons
on the osd nodes. When I start to grow it, my plan is to start deploying
separate monitors and eventually disable the mons on the OSD nodes
entirely.

And, FYI, I would definitely recommend implementing separate networks for
client access and the storage back-end. This can allow you to ensure that
your storage replication traffic is separated and you can tune the QoS for
each differently.

I've got a dedicated, isolated 10 GB network between the Ceph nodes
dedicated purely to replication traffic. Another interface (also 10 GB)
will handle traffic from Openstack, and a 3rd (1 GB) will deal with RadosGW
traffic from the public side.

5) instance storage on ceph doesn't work very well if you're trying to
use the kernel module or cephfs - make sure you're using ceph volumes as
the underlying storage (I believe this has been patched in for Juno)

cephfs, certainly in Dumpling, is not production ready - our experiment
with using it in production was quickly rolled back when one of the client
servers lost connection to the ceph-mds for some reason and the storage on
it became inaccessible. The client connection to the mds in Dumpling isn't
as resilient as the client connection for the block device.

By 'use the kernel module' I mean create an image and mounting it to the
server through the ceph block device kernel module, then building a file
system on it and using it like you would any network-based storage.
We found that when using one image as shared storage between servers,
updates from one server wasn't always visible quickly enough (within a
minute) on the other server. If you choose to use a single image per
server, then only mount server2's image on server1 in a disaster recovery
situation then it should be just fine.
We did find that mounting a file system using the kernel module would tend
to cause a kernel panic when trying to disconnect the storage. Note that
there have been several improvements in the revisions after Dumpling,
including some bug fixes for issues that look similar to what we
experienced.

By "make sure you're using ceph volumes as the underlying storage" I meant
that each instance root disk should be stored as its own Ceph Image in a
storage pool. This can be facilitated directly from nova by using
'images_type=rbd' in nova.conf which became available in OpenStack Havana.
Support for using RBD for Ephemeral disks as well finally landed in Juno
(see https://bugs.launchpad.net/nova/+bug/1226351), as did support for
copy-on-write cloning (see
https://blueprints.launchpad.net/nova/+spec/rbd-clone-image-handler)
which rounds out the feature set for using an RBD back-end quite nicely. :)

I was originally planning on doing what you say about using
images_type=rbd with my main wish being to have the ability to live-migrate
images off a compute node. I discovered yesterday that block migration
works just fine with kvm/libvirt now despite assertions in the Openstack
documentation. I can live with that for now. The last time I tried the RBD
backend was in Havana and it had some goofy behavior, so I think I'll let
this idea sit for a while and maybe try again in Kilo once the new
copy-on-write code has had a chance to age a bit ;).


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] cannot find rebuild instance notifications in message queue

HI all,

I'm looking for the notifications a "rebuild instance" should be
publishing, but I can't seem to see any messages arriving in either the
"notifications.info" or the "notifications.error" queue.

We already use this to dynamically create/delete DNS entries, however,
rebuilding a VM screws with our puppet certs for the client and we will
need to trigger a deletion of the client cert if a rebuild happens.

Any pointers ?

Alex

Hi Andy,

Thanks! I guess they come in via the compute queue and I should match
compute.instance.* to the notifications.info queue?

Alex
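
For what it's worth, a quick way to confirm the notifications are being
emitted and routed at all (option and queue names assume the defaults of
that era; verify locally):

# nova.conf on the compute nodes
notification_driver = messaging
notification_topics = notifications
notify_on_state_change = vm_and_task_state

# on the RabbitMQ broker
rabbitmqctl list_queues name messages | grep notifications
rabbitmqctl list_bindings | grep notifications.info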

On 1 October 2014 19:23, Andy Hill wrote:

I believe the notifications you're looking for are:

  • compute.instance.rebuild.start
  • compute.instance.rebuild.end

-AH

On Wed, Oct 1, 2014 at 11:44 AM, Alex Leonhardt <aleonhardt.py at gmail.com>
wrote:

HI all,

I'm looking for the notifications a "rebuild instance" should be
publishing,
but I can't seem to see any messages arriving in the "notifications.info
"
nor "notifications.error" queue.

We already use this to dynamically create/delete DNS entries, however,
rebuilding a VM screws with our puppet certs for the client and we will
need
to trigger a deletion of the client cert if a rebuild happens.

Any pointers ?

Alex


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] DB archive deleted rows

Hi,
our nova DBs are growing rapidly and it's time to start pruning them...

I'm trying the "archive deleted rows" however is not working and I'm
getting the following
warning in the logs: "IntegrityError detected when archiving table"
Searching about this problem I found the bug "
https://bugs.launchpad.net/nova/+bug/1183523"
which, if I understood correctly, means this functionality is broken for a
while...

How are other deployments dealing with growing DBs?

thanks,
Belmiro
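
For context, the operation being discussed is the nova-manage archive
command, roughly (flag name as of Icehouse; test against a copy of the DB
first):

nova-manage db archive_deleted_rows --max_rows 10000

# archived rows end up in the shadow_* tables
mysql nova -e "SHOW TABLES LIKE 'shadow%';"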

Hi Simon,
thanks for your scripts.

I forgot to mention that I'm trying to "archive deleted rows" on Icehouse.

Belmiro

On Thu, Oct 2, 2014 at 4:51 PM, Simon McCartney wrote:

We're using the exceptionally crude scripts here:
https://gist.github.com/8b90b0b913df9f679d16 &
https://gist.github.com/efbb3b55bffd5bd41a42 (this is on a Grizzly
environment)

if you try the archive script & it fails, it should tell you what record
in what table failed (we had to clean up a few fixed_ip table entries by
hand to clear some dangling FKs)

Simon.

--
Simon McCartney
"If not me, who? If not now, when?"
+447710836915

On 2 October 2014 at 15:18:32, Belmiro Moreira (
moreira.belmiro.email.lists at gmail.com) wrote:

Hi,
our nova DBs are growing rapidly and it's time to start pruning them...

I'm trying the "archive deleted rows" however is not working and I'm
getting the following
warning in the logs: "IntegrityError detected when archiving table"
Searching about this problem I found the bug "
https://bugs.launchpad.net/nova/+bug/1183523"
which, if I understood correctly, means this functionality is broken for a
while...

How are other deployments dealing with growing DBs?

thanks,
Belmiro


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Nodes and configurations management in Puppet

We maintain a fairly flat hiera structure, which largely is due to our OS
infrastructure still being pretty simple.

Like Clayton & Matt, we use a ?world? attribute to indicate dev/test/prod.
(Although in hindsight, I like the ?echelon? term a lot better. We did
the same exercise of thinking of synonyms for ?environment.?) So the
structure looks like:

  • %{::world}/%{::clientcert}

    • %{::world}
    • global

The global file is empty, and almost all of the config is stored in the
world file. Over time, this has led to hiera sprawl so the world files
have gotten quite messy. And there is a lot of items that aren?t unique
across worlds, so should really be in a global file. But, at the same
time, this gives us a [mostly] single source of truth and avoids the ?grep
-R? issue Joe described.

ENC at this point is done by specifying a ?role? parameter in the
individual clientcert file for each node. This is a major downside, and
doesn?t scale, so we need to figure out something better. Maybe we can
come up with a hostname scheme to encode the info there, like others have
done.

We run all masterless, for a variety of reasons (which limits ENC options,
too.) Ansible is used to kick off runs across the environment. r10k
deploys the Puppet environments (?master? and ?prod? which correspond to
git branches), heira data, and all the modules. Hiera data is in a
separate (private) git repo, but there?s only a master branch there.

I've been a big fan of the role/profile model, too, and it's worked well
for us. One thing I've thought about is specifying a list of profile
classes for each node or node type in hiera, rather than maintaining a
mostly static role module. Then we can just hiera_include(), which is the
method we use in site.pp to include the role class now. I'd be interested
in others' thoughts on this idea. I can't really think of a compelling
reason to switch, other than it's kind of clever.
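
For readers who haven't seen the pattern, a minimal sketch of what that could look like (file names, the "classes" key and the profile class names here are hypothetical, shown as shell here-docs for brevity):

# hypothetical hieradata for one node type: list the profiles directly
cat > hieradata/world/compute.yaml <<'EOF'
---
classes:
  - profile::base
  - profile::openstack::nova_compute
  - profile::openstack::neutron_agent
EOF

# site.pp then just pulls the class list from hiera
cat > manifests/site.pp <<'EOF'
hiera_include('classes')
EOF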

Mike

On 9/26/14, 12:03 PM, "Mathieu Gagn?" wrote:

Hi Joe,

Your experience and story about Puppet and OpenStack makes me feel like
you are a long lost co-worker. :)

On 2014-09-25 10:30 PM, Joe Topjian wrote:

Hiera takes the cake for my love/hate of Puppet. I try really hard to
keep the number of hierarchies small and even then I find it awkward
sometimes. I love the concept of Hiera, but I find it can be
unintuitive.

Same here. The aspect I hate about Hiera is that files become very big
and unorganized very fast due to the quantity of configs. So you try to
split them into multiple files instead, and then you have the problem you
describe below...

Similar to the other replies, I have a "common" hierarchy
where 90% of the data is stored. The other hierarchies either override
"common" or append to it. When I need to know where a parameter is
ultimately configured, I find myself thinking "is that parameter common
across everything or specific to a certain location or node, and if so,
why did I make it specific?", then doing a "grep -R" to find where it's
located, and finally thinking "oh right - that's why it's there".

Yep. That's the feeling I was referring to when I said "heart attack".

And now, try to train a new co-worker and explain to them how it's organized:
"Oh, I felt the file was too big so I split it in the hope of restoring
sanity, which it did with limited success."

The other difficulty is the management of "common" configs like keystone
auth URL. Multiple services need this value, yet their might be split in
multiple files and the YAML anchor hack [1] I used so far does not work
across YAML files. Same for database configs which are needed by the
database server (to provision the user) and services (for the database
connection string).
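
For context, the anchor trick from [1] looks roughly like this; YAML anchors and aliases are resolved per document, so it only helps when both keys live in the same file (the keys below are made up):

cat > hieradata/common.yaml <<'EOF'
# & defines an anchor, * references it -- but only within this one file
keystone_auth_url: &keystone_auth_url "http://keystone.example.com:5000/v2.0"
nova::keystone_url:    *keystone_auth_url
neutron::keystone_url: *keystone_auth_url
EOF
# an alias in another .yaml file cannot reference the anchor above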

Another area of Puppet that I'm finding difficult to work with is
configuring HA environments. There are two main pain points here and
they're pretty applicable to using Puppet with OpenStack:

The other HA pain point is creating many-to-one configurations [...]

I think a cleaner way of doing this is to introduce service discovery
into my environment, but I haven't had time to look into this in more
detail.

I wholly agree with you and that's a concept I'm interested to explore.
Come to think of it, it strangely looks like the "dependency inversion
principle" in software development.

I however feel that an external ENC becomes inevitable to achieve this
ease of use. Unfortunately, each time I looked into it, I rapidly get
lost in my dream of a simple dashboard to manage everything. I feel I
rapidly come to the limits of what exported resources, Hiera and
puppetdb can do.

One idea would be to export an haproxy::listen resource from one of the
controllers (which now becomes a pet, as you said) and realize it on the
HAProxy nodes with its associated haproxy::member resources.
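
Roughly what that could look like with exported resources, sketched with puppetlabs-haproxy style defines (haproxy::listen / haproxy::balancermember); the class and service names are made up, and this assumes PuppetDB is available to collect the exports:

# on each controller: export a balancermember describing itself
cat > modules/profile/manifests/keystone_api.pp <<'EOF'
class profile::keystone_api {
  @@haproxy::balancermember { "keystone-${::hostname}":
    listening_service => 'keystone',
    ports             => '5000',
    server_names      => $::hostname,
    ipaddresses       => $::ipaddress,
  }
}
EOF

# on the HAProxy node: declare the frontend and collect whatever was exported
cat > modules/profile/manifests/haproxy.pp <<'EOF'
class profile::haproxy {
  haproxy::listen { 'keystone':
    ipaddress => $::ipaddress,
    ports     => '5000',
  }
  Haproxy::Balancermember <<| listening_service == 'keystone' |>>
}
EOF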

I should mention that some of these HA pains can be resolved by just
moving all of the data to the HAProxy nodes themselves. So when I want
to add a new service, such as RabbitMQ, to HAProxy, I add the RabbitMQ
settings to the HAProxy role/profiles. But I want HAProxy to be "dumb"
about what it's hosting. I want to be able to use it in a Juju-like
fashion where I can introduce any arbitrary service and HAProxy
configures itself without prior knowledge of the new service.

Yes! How do you guys think we can implement such discovery?

With Nova cells, this problem became much more apparent due to
inter-relations between the API cell and compute cells. The API cell has
to know about the compute cells and vice versa.

In general, though, I really enjoy working with Puppet. Our current
Puppet configurations allow us to stand up test OpenStack environments
with little manual input as well as upgrade to newer releases of
OpenStack with very little effort.

Yes, I really enjoy Puppet too. After all hardware/infrastructure
aspects are figured out, we are able to bootstrap a new OpenStack region
in less than an hour.

To summarize my current pain points:
- Out of control Hiera configuration files
- Lack of service auto-discovery

[1] https://dmsimard.com/2014/02/15/quick-hiera-tips/

--
Mathieu



On 10/3/14, 3:56 PM, "Mathieu Gagn?" wrote:

On 2014-10-02 11:50 PM, Michael Dorman wrote:

r10k
deploys the Puppet environments ("master" and "prod", which correspond to
git branches), hiera data, and all the modules. Hiera data is in a
separate (private) git repo, but there's only a master branch there.

Are people maintaining the manifests/modules able to access the Hiera
private repository? Should someone wish to introduce a new manifest
requiring a new Hiera value, how do they make sure it gets added to the
private repository?

How do we make sure someone introducing a new Hiera config asks the
other people to add it to the private repository?

At this point, it's all the same group, so it just works. We did start
using hiera-eyaml a while back, which keeps any of the "secrets"
encrypted, so theoretically we could make this repo non-private. But then
it's back to the standard key management problem.

Are there tests in place combining your manifests/modules and Hiera
repositories to validate that the catalog compiles correctly?

We do have this test in one of our projects and it's kind of cool. But
manifests, some modules and Hiera are all in the same repository, easing
its maintenance, tests and deployment.

No real integration testing to speak of today. The fact that we don't use
any branches on the hiera repo simplifies it a bit, but it does make it
tricky for testing across multiple branches of each repo.

Our team is struggling to come up with a clever way to handle Hiera
secrets, as not all people contributing to our manifests/modules should
be able to access them. The challenges are related to tests, packaging
and distribution. We have yet to come up with ideas, so it's mostly
exploration and popular consultation for now.

You should check out hiera-eyaml if you haven't already
(https://github.com/TomPoulton/hiera-eyaml). Doesn't solve All The
Problems, but helps.
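
For anyone new to it, the workflow is roughly the following (command names from the hiera-eyaml README; the key paths, label and file names are just examples):

gem install hiera-eyaml
eyaml createkeys                 # writes ./keys/private_key.pkcs7.pem and public_key.pkcs7.pem
eyaml encrypt -s 'sup3r-s3cret' -l nova::rabbit_password
# paste the ENC[PKCS7,...] blob it prints into your hiera data, e.g.:
#   nova::rabbit_password: ENC[PKCS7,MIIBeQYJKoZ...]
eyaml decrypt -f hieradata/world/prod.eyaml   # sanity-check what is stored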

I've been a big fan of the role/profile model, too, and it's worked well
for us. One thing I've thought about is specifying a list of profile
classes for each node or node type in hiera, rather than maintaining a
mostly static role module. Then we can just hiera_include(), which is the
method we use in site.pp to include the role class now. I'd be interested
in others' thoughts on this idea. I can't really think of a compelling
reason to switch, other than it's kind of clever.

Unless you face strong limitations with your actual model, I don't see
any reason to switch to a "pure" role model. =)

Just because you can, doesn't mean you should, right?

--
Mathieu

[Openstack-operators] Ops Meetup to return in Paris - your ideas wanted!

All,

Your user committee is pleased to report that the ops meetup will return
in Paris - at even larger scale!

Recall that this is in addition to the operations (and other) track's
presentations. It's aimed at giving us a design-summit-style place to
congregate, swap best practices, ideas and give feedback.

The biggest feedback we had regarding the organisation of these events
so far is that you want to see direct action happen as a result of our
discussions. To make that reality we're getting developers more involved
and also forming a number of working groups to take concrete steps on a
specific topic.

We had some great success with this in San Antonio a few months back,
and so this time we're hoping to make every session actionable and have
a definable result.


To do this, we need your help. Please propose session ideas on:

https://etherpad.openstack.org/p/PAR-ops-meetup

ensuring you read the new instructions :)


This time we have not one, but two big rooms on the Monday, and some
smaller rooms on Thursday. The Monday sessions are aimed at interactive
planning discussions, while the Thursday sessions are for working groups in
specific areas. We're seeking suggestions from all areas - ops folk,
those using clouds, or those who are OpenStack contributors.

From here, the user committee will collate the suggestions and propose
an agenda.

Here for any questions you might have :)

Regards,

Tom

on behalf of the OpenStack User Committee

I think having Configuration Management working groups (per tool
puppet, chef, ansible, saltstack, whomever) meet up on Thursday would
be very valuable if we could get some of the people writing the
tooling to volunteer as moderators.

So if you're a contributor to one of those efforts I'd encourage you
to go on the etherpad and add something under the "Thursday working
groups" section.

-Jon

On Fri, Oct 3, 2014 at 2:12 AM, Tom Fifield wrote:
All,

Your user committee is pleased to report that the ops meetup will return
in Paris - at even larger scale!

Recall that this is in addition to the operations (and other) track's
presentations. It's aimed at giving us a design-summit-style place to
congregate, swap best practices, ideas and give feedback.

The biggest feedback we had regarding the organisation of these events
so far is that you want to see direct action happen as a result of our
discussions. To make that reality we're getting developers more involved
and also forming a number of working groups to take concrete steps on a
specific topic.

We had some great success with this in San Antonio a few months back,
and so this time we're hoping to make every session actionable and have
a definable result.


To do this, we need your help. Please propose session ideas on:

https://etherpad.openstack.org/p/PAR-ops-meetup

ensuring you read the new instructions :)


This time we have not one, but two big rooms on the Monday, and some
smaller rooms on Thursday. The Monday sessions are aimed at interactive
planning discussions, while the Thursday sessions are for working groups in
specific areas. We're seeking suggestions from all areas - ops folk,
those using clouds, or those who are OpenStack contributors.

From here, the user committee will collate the suggestions and propose
an agenda.

Here for any questions you might have :)

Regards,

Tom

on behalf of the OpenStack User Committee



[Openstack-operators] Problem creating resizable CentOS 6.5 image

Hi,
I'm creating a CentOS 6.5 image with Oz and following the guide here:

http://docs.openstack.org/image-guide/content/ch_openstack_images.html

In particular I made sure that the kickstart creates only one partition
( "/" ) which fills all the available initial image space. Then I made
sure to install the 3 packages

cloud-init
cloud-utils
cloud-utils-growpart

as clearly mentioned in the web page linked above.

When I launch the image with the small flavor (20 GB disk size), its "/"
file system is only 2 GB.

What am I doing wrong?
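
One way to narrow this down from inside a booted instance (a rough diagnostic sketch; /dev/vda and partition 1 are assumptions based on the single-partition layout below):

# did cloud-init's growpart/resize steps run at boot?
grep -iE 'growpart|resize' /var/log/cloud-init.log
# can growpart (from cloud-utils-growpart) grow the partition right now?
growpart --dry-run /dev/vda 1
# compare partition size vs. filesystem size actually seen by the OS
fdisk -l /dev/vda
df -h /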

thanks,

 Alvise

P.S. In the following the kickstart and the oz-template files:

===== KICKSTART =====
install
url --url http://mirror3.mirror.garr.it/mirrors/CentOS/6/os/x86_64/
text
key --skip
keyboard it
lang en_US.UTF-8
skipx
network --bootproto dhcp
rootpw --plaintext XXXXXXX

authconfig

authconfig --enableshadow --enablemd5

selinux --disabled

service --enabled=ssh

timezone --utc Europe/Rome

bootloader --location=mbr --append="console=tty0 console=ttyS0,115200"
zerombr yes
clearpart --all --initlabel

part / --size=200 --grow

part / --size=1 --grow

reboot

%packages
@core
@base

===== OZ TEMPLATE =====

centos65_x86_64
CentOS Linux 6.5 x86_64 template

2G


CentOS-6
5
x86_64

file:///Images/CentOSMirror/CentOS-6.4-x86_64-minimal.iso

<file name="/etc/sysconfig/network">

NETWORKING=yes
NOZEROCONF=yes

<file name="/etc/sysconfig/network-scripts/ifcfg-eth0">

DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
TYPE=Ethernet



http://vesta.informatik.rwth-aachen.de/ftp/pub/Linux/fedora-epel/6/$basearch
False







echo -n > /etc/udev/rules.d/70-persistent-net.rules
echo -n > /lib/udev/rules.d/75-persistent-net-generator.rules


rpm --import http://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL
rpm --import http://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-6
rpm -ivh
http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm


adduser ec2-user -G adm,wheel


echo "%wheel ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers


/usr/bin/passwd -d root || :
/usr/bin/passwd -l root || :


iptables -F
echo -n > /etc/sysconfig/iptables

When I launch the image with the small flavor (20 GB disk size), its "/"
file system is only 2 GB.

In our experience (beginning of this year) cloud-init did not work for resizing.
I think a colleague fixed this by using dracut-modules-growroot and some scripting to resize.
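
If it helps, the recipe we have seen for CentOS 6 images is roughly the following, run inside the image build (a sketch; it assumes EPEL is already enabled, as in the template above):

yum install -y cloud-init cloud-utils cloud-utils-growpart dracut-modules-growroot
dracut --force    # rebuild the initramfs so the growroot module is included
# growroot then enlarges the partition at boot, and cloud-init resizes the filesystem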

Cheers,
Robert van Leeuwen

[Openstack-operators] OpenStack Community Weekly Newsletter (Sep 26 - Oct 3)

  Network Function Virtualization -- The Opportunity for OpenStack
  and Open Source
  <http://blogs.gnome.org/markmc/2014/10/02/network-function-virtualization-the-opportunity-for-openstack-and-open-source/>

This week's launch of OPNFV
https://www.openstack.org/blog/2014/09/telcos-mobilizing-to-drive-nfv-adoption/
is a good opportunity to think about a simmering debate in the OpenStack
developer community for a while now -- what exactly does NFV have to do
with OpenStack, and is it a good thing? Follow Board Member Mark
McLoughlin's (http://blogs.gnome.org/markmc) journey around NFV.

  #1 OpenStack contributor: all of us
  <http://ttx.re/largest-openstack-contributor.html>

Thierry Carrez http://ttx.re/ celebrates every little contribution
gone into OpenStack. "It doesn't matter who is #1, it matters that we
all can contribute, and that we all do contribute. It matters that we
keep on making sure everyone can easily contribute. That's what's really
important, and I wish we all were celebrating that."

  OpenStack Havana End of Upstream Support Lifetime
  <http://lists.openstack.org/pipermail/openstack-announce/2014-September/000286.html>

The OpenStack Havana 2013.2.4 integrated point release[1] last Tuesday,
September 23, marks the end of stable support for OpenStack Havana.

The Road To Paris 2014 -- Deadlines and Resources

During the Paris Summit there will be a working session for the Women of
OpenStack to frame up more defined goals and line out a blueprint for
the group moving forward. We encourage all women in the community to
complete this very short survey https://www.surveymonkey.com/s/V39BL7H
to provide input for the group.

Ask OpenStack https://ask.openstack.org/ is the go-to destination for
OpenStack users. Interesting questions waiting for answers:

Welcome new core reviewers: Andreas Jaeger, Anita Kuno and Sean Dague to
project-config-core,
http://lists.openstack.org/pipermail/openstack-dev/2014-September/047480.html
James Carey to oslo-i18n-core
http://lists.openstack.org/pipermail/openstack-dev/2014-September/046899.html

Jiri Suchomel
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person3b7c1912-78d1-48f2-8174-17dec33bb904
Rico lin
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persone7e0332a-d1ca-454e-8d9e-e112828f916a
Jun Hong Li
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personb0369b14-6d94-4e97-8afd-4fe88fb9313a
Razumovsky Peter
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person77aef7ba-477f-4553-88a3-26981e350e85
Patrick East
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personbc0e54ba-ae96-408a-8544-78ff799cd755
Nikolay Fedotov
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person8ca1b6f4-9b76-4b22-92af-b46c7d6a688f
Johnson koil raj
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person8d337240-aa54-4114-b3bb-d13447a37050
Jake Kitchener
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person45628aec-e17f-462f-9fda-2e1290837e87
Tomas Bezdek
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person5d38b7b7-9df9-46ca-9c3e-c04607b18e59
Aaron Smith
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person444abf4b-ba39-4da4-99e3-12253bc2c547
Alberto Planas
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person088116c0-d458-49c9-b234-1ef348bab713
Paul Karikh
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person3a7ad3b8-95dd-465e-ab1a-7f2b28358782
TAHMINA AHMED
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person258bd569-aac0-4dfb-93e3-233f35f5b78c
dominik dobruchowski
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person66d0c592-b37a-456a-98ea-08c86e9e6aa9
Mahati
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persone3d8a76d-f9d1-40cf-a182-942c3fd86a43
Kedar
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person6ae0ead4-cf18-45d8-8cb9-acee9006d39e

/The weekly newsletter is a way for the community to learn about all the
various activities occurring on a weekly basis. If you would like to add
content to a weekly update or have an idea about this newsletter, please
leave a comment./


[Openstack-operators] Issues with hybrid neutron ml2/ovs-agent ports after Icehouse upgrade

Hi all,

Wanted to share details of an issue we just discovered around hybrid ml2/ovs configuration under Icehouse.

We run ml2 on the API nodes, but the openvswitch plugin/ovs-agent on the compute/network nodes. We ran this split setup because under Havana this was the only way we could get ml2 working correctly, and this setup was recommended by an ml2 dev. We kept this design because it continued to work under Icehouse, seemingly without issue. We upgraded from Havana to Icehouse without too much trouble a couple of months ago.

However, we had not rebooted any compute nodes since then until this week. When the compute nodes came back up, instances that had been created before moving to Icehouse did not start up because the VIF for them was not being created.

Exact error is: https://gist.githubusercontent.com/krislindgren/c1f4f79dc12403c4815d/raw/386ef0607f32088ad372a27e06e3606f6c1ac220/gistfile1.txt

Turns out this is because ports created under Havana were missing the 'hybrid' property, and this was preventing the VIF from being recreated on the compute host. The ports for instances created after the Icehouse upgrade did have this property, and those instances started back up without a problem.

Specifically, the problem is that in the neutron.ml2_port_bindings table, instances created before the upgrade had this for vif_details:
{"port_filter": true}
Instances created after the upgrade had this for vif_details:
{"port_filter": true, "ovs_hybrid_plug": true}
Missing this flag caused instances' VIFs to never get plugged. The cause is this method:
https://github.com/openstack/nova/blob/2014.1.2/nova/virt/libvirt/vif.py#L464-L470
Specifically, because the ovs_hybrid_plug flag isn't in the vif_details, vif.is_hybrid_plug_enabled() returns False and instead of calling plug_ovs_hybrid(), the driver calls plug_ovs_bridge(). plug_ovs_bridge() only calls its super implementation, which is a no-op method, so the VIF never actually gets plugged.

We ended up solving this by manually assigning the hybrid property on the ports that were missing it via MySQL. Then starting all the Havana instances worked normally.
Here's the SQL update we used:
update ml2_port_bindings set vif_details = '{"port_filter": true, "ovs_hybrid_plug": true}' where vif_details not like '%ovs_hybrid_plug%';
Note: that update statement will overwrite ALL entries that don't contain the
ovs_hybrid_plug property. This was fine for us, but you should verify that it won't munge any of your data.
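
A more cautious variant is to preview the affected rows first (a sketch; adjust the database name and credentials to your setup):

# show which port bindings the UPDATE above would touch
mysql neutron -e "
  SELECT port_id, vif_type, vif_details
  FROM ml2_port_bindings
  WHERE vif_details NOT LIKE '%ovs_hybrid_plug%';"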

Not sure if we missed a step in the Icehouse upgrade, and/or if this is just a function of our particular configuration. It might be possible that running the ml2 plugin with the openvswitch mechanism driver and the ovs-agent is now the correct solution, because that mechanism driver has a hardcoded ovs_hybrid_plug=True value: https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mech_openvswitch.py#L40

Hope this may be useful info for somebody.

Mike (et. al.)


Il 04/10/2014 01:19, Michael Dorman ha scritto:

Hi all,

Wanted to share details of an issue we just discovered around hybrid
ml2/ovs configuration under Icehouse.

...

Specifically, the problem is that in the neutron.ml2_port_bindings
table, instances created before the upgrade had this for vif_details:

{"port_filter": true}

I had the same kind of troubles during the Havana/ovs to Icehouse/ML2
upgrade. But the end result is a bit different for me, the old VMs start
correctly and have network connectivity, but they do not have the bridge
interposed before OVS (so security groups do not get applied).
If I add the missing properties in the mysql table, sometimes ports are
not created, or they have wrong VLAN tags, meaning there is some other
state corrupted somewhere, but I cannot find it.

I am quite sure that the database upgrade script distributed with
Neutron for Icehouse was not tested under even the most simple Havana
configurations. I have a very standard Ubuntu-KVM-OVS setup and I had
that script throw so many exceptions in my face that I had to debug it
line by line. The upgrade documentation, I am sorry to say, is a joke.
There is not even a hint on what to look for if some step fails or how
to check that the upgrade was successful.

With Juno, if we decide to install it, I will probably install from
scratch. I will do no more upgrades for a while; the process is too
unreliable.

[Openstack-operators] Request for feedback on DHCP IP usage

Hi operators,

I wanted to ask for feedback on a design issue regarding DHCP agent and IP per
agent.

So a short introduction first - I want to propose a spec to have a distributed
DHCP agent that can run directly on the compute node, and service only the VMs
running locally on it.
This will help balance out the DHCP agents across the cloud, and each node will
only get the information it requires (no more MB-sized messages which get the
queue stuck).
It will also limit the scope of failure of the DHCP agent and/or service to
that compute node alone.

Now, regarding the IP consumption there are two possible alternatives:
1. Use a single IP per serviced subnet for all the servers (similar to DVR).
2. Use an IP per server, per subnet, per host where VMs are serviced.

So in a theoretical cloud with 100 running VMs for 10 subnets and 10 compute
nodes, per subnet the 1st approach will take only 1 IP, while the second will
take a minimum of 1 IP and a maximum of 10 (limited by the number of compute nodes).

Now, I know the 1st solution seems very appealing but thinking of it further
reveals very serious limitations:
* No HA for DHCP agents is possible (more prone to certain race conditions).
* DHCP IP can't be reached from outside the cloud.
* You will just see a single port per subnet in Neutron, without granularity of
the host binding (but perhaps it's not that bad).
* This solution will be tied initially only to OVS mechanism driver, each other
driver or 3rd party plugin will have to support it individually in some way.

So basically my question is - which solution would you prefer as a cloud op?

Is it that bad to consume more than 1 IP, given that we're talking about private
isolated networks?

Regards,
Mike

Hi operators,

I wanted to ask for feedback on a design issue regarding DHCP agent and IP per
agent.

Very happy about dev's coming here for input :)

Now, regarding the IP consumption there are two possible alternatives:
1. Use single IP per serviced subnet for all the servers. (similar to DVR)
2. Use IP per server per subnet per host where VMs are serviced.

So in a theoretical cloud with 100 running VMs for 10 subnets and 10 compute
nodes, per subnet the 1st approach will take only 1 IP while the second will
take a minimum of 1 IP and a maximum of 10 (limited by amount of compute nodes).

If I understand correctly, taking an IP (potentially) per hypervisor can quickly grow to insane proportions.
A one-to-one ratio would not be so far-fetched for a cloud with a significant number of hypervisors.
For us the current "standard" /24 would become smallish...
Also, when live-migrating machines to a different hypervisor you could run out of IPs for the DHCP servers...

Now, I know the 1st solution seems very appealing but thinking of it further
reveals very serious limitations:
* No HA for DHCP agents is possible (more prone to certain race conditions).
* DHCP IP can't be reached from outside the cloud.
* You will just see a single port per subnet in Neutron, without granularity of
the host binding (but perhaps it's not that bad).
* This solution will be tied initially only to OVS mechanism driver, each other
driver or 3rd party plugin will have to support it individually in some way.

The thing that worries me the most is the implementation in the OVS mechanism driver.
It needs to be very well documented that you might not get feature parity with different drivers.

So basically my question is - which solution would you prefer as a cloud op?
As others before me I also lean toward option one.

Cheers,
Robert van leeuwen

[Openstack-operators] Problem creating resizable CentOS 6.5 image

Does this cover the scenario of a user launching CentOS 6.x, updating the
kernel, snapshotting, and having the relaunched instance resized?

On Mon, Oct 6, 2014 at 12:38 PM, Regan McDonald
wrote:

Seconded. This is what I use with my CentOS images, and it works great.

On Mon, Oct 6, 2014 at 1:54 PM, Robert Plestenjak <
robert.plestenjak at xlab.si> wrote:

Try this:

https://github.com/flegmatik/linux-rootfs-resize

  • Robert

----- Original Message -----
From: "Antonio Messina" <antonio.s.messina at gmail.com>
To: "Robert van Leeuwen" <Robert.vanLeeuwen at spilgames.com>
Cc: openstack-operators at lists.openstack.org
Sent: Friday, October 3, 2014 2:50:44 PM
Subject: Re: [Openstack-operators] Problem creating resizable CentOS 6.5
image

I use this snippet in my %post section. I don't find it particularly
elegant, but it works just fine:

# Set up to grow root in initramfs
cat << EOF > 05-grow-root.sh
#!/bin/sh

/bin/echo
/bin/echo Resizing root filesystem

/bin/echo "d
n
p
1


w
" | /sbin/fdisk -c -u /dev/vda
/sbin/e2fsck -f /dev/vda1
/sbin/resize2fs /dev/vda1
EOF

chmod +x 05-grow-root.sh

dracut --force --include 05-grow-root.sh /mount --install 'echo fdisk e2fsck resize2fs' /boot/"initramfs-grow_root-$(ls /boot/|grep initramfs|sed s/initramfs-//g)" $(ls /boot/|grep vmlinuz|sed s/vmlinuz-//g)
rm -f 05-grow-root.sh

tail -4 /boot/grub/grub.conf | sed s/initramfs/initramfs-grow_root/g | sed s/CentOS/ResizePartition/g | sed s/crashkernel=auto/crashkernel=0@0/g >> /boot/grub/grub.conf

It only works if the root filesystem is /dev/vda1 (which is a very
common setup anyway) but can be adapted.

I only tested it with CentOS 5 and 6. The full script is available at
https://github.com/gc3-uzh-ch/openstack-tools/

.a.

--
antonio.s.messina at gmail.com
antonio.messina at uzh.ch +41 (0)44 635 42 22
S3IT: Service and Support for Science IT http://www.s3it.uzh.ch/
University of Zurich
Winterthurerstrasse 190
CH-8057 Zurich Switzerland



[Openstack-operators] Ops Meetup to return in Paris - your ideas wanted!

And I want to say folks, I feel your pain. I come from a QA and
operations background in large, mission critical, distributed systems
and logs have always been my best friends and the bane of my
existence when they don't have what you need. I get it. I want to help
fix it.

I'm also looking forward to cleaning up the logging in OpenStack projects.

May I suggest on the etherpad that we take an approach of listing
specific log messages that we (devs doing debugging and operators
doing diagnostics/operations) find less than useful?

I think if we keep the etherpad focused on specific log messages, we can
then start to identify:

  • changes to those log messages (structure, level, audience, payload, etc)
  • log message "archetypes" that we can then use to generalize into
    best practice documentation on the wiki (to add to what is already there
    [1])

Does this sound like a reasonable approach?

+1
We recently set up logstash here, and doing the grok magic was quite a pain
with all the different ways the logging is formatted.
We also throw a bit in the bin because there is no useful info in it.

Looking at our logstash grok I could probably make some suggestions on what we find useful and not :)

Cheers,
Robert van Leeuwen

Great idea, Jay!

I was thinking we could do the "dirty dozen" and list all the most hated messages and prioritize them. Game-ify it a bit and keep the top ten worst rolling as low hanging fruit bugs. We keep the worst in the queue and as they get fixed, we mark another dead;-)

But, yeah, we've got to list them. I've added a section to the bottom of the etherpad. And I've moved Mathieu Gagne's to the first nomination for bad log messages.

Also, I will make sure they become bugs and post the links so folks can vote on the actual bug and add info/comments.

Also, thanks for the Wiki link. I've been busy on other fires and was just getting to that. I plan to expand the topic and include decisions we make, info from these discussions and old(and new) ML postings, etc.

Keep the suggestions coming. And please don't be afraid to post to the etherpad. I want to hear your pain.

More coming soon.
--Rocky

Mon, 06 Oct 2014 12:29:53 -0400

From: Jay Pipes
To: openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] Ops Meetup to return in Paris -
your ideas wanted!
Message-ID: <5432C381.2000404 at gmail.com>
Content-Type: text/plain; charset=windows-1252; format=flowed

On 10/03/2014 03:07 PM, Rochelle.RochelleGrober wrote:

Hey Tom and operators,

Just wanted to say I volunteered to organize/drive the log
rationalization effort. I'm glad to see it on the Operators'
schedule
and added it to the working group schedule. I should have more info
out
and better organized by the summit. I've already got a volunteer or
two;-) The etherpad I started is:

https://etherpad.openstack.org/p/Log-Rationalization

and I also put the link under the working group area of the Tom's
etherpad.

Please, everyone, I'm looking for:

• Short term: reduction of pain points (where would some focus on format,
content and/or levels during Kilo help the most?)

• Longer term: standards, automated verification through git review
(hacking, etc.), review list additions, etc.

• Volunteers: specifications, bugs, documentation, coding, repairing code

And I want to say folks, I feel your pain. I come from a QA and
operations background in large, mission critical, distributed systems
and logs have always been my best friends and the bane of my
existence when they don't have what you need. I get it. I want to
help
fix it.

I'm also looking forward to cleaning up the logging in OpenStack
projects.

May I suggest on the etherpad that we take an approach of listing
specific log messages that we (devs doing debugging and operators
doing diagnostics/operations) find less than useful?

I think if we keep the etherpad focused on specific log messages, we can
then start to identify:

  • changes to those log messages (structure, level, audience, payload,
    etc)
  • log message "archetypes" that we can then use to generalize into
    best practice documentation on the wiki (to add to what is already
    there
    [1])

Does this sound like a reasonable approach?

-jay

[1] https://wiki.openstack.org/wiki/LoggingStandards

[Openstack-operators] extend the ip address of subnet in Havana?

Hi All,

Is there any way to extend the IP address range of a subnet in Havana?

Neutron

On Thu, Oct 9, 2014 at 12:57 PM, Abel Lopez wrote:

Using neutron or nova-network?

On Oct 9, 2014, at 9:47 AM, raju <raju.roks at gmail.com> wrote:

Hi All,

Is there any way to extend the IP address range of a subnet in Havana?



[Openstack-operators] Cinder with EMC VNX

Hi guys, I'm about to deploy a small cloud (Ubuntu Trusty + OpenStack
Icehouse) using an EMC VNX as cinder backend. So far I've seen at least
four different possibilities to use this storage:

- http://docs.openstack.org/icehouse/config-reference/content/emc-vnx-direct-driver.html
  (just iSCSI, no FC)
- http://docs.openstack.org/icehouse/config-reference/content/emc-smis-driver.html
  (should work with VNX+FC)
- http://www.rethinkstorage.com/vipr-and-openstack-integration-how-it-works#.VDdVlulgcuo
  (ViPR solution)
- https://github.com/emc-openstack/vnx-direct-driver/blob/master/README_FC.md
  (EMC official drivers)

I'd like to know your experiences/recommendations about these cinder drivers.
Have you tried any of them?

Regards,

--
Pavlik Salles Juan José
Blog - http://viviendolared.blogspot.com

If you are using VNX in Icehouse, this is the recommended one:

More comments below.

Thanks,
Xing

From: Juan José Pavlik Salles [mailto:jjpavlik at gmail.com]
Sent: Friday, October 10, 2014 12:13 AM
To: openstack-operators at lists.openstack.org
Subject: [Openstack-operators] Cinder with EMC VNX

Hi guys, I'm about to deploy a small cloud (Ubuntu Trusty + OpenStack Icehouse) using an EMC VNX as cinder backend. So far I've seen at least four different possibilities to use this storage:

-http://docs.openstack.org/icehouse/config-reference/content/emc-vnx-direct-driver.html just iSCSI, no FC.
[Xing] This driver works for iSCSI, but we can't backport the FC driver and other new features to Icehouse.

-http://docs.openstack.org/icehouse/config-reference/content/emc-smis-driver.html should work with VNX+FC
[Xing] The SMI-S based driver will only support VMAX, not VNX, starting from Juno. So this one is not recommended any more.

-http://www.rethinkstorage.com/vipr-and-openstack-integration-how-it-works#.VDdVlulgcuo ViPR solution
[Xing] If you have multiple storage platforms, this is the recommended choice.

-https://github.com/emc-openstack/vnx-direct-driver/blob/master/README_FC.md EMC official drivers.

I'd like to know your experiences/recommendations about these cinder drivers. Have you tried any of them?

Regards,

--
Pavlik Salles Juan José
Blog - http://viviendolared.blogspot.com

[Openstack-operators] OpenStack Community Weekly Newsletter (Oct 3 - 10)

  Mentoring others and yourself
  <http://blog.flaper87.com/post/5437068bd987d26792ce7139/>

While this topic would have been good for the Tips-and-tricks section
below, I think it deserves to open this week's wrap-up. With a new
session of Outreach Program for Women about to start for OpenStack, our
good mentor Flavio Percoco (http://blog.flaper87.com/) gives some ideas
on being a good mentor.

  OpenStack Technical Committee Update
  <http://www.openstack.org/blog/2014/10/openstack-technical-committee-update-2/>

The last meeting of the current Technical Committee before the elections
(which started today). Vishvananda Ishaya
http://www.openstack.org/blog/2014/10/openstack-technical-committee-update-2/
wraps up the conversations around graduation, the contributor license
agreement and the "big tent".

    Next steps for 'Hidden Influencers'
    <http://maffulli.net/2014/10/07/next-steps-for-hidden-influencers/>

With Paris only weeks away it's time to announce that we have a time and
place to meet people whose job is to decide what OpenStack means for
their company. The OpenStack Foundation has offered a room to meet in
Paris on Monday, November 3rd, in the afternoon
http://openstacksummitnovember2014paris.sched.org/event/268a9853812c22ca8d0636b9d8f0c831#.VDMxjSldVRg:
please add the meeting to your schedule
http://openstacksummitnovember2014paris.sched.org/event/268a9853812c22ca8d0636b9d8f0c831#.VDMxjSldVRg
and join the mailing list
http://lists.openstack.org/cgi-bin/mailman/listinfo/product-wg.

The Road To Paris 2014 -- Deadlines and Resources

During the Paris Summit there will be a working session for the Women of
OpenStack to frame up more defined goals and line out a blueprint for
the group moving forward. We encourage all women in the community to
complete this very short survey https://www.surveymonkey.com/s/V39BL7H
to provide input for the group.

Ask OpenStack https://ask.openstack.org/ is the go-to destination for
OpenStack users. Interesting questions waiting for answers:

Mudassir Latif
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person291a86f6-3fbe-42f3-9844-888437613988
Daniel Mellado
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personc41a7449-0e2d-4514-90bc-d9ef098d9039
Anna
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person61594bc5-6672-4617-911c-bc8da2d267aa
Joakim Löfgren
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personc884d105-ebcb-422e-844a-f26cf31012fa
woody
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person4954e97f-7bf3-458b-a233-3d9e1c74f67e
Shaifali Agrawal
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persondaba20ce-094c-43bd-a7d9-393c926290c6
Barnaby Court
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person2303be14-3102-4e0c-b1cb-518a9af0c3f9
Oleksii Zamiatin
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personefbb5b98-e4e4-409b-bd29-1153905fe0cc

/The weekly newsletter is a way for the community to learn about all the
various activities occurring on a weekly basis. If you would like to add
content to a weekly update or have an idea about this newsletter, please
leave a comment./

[Openstack-operators] rsyslog update caused services to break?

Hello,

This morning we noticed various nova, glance, and keystone services (not
cinder or swift) not working in two different clouds and required a restart.

We thought it was a network issue since one of the only commonalities
between the two clouds was that they are on the same network.

Then later in the day I logged into a test cloud on a totally separate
network and had the same problem.

Looking at all three environments, the commonality is now that they have
Ubuntu security updates automatically applied in the morning and this
morning rsyslog was patched and restarted.

I found this oslo bug that kind of sounds like the issue we saw:

https://bugs.launchpad.net/oslo.log/+bug/1076466

Doing further investigation, log files do indeed show a lack of entries for
various services/daemons until they were restarted.

Has anyone else run into this? Maybe even this morning, too? :)

Thanks,
Joe

That's really interesting - thanks for the link.

We've been able to narrow down why we didn't see this with Cinder or Swift: in
short, Swift isn't using Oslo (AFAIK), and we had some previous logging
issues with Cinder in Havana so we altered the logging setup a bit.

On Mon, Oct 13, 2014 at 3:05 AM, Francois Deppierraz <francois at ctrlaltdel.ch
wrote:

Hi Joe,

Yes, same problem here running Ubuntu 14.04.

The symptom is nova-api, nova-conductor and glance-api eating all CPU
without responding to API requests anymore.

It is possible to reproduce it thanks to the following script.

https://gist.github.com/dbishop/7a2e224f3aafea1a1fc3
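
Independent of that script, a rough way to check whether a given service is wedged on the dead syslog socket after the rsyslog restart (a sketch; the service name and the exact syscalls you will see are assumptions):

service rsyslog restart
pid=$(pgrep -f nova-api | head -n1)
# a wedged process typically spins on its old /dev/log connection
strace -p "$pid" -tt -e trace=connect,sendto 2>&1 | head -n 20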

François

On 11. 10. 14 00:40, Joe Topjian wrote:

Hello,

This morning we noticed various nova, glance, and keystone services (not
cinder or swift) not working in two different clouds and required a
restart.

We thought it was a network issue since one of the only commonalities
between the two clouds was that they are on the same network.

Then later in the day I logged into a test cloud on a totally separate
network and had the same problem.

Looking at all three environments, the commonality is now that they have
Ubuntu security updates automatically applied in the morning and this
morning rsyslog was patched and restarted.

I found this oslo bug that kind of sounds like the issue we saw:

https://bugs.launchpad.net/oslo.log/+bug/1076466

Doing further investigation, log files do indeed show a lack of entries
for various services/daemons until they were restarted.

Has anyone else run into this? Maybe even this morning, too? :)

Thanks,
Joe



[Openstack-operators] [Openstack-docs] High Availability Guide team

Fantastic to see so much interest in updating the HA Guide. I finally have
some free cycles to help move this along so how about we schedule an irc
chat so volunteer contributors can meet and we can decide on an approach? I
propose either this Friday (10/17) or next Friday (10/24). Here's my
schedule.
https://doodle.com/percona-mattgriffin

Here's the initial bug list that Anne referenced in the original email [1]
that we can at least use as a starting point for more updates. Any ideas on
where things stand on the steps that Anne proposed to get a repo set up so
we can move forward with updating content?

Thanks!
Matt

[1] https://bugs.launchpad.net/openstack-manuals/+bugs/?field.tag=ha-guide

On Mon, Sep 15, 2014 at 2:15 PM, Andreas Jaeger wrote:

On 15. September 2014 20:49:46 MESZ, Sriram Subramanian <
sriram at sriramhere.com> wrote:

Andreas,

You have glossary and pom. Anne's version is similar to your ha-guide
sub-directory below. I like how you retained the history.

.. https://github.com/ajaeger/ha-guide
glossary
https://github.com/ajaeger/ha-guide/tree/master/doc/glossary
(Setup ha-guide infrastructure, an hour ago:
https://github.com/ajaeger/ha-guide/commit/c6bf631ce7a19c363beebefb29bffc853764d569)
ha-guide
https://github.com/ajaeger/ha-guide/tree/master/doc/ha-guide
(Move to doc/ha-guide subdir, 22 hours ago:
https://github.com/ajaeger/ha-guide/commit/149c3b3beb21ae8cd8bcae3a5799c40f5eef8270)
pom.xml
https://github.com/ajaeger/ha-guide/blob/master/doc/pom.xml
(Setup ha-guide infrastructure, an hour ago:
https://github.com/ajaeger/ha-guide/commit/c6bf631ce7a19c363beebefb29bffc853764d569)

My understanding was we will end up having this document under
https://github.com/openstack/ha-doc, like
https://github.com/openstack/security-doc. Is that still correct?

thanks,
-Sriram

On Mon, Sep 15, 2014 at 3:00 AM, Andreas Jaeger wrote:

On 09/15/2014 01:24 AM, Anne Gentle wrote:

Cool trick. :) I think I got it all but please do double-check:
https://review.openstack.org/121426

The upstream this will pull from is
https://github.com/annegentle/ha-guide

That one contains a lot more files toplevel than expected - did you
forget to move the files? Compare it with my version,

Andreas
--
Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126



--
Thanks,
-Sriram
425-610-8465
www.sriramhere.com | www.clouddon.com

My repo is just the staging to import it into git.openstack.org which
gets mirrored to github,

Andreas
--
Andreas Jaeger
aj@{suse.com,novell.com,opensuse.org} Twitter / Identica: jaegerandi
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 16746 (AG Nürnberg)
This email was sent from my phone



Excellent!
For a meeting, here's a link to help us schedule a time that works best for
those interested in contributing.
https://doodle.com/xkuhru2snhvvyi7v

These are all times throughout next week. Otherwise, if many will be at the
Summit in Paris (I won't unfortunately), we can have a meeting there. My
colleague from Percona, Tushar Katarki, will be in Paris, though, so he can
help kickstart the project if needed.

I also created https://etherpad.openstack.org/p/openstack-haguide-update to
capture some ideas on what parts of the guide need attention. Please add
your comments and include links to the Guide pages if possible.

Thanks!
Matt Griffin
irc: mattgriffin

On Fri, Oct 17, 2014 at 2:54 AM, Alessandro Vozza wrote:

Dear all,

I was looking for an entry point for my first contribution to Openstack
(been around as an operator for years, but lack of programming skills kept
me away from contributing) and I think I finally found something I can give
some meaningful help to :) At my $daily_job I deal with HA architectures
for one major Openstack player, so I'm looking forward to the first meeting
and my first patch (and to meet all of you in Paris!).

regards
Alessandro

On 16 Oct 2014, at 00:55, David Medberry wrote:

Yep, Anne's right. Also, while both email addresses work for me,
openstack at medberry.net is the one subscribed to the list (so the only one
I can reply from.)

On Wed, Oct 15, 2014 at 4:37 PM, Anne Gentle wrote:

On Wed, Oct 15, 2014 at 5:30 PM, Matt Griffin <matt.griffin at percona.com>
wrote:

Fantastic to see so much interest in updating the HA Guide. I finally
have some free cycles to help move this along so how about we schedule an
irc chat so volunteer contributors can meet and we can decide on an
approach? I propose either this Friday (10/17) or next Friday (10/24).
Here's my schedule.
https://doodle.com/percona-mattgriffin

Here's the initial bug list that Anne referenced in the original email
[1] that we can at least use as a starting point for more updates. Any
ideas on where things stand on the steps that Anne proposed to get a repo
set up so we can move forward with updating content?

Repo is ready-to-go here:
http://git.openstack.org/cgit/openstack/ha-guide

Christian did some initial clean up so it is ready to keep iterating on.
A meeting sounds like a great first step, thanks for starting the
conversation!
Anne

Thanks!
Matt

[1]
https://bugs.launchpad.net/openstack-manuals/+bugs/?field.tag=ha-guide

On Mon, Sep 15, 2014 at 2:15 PM, Andreas Jaeger wrote:

On 15. September 2014 20:49:46 MESZ, Sriram Subramanian <
sriram at sriramhere.com> wrote:

Andreas,

You have glossary and pom. Anne's version is similar to your ha-guide
sub-directory below. I like how you retained the history.

.. https://github.com/ajaeger/ha-guide
glossary
https://github.com/ajaeger/ha-guide/tree/master/doc/glossary
(Setup ha-guide infrastructure, an hour ago:
https://github.com/ajaeger/ha-guide/commit/c6bf631ce7a19c363beebefb29bffc853764d569)
ha-guide
https://github.com/ajaeger/ha-guide/tree/master/doc/ha-guide
(Move to doc/ha-guide subdir, 22 hours ago:
https://github.com/ajaeger/ha-guide/commit/149c3b3beb21ae8cd8bcae3a5799c40f5eef8270)
pom.xml
https://github.com/ajaeger/ha-guide/blob/master/doc/pom.xml
(Setup ha-guide infrastructure, an hour ago:
https://github.com/ajaeger/ha-guide/commit/c6bf631ce7a19c363beebefb29bffc853764d569)

My understanding was we will end up having this document under
https://github.com/openstack/ha-doc, like
https://github.com/openstack/security-doc. Is that still correct?

thanks,
-Sriram

On Mon, Sep 15, 2014 at 3:00 AM, Andreas Jaeger wrote:

On 09/15/2014 01:24 AM, Anne Gentle wrote:

Cool trick. :) I think I got it all but please do double-check:
https://review.openstack.org/121426

The upstream this will pull from is
https://github.com/annegentle/ha-guide

That one contains a lot more files toplevel than expected - did you
forget to move the files? Compare it with my version,

Andreas
--
Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126



--
Thanks,
-Sriram
425-610-8465
www.sriramhere.com | www.clouddon.com

My repo is just the staging to import it into git.openstack.org which
gets mirrored to github,

Andreas
--
Andreas Jaeger
aj@{suse.com,novell.com,opensuse.org} Twitter / Identica: jaegerandi
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 16746 (AG Nürnberg)
This email was sent from my phone



--
Alessandro Vozza
EMEA Openstack Technical Specialist
De Entree 238, 1101EE Amsterdam
tel: + 31 20 5651200, mob:+ 31 6 43197789



[Openstack-operators] Ops Meetup to return in Paris - your ideas wanted!

Final call! We also need volunteers for architecture show and tell. Add
your name to the etherpad!

On 09/10/14 18:48, Tom Fifield wrote:

Reminder! We probably need to close this etherpad off and schedule
sessions very soon...

On 03/10/14 14:12, Tom Fifield wrote:

All,

Your user committee is pleased to report that the ops meetup will return
in Paris - at even larger scale!

Recall that this is in addition to the operations (and other) track's
presentations. It's aimed at giving us a design-summit-style place to
congregate, swap best practices, ideas and give feedback.

The biggest feedback we had regarding the organisation of these events
so far is that you want to see direct action happen as a result of our
discussions. To make that reality we're getting developers more involved
and also forming a number of working groups to take concrete steps on a
specific topic.

We had some great success with this in San Antonio a few months back,
and so this time we're hoping to make every session actionable and have
a definable result.


To do this, we need your help. Please propose session ideas on:

https://etherpad.openstack.org/p/PAR-ops-meetup

ensuring you read the new instructions :)


This time we have not one, but two big rooms on the Monday, and some
smaller rooms on Thursday. The Monday sessions are aimed at interactive
planning discussions, while the Thursday sessions are for working groups in
specific areas. We're seeking suggestions from all areas - ops folk,
those using clouds, or those who are OpenStack contributors.

From here, the user committee will collate the suggestions and propose
an agenda.

Here for any questions you might have :)

Regards,

Tom

on behalf of the OpenStack User Committee



[Openstack-operators] state of gpu processing / passthrough

Hi all,

I did a standard google search and got a bunch of wiki pages, but I'm not
sure if they are stale.

Does anyone know what the state of GPU processing is in OpenStack? I could
have sworn I read it was available for Xen, but what about KVM?

Is anyone doing it? If so, what hardware are you using?

Thanks,
Joe

Hi Simon,

Cool - thanks for the info. :)

If anyone has more information about the state on libvirt/kvm, that'd be
greatly appreciated, too.

Thanks,
Joe

On Thu, Oct 16, 2014 at 9:34 AM, Simon Pasquier
wrote:

Hello Joe,

Yes, GPU passthrough is implemented with Xen [1] and it works with NVIDIA.
I've copied Guillaume, who developed and tested the feature for the XLcloud
project; he may be able to provide more details. Bob Ball from Citrix even
made a blog post about it [2].

I know that, for the time being, Guillaume had difficulties doing GPU
passthrough with NVIDIA cards and libvirt/KVM. This may have changed though.

BR,
Simon

[1] https://blueprints.launchpad.net/nova/+spec/pci-passthrough-xenapi
[2]
http://blogs.citrix.com/2014/03/06/hpc-cloud-enablement-using-xenserver-openstack-and-nvidia-grid-gpus-xlcloud/
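
For generic PCI passthrough on libvirt/KVM (not GPU-specific), the Juno-era wiring looks roughly like the following; the vendor/product IDs, flavor name and alias are placeholders, and GPUs may need extra care beyond this:

# on the compute node and API/scheduler nodes, in /etc/nova/nova.conf:
#   pci_passthrough_whitelist = {"vendor_id": "10de", "product_id": "1024"}
#   pci_alias = {"vendor_id": "10de", "product_id": "1024", "name": "gpu"}
# and add PciPassthroughFilter to scheduler_default_filters

# then tie a flavor to the alias:
nova flavor-key m1.gpu set "pci_passthrough:alias"="gpu:1"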

On Thu, Oct 16, 2014 at 5:22 PM, Joe Topjian wrote:

Hi all,

I did a standard google search and got a bunch of wiki pages, but I'm not
sure if they are stale.

Does anyone know what the state of GPU processing is in OpenStack? I
could have sworn I read it was available for Xen, but what about KVM?

Is anyone doing it? If so, what hardware are you using?

Thanks,
Joe


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Requesting OpenStack Operators to Help Staff Booth at USENIX LISA

Hi everyone, USENIX LISA (Large Installation System Administration Conference) is coming up November 9-14 in Seattle, WA. Yes, it's the week after the Paris Summit! 1000+ sysadmins, DevOps, architects, software engineers, and more will attend. OpenStack is a silver sponsor and a premier exhibitor. We have a limited number of full conference passes to offer to experienced OpenStack operators who would like to help staff our booth to talk to the attendees about OpenStack - and attend sessions.

The free passes cover the main conference program, meals, and evening events Wednesday-Friday, Nov 12-14. The booth is open for staffing Wed., Nov 12 from 12-7pm and Thurs., Nov 13 from 10am-2pm.

Please reply with your role in your company. We'll confirm technical candidates on a first-come, first-served basis. If/when we run out of full conference passes, we have expo only passes to offer.

Other OpenStack activities at LISA:
Half day training on Monday, Nov. 10 by Mirantis (fee applies)
Dreamhost providing OpenStack-based virtual machines to LISA Lab
OpenStack demo on Thursday, Nov. 13 by Canonical
Thank you for your support!
--
Regards,

Kathy Cacciatore
Consulting Marketing Manager
OpenStack Foundation
1-512-970-2807 (mobile)
Part time: Monday - Thursday, 9am - 2pm US CT
kathyc at openstack.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20141016/bf9f0de0/attachment.html

[Openstack-operators] libvirt+qemu-kvm passthrough device prep/cleanup

Hi all,

We have a few nodes with Dell ExpressFlash PCIe SSDs with which we are
using Nova pci passthrough associated with special flavors to handle
device assignment, but we need a way to clean up the device contents
for privacy/security. Wondering if anyone can provide
pointers/comments/experience on such things.

I see libvirt has the ability to add hooks, the closest of which seems
to be the qemu release hook (though not sure if this is right to match
instance terminate). I guess if that is appropriate we could hack
something together which:
1) parsed the domain xml to find the appropriate pci BDF of the
device/s in question
2) then we'd have to unbind them from the pci-stub module so the host
could access them
3) then I suppose dd zero the /dev/rssd* nodes
4) rebind the device with pci-stub
5) exit 0

Before we try that path, have others been there and done that?

--
Cheers,
~Blairo
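
A minimal sketch of a libvirt qemu "release" hook along those lines, assuming the domain XML is still delivered on stdin at release time and that the host driver exposes the cards as /dev/rssd* once they are unbound from pci-stub (both assumptions worth verifying, and the device names are placeholders):

#!/bin/bash
# /etc/libvirt/hooks/qemu -- rough, untested sketch
# libvirt invokes this as: qemu <guest_name> <operation> <sub-op> ...
# with the domain XML on stdin.
GUEST="$1"; OP="$2"
XML=$(cat)

if [ "$OP" = "release" ]; then
    # 1) pull the passthrough PCI addresses (BDFs) out of the domain XML
    BDFS=$(echo "$XML" | xmllint --xpath '//hostdev/source/address' - 2>/dev/null |
        sed -n "s/.*domain=.0x\([0-9a-f]*\). bus=.0x\([0-9a-f]*\). slot=.0x\([0-9a-f]*\). function=.0x\([0-9a-f]*\).*/\1:\2:\3.\4/p")
    for BDF in $BDFS; do
        # 2) unbind from pci-stub and let the host driver claim the device again
        #    (may need a short sleep / udevadm settle before the block nodes appear)
        echo "$BDF" > /sys/bus/pci/drivers/pci-stub/unbind
        echo "$BDF" > /sys/bus/pci/drivers_probe
        # 3) zero the block devices exposed by the SSD (names are an assumption)
        for DEV in /dev/rssd*; do
            [ -b "$DEV" ] && dd if=/dev/zero of="$DEV" bs=1M oflag=direct
        done
        # 4) detach from the host driver and rebind to pci-stub for the next guest
        echo "$BDF" > "/sys/bus/pci/devices/$BDF/driver/unbind"
        echo "$BDF" > /sys/bus/pci/drivers/pci-stub/bind
    done
fi
# 5) always exit 0 so libvirt does not treat a scrub failure as fatal
exit 0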

[Openstack-operators] OpenStack Community Weekly Newsletter (Oct 10 – 17)

  OpenStack Juno is here! <http://www.openstack.org/software/juno/>

OpenStack Juno, the tenth release of the open source software for
building public, private, and hybrid clouds has 342 new features to
support software development, big data analysis and application
infrastructure at scale. The OpenStack community continues to attract
the best developers and experts in their disciplines with 1,419
individuals employed by more than 133 organizations
http://www.openstack.org/foundation/companies/ contributing to the
Juno release.

  Tweaking DefCore to subdivide OpenStack platform (proposal for
  review) <http://robhirschfeld.com/2014/10/16/defcore-platform/>

For nearly two years
http://robhirschfeld.com/2013/07/22/kicking-off-core/, the OpenStack
Board has been moving towards creating a common platform definition
http://robhirschfeld.com/2014/07/16/openstack-defcore-review-interview-by-jason-baker/
that can help drive interoperability. At the last meeting
http://lists.openstack.org/pipermail/foundation/2014-September/001746.html,
the Board paused to further review one of the core tenets of the
DefCore process (Item #3
https://wiki.openstack.org/wiki/Governance/CoreDefinition: Core
definition can be applied equally to all usage models). The following
material will be a major part of the discussion for The OpenStack Board
meeting on Monday 10/20
https://wiki.openstack.org/wiki/Governance/Foundation/20Oct2014BoardMeeting.
Comments and suggestions welcome!

    Forming the OpenStack API Working Group
    <http://blog.phymata.com/2014/10/16/openstack-api-working-group/>

A new working group about APIs is forming in the OpenStack community.
Its purpose is "To propose, discuss, review, and advocate for API
guidelines for all OpenStack Programs to follow." To learn more read the
API Working Group https://wiki.openstack.org/wiki/API_Working_Group
wiki page.

  End of the Election Cycle - Results of PTL & TC Elections
  <http://lists.openstack.org/pipermail/openstack-announce/2014-October/000296.html>

Lots of confirmations and some new names. Thank you for all who served
in the past cycle and welcome to new OpenStack Tech Leads and members of
the Technical Committee.

The Road To Paris 2014 - Deadlines and Resources

During the Paris Summit there will be a working session for the Women of
OpenStack to frame up more defined goals and line out a blueprint for
the group moving forward. We encourage all women in the community to
complete this very short survey https://www.surveymonkey.com/s/V39BL7H
to provide input for the group.

Ask OpenStack https://ask.openstack.org/ is the go-to destination for
OpenStack users. Interesting questions waiting for answers:

Dominique
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personc81a4e08-ff03-450b-9226-39ae70a5b397
Savanna Jenkins
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person3eff30ec-5d84-4b29-ab85-3196399ef4d5

Andrew Boik
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personab6282bb-94a4-4bb6-aa23-df294a737f88
Marcin Karkocha
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person75b95245-3cd0-4b90-a28d-f439f31da3c5

Nelly
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person8d9bedd9-ce9e-4788-8ee6-d2aa0ff80cd2
Dmitry Nikishov
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person7595cc8d-4f47-46dc-a8c1-cade322b3b34

dominik dobruchowski
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person66d0c592-b37a-456a-98ea-08c86e9e6aa9
Cory Benfield
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person2fb5f274-a70f-4c90-811e-8ad9d797596e

mfabros
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persond208244a-d236-4591-94ef-297247443644
Richard Winters
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person81d97d08-0648-4e40-87a7-2d5d2a929da6

Nikolay Fedotov
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person8ca1b6f4-9b76-4b22-92af-b46c7d6a688f
vinod kumar
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person951fc5d2-4631-4737-8ae9-284826e1c0a0

Imran Hayder
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person5775187c-e531-419f-975b-86bce83250cf
Wayne Warren
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personfc358ecd-8c80-4387-96ab-6c73890374a9

Chaitanya Challa
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personaab9c551-822b-4984-8e0f-fde11b972909
Carol Bouchard
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person0b608e17-ae7d-48ba-9722-7d9274fea31b

Shaunak Kashyap
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person59ca379f-39b2-43f3-9201-9317cd87dec4
pradeep gondu
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person0c92f430-3d37-4213-ad99-8d7f3f8514c1

Mudassir Latif
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person291a86f6-3fbe-42f3-9844-888437613988
Vineet Menon
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persond95dde1f-852d-44ac-a3c8-1b1bb19a567f

Jiri Suchomel
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person3b7c1912-78d1-48f2-8174-17dec33bb904
Evan Callicoat
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personac934f4d-0a5e-4ac7-900c-79c0ec490360

Edmond Kotowski

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person48941d96-0283-4688-b6c7-abbba4cd04df

Julien Anguenot

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person6667f40a-1922-487f-a6a2-e8ec22fca8be

Boris Bobrov

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person6dbb4557-8ba2-4d58-abe5-7d6ffd7caee4

Rajini Ram

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personf6240323-2dff-4db0-8c62-c1a10e222b88

Nikki

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person9131258b-0dca-4130-80df-4921cb76b0c2

Martin Hickey

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person98d7ea71-ea91-4303-8b0d-03a78db15bf2

Lena Novokshonova

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personf26e0eae-3704-40ab-8688-8d257fc092d6

Jin Liu

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person87993962-244f-49ed-9297-122cb0c40234

Hao Chen

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person404b3bff-ad6c-4056-85ed-f19500847fcd

Albert

http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person606a8d91-3af3-470d-95a3-c10094f1d8b7

The weekly newsletter is a way for the community to learn about all the
various activities occurring on a weekly basis. If you would like to add
content to a weekly update or have an idea about this newsletter, please
leave a comment.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20141017/afeed744/attachment.html

[Openstack-operators] qemu 1.x to 2.0

Hello,

We recently upgraded an OpenStack Grizzly environment to Icehouse (doing a
quick stop-over at Havana). This environment is still running Ubuntu 12.04.

The Ubuntu 14.04 release notes
https://wiki.ubuntu.com/TrustyTahr/ReleaseNotes#Ubuntu_Server make
mention of incompatibilities with 12.04 and moving to 14.04 and qemu 2.0. I
didn't think that this would apply for upgrades staying on 12.04, but it
indeed does.

We found that existing instances could not be live migrated (as per the
release notes). Additionally, instances that were hard-rebooted and had the
libvirt xml file rebuilt could no longer start, either.

The exact error message we saw was:

"Length mismatch: vga.vram: 1000000 in != 800000"

I found a few bugs that are related to this, but I don't think they're
fully relevant to the issue I ran into:

https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1308756
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1291321
https://bugs.launchpad.net/nova/+bug/1312133

We ended up downgrading to the stock Ubuntu 12.04 qemu 1.0 packages and
everything is working nicely.

I'm wondering if anyone else has run into this issue and how they dealt
with it or plan to deal with it.

Also, I'm curious as to why exactly qemu 1.x to 2.0 are incompatible with
each other. Is this just an Ubuntu issue? Or is this native of qemu?

Unless I'm missing something, this seems like a big deal. If we continue to
use Ubuntu's OpenStack packages, we're basically stuck at 12.04 and
Icehouse unless we have all users snapshot their instance and re-launch in
a new cloud.

Thanks,
Joe
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20141019/5ca23078/attachment.html

The version we are using is:
1.10.2-0ubuntu2~cloud0

The version that was not working for us is:
2.0.1+git20140120-0ubuntu2~cloud1

Network:
Intel Corporation I350 Gigabit Network Connection (igb module)

We were seeing the problem, strangely enough, at the application level,
inside the VMs, where Hadoop was reporting corrupted data on TCP
connections. No other messages on the hypervisor or in the VM kernel.
Hadoop makes lots of connections to lots of different VMs moving lots
(terabytes) of data as fast as possible. Also, it was
non-deterministic, Hadoop would try several times to transfer the data,
sometimes successfully, sometimes giving up. I tried some quick iperf
tests, but they worked fine.

Daniele

On 10/20/14 18:46, Manish Godara wrote:

We had to do the same downgrade with openvswitch, the newest
version, under heavy load, corrupts packets in-transit, but we do not
have the time to investigate the issue further.

Daniele, what was the openvswitch version before and after the
upgrade? And which ethernet drivers do you have? The corruption
may be related to the drivers you have (the issues may be triggered by
the way openvswitch flows are configured in Icehouse vs Havana).

Thanks.

From: Daniele Venzano <daniele.venzano at eurecom.fr>
Organization: Eurecom
Date: Sunday, October 19, 2014 11:46 PM
To: openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] qemu 1.x to 2.0

We have the same setup (Icehouse on Ubuntu 12.04) and had similar
issues. We downgraded qemu from 2.x to 1.x, as we cannot terminate all
VMs for all users. We had non-resumable VMs also in the middle of the
1.x series and nothing was documented in the changelog.
We had to do the same downgrade with openvswitch: the newest version,
under heavy load, corrupts packets in transit, but we did not have the
time to investigate the issue further.

We plan to warn our users in time for the next major upgrade to Juno
that all VMs need to be terminated, probably during the Christmas
holidays. I do not think they will be happy.
Seeing also all the problems we had upgrading Neutron from OVS to ML2,
terminating all VMs is probably the best policy anyway during an
OpenStack upgrade. Or you do lots of migrations and upgrade qemu one
compute host at a time, but if something goes wrong you end up with
an angry user and a stuck VM.

It certainly is a big deal.

On 10/20/14 00:59, Joe Topjian wrote:

Hello,

We recently upgraded an OpenStack Grizzly environment to Icehouse
(doing a quick stop-over at Havana). This environment is still
running Ubuntu 12.04.

The Ubuntu 14.04 release notes
https://wiki.ubuntu.com/TrustyTahr/ReleaseNotes#Ubuntu_Server make
mention of incompatibilities with 12.04 and moving to 14.04 and qemu
2.0. I didn't think that this would apply for upgrades staying on
12.04, but it indeed does.

We found that existing instances could not be live migrated (as per
the release notes). Additionally, instances that were hard-rebooted
and had the libvirt xml file rebuilt could no longer start, either.

The exact error message we saw was:

"Length mismatch: vga.vram: 1000000 in != 800000"

I found a few bugs that are related to this, but I don't think
they're fully relevant to the issue I ran into:

https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1308756
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1291321
https://bugs.launchpad.net/nova/+bug/1312133

We ended up downgrading to the stock Ubuntu 12.04 qemu 1.0 packages
and everything is working nicely.

I'm wondering if anyone else has run into this issue and how they
dealt with it or plan to deal with it.

Also, I'm curious as to why exactly qemu 1.x to 2.0 are incompatible
with each other. Is this just an Ubuntu issue? Or is this native of qemu?

Unless I'm missing something, this seems like a big deal. If we
continue to use Ubuntu's OpenStack packages, we're basically stuck at
12.04 and Icehouse unless we have all users snapshot their instance
and re-launch in a new cloud.

Thanks,
Joe


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Restricting API access as "admin" users based on network

Hello all,

We have an established OpenStack cloud and as part of a round of security
hardening would like to add some additional restrictions on the use of "admin"
permissions.

In particular, we would like to limit it so that API endpoints requiring admin
access can only be used from a VPN (known range of source IP addresses). We do
not want the public-facing APIs to expose these endpoints, even to users with
the right credentials.

Has anyone already been through a similar process and have a method or advice
for us to follow?

Cheers,

Tim

On 10/20/2014 12:11 AM, Tim Goddard wrote:
Hello all,

We have an established OpenStack cloud and as part of a round of security
hardening would like to add some additional restrictions on the use of "admin"
permissions.

In particular, we would like to limit it so that API endpoints requiring admin
access can only be used from a VPN (known range of source IP addresses). We do
not want the public-facing APIs to expose these endpoints, even to users with
the right credentials.

Has anyone already been through a similar process and have a method or advice
for us to follow?
From a Keystone perspective, what you want to do is to use the "admin"
and "main" applications in the configuration and have each mapped to
different interfaces on the HTTPD server machine. Don't try to do this
with Eventlet, as Eventlet alone doesn't support it.

You'll have to decide what you want to do about Horizon, as the Admin
operations on Keystone from Horizon are RBAC controlled. You could run
two different Horizon instances, one internal and one external, and give
each a separate Auth URL. Then the Admin port would be hidden from
Horizon, but I think the admin fields would still show up on the Horizon
portal, just be non-functional. I'll let some Horizon folks chime in
with how to deal with that.

Unfortunately, each service defines these things a little differently,
and not all of them run in Eventlet. For the ones that run in Eventlet,
you'll need to use some form of termination in front of them to bind to
different interfaces.
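
As a rough illustration of the Keystone side of that, when Keystone runs under Apache/mod_wsgi you can bind the "main" (public) and "admin" applications to different interfaces. A sketch only; the addresses are placeholders and the WSGI script locations vary by distro and release:

# /etc/httpd/conf.d/wsgi-keystone.conf (sketch)
# public API only on the external interface
Listen 203.0.113.10:5000
# admin API only on the VPN/management interface
Listen 10.0.0.10:35357

<VirtualHost 203.0.113.10:5000>
    WSGIDaemonProcess keystone-main user=keystone group=keystone processes=4 threads=4
    WSGIProcessGroup keystone-main
    WSGIScriptAlias / /var/www/cgi-bin/keystone/main
</VirtualHost>

<VirtualHost 10.0.0.10:35357>
    WSGIDaemonProcess keystone-admin user=keystone group=keystone processes=4 threads=4
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /var/www/cgi-bin/keystone/admin
</VirtualHost>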

Cheers,

Tim


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

[Openstack-operators] Swift plugins

Hi everyone,

Does anybody have an idea if there is a way or plugin to make swift use ECS or Atmos?
And in any case, does Swift have the ability to use plugins to connect to external object storage solutions, or does it always operate by itself?

Regards,
Ohad

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20141020/10caebec/attachment.html

Hi Ohad,

On 20/10/14 17:35, Baruch, Ohad wrote:
Does anybody have an idea if there is a way or plugin to make swift use
ECS or Atmos?

And in any case, if swift has the ability to use plugins to connect to
external object storage solutions or is it operating by itself always?

Unfortunately, I'm not familiar with ECS or Atmos, but I did find this
article helpful for understanding swift extensibility:

https://swiftstack.com/blog/2014/02/04/swift-extensibility/

I hope it helps!

Regards,

Tom

[Openstack-operators] Disable neutron agents

Hello.

I can't find any option for neutron to disable agents (like nova
service-disable).

Right now I just shut down unwanted agents (service stop on the network node).
But if the node reboots they will come back, which is not really welcome.

How to disable agents in neutron?

Thanks.

On Mon, Oct 20, 2014 at 4:05 PM, George Shuklin <george.shuklin at gmail.com>
wrote:

On 10/20/2014 05:30 PM, Christian Berendt wrote:

On 10/20/2014 02:48 PM, George Shuklin wrote:

How to disable agents in neutron?

This should be possible with "neutron agent-update --admin-state-up
False AGENT".

You can list all available agents with "neutron agent-list".

HTH, Christian.

Thanks! It is completely missing from the documentation (off to Launchpad to report it).

Yeah, looks like
http://docs.openstack.org/havana/config-reference/content/demo_multiple_operation.html
needs a home in icehouse and juno docs.
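
If you need to script that, here is a sketch that disables every agent on one network node ('net-old' is a placeholder hostname, and it assumes admin credentials are loaded in the environment):

# see what is running where
neutron agent-list

# mark every agent on the node being retired as administratively down
for AGENT_ID in $(neutron agent-list -f csv -c id -c host | grep net-old | tr -d '"' | cut -d, -f1); do
    neutron agent-update --admin-state-up False "$AGENT_ID"
done

The agents stay registered but are no longer scheduled, and since the flag lives in the database it survives a reboot of the network node.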


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Adding vmware datastore with Cinder

Hi,

I have an environment with a vSphere 5.5 cluster managed under Nova, I want to add some storage space by adding more datastores to the cluster.
As far as I know, I can only create volumes and attach them to instances, so does anybody know if I can create new datastores with Cinder?
It will probably have to use a different driver than VMDK, but I just want to know if it's possible.
My datastores are currently allocated from a SAN VNX.

Regards,
Ohad
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20141020/5ef16fc5/attachment.html

[Openstack-operators] Neutron-2014.1.3-1 ML2+LB/L2 POP+VXLAN

All,

I'm trying to get Neutron working using the following:

  1. ML2 core plugin.
  2. LinuxBridge and L2 pop mech drivers
  3. VXLAN tenant networks.

I used this deployment as a reference:

http://squarey.me/2014/07

Here is my gist for additional details:

https://gist.github.com/danehans/261bb0bfa6fdf8950c67

I am able to ping between the qrouter/qdhcp interfaces. However, I do not see the traffic on the brq1df5d89e-ef bridge. I am unable to ping between the test instance and qrouter/qdhcp. In this case I see one-way ICMP, but ARP appears to work. Any troubleshooting suggestions?

Regards,
Daneyon Hansen
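
A few generic things worth checking with LinuxBridge + VXLAN + l2pop (a sketch; the bridge name is taken from the message above, while eth1 and the VNI are placeholders):

# is the VXLAN interface actually enslaved to the tenant bridge on the compute node?
brctl show brq1df5d89e-ef

# does l2population have forwarding entries pointing at the other VTEPs?
bridge fdb show | grep vxlan

# VXLAN device details (local VTEP IP, VNI, whether learning is on)
ip -d link show vxlan-57

# is encapsulated traffic leaving/arriving on the VTEP NIC?
# (the Linux kernel default VXLAN port is 8472; IANA's is 4789)
tcpdump -n -i eth1 udp port 8472 or udp port 4789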
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20141020/e3b9a149/attachment.html

[Openstack-operators] Hyper-V, Havana, Nova-Network HA

We have an Openstack Cluster running Havana in nova-network HA mode,
on Centos 6.5/KVM.

I'd like to test out Hyper-V hypervisor with this, but I haven't been
able to figure out if this is possible. Anyone running Hyper-V in this
configuration?

Or if not, Hyper-V with the latest juno HA networking features? (ie,
no central network node?)

thanks-

-Ben

[Openstack-operators] Issue with port limit

Hi Leandro,

You didn't mention which OpenStack version or networking (Nova or Neutron) you are using.

We had a similar issue with Havana (later upgraded to Icehouse) Neutron networking; the issue keeps coming up every now and then.
You might have phantom ports consuming your port quota, which could explain "Maximum Number Of Ports Exceeded".

Run these two commands and compare both output tables, you might find some interesting stuff.

neutron floatingip-list

neutron port-list

I'm not a networking expert, but we used these to find the phantom ports which caused the same issue.
Floating IPs should have ports set; if you see ones that don't have ports, or unexplained ports, try to delete them.
If you can't delete any of these ports, maybe try deleting the floating IP first and then the ports.
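
To make the comparison easier, something like this can help (a sketch; a DOWN or unbound port is not automatically bad, so check before deleting):

# floating IPs with the ports they are (or are not) associated with
neutron floatingip-list -c id -c floating_ip_address -c port_id

# all ports with their owner and status, to spot ones nothing claims
neutron port-list -c id -c device_owner -c device_id -c status

# once you are sure a port is orphaned
neutron port-delete <port-id>
# or release the floating IP first if the port won't go away
neutron floatingip-delete <floatingip-id>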

Hope it helps / sends you in the right direction.
Tzach

----- Original Message -----
From: "Leandro David Cacciagioni" <leandro.21.2008 at gmail.com>
To: openstack-operators at lists.openstack.org, openstack at lists.openstack.org
Sent: Monday, October 20, 2014 11:14:53 PM
Subject: [Openstack-operators] Issue with port limit

Hi guys! I have one issue: I'm getting "Maximum Number Of Ports
Exceeded" when trying to deploy some VMs. I have already updated the
port quota but the message is still there and makes my deployment fail.
Any ideas what I'm missing?

Thanks,
Leandro.-

--
Cacciagioni, Leandro David
leandro.21.2008 at gmail.com
System Administrator - Development Operations
lcacciagioni.github.io - about.me/cacciald
Cel: +549 341 3673294


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

[Openstack-operators] Three Node Deployment Model

Hi there,
I have an OpenStack deployment question. I have three physical
servers (configurations are listed below). What is the best
optimized, production-ready deployment model to achieve the following
goals (there will be no more than 10 users)?
- Ability to add more services (compute, controller, storage nodes)
later, without service interruption
- Ability to provide HA for all services after more nodes are added,
without service interruption
- No need to change the deployment architecture later when more
physical nodes and resources become available (my physical infrastructure
will be scaled later and is not limited to these three physical machines).
For example, I don't know whether it is better to have one neutron node,
or to run the neutron services on my two compute nodes.
I really appreciate any help.
Thanks in advance.


Server 1:
RAM: 128GB
HDD: 300GB

Server 2:
RAM: 128GB
HDD: 300GB

Server 3:
RAM: 256GB
HDD: 512GB

With a three-node deployment in mind (one controller and 2 compute
nodes), in order to achieve minimum HA with these three physical
nodes, is it possible to set up a second controller as a virtual machine
on one of my compute nodes?
Does this approach improve HA in my deployment (at least until my other
physical nodes become available)?
Thanks in advance.

On 10/25/14, Hossein Zabolzadeh wrote:
Thanks for your useful answers.
Very helpful resource and guides.

On 10/23/14, Adam Lawson wrote:

Your dilemma looks like this as I see it:

Highly-Available, Highly-Scalable, Limited Hardware. Pick any 2.

In an ideal world we'd all like the benefits of everything with limited
resources. True HA requires some meaningful separation (logical fencing
if
you will). You can't do that with only 3 nodes so you need to make some
concessions. Starting with motivation that begins your efforts within
Openstack Docs per Anne. ; )

Mahalo,
Adam

Adam Lawson

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072

On Thu, Oct 23, 2014 at 10:58 AM, Anne Gentle wrote:

On Tue, Oct 21, 2014 at 2:43 PM, Hossein Zabolzadeh

wrote:

Hi there,
I have an openstack deployment question. I have three physical
server(Configurations are listed in the following). What is the best
optimized production-ready deployment model to achieve the following
goals in mind(Users will not exceeds more than 10)?
- Ability to add more services(compute, controller, storage node)
later, without service interruption
- Ability to provide HA for all services, after more nodes added
without service interruption
- No need to change the deployment architecture later when more
physical nodes and resources available(My physical infrastructure will
be scaled later, and not limitted to these three physical machines).
For example I don't know is it better to have 1 neutron node, or have
two neutron service on my two compute nodes?

Hi - Have you read:
http://docs.openstack.org/openstack-ops/content/scaling.html

and

http://docs.openstack.org/openstack-ops/content/example_architecture.html#example_architecture-neutron

Horizontal scaling is covered in these as well as your neutron question.

Anne

A really appreciate any helps.
Thanks in advance.


Server 1:
RAM: 128GB
HDD: 300GB

Server 2:
RAM: 128GB
HDD: 300GB

Server 3:
RAM: 256GB
HDD: 512GB


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

[Openstack-operators] GRE tunnel timeout?

I'm hitting an interesting (more like frustrating) issue on Icehouse. I
have neutron set up to use GRE tunnels to allow network access for machine
instances (VMs).

When I spawn a VM:
- The VM gets an address just fine (10.20.0.59) and can ping the outside
world.
- I assign a floating IP
- The outside world can ping the VM's floating IP
- After a while (between 550-600 seconds, likely more towards 600), the
outside world can not ping the VM.
- If I sign into the VM VNC console, and ping the openstack router
(10.20.0.1 in this case), outside connectivity works again

For a while I assumed this was an ARP issue, until I saw the ARP record (ip
netns exec qrouter-... arp -an) disappear, and was still able to ping the
floating IP and get a response.

I started investigating the "ovs-ofctl dump-flows br-tun" output and
noticed that open vswitch would set up a flow for the target that had a
hard_timeout value of 300. So I waited for that to disappear and tried
pinging the floating ip. Yep, the flow came back, ping succeeded.

When it doesn't work, 'ip netns exec qrouter-... ping 10.20.0.59' doesn't work
either.

This VM is the only one scheduled on this compute node right now.

This feels like some sort of timeout that gets reset when the VM initiates
traffic, but I'm not sure.

I have tried to use the technique for listening to patch-tun by
instantiating snooper0 (as mentioned in
http://docs.openstack.org/openstack-ops/content/network_troubleshooting.html)
but I don't see any traffic going over that, ever. I know we're using GRE
tunnels, so I feel like there should be some data?

Any help would be greatly appreciated!
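
For anyone else chasing this, the expiry itself is easy to watch while reproducing (a sketch; the MAC, router ID and bridge names need substituting):

# watch the learned flow for the instance's MAC age out of br-tun (hard_timeout=300)
watch -n 5 "ovs-ofctl dump-flows br-tun | grep -i fa:16:3e:xx:xx:xx"

# compare with the router namespace's neighbour cache at the same time
ip netns exec qrouter-<router-id> ip neigh show

# confirm whether anything is reaching the tunnel bridge at all when pings fail
ovs-ofctl dump-ports br-tun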
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20141021/9695a6b3/attachment.html

[Openstack-operators] Detach interface "fails" in grizzly (bug 1326183?)

Hi guys, I've been attaching and detaching interfaces to an instance until
I got into a weird situation:

-I see the interface:

root at cocinero:~/tools# nova --os-username noc-admin --os-tenant-name noc
--os-password My_Pass --os-auth-url http://172.19.136.1:35357/v2.0
interface-list c7d4e004-47fc-42b4-9aec-83e196f9c202
+------------+--------------------------------------+--------------------------------------+--------------+-------------+
| Port State | Port ID | Net ID
| IP addresses | MAC Address |
+------------+--------------------------------------+--------------------------------------+--------------+-------------+
| ACTIVE | 572cb037-c0b7-490b-b451-594897817397 |
275c5c97-5a18-41ff-a46c-49d78507fb22 | 172.16.28.34 | |
+------------+--------------------------------------+--------------------------------------+--------------+-------------+
root at cocinero:~/tools#

-I want to delete that interface, so:

root at cocinero:~/tools# nova --os-username noc-admin --os-tenant-name noc
--os-password My_Pass --os-auth-url http://172.19.136.1:35357/v2.0
interface-detach c7d4e004-47fc-42b4-9aec-83e196f9c202
572cb037-c0b7-490b-b451-594897817397
root at cocinero:~/tools#

No error messages at all.

-But the interface stills there:

root at cocinero:~/tools# nova --os-username noc-admin --os-tenant-name noc
--os-password My_Pass --os-auth-url http://172.19.136.1:35357/v2.0
interface-list c7d4e004-47fc-42b4-9aec-83e196f9c202
+------------+--------------------------------------+--------------------------------------+--------------+-------------+
| Port State | Port ID | Net ID
| IP addresses | MAC Address |
+------------+--------------------------------------+--------------------------------------+--------------+-------------+
| ACTIVE | 572cb037-c0b7-490b-b451-594897817397 |
275c5c97-5a18-41ff-a46c-49d78507fb22 | 172.16.28.34 | |
+------------+--------------------------------------+--------------------------------------+--------------+-------------+
root at cocinero:~/tools#

Doing some research I've found this bug
https://bugs.launchpad.net/nova/+bug/1326183 , which is exactly the same
behaviour I see in my cloud. It seems to be related to a race condition
during the cache update.

Any ideas how to:

-Reset the conditions so I can detach the interface cleanly?
-Backport the proposed patch to Grizzly? (Upgrading to IceHouse is almost
impossible)

Thanks!

--
Pavlik Salles Juan José
Blog - http://viviendolared.blogspot.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20141021/780c8c8e/attachment.html

Whenever I try to delete that port that's what I get.

root at cocinero:~# nova interface-list 0babcc31-08e0-4d45-8db1-34dd101dbc93
+------------+--------------------------------------+--------------------------------------+--------------+-------------+
| Port State | Port ID | Net ID
| IP addresses | MAC Address |
+------------+--------------------------------------+--------------------------------------+--------------+-------------+
| ACTIVE | 2ec8ae0d-598a-4bd4-81d4-5d43d4014198 |
038482b5-caa9-4205-af93-ff04589f35a1 | 172.16.47.14 | |
| ACTIVE | be091982-6afe-4746-8508-33d2b5104d37 |
275c5c97-5a18-41ff-a46c-49d78507fb22 | 200.16.28.39 | |
+------------+--------------------------------------+--------------------------------------+--------------+-------------+
root at cocinero:~# nova interface-detach 0babcc31-08e0-4d45-8db1-34dd101dbc93
2ec8ae0d-598a-4bd4-81d4-5d43d4014198
root at cocinero:~# nova interface-list 0babcc31-08e0-4d45-8db1-34dd101dbc93
+------------+--------------------------------------+--------------------------------------+--------------+-------------+
| Port State | Port ID | Net ID
| IP addresses | MAC Address |
+------------+--------------------------------------+--------------------------------------+--------------+-------------+
| ACTIVE | 2ec8ae0d-598a-4bd4-81d4-5d43d4014198 |
038482b5-caa9-4205-af93-ff04589f35a1 | 172.16.47.14 | |
| ACTIVE | be091982-6afe-4746-8508-33d2b5104d37 |
275c5c97-5a18-41ff-a46c-49d78507fb22 | 200.16.28.39 | |
+------------+--------------------------------------+--------------------------------------+--------------+-------------+
root at cocinero:~#

this is what I see in my nova-compute.log file:

2014-10-23 17:27:57.575 8817 TRACE nova.openstack.common.rpc.amqp Traceback
(most recent call last):
2014-10-23 17:27:57.575 8817 TRACE nova.openstack.common.rpc.amqp File
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line
430, in _process_data
2014-10-23 17:27:57.575 8817 TRACE nova.openstack.common.rpc.amqp rval
= self.proxy.dispatch(ctxt, version, method, **args)
2014-10-23 17:27:57.575 8817 TRACE nova.openstack.common.rpc.amqp File
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py",
line 133, in dispatch
2014-10-23 17:27:57.575 8817 TRACE nova.openstack.common.rpc.amqp
return getattr(proxyobj, method)(ctxt, **kwargs)
2014-10-23 17:27:57.575 8817 TRACE nova.openstack.common.rpc.amqp File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3069, in
detach_interface
2014-10-23 17:27:57.575 8817 TRACE nova.openstack.common.rpc.amqp
"attached") % locals())
2014-10-23 17:27:57.575 8817 TRACE nova.openstack.common.rpc.amqp
PortNotFound: Port 2ec8ae0d-598a-4bd4-81d4-5d43d4014198 is not attached
2014-10-23 17:27:57.575 8817 TRACE nova.openstack.common.rpc.amqp

It says the port isn't attached, when it is and it answers ping and
everything.
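
For debugging, the cached view that detach_interface consults lives in nova's instance_info_caches table; a read-only peek (a sketch, adjust DB credentials as needed) shows how nova's cached view compares with what Neutron reports:

# read-only look at nova's network info cache for the instance
mysql nova -e "SELECT network_info FROM instance_info_caches WHERE instance_uuid='0babcc31-08e0-4d45-8db1-34dd101dbc93'\G"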

2014-10-22 11:23 GMT-03:00 Juan José Pavlik Salles :

I achieved a "clean" deletion of the interface deleting the hole instance,
but is not a reasonable solution. Maybe some db updates could do a better
job?

2014-10-21 17:15 GMT-03:00 Juan José Pavlik Salles :

Hi guys, I've been attaching and detaching interfaces to an instance until

I got into a weird situation:

-I see the interface:

root at cocinero:~/tools# nova --os-username noc-admin --os-tenant-name noc
--os-password My_Pass --os-auth-url http://172.19.136.1:35357/v2.0
interface-list c7d4e004-47fc-42b4-9aec-83e196f9c202

+------------+--------------------------------------+--------------------------------------+--------------+-------------+
| Port State | Port ID | Net ID
| IP addresses | MAC Address |

+------------+--------------------------------------+--------------------------------------+--------------+-------------+
| ACTIVE | 572cb037-c0b7-490b-b451-594897817397 |
275c5c97-5a18-41ff-a46c-49d78507fb22 | 172.16.28.34 | |

+------------+--------------------------------------+--------------------------------------+--------------+-------------+
root at cocinero:~/tools#

-I want to delete that interface, so:

root at cocinero:~/tools# nova --os-username noc-admin --os-tenant-name noc
--os-password My_Pass --os-auth-url http://172.19.136.1:35357/v2.0
interface-detach c7d4e004-47fc-42b4-9aec-83e196f9c202
572cb037-c0b7-490b-b451-594897817397
root at cocinero:~/tools#

No error messages at all.

-But the interface stills there:

root at cocinero:~/tools# nova --os-username noc-admin --os-tenant-name noc
--os-password My_Pass --os-auth-url http://172.19.136.1:35357/v2.0
interface-list c7d4e004-47fc-42b4-9aec-83e196f9c202

+------------+--------------------------------------+--------------------------------------+--------------+-------------+
| Port State | Port ID | Net ID
| IP addresses | MAC Address |

+------------+--------------------------------------+--------------------------------------+--------------+-------------+
| ACTIVE | 572cb037-c0b7-490b-b451-594897817397 |
275c5c97-5a18-41ff-a46c-49d78507fb22 | 172.16.28.34 | |

+------------+--------------------------------------+--------------------------------------+--------------+-------------+
root at cocinero:~/tools#

Doing some research I've found this bug
https://bugs.launchpad.net/nova/+bug/1326183 , which is exactly the same
behaviour I see in my cloud. It seems to be related to a race condition
during the cache update.

Any ideas how to:

-Reset the conditions so I can detach the interface cleanly?
-Backport the proposed patch to Grizzly? (Upgrading to IceHouse is almost
impossible)

Thanks!

--
Pavlik Salles Juan José
Blog - http://viviendolared.blogspot.com

--
Pavlik Salles Juan José
Blog - http://viviendolared.blogspot.com

--
Pavlik Salles Juan José
Blog - http://viviendolared.blogspot.com

[Openstack-operators] neutron metadata service crumples under load

running Icehouse + Neutron ML2/OVS and network names spaces.

It was running well until recently; the most recent change was switching to
Ceph RBD for ephemeral storage on the hypervisors (and glance). I
suspect this is relevant because it makes the instances launch
much more quickly.

I haven't classified the breaking point but launching 64 instances
deterministically breaks the metadata agent.

The service seems to be running on the controller, but is not
listening in the network namespace. It seems to require restarting
both the dhcp-agent and the metadata agent to get it to go again.

Even in debug mode I get no errors in the logs.

Anyone seen this?

-Jon

Ah, there's the log: many instances of:

2014-10-21 19:50:15.527 12931 INFO neutron.wsgi [-] 10.10.167.98 - -
[21/Oct/2014 19:50:15] "GET /openstack/2012-08-10 HTTP/1.1" 500 343
120.411705

2014-10-21 19:50:15.528 12931 ERROR neutron.agent.metadata.namespace_proxy [-] Unexpected error.
2014-10-21 19:50:15.528 12931 TRACE neutron.agent.metadata.namespace_proxy Traceback (most recent call last):
2014-10-21 19:50:15.528 12931 TRACE neutron.agent.metadata.namespace_proxy   File "/usr/lib/python2.7/dist-packages/neutron/agent/metadata/namespace_proxy.py", line 74, in __call__
2014-10-21 19:50:15.528 12931 TRACE neutron.agent.metadata.namespace_proxy     req.body)
2014-10-21 19:50:15.528 12931 TRACE neutron.agent.metadata.namespace_proxy   File "/usr/lib/python2.7/dist-packages/neutron/agent/metadata/namespace_proxy.py", line 105, in _proxy_request
2014-10-21 19:50:15.528 12931 TRACE neutron.agent.metadata.namespace_proxy     connection_type=UnixDomainHTTPConnection)
2014-10-21 19:50:15.528 12931 TRACE neutron.agent.metadata.namespace_proxy   File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1569, in request
2014-10-21 19:50:15.528 12931 TRACE neutron.agent.metadata.namespace_proxy     (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
2014-10-21 19:50:15.528 12931 TRACE neutron.agent.metadata.namespace_proxy   File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1316, in _request
2014-10-21 19:50:15.528 12931 TRACE neutron.agent.metadata.namespace_proxy     (response, content) = self._conn_request(conn, request_uri, method, body, headers)
2014-10-21 19:50:15.528 12931 TRACE neutron.agent.metadata.namespace_proxy   File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1285, in _conn_request
2014-10-21 19:50:15.528 12931 TRACE neutron.agent.metadata.namespace_proxy     response = conn.getresponse()
2014-10-21 19:50:15.528 12931 TRACE neutron.agent.metadata.namespace_proxy   File "/usr/lib/python2.7/httplib.py", line 1045, in getresponse
2014-10-21 19:50:15.528 12931 TRACE neutron.agent.metadata.namespace_proxy     response.begin()
2014-10-21 19:50:15.528 12931 TRACE neutron.agent.metadata.namespace_proxy   File "/usr/lib/python2.7/httplib.py", line 409, in begin
2014-10-21 19:50:15.528 12931 TRACE neutron.agent.metadata.namespace_proxy     version, status, reason = self._read_status()
2014-10-21 19:50:15.528 12931 TRACE neutron.agent.metadata.namespace_proxy   File "/usr/lib/python2.7/httplib.py", line 373, in _read_status
2014-10-21 19:50:15.528 12931 TRACE neutron.agent.metadata.namespace_proxy     raise BadStatusLine(line)
2014-10-21 19:50:15.528 12931 TRACE neutron.agent.metadata.namespace_proxy BadStatusLine: ''
2014-10-21 19:50:15.528 12931 TRACE neutron.agent.metadata.namespace_proxy

On Tue, Oct 21, 2014 at 8:17 PM, Jonathan Proulx wrote:
running Icehouse + Neutron ML2/OVS and network names spaces.

Was running well unitl recently, most recent change was switching to
Ceph RBD for ephemeral storage on the hypervisors (and glance). I
suspect this of being relevant because it makes the instances launch
much more quickly.

I haven't classified the breaking point but launching 64 instances
deterministically breaks the metadata agent.

The service seems to be running on the controller, but is not
listening in the network namespace. It seems to require restarting
both the dhcp-agent and the metadata agent to get it to go again.

Even in debug mode I get no errors in the logs.

Anyone seen this?

-Jon
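
One knob that is often relevant to this symptom is the metadata agent's concurrency. A sketch of /etc/neutron/metadata_agent.ini, assuming your packaged Icehouse neutron already carries these options (they appeared around the Icehouse/Juno timeframe):

# /etc/neutron/metadata_agent.ini (sketch)
[DEFAULT]
# run several proxy workers instead of a single process
metadata_workers = 4
# deeper listen backlog for bursts of simultaneous boots
metadata_backlog = 4096

followed by a restart of neutron-metadata-agent (and of the dhcp/l3 agents, since they spawn the per-namespace proxies).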

[Openstack-operators] Console security question when using nova-novncproxy to access console

Hi all,

I have a question about a security consideration on a compute node when
using nova-novncproxy for console access.

Is there any existing mechanism within Nova to automatically
authenticate against the VNC console of an instance
(I'm talking about plain old VNC authentication) or to generally prevent
unauthorized local user accounts on the compute-node from accessing the
VNC console of an instance?

I understand that nova-novnc proxy and websockify bridge between the
public network and the private internal/infrastructure network of the
compute-node using wss:// to secure and encrypt the connection over the
public network. I also understand that VNC authentication is
comparatively very weak....

This is perhaps only an issue when the compute-node is also permitting
traditional Unix type user logins.
Let's say we have an instance running on the compute-node and the
hypervisor or container manager serves out the console over VNC on a
known port and the tenant has authenticated and logged in on the console
using Horizon, perhaps as the administrator. A local user on the compute
node, if they specified the correct port, could in theory then access
the console and the administrative account of that instance without
needing to authenticate.

VNC authentication using password (and optionally username) would seem
like the traditional way to prevent such unauthorized access. I can't
find anything within the Nova code base that seems to cater for password
authentication with the VNC server. For example the vmware nova driver
returns the following dictionary
of parameters for an instance console in vmops.py:get_vnc_console():
{'host': CONF.vmware.host_ip,
 'port': self._get_vnc_port(vm_ref),
 'internal_access_path': None}

No suggestion of a password to authenticate with the VNC server. Is this
intentionally not supported, lacking, or is there perhaps simply a
better way to address this problem?

Thanks in advance!
Niall Power

Hi Niall,

It looks like vnc password support was removed from the vmware driver last
October:

https://github.com/openstack/nova/commit/058ea40e7b7fb2181a2058e6118dce3f051e1ff3

For libvirt, there is an option in qemu.conf for "vnc_password", but I'm
not sure how it would work with OpenStack.

Thanks,
Joe

On Tue, Oct 21, 2014 at 9:30 PM, Niall Power <niall.power at oracle.com> wrote:

Hi all,

I have a question about a security consideration on a compute node when
using nova-novncproxy for console access.

Is there any existing mechanism within Nova to automatically authenticate
against the VNC console an instance
(I'm talking about plain old VNC authentication) or to generally prevent
unauthorized local user accounts on the compute-node from accessing the VNC
console of an instance?

I understand that nova-novnc proxy and websockify bridge between the
public network and the private internal/infrastructure network of the
compute-node using wss:// to secure and encrypt the connection over the
public network. I also understand that VNC authentication is comparatively
very weak....

This is perhaps only an issue when the compute-node is also permitting
traditional Unix type user logins.
Let's say we have an instance running on the compute-node and the
hypervisor or container manager serves out the console over VNC on a known
port and the tenant has authenticated and logged in on the console using
Horizon, perhaps as the administrator. A local user on the compute node, if
they specified the correct port, could in theory then access the console
and the administrative account of that instance without needing to
authenticate.

VNC authentication using password (and optionally username) would seem
like the traditional way to prevent such unauthorized access. I can't find
anything within the Nova code base that seems to cater for password
authentication with the VNC server. For example the vmware nova driver
returns the following dictionary
of parameters for an instance console in vmops.py:get_vnc_console():
{'host': CONF.vmware.host_ip,
 'port': self._get_vnc_port(vm_ref),
 'internal_access_path': None}

No suggestion of a password to authenticate with the VNC server. Is this
intentionally not supported, lacking, or is there perhaps simply a better
way to address this problem?

Thanks in advance!
Niall Power


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Importing volumes between openstack clusters

I currently have a 16 node Icehouse cluster and would like to start playing
around with Juno. I am not yet ready to upgrade my Icehouse release, so I
am looking for a way to import a dozen or so volumes into the Juno cinder.

The volumes are currently in ceph and in Icehouse as bootable, ideally I
would like to just build a record in Juno without having to import from
cinder to glance, and then glance back to cinder for Juno. If I was more of
a DB guys I guess I could just manually insert a record into Juno, but
looking to see if there is another way before I try that option.

P.S. Yes I know that it would be VERY bad to try to have two instances one
in Icehouse and one in Juno trying to use the same volume, I will make sure
I don't do that. :)

nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com
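
Before resorting to hand-written DB inserts, it may be worth checking the volume manage/unmanage call that was added in Juno, which adopts an existing backend volume into Cinder. Whether the RBD driver implements manage_existing on your Juno build is something to verify first, so treat this purely as a sketch:

# adopt an existing backend volume into the Juno cinder
# (host@backend and the image name below are placeholders)
cinder manage --name imported-vol-01 --volume-type <rbd-type> \
    juno-cinder-host@rbd existing-rbd-image-name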
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20141022/bad806ef/attachment.html

[Openstack-operators] Guaranteed Resources

Hello,

I'm sure some of you have run into this situation before and I'm wondering
how you've dealt with it:

A user requests that they must have access to a certain amount of resources
at all times. This is to prevent them from being unable to launch instances
in the cloud when the cloud is at full capacity.

I've always seen the nova.reservations table, so I thought there was some
simple reservation system in OpenStack but never got around to looking into
it. I think I was totally wrong about what that table does -- it looks like
it's just used to assist in deducting resources from a user's quota when
they launch an instance.

There are also projects like Climate/Blazar, but a cursory look says it
requires Keystone v3, which we're not using right now.

Curiously, the quotas table has a column called "hard_limit" which would
make one think that there was such a thing as a "soft limit", but that's
not the case, either. I see a few blueprints about adding soft limits, but
nothing in place.

Has anyone cooked up their own solution for this?

Thanks,
Joe
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20141023/cfaf34cb/attachment.html

Thanks, Simon. That's one idea that we were thinking of -- sort of a DIY
reservation system that the users can handle on their own.

On Fri, Oct 24, 2014 at 1:38 AM, Simon Pasquier
wrote:

Hello Joe,
I would have recommended to have a look at Blazar but since you already
did... Maybe your users could mimic how Blazar accomplishes resource
reservation? IIUC Blazar will spawn the reserved instances but in shelved
mode [1] so they won't consume any cloud resources but they will still be
accounted by the resource tracker. When the lease starts, Blazar will
unshelve the instances.
HTH
Simon
[1] https://wiki.openstack.org/wiki/Blazar#Virtual_instance_reservation
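
If you end up mimicking that by hand, the mechanics are just shelve/unshelve (a sketch; note that with the default shelved_offload_time the instance is offloaded from its hypervisor, so this holds quota rather than guaranteeing capacity on a specific host):

# park a "reservation" instance while capacity is available
nova boot --flavor m1.xlarge --image <image-id> reserved-slot-01
nova shelve reserved-slot-01

# later, when the reserved capacity is actually needed
nova unshelve reserved-slot-01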

On Thu, Oct 23, 2014 at 5:20 PM, Joe Topjian wrote:

Hello,

I'm sure some of you have run into this situation before and I'm
wondering how you've dealt with it:

A user requests that they must have access to a certain amount of
resources at all times. This is to prevent them from being unable to launch
instances in the cloud when the cloud is at full capacity.

I've always seen the nova.reservations table, so I thought there was some
simple reservation system in OpenStack but never got around to looking into
it. I think I was totally wrong about what that table does -- it looks like
it's just used to assist in deducting resources from a user's quota when
they launch an instance.

There are also projects like Climate/Blazar, but a cursory look says it
requires Keystone v3, which we're not using right now.

Curiously, the quotas table has a column called "hardlimit" which would
make one think that there was such a thing as a "soft
limit", but that's
not the case, either. I see a few blueprints about adding soft limits, but
nothing in place.

Has anyone cooked up their own solution for this?

Thanks,
Joe


OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Ephemeral instances in RBD issue

Hey all,
Just ran into a strange problem, curious if anyone has seen this.
We have some compute nodes with very little local disk space, so we have 'libvirt_images_type=rbd' in nova.conf.
This works great, except that snapshots do a full hairpin and take about an hour to show up.

I saw this last commit to jdurgin's nova fork which solves the issue ( https://github.com/jdurgin/nova/commit/ea4b5369e4bec4dd7a0ce9f68769600329cda6c6 )
now a snapshot happens in seconds.

The problem that we've introduced, however, is that about 15-20 minutes after we do a snapshot, the VM is powered off.
Every time.
I can start the instance back up with nova start, but I am leery of pushing this out to prod and having to tell users to expect a shutdown after a snapshot.

Anyone else using this in Havana?
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 496 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20141023/4a485a12/attachment.pgp

So, I figured out why this was happening.
The gist of it is, the direct_snapshot doesn't start the domain back up. I contacted inktank and they're adding what I found to their fork.

The long story:
In nova/virt/libvirt/driver.py, the snapshot method does everything. The code doesn't support live snapshot for lvm or rbd, so we have to do a cold snapshot.
We go into 'managedSave' (the instance is suspended to quiesce I/O), the snapshot is taken, then much later, we start a new domain based on the save.
The last commit to jdurgin's fork splits snapshot right after the 'try direct snapshot' and adds a 'genericsnapshot' method. The code for starting up the domain again is in 'genericsnapshot', which only gets called if the "ImageUnacceptable" exception gets raised.
As a quick hack, I just copied the 'new_dom' lines into the 'snapshot' method and HUZZAH, it works as expected.

Hopefully we'll see a new commit to jdurgin's fork for icehouse and havana soon.

On Oct 26, 2014, at 6:08 AM, Simon Leinen <simon.leinen at switch.ch> wrote:

Abel Lopez writes:

I saw this last commit to jdurgin's nova fork which solves the issue (
https://github.com/jdurgin/nova/commit/ea4b5369e4bec4dd7a0ce9f68769600329cda6c6
)
now a snapshot happens in seconds.

The problem that we've introduced however, is that about 15-20m after
we do a snapshot, the VM is powered off.
Every time.

Ouch! Have you checked the logs (nova-compute and maybe libvirtd's)?

I can start the instance back up with nova start, but I am leery of
pushing this out to prod and having to tell users to expect a shutdown
after a snapshot.

Understood.

Anyone else using this in Havana?

Not me, but I'm sympathetic with your worries, and want this resolved as
well. We're using Icehouse with RBD, currently without the "ephemeral"
patches, but we would really like to (re-) activate that part of the
integration soon.

It's maybe worth asking on #ceph or posting to one of the CEPH mailing
lists, too.

Good luck,
--
Simon.


[Openstack-operators] [nova] instance resource quota quesetions

We're using the cinder IOPS throttling which seems to work well. The default volume type is rate limited so that the standard requests don't overload the ceph pool and the high IOPS is reserved for those apps that need it.

We've not used glance throttling (but the load is much less in our environment)

Tim

From: Joe Topjian [mailto:joe at topjian.net]
Sent: 23 October 2014 19:42
To: Craig Jellick
Cc: openstack-operators at lists.openstack.org; openstack at lists.openstack.org
Subject: Re: [Openstack-operators] [nova] instance resource quota quesetions

I can confidently say that throttling will work with KVM. I think both virt_types will work since libvirt is controlling everything in the end.

One caveat about IO throttling to keep in mind is that the Nova settings are not applied to volumes -- just the root and ephemeral disk. We were unable to verify if IO throttling through Cinder worked due to this bug:

https://bugs.launchpad.net/nova/+bug/1362129

For bandwidth, I want to say that it is agnostic to nova-network and Neutron since it happens at the libvirt layer, but I am not 100% sure as I've never tested the settings on both.

It's safe to test these settings live (IMO). If you figure out the correct "virsh" commands to use to apply the settings, you can run them directly on the compute node against a test instance and no other instances will be affected.

Hope that helps,
Joe
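
(For reference, the disk and network limits under discussion are flavor extra specs; a quick sketch with made-up values, plus the virsh commands that show what libvirt actually applied to a running test instance:

nova flavor-key m1.small set quota:disk_read_bytes_sec=10485760 quota:disk_write_bytes_sec=10485760
nova flavor-key m1.small set quota:vif_inbound_average=10240 quota:vif_outbound_average=10240

# on the compute node, against a test instance
virsh blkdeviotune <instance-domain> vda
virsh domiftune <instance-domain> <tap-device>

The extra specs only take effect for instances launched, or resized, after the flavor is updated.)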

On Thu, Oct 23, 2014 at 11:20 AM, Craig Jellick wrote:
Hello,

I have a few questions regarding the instance resource quota feature in nova which is documented here: https://wiki.openstack.org/wiki/InstanceResourceQuota

First, the section on disk IO states "IO throttling are handled by QEMU." Does this mean that this feature only works when the hypervisor is QEMU
(virt_type = QEMU in nova.conf) or will this feature work with KVM?
Second, this feature also allows control over network bandwidth. Will that work if you are using neutron or does it only work if you're using nova-network? Our setup is neutron w/ ml2+ovs with the ovs agent living on each compute node.

/Craig J



-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20141024/8d71a8b5/attachment.html

[Openstack-operators] floating ip issue

Hello,

I assigned a floating IP to an instance, but I can't ping it. The
instance can reach the internet with no problem, but I can't ssh or icmp to
this instance. It's not a security group issue.

On my network node that runs the l3 agent, I can see the qrouter. The external subnet
looks like this:

allocation-pool start=192.168.122.193,end=192.168.122.222 --disable-dhcp
--gateway 192.168.122.1 192.168.122.0/24

I can ping 192.168.122.193 using: ip netns exec
qrouter-34f3b828-b7b8-4f44-b430-14d9c5bd0d0c ping 192.168.122.193

but not 192.168.122.194 (which is the floating ip)

Doing a tcpdump on the interface that connects to the external world, I can
see the ICMP request but no reply from the interface:

11:36:40.360255 IP 192.168.122.1 > 192.168.122.194: ICMP echo request, id
2589, seq 312, length 64

11:36:41.360222 IP 192.168.122.1 > 192.168.122.194: ICMP echo request, id
2589, seq 313, length 64

Ideas?

Thanks

Paras.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20141024/f9f26784/attachment.html

Hi George,

You mean .193 and .194 should be in different subnets?
192.168.122.193/24 is reserved from the allocation pool and
192.168.122.194/32 is the floating IP.

Here are the outputs for the commands

neutron port-list --device-id=8725dd16-8831-4a09-ae98-6c5342ea501f

+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                                |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------------+
| 6f835de4-c15b-44b8-9002-160ff4870643 |      | fa:16:3e:85:dc:ee | {"subnet_id": "0189699c-8ffc-44cb-aebc-054c8d6001ee", "ip_address": "192.168.122.193"}   |
| be3c4294-5f16-45b6-8c21-44b35247d102 |      | fa:16:3e:72:ae:da | {"subnet_id": "d01a6522-063d-40ba-b4dc-5843177aab51", "ip_address": "10.10.0.1"}         |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------------+

neutron floatingip-list

+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 55b00e9c-5b79-4553-956b-e342ae0a430a | 10.10.0.9        | 192.168.122.194     | 82bcbb91-827a-41aa-9dd9-cb7a4f8e7166 |
+--------------------------------------+------------------+---------------------+--------------------------------------+

neutron net-list

+--------------------------------------+----------+-------------------------------------------------------+
| id                                   | name     | subnets                                               |
+--------------------------------------+----------+-------------------------------------------------------+
| dabc2c18-da64-467b-a2ba-373e460444a7 | demo-net | d01a6522-063d-40ba-b4dc-5843177aab51 10.10.0.0/24     |
| ceaaf189-5b6f-4215-8686-fbdeae87c12d | ext-net  | 0189699c-8ffc-44cb-aebc-054c8d6001ee 192.168.122.0/24 |
+--------------------------------------+----------+-------------------------------------------------------+

neutron subnet-list

+--------------------------------------+-------------+------------------+---------------------------------------------------------+
| id                                   | name        | cidr             | allocation_pools                                        |
+--------------------------------------+-------------+------------------+---------------------------------------------------------+
| d01a6522-063d-40ba-b4dc-5843177aab51 | demo-subnet | 10.10.0.0/24     | {"start": "10.10.0.2", "end": "10.10.0.254"}            |
| 0189699c-8ffc-44cb-aebc-054c8d6001ee | ext-subnet  | 192.168.122.0/24 | {"start": "192.168.122.193", "end": "192.168.122.222"}  |
+--------------------------------------+-------------+------------------+---------------------------------------------------------+

P.S: External subnet is 192.168.122.0/24 and internal vm instance's subnet
is 10.10.0.0/24

Thanks

Paras.

On Mon, Oct 27, 2014 at 5:51 PM, George Shuklin <george.shuklin at gmail.com>
wrote:

I don't like this:

15: qg-d351f21a-08: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
UNKNOWN group default
    inet 192.168.122.193/24 brd 192.168.122.255 scope global qg-d351f21a-08
       valid_lft forever preferred_lft forever
    inet 192.168.122.194/32 brd 192.168.122.194 scope global qg-d351f21a-08
       valid_lft forever preferred_lft forever

Why have you got two IPs on the same interface with different netmasks?

I just rechecked it on our installations - it should not happen.

Next: either this is a bug, or this is an uncleaned network node (lesser bug), or
someone is messing with neutron.

Start from neutron:

Show the ports for the router:

neutron port-list --device-id=router-uuid-here
neutron floatingip-list
neutron net-list
neutron subnet-list
(trim to related only)

(And please mark again which are the 'internet' and which are the 'internal' IPs; I'm
kind of lost in all the '192.168.*' addresses.)
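
(For anyone following along, a quick way to see whether the l3 agent actually installed NAT for the floating IP is to dump the NAT table inside the router namespace; a sketch using the router UUID from earlier in this thread:

ip netns exec qrouter-34f3b828-b7b8-4f44-b430-14d9c5bd0d0c iptables -t nat -S | grep 192.168.122.194
ip netns exec qrouter-34f3b828-b7b8-4f44-b430-14d9c5bd0d0c ip route

A healthy association shows a DNAT rule translating 192.168.122.194 to the instance's fixed IP, 10.10.0.9 here, plus a matching SNAT rule. If those rules are present, the next place to look is whether anything upstream answers ARP for the /32 on qg-d351f21a-08.)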

On 10/27/2014 04:47 PM, Paras pradhan wrote:

Yes, it got its IP, which is 192.168.122.194 in the paste below.

--

root at juno2:~# ip netns exec qrouter-34f3b828-b7b8-4f44-b430-14d9c5bd0d0c
ip -4 a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
default

inet 127.0.0.1/8 scope host lo

   valid_lft forever preferred_lft forever

14: qr-ac50d700-29: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
UNKNOWN group default

inet 50.50.50.1/24 brd 50.50.50.255 scope global qr-ac50d700-29

   valid_lft forever preferred_lft forever

15: qg-d351f21a-08: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
UNKNOWN group default

inet 192.168.122.193/24 brd 192.168.122.255 scope global

qg-d351f21a-08

   valid_lft forever preferred_lft forever

inet 192.168.122.194/32 brd 192.168.122.194 scope global

qg-d351f21a-08

   valid_lft forever preferred_lft forever

stdbuf -e0 -o0 ip net exec qrouter... /bin/bash gives me the following:

--

root at juno2:~# ifconfig

lo Link encap:Local Loopback

      inet addr:127.0.0.1  Mask:255.0.0.0

      inet6 addr: ::1/128 Scope:Host

      UP LOOPBACK RUNNING  MTU:65536  Metric:1

      RX packets:2 errors:0 dropped:0 overruns:0 frame:0

      TX packets:2 errors:0 dropped:0 overruns:0 carrier:0

      collisions:0 txqueuelen:0

      RX bytes:168 (168.0 B)  TX bytes:168 (168.0 B)

qg-d351f21a-08 Link encap:Ethernet HWaddr fa:16:3e:79:0f:a2

      inet addr:192.168.122.193  Bcast:192.168.122.255

Mask:255.255.255.0

      inet6 addr: fe80::f816:3eff:fe79:fa2/64 Scope:Link

      UP BROADCAST RUNNING  MTU:1500  Metric:1

      RX packets:2673 errors:0 dropped:0 overruns:0 frame:0

      TX packets:112 errors:0 dropped:0 overruns:0 carrier:0

      collisions:0 txqueuelen:0

      RX bytes:205377 (205.3 KB)  TX bytes:6537 (6.5 KB)

qr-ac50d700-29 Link encap:Ethernet HWaddr fa:16:3e:7e:6d:f3

      inet addr:50.50.50.1  Bcast:50.50.50.255  Mask:255.255.255.0

      inet6 addr: fe80::f816:3eff:fe7e:6df3/64 Scope:Link

      UP BROADCAST RUNNING  MTU:1500  Metric:1

      RX packets:345 errors:0 dropped:0 overruns:0 frame:0

      TX packets:1719 errors:0 dropped:0 overruns:0 carrier:0

      collisions:0 txqueuelen:0

       RX bytes:27377 (27.3 KB)  TX bytes:164541 (164.5 KB)

--

Thanks

Paras.

On Sat, Oct 25, 2014 at 3:18 AM, George Shuklin <george.shuklin at gmail.com>
wrote:

Check whether the qrouter got the floating IP inside its network namespace (ip net exec
qrouter... ip -4 a), or just bash into it (stdbuf -e0 -o0 ip net exec
qrouter... /bin/bash) and play with it like a normal server.

On 10/24/2014 07:38 PM, Paras pradhan wrote:

Hello,

I assigned a floating IP to an instance, but I can't ping it.
This instance can reach the internet with no problem, but I can't ssh or icmp
to this instance. It's not a security group issue.

On my network node that runs the l3 agent, I can see the qrouter. The external subnet
looks like this:

allocation-pool start=192.168.122.193,end=192.168.122.222
--disable-dhcp --gateway 192.168.122.1 192.168.122.0/24

I can ping 192.168.122.193 using: ip netns exec
qrouter-34f3b828-b7b8-4f44-b430-14d9c5bd0d0c ping 192.168.122.193

but not 192.168.122.194 (which is the floating ip)

Doing a tcpdump on the interface that connects to the external world, I can
see the ICMP request but no reply from the interface:

11:36:40.360255 IP 192.168.122.1 > 192.168.122.194: ICMP echo request,
id 2589, seq 312, length 64

11:36:41.360222 IP 192.168.122.1 > 192.168.122.194: ICMP echo request,
id 2589, seq 313, length 64

Ideas?

Thanks

Paras.






[Openstack-operators] OpenStack Community Weekly Newsletter (Oct 17 – 24)

  OpenStack Startup/Venture Capital Ecosystem - it's real and coming
  to Paris!
  <http://www.openstack.org/blog/2014/10/openstack-startupventure-capital-ecosystem-its-real-and-coming-to-paris/>

Recently OpenStack has been generating financial headlines with the
acquisitions of OpenStack ecosystem startups eNovance, Metacloud,
Cloudscaling and OpenStack veteran Mirantis raising $100M in venture
capital this week. At the OpenStack Summit in Paris next week, we are
launching a new track called "CloudFunding"
(https://openstacksummitnovember2014paris.sched.org/overview/type/cloudfunding#.VEldLYvF8is),
where we will hear from startups that have been successful in attracting
essential capital and venture capitalists who are actively investing in
OpenStack startups.

  OpenStack Foundation Staffing News!
  <http://www.openstack.org/blog/2014/10/openstack-foundation-staffing-news/>

The Board of Directors approved the promotion of Lauren Sell
https://twitter.com/laurensell to Vice President of Marketing and
Community Services. Lauren has been instrumental in the growth of
OpenStack from the beginning. Thierry Carrez
https://twitter.com/tcarrez, who has managed the OpenStack
releases from the beginning, has taken on the role of Director of
Engineering and is building out a team of technical leaders. Be sure to
check out our open positions
http://www.openstack.org/community/jobs/?foundation=1 if you'd like to
join our team!

  Peer Reviews for Neutron Core Reviewers
  <http://www.siliconloons.com/peer-reviews-for-neutron-core-reviewers/>

Food for thought from members of the Neutron community: they have
started an exploration to improve the process by which we understand a
core's responsibilities, and also a process under which we can judge how
cores are performing up to that standard. Join the conversation and give
comments on Neutron PTL Kyle Mestery's blog post at
http://www.siliconloons.com/.

  Numerical Dosimetry in the cloud
  <http://blog.zhaw.ch/icclab/numerical-dosimetry-in-the-cloud/>

What's the connection between a dentist's chair and OpenStack?
Fascinating post by Patrik Eschle
http://blog.zhaw.ch/icclab/numerical-dosimetry-in-the-cloud/ about the
practical uses of the clouds we're building.

The Road To Paris 2014 - Deadlines and Resources

Full access sold out! Only a few spots left for Keynotes and Expo Hall
passes.

Ask OpenStack https://ask.openstack.org/ is the go-to destination for
OpenStack users. Interesting questions waiting for answers:

Dmitry Nikishov
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person7595cc8d-4f47-46dc-a8c1-cade322b3b34
Peng Xiao
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persona1bdb897-9001-46d0-8aa0-ac9818242313

Weidong Shao
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person7f0e295c-5f1e-4d16-8baa-d38ee954b4ec
Jiri Suchomel
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person3b7c1912-78d1-48f2-8174-17dec33bb904

Roman Dashevsky
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persondcdb3dad-4e22-4422-8035-d0716fd894f1
Amaury Medeiros
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person1fa3b984-0b53-4f1b-990f-f5c99cdd1a24

Peng Xiao
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persona1bdb897-9001-46d0-8aa0-ac9818242313
Chris Grivas
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persondf048d7b-6fbf-4b29-9954-2b31daee713f

M. David Bennett
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person3c3e0507-03a1-45ab-a310-81af284ebf85
Sridhar Ramaswamy
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,persone94a6f84-9042-4201-b2c1-aaca07a9ac8c

Edmond Kotowski
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person48941d96-0283-4688-b6c7-abbba4cd04df
Jun Hong Li
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personb0369b14-6d94-4e97-8afd-4fe88fb9313a

Amandeep
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person408ededf-e9c2-4bda-bd7e-bf0b5fa541ef
Jorge Niedbalski
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person27582e87-4899-4600-9798-bacd3e4595ee

Wayne Warren
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personfc358ecd-8c80-4387-96ab-6c73890374a9
Alan Erwin
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person51113e45-dd33-4fda-bc16-b6a781aba40f

Amaury Medeiros
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person1fa3b984-0b53-4f1b-990f-f5c99cdd1a24
Y L Sun
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,person6a4fe818-33c6-4b5d-88ff-7e801c2b9f06

Vijayaguru Guruchave
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personc94b5f03-e22e-41fe-aec3-eb3ce91b53b5

Sagar Damani
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personad328547-7355-4048-a835-4995e6072a62

Daniel Wakefield
http://activity.openstack.org/data/plugins/zfacts/view.action?viewproperties=true&instance=Person,personea4201fa-f91c-4436-916f-bc04b681251a

The weekly newsletter is a way for the community to learn about all the
various activities occurring on a weekly basis. If you would like to add
content to a weekly update or have an idea about this newsletter, please
leave a comment.

[Openstack-operators] Juno nova-network no longer working with v4-fixed ip?

In Icehouse the following worked:

nova boot --flavor 4 --boot-volume 13cf15c8-e5fa-484f-b8b5-54e1498dfb48
spacewalk --nic
net-id=a0e8f4f0-c1c4-483d-9524-300fcede7a69,v4-fixed-ip=10.71.0.206

However in Juno the only way to get it to build the instance is to leave
off the ",v4-fixed-ip=10.71.0.206". With it I get:

2014-10-24 20:33:10.721 2899 DEBUG keystoneclient.session [-] REQ: curl -i
-X GET http://127.0.0.1:35357/v2.0/tokens/revoked -H "User-Agent:
python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token:
TOKEN_REDACTED" _http_log_request
/usr/lib/python2.7/site-packages/keystoneclient/session.py:155
2014-10-24 20:33:10.743 2899 DEBUG keystoneclient.session [-] RESP: [200]
{'date': 'Sat, 25 Oct 2014 00:33:10 GMT', 'content-type':
'application/json', 'content-length': '686', 'vary': 'X-Auth-Token'}
RESP BODY: {"signed": "-----BEGIN
CMS-----\nMIIBxgYJKoZIhvcNAQcCoIIBtzCCAbMCAQExCTAHBgUrDgMCGjAeBgkqhkiG9w0B\nBwGgEQQPeyJyZXZva2VkIjogW119MYIBgTCCAX0CAQEwXDBXMQswCQYDVQQGEwJV\nUzEOMAwGA1UECAwFVW5zZXQxDjAMBgNVBAcMBVVuc2V0MQ4wDAYDVQQKDAVVbnNl\ndDEYMBYGA1UEAwwPd3d3LmV4YW1wbGUuY29tAgEBMAcGBSsOAwIaMA0GCSqGSIb3\nDQEBAQUABIIBANPLKniK+n+mxd4tIAKrm0rj5u/wQkdlxlToJRhwogKwv1+Tujp/\nFrSjoZSu+tzVsLrHQGVwKdo9DJSN3gTRzQx+TqgIxpduji1gG3uop/VCqSEimtHq\nmmz9hewQGS/lE51xkMwsiWoUmcPruVF2bTfcjAeYsvSOoqLD2jAnnu4jtG68LaWn\n21ew62qzIumwYxfb9BlpvVebShFpKrM4/XWBg7k2KUJ7E+wd6lgo39Sr7FfAxnNv\npvLgfKb0SBXCJYfKrG52lZOkodGcHwNOT9tizm/tHKIVXv/0MN0dLUZY1+NCGkxx\nXETUgJdPHMLfwP/ipVkvih57C1PzD0OZJNI=\n-----END
CMS-----\n"}
_http_log_response
/usr/lib/python2.7/site-packages/keystoneclient/session.py:182
2014-10-24 20:33:10.764 2899 DEBUG nova.api.openstack.wsgi
[req-c52d68de-62de-4162-806d-33838f5a7c18 None] Calling method '>' _process_stack
/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:935
2014-10-24 20:33:10.777 2899 INFO nova.osapi_compute.wsgi.server
[req-c52d68de-62de-4162-806d-33838f5a7c18 None] 10.71.0.137 "GET
/v2/71e48f8b2afb4db99f588752b0c720c5/flavors/4 HTTP/1.1" status: 200 len:
596 time: 0.0571671
2014-10-24 20:33:10.781 2901 DEBUG keystoneclient.session [-] REQ: curl -i
-X GET http://127.0.0.1:35357/v2.0/tokens/revoked -H "User-Agent:
python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token:
TOKEN_REDACTED" _http_log_request
/usr/lib/python2.7/site-packages/keystoneclient/session.py:155
2014-10-24 20:33:10.804 2901 DEBUG keystoneclient.session [-] RESP: [200]
{'date': 'Sat, 25 Oct 2014 00:33:10 GMT', 'content-type':
'application/json', 'content-length': '686', 'vary': 'X-Auth-Token'}
RESP BODY: {"signed": "-----BEGIN
CMS-----\nMIIBxgYJKoZIhvcNAQcCoIIBtzCCAbMCAQExCTAHBgUrDgMCGjAeBgkqhkiG9w0B\nBwGgEQQPeyJyZXZva2VkIjogW119MYIBgTCCAX0CAQEwXDBXMQswCQYDVQQGEwJV\nUzEOMAwGA1UECAwFVW5zZXQxDjAMBgNVBAcMBVVuc2V0MQ4wDAYDVQQKDAVVbnNl\ndDEYMBYGA1UEAwwPd3d3LmV4YW1wbGUuY29tAgEBMAcGBSsOAwIaMA0GCSqGSIb3\nDQEBAQUABIIBANPLKniK+n+mxd4tIAKrm0rj5u/wQkdlxlToJRhwogKwv1+Tujp/\nFrSjoZSu+tzVsLrHQGVwKdo9DJSN3gTRzQx+TqgIxpduji1gG3uop/VCqSEimtHq\nmmz9hewQGS/lE51xkMwsiWoUmcPruVF2bTfcjAeYsvSOoqLD2jAnnu4jtG68LaWn\n21ew62qzIumwYxfb9BlpvVebShFpKrM4/XWBg7k2KUJ7E+wd6lgo39Sr7FfAxnNv\npvLgfKb0SBXCJYfKrG52lZOkodGcHwNOT9tizm/tHKIVXv/0MN0dLUZY1+NCGkxx\nXETUgJdPHMLfwP/ipVkvih57C1PzD0OZJNI=\n-----END
CMS-----\n"}
_http_log_response
/usr/lib/python2.7/site-packages/keystoneclient/session.py:182
2014-10-24 20:33:10.828 2901 DEBUG nova.api.openstack.wsgi
[req-3c410d38-49f5-4f86-acb5-58a2d56d9ae0 None] Action: 'create', calling
method: >, body:
{"server": {"name": "spacewalk", "imageRef": "", "block
devicemappingv2":
[{"sourcetype": "volume", "deleteontermination": false, "bootindex": 0,
"uuid": "13cf15c8-e5fa-484f-b8b5-54e1498dfb48", "destinationtype":
"volume"}], "flavorRef": "4", "max
count": 1, "mincount": 1, "networks":
[{"fixed
ip": "10.71.0.206", "uuid":
"a0e8f4f0-c1c4-483d-9524-300fcede7a69"}]}} processstack
/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:932
2014-10-24 20:33:10.841 2901 DEBUG nova.volume.cinder
[req-3c410d38-49f5-4f86-acb5-58a2d56d9ae0 None] Cinderclient connection
created using URL:
http://10.71.0.137:8776/v1/71e48f8b2afb4db99f588752b0c720c5
get_cinder_client_version
/usr/lib/python2.7/site-packages/nova/volume/cinder.py:255
2014-10-24 20:33:11.049 2901 ERROR nova.api.openstack.wsgi
[req-3c410d38-49f5-4f86-acb5-58a2d56d9ae0 None] Exception handling
resource: 'NoneType' object has no attribute '__getitem__'
Traceback (most recent call last):

File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line
400, in _object_dispatch
return getattr(target, method)(context, *args, **kwargs)

File "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 155,
in wrapper
result = fn(cls, context, *args, **kwargs)

File "/usr/lib/python2.7/site-packages/nova/objects/fixed_ip.py", line
111, in get_by_address
expected_attrs)

File "/usr/lib/python2.7/site-packages/nova/objects/fixed_ip.py", line
90, in _from_db_object
context, objects.Network(context), db_fixedip['network'])

File "/usr/lib/python2.7/site-packages/nova/objects/network.py", line
115, in _from_db_object
db_value = db_network[field]

TypeError: 'NoneType' object has no attribute '__getitem__'

Traceback (most recent call last):

File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
line 134, in _dispatch_and_reply
incoming.message))

File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
line 177, in _dispatch
return self._do_dispatch(endpoint, method, ctxt, args)

File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
line 123, in _do_dispatch
result = getattr(endpoint, method)(ctxt, **new_args)

File "/usr/lib/python2.7/site-packages/nova/network/manager.py", line
1503, in validate_networks
context, address, expected_attrs=['network'])

File "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 153,
in wrapper
args, kwargs)

File "/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py", line
346, in object_class_action
objver=objver, args=args, kwargs=kwargs)

File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py",
line 152, in call
retry=self.retry)

File "/usr/lib/python2.7/site-packages/oslo/messaging/transport.py", line
90, in _send
timeout=timeout, retry=retry)

File
"/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py",
line 408, in send
retry=retry)

File
"/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py",
line 399, in _send
raise result

TypeError: 'NoneType' object has no attribute '__getitem__'
Traceback (most recent call last):

File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line
400, in _object_dispatch
return getattr(target, method)(context, *args, **kwargs)

File "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 155,
in wrapper
result = fn(cls, context, *args, **kwargs)

File "/usr/lib/python2.7/site-packages/nova/objects/fixed_ip.py", line
111, in get_by_address
expected_attrs)

File "/usr/lib/python2.7/site-packages/nova/objects/fixed_ip.py", line
90, in _from_db_object
context, objects.Network(context), db_fixedip['network'])

File "/usr/lib/python2.7/site-packages/nova/objects/network.py", line
115, in _from_db_object
db_value = db_network[field]

TypeError: 'NoneType' object has no attribute '__getitem__'

2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi Traceback (most
recent call last):
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py", line 973, in
_process_stack
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
action_result = self.dispatch(meth, request, action_args)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py", line 1057,
in dispatch
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi return
method(req=request, **action_args)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/servers.py",
line 958, in create
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
check_server_group_quota=check_server_group_quota)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/hooks.py", line 131, in inner
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi rv =
f(*args, **kwargs)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 1447, in create
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
check_server_group_quota=check_server_group_quota)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 1069, in
_create_instance
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi max_count)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 769, in
_validate_and_build_base_options
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
requested_networks, max_count)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 478, in
_check_requested_networks
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
max_count)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/network/api.py", line 48, in wrapped
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi return
func(self, context, *args, **kwargs)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/network/api.py", line 404, in
validate_networks
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
requested_networks)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/network/rpcapi.py", line 245, in
validate_networks
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi return
self.client.call(ctxt, 'validate_networks', networks=networks)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py", line 389,
in call
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi return
self.prepare().call(ctxt, method, **kwargs)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py", line 152,
in call
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
retry=self.retry)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/oslo/messaging/transport.py", line 90, in
_send
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
timeout=timeout, retry=retry)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py",
line 408, in send
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi retry=retry)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py",
line 399, in _send
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi raise result
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi TypeError:
'NoneType' object has no attribute '__getitem__'
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi Traceback (most
recent call last):
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 400, in
_object_dispatch
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi return
getattr(target, method)(context, *args, **kwargs)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/objects/base.py", line 155, in
wrapper
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi result =
fn(cls, context, *args, **kwargs)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/objects/fixed_ip.py", line 111, in
get_by_address
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
expected_attrs)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/objects/fixed_ip.py", line 90, in
_from_db_object
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi context,
objects.Network(context), db_fixedip['network'])
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/objects/network.py", line 115, in
_from_db_object
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi db_value =
db_network[field]
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi TypeError:
'NoneType' object has no attribute '__getitem__'
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi Traceback (most
recent call last):
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line
134, in _dispatch_and_reply
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
incoming.message))
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line
177, in _dispatch
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi return
self._do_dispatch(endpoint, method, ctxt, args)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line
123, in _do_dispatch
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi result =
getattr(endpoint, method)(ctxt, **new_args)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/network/manager.py", line 1503, in
validate_networks
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi context,
address, expected_attrs=['network'])
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/objects/base.py", line 153, in
wrapper
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi args, kwargs)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py", line 346, in
object_class_action
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
objver=objver, args=args, kwargs=kwargs)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py", line 152,
in call
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
retry=self.retry)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/oslo/messaging/transport.py", line 90, in
_send
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
timeout=timeout, retry=retry)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py",
line 408, in send
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi retry=retry)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py",
line 399, in _send
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi raise result
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi TypeError:
'NoneType' object has no attribute '__getitem__'
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi Traceback (most
recent call last):
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 400, in
_object_dispatch
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi return
getattr(target, method)(context, *args, **kwargs)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/objects/base.py", line 155, in
wrapper
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi result =
fn(cls, context, *args, **kwargs)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/objects/fixed_ip.py", line 111, in
get_by_address
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
expected_attrs)
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/objects/fixed_ip.py", line 90, in
_from_db_object
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi context,
objects.Network(context), db_fixedip['network'])
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/objects/network.py", line 115, in
_from_db_object
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi db_value =
db_network[field]
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi TypeError:
'NoneType' object has no attribute '__getitem__'
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.049 2901 TRACE nova.api.openstack.wsgi
2014-10-24 20:33:11.051 2901 DEBUG nova.api.openstack.wsgi
[req-3c410d38-49f5-4f86-acb5-58a2d56d9ae0 None] Returning 400 to user: The
server could not comply with the request since it is either malformed or
otherwise incorrect. __call__
/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:1199
2014-10-24 20:33:11.052 2901 INFO nova.osapi_compute.wsgi.server
[req-3c410d38-49f5-4f86-acb5-58a2d56d9ae0 None] 10.71.0.137 "POST
/v2/71e48f8b2afb4db99f588752b0c720c5/os-volumes_boot HTTP/1.1" status: 400
len: 338 time: 0.2714109
ERROR (BadRequest): The server could not comply with the request since it
is either malformed or otherwise incorrect. (HTTP 400) (Request-ID:
req-3c410d38-49f5-4f86-acb5-58a2d56d9ae0)

<>
nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20141024/2a3fc71a/attachment.html

Hi Nathan,

Would you mind submitting this as a bug report at:

https://bugs.launchpad.net/nova

From a quick look it looks like a bug, but a bug report will make sure
progress on it gets tracked.

Thanks,

Chris

On Fri, 24 Oct 2014 20:44:51 -0400
Nathan Stratton wrote:

In Icehouse the following worked:

nova boot --flavor 4 --boot-volume
13cf15c8-e5fa-484f-b8b5-54e1498dfb48 spacewalk --nic
net-id=a0e8f4f0-c1c4-483d-9524-300fcede7a69,v4-fixed-ip=10.71.0.206

However in Juno the only way to get it to build the instance is to
leave off the ",v4-fixed-ip=10.71.0.206". With it I get:

[log snipped; identical to the trace quoted in full in the original message above]

[Openstack-operators] [Nova] bind vnic to phys nic

I hate requests like this, but I've been asked once again how to bind a
single VM to a physical NIC on a compute node. Kicking and screaming, I've
tried to steer them away from this bad decision, with no luck. So I have to
ask, is this even possible within OpenStack? I'm sure it's technically
possible, but I really hate wasting everyone's time trying to eliminate the
benefits of the cloud...

Thoughts?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20141025/5292d528/attachment.html

[Openstack-operators] Fwd: Glance Image Upload error

Hi ,

I am facing a keystone configuration problem. My problem is given below.

keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
Traceback (most recent call last):
  File "/usr/bin/keystone-manage", line 30, in <module>
    from keystone import cli
  File "/usr/lib/python2.7/site-packages/keystone/cli.py", line 19, in <module>
    from oslo.config import cfg
ImportError: No module named oslo.config

Please help me to solve this problem.

Thanks

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20141026/a54c053e/attachment.html

This sounds like a packaging error. What Linux distribution are you using?
The quick fix is to just install the oslo.config module manually; I'm not sure why it wasn't included.

sudo pip install oslo.config should help
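
To confirm the module is importable afterwards, a quick check (just a sketch; any import of oslo.config will do):

python -c "from oslo.config import cfg; print('oslo.config is importable')"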

On Oct 26, 2014, at 1:26 AM, mohib at qmail.com.bd wrote:

Hi,

I am facing a keystone configuration problem. My problem is given below.

keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
Traceback (most recent call last):
  File "/usr/bin/keystone-manage", line 30, in <module>
    from keystone import cli
  File "/usr/lib/python2.7/site-packages/keystone/cli.py", line 19, in <module>
    from oslo.config import cfg
ImportError: No module named oslo.config

Please help me solve this problem.

Thanks




-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 496 bytes
Desc: Message signed with OpenPGP using GPGMail
URL:

[Openstack-operators] Keystone error

Hi,

I am facing a keystone configuration problem. My problem is given below.

keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
Traceback (most recent call last):
  File "/usr/bin/keystone-manage", line 30, in <module>
    from keystone import cli
  File "/usr/lib/python2.7/site-packages/keystone/cli.py", line 19, in <module>
    from oslo.config import cfg
ImportError: No module named oslo.config

Please help me solve this problem.

Thanks

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20141026/b010152d/attachment.html

I don't know what OS you have since you didn't mention it, but I think you are missing a package. On Ubuntu it would be python-oslo.config.
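
A sketch of the install, assuming the Ubuntu package name above:

sudo apt-get install python-oslo.config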

From: "mohib at qmail.com.bd"
Date: Sunday, October 26, 2014 at 2:28 AM
To: "openstack-operators at lists.openstack.org"
Subject: [Openstack-operators] Keystone error

Hi,

I am facing a keystone configuration problem. My problem is given below.

keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
Traceback (most recent call last):
  File "/usr/bin/keystone-manage", line 30, in <module>
    from keystone import cli
  File "/usr/lib/python2.7/site-packages/keystone/cli.py", line 19, in <module>
    from oslo.config import cfg
ImportError: No module named oslo.config

Please help me solve this problem.

Thanks


-------------- next part --------------
An HTML attachment was scrubbed...
URL:

[Openstack-operators] DVR + DNAT + L3 Fabric

After quite a bit of research I'm starting to worry that this configuration
is not possible. I'm building an L3 fabric with L2 domains terminated at the
top of rack switches and running VXLAN on top. The few documents I've come
across have stated that each compute node in a DVR setup would need to have
access to the external network directly (L2).

There are two possible solutions I can see: running external subnets within a
VXLAN segment and using a manually configured VTEP to terminate and route
the traffic, or somehow forcing the L3 (dvr_snat) agents to do DNAT in
addition to SNAT.
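
For illustration only, a manually configured VTEP on an OVS host might look roughly like the following; the bridge name, tunnel endpoint, and VNI are hypothetical, not a tested recipe for this topology:

# Terminate VXLAN segment 1000 on an existing OVS bridge
ovs-vsctl add-port br-ext vxlan1000 -- set interface vxlan1000 type=vxlan options:remote_ip=192.0.2.50 options:key=1000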

I'm open to any suggestions at this point.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20141028/2b1044d0/attachment.html

[Openstack-operators] Active/passive nova-network failover results in both controllers APRing for gateway addresses

I've been running nova-network in VLAN mode as an active/passive cluster resource (corosync + rgmanager) on my OpenStack Havana and Folsom controller pairs for a good long while. This week I found an oddity that I hadn't noticed before, and I'd like to ask the community about it.

When nova-network starts up, it of course launches a dnsmasq process for each network, which listens on the .1 address of the assigned network and acts as the gateway for that network. When the nova-network service is moved to the passive node, nova-network starts up dnsmasq processes on that node as well, again listening on the .1 addresses. However, since now both nodes have the .1 addresses configured, they basically take turns ARPing for the addresses and stealing the traffic from each other. VMs will route through the "active" node for a minute or so and then suddenly start routing through the "passive" node. Then the cycle repeats. Among other things, this results in only one controller at a time being able to reach the VMs and adds latency to VM traffic when the shift happens.

To stop this, I had to manually remove the VLAN interfaces from the bridges, bring down the bridges, then delete the bridges from the now-passive node. Things then returned to normal, with all traffic flowing through the "active" controller and both controllers being able to reach the VMs.
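
For reference, that manual cleanup amounts to something like the following, with hypothetical bridge and VLAN interface names:

brctl delif br100 vlan100   # remove the VLAN interface from the bridge
ip link set br100 down      # bring the bridge down
brctl delbr br100           # delete the bridge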

I have not seen anything in the HA guides about how people are preventing this situation from occurring - nothing about killing off dnsmasq or tearing down these network interfaces to prevent the ARP wars. Has anybody else out there experienced this? How are people handling the situation?

I am considering bringing up arptables to block ARP for the gateway addresses when cluster failover happens, or alternatively automating the tear-down of these gateway addresses. Am I missing something here?
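
A minimal sketch of the arptables idea, with a hypothetical gateway address and bridge name:

# On the now-passive node, stop emitting ARP for the gateway IP
arptables -A OUTPUT -o br100 -s 10.0.0.1 -j DROP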

Thanks,

Mike Smith
Principal Engineer, Website Systems
Overstock.com



Hi Mike, I'm no networking or HA expert, but I've added some comments
inline and cc'd Florian Haas (who is an HA expert!) to see if he can
help you out...

On 10/29/2014 12:34 AM, Mike Smith wrote:
I've been running nova-network in VLAN mode as an active/passive
cluster resource (corosync + rgmanager) on my OpenStack Havana and
Folsom controller pairs for a good long while. This week I found an
oddity that I hadn't noticed before, and I'd like to ask the
community about it.

When nova-network starts up, it of course launches a dnsmasq process
for each network, which listens on the .1 address of the assigned
network and acts as the gateway for that network. When the
nova-network service is moved to the passive node, nova-network
starts up dnsmasq processes on that node as well, again listening on
the .1 addresses. However, since now both nodes have the .1
addresses configured, they basically take turns ARPing for the
addresses and stealing the traffic from each other. VMs will route
through the ?active? node for a minute or so and then suddenly start
routing through the ?passive? node. Then the cycle repeats. Among
other things, this results in only one controller at a time being
able to reach the VMs and adds latency to VM traffic when the shift
happens.

It sounds like your failover is not actually failing over. In other
words, it sounds like your previously active node is not being marked as
fully down in order to facilitate the transition to the backup/passive
node. I would expect some minimal disruption during the failover while
the ARP table entries are repopulated when network connectivity to the
old active node is not possible, but it's the "Then the cycle repeats."
part that has me questioning things...

To stop this, I had to manually remove the VLAN interfaces from the
bridges, bring down the bridges, then delete the bridges from the
now-passive node. Things then returned to normal, with all traffic
flowing through the "active" controller and both controllers being
able to reach the VMs.

I have not seen anything in the HA guides about how people are
preventing this situation from occurring - nothing about killing off
dnsmasq or tearing down these network interfaces to prevent the ARP
wars. Anybody else out there experienced this? How are people
handling the situation?

I am considering bringing up arptables to block ARP for the gateway
addresses when cluster failover happens, or alternatively automating
the tear-down of these gateway addresses. Am I missing something
here?

I'll let Florian talk about what is expected of the networking layer
during failover, but I'll just say that we used multi-host nova-network
node in our Folsom deployments to great effect. It was incredibly
reliable, and the nice thing about it was that if nova-network went down
on a compute node, it only affected the VMs running on that particular
compute node. A simple (re)start of nova-network daemon was enough to
bring up tenant networking on the compute node, and there was no
disruption in service to any other VMs on other compute nodes. The
downside was each compute node would use an extra public IP address...

Anyway, just something to think about. The DVR functionality in Neutron
is attempting to achieve some parity with the nova-network multi-host
functionality, so if you're interested in this area, it's something to
keep an eye on.

All the best,
-jay

Thanks,

Mike Smith Principal Engineer, Website Systems Overstock.com

[Openstack-operators] multiple subnets in a single network

Hi,
I've tried to put two different subnets (10.0.0.0/24 and 11.0.0.0/24) in the same network created with neutron. The system lets me do that, but when I create a VM and attach it to the network with the two subnets, DHCP apparently always assigns an IP from the second subnet's range, and the VM is unable to contact the metadata IP (169.254.169.254).
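
For reference, the setup described corresponds roughly to the following (hypothetical names, neutron CLI of that era):

neutron net-create multi-net
neutron subnet-create --name sub-a multi-net 10.0.0.0/24
neutron subnet-create --name sub-b multi-net 11.0.0.0/24
neutron port-create multi-net   # the port's fixed IP may come from either subnet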

I'm not a great networking expert, so I'm wondering whether it makes any sense to create two subnets like this, why the system doesn't complain about it, and why, after letting me do that, the metadata server doesn't work.

Can anybody explain this to me in more detail?

thanks a lot!

Alvise

[Openstack-operators] Glance on Ceph Swift API with dynamic large objects

I've got Glance running on a Ceph Swift backend store (NOT the OpenStack implementation of Swift.) I'm noticing a problem around large images and the checksums/ETags on the manifest object. I'm seeing a 422 Unprocessable Entity response from Swift on those PUTs.

(* Background on large objects in Swift below.)

I figured out this is due to a manifest object ETag verification implementation difference between Ceph Swift and OS Swift.

OS Swift verifies it just like any other object, md5'ing the content of the object - https://github.com/openstack/swift/blob/master/swift/obj/server.py#L439-L459

Ceph Swift actually does the full DLO checksum across all the component objects - https://github.com/ceph/ceph/blob/master/src/rgw/rgw_op.cc#L1765-L1781

The problem comes into play in the Glance Swift store driver. It assumes the OS Swift behavior, and sends an ETag of md5("") in the PUT request - https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/swift/store.py#L552

TBH, I don't understand why the Swift store driver is even sending an ETag there. It would function just as well without sending an ETag at all.

Wondering if anyone else has bumped up against this? I did a basic search over the Glance bugs and I didn't see anything around this, so I opened https://bugs.launchpad.net/glance/+bug/1387311. I'm surprised this hasn't surfaced before.

Thanks,
Mike

When the client provides an ETag header in the PUT request, the Swift API checks that against what it determines the checksum for the object to be. If there is a mismatch, the response is a 422 Unprocessable Entity.
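
For context, a DLO manifest upload against the Swift API looks roughly like this; the endpoint, token, container and object names are hypothetical, and the ETag shown is the md5 of the empty body, which is what the Glance driver sends:

# Zero-byte manifest object pointing at segments prefixed "glance/image-1234-"
curl -X PUT \
  -H "X-Auth-Token: $TOKEN" \
  -H "X-Object-Manifest: glance/image-1234-" \
  -H "ETag: d41d8cd98f00b204e9800998ecf8427e" \
  --data-binary '' \
  http://swift.example.com/v1/AUTH_demo/glance/image-1234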
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20141029/5dc86ff5/attachment.html

[Openstack-operators] Version control of cloud configuration files (Controle de Versão de Arquivos de Configuração da Cloud)

Good afternoon,

I am having difficulty with this task and I am not finding a solution along the lines we discussed in the last meeting.

To explain:

What we want: keep the configuration files in git, organized as follows:

git_root/configurations/cloud/machine_name/directory_inside_the_machine

e.g.:
cloudinfra/configurations/cloud2/cavala/etc/nova

How it is done today: hard links are created to the configuration files that live in the git directory.
Problem: if a file is replaced, git will not detect the change.

Goal: avoid hard links and version the files directly (e.g. add /etc/nova/nova.conf to git).

What I tried: using the command $ git config core.worktree "/etc" I was able to make git see the directories containing the configurations.
Problem: when the push is done, the /etc directory is created at the root of the git repository, so it is not possible to organize things the way we want.

e.g.
Instead of:
cloudinfra/configurations/cloud2/cavala/etc/nova

We get:
cloudinfra/etc/nova

I don't have much experience with git; does anyone know another way I can do this?

Regards,
Flávio
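
One common alternative, offered here only as a sketch with hypothetical paths, is to keep the repository laid out as desired and copy the live files into it before committing, instead of pointing git's work tree at /etc:

cd ~/cloudinfra
rsync -a /etc/nova/ configurations/cloud2/cavala/etc/nova/
git add configurations/cloud2/cavala/etc/nova
git commit -m "Update nova config on cavala"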
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20141029/b4e38779/attachment.html

Alop searches for a Portuguese translator...

On Oct 29, 2014, at 12:35 PM, Flávio Ramalho <f.ramalhoo at gmail.com> wrote:


-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 496 bytes
Desc: Message signed with OpenPGP using GPGMail
URL:

[Openstack-operators] Database cleanup policy

We just had this question come up regarding the labs, but it applies to production as well.

I'm thinking that we need to implement some sort of periodic database pruning. Perhaps every two months or so, go through all databases and all tables, and do something like:

delete from FOO where deleted=1 and deleted_at < date_sub(now(), interval 2 month);

Just as an example.

Does anyone see any issues with purging deleted=1 data?
I've seen this be very helpful for things like Keystone tokens, etc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20141030/6be7a3b4/attachment.html
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 496 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20141030/6be7a3b4/attachment.pgp

Hi Abel,

For Keystone we already have a way to prune out expired records: keystone-manage token_flush

This can be run via cron (recommended). The reason for the side-band tool is that keystone does not have an internal scheduler for periodic tasks (not something keystone commonly needs across the rest of its functionality).
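
A minimal sketch of the cron approach, with a hypothetical schedule, user, and log path:

# /etc/cron.d/keystone-token-flush: flush expired tokens hourly
0 * * * * keystone /usr/bin/keystone-manage token_flush >> /var/log/keystone/token-flush.log 2>&1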

If you have a large number of tokens and use MySQL, we have logic to help limit the impact on the backend by doing batched flushes.

I am not sure what the requirements are for holding on to data (e.g. Nova instances) once it has been deleted, but I think it is definitely worth setting some clear guidelines on this for each service so they can be followed / implemented as a built-in function.

Cheers,
Morgan

Sent via mobile

On Oct 30, 2014, at 13:20, Abel Lopez wrote:

We just had this question come up regarding the labs, but it applies to production as well.

I'm thinking that we need to implement some sort of periodic database pruning. Perhaps every two months or so, go through all databases and all tables, and do something like:

delete from FOO where deleted=1 and deleted_at < date_sub(now(), interval 2 month);

Just as an example.

Does anyone see any issues with purging deleted=1 data?
I've seen this be very helpful for things like Keystone tokens, etc.


-------------- next part --------------
An HTML attachment was scrubbed...
URL:

[Openstack-operators] Migrating Parallels Virtuozzo Containers to OpenStack

Anyone have any experience moving from Parallels Virtuozzo Containers to OpenStack (KVM)? We have a large number of PVC VMs and would like to get those moved over to OpenStack KVM.

At first glance, the plan would be to shut down the PVC, copy and convert the image to qcow2, and [magic] bring it up in OpenStack. But I am sure it's not that easy.
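
For the convert-and-upload step, a hedged sketch with hypothetical file names; it assumes the container filesystem has already been captured as a raw disk image, which is the non-trivial part:

qemu-img convert -f raw -O qcow2 pvc-vm.img pvc-vm.qcow2
glance image-create --name pvc-vm --disk-format qcow2 --container-format bare --file pvc-vm.qcow2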

Any advice or war stories would be really useful.

Thanks,
Mike

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.openstack.org/pipermail/openstack-operators/attachments/20141030/f58e48e0/attachment.html

Well, I believe PVC are basically OpenVZ containers, which means there is
no kernel/ramdisk to boot from, AFAIK. OpenStack doesn't support
OpenVZ, just LXC (and Docker).

You should take that into account as it may require changes inside the
'disk image'.

On 2014-10-30 5:51 PM, Michael Dorman wrote:
Anyone have any experience moving from Parallels Virtuozzo Containers
to OpenStack (KVM)? We have a large number of PVC VMs and would like
to get those moved over to OpenStack KVM.

At first glance, the plan would be to shut down the PVC, copy and
convert the image to qcow2, and [magic] bring it up in OpenStack. But
I am sure it's not that easy.

Any advice or war stories would be really useful.

Thanks,
Mike



--

Marcos Garcia
Technical Sales Engineer

PHONE: (514) 907-0068 - EMAIL: marcos.garcia at enovance.com - SKYPE: enovance-marcos.garcia
ADDRESS: 127 St-Pierre - Montréal (QC) H2Y 2L6, Canada - WEB: www.enovance.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

[Openstack-operators] database hoarding

On 10/30/14 23:30, Abel Lopez wro